New VPS Location: Hub Europe
Written by Simone
Yesterday evening, at around 22:00 CET, Contabo migrated my VPS from the old Nuremberg hub to the newly built "Hub Europe".
They rebooted it and everything came back up as usual... I wasn't at home when it happened and didn't even notice anything until I connected and found no tmux session running. Even my phone, which was connected over WireGuard, quietly kept working and sending notifications 😎
Meanwhile, my friends and I watched "The Creator" on Disney+.
Can't really say I recommend it, but for some casual evening entertainment it wasn't half bad.
Changes for blog posts' license
Written by Simone
I've been rethinking the publication license of these blog posts.
Changes I made:
- Some "public" posts were turned to "CC BY-NC-SA"
- Some "all-rights-reserved" posts were turned to "CC BY-NC-SA"
At the moment there are still other public and all-rights-reserved posts... Most of the public ones are mine, while some are from friends who asked me to publish them under that license.
The all-rights-reserved ones are mostly content I grabbed around the net and published here when I wasn't able to contact the original authors (so credit stays with them), plus a few others, e.g. posts with pictures I took myself.
So... From now on, all these licenses will co-exist and each post will be tagged with the correct one (hopefully).
Disclaimer: If not specified, you can assume it's "CC BY-NC-SA" by Simone "roughnecks" Canaletti
prosodyctl commands and examples
Written by Simone
prosodyctl shell
Launch the shell:
# prosodyctl shell
" sign">
Delete a pubsub node (the ">" sign at the beginning is important, and also dangerous, as it lets you run anything!):
>prosody.hosts["pubsub.example.tld"].modules.pubsub.service:delete("blog", true)
Delete ALL pubsub nodes:
>local service = prosody.hosts["pubsub.example.tld"].modules.pubsub.service; for node in pairs(select(2, assert(service:get_nodes(true)))) do service:delete(node, true); end
Check subscription by user:
>prosody.hosts["pubsub.example.tld"].modules.pubsub.service.subscriptions["user@example.tld"]
Change affiliation on pubsub nodes (make user owner):
>prosody.hosts["pubsub.example.tld"].modules.pubsub.service:set_affiliation("blog",true,"user@example.tld","owner")
Unsubscribe from a node:
>prosody.hosts["pubsub.example.tld"].modules.pubsub.service:remove_subscription("blog",true,"user@example.tld")
Subscribe to a node:
>prosody.hosts["pubsub.example.tld"].modules.pubsub.service:add_subscription("blog",true,"user@example.tld")
prosodyctl commands
Asking for help:
# prosodyctl shell help
List registered users:
# prosodyctl shell user list example.tld
List existing MUCs:
# prosodyctl shell muc list [component name]
Activate a component:
# prosodyctl shell host activate some.component.example.tld
Generate Invites: create a new invite using an ad-hoc command in an XMPP client connected to your admin account, or use the command line:
# prosodyctl mod_invites generate example.tld
Reset a forgotten password (doesn't seem to work, see below):
# prosodyctl mod_invites generate example.tld --reset <USERNAME>
Automatic Certificates Import: prosodyctl has the ability to import and activate certificates in one command:
# prosodyctl --root cert import HOSTNAME /path/to/certificates
Certificates and their keys are copied to /etc/prosody/certs (this can be changed with the certificates option) and Prosody is then signaled to reload them. --root lets prosodyctl write to paths that may not be writable by the prosody user, as is common with /etc/prosody. Multiple hostnames and paths can be given, as long as the hostnames are given before the paths.
This command can be put in cron or passed as a callback to automated certificate renewal programs such as certbot or other Let's Encrypt clients.
Import All:
# prosodyctl --root cert import /etc/letsencrypt/live
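As an example of automating it, here is a sketch of a certbot deploy hook (the file name is just an illustration, adapt it to your setup); certbot runs every script in that directory after each successful renewal:
# cat /etc/letsencrypt/renewal-hooks/deploy/prosody-certs.sh
#!/bin/sh
# re-import all renewed certificates and signal Prosody to reload them
prosodyctl --root cert import /etc/letsencrypt/live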
Reset a forgotten password
# prosodyctl install --server=https://modules.prosody.im/rocks/ mod_password_reset
Reload the Prosody configuration, then use ad-hoc commands to generate a reset link for a given JID.
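A rough sketch of the whole flow, assuming you enable the module for your host in prosody.cfg.lua (the snippet below is only illustrative, merge it with your existing modules_enabled list):
modules_enabled = { "password_reset" } -- merge with your existing modules
# prosodyctl reload
Then, from an XMPP client logged in with an admin account, run the module's ad-hoc command against the host to get a reset link for the user's JID.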
eggdrop script: search on SearXNG instance, by cage
Written by Simone
You can try this script on #fediverso at irc.libera.chat, where cage, ndo, other friends and I hang out.
bot: "verne", running on @wpn
SearXNG instance: https://search.woodpeckersnest.space/
Thanks to cage for the script and ndo for creating the channel o/
# © cage released under CC0, public domain
# https://creativecommons.org/publicdomain/zero/1.0/
# Date: 16-08-2024
# Version: 0.1
# Package description: do a web search using your searxng instance
# Public instances probably won't work because of the "limiter"
# Authorize your channel from the partyline with:
# .chanset +searxng #your-channel
# Do a search
# .search <query> | .search paris (this query goes to the default engine)
# .search +<engine> <query> | .search +wp paris (this query goes to
# wikipedia)
# .search !images paris | this query searches only for images of paris
# List of engines: https://docs.searxng.org/user/configured_engines.html
# tcllib is required
############## configuration directives ############################
# url of the HTTP(S) server of the search engine (keep the trailing slash,
# the query path is appended directly to this value)
set searxconfig(website_url) "https://example.com/searxng/"
# command used to trigger a search
set searxconfig(cmd) ".search"
# default search engine
set searxconfig(default_engine) "ddg"
# maximum number of search results printed
set searxconfig(max_results) 3
# time tracker file
# NB: when this script runs, any file with the same name in the
# working directory (depending on what is considered the working
# directory of the script) will be erased and overwritten!
set searxconfig(file_millis) "searx_millis.tmp"
# Minimum search frequency in milliseconds.
# This is the minimum time that must pass between two consecutive
# searches
set searxconfig(max_freq) 30000
############## configuration ends here #############
# tcllib is required
package require csv
setudef flag searxng
if { !([info exists searxconfig(lastmillis)]) } {
    set searxconfig(lastmillis) 0
}
bind pub - $searxconfig(cmd) search:searxng
# send a PRIVMSG; $message is expected to be "<target> :<text>"
proc send_message {message} {
    putserv "PRIVMSG $message"
}
# read a whole file and return its contents
proc slurp_file {path} {
    set fp [open $path r]
    set file_data [read $fp]
    close $fp
    return $file_data
}
# print up to max_results "title url" lines from the CSV returned by SearXNG
proc process_csv {csv channel} {
    global searxconfig
    set rows [split $csv "\n"]
    set count 0
    # remove the header
    set rows [lrange $rows 1 [llength $rows]]
    if {[llength $rows] < 1} {
        send_message "$channel :Something went wrong."
    } else {
        foreach row $rows {
            if {$count < $searxconfig(max_results)} {
                set row_splitted [csv::split $row]
                set title [lindex $row_splitted 0]
                set url [lindex $row_splitted 1]
                send_message "$channel :$title $url"
                incr count
            } else {
                break
            }
        }
    }
}
# percent-encode the few characters that would break the query string
proc encode {query} {
    set query [regsub -all { } $query "%20"]
    set query [regsub -all {&} $query "%26"]
    set query [regsub -all {=} $query "%3D"]
    set query [regsub -all {!} $query "%21"]
    return $query
}
proc get_query_results {engine query} {
    global searxconfig
    set query [encode $query]
    set engine [encode $engine]
    set url "$searxconfig(website_url)search?q=$query&format=csv&engines=$engine"
    ## uncomment the line below for debugging purposes
    # putlog $url
    return [exec curl -sS $url]
}
proc get_last_millis { } {
    global searxconfig
    if {[file exists $searxconfig(file_millis)]} {
        set searxconfig(lastmillis) [slurp_file $searxconfig(file_millis)]
    } else {
        set fp [open $searxconfig(file_millis) w]
        puts $fp 0
        close $fp
        get_last_millis
    }
}
proc set_last_millis { } {
    global searxconfig
    set fp [open $searxconfig(file_millis) w]
    puts $fp [clock milliseconds]
    close $fp
}
proc search:searxng {nick host hand chan text} {
    global searxconfig
    if {!([channel get $chan searxng])} {
        send_message "$chan :This script has not been authorized to run in this channel."
        return 0
    }
    set millis [clock milliseconds]
    get_last_millis
    if { [expr $millis - $searxconfig(lastmillis)] > $searxconfig(max_freq) } {
        ## anti-flood check passed
        set_last_millis
        set text_splitted [split $text " {}"]
        set engine [lindex $text_splitted 0]
        set text_length [llength $text_splitted]
        set query [lrange $text_splitted 1 $text_length]
        if {![regexp {^\+} $engine]} {
            set engine $searxconfig(default_engine)
            set query $text_splitted
        } else {
            set engine [string range $engine 1 [string length $engine]]
        }
        if {$query == {}} {
            send_message "$chan :Missing search criteria."
        } else {
            set csv [get_query_results $engine $query]
            process_csv $csv $chan
        }
        return 1
    }
    send_message "$chan :Try again later."
    return 0
}
putlog "SearXNG Loaded"`
It was about time!
Written by Simone
@wpn gemini server gets an HTTP proxy
Written by Simone
Yet another small update about gemini.
You can now browse gemini://woodpeckersnest.space even from regular HTTP, here: https://gemini.woodpeckersnest.space/
I've applied some fixes to the HTML and CSS (the latter is pretty much the same used by the @wpn onboarding page, but obviously customized). As for accessibility, I think it should work well on desktop and also on mobile browsers; CGIs work as well.
The proxy I used is Loxy. I also already opened an issue on their repo for a problem with query strings and I'm still waiting for someone to reply. Apart from that, everything checks out.
@wpn gemini capsule changes home
Written by Simone
Hello,
just a brief update on gemini here at @wpn.
I switched TLD from ".eu" to ".space": seemed more appropriate for gemini.
@wpn onboarding: updates
Written by Simone
Until now, I was running two separate apps for shell and XMPP account registration at @wpn.
Tonight I made some changes to the original code (provided by Schimon) and ended up with just one app with an account choice, meaning that you must choose which account type you want in the form. Shell accounts are for friends only, as it's always been.
As a consequence, I shut down the old address for XMPP account onboarding and left only the main one, which is:
Summer Recap at WPN
Written by Simone
I'm always a bit busy when it comes to pandora (the VPS running WPN: woodpeckersnest.space/eu). I like experimenting with new things and fixing/improving existing ones... I cannot stay still 😀
After migrating the homepage to homarr - which took really no time for the initial setup, but a lot of work afterward to fix layouts for mobile devices and non-full-HD desktop screens - I started messing around with a brand new toy: gemini!!
Not even a week has passed from installing molly-brown, the actual gemini server, to today, and I can already count lots of improvements...
- Installed a terminal gemini browser client, amfora, for wpn's shell users, and also gtl, a tinylog reader, again for the shell.
- Configured a local tinylog which groups together all of wpn capsuleers' tinylogs, so it's easy to follow all of the local server users in one single place; the log is generated by gtl itself, refreshed and published every 5 minutes: can't miss a thing!
- Initially configured gemlog mentions starting from a script by @bacardi55, who is the author of many gemini-related things, like the aforementioned gtl software. When I realized it lacked multi-capsule support, I started modifying it and came up with some spaghetti code, which is working surprisingly well and was deployed earlier today.
Onboarding on WPN didn't go as well as I thought, but at least the first user (hey, Mario, I'm looking at you! :) registered and, I believe, everything is working fine for them! On this topic, the onboarding page was migrated from PHP and Email to Python and XMPP, thanks to my friend Schimon! He also kept the UI pretty much intact, so I think most people who looked at it before and after wouldn't even notice the changes under the hood.
https://hello.woodpeckersnest.space/
Something else I've been doing was setting up https://invite.woodpeckersnest.space/ which is a landing page that lets people join an XMPP MUC or add an XMPP contact from a web interface, and also guides them in choosing a client for their platform. It's rather simple but very useful at the same time.
The chatmail server was upgraded (more or less) at the beginning of August and has been running smoothly so far; it got some cool new improvements, like automatic account deletion after a certain number of days from last login, and lots of fixes. The total number of registered accounts so far is 117.
https://chatmail.woodpeckersnest.space/
Services which I dismissed include:
- Jitsi Meet (I wasn't really using it and it was wasting quite a lot of resources just to be running)
- Isso, the comments service which powered the old homepage contact section and also a shaarli instance; the shaarli is still running, but it's more of a private thing rather than a public one.
One more resolution: from now on, I will be publishing these (B)log posts over both protocols, HTTP here as you're reading and gemini on roughnecks' gemlog. I will probably be publishing less often than usual though, at least in this format, and will send more status updates through the tinylog on WPN, the microlog at Station and my fediverse account.
In the next few days I will be monitoring how everything goes and relaxing a bit, if I manage... Today I didn't feel so good after a few stressful days, too much computing and too few hours of sleep - it's 01:40 AM right now, so yeah, tomorrow will be another one of "those" days, I guess.