Posts tagged with “misadventures”

Some news for XMPP: support MUC and anonymous login

Written by Simone

Simple instructions on how to join @wpn's XMPP server anonymously

More news on the previously announced support MUC bridge: we got another bridge set up. This time around IRC joins the dance, so matterbridge is now bridging the @wpn support MUC across 3 different protocols, XMPP and Matrix being the other two.

https://health.woodpeckersnest.space/
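
For the curious, the matterbridge side of such a setup is a single TOML file; here's a minimal sketch modeled on matterbridge's sample config (all servers, accounts and channel names below are illustrative, not the actual @wpn configuration):

[xmpp.wpn]
Server="woodpeckersnest.space:5222"
Jid="bridge@woodpeckersnest.space"
Password="..."
Muc="muc.woodpeckersnest.space"
Nick="bridge"

[irc.libera]
Server="irc.libera.chat:6697"
UseTLS=true
Nick="wpn-bridge"

[matrix.example]
Server="https://matrix.example.org"
Login="bridge"
Password="..."

[[gateway]]
name="support"
enable=true

[[gateway.inout]]
account="xmpp.wpn"
channel="status"

[[gateway.inout]]
account="irc.libera"
channel="#wpn"

[[gateway.inout]]
account="matrix.example"
channel="#wpn:matrix.example.org"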

On this matter I was thinking about putting a "tombstone" on the current XMPP MUC, which is called status, and moving to wpn - that's in fact the name of the Matrix and IRC rooms. It's not something I want to do soon, though, because it implies changing several things.

Last, but not least, I have also set up an anonymous VirtualHost in Prosody. You can now log in to anon.woodpeckersnest.space with a disposable account (whose data gets deleted from the server as soon as the account disconnects) and participate in the @wpn MUCs - access to external servers is not permitted to anonymous users, for obvious security concerns.
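
For reference, the Prosody side of this is small; a minimal sketch of an anonymous VirtualHost (the exact options on the real server may differ):

VirtualHost "anon.woodpeckersnest.space"
    authentication = "anonymous"
    -- keep anonymous users from reaching remote servers
    modules_disabled = { "s2s" }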

You can follow the steps in this GIF to connect anonymously with Gajim; Dino should also support it, but I don't know how it's done there. Finally, if you want to join via Android, these are the instructions provided by Daniel Gultsch of Conversations IM:

you can just add the account something@anon.woodpeckersnest.space with an empty password to #Conversations_im and it will login anonymously. Anonymous logins don't require registration. When I say 'something@anon...' you can use anything as that username. Doesn't matter (it's only used to get through the client side jid validation check)

TIP: If the server tells you that you can't join a MUC, make sure you have added a nickname in your profile's details (Android).

Downtimes

Written by Simone

For a few days now I've been experiencing downtime at night and in the early morning.

When I wake up, connect to the VPS and attach to tmux, I am welcomed by these messages in the console:

        Message from syslogd@pandora at Nov 3 05:37:13 ...
        kernel:[1586232.350737] Dazed and confused, but trying to continue

        Message from syslogd@pandora at Nov 3 05:37:24 ...
        kernel:[1586235.049143] Uhhuh. NMI received for unknown reason
        30 on CPU 1.

        Message from syslogd@pandora at Nov 3 05:37:24 ...
        kernel:[1586235.049145] Dazed and confused, but trying to continue

        Message from syslogd@pandora at Nov 3 05:37:55 ...
        kernel:[1586273.642163] watchdog: BUG: soft lockup - CPU#2 stuck
        for 27s! [dockerd:526408]

        Message from syslogd@pandora at Nov 3 05:38:00 ...
        kernel:[1586278.545172] watchdog: BUG: soft lockup - CPU#1 stuck
        for 24s! [systemd-journal:257]

        Message from syslogd@pandora at Nov 3 05:38:02 ...
        kernel:[1586281.187611] watchdog: BUG: soft lockup - CPU#3 stuck
        for 35s! [lua5.4:1702]

Needless to say, when this happens the server is completely frozen and doesn't respond to anything.

I already contacted support, but I don't believe they really investigated. They manually restarted my VPS once and did some pings and connection tests (VNC, SSH) afterwards: "everything is working fine!"

This last Saturday I was up when it happened, so I ran mtr from my PC to the VPS's IP, logged it, and then sent another email with the output to support. Still waiting for them to reply; I guess tomorrow (Monday).
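
In case it helps anyone, logging an mtr run is a one-liner (IP and cycle count here are just examples):

mtr --report --report-cycles 100 203.0.113.10 | tee mtr-$(date +%F).log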

Friends like lorenzo and shai are having difficulties with the same provider too, so I'm not imagining things.

Well, that's all I've got to say; I'll keep you posted if there's any news.

@wpn gemini capsule changes home

Written by Simone

Hello,

just a brief update on gemini here at @wpn.

I switched the TLD from ".eu" to ".space": it seemed more appropriate for gemini.

gemini://woodpeckersnest.space/

gemlog

Summer Recap at WPN

Written by Simone

I'm always a bit busy when it comes to pandora (the VPS running WPN: woodpeckersnest.space/eu). I like experimenting with new things and fixing/improving existing ones. I cannot stay still 😀

After migrating the homepage to homarr - which took really no time for the initial setup, but a lot of work afterward to fix layouts for mobile devices and non-full-HD desktop screens - I started messing around with a brand new toy: gemini!!

Not even a week has passed since I installed molly-brown, the gemini server itself, and I can already count lots of improvements:

  • Installed the terminal gemini browser amfora for wpn's shell users, and also gtl, a tinylog reader, likewise for the shell.
  • Configured a local tinylog which groups together all of wpn's capsuleer tinylogs, so it's easy to follow all of the local server users in one single place; the log is generated by gtl itself, refreshed and published every 5 minutes: can't miss a thing!
  • Initially configured gemlog mentions starting from a script by @bacardi55, the author of many gemini-related things, like the aforementioned gtl. When I realized it lacked multi-capsule support, I started modifying it and came up with some spaghetti code, which works surprisingly well and was deployed earlier today.

gemini@wpn

Onboarding on WPN didn't go as well as I'd hoped, but at least the first user (hey, Mario, I'm looking at you! :) registered and, I believe, everything is working fine for them. On this topic, the onboarding page was migrated from PHP and email to Python and XMPP, thanks to my friend Schimon! He also kept the UI pretty much intact, so I think most people who looked at it before and after wouldn't even notice the changes under the hood.

https://hello.woodpeckersnest.space/

Something else I've been doing was setting up https://invite.woodpeckersnest.space/

which is a landing page that lets people join an XMPP MUC or add an XMPP contact from a web interface, also guiding them in choosing a client for their platform. It's rather simple but very useful at the same time.

The chatmail server was upgraded (more or less) at the beginning of August and has been running smoothly so far; it got some cool new improvements, like automatic account deletion a set number of days after the last login, plus lots of fixes. The total number of registered accounts so far is 117.

https://chatmail.woodpeckersnest.space/

Services I retired include:

  • Jitsi Meet (I wasn't really using it, and it was wasting quite a lot of resources just by running)
  • The Isso comments service, which powered the old homepage contact section as well as a shaarli instance; the latter is still running, but it's more of a private thing than a public one.

One more thing: from now on, I will be publishing these (B)log posts over both protocols, HTTP here as you're reading, and gemini on roughnecks' gemlog. I will probably publish less often than usual, though, at least in this format, and send more status updates through the tinylog on WPN, the microlog at Station and my fediverse account.

In the next days I will monitor how everything goes and relax a bit, if I manage. Today I didn't feel so good after a few stressful days: too much computing and too few hours of sleep - it's 01:40 AM right now, so yeah, tomorrow will be another of “those” days, I guess.

gemlog

re: CardDAV Plugin for Roundcube Installed

Written by Simone

Continuing from the previous article...

Today, while trying to install yet another plugin (Calendar this time), I had a lil incident and destroyed everything 😃

Some hours later I restored a backup and we're up again. BUT! In the process I discovered some SQL errors which, I believe, had been there for a long time, always unseen.

To make a long story short, I had to disable the standard "Personal Address Book" for everyone, because it was impossible to save any contact in there anyway. We are now relying on CardDAV, which is way better.
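
For anyone else running Roundcube: plugins get enabled in config/config.inc.php; a minimal sketch, assuming the RCMCardDAV plugin is installed under its usual directory name:

// config/config.inc.php
$config['plugins'] = array(
    'carddav',  // RCMCardDAV, replacing the now-disabled local address book
);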

At one point I had the Calendar plugin working too, alongside CardDAV, but I had (wrongly) installed it as a local one, so no sync to the cloud with CalDAV; it was when I later tried the CalDAV way by changing the config that things blew up on me.

I've now asked the people on libera.chat about the plugin, to see whether it really supports any CalDAV implementation or not - and then I'll try again :) Feel free to check it out and leave a comment if you know better than me:

https://git.kolab.org/diffusion/RPK/browse/master/plugins/calendar/

and here's the configuration: https://git.kolab.org/diffusion/RPK/browse/master/plugins/calendar/config.inc.php.dist$28

I believe I'm done for today though... it looked like a full day's job. Ooh, yes, I was almost forgetting: I also updated the services blurb on my website with all the new stuff.

More on WebDAV - Connecting a remote WebDAV folder in Windows

Written by Simone

After some failed attempts at this, I think I've found the right way to "mount" a remote WebDAV folder in Windows Explorer.

Initially my baby steps took me here: https://note.woodpeckersnest.space/share/0TJT81fgI8Jy

After following that tutorial I didn't succeed, so I investigated further. I can say that everything looks correct until you get to point 9.

The address they tell you to enter isn't correct in my experience, and they aren't even using https for the URL. What worked for me was instead something like:

\\webdav.woodpeckersnest.space@SSL\folder

You have to input the network-path-style address that's common in Windows: double backslash, the FQDN of your WebDAV server, "@SSL", and then the path (folder) you have access to on your WebDAV server, preceded by a backslash.

That's it: a prompt will ask for username and password, and then a new Network Path (WebFolder) will be connected in Explorer, just below your local drives.

You can then browse, copy, upload, delete (and so on) whatever content you like.
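
If you prefer the command line, the same mapping can be done with net use (the drive letter and folder are just examples; the trailing * makes it prompt for the password):

net use W: \\webdav.woodpeckersnest.space@SSL\folder /user:yourname *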

EDIT: I just found out I couldn't rename files/folders from Windows or Total Commander (Android).

Fixed by setting up the nginx virtualhost like this:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name  webdav.woodpeckersnest.space;


    # HTTPS configuration
    ssl_certificate /etc/letsencrypt/live/webdav.woodpeckersnest.space/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/webdav.woodpeckersnest.space/privkey.pem;

    access_log /var/log/nginx/webdav/access.log;
    error_log /var/log/nginx/webdav/error.log;

  location / {
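    # WebDAV MOVE/COPY (rename) requests carry a Destination header with the
    # public https:// URL; the backend only speaks plain http, so the scheme
    # gets rewritten before proxying, otherwise renames fail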
    set $destination $http_destination;

    if ($destination ~* ^https(.+)$) {
         set $destination http$1;
    }

    proxy_set_header   Destination $destination;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   Host $host;
    proxy_pass         http://127.0.0.1:17062/;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "upgrade";
  }

  client_max_body_size 0;

}

Now I'm quite happy 😀

Self-hosting ente (photos)

Written by Simone

The "photos" app by the company "ente" is an alternative to the better-known Google Photos, that is, an app that stores your images (also called "memories") in the cloud. Depending on the plan you choose, you can store a given amount of data at a given price; truth be told, there's also a one-year trial plan with 1GB of storage, which obviously won't satisfy anyone.

A few weeks ago, though, ente decided to release its server as open source; it can be freely run on your own machine together with the related apps (photos for Android, web app), after a few modifications to them.

So lorenzo @lorenzo@fedi.bobadin.icu and I took up the challenge of getting personal ente server instances and some cloud space to upload our material to (the S3 Object Storage we used is entirely sponsored by lorenzo, whom I thank).

Right from the start we ran into several difficulties, because the documentation, alas, is split between the blog and the github repository, with many inconsistencies and quite a few gaps. At the moment, though, we've managed to get a working service with the photos app; the web part still has some issues.

In this post I'll sum up what we did.

The docker stack for the ente services consists of:

  • server (museum)
  • pgsql database
  • minio (local object storage)
  • socat

To use an S3 storage local to your machine you need all of these components, while with an external S3 storage only the museum server and a database (docker or system) are needed. In the following guide I'll only cover a Debian system with a locally installed database (so no docker image for it) and an external S3 (so no minio/socat).

The reference guide is this one: https://github.com/ente-io/ente/blob/main/server/docs/docker.md and below I'll only list the changes I made.

We'll use just 3 files: compose.yaml and museum.yaml, which must be created in the "ente" directory as per the guide linked above, while the path for credentials.yaml will be scripts/compose/credentials.yaml, again inside the base "ente" dir.

compose.yaml

The compose.yaml file is the following:

services:
  museum:
    image: ghcr.io/ente-io/server
    restart: always
    network_mode: "host"
    environment:
      # Pass-in the config to connect to the DB and MinIO
      ENTE_CREDENTIALS_FILE: /credentials.yaml
    volumes:
      - custom-logs:/var/logs
      - ./museum.yaml:/museum.yaml:ro
      - ./scripts/compose/credentials.yaml:/credentials.yaml:ro
      - ./data:/data:ro

volumes:
  custom-logs:

The network was changed from "internal" to "host" to let the ente server inside the docker container talk to the database on localhost on the host system. The port exposed on the system cannot be changed with network=host (if you start the container with any port specified you'll get a warning: Published ports are discarded when using host network mode), so it was removed from the compose file. Make sure "8080" isn't already in use by some other application:

sudo netstat -lnp | grep 8080

museum.yaml

The museum.yaml file, cut down to the essentials (plus comments), is the following:

# Configuring museum
# ------------------
#
#
# By default, museum logs to stdout when running locally. Specify this path to
# get it to log to a file instead.
#
# It must be specified if running in a non-local environment.
log-file: ""

# HTTP connection parameters
http:
    # If true, bind to 443 and use TLS.
    # By default, this is false, and museum will bind to 8080 without TLS.
    # use-tls: true

# Key used for encrypting customer emails before storing them in DB
#
# To make it easy to get started, some randomly generated values are provided
# here. But if you're really going to be using museum, please generate new keys.
# You can use `go run tools/gen-random-keys/main.go` for that.
key:
    encryption: <key>
    hash: <key>
    
# JWT secrets
#
# To make it easy to get started, a randomly generated values is provided here.
# But if you're really going to be using museum, please generate new keys. You
# can use `go run tools/gen-random-keys/main.go` for that.
jwt:
    secret: <key>

# SMTP configuration (optional)
#
# Configure credentials here for sending mails from museum (e.g. OTP emails).
#
# The smtp credentials will be used if the host is specified. Otherwise it will
# try to use the transmail credentials. Ideally, one of smtp or transmail should
# be configured for a production instance.
smtp:
    host:
    port:
    username:
    password:

# Various low-level configuration options
internal:
    # If false (the default), then museum will notify the external world of
    # various events. E.g, email users about their storage being full, send
    # alerts to Discord, etc.
    #
    # It can be set to true when running a "read only" instance like a backup
    # restoration test, where we want to be able to access data but otherwise
    # minimize external side effects.
    silent: false
    # If provided, this external healthcheck url is periodically pinged.
    health-check-url:
    # Hardcoded verification codes, useful for logging in when developing.
    #
    # Uncomment this and set these to your email ID or domain so that you don't
    # need to peek into the server logs for obtaining the OTP when trying to log
    # into an instance you're developing on.
    #hardcoded-ott:
    #     emails:
    #         - "example@example.org,123456"
    #     # When running in a local environment, hardcode the verification code to
    #     # 123456 for email addresses ending with @example.org
    #     local-domain-suffix: "@example.org"
    #     local-domain-value: 123456
    # List of user IDs that can use the admin API endpoints.
    admins: []

# Replication config
#
# If enabled, replicate each file to 2 other data centers after it gets
# successfully uploaded to the primary hot storage.
replication:
    enabled: false
    # The Cloudflare worker to use to download files from the primary hot
    # bucket. Must be specified if replication is enabled.
    worker-url:
    # Number of go routines to spawn for replication
    # This is not related to the worker-url above.
    # Optional, default value is indicated here.
    worker-count: 6
    # Where to store temporary objects during replication v3
    # Optional, default value is indicated here.
    tmp-storage: tmp/replication

# Configuration for various background / cron jobs.
jobs:
    cron:
        # Instances run various cleanup, sending emails and other cron jobs. Use
        # this flag to disable all these cron jobs.
        skip: false
    remove-unreported-objects:
        # Number of go routines to spawn for object cleanup
        # Optional, default value is indicated here.
        worker-count: 1
    clear-orphan-objects:
        # By default, this job is disabled.
        enabled: false
        # If provided, only objects that begin with this prefix are pruned.
        prefix: ""
        
stripe:
    path:
        success: ?status=success&session_id={CHECKOUT_SESSION_ID}
        cancel: ?status=fail&reason=canceled

As (almost) stated in the comments, to generate the keys and the jwt secret you'll have to clone the ente server into a directory of your choice, enter the "server" directory and simply run:

go run tools/gen-random-keys/main.go

Configuring these keys, like the whole SMTP section, is optional; if you decide to fill in the smtp fields, your instance will effectively be open to user registrations, which you probably don't want.

If you decide to go ahead, you'll need to install "go" on Debian; or, if you're just testing, use the default keys found in this file: https://github.com/ente-io/ente/blob/main/server/configurations/local.yaml#L151-L161

sudo apt install golang-1.19-go

Another dependency to install before running the command is libsodium:

sudo apt install libsodium23:amd64 libsodium-dev:amd64

credentials.yaml

The last configuration file is credentials.yaml:

db:
    host: localhost
    port: 5432
    name: ente_db
    user: ente
    password: password
    
s3:
    are_local_buckets: false
    use_path_style_urls: true
    b2-eu-cen:
        key: <key>
        secret: <secret>
        endpoint: <storage_endpoint_url>
        region: eu-central-2
        bucket: <bucket_name>

This file contains the postgres database data plus the external S3 storage data: the essential change for the database is "host", which must be set to "localhost"; for S3, only "key", "secret", "endpoint" and "bucket" need to be changed; leave the other values alone or you'll have connection problems; from what I understand, they're hard-coded.

Time to create the database:

sudo su - postgres

psql

CREATE DATABASE ente_db;
CREATE USER ente WITH PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE ente_db TO ente;
ALTER DATABASE ente_db OWNER TO ente;

Change the db name, user and password as you like (as long as they match the ones specified in credentials.yaml). Hit "enter" after each line; Postgres will answer each command with a confirmation of the executed operation.

To exit the pg cli:

\quit

To get back to your own user:

exit

And let's fire up docker:

sudo docker compose up

Check the logs; what matters is that the database connects correctly. Some errors will show up later on, but if the server keeps running without stopping, everything should be fine.

The server listens on localhost, port 8080. To reach it from the outside you'll have to configure a reverse proxy in your web server - Apache, nginx or caddy.

Here's a sample configuration for nginx:

map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
}
server {
    server_name sottodominio.dominio.tld;

    access_log /var/log/nginx/sottodominio/access.log;
    error_log /var/log/nginx/sottodominio/error.log;

    location / {
           proxy_pass http://[::1]:8080;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/sottodominio.dominio.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/sottodominio.dominio.tld/privkey.pem;
}

My configuration includes IPv6 support; if you'd rather use IPv4 only, change the host in proxy_pass and remove the listen [::]:443 ssl http2; line.

Now, download the ente photos app from their github repository, since the one on the Play Store hasn't yet been updated with the changes needed to work with self-hosted servers.

Once the app is running on your phone, tap 7 or more times in quick succession on the start screen until a text field appears: fill it in with the address you configured for the reverse proxy:

https://sottodominio.dominio.tld

Connect, and if everything went well you should see a confirmation message, with the address you're connected to written below it.

Once in the app you'll have to register an account. Enter email and password; on the next screen you'll be asked for a confirmation OTP code. The app will tell you the code was sent to your mailbox, but that's only true if you set up the SMTP fields as explained above (another thing worth knowing: the sender of those emails is ente <verification@ente.io>, so don't be alarmed). Otherwise, the OTP code can easily be recovered from the docker logs scrolling by in your terminal: you'll see something like "OTT for id <string of characters>:" followed by a 6-digit code. Copy it into the app and move on.
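
If you run the stack detached (docker compose up -d), grepping the logs is the quickest way to fish the code out; something like:

sudo docker compose logs museum | grep -i "ott"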

After a few moments spent setting up the encryption, you'll get a screen with a passphrase that you must store somewhere safe, for restoring the account should you lose your password.

Now choose the folders on your phone to back up and off you go!

If the storage is connected correctly you'll see your memories uploading, with a progress counter; otherwise, if something isn't working, your photos will keep a little crossed-out cloud on them, indicating the upload failed. If that's the case, go read the container logs again, looking for errors.

Unlocking the 1GB limit on the trial plan

Even though we're using a personal server, you'll notice that the photos app still shows the paid plans/subscriptions of the original app.

To unlock the practically unlimited storage, after selecting the 1GB trial plan, you'll have to use a command line tool - ente cli.

A few packages need to be installed, generally used on desktop Linux, and a few commands run to activate the "secret service dbus interface" required by the ente cli.

sudo apt install gnome-keyring libsecret-1-0 dbus-x11

Run these commands; I'm just copy-pasting, since there's no point dwelling on them:

eval "$(dbus-launch --sh-syntax)"

mkdir -p ~/.cache
mkdir -p ~/.local/share/keyrings # where the automatic keyring is created

# 1. Create the keyring manually with a dummy password in stdin
eval "$(printf '\n' | gnome-keyring-daemon --unlock)"

# 2. Start the daemon, using the password to unlock the just-created keyring:
eval "$(printf '\n' | /usr/bin/gnome-keyring-daemon --start)"

At this point the environment is ready. Download ente cli from the github repository: https://github.com/ente-io/ente/releases/tag/cli-v0.1.13

In our case, for Debian x64, we'll download: https://github.com/ente-io/ente/releases/download/cli-v0.1.13/ente-cli-v0.1.13-linux-amd64.tar.gz

Extract the tarball's contents into a directory of your choice, maybe ~/bin. Inside that same directory, create a file called config.yaml with the following lines:

endpoint:
    api: "http://localhost:8080"

From the same directory, run ./ente help to check that it works and... read the help 😀

First of all, let's add the user we created earlier:

./ente account add

You'll be asked for some info, in particular the email address you registered with on photos, its password, and a dir where your images can be exported (the dir must exist before you can specify it).

Once the account has been added, type:

./ente account list

You should see your account info, including an "ID". Copy it and open the "museum.yaml" file from before once again. In the internal section, edit the "admins" entry, putting the copied user ID between the square brackets, as in the excerpt below. Save and restart the docker container; from this moment on you're an admin of the instance.

internal:
    # [...]
    # List of user IDs that can use the admin API endpoints.
    admins: [<your_user_id>]

Back to the cli:

./ente admin list-users

You should see yourself listed, and read that you are indeed an admin.

One last command to unlock the 100TB storage, and we're done:

./ente admin update-subscription -u email@vostrodominio.tld --no-limit true

The Internet doesn't like VPNs

Written by Simone

It's been a week or so since I started using Wireguard on my desktop too, browsing the Internet and doing the usual stuff I do, but this time connecting both via IPv4 and IPv6 through my VPS.

Results? I've already been banned (or, to put it better, my VPS's IPv4 has) from 3 popular hosts: reddit, imgur and alienwarearena. Reason? I don't really know; it seems nobody likes VPNs.

For the time being I've resorted to replacing reddit.com with old.reddit.com (even in my SearxNG instance) to be able to browse that shit, which unfortunately is sometimes useful. "imgur" is even trickier, since I was still able to upload images (and also display them) via their API on Glowing-Bear. But if I try to curl imgur.com from my VPS shell I get this:

{"data":{"error":"Imgur is temporarily over capacity. Please try again later."},"success":false,"status":403}

"Over capacity", yeah, but it's a 403, you liar!

So, a few moments ago I set my Wireguard service in Windows to manual start, stopped it, and now I'm back on the Hurricane Electric IPv6 tunnel - I'd like to avoid being banned from the rest of the world, if possible.

Thanks for all the fish.

Young me doing IT stuff

Written by Simone

Handwritten notes about MS-DOS.

Obsolete major version 13 (uh oh!)

Written by Simone

A package configuration screen from Debian apt, informing the user that the postgresql packages are obsolete and need to be upgraded.

Yesterday, after installing some new packages, I was greeted by this kind reminder 😀

I began stopping services that use a PostgreSQL database and even forgot about Dendrite. Nothing as bad as I imagined, though: I just ran the suggested commands and everything was up and running again in a few minutes.
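
For the record, on Debian the suggested procedure usually boils down to something like this (a sketch assuming the old cluster is version 13 with the default "main" name; the apt notice tells you the real versions):

sudo pg_upgradecluster 13 main   # migrate data and config to the new major version
sudo pg_dropcluster 13 main      # drop the old cluster once everything checks out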

Debian rocks! 😍