Dumping a PostgreSQL Database without Docker

When migrating systems, you might have been hit with the problem of recovering a PostgreSQL database from a read-only filesystem snapshot.
If you think this sounds like a really specific skill issue, I can assure you that it absolutely is. Anyways, since the switch to NixOS for my main server in August, there have been a bunch of docker volumes sitting in my filesystem snapshot, some of which are psql databases. I don't have docker/podman installed, so how am I going to export a dump of those databases?
Well, welcome to this post.

Let’s rule out things that won’t work.

  1. copying the entire directory and pasting it right into the main psql database's data dir

and that's about all the things that I can think of.

For the purposes of this guide, we will assume the following variable:

VOLUME=/directory/to/the/docker/volume

Get the database out of a volume

First, we need to get the database directory out of the volume. Since the filesystem is read-only, we have no choice but to copy it out. You can alternatively do some cool insane stuff like mounting an overlayfs on top of the volume directory (roughly sketched below), but for the purposes of this guide we are not going to do that, to keep things simple.
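For reference, the overlayfs route would look something like this (an untested sketch; the /tmp/pg-* paths are made-up names):

$ mkdir -p /tmp/pg-upper /tmp/pg-work /tmp/pg-merged
$ sudo mount -t overlay overlay \
    -o lowerdir=$VOLUME/_data,upperdir=/tmp/pg-upper,workdir=/tmp/pg-work \
    /tmp/pg-merged
$ # writes now land in /tmp/pg-upper; the read-only snapshot stays untouched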
We can simply copy the entire data directory to an empty temporary directory:

$ TMPDIR=$(mktemp -d)
$ cp -r "$VOLUME/_data" "$TMPDIR" # in postgres:15-alpine, the data dir is _data inside the volume. change it accordingly if it is different for you.
$ cd "$TMPDIR" # the rest of this guide refers to _data relative to here

Now that we have the data dir, change the ownership of $TMPDIR recursively to a user that’s not root, as pg_ctl refuses to run as root.

$ sudo -s
# chown -R postgres:postgres . # run this from inside $TMPDIR; the variable won't survive sudo -s
# # ^~~~~~~~~~~~~~~~~ we use postgres because the original db is run as postgres. I have not tested anything else.

Set some config so postgres doesn’t explode

We then have to change the port of our working shadow server, so as to not conflict with the main psql database. You can skip this part if you don't have one running.

$ sed -i "s/^#\?listen_addresses.*$/listen_addresses = '' # disable listening on tcp ports/" _data/postgresql.conf
$ sed -i "s/^#\?\(port =\).*$/\1 31280/" _data/postgresql.conf
$ # ^^^^^ this can be any port that's not the main psql port.
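You can sanity-check that both edits took; it should print something like:

$ grep -E "^(listen_addresses|port)" _data/postgresql.conf
listen_addresses = '' # disable listening on tcp ports
port = 31280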

Additionally, replace the access configuration for peace of mind: only local socket connections are allowed, and they are trusted without a password (fine for a throwaway server that isn't listening on TCP anyway).

$ cat > _data/pg_hba.conf <<EOF
local all all trust
EOF

Run the server

Now we are finally ready to run the server. As explained above, we cannot run the database server as root.

# alias pg_run="sudo -u postgres"
# pg_run pg_ctl -D _data start
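If you want to make sure it actually came up, pg_isready (which ships with postgres) is a quick check:

# pg_run pg_isready -p 31280 # should report "accepting connections"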

Dump the database

Chances are the default user (usually the only user) in the database isn't postgres. If you use docker compose, just figure it out from your compose.yaml; it's whatever POSTGRES_USER (and POSTGRES_DB for the database) was set to.
If you don't know the database and/or user, you can also try asking the shadow server itself.
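A sketch, assuming the default postgres superuser role still exists (it won't if POSTGRES_USER was set to something else; substitute that role if so):

$ pg_run psql -p 31280 -U postgres -l        # list databases
$ pg_run psql -p 31280 -U postgres -c '\du'  # list roles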

Afterwards, you can simply do the following to get a db dump.

$ pg_run pg_dump -p 31280 -U [user] [database] -s  | nix run n#pv > [database].schema.dump.sql # -s: schema only
$ pg_run pg_dump -p 31280 -U [user] [database] -ab | nix run n#pv > [database].data.dump.sql   # -a: data only, -b: include blobs; the pv pipe is just a progress meter

The dump is split into 2 files for easier checking, and for easier editing, as the owner role might not be the same on the target server.
If that is the case, simply change all the old owners in the schema.dump.sql to the new one:

$ sed -i 's/\(OWNER TO\) .*;$/\1 "[final-user]";/' [database].schema.dump.sql
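To double-check nothing was missed:

$ grep 'OWNER TO' [database].schema.dump.sql | sort -u # should only show the new owner now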

Dump the dump to the data dumpster (database)

This part is easy. Maybe.

Up to this point, none of our commands have dumped the roles or the database definitions themselves; pg_dump only covers a single database's contents (pg_dumpall is the one that does roles). We will now have to create them before loading the database dumps.

$ # create the user. -D/-R/-S: no CREATEDB/CREATEROLE/SUPERUSER; -l: can log in; -P: prompt for a password; -e: echo the SQL
$ pg_run createuser -eDlPRS [user]
$ # create the database, owned by that user
$ pg_run createdb -eO [user] [database_name]
$ # note: no -p 31280 here, we're talking to the main server now

We can now import the dump.

$ pg_run psql -1 -v ON_ERROR_STOP=1 -d [database] -f [database].schema.dump.sql # load the schema first; -1 wraps the file in a single transaction
$ pg_run psql -1 -v ON_ERROR_STOP=1 -d [database] -f [database].data.dump.sql   # then the data
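Optionally, take a quick look at the result before cleaning up:

$ pg_run psql -d [database] -c '\dt+' # list the imported tables and their sizes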

At this point we should have a fully recovered database. We can then move on to removing the temporary files.

Removing the dump

First, pull the plug on the shadow server.

$ pg_run pg_ctl -D _data stop
waiting for server to shut down.... done
server stopped

Now, just remove the temporary directory with the same method you normally use.

$ cd ~ # step back out of the directory before deleting it
$ nix run n#srm -- -RxCvv $TMPDIR # everything after the -- goes to srm, not nix

That’s all!

openssh update

Another month, another update. This time, OpenSSH decided to add a key-exchange
algorithm that just so happens to break my GPG+YubiKey ssh setup.

Said kex algorithm is sntrup761x25519-sha512@openssh.com. If you don't remove
it, any attempt to connect to the server with a "current" yubikey (I have a
5.2.7 yubikey) will result in scdaemon spitting a not-at-all helpful error
message at you:

gpg-agent[1074]: scdaemon[1074]: app_auth failed: Invalid value

This was a bit confusing, and it took me a while to figure out why it happens, even
with gpg-agent set to output all the information possible.

Solution:

# /etc/ssh/sshd_config
+ KexAlgorithms -sntrup761x25519-sha512@openssh.com

It's worth noting that this config only applies to sshd. If you're the client
connecting to a remote server that hasn't removed the algorithm, the connection
will still break. I'm not sure if the underlying bug is patched yet.
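That said, ssh_config accepts the same subtraction syntax, so in theory you can stop your client from ever offering the algorithm (a sketch; I have not verified this client-side):

# ~/.ssh/config
Host *
    KexAlgorithms -sntrup761x25519-sha512@openssh.com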

we love hp

sometimes linux just doesn't work and you have a hard time finding out why. this happened to my friend today because their
pc doesn't play well with linux. it took them like 3 days to figure out a possible solution.

their problem was that every single distro showed a black screen and nothing else. they tried Ubuntu, Endeavour and Manjaro,
and all gave them the black screen treatment.

Solution: add noapic to the kernel command line.
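for example, with GRUB (assuming /etc/default/grub; adjust for your distro/bootloader):

# /etc/default/grub: append noapic to whatever is already there
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash noapic"

then regenerate the config with sudo update-grub (or grub-mkconfig -o /boot/grub/grub.cfg) and reboot.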

Domain change

I lost my cloudflare and namecheap accounts after i reset my yubikey's fido applet. i reset it because macos gpg tools was being extremely bad
at handling ssh fido keys and gpg, leaving my yubikey unusable as a smartcard in macos, which was the only os i had access to
(found out about t2linux much MUCH later). i had to ssh into multiple servers and my key was the only option. fuck you apple.

anyways, i bought a new domain, coincidentally right as the kcomain.dev domain was about to expire (like a week earlier). it is this domain, soopy.moe.
i don't control the kcomain.dev domain anymore, so please do not send any more emails to that address because i will not be reading them (improvmx only
allows 1 domain at a time and setting it up again is a major chore).

haven't had the time to set all the old stuff back up on my current domain (tbh most of the old stuff doesn't work anyways so you shouldn't need to care)

a list of my currently open services can be found here

moment

Gotta love losing motivation for just about everything in like a single day

timetable!

timetable is now updated to 2021, see it here

time waste

Spent an entire day figuring out how to use Cython, but it turns out a shell script is sufficient.

hello

Perhaps I will ditch twitter after I set this up correctly.
