

the easier a statement is to disprove, the more of a power move it is to say it, as it symbolizes how far you’re willing to go. - ie “faith” in religion.




I run nearly all my Docker workloads with their data just in the home directory of the VM (or LXC, actually, since that’s how I roll) I’m running them in, but a few have data on my separate NAS via an NFS share - so through a switch etc. - with no problems, just slowish.
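For reference, an NFS-backed volume can be declared right in the compose file via the local driver’s NFS options - a minimal sketch, where the NAS address, mount options, export path, and image are all placeholders you’d swap for your own:

```yaml
volumes:
  nasdata:
    driver: local
    driver_opts:
      type: nfs
      # placeholder address/options - use your NAS IP and preferred mount options
      o: "addr=192.168.1.50,rw,soft"
      # placeholder export path on the NAS (note the leading colon)
      device: ":/export/appdata"

services:
  app:
    image: alpine   # stand-in image for illustration
    volumes:
      - nasdata:/data
```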


Great. There’s two volumes there - firefly_iii_upload & firefly_iii_db.
You’ll definitely want to docker compose down first (to ensure the database is not being updated), then:
docker run --rm \
-v firefly_iii_db:/from \
-v $(pwd):/to \
alpine sh -c "cd /from && tar cf /to/firefly_iii_db.tar ."
and
docker run --rm \
-v firefly_iii_upload:/from \
-v $(pwd):/to \
alpine sh -c "cd /from && tar cf /to/firefly_iii_upload.tar ."
Then copy those two .tar files to the new VM. Then create the new empty volumes with:
docker volume create firefly_iii_db
docker volume create firefly_iii_upload
And untar your data into the volumes:
docker run --rm \
-v firefly_iii_db:/to \
-v $(pwd):/from \
alpine sh -c "cd /to && tar xf /from/firefly_iii_db.tar"
docker run --rm \
-v firefly_iii_upload:/to \
-v $(pwd):/from \
alpine sh -c "cd /to && tar xf /from/firefly_iii_upload.tar"
Then make sure you’ve manually brought over the compose file and those two .env files, and you should be able to docker compose up and be in business again. Good choice with Proxmox in my opinion.
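If you want to sanity-check the tar flags before touching the real volumes, the same copy-out / copy-in round trip works on plain host directories, no Docker involved - here “from” stands in for the old volume and “restore” for the new one (all paths are throwaway):

```shell
# Sanity-check the tar round trip on plain directories.
set -eu
workdir=$(mktemp -d)
mkdir -p "$workdir/from" "$workdir/restore"
echo "hello" > "$workdir/from/data.txt"

# copy out: the trailing "." archives the directory contents with relative paths
(cd "$workdir/from" && tar cf "$workdir/myvolume.tar" .)

# copy in: unpack into the (empty) destination
(cd "$workdir/restore" && tar xf "$workdir/myvolume.tar")

cat "$workdir/restore/data.txt"   # prints "hello"
```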


I’m not clear from your question, but I’m guessing you’re talking about data stored in Docker volumes? (If they’re bind mounts you’re all good - you can just copy the directories.) The compose files I found online for Firefly III use volumes, but Hammond looked like bind mounts. If you’re not sure, post your compose files here with the secrets redacted.
To move data out of a Docker volume, a common way is to mount the volume into a temporary container to copy it out. Something like:
docker run --rm \
-v myvolume:/from \
-v $(pwd):/to \
alpine sh -c "cd /from && tar cf /to/myvolume.tar ."
Then on the machine you’re moving to, create the new empty Docker volume and do the temporary copy back in:
docker volume create myvolume
docker run --rm \
-v myvolume:/to \
-v $(pwd):/from \
alpine sh -c "cd /to && tar xf /from/myvolume.tar"
Or, even better, just untar it into a data directory under your compose file and bind mount it so you don’t have this problem in future. Perhaps there’s some reason why Docker volumes are good, but I’m not sure what it is.
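For the bind-mount version, the relevant part of the compose file looks something like this (service name, image, and container path are placeholders - check your image’s docs for where it actually stores its data):

```yaml
services:
  db:
    image: mariadb   # placeholder image
    volumes:
      # data lives in ./data/db next to the compose file -
      # easy to tar, rsync, or back up directly
      - ./data/db:/var/lib/mysql
```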



Season 2


I’m local first - stuff I’m testing, playing with, or “production” stuff like Jellyfin, Forgejo, AudioBookshelf, Kavita etc. Local is faster, more secure, and storage is cheap. But some of my other stuff needs 24/7 access from the internet - websites and web apps - so that goes on the VPS.
I just do one Docker container per LXC. All the convenience of compose, plus those sweet Proxmox snapshots.


Sorry to have nostalgia in a post about games not needing it, but wow - the enjoyable hours I put into LocoRoco! Totally agree though - unique mechanic, and chef’s kiss execution.


Is there a reason not to use Tailscale for this?


Great job on the banner - I could hear the theme in my head.


When I switched to webdev, I dropped $20 on a Linux system administration course on Udemy. I highly recommend this approach.





Forgejo - actively developed open source. It’s what powers Codeberg. Easy to set up and manage with Docker. I moved to it from Gogs and skipped Gitea after reading about the forks.


+1 for Uptime Kuma. I use it in conjunction with a tiny Go endpoint that exposes memory, disk, and CPU. And, like @iii, I use ntfy for notifications. I went down the Grafana/Influx etc. route - and had a heap of fun making a dashboard, but then never looked at it. With my two Kuma instances (one on a VPS and one in my homelab) in browser tabs, and ntfy for notifications on my watch, I feel confident I’m across the regular things that can go wrong.


It is only resolving for devices in the Tailnet. Kuma is checking they are all up, and this Ansible playbook is checking they have all their updates. I wouldn’t have thought that was an unusual arrangement - and it worked perfectly for about a year, until about three weeks ago.


> go to the cinema
> empty.jpg
> Jay Kay comes in and sits directly in front of me


> afterallwhynot.jpg


Yes, this.


Thanks, yes - that’s exactly what I needed.


100% this. And Lenovos and HPs designed for the business market are generally a pleasure to work on (in the hardware sense) if you need to, with good manuals and secondhand spare parts.