I have 2 servers, each running a Debian VM. The old VM was one of the first I installed several years ago when I knew little; it's messed up and has little space left. It runs on TrueNAS Scale and hosts a couple of Docker apps that I'm very dependent on (Firefly, Hammond). I want to move the datasets for these Docker apps to a newer VM running on a Proxmox server. It's a Debian 13 VM with loads of space. What are my options for moving the data, given that neither Firefly nor Hammond has the appropriate export/import functions? I could migrate the old VM, but that wouldn't resolve my space issue. Plus it's Debian 10, and it would take a lot to bring it up to Trixie.
NFS+rsync
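If you go that route, the copy itself is a one-liner once the share or SSH access is in place (a sketch; the hostname and paths are placeholders, not from this thread):

# pull the app data from the old VM over SSH
# (old-vm and both paths are placeholders)
rsync -avz --progress root@old-vm:/path/to/appdata/ /srv/appdata/

Run it as root so -a can preserve ownership and permissions, which matters for database files.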
I'm not clear from your question, but I'm guessing you're talking about data stored in Docker volumes? (If they are bind mounts you're all good: you can just copy the data.) The compose files I found online for Firefly III use volumes, but Hammond looked like bind mounts. If you're not sure, post your compose files here with the secrets redacted.
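You can also ask Docker directly: bind mounts show up with "Type": "bind" and named volumes with "Type": "volume" (the container name here is just an example):

docker inspect -f '{{ json .Mounts }}' my_container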
To move data out of a Docker volume, a common way is to mount the volume into a temporary container to copy it out. Something like:
docker run --rm \
  -v myvolume:/from \
  -v $(pwd):/to \
  alpine sh -c "cd /from && tar cf /to/myvolume.tar ."
Then on the machine you're moving to, create the new empty Docker volume and use the same temporary-container trick to copy the data back in:
docker volume create myvolume

docker run --rm \
  -v myvolume:/to \
  -v $(pwd):/from \
  alpine sh -c "cd /to && tar xf /from/myvolume.tar"
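You can sanity-check the restored volume with another throwaway container:

docker run --rm -v myvolume:/data alpine ls -la /data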
Or, even better, just untar it into a data directory under your compose file and bind mount it so you don’t have this problem in future. Perhaps there’s some reason why Docker volumes are good, but I’m not sure what it is.
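A generic sketch of the bind-mount version (all the names here are placeholders):

services:
  my_service:
    image: my_image:latest
    volumes:
      - ./data:/path/in/container

The ./data directory sits next to the compose file, so backing up or moving the whole app is just copying that directory.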
Here is my docker compose file. I think I used the standard file that the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of Docker storage in volumes.
# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URL's will give 500 | Server Error
#
services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db

  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql

  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env

  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii

volumes:
  firefly_iii_upload:
  firefly_iii_db:

networks:
  firefly_iii:
    driver: bridge
Great. There are two volumes there, firefly_iii_upload and firefly_iii_db. You'll definitely want to docker compose down first (to ensure the database is not being updated), then:

docker run --rm \
  -v firefly_iii_db:/from \
  -v $(pwd):/to \
  alpine sh -c "cd /from && tar cf /to/firefly_iii_db.tar ."
and
docker run --rm \
  -v firefly_iii_upload:/from \
  -v $(pwd):/to \
  alpine sh -c "cd /from && tar cf /to/firefly_iii_upload.tar ."
Copy those two .tar files over to the new VM, then create the new empty volumes with:
docker volume create firefly_iii_db
docker volume create firefly_iii_upload
And untar your data into the volumes:
docker run --rm \
  -v firefly_iii_db:/to \
  -v $(pwd):/from \
  alpine sh -c "cd /to && tar xf /from/firefly_iii_db.tar"

docker run --rm \
  -v firefly_iii_upload:/to \
  -v $(pwd):/from \
  alpine sh -c "cd /to && tar xf /from/firefly_iii_upload.tar"
Then make sure you've manually brought over the compose file and the env files it references (.env, .db.env and .importer.env), and you should be able to docker compose up and be in business again. Good choice with Proxmox in my opinion.
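For the copying itself, scp does the job if you have SSH access to the new VM (the hostname and target directory here are placeholders):

# copy the tarballs, compose file and env files in one go
scp firefly_iii_db.tar firefly_iii_upload.tar \
    docker-compose.yml .env .db.env .importer.env \
    user@new-vm:~/firefly/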
Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply cp -R'd to the new NFS mountpoint, edited the yml file with the new paths and voila! It seems to be working. I know that some Docker containers don't like working off an NFS share, so we'll see. I wonder how well this will work when the VM is on a different machine, as there is then a network cable, a switch, etc. in between. If for any reason the NAS goes down, the Docker containers on the Proxmox VM will be crying as they'll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
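For reference, the mount on the VM side is just a plain NFS entry in /etc/fstab, something like this (the server name and paths are placeholders):

truenas:/mnt/pool/linkwarden  /home/user/linkwarden/data  nfs  defaults  0  0

I'm wondering whether options like nofail or x-systemd.automount would at least let the VM boot cleanly when the NAS is unreachable.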
The first rule of containers is that you do not store any data in containers.
The second rule of containers is that you run them from a versioned config with proper volumes and tagging. Always.
If you obey these rules, then it’s as simple as moving the volumes to another host and starting your containers. They’re fully portable that way.
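A minimal compose sketch of what that means (the names and the tag are placeholders):

services:
  my_service:
    image: my_image:1.2.3            # a pinned tag, not :latest
    volumes:
      - ./data:/path/in/container    # data lives outside the container

Pin an explicit image tag so the exact same thing comes up when you redeploy on another host.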
The first rule of containers is that you do not store any data in containers.
Do you mean they should be bind mounts? From here, a bind mount should look like this:
version: '3.8'

services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container
So referring to my Firefly compose above, I should simply be able to copy over /var/www/html/storage/upload for the main app data, and the database stored in /var/lib/mysql can just be copied over too? But then why does my local folder not have any storage/upload folders?

user@vm101:/var/www/html$ ls
index.html