Not containers and data, but the images themselves. The point would be reproducibility in case a remote registry no longer contains a certain image. Do you do that, and if so, how?

  • wersooth@lemmy.ml · 5 days ago

    If there's a custom image, e.g. additional stuff I added on top of an existing image, then I back up the Dockerfile. You can rebuild the image anytime, and it's far smaller than storing the binary image.
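
    For example, a made-up Dockerfile of that kind (base image and package are placeholders, not anything specific from this thread):

        FROM nginx:1.27
        RUN apt-get update \
            && apt-get install -y --no-install-recommends curl \
            && rm -rf /var/lib/apt/lists/*

    Rebuilding it later is then a single command:

        docker build -t my-nginx:custom .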

    • brvslvrnst@lemmy.ml · 5 days ago

      Came here to say that: the most economical solution is to back up the Dockerfiles themselves, though with the caveat that installs within the build steps may depend on external resources that could likewise disappear.

      There are ways of adding a local caching layer between your system and the external registries, which might be what OP is after, but that means investing in the additional space needed to store the cached images.
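
      One such layer, as a rough sketch: the stock registry:2 image can run as a pull-through cache in front of Docker Hub (the port and cache path here are arbitrary, not a recommendation):

          # Local pull-through cache for Docker Hub
          docker run -d --name hub-cache -p 5000:5000 \
            -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
            -v /srv/registry-cache:/var/lib/registry \
            registry:2

          # Then point the Docker daemon at it via /etc/docker/daemon.json
          # and restart it:
          #   { "registry-mirrors": ["http://localhost:5000"] }

      Anything pulled after that gets kept under /srv/registry-cache, which is where the additional space comes in.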

  • HumanPerson@sh.itjust.works · 4 days ago

    I used to, but then I switched out the server I was doing backups to and have been thinking “I’ll get to it later” for many months. If anything goes wrong, I’m screwed. I’ll get to it later ¯\_(ツ)_/¯

  • just_another_person@lemmy.world · 4 days ago

    I mean… you already have the image right there on your machine. If you’re concerned, just run your own registry and push copies there when needed. This is of course all unnecessary, as you only need the Dockerfile to build a clean image from scratch, and that will obviously work if it has already been published.
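
    For the registry route, the sketch is short (name, port, and storage path are just illustrative):

        # Run a private registry and keep its data on disk
        docker run -d --name registry -p 5000:5000 \
          -v /srv/registry:/var/lib/registry registry:2

        # Re-tag an image you already have locally and push a copy into it
        docker tag nginx:1.27 localhost:5000/nginx:1.27
        docker push localhost:5000/nginx:1.27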

  • r0ertel@lemmy.world · 4 days ago

    I’ve been looking to do this, but haven’t found a good, easy-to-use pull-through proxy for Docker Hub, ghcr.io, and some other registries. Most support Docker Hub only.

    This one looks promising but overly complicated to set up.

    A few times now, I’ve gone to restart a container and the repo’s been moved, archived or paywalled. Other times, I’m running a few versions behind and the maintainer decided to not support it, but upgrading would mean a complete overhaul of my Helm values file. Ugh!

    I was considering running a Docker registry on a separate port for each upstream registry I’d like to proxy/cache.
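
    Ports and cache paths below are made up, and the ghcr.io half is unverified since registry:2’s pull-through mode is only documented against Docker Hub, but the idea would look roughly like this:

        # Pull-through cache for Docker Hub on :5000
        docker run -d --name hub-cache -p 5000:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
          -v /srv/cache/hub:/var/lib/registry registry:2

        # Second instance for ghcr.io on :5001 (may need proxy credentials)
        docker run -d --name ghcr-cache -p 5001:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://ghcr.io \
          -v /srv/cache/ghcr:/var/lib/registry registry:2

        # Pulls then go through the matching port, e.g.
        #   docker pull localhost:5000/library/nginx:latest
        #   docker pull localhost:5001/someowner/someimage:latest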

  • GreenKnight23@lemmy.world · 4 days ago

    Yes. All of the images I use are cached and stored in my locally hosted GitLab registry.
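
    Getting them in there is roughly the usual tag-and-push against the GitLab container registry (hostname and project path below are placeholders):

        docker login registry.gitlab.example.com
        docker pull postgres:16
        docker tag postgres:16 registry.gitlab.example.com/homelab/mirrors/postgres:16
        docker push registry.gitlab.example.com/homelab/mirrors/postgres:16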

    I think I’ve got around 120-140 images; a lot of what I have is just in case of an emergency.

    I’ve always imagined I could build and run technological infrastructure after a social collapse or something, so I have a lot of images that could be a good basis to start with: most OS images, popular DB images, etc. It would probably never work, but I’d rather have the option than not.