Hello, fellow Linux users!

My question is in the title: What is a good approach to deploy Docker images on a Raspberry Pi and run them?

To give you more context: the Raspberry Pi already runs an Apache server for Let’s Encrypt certificates and as a reverse proxy, and my home-grown server should be deployed as a Docker image.

To my understanding, one way to achieve this would be to push all sources over to the Raspberry Pi, build the Docker image on the Raspberry Pi, give the image a ‘latest’ tag, and use systemd with Docker or Podman to run the image.
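
Roughly what I have in mind, to make it concrete (image name, port and paths are just placeholders, and I went with Podman here):

# on the Pi, after the sources have been copied over (rootful Podman for simplicity)
sudo podman build -t myserver:latest .
sudo podman run -d --name myserver -p 8080:8080 myserver:latest

# Podman can emit a systemd unit for the container
# (newer Podman versions recommend Quadlet for this instead)
sudo podman generate systemd --new --name myserver | sudo tee /etc/systemd/system/myserver.service
sudo podman rm -f myserver   # hand over to systemd: remove the hand-started container
sudo systemctl daemon-reload
sudo systemctl enable --now myserver.service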

My questions:

  • Has anyone here had a similar problem but used a different approach to achieve this?
  • Has anyone here automated this whole pipeline so that, in a perfect world, I just push updated sources to the Raspberry Pi, the new Docker image gets built, and Docker/Podman automatically picks up the new image?
  • I would also be happy to be pointed at any available resources (websites/books) which explain how to do this.

At the moment I am using Raspbian 12 on a Raspberry Pi Zero 2 W, and the whole setup works with home-grown servers which are simply deployed as binaries and executed via systemd. My Docker knowledge is mostly from a developer perspective, so I know nearly nothing about deploying Docker on a production machine. (Which means that if there is a super obvious way to do this, I might not even be aware it exists.)

  • utopiah@lemmy.ml · 5 months ago

    I wouldn’t build anything significant on the RPi Zero and instead would try to build elsewhere, namely on a more powerful machine with the same architecture, or cross-build as others suggested.

    That being said, what’s interesting IMHO with container image building is that you can rely on layers. So… my advice would be to find an existing image that supports your architecture and then layer on top of it. This way you only build on the RPi what is truly not available elsewhere.
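
    For example (the image and names below are just an illustration), the heavy base layer comes straight from the registry and only your app layer is built on the Pi:

    # the multi-arch base layer is pulled, not built, on the Pi
    docker pull debian:bookworm-slim

    # your Dockerfile then starts with "FROM debian:bookworm-slim" and only
    # adds the app on top, so this build step stays cheap even on a Zero
    docker build -t myserver:latest .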

  • assaultpotato@sh.itjust.works · 5 months ago

    You can export images to tarballs and import them to your local docker daemon if you want.
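
    For example (image name and host are placeholders):

    # on the build machine: dump the image to a tarball and copy it over
    docker save -o myserver.tar myserver:latest
    scp myserver.tar pi@raspberrypi.local:~/

    # on the Pi: import it into the local daemon
    docker load -i myserver.tar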

    Not sure how podman manages local images.


    Idea:

    1. Set up inotifywait on your sources directory (plus rsync to keep the directories in sync) and, on change, build and run the latest image (rough loop sketch after this list).
    2. Set up inotifywait on your image tarball directory/file and, on change, import and run the latest image.
    3. Mount your source directory into a Docker server image that supports hot reloading.
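
    A minimal sketch of option 1 (paths, image name and port are made up, and it assumes inotify-tools is installed):

    # on the Pi: rebuild and restart whenever the rsynced sources change
    while inotifywait -r -e modify,create,delete,move ~/myserver-src; do
        docker build -t myserver:latest ~/myserver-src
        docker rm -f myserver 2>/dev/null || true
        docker run -d --name myserver -p 8080:8080 myserver:latest
    done
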
  • Domi@lemmy.secnd.me · 5 months ago

    Why not run the image registry on the Raspberry Pi itself? Then you can do your builds on your regular machine and push them to your Raspberry Pi when done.
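
    Roughly like this (hostname, image name and ports are placeholders; a plain-HTTP registry also has to be allowed as an insecure registry on the client side):

    # on the Pi: run a private registry
    docker run -d --restart=always --name registry -p 5000:5000 registry:2

    # on the build machine: build for the Pi's architecture and push
    # (linux/arm/v7 instead if the Pi runs 32-bit Raspbian)
    docker buildx build --platform linux/arm64 -t raspberrypi.local:5000/myserver:latest --push .

    # back on the Pi: pull and run the freshly pushed image
    docker pull localhost:5000/myserver:latest
    docker run -d --name myserver -p 8080:8080 localhost:5000/myserver:latest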

  • Voytrekk@sopuli.xyz · 5 months ago

    Perhaps a compose file on the Raspberry Pi. You can have it build the container as part of the compose file, and then you can just start your services with docker-compose up -d --build. The only things you would need to do are update the git repos and rerun the up command. You could also script it to pull the git repos before building.
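
    Roughly like this (service name, path and port are made up):

    # one-off: a compose file that builds from the local sources
    cat > compose.yaml <<'EOF'
    services:
      myserver:
        build: ./myserver-src
        ports:
          - "8080:8080"
        restart: unless-stopped
    EOF

    # every deploy afterwards is just
    git -C ./myserver-src pull
    docker compose up -d --build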

    • wolf@lemmy.zip (OP) · 5 months ago

      Thanks for the idea! I try to keep as few ‘moving’ parts as possible, so hosting GitLab is something I would want to avoid if possible. The Raspberry Pi is supposed to be the sole hardware for the whole deployment of the project.

      • CameronDev@programming.dev · 5 months ago

        It’s definitely not a lightweight solution. Is the Pi dedicated to the application? If so, is it even worth involving Docker?

        • wolf@lemmy.zip (OP) · 5 months ago

          You are asking exactly the right questions!

          I have an Ansible playbook to provision the Pi (or any other Debian/Ubuntu machine) with everything needed to run a web application, as long as the web application is a binary or uses one of the interpreters available on the machine. (Well, I also have playbooks to compile Python/Ruby from source, add an Adoptium JDK repository, etc.)

          Right now I am flirting with the idea of using Elixir for my next web application, and it just seems unsustainable for me to now add Erlang/OTP and Elixir to my list of playbooks to compile from source.

          The Debian repositories have quite old versions of Erlang/OTP/Elixir and I doubt there are enough users to keep security fixes/patches up to date.

          Combined with the list of technologies I already use, it seems to reduce complexity if I use Docker containers as deployment units, and that should be future-proof for at least the next decade.

          Writing about it, another solution might simply be to have something like Distrobox on the Pi and use something like the latest Alpine.
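
          Roughly something like this, I guess (the container name is made up):

          distrobox create --name deploy --image alpine:latest
          distrobox enter deploy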

          • CameronDev@programming.dev · 5 months ago

            Up-to-date runtimes definitely make sense; that is where Docker shines.

            GitLab is obviously a bit overkill, but maybe you could just create some systemd timers and some scripts to auto-pull, build and deploy?

            The script would boil down to:

            # pull the latest sources, then rebuild and restart the containers
            cd src
            git pull
            docker compose down
            docker compose up -d --build


            You’re welcome to steal whatever you can from the repo I linked before.
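
            The timer part could look roughly like this (unit names, schedule and the script path are made up; the script above would live in /usr/local/bin/deploy.sh):

            sudo tee /etc/systemd/system/deploy.service >/dev/null <<'EOF'
            [Unit]
            Description=Pull, rebuild and redeploy the app container

            [Service]
            Type=oneshot
            ExecStart=/usr/local/bin/deploy.sh
            EOF

            sudo tee /etc/systemd/system/deploy.timer >/dev/null <<'EOF'
            [Unit]
            Description=Run the deploy script every 15 minutes

            [Timer]
            OnCalendar=*:0/15
            Persistent=true

            [Install]
            WantedBy=timers.target
            EOF

            sudo systemctl daemon-reload
            sudo systemctl enable --now deploy.timer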

            • wolf@lemmy.zip (OP) · 5 months ago

              Thanks a lot!

              Yeah, if I go down that road, I’ll probably just add a git commit hook on the repo for the Raspberry Pi, so that I’ll have a ‘push to deploy’ workflow!
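
              Concretely that would probably be a bare repo on the Pi with a post-receive hook, something like this (paths and branch name are placeholders):

              # on the Pi: a bare repo to push to, plus a work tree for the sources
              git init --bare ~/myserver.git
              mkdir -p ~/myserver-src

              # the hook checks out whatever was pushed and rebuilds
              cat > ~/myserver.git/hooks/post-receive <<'EOF'
              #!/bin/sh
              git --work-tree="$HOME/myserver-src" --git-dir="$HOME/myserver.git" checkout -f
              cd "$HOME/myserver-src" || exit 1
              docker compose up -d --build
              EOF
              chmod +x ~/myserver.git/hooks/post-receive

              # on the dev machine: add the Pi as a remote and push to deploy
              git remote add pi pi@raspberrypi.local:myserver.git
              git push pi main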