I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

  • @ck_@discuss.tchncs.de
    19
    2 years ago

    The main downside of Docker images is that app developers don’t tend to pay a lot of attention to the images they produce beyond shipping their app. While software installed via your distribution benefits from the meticulous scrutiny of security teams making sure issues are fixed in a timely fashion, those fixes rarely trickle down the chain of images that your container ultimately depends on. While your distribution’s package manager sets up a cron job to install fixes from the security channel automatically, with Docker you are back to keeping track of this yourself, hoping that the app developer takes it seriously enough to supply new images in a timely fashion. This multiplies with the number of images, so you are always only as secure as the least well-maintained image.

    Most images, latest included, are of piss-poor quality from a security standpoint. Because of that, professionals do not tend to grab “off the shelf” images from random sources on the internet. If they do, they pay extra attention to ensure that these containers run in a sufficiently isolated environment.

    Self hosting communities do not often pay attention to this. You’ll have to decide for yourself how relevant this is for you.

    • @fruitycoder@sh.itjust.works
      1
      2 years ago

      For sure! Most seem to get random-git-repo levels of review instead of being seriously tested and hardened. I really wish we had more of a source for reliable audits of containers and Flatpaks. Just someone trusted, or a collective, running trivy, clair, sonarqube, etc., posting the results publicly, and tools like podman/K3s/etc. having sane defaults for checking containers against it on pull.
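
      For anyone who wants a taste of that locally, scanning a single image with Trivy is already a one-liner (the image name is just an example):

        # report known HIGH/CRITICAL CVEs in the image's OS packages and app dependencies
        trivy image --severity HIGH,CRITICAL nginx:latest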

  • Outcide
    15
    2 years ago

    Another old school sysadmin that “retired” in the early 2010s.

    Yes, use docker-compose. It’s utterly worth it.

    I was intensely irritated at first that all of my old troubleshooting tools were harder to use and just generally didn’t trust it for ages, but after 5 years I wouldn’t be without.

    • @DasGurke@feddit.de
      4
      2 years ago

      I’m a little younger but in the same boat. There is some friction having filesystems, ports and processes “hidden” from the host programs you typically rely on. But I need them sooooo much less now that all my services are in Docker with exactly matching dependencies, instead of rolling my eyes about running two PostgreSQL servers in different versions or juggling Python / Node / Ruby versions with ASDF.

      • Outcide
        2
        2 years ago

        Yeah, so worth it! The first time I moved a service to a new box and realised all I had to do was copy the compose file and docker-compose up -d … I was sold.
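
        For anyone who hasn’t seen that workflow: a minimal sketch, with a made-up service and paths, is a single docker-compose.yml carried to the new box along with whatever directories it bind-mounts:

          # docker-compose.yml (hypothetical example stack)
          services:
            whoami:
              image: traefik/whoami:latest   # tiny demo web service
              restart: unless-stopped        # come back up after reboots
              ports:
                - "8080:80"                  # host:container
              volumes:
                - ./data:/data               # where a real service would keep its state

        Copy that directory to the new machine, cd into it, docker-compose up -d, done.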

        Now I’m moving everything to Docker Swarm which is a new adventure. :-)

  • @ShittyBeatlesFCPres@lemmy.world
    15
    2 years ago

    I’m gonna play devil’s advocate here.

    You should play around with it. But I’ve been a Linux server admin for a long time and — this might be unpopular — I think Docker is unimportant for your situation. I use Docker daily at work and I love it. But I didn’t bother with it for my home server. I’ll never need to scale it, deploy anything repeatedly, or guarantee 100% uptime there.

    At home, I tend to try out new things and my old docker-compose files are just not that valuable. Docker is amazing at work where I have different use cases but it mostly just adds needless complexity on a home server.

    • Great Blue HeronOP
      8
      2 years ago

      That’s exactly how I feel about it. Except (as noted in my post…) the software availability issue. More and more stuff I want is “docker first” and I really have to go out of my way to install and maintain non docker versions. Case in point - I’m trying to evaluate Immich so I can move off Google photos. It looks really nice, but it seems to be effectively “docker only.”

    • @Shdwdrgn@mander.xyz
      2
      2 years ago

      This is kinda where I’m at as well. I have always run my home services each in their own VM. There’s no fuss to set up a new one, and if I want to move it to a different server I just copy the *.img file over and launch it. Sure, I run a lot of internet services across my various machines, but it all just works, so I don’t understand what purpose there would be in converting all the custom configurations over to Docker. It might make sense if I was trying to run all my services directly on the bare metal, but who does that?

  • @buedi@feddit.de
    13
    2 years ago

    I would absolutely look into it. Many years ago when Docker emerged, I did not understand it and called it “Hipster shit”. But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working and they had no idea how to fix it.

    Years passed and Containers stayed, so I started to have a closer look at it, tried to understand it. Understand what you can do with it and what you can not. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don’t just copy a new binary or library into a container to try to fix something.

    Today, my Homelab runs 50 Containers and I am not looking back. When I rebuilt my Homelab this year, I went full Docker. The most important reason for me was: every application I run dockerized is predictable and isolated from the others (from the binary side; the network side is another story). The issues I had earlier with my Homelab, when running everything directly in Linux on the box, were things like one application needing PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depending on a specific library where, after updating it, one app works but the other doesn’t anymore because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one Container, it does not affect the others.

    Another big plus is the Backups you can do. I back up every docker-compose file + data for each container with Kopia. Since barely anything is installed in Linux directly, I can spin up a VM, restore my Backups with Kopia and start all containers again to test my Backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again when I have a hardware failure.
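
    To give an idea of how simple that loop can be, a rough sketch (paths and the stack name are made up):

      # stop the stack briefly so the data on disk is consistent
      docker compose -f /opt/stacks/nextcloud/docker-compose.yml down
      # snapshot the compose file plus its data directory with Kopia
      kopia snapshot create /opt/stacks/nextcloud
      # bring it back up
      docker compose -f /opt/stacks/nextcloud/docker-compose.yml up -d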

    I really started to love Docker, especially in my Homelab.

    Oh, and you would think everything being containerized means big resource usage? My 50 Containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-Hole, Homeassistant, Mosquitto, multiple Kopia instances, multiple Traefik Instances with Crowdsec, Logitech Mediaserver, Tandoor, Zabbix and a lot of other things.

    • shastaxc
      1
      2 years ago

      The backup and easy set up on other servers is not necessarily super useful for a homelab but a huge selling point for the enterprise level. You can make a VM template of your host with docker set up in it, with your Compose definitions but no actual data. Then spin up as many of those as you want and they’ll just download what they need to run the images. Copying VMs with all the images in them takes much longer.

      And regarding the memory footprint, you can get that even lower using Podman because it’s daemonless. But it is a little more work to set things up to auto-start, because you have to wire it into systemd manually. Still a great option, and it also works on Windows and is able to parse Compose configs too. Just running Docker Desktop on Windows takes up like 1.5 GB of memory for me. But I still prefer it because it has some convenient features.
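
      For reference, the extra systemd step is roughly this (the container name is just an example; newer Podman versions prefer Quadlet files instead, as discussed elsewhere in this thread):

        # generate a user unit for an existing container named "pihole"
        podman generate systemd --new --files --name pihole
        mkdir -p ~/.config/systemd/user
        mv container-pihole.service ~/.config/systemd/user/
        systemctl --user daemon-reload
        systemctl --user enable --now container-pihole.service
        # let user services start at boot without anyone logged in
        loginctl enable-linger $USER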

    • @MaximilianKohler@lemmy.world
      1
      1 year ago

      It seems like docker would be heavy on resources since it installs & runs everything (mysql, nginx, etc.) numerous times (once for each container), instead of once globally. Is that wrong?

      • @buedi@feddit.de
        2
        1 year ago

        You would think so, yes. But to my surprise, my well over 60 Containers so far consume less than 7 GB of RAM, according to htop. Also, of course Containers can network and share services. For external access for example I run only one instance of traefik. Or one COTURN for Nextcloud and Synapse.

  • dblsaiko
    12
    2 years ago

    No. (Of course, if you want to use it, use it.) I used it for everything on my server starting out because that’s what everyone was pushing. Did the whole thing: used images from Docker Hub, used/modified dockerfiles, wrote my own, used first Portainer and then docker-compose to tie everything together. That was until around 3 years ago, when I ditched it and installed everything normally, I think after a series of weird internal network problems. Honestly, the only positive thing I can say about it is that it means you don’t have to manually allocate ports for those services that can’t listen on unix sockets, which always feels a bit yucky.

    1. A lot of images come from some random guy you have to trust to keep them updated with security patches. Guess what: a lot don’t.
    2. Want to change a dockerfile and rebuild it? If it’s old and uses something like “ubuntu:latest” as a base and downloads similar “latest” binaries from somewhere, good luck getting it to build or work because “ubuntu:latest” certainly isn’t the same as it was 3 years ago.
    3. Very Linux- and x86_64-centric. Linux is of course not really a problem (unless on Mac/Windows developer machines, where docker runs a Linux VM in the background, even if the actual software you’re working on is cross-platform. Lmao.) but I’ve had people complain that Oracle Free Tier aarch64 VMs, which are actually pretty great for a free VPS, won’t run a lot of their docker containers because people only publish x86_64 builds (or worse, write dockerfiles that only work on x86_64 because they download binaries).
    4. If you’re using it for the isolation, most if not all of its security/isolation features can be used in systemd services. Run systemd-analyze security UNIT.
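
    As a sketch of point 4: a plain systemd unit can get most of the sandboxing people reach for Docker for. The directives below are standard systemd options; the service itself is hypothetical.

      # /etc/systemd/system/myapp.service
      [Unit]
      Description=Example hardened service

      [Service]
      ExecStart=/usr/local/bin/myapp
      # run as a throwaway unprivileged user
      DynamicUser=yes
      # mount the whole file hierarchy read-only (except /dev, /proc, /sys)
      ProtectSystem=strict
      ProtectHome=yes
      PrivateTmp=yes
      PrivateDevices=yes
      NoNewPrivileges=yes
      RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
      # empty set = drop all capabilities
      CapabilityBoundingSet=
      SystemCallFilter=@system-service

      [Install]
      WantedBy=multi-user.target

    systemd-analyze security myapp.service then scores whatever attack surface is still exposed.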

    I could probably list more. Unless you really need to dynamically spin up services with something like Kubernetes (which is probably way beyond what you need if you’re hosting a few services), I don’t think it’s something you need.

    If I can recommend something else to look at instead, it would be NixOS. I originally got into it because of the declarative system configuration, but it does everything people here would usually use Docker for and more. I’ve seen it described as “docker + ansible on steroids”, but it uses a more typical central package repository, so you do get security updates for everything you have installed, and your entire system as a whole is reproducible from a set of config files (you can still build Nix packages from the 2013 version of the repository, I think, though they won’t necessarily run on modern kernels because of kernel ABI changes since then). However, be warned: you need to learn the Nix language and NixOS configuration, which has quite a learning curve tbh. On the other hand, setting up a lot of services is as easy as adding one line to the configuration to enable the service.
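
    For example, a configuration.nix fragment like this (the module names come from nixpkgs; exact options vary per service) replaces what would otherwise be a compose file and a pile of volumes:

      { config, pkgs, ... }:
      {
        # each line pulls in the package, creates users, and writes the
        # systemd units and default config for that service
        services.jellyfin.enable = true;
        services.nginx.enable = true;
        services.postgresql.enable = true;
      }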

  • Caveman
    11
    2 years ago

    I started using docker myself for stuff at home and I really liked it. You can create a setup that’s easy to reproduce or just download.

    Easy to manage via the Docker CLI, a one-liner to run on startup unless stopped, and tons of stuff made for Docker becomes available. For non-Docker things you can always log in to the container.

    Tasks such as running, updating, stopping, listing active servers, finding out what ports are being used and automation are all easy imo.
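
    A rough sketch of that day-to-day loop, using Uptime Kuma as the example container:

      # run it, and have it come back up on boot unless explicitly stopped
      docker run -d --name uptime-kuma --restart unless-stopped \
        -p 3001:3001 -v uptime-kuma:/app/data louislam/uptime-kuma
      docker ps                        # list running containers and their ports
      docker logs -f uptime-kuma       # follow the logs
      docker exec -it uptime-kuma sh   # "log in" to the container
      docker stop uptime-kuma && docker rm uptime-kuma   # stop/remove; re-run to update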

    You probably have something else you use for some/all of these tasks, but Docker makes all this available to non-sysadmin people and even has a GUI for people who like clicking their mouse.

    I think next time you find something that provides a docker compose file you should try it. :)

  • @krash@lemmy.ml
    10
    2 years ago

    Welcome to the party 😀

    If you want a good video tutorial that explains the inner workings of Docker so you understand what’s going on beneath the surface (without drowning in the details), let me know and I’ll paste it tomorrow. Writing from bed atm 😴

  • @BCsven@lemmy.ca
    10
    2 years ago

    Docker is great. I learned it from setting up an OpenMediaVault server that had a built-in Docker extension, so now I have lots of servers running off that one server. Also, Portainer can be very handy for working with containers, basically a GUI for the command-line stuff or compose files you’d normally use with the Docker CLI.

  • @rsolva@lemmy.world
    9
    2 years ago

    Yes! Well, kinda. You can skip Docker and go straight to Podman, which is an open source and more integrated solution. I configure my containers as systemd services (as quadlets).

      • @rsolva@lemmy.world
        5
        2 years ago

        There are still edge cases, but things have improved rapidly the last year or two, to the point that most docker-compose.yaml files can be run unmodified with podman-compose.

        I have however moved away from compose in favor of running containers and pods as systemd services, which I really like. If you want to try it, make sure your distro has a reasonably new version of Podman, at least v4.4 or newer. Debian stable has an older version, so I had to use the testing repos to get quadlets working.
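
        For anyone curious what a quadlet looks like: a minimal (made-up) example is a file like ~/.config/containers/systemd/myapp.container:

          [Unit]
          Description=Example web app managed by Podman

          [Container]
          Image=docker.io/library/nginx:alpine
          PublishPort=8080:80

          [Service]
          Restart=always

          [Install]
          WantedBy=default.target

        After a systemctl --user daemon-reload it shows up as myapp.service and can be started and logged like any other unit.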

      • the_weez
        4
        2 years ago

        I’m no expert, but as far as I can tell yes. It also seems a bit easier to have a rootless setup.

      • @Anonymouse@lemmy.world
        3
        2 years ago

        It depends on what you do with Docker. Podman can replace many of the core docker features, but does not ship with a Docker Desktop app (there may be one available). Also, last I checked, there were differences in the docker build command.

        That being said, I’m using Podman at home and at work, doing development things and building images just fine. My final images are built in a pipeline with actual Docker, though.

        I jumped ship from Docker (like the metaphor?) when they started clamping down on unregistered users and changed the corporate license. It’s my personal middle finger to them.

  • @Swarfega@lemm.ee
    9
    2 years ago

    I’m a VMware and Windows admin in my work life. I don’t have extensive knowledge of Linux but I have been running Raspberry Pis at home. I can’t remember why but I started to migrate away from installed applications to docker. It simplifies the process should I need to reload the OS or even migrate to a new Pi. I use a single docker-compose file that I just need to copy to the new Pi and then run to get my apps back up and running.

    linuxserver.io make some good images and have example configs for docker-compose

    If you want to have a play just install something basic, like Pihole.
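
    A starter compose file for that could look roughly like this (based on Pi-hole’s published image; check their docs for the currently recommended variables):

      services:
        pihole:
          image: pihole/pihole:latest
          ports:
            - "53:53/tcp"
            - "53:53/udp"
            - "8080:80/tcp"           # web UI
          environment:
            TZ: "Europe/London"
            WEBPASSWORD: "changeme"   # admin UI password
          volumes:
            - ./etc-pihole:/etc/pihole
            - ./etc-dnsmasq.d:/etc/dnsmasq.d
          restart: unless-stopped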

      • @MalReynolds@slrpnk.net
        2
        2 years ago

        Concur. Podman doesn’t (have to) have root, and has auto-update and podman-compose to use docker-compose files. Containers are cool, Docker less so.

        • @PopeRigby@lemmy.world
          2
          2 years ago

          My biggest issue with Podman is that podman-compose isn’t officially recommended or supported, and the alternatives (Kubernetes YAML and Quadlet) kind of suck compared to using a compose file. It makes me wary of pouring a bunch of work into switching to podman-compose. I have no clue why they didn’t just use the compose spec for their official orchestration method.

          • @MalReynolds@slrpnk.net
            1
            2 years ago

            Once the containers are running after podman-compose, you can use podman-generate-systemd to create systemd services. That helped me move a rather large compose file to a bunch of services. My notes weren’t the best, sorry, but that’s the gist. It got me moved. I’ve now moved on to .container files for new stuff, which generate the units on the fly. Need to move my old services over, but they work and who’s got the time…

            • @PopeRigby@lemmy.world
              1
              2 years ago

              How do you like the .container files? I hate the idea of having different files for each container, and each volume. They also don’t even support pods and the syntax is just terrible compared to YAML.

              • @MalReynolds@slrpnk.net
                1
                2 years ago

                Not sure yet, agree it’s not as nice to look at as YAML, but at least it’s prettier than the alternative systemd.service implementation, and it’s been rock solid so far. Time will tell, I’m sure pods will come and it seems to be what redhat sees as their direction. A method for automatically generating them from docker YAML (and hopefully vice-versa) would go a looong way towards speeding adoption.

                • @PopeRigby@lemmy.world
                  1
                  2 years ago

                  How do you feel about having to specify a different file for all of your containers and volumes? Has that annoyed you at all? I agree that pods are really nice, and they should give you a way to generate them from compose YAML.

      • @EnderMB@lemmy.world
        1
        2 years ago

        I’ve regrettably only heard of Podman in passing. At work we use Docker containers with Kubernetes; is this something we could easily transition to without friction?

    • Great Blue HeronOP
      5
      2 years ago

      Why not? Because I’d never heard of it until this thread - lots of people are mentioning it, so obviously I’ll look into it.

  • 520
    8
    2 years ago

    It’s very, very useful.

    For one thing, it’s a ridiculously easy way to get cross-distro support working for whatever it is you’re doing, no matter the distro-specific dependency hell you have to crawl through in order to get it set up.

    For another, rather related reason, it’s an easy way to build for specific distros and distro versions, especially in an automated fashion. Don’t have to fuck around with dual booting or VMs, just use a Docker command to fire up the needed image and do what you gotta do.

    Cleanup is also ridiculously easy. Complete uninstallation of a service running in Docker simply involves removing the image and any containers attached to it.
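
    In practice that cleanup is a couple of commands (the names are placeholders):

      docker stop myservice && docker rm myservice   # remove the container
      docker rmi myservice:latest                    # remove the image
      docker volume prune                            # optionally drop unused volumes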

    A couple of security rules you should bear in mind:

    1. expose only what you need to. If what you’re doing doesn’t need a network port, don’t provide one. The same is true for files on your host OS, RAM, CPU allocation, etc. (see the sketch after this list).
    2. never use privileged mode. Ever. If you need privileged mode, you are doing something wrong. Privileged mode exposes everything and leaves your machine ripe for being compromised, as root if you are using Docker.
    3. consider podman over docker. The former does not need to run as root.
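
    A sketch of rule 1 in practice (the image and paths are hypothetical; the flags are standard docker run options):

      # publish on localhost only, cap RAM and CPU, keep the root filesystem
      # read-only, and mount the app's config read-only as well
      docker run -d --name example-app \
        -p 127.0.0.1:8080:8080 \
        --memory=256m --cpus=0.5 \
        --read-only --tmpfs /tmp \
        -v ./config:/config:ro \
        example/app:1.2.3
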
  • @elscallr@lemmy.world
    7
    2 years ago

    Yes. Containers are awesome in that they let you use an application inside a sandbox, but beyond that you can deploy it anywhere.

    If you’re in the sysadmin world you should not only embrace Docker but I’d recommend learning k8s, too, if you still enjoy those things.