For the vast majority of Docker images, the documentation only mentions a super long and hard-to-understand “docker run” one-liner.

Why does nobody put an example docker-compose.yml in their documentation? It’s so tidy and easy to understand, and much easier to run in the future: just set and forget.

If every image had a YAML file to just copy, I could get it running in a few seconds; instead I have to decode the one-liner into YAML myself.
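As a sketch of that conversion (image name and flags are hypothetical), here is the kind of one-liner the docs usually give and its docker-compose.yml equivalent:

```yaml
# The documentation one-liner:
#   docker run -d --name myapp -p 8080:80 -v ./data:/data \
#     -e TZ=Europe/Amsterdam --restart unless-stopped example/myapp:latest
#
# The same thing, decoded into a docker-compose.yml:
services:
  myapp:
    image: example/myapp:latest
    container_name: myapp
    ports:
      - "8080:80"
    volumes:
      - ./data:/data
    environment:
      - TZ=Europe/Amsterdam
    restart: unless-stopped
```

After that it really is set and forget: “docker compose up -d”, and the file documents itself.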

I want to know whether it’s just me being out of touch and I should use “docker run”, or whether a “one-liner” simply looks tidier in the docs. As if to say: “hey, just copy and paste this line to run the container. You don’t understand what it does? Who cares.”

The worst are the ones that pipe directly from curl into “sudo bash”…

  • OmltCat@lemmy.world · +79 · 1 year ago

    Because it’s a “quick start”: the least effort to get a taste of it. For an actual deployment I would use compose as well.

    Many projects also have an example docker-compose.yml in the repository if you dig just a little.

    There is https://www.composerize.com to convert a run command to compose. It works ~80% of the time.

    I honestly don’t understand why anyone would make “curl and bash” the official installation method these days, with Docker around. Unless it’s the ONLY thing you install on the system, so many things can go wrong.

    • Anony Moose@lemmy.ca · +1 · 1 year ago

      Out of curiosity, is there much more overhead to using Docker compared to installing via curl and bash? I’m guessing there are some redundant layers that Docker uses?

      • Shrek@lemmy.world · +5 · 1 year ago

        Of course, but the amount of overhead completely varies per container. The reason I’m willing to accept the (in my experience) very small overhead I typically get is that the repeatability with Docker is amazing.

        My first server was unRAID (Slackware-based); I set up Proxmox (Debian with a web UI) later. I took my unRAID server down for maintenance but wanted a certain service to stay up, so I copied a backup from unRAID to the other server and had the service running in minutes. If it were a package, there would be no guarantee that it had been built for both OSes, that both builds were the same version, or that they used the same libraries.

        My favorite way to extend the above is Docker Compose. I create a folder with a docker-compose.yml file and can keep EVERYTHING for that service in a single folder. unRAID doesn’t use Docker Compose in its web UI, so I try to keep things in Proxmox for ease of transfer.

        • Anony Moose@lemmy.ca · +2 · 1 year ago

          Makes sense! I have a bunch of services (Plex, Radarr, Sonarr, Gluetun, etc.) running as Docker containers on my Armbian media server. The ease of management is just something else! My HC2 doesn’t seem to break a sweat running about a dozen containers, so the overhead can’t be too bad.

          • Shrek@lemmy.world · +1 · 1 year ago

            Yeah, that’s going to come down completely to the containers you’re running and the people who designed them. If a container is built on Alpine Linux, you can pretty much trust that it will have barely any overhead. But if it’s built on an Ubuntu base image, it will include a bunch of services that probably aren’t needed in a typical Docker container.
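            As a rough illustration of where that difference starts (sizes are approximate and change per release):

            ```dockerfile
            # Alpine base: roughly 5–8 MB, musl libc and busybox, no extra services
            FROM alpine:3.19

            # Ubuntu base (for comparison): roughly 70–80 MB, full glibc userland
            # FROM ubuntu:24.04
            ```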

            • Anony Moose@lemmy.ca · +1 · 1 year ago

              Good point. Most containers I’ve used do seem to use Alpine as a base. Found this StackOverflow post that compared native vs. container performance, and containers fare really well!

              • Shrek@lemmy.world · +2 · 1 year ago

                It seems like that data is from 2014, too. I’m sure the numbers have improved in almost ten years!

    • macstainless@discuss.tchncs.de · +2/-1 · 1 year ago

      Omg I never knew about composerize or it-tools. This would save me a ton of headaches. Absolutely using this in the future.

  • ilmagico@lemmy.world · +11/-1 · 1 year ago

    I don’t think you’re out of touch; just use docker compose. It’s not that hard to convert the example docker run command line into a neat docker-compose.yml if they don’t already provide one. So much better than running containers manually.

    Also, you should always understand what any command or docker compose file does before you run it! And don’t blindly curl | bash either: download the bash script and look at it first.
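    A minimal sketch of that safer flow, using a local dummy script in place of a real installer URL:

    ```shell
    # Instead of: curl -fsSL https://example.com/install.sh | sudo bash
    # 1. Download the script first (simulated here with a hypothetical local installer):
    printf '#!/bin/sh\necho install-ok\n' > /tmp/install.sh
    # 2. Actually read it (interactively you would open it in less or an editor):
    cat /tmp/install.sh
    # 3. Only run it once you trust what it does:
    sh /tmp/install.sh
    ```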

  • AlexKalopsia@lemmy.world · +10 · 1 year ago

    I used docker run when I first started, I think it’s a fairly easy entry point that “just works”.

    However, I would never go back to it, since compose is a lot tighter and offers a better sense of overview and control.

  • Max-P@lemmy.max-p.me · +10 · 1 year ago

    Plain docker is useful for running simple containers, or even one-off things. A lot of people think of containers as long-running services, but many containers essentially run a single command to completion and then shut down.

    There are also alternate ways to handle containers. For example, Podman is typically used with systemd services: unlike Docker, it doesn’t work through a persistent daemon, so the configuration goes into a service.

    I typically skip the docker-compose for simple containers, and turn to compose for either containers with loads of arguments or multi-container things.

    I also switch between Docker and Podman depending on the machine and its needs.
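    A one-off container of the kind described above is a case where compose adds little; a sketch with a stock image:

    ```shell
    # Run a single command to completion, then remove the container:
    docker run --rm alpine:3.19 sh -c 'echo hello from a throwaway container'
    ```

    Once the flags pile up (ports, volumes, env vars), compose starts to pay off.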

  • Toribor@corndog.uk · +8 · 1 year ago

    I’ve started replacing my docker compose files with pure Ansible that is the equivalent of doing docker run. My Ansible playbooks look almost exactly like my compose files, but they can also create folders, set config files, or cycle services when configs are updated.

    It’s been a bit of a learning process, but it has replaced a lot of what was previously documentation with code instead.
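    A sketch of what such a playbook task can look like, using the community.docker collection (service name and paths are made up):

    ```yaml
    - name: Create the config directory
      ansible.builtin.file:
        path: /opt/myapp
        state: directory
        mode: "0755"

    - name: Run the container (rough equivalent of a docker run one-liner)
      community.docker.docker_container:
        name: myapp
        image: example/myapp:latest
        ports:
          - "8080:80"
        volumes:
          - /opt/myapp:/data
        restart_policy: unless-stopped
    ```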

      • Toribor@corndog.uk · +2 · 1 year ago

        ansible-nas

        Wow, yeah this is exactly the sort of roles/playbooks that I’ve been building. I’m definitely using this as a source before starting my own from scratch. Thanks for sharing.

    • xcjs@programming.dev · +3 · 1 year ago

      I’ve done something similar, but I’m using compose files orchestrated by Ansible instead.

      • Toribor@corndog.uk · +1 · 1 year ago

        I’m actually doing both right now, since I had quite a huge compose file that I haven’t converted to Ansible yet. The biggest frustration is that there doesn’t seem to be an Ansible module that works with compose v2 (the official plugin), which means I’m either stuck on the old version of compose or I have to use shell commands to run things like ‘docker compose up -d’.

        One nice thing I’ve gained, though, is for services like Plex: I have an ‘update’ playbook that checks whether Plex is actively streaming before updating the container, which isn’t something I could do easily with compose alone.
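        The shell-command fallback for compose v2 mentioned above looks roughly like this (stack path hypothetical):

        ```yaml
        - name: Bring the stack up with compose v2 (no dedicated module)
          ansible.builtin.command:
            cmd: docker compose up -d
            chdir: /opt/mystack
          changed_when: false  # let compose decide what actually changes
        ```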

        • alteredEnvoy@feddit.ch · +1 · 1 year ago

          Well, the v2 plugin is basically a binary, while v1 is written in Python, which makes it super easy to write an Ansible module for.

        • xcjs@programming.dev · +1 · 1 year ago

          I’m still using the old docker-compose executable - my Docker role is still installing it until the Ansible module catches up.

    • Zephyr@feddit.nl · +2/-1 · 1 year ago

      I did the same, but I started from my list of run scripts… I used ChatGPT to create them; it took 2 minutes…

      • Toribor@corndog.uk · +2 · 1 year ago

        Hahaha, I’ve been using ChatGPT in the exact same way. It requires a bit of double-checking but it really speeds things up a lot.

  • Knusper@feddit.de · +7 · 1 year ago

    Personally, I do usually want the docker run command. Much easier to use when orchestrating the deployment with other tools.

    For readability, I just line-break the command after each argument…
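    That layout looks like this (image and flags hypothetical):

    ```shell
    docker run \
      --name myapp \
      -p 8080:80 \
      -v ./data:/data \
      -e TZ=Europe/Amsterdam \
      --restart unless-stopped \
      -d example/myapp:latest
    ```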

  • krolden@lemmy.ml · +9/-2 · 1 year ago

    I’ve almost completely moved to Podman managed by systemd, and I highly recommend it.
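    For anyone curious, Podman can generate the unit files itself. A sketch, assuming a container named myapp already exists (newer Podman versions favor Quadlet files instead):

    ```shell
    # Write a unit that recreates the container from scratch on start:
    podman generate systemd --new --name myapp --files
    # Install and enable it as a user service:
    mkdir -p ~/.config/systemd/user
    mv container-myapp.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user enable --now container-myapp.service
    ```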

  • SilentMobius@lemmy.world · +4 · edited · 1 year ago

    Docker Compose is an orchestration tool that wraps around the built-in Docker functions exposed by commands like “docker run”. When teaching people a tool, you generally explain its base functions first, and then explain wrappers around the tool in terms of the functions already learned.

    Similarly, when you have a standalone container, you generally provide the information to get it running in terms of base Docker, not an orchestration tool… unless the container must be used alongside other containers, in which case orchestration config is often provided.

  • Pixel@lemmy.sdf.org · +4 · 1 year ago

    Honestly, I never really saw the point of it; it just seems like another dependency. The compose file and the docker run command carry almost the same information. I’d rather jump to kubectl and skip compose entirely. I’d like to see a tool that can convert between these three formats for you. As for piping into bash: no, I’d only do it for a very trusted package.
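    For the compose-to-Kubernetes direction such a tool exists: Kompose converts a compose file into manifests (basic invocation, from memory of its docs):

    ```shell
    # Generate Kubernetes manifests from a compose file:
    kompose convert -f docker-compose.yml
    ```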

  • Captain Howdy@lemm.ee · +4 · 1 year ago

    I prefer to use Ansible to define and provision my containers (Docker/Podman over containerd). For work, of course, k8s and Helm take the cake. No reason to run k8s for personal self-hosting, though.

    • cliffhanger407@programming.dev · +3 · 1 year ago

      No reason aside from building endless unnecessary complexity, which, let’s be honest, is 90% of the point of running a home lab.

      Shit’s broken at work: hate it. Shit’s broken at home: ooh a project!

  • yaaaaayPancakes@lemmy.world · +3 · 1 year ago

    For the first version of my server, I wrote a bunch of custom shell scripts that executed docker run statements to launch all my containers, b/c I didn’t know Docker at all and didn’t want to learn compose.

    For the current version, I use docker compose. All the containers I use come from linuxserver.io, and they always give examples of both. I use Ansible to deploy everything.

  • Morethanevil@lmy.mymte.de · +3 · 1 year ago

    I always use docker-compose. It is very handy if you ever want a good backup or to move the whole server to another machine: copy over the files, docker compose up -d, and you are done. Beginners should use docker compose from the start; it’s easier than docker run.

    If you ever want to convert those one-liners into a proper .yml, use this converter

    • casrou@feddit.dk · +2 · 1 year ago

      That is one docker compose up -d for each file you copied over, right? Or are you doing something even smarter?

      • Morethanevil@lmy.mymte.de · +3 · 1 year ago

        I have one docker-compose.yml for each service. You can use docker compose -f /path/to/docker-compose.yml up -d in scripts

        I would never use “one big” file for everything. You just get various problems, imo.
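        With one folder per stack, bringing everything back up after a restore can be a short loop (assuming a /docker/<stack>/docker-compose.yml layout):

        ```shell
        for d in /docker/*/; do
          docker compose -f "${d}docker-compose.yml" up -d
        done
        ```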

        • SheeEttin@lemmy.world · +3 · 1 year ago

          You use a separate file for each service? Why? I use one file for each stack, and if anything, breaking them out would give me issues.

          • Morethanevil@lmy.mymte.de · +2 · 1 year ago

            I meant stack 😸

            My structure is like

            /docker/immich/docker-compose
            /docker/synapse/docker…

            But I read that some people make one big file for everything

  • lonlazarus@lemmy.sdf.org · +3 · 1 year ago

    I’m curious to hear from the docker run folks. I use compose and feel the same: it’s more readable and editable, and it lets me back up the command by backing up the docker-compose.yml.

    • alteredEnvoy@feddit.ch · +2 · 1 year ago

      When orchestration or provisioning tools are used (Ansible, Kubernetes, etc.), creating networks and containers is equally readable in code. The way docker compose is designed makes it hard to integrate with these tools.

      • lonlazarus@lemmy.sdf.org · +2 · 1 year ago

        This is the response I was hoping to hear. I’m primarily a home-automation/self-hosted enthusiast, not necessarily a infrastructure enthusiast. As of yet, I haven’t felt the need for using more involved orchestration tools/infra.