Hi,

I want to self-host my own web server for Nextcloud, Jellyfin, Gitea, and a bunch of other things to move away from big tech. I’m planning on having a VM for each of those apps, and running each of them in Docker. I could then use Apache or Nginx to access them from outside my network. I’ve looked into virtual machines and found that QEMU would be the best option, especially for using the CLI. How would you recommend setting it up?

I ask this because I don’t want my server being used in some kind of botnet or some shit like that. I don’t think that will happen, but I’d prefer to just employ good practices to begin with just in case. Is it even worthwhile having a virtual machine for each of those services anyway?

Keep in mind that the PC I’m using is scraped together from spare parts, with an R5 3600 and 16GB of memory. If I need to upgrade it I’m happy to get a bit more, but it shouldn’t be an issue.

This is also my first post on programming.dev. I’m not sure if it’s the right place to post this, but hopefully there are some people here who can help.

Thanks!

  • buedi@feddit.de · 1 year ago

    As others mentioned, you probably do not need VMs. If you thought about VMs because of isolation, then yes, that might be a good idea.

    In an ideal world, if I had the budget / hardware, I would have a server with multiple NICs (Network Interface Cards) connected to different ports on my firewall for LAN and DMZ. Then I would create VMs for LAN and DMZ and run on those the Docker containers needed for that zone. Everything that is accessible from the Internet goes into the DMZ, the rest into the LAN. I could further lock it down by creating 2 DMZ zones and only putting, let’s say, NGINX or Traefik into the zone that gets exposed, and the services behind the reverse proxy into the 2nd DMZ zone, which would still be isolated from the LAN.

    But since I only have a small box with 1 NIC, instead I created VLANs on my router and created a Docker network for each VLAN. Every single service I run is a Docker container, placed in one of the VLANs appropriate to its level of exposure. I have one VLAN called LAN that is obviously connected to my LAN, and 2 other VLANs where I basically do what I described above. One holds Traefik and has ports exposed to the Internet, and the other VLAN hosts the services which are accessible through Traefik. With that setup you at least isolate network traffic, and it is something I would look into if you plan to expose any of your services to the internet. Usually when you start with Docker, you would probably just expose ports from the containers, which get mapped to the IP of your host… and so all those containers will have access to your LAN. At least try to separate that.
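
    For illustration, a rough sketch of what one of those per-VLAN Docker networks could look like in a compose file. This assumes a VLAN sub-interface (here eth0.30) already exists on the host and that the subnet / gateway match that VLAN on the router; every name and address below is a placeholder:

    ```yaml
    # Hypothetical macvlan network bound to VLAN 30 (DMZ). "eth0.30", the
    # subnet and the gateway are assumptions - adjust them to your own setup.
    networks:
      dmz:
        driver: macvlan
        driver_opts:
          parent: eth0.30            # VLAN sub-interface on the host
        ipam:
          config:
            - subnet: 192.168.30.0/24
              gateway: 192.168.30.1

    services:
      traefik:
        image: traefik:latest
        networks:
          dmz:
            ipv4_address: 192.168.30.10   # fixed IP inside the DMZ VLAN
    ```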

    The next thing I wanted to do is run my containers rootless, meaning no container has root permissions: if something within a container tries to get the Docker service to do something malicious on the host, it should not be able to run as root. The caveat here is that Docker does not support VLANs in rootless mode. I spent half a day converting everything to Podman, because people were praising Podman left and right if you want to run rootless, but then I found out that Podman does not support VLANs in rootless mode either :->
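
    For completeness, rootless Docker itself is set up roughly like this (a sketch based on Docker’s rootless mode; package names and paths vary by distro), just keep the VLAN / macvlan limitation above in mind:

    ```sh
    # Rough sketch of setting up rootless Docker (assumes the rootless extras
    # are installed, e.g. the docker-ce-rootless-extras package).
    # Run as a normal user, not as root.
    dockerd-rootless-setuptool.sh install

    # Point the Docker CLI at the per-user daemon socket
    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

    # Containers now run under your user account instead of root
    docker run --rm hello-world
    ```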

    Using VMs as described above would make the “I can not use docker rootless” problem less of a problem, but I decided against VMs because of Resources / Budget.

    What I can recommend when you start: do not try to make things too complicated until you are familiar with Docker and understand what you are doing. As you get better, you will want more and learn more stuff as you go.

    You could just install a Linux distribution you are familiar with (I use Ubuntu Server 22.04 LTS), install Docker and just play around with it a bit to see how everything works. Only start exposing services to the Internet once you know what you are doing.
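
    Getting to that “play around” stage is quick; on Ubuntu, something along these lines is enough (the convenience script is just one of several install options, pick whichever you trust):

    ```sh
    # One way to install Docker on Ubuntu (Docker's convenience script);
    # the distro package or Docker's apt repository work just as well.
    curl -fsSL https://get.docker.com | sh

    # Quick smoke test
    sudo docker run --rm hello-world

    # Throwaway web server to play with, reachable at http://localhost:8080
    sudo docker run --rm -d -p 8080:80 --name test-nginx nginx
    sudo docker stop test-nginx
    ```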

    Here are a few tips and keywords for stuff I went through step by step, for later reference:

    • If you expose Services to the Internet, use a Reverse Proxy you think you will understand (NGINX, Traefik, Caddy…)
    • Try to segment your network, if your hard- / software allows it, to separate LAN services from services exposed to the Internet
    • Start documenting your setup from the beginning! If you are like me, everything is clear as you do it… but when I come back a month later I wonder how I set up the VLANs or what each Environment Setting does for a specific container etc ;-)
    • Instead of using Docker volumes, think about bind-mounting container directories to directories on the host. All my containers have their data under /opt/<container> and all my docker-compose files are in another, separate directory.
    • Implement a Backup solution early on (I use kopia, which backs up my compose directory and /opt, which should be everything I need to set up everything again on a new host)
    • Once you have a few containers up and running and think you are familiar with how they work, start using docker-compose (see the sketch after this list). Having a compose file for each container makes updating and maintaining them super easy. There is an updated image for a container? Just run docker-compose up -d and you are done. You need a variation of a container for testing? Copy the compose file, make adjustments and run it.
    • I use watchtower to automatically check if new Docker images are available. I use it in monitoring mode: it will check for and download new images, but will not restart the containers. Instead I receive an e-mail from watchtower. I can then check if the update is for a container exposed to the internet, let kopia do another backup run, do a docker-compose up -d to restart / update the respective container, check if it still does what it should, and I am done.
    • Did I mention that you should document everything you do? If you are like me and have a memory like an earthworm, you should document your setup from the beginning ;-)
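
    To make a few of the points above concrete, here is a rough compose sketch for one service with its data bind-mounted under /opt, plus watchtower in monitor-only mode. Image names, paths and the notification setup are placeholders, so check the respective docs for the details:

    ```yaml
    # Hypothetical docker-compose.yml: one service with a host bind mount
    # under /opt/<container>, plus watchtower that only monitors for updates.
    services:
      nextcloud:
        image: nextcloud:latest
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - /opt/nextcloud:/var/www/html    # host directory instead of a named volume

      watchtower:
        image: containrrr/watchtower:latest
        restart: unless-stopped
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_MONITOR_ONLY=true    # check + notify, never restart containers
          - WATCHTOWER_NOTIFICATIONS=email  # mail server settings omitted here
    ```

    Because watchtower has already downloaded the newer image in that workflow, a plain docker-compose up -d in the service’s directory is then enough to recreate the affected container with it.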

    All in all: Do not rush it, do not feel the pressure to do everything I wrote. You might even come up with other, much better fitting solutions for you than what I or others here are doing. The most important things? Have fun and think twice what and how you expose a service to the public :-)

  • Voroxpete@sh.itjust.works · 1 year ago

    If you’re using Docker (which you should), there’s really very little value (and quite a bit of cost, resource-wise) in putting everything in separate VMs.

    Edit to add: for KVM/QEMU, you’re probably best off just using Proxmox as your host OS. If you’d prefer not to do that, then the easiest way to manage QEMU is with virt-manager, either connecting over SSH or run from the host with X forwarding.
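
    If you skip Proxmox, the remote part is straightforward with libvirt’s tools; roughly like this (assuming libvirt/QEMU is installed on the server and your user there is allowed to manage it; user and server are placeholders):

    ```sh
    # Manage the remote QEMU/KVM host over SSH with the GUI...
    virt-manager -c qemu+ssh://user@server/system

    # ...or from the command line with virsh
    virsh -c qemu+ssh://user@server/system list --all
    ```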

    I’m a big fan of KVM/QEMU, I use it at work and at home, but it does have its quirks, so prepare for a bit of a learning curve.

  • 𝘋𝘪𝘳𝘬@lemmy.ml · 1 year ago

    Why use VMs if you want to use Docker anyway? What I recently did was dockerize all of my self-hosted stuff and use Nginx Proxy Manager to run and configure a reverse proxy that listens on ports 443 and 80 and just forwards to the ports exposed by the containers.
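
    For reference, Nginx Proxy Manager itself is just another container; a minimal sketch close to its documented compose example (the admin UI ends up on port 81, and the proxy hosts pointing at your other containers are configured there; the /opt paths are placeholders):

    ```yaml
    # Rough sketch of running Nginx Proxy Manager with docker-compose.
    services:
      npm:
        image: jc21/nginx-proxy-manager:latest
        restart: unless-stopped
        ports:
          - "80:80"     # HTTP
          - "443:443"   # HTTPS
          - "81:81"     # admin web UI
        volumes:
          - /opt/npm/data:/data
          - /opt/npm/letsencrypt:/etc/letsencrypt
    ```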

    My home server is a “mini PC” from 2018 with a 1 terabyte SSD, 4 gigabytes of RAM and an Intel Celeron J3455. I’m currently running 7 containers.

  • InverseParallax@lemmy.world · 1 year ago

    You don’t need multiple VMs; I did that in the past and it was a waste. Just use 1 VM with multiple Docker containers, or FreeBSD jails. Some applications are more sensitive and need extra isolation, but nothing you listed should.

    Nginx as the reverse proxy; look up the configuration, but it works perfectly.
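
    A bare-bones version of that configuration looks roughly like this (hostname and port are placeholders, and TLS / Let’s Encrypt is left out for brevity):

    ```nginx
    # Minimal reverse proxy sketch: forward cloud.example.com (placeholder)
    # to a container published on 127.0.0.1:8080 on the same host.
    server {
        listen 80;
        server_name cloud.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
    ```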

    Your CPU and memory are perfect, you’re ready to roll.

    • Diseased Finger@programming.dev (OP) · 1 year ago

      I’m glad to know that I don’t have to upgrade anything. I was just a bit worried that having a million different Docker containers would be a bit of a strain on the CPU at least. Now that I think about it, I’m the only one using the server, so any extra hardware would be pretty pointless.

      • InverseParallax@lemmy.world · 1 year ago

        Nah, Docker containers have very low overhead; VMs have quite a bit more.

        Especially if you have no other users, you’re way overprovisioned; you’ll have tons of room to grow.

  • Chosen3339@lemmy.run · 1 year ago

    Use Docker + docker-compose. Most recent projects have a docker-compose example deployment in their docs, and generally you just need to copy-paste that and run it.
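
    The typical flow once you have copied such an example into a directory looks something like this:

    ```sh
    # Fetch the images and start everything in the background
    docker-compose pull
    docker-compose up -d

    # Follow the logs to check that it came up cleanly
    docker-compose logs -f
    ```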

    • Diseased Finger@programming.dev (OP) · 1 year ago

      Yep, that’s exactly what I’m doing! While I was learning ASP.NET, I decided to deploy the website using Docker and docker compose since I figured it would be easier than using the CLI directly, so I already had some experience with that.