Hi all. I have been running my own Lemmy instance for a while now. I set it up as an experiment, and then I realized that I liked having my own instance: it makes me (mostly) immune to outages caused by things outside my control, defederation drama, etc. So I decided I am going to stick with having my own instance. But obviously the amount of space it takes keeps growing, and since I apparently have zero foresight, I only have so much space on the SSD that I initially put Lemmy on. So I wanted to migrate everything over to my NAS.

I am mounting a volume from my NAS via NFS. I copied over my whole Lemmy directory with cp -a, and all of the permissions and file ownership appeared to copy over properly. However, when I run the containers, the postgres container constantly crashes. The logs alternate between “Permission denied” and “chmod: operation not permitted” forever. I opened a shell in the container to see what was going on, and the container’s root user could not cd into /var/lib/postgres/data, but the postgres user could.

I have no_root_squash set on my NFS share, if that is important, but I doubt it is even relevant, since it is only the root user inside the container. I’m running my Lemmy instance with rootless podman, so root inside the container actually maps to the UID of the user running the podman commands on the host. That said, when I run this on my local filesystem, my podman user can’t access the postgres volume from outside the container, but root inside the container can.
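For anyone trying to reason about the mapping: with rootless podman, container UID 0 becomes the host user’s own UID, and container UIDs 1 and up map into the subordinate range from /etc/subuid. A quick sketch of the arithmetic — the range start of 100000 and the postgres UID of 999 (typical for the official postgres image) are assumptions, so check your own /etc/subuid:

```shell
# Hypothetical /etc/subuid entry for the host user that runs podman:
#   podman:100000:65536
# Container UID 0 -> the host user's own UID.
# Container UID N (N >= 1) -> range_start + N - 1.
range_start=100000   # assumed start of the subuid range
ctr_uid=999          # assumed UID of "postgres" inside the container
host_uid=$((range_start + ctr_uid - 1))
echo "$host_uid"     # 100998 — the owner you'd see on the volume from the host
```

On a real system, `podman unshare cat /proc/self/uid_map` shows the actual mapping for your user.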

I hope this isn’t too confusing, and I hope that someone can help me with this. I know it is a very specific setup being rootless podman and trying to run it on an NFS share.

Today is also the first time I have ever tried using NFS, as my NAS was always using SMB before, but I needed file ownership for this. So it’s very possible I just need to tweak some NFS settings.
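For reference, this is roughly the setup I’m describing — a config sketch only, where the export path, subnet, and hostnames are all made up, so substitute your own:

```shell
# Server side (/etc/exports on the NAS) — hypothetical path and subnet:
#   /volume1/podman  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# Client side — request NFSv4.2 explicitly, so you can see whether the
# server actually supports it rather than silently negotiating down:
mount -t nfs -o vers=4.2,rw nas.local:/volume1/podman /mnt/nfs_share

# Verify what was actually negotiated:
grep /mnt/nfs_share /proc/mounts
```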

Edit:

I sort of got it working, but it’s mega hacky. It’s not a permanent solution, but it gives me some insight into what is going wrong.

I set the permissions on the postgres volume on my host to g+rx, and it worked. However, as soon as the container started, it changed the permissions back to 700. The thing is, “root” doesn’t actually need access to the directory; the postgres user has access, and that’s all that needs it. So this actually works. But if I need to restart the container for any reason, it stops working, and I would need to set the permissions to g+rx every time, which is just not a good solution.
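To make the failure mode concrete, here is a scratch-directory sketch of what is happening (the temp dir stands in for the real postgres volume): postgres resets its data directory to mode 700 on startup, which silently undoes the manual fix, so the g+rx has to be re-applied before every restart:

```shell
# Scratch-dir demo of the permission dance (not the real volume):
pgdata="$(mktemp -d)"    # stand-in for .../volumes/postgres
chmod 700 "$pgdata"      # what the container leaves behind on startup
chmod g+rx "$pgdata"     # the manual fix needed before each restart
stat -c '%a' "$pgdata"   # → 750
rmdir "$pgdata"
```

A slightly less hacky version of the same workaround would be a wrapper script, or a systemd unit with an ExecStartPre that re-applies the chmod before starting the container.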

    • Dandroid@dandroid.appOP · 11 months ago

Hmm, I’m not 100% sure this is the scenario I am in. My user’s home directory is on the local filesystem, not on the NFS share, so the images are stored locally. The docker-compose file, the config files, and the volumes are the things that are on the NFS share.

I also think it’s worth pointing out that the pictrs container is working fine, and it also uses weird UIDs that are over 100,000.

      • Max-P@lemmy.max-p.me · 11 months ago

Could be some missing NFS features then: make sure you’re using NFSv4.2, and enable locking and as many other features as you can. It’s a database; it’s gonna be picky. Maybe it’s failing to lock the files.
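One quick way to check what the client actually negotiated is the vers= option in /proc/mounts. Sketch below using a sample line, since the real mount path and server name will differ — in practice you would run `grep /mnt/nfs_share /proc/mounts`:

```shell
# Sample /proc/mounts line (illustrative; note the server reported 4.1):
line='nas:/volume1/podman /mnt/nfs_share nfs4 rw,relatime,vers=4.1,rsize=131072 0 0'
# Extract the negotiated protocol version:
echo "$line" | sed -n 's/.*vers=\([0-9.]*\).*/\1/p'   # → 4.1
```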

        • Dandroid@dandroid.appOP · 11 months ago

          I think the link you provided might actually be the problem that I am running into, but the solutions they provided aren’t working for me for one reason or another. It looks like my NAS doesn’t support NFS 4.2. I am on 4.1.

It is definitely a permission issue. Everything works fine if I chmod g+rx, except that the container immediately chmods it back to 700, which makes it crash the next time I restart the container. Inside the container, root should have special permission to access the folder even without the permission bits, because it is root. I think this translation happens correctly on my host filesystem, but not over NFS, for the reasons mentioned in the link you posted. Most of it goes over my head, but it seems the kernel correctly interprets what is going on in the host filesystem, while the NAS does not.

I may need to abandon this idea and just get more internal storage. SSD prices are at an all-time low right now anyway.

      • fkn@lemmy.world · 11 months ago

It’s possible that ownership/group is wrong. Is there a reason you used cp -a instead of rsync -a? The rsync version is a much closer duplicate than the cp version.

Edit: also, if the base folder you are mounting into docker has different permissions, this can happen.
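Worth noting that plain -a does not carry everything either: hard links need -H, and ACLs/xattrs need -A/-X (only if both filesystems support them — NFS often does not). A minimal runnable sketch, with scratch directories standing in for the real source and NAS paths:

```shell
# Scratch dirs stand in for the real lemmy dir and the NFS mount:
src="$(mktemp -d)"; dst="$(mktemp -d)"
echo 'test' > "$src/postgresql.conf"      # stand-in file
# -a = archive, -H = preserve hard links, --numeric-ids = keep raw
# UIDs/GIDs instead of remapping by user name (matters across machines):
rsync -aH --numeric-ids "$src/" "$dst/"
cat "$dst/postgresql.conf"                # → test
rm -rf "$src" "$dst"
```

The trailing slash on the source matters: `src/` copies the contents of src into dst, while `src` would create a nested src directory inside dst.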

        • Dandroid@dandroid.appOP · 11 months ago

          I did try it with rsync -a and got the same results. :(

Edit: Oh, I just saw your edit. The base folder could be the problem. The folder structure leading up to the problem is /mnt/nfs_share/podman/lemmy/volumes/postgres/. The postgres folder is what is being mounted and where the problem is. The whole lemmy folder is what was copied, so the folder holding the problem folder should have the correct ownership and permissions. But could something upstream, all the way up to the podman folder, cause issues all the way down?

podman is the name of my user that runs podman commands, and also the name of the folder that holds all the stuff belonging to that user. I know, that’s confusing. Did I mention that I had zero foresight?

  • Dandroid@dandroid.appOP · 11 months ago

Unfortunately, no. I ended up adding another SSD to my server and am just running it there now.