Hi All. I have been running my own lemmy instance for a while now. I set it up sort of as an experiment, and then I realized that I liked having my own instance, as it makes me (mostly) immune to outages due to things outside my control, defederation drama, etc. So I decided that I am going to stick with having my own instance. But obviously the amount of space it is taking grows, and I apparently have zero foresight and I only have so much space on the SSD that I initially put lemmy on. So I wanted to migrate everything over to my NAS.

I am mounting a volume on my NAS via NFS. I copied over my whole lemmy directory with cp -a, and it appeared that all of the permissions and file ownership copied over properly. However, when I run the containers, the postgres container is constantly crashing. The logs say “Permission denied” and then “chmod operation not permitted” back and forth forever. I opened a shell in the container to see what was going on, and I could see that the container’s root user could not cd into /var/lib/postgres/data, but the postgres user could.

I have no_root_squash set on my NFS share, if that matters, but I doubt it is even relevant, since it is only root *inside* the container. I’m running my lemmy instance with rootless podman, so root inside the container actually maps to the UID of the user running the podman commands outside the container. That said, when I run this on my local filesystem, my podman user can’t access the postgres volume from outside the container, but root inside the container can.
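
For anyone following along, here is a sketch of how that rootless mapping works. The subuid values below are hypothetical (check your own /etc/subuid entry), and 999 is the postgres UID in the Debian-based official image — verify with `id` inside your container:

```shell
# Rootless podman maps container root (UID 0) to the host user running
# podman, and container UID N (N >= 1) to SUBUID_START + N - 1, where
# SUBUID_START comes from the user's /etc/subuid entry.
# Hypothetical entry:  podman:100000:65536
SUBUID_START=100000
CONTAINER_POSTGRES_UID=999   # postgres user in the Debian-based official image
HOST_UID=$((SUBUID_START + CONTAINER_POSTGRES_UID - 1))
echo "container UID $CONTAINER_POSTGRES_UID appears on the host as UID $HOST_UID"
```

You can confirm the real mapping with `podman unshare cat /proc/self/uid_map`. This matters on NFS because the server sees that high host UID (100998 in this sketch), not root and not the podman user, so the export and directory permissions have to make sense for *that* UID.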

I hope this isn’t too confusing, and that someone can help me with this. I know it’s a very specific setup: rootless podman running on an NFS share.

Today is also the first time I have ever tried using NFS, as my NAS was always using SMB before, but I needed file ownership to do this. So it’s very possible I just need to tweak some NFS settings.

Edit:

I sort of got it working, but it’s mega hacky. It’s not a permanent solution, but it gives me some insight into what is going wrong.

I set the permissions on the postgres volume on my host to g+rx, and it worked. However, as soon as the container started, it changed the permissions back to 700. The thing is, “root” doesn’t actually need access to the directory; the postgres user has access, and that’s all that needs it. So this actually works. But if I need to restart the container for any reason, it no longer works, and I would need to set the permissions to g+rx every time, which is just not a good solution.
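
The workaround can at least be scripted so the chmod happens right before every start. The default path below is the one from this post, and the container name at the end is a placeholder:

```shell
#!/bin/sh
# Hacky workaround: postgres resets its data directory to 700 on every
# startup, so reopen group traversal right before each container start.
# Default path is from the post; pass a different one as $1.
VOL="${1:-/mnt/nfs_share/podman/lemmy/volumes/postgres}"
chmod g+rx "$VOL"
# ...then start the container as usual, e.g.:
# podman start lemmy-postgres
```

Still hacky, but it survives restarts as long as the container is always started through this script.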

  • fkn@lemmy.world · 1 year ago

    It’s possible that ownership/group is wrong. Is there a reason you used cp -a instead of rsync -a? The rsync version is a much closer duplicate than the cp version.

    Edit: also, if the base folder that you are mounting into docker has different permissions, this can happen.

    • Dandroid@dandroid.app (OP) · 1 year ago

      I did try it with rsync -a and got the same results. :(

      Edit: Oh, I just saw your edit. The base folder could be the problem. So the folder structure leading up to the problem is like /mnt/nfs_share/podman/lemmy/volumes/postgres/. The postgres folder is what is being mounted and where the problem is. The whole lemmy folder is what is being copied. So the folder holding the problem folder should have the correct ownership and permissions. But could something upstream all the way to the podman folder cause issues all the way down?
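
      One quick way to check that whole chain at once is namei from util-linux, which prints the owner, group, and mode of every component of a path:

```shell
# Walk each component of the path and print its owner/group/mode,
# so a restrictive directory anywhere up the chain stands out.
namei -l /mnt/nfs_share/podman/lemmy/volumes/postgres
```

      To reach the volume, the mapped UID that the container’s postgres user runs as needs at least execute (x) permission on every ancestor directory, via the owner, group, or other bits — a single 700 directory owned by someone else anywhere in the chain breaks traversal all the way down.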

      podman is the name of my user that runs podman commands and the name of the folder that holds all the stuff that belongs to that user. I know, that’s confusing. Did I mention that I had zero foresight?