Running a dedicated server in docker

User avatar
Z-Man
God & Project Admin
Posts: 11585
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Running a dedicated server in docker

Post by Z-Man »

I'm struggling to find an acceptable setup there. The current legacy_0.2.9 branch has a dockerfile at its root; if you want, you can build it with, e.g.

Code: Select all

docker build . -t armagetronad
and run it as configured with

Code: Select all

docker run --rm -ti --network=host armagetronad
And it should appear as a server on your local network. So far, so good!
But it's running with a default configuration, and log files and other persistent data aren't kept around (they are if you remove the --rm, but they are a pain to get at and are not automatically reused in the next run).

The ideal way to both make the configuration editable and var data persistent would be via a bind mount of, for example, the home directory of the user that's configured to run the server:

Code: Select all

docker run --rm -ti --network=host --mount type=bind,source=/some/directory,target=/home/armagetronad armagetronad
But... inside the docker container the server runs as user armagetronad, UID 1000, and that's directly mapped to the outside. If that's my own user ID on the outside, great! That will just work and /some/directory, if owned by me on the outside, can be fed with configuration and hold var files. But....... if that's not my UID OR I'm using rootless docker or podman (even with matching UIDs), it doesn't work. The user inside docker does not get write access to the bind mounted directory.

If I configure the docker image to run the server as root, it all works in podman/rootless docker. But then I'm running as root in the regular rootful docker configuration, which is hardly ideal.
I looked a bit into user remapping, but if I read the documentation correctly, that wouldn't work so well with multiple users on the system.

What does work in all situations is to not use bind mounts, but volumes. Docker knows how to properly make them accessible inside the container. But volumes are an abstraction, and it's quite hard (for me) to view and edit their content.

Am I maybe thinking completely wrong here? Should I just use volumes (named, for persistence across runs) and instead of trying to get to them from the outside, whenever I want to change or inspect something, ENTER the container with 'docker exec' and use the contained tools to inspect/edit stuff? Or, if I get annoyed by only having vi available (I'm sure it's great! I'm... just always glad I remember how to exit it.), copy stuff in and out with 'docker cp'?
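For reference, that exec/cp workflow would look something like this (the container name and the config file name are made up for illustration):

Code: Select all

```shell
# Assumes the server was started with a name and a named volume, e.g.:
#   docker run -d --name armaserver -v arma-data:/home/armagetronad armagetronad
# Open a shell inside the running container to inspect/edit in place:
docker exec -ti armaserver sh
# Or copy a file out, edit it with a comfortable editor, and copy it back:
docker cp armaserver:/home/armagetronad/settings_custom.cfg .
docker cp settings_custom.cfg armaserver:/home/armagetronad/
```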

Or a mixture? Use the volume for variable data and the resource cache, and bind mount a read-only configuration directory? Yeah, I think I'll see whether that makes me happy.
User avatar
kyle
Reverse Outside Corner Grinder
Posts: 1876
Joined: Thu Jun 08, 2006 3:33 pm
Location: Indiana, USA, Earth, Milky Way Galaxy, Universe, Multiverse
Contact:

Re: Running a dedicated server in docker

Post by kyle »

Ya, file permissions are a pain in docker, even worse with podman. I think there is a command you can run on the directory you want to mount for a rootless docker container so that permissions don't get messed up. If it's not a command, it's a slight change to how you mount the volumes.

This is something I wanted to try to work on to bring back CTWF, if I ever get time :(
User avatar
Light
Reverse Outside Corner Grinder
Posts: 1667
Joined: Thu Oct 20, 2011 2:11 pm

Re: Running a dedicated server in docker

Post by Light »

So, I'll start by saying I haven't actually tested the image you're using, as I've built my own for sty+ct and sty+ct+ap. What I do is have everything the server needs go into a single directory, then do a bind-mounted volume, which for me looks something like this in the container:

/home/tron/armagetronad/server
config language resource scripts settings var

Permissions are all set to 1000:1000 on the server. If you need to edit them, since it's volumed, you can simply edit from the server with all installed tools like nano. If your goal is to allow other users to edit/create files for a hosting solution, then you may need to allow group access (xx6 permission) and then set the ownership on newly created files.

One other note: you don't need to use --network=host. The container only needs access to a single UDP port, so just pass that one through. Of course, each container needs its own specified port, and server_info.cfg in your settings dir needs to set SERVER_PORT to the selected port. This of course won't limit outbound connections.

If you need more or would like to use any of my stuff (dockerfile, template, etc.), just get a hold of me on Discord: codefossa#5202. I could also test out your image if needed and let you know what I use to get it working. I'm working at this moment, but basically any time after 15:00 ET I can help out. Let me know!
User avatar
Z-Man
God & Project Admin
Posts: 11585
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Re: Running a dedicated server in docker

Post by Z-Man »

Yeah, apparently the issue is that inside the container, bind mounts always appear owned by root, no way around that. So to make their contents writable by a non-root user inside the container, you would have to make them world writable on the host. Also not a terribly good idea.

This old bug thread gave some details and solution suggestions. The best easy suggestion seems to be this: it works if you leave the default user inside the container as root, and then start the server with a wrapper script that chowns everything in the potential bind mount to the non-root user:

(PROGNAME=armagetronad, obv.)

Code: Select all

#!/bin/bash
chown -R ${PROGNAME}:${PROGNAME} /home/${PROGNAME}
su - ${PROGNAME} -c /usr/local/bin/${PROGNAME}-dedicated "$@"
chown -R root:root /home/${PROGNAME}
The chown back to root at the end is required; without it, the contents of the bind mount on the host end up owned by whatever random user ID docker maps the in-container user to. Also, error propagation is missing.

Yeah, it's not nice that you need a wrapper script, but the way the dockerfile is set up, I need one anyway; PROGNAME is a build parameter and apparently you can't use those in ENTRYPOINT, yay. The fact that the script runs with root rights inside the container is also not too bothersome; at least the server itself runs with fewer privileges.
While the server is running, your bind mounted host directory is owned by some random other user, though. I don't know how I feel about that. And you need to quit the server regularly to get your files back, CTRL-C skips the chown-back step.
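A trap-based variant would cover both of those gaps. This is just an untested sketch along the lines of the script above, with the same PROGNAME convention:

Code: Select all

```shell
#!/bin/bash
# Sketch: chown back even on CTRL-C/termination, and propagate the
# server's exit code. Same PROGNAME convention as the script above.
PROGNAME=armagetronad

cleanup() {
    chown -R root:root "/home/${PROGNAME}"
}
# EXIT fires on normal termination and after INT/TERM, so the
# chown-back is no longer skipped by CTRL-C.
trap cleanup EXIT INT TERM

chown -R "${PROGNAME}:${PROGNAME}" "/home/${PROGNAME}"
su - "${PROGNAME}" -c "/usr/local/bin/${PROGNAME}-dedicated"
exit $?
```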

Edit: Ah, I hadn't seen your post, Light. I'd gladly have a look at your dockerfile for inspiration! And learn how you run your stuff. No, I'm not looking into a hosting solution, I just don't like relying on fixed UIDs.
User avatar
Light
Reverse Outside Corner Grinder
Posts: 1667
Joined: Thu Oct 20, 2011 2:11 pm

Re: Running a dedicated server in docker

Post by Light »

Try using a volume.

Code: Select all

sudo docker run --rm -it -v /data/docker/armagetronad/tmp/server:/home/armagetronad test-armagetronad
When I view the home directory, the contents are not owned by root.

Code: Select all

bash-5.0# cd /home/armagetronad/
bash-5.0# ls -al
total 16
drwxr-xr-x    3 armagetr armagetr      4096 Jan 12 03:51 .
drwxr-xr-x    1 root     root          4096 Jan 12 03:35 ..
drwxr-xr-x    3 armagetr armagetr      4096 Jan 12 03:51 .local
-rw-r--r--    1 armagetr armagetr        59 Jan 12 03:49 testfile
On my actual server, it looks like this:

Code: Select all

thomas@lightron:/data/docker/armagetronad/tmp/server$ ls -al
total 16
drwxr-xr-x 3 thomas thomas 4096 Jan 11 22:51 .
drwxr-xr-x 4 thomas thomas 4096 Jan 11 22:46 ..
drwxr-xr-x 3 thomas thomas 4096 Jan 11 22:51 .local
-rw-r--r-- 1 thomas thomas   59 Jan 11 22:49 testfile
Both users are 1000. If the matching user outside of the container isn't helpful, I just edit as root, but inside the container the ownership is set to 1000, so the user running the server process has access to it.
User avatar
Z-Man
God & Project Admin
Posts: 11585
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Re: Running a dedicated server in docker

Post by Z-Man »

Yeah, that works, but only if you control user IDs inside and outside the container, and also only in rootful docker (just cursory testing on my part).

However, going along the same lines: podman has a mode that docker seemingly does not have: --userns=keep-id. The effect is that user IDs inside the container are the same as on the host, AND bind mounts just keep the owners of files (the documentation does not mention that part...). So you can

Code: Select all

podman run --rm -ti -v ... --userns=keep-id -u $UID <image name>
and it will run under your user ID in the container and see the files in the mount owned by itself. The slight wart here is that you can't have a proper home directory configured inside the container, because you don't know ahead of time what the UID is going to be. But the use of the ~/.armagetronad-dedicated folder inside was bothering me a bit anyway, and now I have an excuse to get rid of it.
User avatar
delinquent
Match Winner
Posts: 760
Joined: Sat Jul 07, 2012 3:07 am

Re: Running a dedicated server in docker

Post by delinquent »

I wonder, is it worth pinging the developers behind Mailcow and asking their opinions on config persistence? They have a package designed to run a mail server over a few docker containers. I use it to host our mail for the time being.
User avatar
Z-Man
God & Project Admin
Posts: 11585
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Re: Running a dedicated server in docker

Post by Z-Man »

No need to ask, it's documented. They're using named volumes, all the way down this page. That is the correct choice if you orchestrate multiple containers and some of them need to share access to the files (dunno if Mailcow shares the content). Or if you just don't want files not controlled by the container engine on the host system.
The installation instructions say they're using rootful docker, and most of their dockerfiles do not have a USER directive, so they enter the container with root rights as well. Then a script does some chowning/chmodding (like the one some posts ago) and at the end calls 'exec "$@"', running whatever command is given from the outside. That is a neat trick! Anyway, the docs say they drop root rights for the actual service that's running, so I suppose they're passing in parameters that do that. I'm not sure why they don't have the rights-dropping machinery in the script itself; it probably interferes with the exec magic. AS YOU KNOW (says the guy who just looked up the manpage to learn it), exec replaces the current process and does not return to the shell script. That blocks the attack path where some exploit somehow appends lines to the wrapper script, which would then get executed with root rights again.
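The pattern boils down to a tiny entrypoint script; this is a sketch with an assumed user name and path, not Mailcow's actual script:

Code: Select all

```shell
#!/bin/sh
# Runs as root: fix up ownership of the mounted data first...
chown -R armagetronad:armagetronad /home/armagetronad
# ...then replace this process with whatever command the image or the
# user specified. exec never returns, so lines appended to this script
# after the fact would never run.
exec "$@"
```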

That sort of confirms it: rootful docker + volumes => by default, docker alone cannot give a non-root user inside the container write rights to the volume contents; you need the wrapper script.

I'm personally happy with podman and bind mounts; no root in sight, and I have easy control of all the files. I'm not terribly interested in orchestration; I like running my servers in screen sessions with interactive terminals. WIP documentation, will add complete command lines for the most promising cases. I'll change the launch script to use the exec trick, too, and maybe I'll detect whether the script is running as root and, if so, do chown + drop rights before running the server; that should then cover all sensible cases.
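That root detection could look roughly like this. A sketch only: setpriv is from util-linux (gosu or su-exec would work the same way, if installed in the image), and the user name and paths are the ones used earlier in the thread:

Code: Select all

```shell
#!/bin/sh
# If we are root (rootful docker): chown the data, then drop rights.
# If not (e.g. podman --userns=keep-id): just run the server directly.
if [ "$(id -u)" = "0" ]; then
    chown -R armagetronad:armagetronad /home/armagetronad
    exec setpriv --reuid armagetronad --regid armagetronad --init-groups \
        /usr/local/bin/armagetronad-dedicated "$@"
else
    exec /usr/local/bin/armagetronad-dedicated "$@"
fi
```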
User avatar
delinquent
Match Winner
Posts: 760
Joined: Sat Jul 07, 2012 3:07 am

Re: Running a dedicated server in docker

Post by delinquent »

I can't really comment on the Mailcow implementation personally, all I know is that it works across multiple volumes and appears to be relatively painless. Not really my area of expertise (did I mention I hate Docker?).

As for screen sessions, this is my preferred approach too. I currently use Nelg's implementation that relies solely on a folder containing a server config to implement it, and the single command that can be used with a flag to specify which server to start/shutdown is fantastic.

I was thinking, actually, this might be an approach for Docker-based servers: create a custom config, and use a script to build the container based on that config. Obviously, you can't change much without restarting the server, but by creating the volume on the fly you get the ability to "inject" a configuration on startup. That may or may not actually be possible; I'm just going on how Mailcow is installed.
User avatar
Z-Man
God & Project Admin
Posts: 11585
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Re: Running a dedicated server in docker

Post by Z-Man »

Yeah, baking the configuration files into the image is definitely a valid way to go about things. That way, you can also make sure you don't change the configuration of a running server by accident, which you can do if the configuration comes in as a mount.
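For the record, a minimal version of that could be a derived image; the config file name and target path here are assumptions, adjust to the actual layout:

Code: Select all

```shell
# Build a configured image on top of the stock one:
cat > Dockerfile.myserver <<'EOF'
FROM armagetronad
COPY --chown=armagetronad:armagetronad settings_custom.cfg /home/armagetronad/config/
EOF
docker build -f Dockerfile.myserver -t armagetronad-myserver .
# The baked-in config now travels with the image:
docker run --rm -ti --network=host armagetronad-myserver
```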