New build process based on Docker, make and GitLab CI

Post by Z-Man »

Advance Warning: The first two posts are mainly for devs. The executive summary is that we now have a fully automated build system that only requires minimal developer action to deliver Windows and Linux builds.

The previous build system required manual triggering and a build machine configured just right (with a 32 bit chroot environment and libzthread manually compiled in just the right way), and it was error-prone. It was external: a bzr branch that would, on request, fetch the most recent versions of the main source module, winlibs and the code::blocks project files, then use three layers of mutually including makefiles to wrangle that into our builds. That meant you could not easily go back and do builds for old sources, because that might give you the wrong project files. If something failed and you reran make, it might remake all of the output and reupload builds, which would break zeroinstall's checksums.

The new system still uses make to chain together individual build steps. Building the source tarball is a make target; so is unpacking it to get a good source base, and so is making a Linux 32 bit server build. The structure is more robust: it relies on tag files for the timestamps and only re-runs steps if you explicitly do one of the cleans. The principle used all over the place is the same as for any mature CI system: files are copied together into a directory (called context.*), which is copied into a docker container; docker runs a script in it, and the resulting files are copied out (into result.*). Later steps pick their input from earlier steps. That makes it all easily testable locally without having to re-run earlier steps. If you have ever debugged a CI system's later steps, you probably know how valuable that can be.
Instructions on how to use the system locally are in docker/Readme.md.
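By hand, one step looks roughly like this (a sketch only: whether the real makefiles copy or mount the directories, the shape is the same; the image tag and script name are invented):

    # Stage inputs, run the container, harvest the results. Directory names
    # follow the context.*/result.* convention described above.
    mkdir -p context.linux32 result.linux32
    cp -r source/ context.linux32/                  # inputs from earlier steps
    docker run --rm \
        -v "$PWD/context.linux32:/context:ro" \
        -v "$PWD/result.linux32:/result" \
        armabuild/linux32 sh /context/build.sh      # the script fills /result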

The individual builds no longer run in your environment, so there is no need to prepare anything specific. They run in docker containers that are pre-built and downloaded on demand. The build scripts for these are in docker/images. Most are straightforward Dockerfiles that can be built by rootless docker. Two exceptions: the Steam runtime somehow does not build in rootless docker, so that falls back to 'sudo docker'. And the code::blocks installation does not work without user interaction, so to build that image, you need to run it in docker with access to your real X server so you can click 'next' a bunch of times. Luckily, someone else already did all the hard work to make that happen.
Currently, the images are based on Ubuntu 16.04; should the need arise, it's super easy to just upgrade, rebuild and reupload the images and carry on.
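Building the images locally looks roughly like this (the image tags and per-image directory names under docker/images are placeholders):

    # Most images: plain Dockerfiles, fine under rootless docker.
    docker build -t armabuild/ubuntu16 docker/images/ubuntu16

    # Steam runtime: does not build rootless, so fall back to real root.
    sudo docker build -t armabuild/steamrt docker/images/steamrt

    # code::blocks: the installer wants clicks, so give the container the
    # host's X server and click 'next' a few times.
    docker run -it --rm -e DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
        armabuild/wine-base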
Windows builds are especially silly :) They use the same tools as native Windows builds, code::blocks and mingw-gcc, where mingw is already a minimal unixy wrapper. That runs in wine. Wine runs inside a docker container. For the fully automated builds, that docker container either runs inside or is controlled by another docker container. All of that runs in a virtual machine (at home) or on a VServer. Count the layers of indirection.
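The innermost Windows step then boils down to something like this (a sketch; the image tag and project file path are invented, the flags are code::blocks' documented command line interface):

    # Run code::blocks under wine, inside the build container, against the
    # mingw project file.
    docker run --rm -v "$PWD:/work" armabuild/wine-codeblocks \
        wine codeblocks.exe --build /work/win32/armagetronad.cbp --target=Release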

There are also make targets to deploy the resulting build to our usual places: SF uploads, Launchpad file releases, zeroinstall and of course Steam. Oh, and there is a new download page. The source for that one is here. The build script just checks new files in for each release and modifies the current pointers. The website is based on Jekyll, the static site generator used by GitHub Pages. It's nice! The site generator, not the site. The site still needs work.

(continued, too many URIs)

Re: New build process based on Docker, make and GitLab CI

Post by Z-Man »

All that would still require manual actions, but that's where the GitLab CI pipeline comes in. It's defined in a handy YAML file. Because the makefile already does all of the complicated work, the YAML file only needs to say which make target to build in each step and which files to take over to the next step.
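A job therefore stays short; a minimal sketch of the shape (stage, job and target names here are illustrative, not copied from the real file):

    stages:
      - prepare
      - build
      - collect

    linux_32_server:
      stage: build
      script:
        - make -C docker/build linux32_server   # the makefile does the real work
      artifacts:
        paths:
          - docker/build/result.*               # files handed to later stages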
A complete pipeline run can be seen here. The individual phases are:

1. Preparation. Here, we run bootstrap.sh, create the main build directory, configure a build in there with the desired rebranding, and do the first, simple make invocation in docker/build.

2. Basic build. Here we build servers for 32 bit Linux and Steam for Linux. They get an extra phase because the Windows sources require a completed Linux build for versioning, sorted resources and whatever else the Makefiles do best. And the Windows builds are what takes longest in the next phase, so it makes sense to give them a head start.

3. Main build. The other configurations are handled here. GitLab respects the order of jobs you give them in the YAML file, so here we can prioritize the slower Windows builds, followed by a Debian test build (never used, but it catches errors before we deploy Debian source packages to the PPA), then the rest. Each job here just keeps the make tags and finished builds to be used in

4. The collector. All the previous builds are collected here and make is invoked again with a top-level target, so small missing jobs can be completed, and we notice when big jobs have not been done or were configured incorrectly, because they then run here (again).

5. Staging. Here, we deploy the builds to places where they are temporarily visible and testable. For continuous builds, uploads go to http://download.armagetronad.org/staging/ instead of LP and SF; the Zero Install streams are updated on the website, but not committed, so the next build will just overwrite the current one. The stability of the Zero Install version is set to something lower, so you need to opt in to test these preliminary uploads.

For full releases and release candidates, in this phase we also deploy to a staging PPA, because those builds are un-rebranded and there would be no way to put them into the main PPA without overriding the previous real release.

There is also a pack job here that just takes all of the builds and puts them into its slightly longer-lived artifact (that's CI lingo for 'output files'), in case deployment fails but we still want to test the build output files.

The last job here is the delay job. It does nothing, but for continuous builds from legacy_*, beta_* or master, it simply waits six hours. That serves two purposes: it gives us a chance to stop deployment if we find something went wrong, and it reduces the number of final deployments. Final deployment has a check that bails out if there is another build already in the pipeline. So if we work real hard one day and make 20 commits and pushes in a couple of hours, that still results in only one deployed build, hopefully without any of the bugs introduced earlier that were quickly noticed and fixed. The delay job can also be triggered manually, in case you want to accelerate deployment.
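That bail-out check only needs one API call; a sketch (the CI_* variables are GitLab's predefined ones, the jq filter is illustrative):

    # Skip final deployment if a newer pipeline already runs on this branch.
    newer=$(curl -s "https://gitlab.com/api/v4/projects/$CI_PROJECT_ID/pipelines?ref=$CI_COMMIT_REF_NAME" \
        | jq --argjson id "$CI_PIPELINE_ID" '[.[] | select(.id > $id)] | length')
    if [ "$newer" -gt 0 ]; then
        echo "Newer build already in the pipeline, bailing out."
        exit 0
    fi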

For full releases, the delay is set to manual and must be triggered over the web interface, preferably after testing what was uploaded in the staging job.

6. Deployment. Here we upload the files to SF and LP for real, push the Debian source packages to the normal PPA, point the Zero Install feeds to the LP storage and commit the changes.

Deployment requires secrets, of course. GitLab CI has a way to pass those to builds in a protected way, but I just set up a custom runner that has them as a mounted volume, from where they are picked up. They're only visible during the deployment phase. All the build jobs run secret-less.

So, how do I trigger an alpha build?
Push to legacy_* or master.
And how do I trigger a beta build?
Push to beta_*. That should either be a hotfix to a bug on that branch or a ff-merge from legacy_* or master.
How about a release candidate?
Push to release_*. Again, either a hotfix or a ff-merge from beta_*.
And a release?
Tag something in release_*, normally something already tested as a release candidate. Test the staged output once more. Execute the delay job. And there's one more bit of web work to do in the Steam admin interface; you can't auto-deploy to the main branch there.
Apart from the Steam job, unless someone comes up with a way to do things telepathically, it could not be made less work. It's one action per decision.
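In command form, with made-up branch and tag names matching the patterns above:

    git push origin master           # alpha build
    git push origin beta_0.2.9       # beta build
    git push origin release_0.2.9    # release candidate
    git tag v0.2.9.0 && git push origin v0.2.9.0   # full release; then trigger
                                                   # the delay job on the web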

The goal here was not to save work, though; automating stuff rarely does. I probably spent more time on it than on the old system, creating and using it combined. The purpose is to make builds trivial, so we are free to do as many of them as we like. We can increase the frequency of major releases. We don't have to have a situation again where a 'next release, really soon, promise' branch goes on forever. We don't have to agonize over whether to put that one feature into this release; it would no longer be important. If it doesn't go into this release, next month there will be a new one. Releases no longer have to be perfect, as they did when there were so few; each release just needs to be better than the last.

Thinking in the other direction, maintaining long term support branches such as 0.2.8.3 also gets a whole lot easier. Remember that the docker images used for the builds are frozen. They'll be fetched when needed and will just work, even in five years. I ported the system back to 0.2.8.3 (in fact, developed half of it there), so now we could release 0.2.8.3.6 any day. I used to dread that. 0.2.8.3.5 took a week to complete, and it wasn't clear whether it would work at all.

The goal also is not to make us slaves to the CI and ensure that all revisions build. For most purposes, it's enough that we notice when some build breaks.

So, long live the CI/CD system!

Re: New build process based on Docker, make and GitLab CI

Post by sinewav »

With this new build process, is Armagetron positioned to be distributed as a Flatpak/AppImage/Snap? I have quite a few programs on my machine that I run as AppImage or Flatpak. I know one of the major complaints about these formats is the huge overhead in disk space and RAM/CPU use, but it is nice to download a new version of an app without compiling from source.

Re: New build process based on Docker, make and GitLab CI

Post by Z-Man »

AppImage: Yes, we're already building those :) They're not registered at the Hub, though. That's largely a question of getting the contained metadata right. I'll look into that some time. Then, all you need to do is put the releases at a *fixed* URL without a version number, and the Hub picks up changes. What bothers me a bit there is that when you read the description of the AppImage ecosystem, a lot of it is "here is how it could work"... but the Hub apparently already is working, that's something.

FlatPak: The build process there is special, because FlatPaks are built against a fixed set of libraries, not completely unlike Steam. You have to write some manifest files like for Debian packages, and the build happens in some controlled way... I only skimmed the docs. It should be possible and I consider it worthwhile, but it's more work.
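For reference, the general shape of such a manifest (untested, straight from skimming those same docs; the app id, runtime version and module details are guesses):

    app-id: org.armagetronad.ArmagetronAdvanced
    runtime: org.freedesktop.Platform
    runtime-version: '19.08'
    sdk: org.freedesktop.Sdk
    command: armagetronad
    modules:
      - name: armagetronad
        buildsystem: autotools
        sources:
          - type: archive
            path: armagetronad.tar.gz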

Snap: Also possible, but I won't do it myself. I object to
- Snaps getting mounted permanently, spamming the output of df (yes, you can define an alias that filters them out, but still)
- Snaps being used by Ubuntu itself for things that have traditionally been .debs
- There only being one possible Snap repository (also a problem with Steam and, I think, itch.io, but they're not Linux distributions and get away with it more easily with me)
Also, whenever there have been two competing things, one backed by Canonical/Ubuntu and one by someone else, the Canonical one eventually lost and got dropped or was left to die quietly.
Patches would be welcome, though.

Re: New build process based on Docker, make and GitLab CI

Post by GluGGsel »

Dummy here: I tried to read it all, and mostly did. Even though I didn't understand like >80% of it, it looks and sounds like you put a serious amount of time and effort into that, Z-Man. Thank you very much for that! I (and I'm pretty sure the rest of the community too) appreciate it a lot!!
GluGGsel greets :goatee:

Re: New build process based on Docker, make and GitLab CI

Post by Z-Man »

Heh, I probably should have added a warning upfront that it gets pretty technical :) Done now.

Forgotten bits: patch notes and NEWS file generation is also automated; they are built from the GitLab issue tracker. A small Python script checks which issues have been mentioned in commits and are now closed, assigns them to releases based on which tags they fall between, fetches the metadata from GitLab (using its super-easy JSON API), sorts features from bugs and compiles a list.
Because the issue tracker is external data, the result is only uploaded as separate files and not included in the tarball. Including it would violate the principle that a repeated build from the same source should yield the same result. That principle doesn't completely hold right now anyway, I don't think; at the least, file timestamps would probably differ. There are techniques to get 100% reproducible builds; I'm not sure whether it's worth the effort.
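The API really is that easy; the kind of request the script builds on looks like this (the project path and the jq formatting are illustrative):

    # Fetch closed issues and print id, labels and title, one per line.
    curl -s "https://gitlab.com/api/v4/projects/armagetronad%2Farmagetronad/issues?state=closed&per_page=100" \
        | jq -r '.[] | "\(.iid)\t\(.labels | join(","))\t\(.title)"'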

What has fallen flat: Mac builds. They're not impossible; we talked about the cross compilation from Linux approach; that could be integrated. I'm not too keen on using cross compilation for releases, though; how do you debug such a build? An alternative would be to have a native Mac runner somewhere registered with the project. Either way, the build scripts would need to be written, and I completely do not know how.

Re: New build process based on Docker, make and GitLab CI

Post by aP|Nelg »

Z-Man wrote: Fri Jun 19, 2020 11:04 pm What has fallen flat: Mac builds. They're not impossible; we talked about the cross compilation from Linux approach; that could be integrated. I'm not too keen on using cross compilation for releases, though; how do you debug such a build? An alternative would be to have a native Mac runner somewhere registered with the project. Either way, the build scripts would need to be written, and I completely do not know how.
I attempted to build Armagetron on Mac OS (I think High Sierra) not too long ago, trying to decipher the LittleSteps support forum threads. From what I remember, I tried to build dlh's os-x toolkit but got some errors from clang (was it? I'd have to check; it wasn't the toolkit itself, it was its deps, and it wasn't Armagetron either), even across multiple different versions of the Xcode command line tools. I got frustrated, dropped it, and forgot about it for a while until now... Maybe I'll revisit it later today. Maybe I should start a separate support thread with the errors. Hm.

Anyway, I can think of a few actual "native Mac runners" around who play this game, and I know at least one of them experiments with developing stuff. I don't think he has ever done anything with the Armagetron source code, though.

Thinking about it some more, I'm not quite sure if "native Mac runner" is referring to a build process or a person. Oh well.

Re: New build process based on Docker, make and GitLab CI

Post by Z-Man »

aP|Nelg wrote: Sat Jun 20, 2020 9:34 am Thinking about it some more, I'm not quite sure if "native Mac runner" is referring to a build process or a person. Oh well.
"Runner" is the GitLab term for a build server :) Judging from the docs, they're as easy to set up on a Mac as on Linux.
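Registering one is a single command there, too (the token and description are placeholders):

    gitlab-runner register --non-interactive \
        --url https://gitlab.com/ \
        --registration-token TOKEN \
        --executor shell \
        --description "mac-builder"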