The previous build system was error prone: it required manual triggering and a build machine configured just right (with a 32-bit chroot environment and libzthread compiled by hand in just the right way). It was also external: some bzr branch that would, on request, fetch the most recent versions of the main source module, winlibs and the code::blocks project files, and then use three layers of mutually including makefiles to wrangle that into our builds. That meant you could not easily go back and build old sources, because you might get the wrong project files. And if something failed and you reran make, it might remake all of the output and re-upload the builds, which broke zeroinstall's checksums.
The new system still uses make to chain together individual build steps. Building the source tarball is a make target; so is unpacking it to get a clean source base; so is making a Linux 32-bit server build. The structure is more robust: it relies on tag files for the timestamps and only re-runs steps if you explicitly run one of the clean targets. The principle used all over the place is the same as in any grown CI system: files are copied together into a directory (called context.*), that directory is copied into a docker container, docker runs a script in it, and the resulting files are copied out (into result.*). Later steps pick their input from earlier steps. That makes it all easily testable locally without having to re-run earlier steps. If you have ever debugged the later steps of a CI system, you know how valuable that can be.
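To make the pattern concrete, a step might look roughly like this in the makefile. This is a minimal sketch, not the real thing: the target, directory, script and image names are all made up for illustration (recipe lines must be tab-indented):

```make
# Hypothetical excerpt. Each step assembles a context.* directory,
# runs a script on it inside docker, and collects result.* files.
# The tag file records completion, so make skips the step next time
# unless you run the corresponding clean target.

build-linux32.tag: sources.tag
	rm -rf context.linux32 result.linux32
	mkdir context.linux32 result.linux32
	cp -r result.sources/* build-linux32.sh context.linux32/
	docker run --rm \
	    -v "$$PWD/context.linux32:/input:ro" \
	    -v "$$PWD/result.linux32:/output" \
	    our-build-image:linux32 /input/build-linux32.sh
	touch $@

clean-linux32:
	rm -rf context.linux32 result.linux32 build-linux32.tag
```

Because each step only reads its context.* directory and writes to its result.* directory, you can re-run just that one step locally against the saved output of earlier steps.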
Instructions on how to use the system locally are in docker/Readme.md.
The individual builds no longer run in your environment, so there is no need to prepare anything special. They run in docker containers that are pre-built and downloaded on demand. The build scripts for these are in docker/images. Most are straightforward Dockerfiles that can be built by rootless docker. Two exceptions: the Steam runtime somehow does not build in rootless docker, so that falls back to 'sudo docker'. And the code::blocks installation does not work without user interaction, so to build that image you need to run it in docker with access to your real X server, so you can click 'next' a bunch of times. Luckily, someone else already did all the hard work to make that happen.
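Sharing the host X server with a container is a standard trick for exactly this kind of interactive installer. A sketch of how that invocation could look; the image name and installer path here are assumptions, not what our scripts actually use:

```shell
# Sketch only: image name and installer path are made up.
# Pass the host DISPLAY and share the X socket so the installer
# running inside the container can open windows on your desktop.
docker run -it --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    codeblocks-base /opt/install-codeblocks.sh
# Depending on your X setup, you may also need to allow local
# connections first (and revoke that again afterwards):
#   xhost +local:
```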
Currently, the images are based on Ubuntu 16.04; should the need arise, it's super easy to just upgrade, rebuild and reupload the images and carry on.
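The images themselves are nothing exotic. As a sketch of the shape (the real Dockerfiles are in docker/images; the package list and script name below are illustrative, not taken from them):

```dockerfile
# Illustrative only: see docker/images for the real build scripts.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git \
    && rm -rf /var/lib/apt/lists/*
COPY build.sh /usr/local/bin/build.sh
CMD ["/usr/local/bin/build.sh"]
```

Since everything version-specific lives in the FROM line and the package list, bumping the base image really is just an edit, rebuild and re-upload.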
Windows builds are especially silly

There are also make targets to deploy the resulting build to our usual places: SF uploads, Launchpad file releases, zeroinstall and of course Steam. Oh, and there is a new download page. The source for that one is here. The build script just checks in new files for each release and updates the 'current' pointers. The website is based on Jekyll, the static site generator used by GitHub Pages. It's nice! The site generator, that is, not the site. That one still needs work.
(continued, too many URIs)