I'll get on it, definitely a blocker. Maybe I'll just tweak the default directories as a temporary fix. Sorry for the trouble.
About me doing more testing: sorry, no, and that's the most polite answer I can give. You'd only get a rant about how much time a release already takes with the minimal testing that I do (the better part of a weekend day), about how testing is not the developer's job because they're known to do it poorly, and about how the bug was already in three alphas without anyone reporting it.
Well, I'm already doing testing for the client version, but this one was the server version, so...
Btw, if I changed a path in a script/program I was writing, I would look at all the places that use the path info to make damn sure there are no accidents like this one.
What happens next? Accidental rm -rf / during make uninstall?
The usual procedure is
1) Fix the bug
2) Find out why it happened
3) Find out why it wasn't detected earlier
4) Find out how to make sure it won't happen again.
Let's stick to that. I'm not done with 1) yet.
Your statement implies you'd be a better script maintainer than me; that, irrespective of its truth value, gets you dangerously close to being recruited.
If it calms you down, the rm -f case would have been caught in the tests that did get done.
Luke: Some of the workspaces I run cvstest on, which does a test installation, have custom directories set. All of them use prefix=~/usr. I don't test all possible combinations all of the time, but I did test quite a lot of them when you added the extra directory option.
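Roughly, the sweep over those combinations looks like this (a simplified sketch, not the actual cvstest setup; the option names and the way they're passed are placeholders):

```python
#!/usr/bin/env python3
# Simplified sketch of sweeping cvstest over directory-option combinations;
# the option names and calling convention are placeholders, not the real ones.
import itertools
import os
import subprocess

PREFIX = os.path.expanduser("~/usr")

# Each entry is one way of setting an extra directory; None means "leave the default".
VAR_DIR_CHOICES = [None, os.path.join(PREFIX, "var")]
RUN_DIR_CHOICES = [None, os.path.join(PREFIX, "var", "run")]

for var_dir, run_dir in itertools.product(VAR_DIR_CHOICES, RUN_DIR_CHOICES):
    args = ["cvstest", f"prefix={PREFIX}"]
    if var_dir is not None:
        args.append(f"vardir={var_dir}")
    if run_dir is not None:
        args.append(f"rundir={run_dir}")
    print("running:", " ".join(args))
    subprocess.run(args, check=True)  # cvstest does the install/run/uninstall itself
```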
Ok, 1) is done, rc2 with the fix is rolling, time for the analysis.
The bug happened in the first place because my left hand didn't know what my right hand was doing. Specifically, my left hand changed the default for the var and run directories, but didn't remember my right hand had installed the chowning command some years ago.
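To spell the mechanism out, a toy illustration in Python (not the real install script; the paths, names and defaults are made up, and nothing here touches the filesystem):

```python
"""Toy illustration of the failure mode; all names and paths are made up."""

# The chowning step, added years ago: hand the run directory over to the
# server user so the daemon can write its pid file there.
def post_install_chown(rundir: str, owner: str) -> str:
    # The real script would execute something along the lines of
    # "chown <owner> <rundir>"; here we only build the command string.
    return f"chown {owner} {rundir}"

# The recent change: the default moved from a prefix-relative path ...
old_default_rundir = "$(prefix)/var/run/game"
# ... to something that can resolve to a shared system directory.
new_default_rundir = "/var/run"

if __name__ == "__main__":
    # With the old default the chown only ever touched the game's own tree;
    # with the new one it reowns a directory the rest of the system relies on.
    print(post_install_chown(old_default_rundir, "gameserver"))
    print(post_install_chown(new_default_rundir, "gameserver"))
```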
Now, why didn't this get caught by the regular tests? First, the most frequently run test is cvscheck. It does a test installation into a local directory, checks if it runs, uninstalls it, and checks whether everything is gone. Two limitations of this test prevented it from catching the error. One, it is run as a normal user, so it can't su to another user for running the server and therefore never does the chowning. Two, it starts with an empty directory and ends with an empty directory; it doesn't test whether a system, had it been installed there, would have taken damage.
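The heart of that check is roughly the following (a simplified sketch, not the real cvscheck; the make targets and variable names are assumptions):

```python
#!/usr/bin/env python3
# Rough sketch of a cvscheck-style roundtrip; make targets and variable
# names are assumptions, and the "does it run" step is omitted for brevity.
import subprocess
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    prefix = Path(tmp) / "install"
    prefix.mkdir()                           # starts out empty (limitation two)

    # Install and uninstall as a normal user (limitation one: no su,
    # so the chowning code path is never exercised).
    subprocess.run(["make", "install", f"prefix={prefix}"], check=True)
    subprocess.run(["make", "uninstall", f"prefix={prefix}"], check=True)

    # "Is everything gone?" only looks inside the test prefix, so it cannot
    # notice damage done to files that live elsewhere on the system.
    leftovers = sorted(prefix.rglob("*"))
    if leftovers:
        raise SystemExit(f"uninstall left files behind: {leftovers}")
    print("roundtrip clean (within the test prefix, at least)")
```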
Then, before I upload a release build (not the alphas), I do a full test run of both the server and the client on various clean Unix installations. I have them as virtual machines in VMWare, with snapshots prepared to take an installation. As superuser, I install client and server, run the server directly from the command line and via the start script, start the client, play a round or two, then I uninstall everything and repeat the process as a normal user, installing everything into my home directory. Then, to save time, I throw away the virtual machine by reverting to the pre-installation snapshot. That last bit of sloppiness of course means that any mess-up that would prevent the system from booting stays undetected. But, knowing now that this might have been a useful test, I repeated it with rc2, with a negative result: the system (Ubuntu) booted cleanly without even a warning message, and I suspect the other Linuxes would have reacted likewise.
So, what test would have caught the bug? I know of none that would not have required me to precisely anticipate this particular bug.
Which leaves the question of how to prevent these things from happening again pretty wide open. Is there perhaps a system that can help with testing whether an install/run/uninstall cycle makes changes to an already installed system? A real system copy and tripwire would do, but I'd rather have something that promises a faster test, roughly on the timescale of cvscheck (about one minute), something that doesn't eat half of my hard disk, and ideally something a regular user can run. I'm open to suggestions.
How about making a file which contains EVERY single file in the system, including permissions and owner/group, before the installation, then making one after the installation and comparing them?
The tripwire approach. That would work only if there were no other active programs around, would require a real system and root privileges, and would take quite a lot of time.
And the system would need to be a fresh one every time, because once a bug has made some unnoticed but dangerous modification that slipped through, or made it during an unchecked installation, it won't show up as a change in the next test.
So all in all, zero of my three wishes fulfilled. I don't think this will be practical. Hmm, on the other hand, the test could run overnight with little trouble...
Any good way to generate such a system fingerprint?
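Something along these lines is roughly what I have in mind, unless somebody knows an existing tool (an untested sketch; it records only metadata, no content hashes, to keep it fast):

```python
#!/usr/bin/env python3
# Untested sketch of a metadata fingerprint: for every file record path,
# permission bits, owner, group, size and mtime, one line each, so that two
# dumps can be compared with a plain diff. No content hashing, to keep it fast.
import os
import stat
import sys

def fingerprint(root, out):
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue                      # vanished or unreadable; skip
            out.write("%s\t%s\t%d\t%d\t%d\t%d\n" % (
                path, stat.filemode(st.st_mode),
                st.st_uid, st.st_gid, st.st_size, int(st.st_mtime)))

if __name__ == "__main__":
    # Usage: fingerprint.py / > before.txt, run again afterwards, then diff.
    fingerprint(sys.argv[1] if len(sys.argv) > 1 else "/", sys.stdout)
```

Run once before and once after the install/run/uninstall cycle and diff the two dumps; the noise from logs and other active programs is of course still there.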