Here are some quick stats:
Lines of build-system code (including Makefile and Haskell code):
- old build system: 7793
- new build system: 5766 (about 2000 fewer lines, or a 26% reduction)
Furthermore, this doesn’t count the code for ‘cabal make’, which is still in Cabal but is no longer used by GHC.
Time to validate with -j2 (the default; test suite is still single-threaded):
- old: 28 mins
- new: 28 mins
Single and dual-core builds don’t see much difference. However, adding more cores starts to demonstrate the improved parallelism: validate with -j4 (still single-threaded test suite):
- old: 25.3 mins
- new: 24.0 mins
Parallelism in the new build system is a lot better. It can build libraries in parallel with each other, profiled libraries in parallel with non-profiled libraries, and even libraries in parallel with stage 2. There’s very little explicit ordering in the new build system; we only tell make about dependencies.
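To illustrate the idea (this is a hedged sketch with invented target and variable names, not GHC’s actual rules): in a non-recursive design, every rule lives in one makefile, ordering comes solely from the dependency graph, and ‘make -jN’ is free to run any targets whose prerequisites are up to date.

```make
# Hypothetical fragment: the normal and profiled ways of a library are
# separate targets with separate object lists, so make can build them
# concurrently.
libraries/base/dist/libHSbase.a : $(base_OBJS)
	$(AR) cr $@ $^

libraries/base/dist-prof/libHSbase_p.a : $(base_PROF_OBJS)
	$(AR) cr $@ $^

# The stage-2 compiler names only the libraries it actually links
# against; anything off its dependency chain proceeds in parallel.
stage2/ghc : stage2/Main.o libraries/base/dist/libHSbase.a
	$(CC) -o $@ $^
```

Because there is no explicit sequencing, adding cores simply gives make more ready targets to schedule, which is where the -j4 improvement above comes from.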
Time to do ‘make’ when the tree is fully up-to-date:
- old: 2m 41s
- new: 4.1s
Time to do ‘make distclean’:
- old: 5.7s
- new: 1.0s
We also have all-new build-system documentation.
The biggest change you’ll notice is that the build system now expresses all the dependencies, so whatever you change, you should be able to say ‘make’ to bring everything up to date. Sometimes this can result in more rebuilding than you were expecting, or more than is strictly necessary, but it should save time in the long run as we run into fewer problems caused by things being inconsistent or out-of-date in the build.
We stretched GNU make to its limits. On the whole it performed pretty well: even for a build of this size, the time and memory consumed by make itself is negligible. The most annoying problem we encountered was the need to split the build into phases to work around GNU make’s lack of support for dependencies between included makefiles.
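The phase workaround can be sketched roughly as follows (file and target names here are invented for illustration): because GNU make cannot express that one included makefile must be generated before another included makefile is read, the top-level Makefile re-invokes make in stages, so that each round of generated fragments exists on disk before the makefiles depending on them are parsed.

```make
# Hedged sketch of the phase idiom, not the literal GHC Makefile.
.PHONY: all
all :
	# Phase 0: build just enough to generate the per-package .mk
	# fragments that later phases will 'include'.
	$(MAKE) -f build.mk phase=0 just-makefiles
	# Final phase: all included fragments now exist, so make sees
	# the complete dependency graph and can do the real build.
	$(MAKE) -f build.mk phase=final all
```

Within build.mk, ‘include’ directives for not-yet-generated fragments are guarded by the phase variable, so an early phase never trips over a missing file.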
On the whole I’m now convinced that non-recursive make is not only useful but practical, provided you stick to some clear idioms in your build-system design.