It was added back in
5147d5c69f
for unclear reasons (and the lack of a bug number doesn't help), and as far
as I can see in the gecko-dev history, it has never been used other than in
bug 206029, which is the only use currently in the tree.
Bug 206029 was working around the Flash player installer modifying Firefox's
prefs file and failing to handle it properly (or something along those lines)
depending on the line endings. 11 years later, all prefs files except
channel-prefs.js are in omni.ja, so bug 206029 doesn't actually apply anymore.
So, let's simplify it all and get rid of this.
Compressing C++ unit tests is a long pole when writing test archives.
Experimenting with various levels of compression revealed that
compression level 9 was providing minimal space savings for
significantly longer archiving times and greater CPU usage.
Results of our experiments with `make -sj8 package-tests` on OS X
with various levels of compression are below. Note: these numbers were
accidentally obtained without JS tests being archived. This skews the
results a little but doesn't impact the analysis below.
ARCHIVE       SIZE          WALL    CPU
(L=9)
cppunittest    76,806,629   30.6s
mochitest      61,276,928    9.4s
reftest        31,204,396   11.0s
ALL           228,146,761   31.2s   75.9s
(L=8)
cppunittest    76,851,593   24.1s
mochitest      61,279,322    8.9s
reftest        31,207,867   10.4s
ALL           228,228,096   24.9s   64.7s
(L=7)
cppunittest    77,102,292   14.3s
mochitest      61,305,147    8.2s
reftest        31,260,359    9.4s
ALL           228,717,803   15.0s   49.1s
(L=6)
cppunittest    77,321,408   11.5s
mochitest      61,336,539    8.2s
reftest        31,303,604    9.2s
ALL           229,123,307   12.2s   44.7s
(L=5)
cppunittest    78,226,404    8.2s
mochitest      61,483,804    7.6s
reftest        31,509,349    8.8s
ALL           230,725,600    9.6s   39.7s
(L=4)
cppunittest    79,733,669    6.3s
mochitest      61,825,519    7.6s
reftest        31,924,171    8.4s
ALL           233,669,991    9.0s   36.4s
(L=3)
cppunittest    82,380,731    5.8s
mochitest      62,554,431    7.1s
reftest        32,696,415    8.1s
ALL           239,180,168    8.9s   34.6s
Levels lower than 3 resulted in larger archives with no decrease in
wall time and only a marginal decrease in CPU time.
As we can see, lowering the compression level reduces archiving time by
>3x while only increasing total archive size by ~2.5 MB or ~1% for
compression level 5.
Total time hits a plateau around levels 4 and 5. After that, file size
increases faster for little decrease in wall time. I suspect we're
hitting Python limits from having to process thousands of files: Python
can only do I/O and make function calls so fast.
I think either 4 or 5 is acceptable for the new compression level.
I went with 5 because the wall time savings from 5 to 4 are marginal and
the archive size does start to increase a bit faster at 4. That said,
4 does consume 10% less CPU, so I could easily justify 4 as well; 5 is
simply more conservative. We can always change to 4 after seeing results
in the wild.
The end result of this change is `make package-tests` is much faster:
Before: 228,146,761 bytes; 31.2s wall; 75.9s CPU
After: 230,725,600 bytes; 11.4s wall; 45.0s CPU
Delta: +2,578,839 bytes; -19.8s wall; -30.9s CPU
When you take the whole series into consideration:
Before: 44.2s wall; 84.6s CPU
After: 11.4s wall; 45.0s CPU
Lowering CPU is impressive considering we switched from the C `zip`
implementation to Python!
Keep in mind we were at ~78s wall before e87b74b3db43 introduced
concurrent archive generation!
And we still haven't eliminated the staging of JS tests, which are
several thousand files and a few dozen MB!
An upcoming commit will introduce a caller that doesn't want the maximum
compression level. This commit introduces arguments to control the
compression level inside written archives.
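A minimal sketch of the kind of knob being added, using the stdlib zipfile
module rather than the in-tree mozpack JAR writer (whose parameter name may
differ); note `compresslevel` needs Python 3.7+:

    import zipfile

    def write_test_archive(out_path, members, compress_level=9):
        # `members` is an iterable of (archive_name, source_path) pairs.
        # Level 9 matches the old default; the new caller passes a lower
        # level to trade a little size for a lot of wall/CPU time.
        with zipfile.ZipFile(out_path, 'w', zipfile.ZIP_DEFLATED,
                             compresslevel=compress_level) as zf:
            for name, path in members:
                zf.write(path, arcname=name)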
Metrics are nice. Adding this output clearly demonstrates that C++ unit
tests are the long pole by far: they take ~95% of wall execution time
to archive (~30s total). The next longest archive only takes ~11s to
produce. This will be important if we ever want to reduce archive time
further on optimal hardware.
FWIW, disabling compression will produce a C++ unit test archive in
1.0s. Archives with more files take longer, despite the significantly
smaller sizes.
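The output itself is simple. A hedged sketch of the per-archive timing
being printed (the function name and message format here are illustrative):

    import time

    def write_archive_with_metrics(name, write_archive):
        start = time.time()
        write_archive()
        duration = time.time() - start
        print('wrote %s in %.1fs' % (name, duration))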
This won't impact performance much. But fewer `make` targets make porting
the C++ unit tests (which are the largest remaining tests) to the Python
archiver easier to grok.
This conversion did change behavior slightly. Previously, startup
cache files weren't being packaged if startup cache was disabled. Now,
we always package them since their presence in the test archive should
be harmless. The original change to guard their inclusion in
ee82e0ae5488 was probably unnecessary.
This is slightly more involved than earlier changes because reftests
have a one-off mechanism for finding files. Essentially, the master
reftest manifest is loaded, directories are discovered, and every file
in those directories is packaged.
We add support to our test archive generation tool to read sources from
reftest manifests and tell it where the reftest manifests are.
print-manifest-dirs.py was only being used for staging reftest files.
Since we don't do that any more, the functionality doesn't need to exist
in a standalone file, so it has been moved inline into test_archive.py.
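Roughly, the discovery walks `include` directives from the master manifest
and collects each manifest's directory; every file under those directories
gets packaged. A hedged sketch of that mechanism (the in-tree version in
test_archive.py differs in details):

    import os

    def reftest_dirs(manifest_path, seen=None):
        # Yield directories referenced by a reftest manifest, following
        # `include` directives recursively.
        if seen is None:
            seen = set()
        manifest_path = os.path.abspath(manifest_path)
        if manifest_path in seen:
            return
        seen.add(manifest_path)
        base = os.path.dirname(manifest_path)
        yield base
        with open(manifest_path) as fh:
            for line in fh:
                line = line.split('#', 1)[0].strip()
                if line.startswith('include '):
                    included = os.path.join(base, line.split(None, 1)[1])
                    for d in reftest_dirs(included, seen):
                        yield d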
This change avoids copying ~26,000 tests consuming 131 MB during test
packaging. This is a majority of the file count that was remaining in
the stage directory at this point. On my machine (which hasn't typically
seen major wall time wins from not staging files due to its fast SSD),
this change made test packaging ~20% faster, reducing wall time from
~50s to ~40s!
A Try push seemed to indicate drastic results with the series up to this
point. Including the already landed changes to generate test archives
concurrently, test packaging times on OS X builders dropped from ~18:40
to 6:29! Times on Linux x64 remained about the same (~2:46). This is
possibly due to these machines already having SSDs and due to normal
variance in performance of builders and EC2 instances.
With this change, all test ZIP archives are now generated via Python and
mozpack.
This change does not alter I/O or file copy behavior at all. There is
still a lot of room for eliminating extra file copies.
The web-platform test archive now builds without any staging at all.
This saves ~103 MB of file copies on my machine.
The testing/web-platform/Makefile.in serves no purpose after this
change, so it and all references to it have been removed.
This is very similar to what we did for xpcshell. Like xpcshell, there
are still some staged files. However, about 73MB of copies are
eliminated with this change. On my machine, overall execution time of
test packaging appears to decrease, although CPU usage is up slightly.
This commit produces the xpcshell test archive without staging 5000+
xpcshell test files first.
We teach the archiver to ignore .mkdir.done files.
The xpcshell Makefile.in still stages some files. This is less than
ideal. However, it is a small handful of files and shouldn't add too
much overhead.
This appears to not impact overall CPU usage significantly on my
machine, despite using Python instead of `zip`. It does reduce I/O
by ~25MB by avoiding the staging copy.
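A minimal sketch of the ignore rule (the real filtering happens in the
mozpack-based archiver):

    import os

    def iter_test_files(root):
        for dirpath, dirnames, filenames in os.walk(root):
            for f in filenames:
                # .mkdir.done files are build system sentinels marking
                # that a directory was created; they aren't test files.
                if f == '.mkdir.done':
                    continue
                yield os.path.join(dirpath, f)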
Test archive generation currently copies a bunch of files into a staging
area and then runs `zip` to produce ZIP files. There are 2 concerns with
this approach:
1) We incur a lot of extra I/O copying files so that everything is
   rooted in a single tree, which keeps the `zip` invocation and its
   paths simple.
2) ZIP files inherit properties from the local filesystem (including
mtime), making ZIP files non-deterministic.
This commit introduces a new mozbuild action for producing test
archives. It does so using the mozpack file finder and JAR writer,
which are used throughout the build to deterministically
produce ZIP/JAR files from files in multiple source directories.
We implement support for producing the mozharness archive. This archive
does not involve files that are staged, so no I/O is saved. In fact,
the switch from `zip` to Python likely makes this slightly slower.
However, we do have deterministic archives now.
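To illustrate the determinism point with the stdlib (mozpack's JAR writer
handles this for us in-tree): pin each entry's metadata and iterate in a
stable order so the output bytes don't depend on local mtimes, permissions,
or directory order. A sketch, not the actual implementation:

    import zipfile

    FIXED_DATE = (2010, 1, 1, 0, 0, 0)  # any constant >= 1980 works

    def write_deterministic(out_path, members):
        # `members` maps archive names to bytes.
        with zipfile.ZipFile(out_path, 'w') as zf:
            for name in sorted(members):  # stable entry order
                info = zipfile.ZipInfo(name, date_time=FIXED_DATE)
                info.external_attr = 0o644 << 16  # fixed permissions
                info.compress_type = zipfile.ZIP_DEFLATED
                zf.writestr(info, members[name])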
Additional archives will be ported over in subsequent commits.
Previously, we always skipped over files beginning with a ".". This
commit adds an option to include them.
This is needed to support test package generation via Python / mozpack.
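A sketch of the behavior change; the actual mozpack finder flag may be
named differently (something like `find_dotfiles` is assumed here):

    import os

    def find_files(base, find_dotfiles=False):
        for dirpath, dirnames, filenames in os.walk(base):
            for f in filenames:
                if f.startswith('.') and not find_dotfiles:
                    continue  # previous behavior: always skip dotfiles
                yield os.path.relpath(os.path.join(dirpath, f), base)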
The flags added in toolkit/locales/Makefile.in turn out not to actually be
used, so just remove them.
The remaining uses of XULPPFLAGS are to set debug flags depending on whether
MOZ_DEBUG is set or not. Just set a dedicated variable with the right value
from configure.
When running `mach build-backend` or `config.status`, it is now possible to
pass multiple backends to the --backend/-b option, so that they can share
moz.build reading and object emitting.
The command line syntax is, however, maybe a little awkward:
mach build-backend -b Backend1 Backend2
but supporting `-b Backend1 -b Backend2` would require more argument parser
twiddling (action='append' doesn't work out of the box with choices; we'd
need a custom action class).
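The chosen syntax maps to argparse's `nargs='+'`, which still validates
every value against `choices`. A minimal sketch (backend names here are
illustrative):

    import argparse

    BACKENDS = ['RecursiveMake', 'VisualStudio', 'CompileDB']

    parser = argparse.ArgumentParser(prog='mach build-backend')
    parser.add_argument('-b', '--backend', nargs='+', choices=BACKENDS,
                        default=['RecursiveMake'],
                        help='Build backend(s) to generate.')

    args = parser.parse_args(['-b', 'RecursiveMake', 'CompileDB'])
    assert args.backend == ['RecursiveMake', 'CompileDB']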
Currently, we set a flag on each object to know whether it has been consumed
by the backend. This doesn't work nicely when multiple backends try to consume
the same objects.
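The shape of the problem and of a fix, in hypothetical form: a flag stored
on the object is global state shared by every backend, while a per-backend
set is not.

    # Old shape: the first backend to consume an object marks it globally,
    # so a second backend would wrongly skip it.
    def consume_old(obj):
        if getattr(obj, '_consumed', False):
            return
        obj._consumed = True
        # ... emit output for this object ...

    # New shape: each backend tracks its own consumption.
    class Backend(object):
        def __init__(self):
            self._consumed = set()

        def consume(self, obj):
            if id(obj) in self._consumed:
                return
            self._consumed.add(id(obj))
            # ... emit output for this object ...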
- Make all backends report the time spent in their own execution
- Change how the data is collected for the reader and emitter such that
each of them is aware of its own data, instead of everything being
tracked by the backend.
This is meant to open the door to multiple backends running from the
same execution of config.status.
This commit exposes test-deps file info as a mach command, and
modifies the test scheme reader to make it filter out unsuitable
contexts when generating TestManifest objects for metadata context.
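A hedged sketch of what exposing this as a mach command looks like with the
mach decorators of that era; the command name and body here are made up:

    from mach.decorators import CommandProvider, Command

    @CommandProvider
    class TestDepsInfo(object):
        @Command('test-deps-info', category='testing',
                 description='Print which files each test depends on.')
        def test_deps_info(self):
            # ... read the test manifests and print dependency info ...
            return 0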
This bumps the NDK version to r10e.
Previously, we used brew to install android-sdk and a custom version
of android-ndk. That makes it hard to control the installed versions.
This installs from downloaded archives, which unifies the Mac OS X
approach with the straight-forward Linux approach.
The 'tools' package depends on 'platform-tools-preview' now. Roll
with it until Google breaks us back again.
The behaviour of the |android| tool has changed: recent versions don't
reveal what packages are installed. That means we can't skip already
installed packages, and we can't really tell if our installation
attempts succeeded. But we have faith!
This gets us a limited version of AAR support: we can consume static
AAR libraries, where "static" refers not to linking but to static
assets that are fixed at build-backend time and not modified (or
produced) during the build. This lets us pin our dependencies
(and move to Google's versioned Maven repository packages, away from
Google's unversioned ad-hoc packages).
By restricting to static AAR libraries, we avoid having to handle
truly complicated dependency trees, since changing parts of generated AAR
files requires delicate rebuilding of the APKs (and internal libraries)
that depend on the AAR files.
It is possible that we will generate AARs in the tree at some time.
Right now, we don't do that, even for GeckoView: the AARs produced are
assembled as artifacts at package time and are intended for external
consumption. We might want this for GeckoView and Fennec at some
time; we should consider using Gradle everywhere at that point.
The patch itself does the simplest possible thing (which has precedent
from Gradle and other build systems): it simply "explodes" the AAR
into the object directory and uses existing mechanisms to refer to the
exploded pieces.
AARs have both required and optional components. Each component is
defined with an expected and required flag. If a component is expected
and not present, or not expected and is present, an error is raised.
If the component is expected and present, autoconf's ifelse() macro is
used to define the relevant AAR_* component variables. If the
component is not expected and not present, no action is taken. A
consuming build backend therefore can guard all AAR_* component
variables with just the top-level AAR variable.
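The four cases reduce to a small decision table; in Python form (the real
logic is autoconf's ifelse() in m4, and the names here are illustrative):

    def check_aar_component(name, expected, present):
        if expected and not present:
            raise ValueError('AAR component %s expected but missing' % name)
        if present and not expected:
            raise ValueError('AAR component %s present but not expected' % name)
        # expected and present: define the AAR_<NAME>_* variables;
        # not expected and not present: take no action.
        return expected and present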
Many AAR files have empty assets/ directories. This patch doesn't
explode empty assets/ directories, protecting against trivial changes
to AAR files that don't impact the build.
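Since an AAR is just a ZIP file, "exploding" amounts to a guarded
extraction. A sketch under the assumptions above (not the in-tree code),
including the empty assets/ protection:

    import os
    import zipfile

    def explode_aar(aar_path, dest_dir):
        with zipfile.ZipFile(aar_path) as zf:
            for info in zf.infolist():
                # Skipping directory entries means empty assets/
                # directories are never materialized, so trivial AAR
                # repacks that only touch them can't dirty the build.
                if info.filename.endswith('/'):
                    continue
                zf.extract(info, dest_dir)
        # Existing mechanisms can now refer to the exploded pieces, e.g.:
        return os.path.join(dest_dir, 'classes.jar')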
There's a lot not to like in this approach, including:
* We need to manually reference internal AAR libs;
* I haven't separated the pinned version numbers out of configure.in.
However, it's closer to what we want than what we have!