Historically we built all our binaries in directories in the objdir, then
symlinked them into dist/bin. Some binaries needed to be copied instead
so that certain relative path lookups work properly, so we resorted to
sprinkling `NSDISTMODE=copy` around Makefiles.
This change makes it so we build PROGRAMs (but not other kinds of targets)
directly in dist/bin instead. We could do the same for our other targets
with a little more work.
Several places in the tree were copying built binaries to some other
location and needed fixing up to match the new location of the binaries.
On Windows pdb files are left in the objdir where the program was
originally linked. symbolstore.py needs to locate the pdb file both to
determine whether it should dump symbols for a binary and also to copy
the pdb file into the symbol package. We fix this by simply looking for
the pdb file in the current working directory if it isn't present next
to the binary, which matches how we invoke symbolstore.py.
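A minimal sketch of that lookup order, with illustrative names rather than the actual symbolstore.py helpers:

```python
import os

def find_pdb(binary_path, pdb_name):
    """Prefer a pdb sitting next to the binary; otherwise fall back to the
    current working directory, which matches how symbolstore.py is invoked."""
    beside_binary = os.path.join(os.path.dirname(binary_path), pdb_name)
    if os.path.isfile(beside_binary):
        return beside_binary
    in_cwd = os.path.join(os.getcwd(), pdb_name)
    return in_cwd if os.path.isfile(in_cwd) else None
```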
MozReview-Commit-ID: 8TOD1uTXD5e
For consistency with when MOZ_SOURCE_CHANGESET is set, and because, while
slim, there is a chance of conflict with short forms that doesn't exist
with the full form and could bite us in the long run.
We want symbolstore.py to fail, preferably loudly, if we can't find the
necessary tools, and throwing away errors here runs counter to that
goal. Dumper is a base class for Dumper_Win32, where we probably don't
have file(1), but Dumper_Win32 shouldn't be calling RunFileCommand.
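As a hedged illustration of the intent, not the actual RunFileCommand implementation (the `-Lb` flags, follow symlinks and brief output, are an assumption about the exact invocation):

```python
import subprocess

def run_file_command(path):
    """Ask file(1) what kind of binary this is. Deliberately no try/except:
    if file(1) is missing or errors out, we want symbolstore.py to fail
    loudly instead of silently producing no symbols."""
    return subprocess.check_output(['file', '-Lb', path]).decode('utf-8', 'replace')
```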
symbolstore.py processes filenames in FILE lines of symbol files to encode
information about the source repository they came from, or to mark
known generated source files. It also reads the dist/include install
manifest so it can map header files from there back to their source locations.
These mappings were broken on Windows because symbolstore.py first passes
filenames into `FixFilenameCase`, which calls `GetFinalPathNameByHandleW`,
which breaks things in two ways:
1) It returns paths with an uppercase drive letter, and source paths from
elsewhere have a lowercase drive letter.
2) It resolves symlinks, and on Taskcluster Windows builds the whole build
is done within a symlinked directory so paths directly from the srcdir
and objdir won't match those canonicalized paths.
This patch adds a `normpath` function to symbolstore.py and moves the
contents of `FixFilenameCase` into it on Windows, and just makes it
an alias for `os.path.normpath` everywhere else. It then uses it everywhere
we deal with paths that will be compared against source file paths from symbol
files so that all paths are canonicalized the same way and we can do simple
string matching from there.
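A minimal sketch of the shape of that change; the Windows branch below stands in for the real implementation, which calls `GetFinalPathNameByHandleW` to fix the on-disk case:

```python
import os
import sys

if sys.platform == 'win32':
    def normpath(path):
        # Stand-in for the real Windows body (GetFinalPathNameByHandleW).
        # What matters is that every path later compared against FILE lines
        # goes through this one canonicalization, so the uppercase-drive and
        # resolved-symlink differences can no longer break string matching.
        return os.path.normcase(os.path.normpath(path))
else:
    # Everywhere else a plain os.path.normpath is sufficient.
    normpath = os.path.normpath
```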
Additionally, this patch adds a check to the functional test to verify
that header files from dist/include are correctly mapped to the source
repository. Unfortunately there is still not a test for generated files
because they only appear in the libxul symbol file, and dumping symbols
from libxul is too slow to invoke as part of a unit test.
MozReview-Commit-ID: Dx3z1BZcIvc
Now that builds are uploading generated source files to an S3 bucket,
symbolstore.py can alter the FILE lines in symbol files to record the
URLs where those generated source files can be found. We currently record
files from the hg repository as `hg:<repo>:<path>:<revision>`, so here we
record generated files as `s3:<bucket>:<path>:` and expect that Socorro
will map that to the S3 bucket in a sensible way.
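A small sketch of that encoding for generated files (the helper is illustrative; only the `s3:<bucket>:<path>:` shape comes from the description above):

```python
def format_generated_file(bucket, relpath):
    """Generated sources aren't in hg, so record where the build uploaded
    them instead. The trailing field is left empty, by analogy with
    hg:<repo>:<path>:<revision>."""
    return 's3:%s:%s:' % (bucket, relpath.replace('\\', '/'))
```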
This patch does not change source server indexing, which allows Microsoft
debuggers to fetch source files for a build. That will be handled in a
followup.
MozReview-Commit-ID: 1g14smF0fo8
The 'src' subdir needs to be part of the path *after*
the blob/commit_id section of the url, so we need to
not strip it from the prefix when we match.
MozReview-Commit-ID: 9HA3a7d8kh4
We were prefix-matching the rust srcdir when hyperlinking
symbols, but then appending the relative source path to
the top level repo url, resulting in broken links.
Instead, link to the srcdir url at github.
MozReview-Commit-ID: 33tSMM96Vie
This gives us source file names with repository info in our generated
symbol files, so that crash reports on crash-stats can link to the
correct source files for files from the Rust standard library.
I've hardcoded the source paths that the Rust project uses, which is
not my favorite thing, but there's no simple way to get this information
otherwise.
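A hedged sketch of what that mapping looks like; the hardcoded `/checkout/` prefix and the `git:` FILE-line form are assumptions for illustration:

```python
def rewrite_rust_std_path(path, rust_rev):
    """Map a path baked into rustc's debug info (e.g. /checkout/src/libstd/...)
    to repository info pointing at the rust-lang/rust sources at the matching
    revision, keeping the 'src' component so the eventual GitHub blob URL is
    correct."""
    prefix = '/checkout/'
    if not path.startswith(prefix):
        return path
    return 'git:github.com/rust-lang/rust:%s:%s' % (path[len(prefix):], rust_rev)
```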
MozReview-Commit-ID: 6SeaMqH8xfc
This removes handling of dumping symbols in parallel from symbolstore.py
and updates unit tests.
A prior commit made symbolstore.py handle a single file at a time, leaving
concurrency to be handled by make, so this is no longer needed.
MozReview-Commit-ID: C7IHdVHHjRH
This commit moves symbol dumping to the compile tier, to be run via "syms"
targets. Tracking files are used for the sake of incremental builds, because
dump_syms may generate multiple outputs whose paths are not known ahead of
time.
Minimal changes to symbolstore.py are made here. More extensive
simplifications will be made in a future commit on the basis of symbolstore.py
handling one file at a time.
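A small sketch of the tracking-file idea (the function is illustrative, not the real build code):

```python
def write_tracking_file(tracking_file, outputs):
    """dump_syms can emit outputs whose names aren't known ahead of time, so
    record whatever was actually produced. The tracking file then acts as the
    single, predictable target that incremental 'syms' builds can depend on."""
    with open(tracking_file, 'w') as f:
        f.write('\n'.join(sorted(outputs)))
```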
MozReview-Commit-ID: 3mOP8A6Y7iM
It turns out that running makecab to compress PDB files takes a significant
amount of time in the buildsymbols step. I wrote an implementation of
makecab in Rust that implements only the subset of features we use and
it's significantly faster:
https://github.com/luser/rust-makecab
This patch adds a makecab check to moz.configure, adds a release build of
the makecab binary to the Windows tooltool manifests, points the build at
it from mozconfig.win-common, and changes symbolstore.py to use MAKECAB
from substs instead of calling `makecab.exe` directly.
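Roughly, the symbolstore.py side becomes something like the following sketch, where the makecab path comes from configure rather than being the literal string `makecab.exe` (the argument list is illustrative, not the exact invocation):

```python
import subprocess

def compress_pdb(makecab, pdb_path, cab_path):
    """Run whichever makecab configure found (MAKECAB from substs), so a
    faster drop-in such as rust-makecab is picked up automatically."""
    subprocess.check_call([makecab, pdb_path, cab_path])
```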
MozReview-Commit-ID: 76FHLIZFCXS
If the srcdir is in a path containing a symlink on Windows, when
`FixFilenameCase` calls `GetFinalPathNameByHandleW` to normalize the case of
a source file it will get a path to the file with the symlink resolved.
This breaks our "is this file in the source repository" check.
This patch makes the code call `FixFilenameCase` for any srcdir arguments
that are passed to the script, so any symlinks will be resolved there
and the prefix matching will work.
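A minimal sketch of the fix, passing the existing case-fixing helper in explicitly to keep the example self-contained:

```python
import os

def canonicalize_srcdirs(srcdirs, fix_filename_case):
    """Run the same canonicalization over the srcdir arguments that source
    file paths get, so a srcdir reached through a symlink still prefix-matches
    the resolved paths."""
    return [fix_filename_case(os.path.abspath(d)) for d in srcdirs]
```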
MozReview-Commit-ID: 2UibW9XYWoK
Currently, we use the gzip default of 6. Our history with zlib tells
us that reducing the compression level to 5 or 4 often yields
significantly faster operations while only sacrificing a little
storage. Measurement here shows similar results.
On libxul.so.dbg:
level   time   compressed
    6   21.0s  231,045,158
    5   15.8s  232,926,435
    4   12.2s  237,587,011
    3   11.1s  245,104,157
Changing the level from 6 to 4 increases the size of the compressed
file by 6,541,853 bytes, or 2.83%. But it saves ~10s from the long
pole of builds in automation. And that's just from libxul.
When you factor in all compressed files, this change has a significant
impact on symbol generation.
Before: 221s wall; 150s CPU; 311,424,856 bytes
After: 192s wall; 130s CPU; 318,085,885 bytes
That's on my machine, which has a 4.0 GHz CPU. CPU time savings in
automation will likely be more significant.
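The change itself amounts to passing an explicit compression level; a sketch, assuming the files are gzipped from Python:

```python
import gzip
import shutil

def gzip_debug_file(src_path, dest_path, level=4):
    """Compress with level 4 instead of the gzip default of 6, trading a few
    percent of size for a noticeably faster buildsymbols step."""
    with open(src_path, 'rb') as src, \
            gzip.GzipFile(dest_path, 'wb', compresslevel=level) as dest:
        shutil.copyfileobj(src, dest)
```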
MozReview-Commit-ID: 7CbRSZvUayj
MOZ_SOURCE_REPO is set by automation to indicate the URL of the current
repository. I'm not sure what SRCSRV_ROOT is from. Probably legacy.
Use MOZ_SOURCE_REPO instead of SRCSRV_ROOT.
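In code terms the change is just which variable we read; a sketch, assuming the value is picked up from the environment:

```python
import os

# Automation sets MOZ_SOURCE_REPO to the URL of the current repository; use
# it directly rather than the legacy SRCSRV_ROOT.
vcs_root = os.environ.get('MOZ_SOURCE_REPO')
```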
MozReview-Commit-ID: IfCSiaqgJb5
makecab.exe has 3 options for compression: disable, MSZIP, and LZX.
Here is a breakdown of the 3 levels of compression for an opt 32-bit
build on my i7-6700K:
compression  directory size  full.zip  xul.pd_  `buildsymbols`
None               1,360 MB    227 MB   146 MB      49s
MSZIP                520 MB    221 MB   142 MB     113s
LZX                  436 MB    169 MB   102 MB     248s
(The original size of xul.pdb is ~500 MB.)
This commit switches us to MSZIP as the compression format. This
makes `buildsymbols` >2x faster while only increasing the full zip
archive size by ~31%. This feels like an appropriate trade-off.
The memory-related flag has been removed because it only applies
to LZX compression.
It's worth noting the results of using `zip` to compress xul.pdb and xul.sym:
Level  Zip Size  xul.pdb Compressed  Time
    9  160.6 MB            139.8 MB   76s
    7  161.4 MB            140.5 MB   30s
    5  164.7 MB            143.2 MB   16s
    4  170.0 MB            147.3 MB   12s
    3  176.4 MB            151.6 MB   11s
So "MSZIP" compression appears to be using level 9. If we could swap
in our own cab generator that uses a zlib compression level less
than 9, we'll make symbol generation significantly faster without
sacrificing too much size. I'm inclined to punt that to a follow-up
bug.
MozReview-Commit-ID: GbbClkn9PLN
This commit contains a few things:
* Update our copy of google-breakpad to upstream c53ed143108948eb7e2d7ee77dc8c0d92050ce7c
* Get rid of all but one local patch, fold a few related local patches into one
* Misc build fixup to sync with upstream: adding a few new moz.build files
and source files
* The final bits of unhooking Breakpad from the profiler:
** Revert to only building toolkit/crashreporter if MOZ_CRASHREPORTER.
** Stop building bits of Breakpad that we only needed for the profiler.
** Remove a few bits of profiler code that were used to interface with Breakpad.
** Remove toolkit/crashreporter/breakpad-logging, which was only used to
suppress Breakpad logging for the in-process stackwalker.
* Upstream removed their Android-compat sys/ucontext.h because the Android NDK
added it, but the bionic we're using for Gonk builds is too old, so add a
copy of the previous version of those files to
toolkit/crashreporter/gonk-include to keep Gonk building.
* Consolidate moz.build files under toolkit/crashreporter/google-breakpad/client/linux
Large files take longer to process. Scheduling large files after smaller
files means there is a higher chance a large file may be a long pole
during processing.
This commit changes the scheduling logic to exhaustively obtain the set
of files to be processed. It then sorts them by descending file size and
schedules them in the resulting order, thus minimizing the chances for a
large file to be the long pole holding up processing completion.
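A compact sketch of that scheduling order (the `submit` callback stands in for whatever queues one file for dumping):

```python
import os

def schedule_by_size(files, submit):
    """Collect the full set of inputs first, then hand the largest files out
    before the smaller ones so a big file (libxul, say) is less likely to be
    left running alone at the end."""
    for path in sorted(files, key=os.path.getsize, reverse=True):
        submit(path)
```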
On my machine this doesn't change wall execution time. However,
automation may be different. And the logic of the new behavior is sound.
concurrent.futures doesn't have a WorkerInitializer equivalent to what
multiprocessing.Pool has, so refactor things slightly to remove that dependency.
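A sketch of the resulting shape (names are illustrative), where per-task arguments replace the initializer that multiprocessing.Pool would have provided:

```python
from concurrent.futures import ProcessPoolExecutor

def dump_all(files, dump_one, dump_syms_bin):
    """Without a worker-initializer hook, pass whatever state each task needs
    (here, just the dump_syms path) as plain arguments to the task itself."""
    with ProcessPoolExecutor() as executor:
        futures = [executor.submit(dump_one, path, dump_syms_bin)
                   for path in files]
        return [f.result() for f in futures]
```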