It is not at *all* clear how multiple optimizations for a single task should
interact. No simple logical operation is right in all cases, and in fact in
most imaginable cases the desired behavior turns out to be independent of all
but one of the optimizations. For example, given both `seta` and
`skip-unless-files-changed` optimizations, if SETA says to skip a test, it is
low value and should be skipped regardless of what files have changed. But if
SETA says to run a test, then it has likely been skipped in previous pushes, so
it should be run regardless of what has changed in this push.
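A minimal sketch of that interaction (the function and argument names here are hypothetical illustrations, not the actual taskgraph API):

```python
def should_skip(seta_says_skip, relevant_files_changed):
    """Combine `seta` and `skip-unless-files-changed` as described above.

    The result is deliberately independent of `relevant_files_changed`:
    a low-value task is skipped no matter what changed, and a task SETA
    wants to run is run no matter what changed.
    """
    if seta_says_skip:
        return True   # low value: skip regardless of changed files
    return False      # likely skipped on earlier pushes: run it
```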
This also adds a bit more output about optimization, which may be useful for
anyone wondering why a particular job didn't run.
MozReview-Commit-ID: 3OsvRnWjai4
This includes adding TASKCLUSTER_VOLUMES to docker image builds directly. The
environment variable is not added as part of the task transform because
`run-task` is not in payload.command; instead, build-image.sh calls run-task
after doing some other housekeeping.
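For illustration, the variable might be injected into the image-build task
definition roughly like this (a sketch assuming docker-worker's payload layout
and a semicolon-separated list; the `volumes` value is made up):

```python
volumes = ['/builds/worker/checkouts', '/builds/worker/workspace']

task = {
    'payload': {
        'env': {
            # Set here directly because build-image.sh, not
            # payload.command, is what eventually invokes run-task.
            'TASKCLUSTER_VOLUMES': ';'.join(sorted(volumes)),
        },
        'command': ['/bin/bash', '-c', 'build-image.sh'],
    },
}
```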
Ideally image builds would be turned into jobs and all of this would occur
automatically, but that turns out to be quite a bit too complex for this
incidental fix -- perhaps best solved in another bug.
MozReview-Commit-ID: FYHvafJras7
See the inline comment for the rationale here.
This check may not catch all volumes and caches. But after subsequent
commits refactor how permissions for caches and volumes are handled,
this edge case will likely result in permissions errors in the task,
so it isn't worth worrying about.
Several Dockerfiles have been updated to add missing VOLUME declarations so
the check passes.
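The check is conceptually along these lines (a simplified sketch: it parses
only the plain `VOLUME /path ...` form, not the JSON-array form, consistent
with the "may not catch everything" caveat above):

```python
import re

def dockerfile_volumes(path):
    # Collect paths declared via the plain `VOLUME /a /b` form.
    volumes = set()
    with open(path) as fh:
        for line in fh:
            m = re.match(r'VOLUME\s+(.+)', line.strip(), re.IGNORECASE)
            if m:
                volumes.update(m.group(1).split())
    return volumes

def verify_volumes_defined(required, dockerfile):
    missing = set(required) - dockerfile_volumes(dockerfile)
    if missing:
        raise Exception('%s is missing VOLUME declarations for: %s'
                        % (dockerfile, ', '.join(sorted(missing))))
```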
In the case of desktop1604-test, we stopped removing
/home/worker/.cache because you can't remove a mount point, which is
what volumes are inside Docker containers.
MozReview-Commit-ID: GEyNkkX00kN
The valgrind test will try to load debug information for the modules present
in a stack trace. If it fails to do so, we end up with a stack trace
containing only memory addresses.
We install debug information for all installed packages: we look for all
libraries in the common system locations and try to install the corresponding
debuginfo package for each.
This is accomplished with the debuginfo-install yum utility script.
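Roughly, the logic amounts to the following (a sketch in Python for
illustration only; the glob patterns are assumptions, and the real work
happens in the image setup script):

```python
import glob
import subprocess

packages = set()
for pattern in ('/lib64/*.so*', '/usr/lib64/*.so*'):
    for lib in glob.glob(pattern):
        try:
            # rpm -qf reports the package that owns a file.
            pkg = subprocess.check_output(
                ['rpm', '-qf', '--qf', '%{NAME}\n', lib])
        except subprocess.CalledProcessError:
            continue  # file not owned by any package
        packages.add(pkg.decode().strip())

# debuginfo-install resolves each package to its -debuginfo counterpart.
subprocess.check_call(['debuginfo-install', '-y'] + sorted(packages))
```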
MozReview-Commit-ID: 76mHOUKKJud
To date we have variously specified both worker-type and worker-implementation,
often coordinating them manually. We also embedded a few awkward assumptions,
such as that the native engine only runs on OS X.
But a worker type has one and only one implementation, and that implementation
is stable over time (as changing it would require simultaneous landings on all
trees).
Instead, this change makes worker-type the primary configuration, and derives
both a worker implementation (defining the payload format) and worker OS
(determining what to include in the payload) from that value. The derivation
occurs when deciding how to implement a particular job, where the run_using
functions are distinguished by worker implementation.
The two-part logic to determine how and where to run a test task based on its
platform is combined into a single transform, `set_worker_type`.
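Concretely, the derivation is a lookup keyed on the worker type; a sketch with
illustrative entries (the real table is larger, and these exact names are
assumptions):

```python
# worker-type -> (implementation, os); os may be None where the
# implementation does not need it.
WORKER_TYPES = {
    'aws-provisioner-v1/gecko-t-linux-large': ('docker-worker', 'linux'),
    'aws-provisioner-v1/gecko-t-win10-64': ('generic-worker', 'windows'),
    'scriptworker-prov-v1/signing-linux-v1': ('scriptworker-signing', None),
    'invalid/invalid': ('invalid', None),
}

def worker_info(worker_type):
    # Each worker type has exactly one, stable implementation, so the
    # run_using function can be chosen from this single value.
    return WORKER_TYPES[worker_type]
```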
This contains some other related changes:
- MOZ_AUTOMATION is set in specific jobs, rather than everywhere docker-worker
is used
- the URL to test packages is factored out into a shared function
- docker-worker test defaults are applied in `mozharness_test.py`
- the WORKER_TYPE array in `task.py`, formerly mixing two types of keys, is
split
- the 'invalid' workerType is assigned an 'invalid' implementation
- all tasks that do not use job descriptions but use docker-worker, etc. have
`worker.os` added
Tested to not produce a substantially different taskgraph for a regular push, a
try push, or a nightly cron.
MozReview-Commit-ID: LDHrmrpBo7I
A few commits ago, we bumped the default zstandard compression level
from 3 to 10 when we switched to multi-threaded compression. Even with
multiple threads, the higher level is a bit slower.
For images that will be built once and read multiple times, it is
worthwhile to burn extra CPU once and produce a smaller image. However,
for other tasks where the number of reads is limited, the extra CPU
isn't worth it. This commit uses the SCM level as a proxy for
"optimize for speed": if the task is associated with level 1 (a try
push), we lower the compression level and optimize for speed;
otherwise, we keep the higher compression level and optimize for
image size.
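The selection itself is a one-liner; in sketch form (the lowered level shown
here, 3, is inferred from the previous default rather than stated in this
commit):

```python
def image_zstd_level(scm_level):
    # Try pushes (SCM level 1) read the image few times: favor speed.
    # Everything else keeps the higher level and favors image size.
    return 3 if int(scm_level) == 1 else 10
```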
Credit goes to Jonas for this terrific idea.
MozReview-Commit-ID: Hui97KsZpgw
Note that the to_json method prefers the taskgraph's dependencies information
(edges) to that from the task.dependencies entries. At a few points in
task-graph generation, these values differ, although that is expected (for
example, the full task set contains no edges, but that information is still in
task.dependencies). Unifying that representation leads to some difficulty with
task transforms that reach into the dependency tree (beetmover), so the
different representations are left as-is.
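In sketch form, the preference looks like this (attribute names are simplified
from the taskgraph module; treat them as assumptions):

```python
def taskgraph_to_json(tasks, graph_edges):
    """Serialize tasks, preferring the graph's edges for dependencies.

    `graph_edges` is a set of (left, right, name) label tuples; in the
    full task set it is empty, so a task's serialized `dependencies`
    may legitimately be empty even when task.dependencies is not.
    """
    out = {}
    for label, task in tasks.items():
        entry = dict(task)  # assume tasks are already dict-like here
        entry['dependencies'] = {
            name: right
            for left, right, name in graph_edges
            if left == label
        }
        out[label] = entry
    return out
```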
MozReview-Commit-ID: GeW8HNwFA9Z