For kind=hook, the spec doesn't include this value, as it's untrustworthy.
For kind=task, it's still untrustworthy, but since there is no privilege
escalation, that's not a concern. Still, it dramatically expands the size
of the task definition.
MozReview-Commit-ID: 6scQ2ZwxP10
The latest python-zstandard uses a newer zstandard that is faster.
It also has wheels available, which means installation doesn't require
Python development headers, etc.
MozReview-Commit-ID: 5gRq81KYmX4
Now that `mach taskcluster-build-image` can do this itself, we can avoid
all the manual handling based on curl and jq in the image builder.
An additional advantage of relying on `mach taskcluster-build-image`
doing more is that fewer changes to the build-image.sh script will be
necessary, and thus fewer updates of the image builder docker image.
The zstd command we spawn, if available at all, might be the wrong
version: zstd changed its stream format in an incompatible way at some
point, and the version shipped in e.g. Ubuntu 16.04 uses the old format,
while the version taskcluster relies on uses the new format.
Relying on gps's zstandard library allows us to ensure we use the right
version. Another advantage is that we can trivially pip install it in a
virtualenv if it isn't available on the system running the command.
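A minimal sketch of what in-process decompression looks like with that
library (the helper name here is ours, not the actual build-image code):

    import zstandard

    def decompress_file(infile, outfile):
        # Decompress a zstd stream in-process, with no dependency on
        # whatever `zstd` binary the system happens to ship.
        dctx = zstandard.ZstdDecompressor()
        with open(infile, 'rb') as fh_in, open(outfile, 'wb') as fh_out:
            dctx.copy_stream(fh_in, fh_out)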
If we're ridding ourselves of the subprocess spawning for zstd, we might
as well cover curl too, especially considering that error handling
around subprocesses is not trivial. In fact, the current error handling
code is broken and leads to deadlocks when, for example, curl is still
waiting for the python side to read data, but the python side is no
longer reading because an exception was thrown in the tar reading loop.
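To illustrate the shape of the combined replacement, here is a sketch
assuming a requests-based download; the function and its details are
ours, not the actual code:

    import tarfile

    import requests
    import zstandard

    def download_and_extract(url, destdir):
        # Stream the image over HTTP and decompress/extract it
        # in-process; if the tar loop raises, there is no curl
        # subprocess left blocked on a pipe.
        response = requests.get(url, stream=True)
        response.raise_for_status()
        dctx = zstandard.ZstdDecompressor()
        with dctx.stream_reader(response.raw) as reader:
            with tarfile.open(mode='r|', fileobj=reader) as tar:
                tar.extractall(destdir)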
Ideally, we'd simply use the --build-arg docker argument along with ARG
in the Dockerfile, but that's only supported from Docker API 1.21, and
we're stuck on 1.18 for the moment.
So we add another hack to how we handle the Dockerfile, with a commented
syntax that allows declaring arguments to the Dockerfile.
The arguments can be defined in the docker images kind.yml file through
the `args` keyword. Under the hood, they are passed down to the docker
image task through the environment. The mach taskcluster-build-image
command then uses the corresponding values from the environment to
generate a "preprocessed" Dockerfile for its context.
Initially this will skip toolchain task optimizations, which avoids hashing
local directory contents and speeds up taskgraph generation by about 25%.
MozReview-Commit-ID: B4LB5BV86nw
Add a release promotion custom action for releng's TC release promotion migration work.
This action generates a graph dependent on previously built tasks. To
track these, we add the `do_not_optimize` and `existing_tasks`
parameters. The `do_not_optimize` parameter specifies tasks that we want
to explicitly exclude from taskgraph optimization. The `existing_tasks`
parameter specifies a label-to-taskid map for tasks from previous
graphs.
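A sketch of how the two parameters could interact during optimization
(the function below is illustrative, not the actual action code):

    def replace_with_existing(label, do_not_optimize, existing_tasks):
        # Reuse a task from a previous graph unless it was explicitly
        # excluded from optimization; None means "build it again".
        if label in do_not_optimize:
            return None
        return existing_tasks.get(label)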
MozReview-Commit-ID: 1vKrNUavM4V
These tests can now be run with:

    ./mach python-test taskcluster/taskgraph

or:

    ./mach python-test taskcluster

They can now run in parallel by passing in -j.
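For example, with four workers (the count here is arbitrary):

    ./mach python-test -j 4 taskcluster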
MozReview-Commit-ID: JXeZV8B04Sf
Graph morphs modify the graph after optimization, without changing its meaning.
In this case, that means adding index tasks that will insert paths into the
index beyond the relatively limited number afforded in task.routes.
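A rough sketch of the idea (the route limit and return shape are
hypothetical):

    MAX_ROUTES = 10  # illustrative per-task limit on task.routes

    def split_index_routes(tasks):
        # Keep the first few routes on each task and collect the
        # overflow, to be written to the index by dependent index
        # tasks added to the morphed graph.
        overflow = {}
        for label, task in tasks.items():
            routes = task.get('routes', [])
            if len(routes) > MAX_ROUTES:
                task['routes'] = routes[:MAX_ROUTES]
                overflow[label] = routes[MAX_ROUTES:]
        return overflow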
MozReview-Commit-ID: AJy4exX7q2v
Various modules under taskcluster are doing ad-hoc url formatting or
requests to taskcluster services. While we could use the taskcluster
client python module, it's kind of overkill for the simple requests done
here. So instead of vendoring that module, create a smaller one with
a limited set of functions we need.
This changes the behavior of the get_artifact function to return a
file-like object when the file is neither json nor yaml, but that
branch was never used (and was actually returning an unassigned
variable, so it was broken anyway).
At the same time, make the function that does HTTP requests more
error-resistant, using urllib3's Retry with a backoff factor.
Also add a function that retrieves the list of artifacts; while
currently unused, it will be used by `mach artifact` shortly.
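A sketch of the retry setup and the reworked get_artifact described
above (the URL format and helper shapes are assumptions based on this
description, not the exact module):

    import requests
    import yaml
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    QUEUE = 'https://queue.taskcluster.net/v1'

    session = requests.Session()
    retry = Retry(total=5, backoff_factor=0.1)
    session.mount('https://', HTTPAdapter(max_retries=retry))

    def get_artifact(task_id, path):
        url = '%s/task/%s/artifacts/%s' % (QUEUE, task_id, path)
        response = session.get(url, stream=True)
        response.raise_for_status()
        if path.endswith('.json'):
            return response.json()
        if path.endswith(('.yml', '.yaml')):
            return yaml.safe_load(response.text)
        # Neither json nor yaml: hand back a file-like object.
        return response.raw

    def list_artifacts(task_id):
        url = '%s/task/%s/artifacts' % (QUEUE, task_id)
        response = session.get(url)
        response.raise_for_status()
        return response.json()['artifacts']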
This adds `.cron.yml` and a new mach command to interpret it. While
functionality is limited to nightlies right now, there is room to expand to
more diverse periodic tasks. Let your imagination run wild!
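As a sketch of how the mach command might consume such a file (the
schema below is assumed for illustration, not the actual `.cron.yml`
format):

    import yaml

    def jobs_for_project(cron_yml_path, project):
        # Load .cron.yml and keep the jobs that apply to this project.
        with open(cron_yml_path) as fh:
            config = yaml.safe_load(fh)
        return [job for job in config.get('jobs', [])
                if project in job.get('run-on-projects', [project])]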
MozReview-Commit-ID: KxQkaUbsjQs
Previously, all callers outside of tests that passed
"target_tasks_method" to TaskGraphGenerator used the same pattern
of looking for a key in the parameters and calling a function in
the target_tasks module.
Future commits will refactor how the target task graph works. To
make the transition easier, we move the logic for obtaining the
target tasks method into TaskGraphGenerator.
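A sketch of the moved logic (the registry and default name below are
illustrative):

    # Map method names to callables that pick target tasks.
    target_task_methods = {
        'all_tasks': lambda full_task_graph, parameters: list(full_task_graph),
    }

    class TaskGraphGenerator(object):
        def __init__(self, parameters):
            self.parameters = parameters

        def _get_target_tasks_method(self):
            # The pattern previously repeated in every caller: look up
            # the method named in the parameters, with a fallback.
            name = self.parameters.get('target_tasks_method', 'all_tasks')
            return target_task_methods[name]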
MozReview-Commit-ID: 3QU09iGhoXh