Currently, if a task defines its own expiry with a very large value,
that value is respected even on try, where we actually don't want that
to happen.
This also helps simplify the setup for docker images.
We also take the opportunity to remove the discrepancy between the
default expiry for tasks in general and for tests in particular. Bug 1258497
set the original expiry to 14 days, bug 1281004 added another place
where the expiry was set to 14 days for tests specifically, and then bug
1304180 changed the expiry to 28 days; it seems the location for tests
was simply overlooked rather than deliberately left at 14 days.
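A minimal sketch of the intended behavior, written as a taskgraph-style transform (illustrative names, not the exact in-tree patch):

```python
DEFAULT_EXPIRES_AFTER = "28 days"  # the default discussed above

def enforce_expiry(config, tasks):
    # On try, ignore any task-defined expiry, however large; elsewhere,
    # only fill in the default when the task doesn't set one.
    for task in tasks:
        if config.params["project"] == "try":
            task["expires-after"] = DEFAULT_EXPIRES_AFTER
        else:
            task.setdefault("expires-after", DEFAULT_EXPIRES_AFTER)
        yield task
```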
Differential Revision: https://phabricator.services.mozilla.com/D96962
Allow-list all Python code in tree for use with the black linter, and re-format all code in-tree accordingly.
To produce this patch I did all of the following:
1. Make changes to tools/lint/black.yml to remove the `include:` stanza and update the list of source extensions.
2. Run ./mach lint --linter black --fix
3. Make some ad-hoc manual updates to python/mozbuild/mozbuild/test/configure/test_configure.py -- it has some hard-coded line numbers that the reformat breaks.
4. Make some ad-hoc manual updates to `testing/marionette/client/setup.py`, `testing/marionette/harness/setup.py`, and `testing/firefox-ui/harness/setup.py`, which have hard-coded regexes that break after the reformat.
5. Add a set of exclusions to black.yml. These will be deleted in a follow-up bug (1672023).
# ignore-this-changeset
Differential Revision: https://phabricator.services.mozilla.com/D94045
This change is to facilitate defining docker images in comm/taskcluster/docker. At the
moment this is not possible due to how 'context_path' is set.
taskgraph.util.docker is already imported by the transform code, so it can make use of
the existing image_path function. image_path returns an absolute path, while some of the
consumers of context_path expect a path that's relative to topsrcdir.
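For illustration, the conversion could look roughly like this (a sketch assuming `image_path()` from taskgraph.util.docker; `context_path_for` is a hypothetical helper):

```python
import os

from taskgraph.util.docker import image_path

def context_path_for(image_name, topsrcdir):
    # image_path returns an absolute path; consumers of context_path
    # expect one relative to topsrcdir, so convert it here.
    return os.path.relpath(image_path(image_name), topsrcdir)
```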
Differential Revision: https://phabricator.services.mozilla.com/D92702
In bug 1626058, I changed how the docker image digest was generated:
- I used the same directory structure to generate the digest as was used for generating the context.
- I moved context generation to the decision task, and used the hash of that as part of the digest.
Unfortunately, it turns out the file name in the gzip header of the context
.tar.gz differed between when we were creating a context to write out and
when we were just generating the hash.
This adjusts the name used in the gzip header to be consistent.
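The gzip header records a file name (and mtime), so identical tar content can still produce different compressed bytes, and thus different hashes. A sketch of the idea with a pinned header name (assumed names, not the in-tree code):

```python
import gzip
import hashlib
import io

def gzip_context(tar_bytes, header_name="docker-context.tar"):
    # Pin the name (and mtime) stored in the gzip header so that the
    # bytes we hash match the bytes we later write out.
    buf = io.BytesIO()
    with gzip.GzipFile(filename=header_name, mode="wb", fileobj=buf, mtime=0) as gz:
        gz.write(tar_bytes)
    return buf.getvalue()

# Hashing and writing now agree:
digest = hashlib.sha256(gzip_context(b"tar stream here")).hexdigest()
```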
Differential Revision: https://phabricator.services.mozilla.com/D84753
We started doing that because snapshot.debian.org would ban some AWS IP
ranges and we would get random failures, but that's no longer the case.
On the other hand, when more "normal" errors happen, like changing a
Dockerfile to add a package that doesn't actually exist, the image build
is retried 5 times with no chance of succeeding, and Treeherder doesn't
link to the log because the task is purple, so you need to go to
Taskcluster manually.
Removing the autoretry will make things smoother.
Differential Revision: https://phabricator.services.mozilla.com/D73392
Now that we have added the necessary scopes to `ci-configuration`,
we can add the in-tree scopes to give tasks access to the
`hgmointernal` config Taskcluster secret.
Differential Revision: https://phabricator.services.mozilla.com/D25001
This is useful for the out-of-tree taskgraph code. Downstream products can
pin the generated decision task image by revision rather than by contents.
Differential Revision: https://phabricator.services.mozilla.com/D19032
This allows images to be built on every commit. This is useful for the
out-of-tree taskgraph, which builds a docker image with the taskgraph code
installed.
Differential Revision: https://phabricator.services.mozilla.com/D19031
Eventually, workers will provide these variables directly
(https://bugzilla.mozilla.org/show_bug.cgi?id=1460015). But for now, this
ensures that TASKCLUSTER_ROOT_URL is set everywhere in production, and
TASKCLUSTER_PROXY_URL is set wherever the proxy is active.
The taskgraph Taskcluster utils module gains a `get_root_url()` that returns
the root URL for the current run, either from an environment variable in
production or, on the command line, defaulting to https://taskcluster.net for
user convenience. When the production instance's URL changes, we can simply
change that default.
Other changes to use this function are reserved for later commits.
This changes the docker build process to propagate TASKCLUSTER_ROOT_URL into
the docker images where necessary (using %ARG), specifically to create URLs
for debian repo paths.
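In rough outline, the helper behaves like this (simplified; the real module may differ):

```python
import os

def get_root_url():
    # Production tasks have TASKCLUSTER_ROOT_URL in their environment;
    # on a developer's command line, fall back to the production
    # instance for convenience.
    return os.environ.get("TASKCLUSTER_ROOT_URL", "https://taskcluster.net")
```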
Eventually, workers will provide these variables directly
(https://bugzilla.mozilla.org/show_bug.cgi?id=1460015). But for now, this
ensures that TASKCLUSTER_ROOT_URL is set everywhere, and TASKCLUSTER_PROXY_URL
is set wherever the proxy is active.
The setup for the mach commands defaults to https://taskcluster.net for user
convenience. When the production instance's URL changes, we can simply change
that default.
This changes the docker build process to propagate TASKCLUSTER_ROOT_URL into
the docker images, and for good measure includes some code to use that value to
generate debian repo paths.
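As an illustration of the kind of URL this enables inside the image build (the helper name is hypothetical; the artifact URL layout follows the usual root-URL convention):

```python
import os

def debian_repo_url(task_id, artifact_path):
    # TASKCLUSTER_ROOT_URL is injected into the build via %ARG, so the
    # repo URL no longer hard-codes a particular deployment.
    root_url = os.environ["TASKCLUSTER_ROOT_URL"]
    return "{}/api/queue/v1/task/{}/artifacts/{}".format(
        root_url, task_id, artifact_path
    )
```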
Differential Revision: https://phabricator.services.mozilla.com/D14196
There are several kinds that cache tasks based on the inputs that go into the task. Historically,
these inputs included the names of upstream tasks. This changes these tasks to instead include
the digests of the upstream tasks.
This also bumps the version for the docker and toolchain caches, as every digest changes for them.
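Conceptually, the digest computation changes along these lines (a sketch, not the in-tree code):

```python
import hashlib

def cached_task_digest(own_data, upstream_digests):
    # Hash the task's own inputs together with its upstreams' digests
    # (previously: their names), so any change upstream propagates.
    h = hashlib.sha256(own_data.encode("utf-8"))
    for dep_digest in sorted(upstream_digests):
        h.update(dep_digest.encode("utf-8"))
    return h.hexdigest()
```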
Differential Revision: https://phabricator.services.mozilla.com/D11949
This uses the latest image_builder image (on docker hub) to build even the
image_builder image.
The change to `docker.py` handles a new API response (`aux`) from the Docker
daemon. It's unclear what this key means, but displaying it is simple.
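For context, the daemon streams one JSON object per line during a build; handling `aux` is just one more branch (a sketch of the shape of the change, not the exact patch):

```python
import json

def display_build_output(lines):
    for line in lines:
        msg = json.loads(line)
        if "stream" in msg:
            print(msg["stream"], end="")
        elif "aux" in msg:
            # Newer daemons emit an "aux" object (e.g. the image ID);
            # we don't interpret it, we just display it.
            print(json.dumps(msg["aux"]))
        elif "error" in msg:
            raise RuntimeError(msg["error"])
```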
Differential Revision: https://phabricator.services.mozilla.com/D8441
Most jobs include at least one transform that verifies the input of all the
tasks against a schema. This code is duplicated in each transform. Refactor it
so that we only need one copy of the logic.
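The shared logic can be a single transform that each kind registers, roughly like this (a sketch assuming a voluptuous `Schema` and taskgraph's `TransformSequence`; `add_validate` and the example schema are illustrative):

```python
from voluptuous import ALLOW_EXTRA, Required, Schema

from taskgraph.transforms.base import TransformSequence

transforms = TransformSequence()
job_schema = Schema({Required("name"): str}, extra=ALLOW_EXTRA)

def add_validate(transforms, schema):
    # One shared validation transform, registered per kind, replacing
    # the per-transform copies of this logic.
    @transforms.add
    def validate(config, tasks):
        for task in tasks:
            # voluptuous returns the validated (possibly coerced)
            # task, or raises on invalid input.
            yield schema(task)

add_validate(transforms, job_schema)
```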
Differential Revision: https://phabricator.services.mozilla.com/D12165
The digest for a docker image task did not include the digest of the parent
image, and so in particular did not depend on the versions of packages
included in a parent image.
If two branches have a docker image with identical Dockerfiles but different
parents, both would get the same digest, leading to unexpected interference
between the branches.
This fixes things by including the digest of the parent image as input into the
digest of child images.
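The fix amounts to feeding the parent's digest into the child's digest computation, conceptually:

```python
import hashlib

def image_task_digest(context_hash, parent_digest=None):
    # Without parent_digest here, two images with identical Dockerfiles
    # but different parents would collide; with it, a parent rebuild
    # changes every child's digest. Illustrative, not the in-tree code.
    h = hashlib.sha256(context_hash.encode("utf-8"))
    if parent_digest is not None:
        h.update(parent_digest.encode("utf-8"))
    return h.hexdigest()
```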
Differential Revision: https://phabricator.services.mozilla.com/D11807
When apt-get fails, it has a distinctive error code (100). Most of the
time, when apt-get fails, it's because of some network error, or
possibly some problem unpacking archives. When that happens, retrying
the task usually "fixes" the issue.
One of the (currently) most common causes of problems is
snapshot.debian.org not being available to some of the EC2 instances.
It would be possible to set things up so that we only retry when we
detect that situation (checking that the public IP of the instance is
not in the known list of problematic IPs), but that would require
wrapping apt-get, or something along those lines, which is not entirely
trivial to do for the packages tasks, because they don't rely on docker
images.
However, since there aren't many apt-get failures other than these,
and since there have been, historically, some intermittent apt-get
failures of a different nature that were solved by re-running the tasks,
it seems fair to just retry whenever apt-get fails.
One downside of this approach is that if for some reason a change to a
Dockerfile ends up mentioning a package that doesn't exist, that too
will result in multiple retries, which might be inconvenient, but
that's not something that's going to happen often.
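A sketch of how a transform could request this from the worker (the `retry-exit-status` key follows the in-tree worker payload convention; treat the exact spelling here as an assumption):

```python
def retry_on_apt_get_errors(config, tasks):
    # apt-get exits with 100 on failure; asking the worker to retry on
    # that status covers both image builds and packages tasks without
    # wrapping apt-get itself.
    for task in tasks:
        worker = task.setdefault("worker", {})
        worker.setdefault("retry-exit-status", []).append(100)
        yield task
```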
Differential Revision: https://phabricator.services.mozilla.com/D11420