tubestation/taskcluster/docs/attributes.rst
Gregory Szorc 3697053827 Bug 1460777 - Taskgraph tasks for retrieving remote content; r=dustin, glandium
Currently, many tasks fetch content from the Internet. The problem is
that fetching from the Internet is unreliable: servers may have
outages or be slow; content may disappear or change out from under us.

The unreliability of third-party services poses a risk to Firefox CI.
If services aren't available, we may be unable to run some CI tasks.
In the worst case, we might not be able to release Firefox. That would
be bad. In fact, as I write this, gmplib.org has been unavailable for
~24 hours and Firefox CI is unable to retrieve the GMP source code.
As a result, building GCC toolchains is failing.

A solution to this is to make tasks more hermetic by depending on
fewer network services (which aren't guaranteed to remain available
over time and therefore introduce instability).

This commit attempts to mitigate some external service dependencies
by introducing the *fetch* task kind.

The primary goal of the *fetch* kind is to obtain remote content and
re-expose it as a task artifact. By making external content available
as a cached task artifact, we allow dependent tasks to consume this
content without touching the service originally providing that
content, thus eliminating a run-time dependency and making tasks more
hermetic and reproducible over time.

We introduce a single "fetch-url" "using" flavor to define tasks that
fetch a single URL and re-expose its content as an artifact. Powering
this is a new, minimal "fetch" Docker image that contains a
"fetch-content" Python script that does the work for us.

We have added tasks to fetch source archives used to build the GCC
toolchains.

Fetching remote content and re-exposing it as an artifact is not
very useful by itself: the value is in having tasks use those
artifacts.

We introduce a taskgraph transform that allows tasks to define an
array of "fetches." Each entry names a task from the "fetch" kind.
When present, the corresponding "fetch" task is added as a
dependency, and the task ID and artifact path from that "fetch" task
are added to the MOZ_FETCHES environment variable of the task depending
on it. Our "fetch-content" script has a "task-artifacts"
sub-command that tasks can execute to retrieve all artifacts listed
in MOZ_FETCHES.
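For illustration, consuming MOZ_FETCHES amounts to turning each
(task ID, artifact path) pair into a queue artifact URL and downloading
it. The sketch below assumes a JSON-encoded list of
{"task": ..., "artifact": ...} objects and the queue URL layout in use at
the time of writing; the real encoding is whatever the transform emits:

    import json
    import os

    # Illustrative queue root; not something the transform guarantees.
    QUEUE = "https://queue.taskcluster.net/v1"

    def iter_fetch_urls():
        # Assumes MOZ_FETCHES is a JSON array of {"task": ..., "artifact": ...}.
        for entry in json.loads(os.environ.get("MOZ_FETCHES", "[]")):
            yield "%s/task/%s/artifacts/%s" % (
                QUEUE, entry["task"], entry["artifact"])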

To prove all of this works, the code for fetching dependencies when
building GCC toolchains has been updated to use `fetch-content`. The
now-unused legacy code has been deleted.

This commit improves the reliability and efficiency of GCC toolchain
tasks. Dependencies now all come from task artifacts and should be
available in the common case. In addition, `fetch-content` downloads
and extracts files concurrently, making it faster than the serial
downloads we performed previously.

There are some things I don't like about this commit.

First, a new Docker image and Python script for downloading URLs feels
a bit heavyweight. The Docker image is definitely overkill as things
stand. I can eventually justify it because I want to implement support
for fetching and repackaging VCS repositories and for caching Debian
packages. These will require more packages than what I'm comfortable
installing on the base Debian image, therefore justifying a dedicated
image.

The `fetch-content static-url` sub-command could definitely be
implemented as a shell script. But Python is readily available and
is more pleasant to maintain than shell, so I wrote it in Python.

`fetch-content task-artifacts` is more advanced and writing it in
Python is more justified, IMO. FWIW, the script is Python 3 only,
which conveniently gives us access to `concurrent.futures`, which
facilitates concurrent download.
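To give a sense of how little code that concurrency requires, here is a
stand-alone sketch (not the actual fetch-content implementation) using
only the standard library:

    from concurrent.futures import ThreadPoolExecutor
    import urllib.request

    def download(url, dest):
        # Fetch one URL straight to disk.
        urllib.request.urlretrieve(url, dest)

    def download_all(pairs, max_workers=4):
        """Download (url, dest) pairs concurrently; re-raise the first failure."""
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
            futures = [executor.submit(download, url, dest) for url, dest in pairs]
            for future in futures:
                future.result()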

`fetch-content` also duplicates functionality found elsewhere.
generic-worker's task payload supports a "mounts" feature which
facilitates downloading remote content, including from a task
artifact. However, this feature doesn't exist on docker-worker.
So we have to implement downloading inside the task rather than
at the worker level. I concede that if all workers had generic-worker's
"mounts" feature and supported concurrent download, `fetch-content`
wouldn't need to exist.

`fetch-content` also duplicates functionality of
`mach artifact toolchain`. I probably could have used
`mach artifact toolchain` instead of writing
`fetch-content task-artifacts`. However, I didn't want to introduce
the requirement of a VCS checkout. `mach artifact toolchain` has its
origins in providing a feature to the build system. And "fetching
artifacts from tasks" is a more generic feature than that. I think
it should be implemented as a generic feature and not something that is
"toolchain" specific.

I think the best place for a generic "fetch content" feature is in
the worker, where content can be defined in the task payload. But as
explained above, that feature isn't universally available. The next
best place is probably run-task. run-task already performs generic,
very-early task preparation steps, such as performing a VCS checkout.
I would like to fold `fetch-content` into run-task and make it all
driven by environment variables. But run-task is currently Python 2
and achieving concurrency would involve a bit of programming (or
adding package dependencies). I may very well port run-task to Python
3 and then fold fetch-content into it. Or maybe we leave
`fetch-content` as a standalone script.

MozReview-Commit-ID: AGuTcwNcNJR


===============
Task Attributes
===============
Tasks can be filtered, for example to support "try" pushes which only perform a
subset of the task graph or to link dependent tasks. This filtering is the
difference between a full task graph and a target task graph.
Filtering takes place on the basis of attributes. Each task has a dictionary
of attributes, and filters over those attributes can be expressed in Python. A
task may not have a value for every attribute.
The attributes, and acceptable values, are defined here. In general, attribute
names and values are the short, lower-case form, with underscores.
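For illustration (the values below are made up and do not describe any real
task), a task's attributes might look like this, and a filter is simply a
Python predicate over that dictionary:

.. code-block:: python

    # Hypothetical attributes for a single test task; values are illustrative.
    attributes = {
        "kind": "test",
        "build_platform": "linux64",
        "build_type": "opt",
        "test_platform": "linux64/opt",
        "unittest_suite": "mochitest",
        "test_chunk": "3",  # chunk numbers are strings
        "e10s": True,
    }

    def wants_task(attributes):
        """Example filter: keep opt mochitest tasks."""
        return (
            attributes.get("unittest_suite") == "mochitest"
            and attributes.get("build_type") == "opt"
        )
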
kind
====
A task's ``kind`` attribute gives the name of the kind that generated it, e.g.,
``build`` or ``spidermonkey``.
run_on_projects
===============
The projects where this task should be in the target task set. This is how
requirements like "only run this on inbound" get implemented. These are
either project names or one of the following aliases:
* `integration` -- integration repositories (autoland, inbound, etc)
* `trunk` -- integration repositories plus mozilla-central
* `release` -- release repositories including mozilla-central
* `all` -- everywhere (the default)
For try, this attribute applies only if ``-p all`` is specified. All jobs can
be specified by name regardless of ``run_on_projects``.
If ``run_on_projects`` is set to an empty list, then the task will not run
anywhere, unless its build platform is specified explicitly in try syntax.
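A sketch of how a filter might honor this attribute is below. The alias
expansion shown is illustrative only; the authoritative mapping lives in the
taskgraph code:

.. code-block:: python

    # Illustrative alias expansion; not the authoritative mapping.
    ALIASES = {
        "integration": {"autoland", "mozilla-inbound"},
        "trunk": {"autoland", "mozilla-inbound", "mozilla-central"},
        "release": {"mozilla-central", "mozilla-beta", "mozilla-release"},
    }

    def runs_on(attributes, project):
        """Return True if the task should be targeted for ``project``."""
        run_on = attributes.get("run_on_projects", ["all"])
        if "all" in run_on:
            return True
        projects = set()
        for entry in run_on:
            projects |= ALIASES.get(entry, {entry})
        return project in projects
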
task_duplicates
===============
This is used to indicate that we want multiple copies of the task created.
This feature is used to track down intermittent job failures.
If this value is set to N, the task-creation machinery will create a total of N
copies of the task. Only the first copy will be included in the taskgraph
output artifacts, although all tasks will be contained in the same taskGroup.
While most attributes are considered read-only, target task methods may alter
this attribute of tasks they include in the target set.
build_platform
==============
The build platform defines the platform for which the binary was built. It is
set for both build and test jobs, although test jobs may have a different
``test_platform``.
build_type
==========
The type of build being performed. This is a subdivision of ``build_platform``,
used for different kinds of builds that target the same platform. Values are:
* ``debug``
* ``opt``
test_platform
=============
The test platform defines the platform on which tests are run. It is only
defined for test jobs and may differ from ``build_platform`` when the same binary
is tested on several platforms (for example, on several versions of Windows).
This applies for both talos and unit tests.
Unlike ``build_platform``, the test platform is represented in a slash-separated
format, e.g., ``linux64/opt``.
unittest_suite
==============
This is the unit test suite being run in a unit test task. For example,
``mochitest`` or ``cppunittest``.
unittest_flavor
===============
If a unittest suite has subdivisions, those are represented as flavors. Not
all suites have flavors, in which case this attribute should be set to match
the suite. Examples: ``mochitest-devtools-chrome-chunked`` or ``a11y``.
unittest_try_name
=================
This is the name used to refer to a unit test via try syntax. It
may not match either ``unittest_suite`` or ``unittest_flavor``.
talos_try_name
==============
This is the name used to refer to a talos job via try syntax.
raptor_try_name
===============
This is the name used to refer to a raptor job via try syntax.
job_try_name
============
This is the name used to refer to a "job" via try syntax (``-j``). Note that for
some kinds, ``-j`` also matches against ``build_platform``.
test_chunk
==========
This is the chunk number of a chunked test suite (talos or unittest). Note
that this is a string!
e10s
====
For test suites which distinguish whether they run with or without e10s, this
boolean value indicates whether this particular run uses e10s.
image_name
==========
For the ``docker_image`` kind, this attribute contains the docker image name.
nightly
=======
Signals whether the task is part of a nightly graph. Useful when filtering
out nightly tasks from the full task set at the target stage.
all_locales
===========
For the ``l10n`` and ``nightly-l10n`` kinds, this attribute contains the list
of relevant locales for the platform.
all_locales_with_changesets
===========================
Contains a dict of l10n changesets, keyed by locale (the same locales as in ``all_locales``).
l10n_chunk
==========
For the ``l10n`` and ``nightly-l10n`` kinds, this attribute contains the chunk
number of the job. Note that this is a string!
chunk_locales
=============
For the ``l10n`` and ``nightly-l10n`` kinds, this attribute contains an array of
the individual locales this chunk is responsible for processing.
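Purely as an illustration of how these l10n attributes relate (the locales
and changesets below are made up):

.. code-block:: python

    # Hypothetical attributes for one l10n chunk; values are made up.
    attributes = {
        "all_locales": ["de", "fr", "it", "ja"],
        "all_locales_with_changesets": {
            "de": "abcdef123456",
            "fr": "123456abcdef",
            "it": "fedcba654321",
            "ja": "654321fedcba",
        },
        "l10n_chunk": "2",              # chunk numbers are strings
        "chunk_locales": ["it", "ja"],  # locales this chunk processes
    }
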
locale
======
For jobs that operate on only one locale, we set the attribute ``locale`` to the
specific locale involved. Currently this is only set for l10n versions of the
``beetmover`` and ``balrog`` kinds.
signed
======
Signals that the output of this task contains signed artifacts.
stub-installer
==============
Signals to the build system that this build is expected to have a stub installer
present, and informs follow-on tasks to expect it.
repackage_type
==============
This is the type of repackage. Can be ``repackage`` or
``repackage_signing``.
fetch-artifact
==============
For fetch jobs, this is the path to the artifact for that fetch operation.
toolchain-artifact
==================
For toolchain jobs, this is the path to the artifact for that toolchain.
toolchain-alias
===============
For toolchain jobs, this optionally gives an alias that can be used instead of the
real toolchain job name in the toolchains list for build jobs.
always_target
=============
Tasks with this attribute will be included in the ``target_task_graph`` regardless
of any target task filtering that occurs. When a task is included in this manner
(i.e. it otherwise would have been filtered out), it will be considered for
optimization even if the ``optimize_target_tasks`` parameter is False.
This is meant to be used for tasks which a developer would almost always want to
run. Typically these tasks will be short-running and have a high risk of causing
a backout, for example ``lint`` or ``python-unittest`` tasks.
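In terms of the filtering step, the effect is roughly the following sketch
(not the actual target-task code; it assumes each task object exposes an
``attributes`` dict):

.. code-block:: python

    def target_tasks(full_task_graph, filter_fn):
        """Keep tasks matching the filter, plus anything marked always_target."""
        return {
            label: task
            for label, task in full_task_graph.items()
            if filter_fn(task) or task.attributes.get("always_target", False)
        }
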
shipping_product
================
For release promotion jobs, this is the product we are shipping.
shipping_phase
==============
For release promotion jobs, this is the shipping phase (build, promote, push, ship).
During the build phase, we build and sign shippable builds. During the promote phase,
we generate l10n repacks and push to the candidates directory. During the push phase,
we push to the releases directory. During the ship phase, we update bouncer, push to
Google Play, bump versions, and mark the release as shipped in ship-it.
Using the "snowman model", we depend on previous graphs if they're defined. So if we
ask for a ``push`` (the head of the snowman) and point at the body and base, we only
build the head. If we don't point at the body and base, we build the whole snowman
(build, promote, push).
artifact_prefix
===============
Most taskcluster artifacts are public, so we've hardcoded ``public/build`` in a
lot of places. To support private artifacts, we've moved this to the
``artifact_prefix`` attribute. It will default to ``public/build`` but will be
overridable per-task.
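Consumers of this attribute would resolve it along these lines (sketch only,
assuming a task object with an ``attributes`` dict):

.. code-block:: python

    def get_artifact_prefix(task):
        # Fall back to the historical default when the attribute isn't set.
        return task.attributes.get("artifact_prefix", "public/build")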