This patch enables `run-on-projects` to work appropriately for
nightly builds and tests. Initially, we were setting an empty
`run-on-projects` for nightly `build_platform`s, then explicitly
targeting the platforms in nightly-specific `target_task_method`s.
Instead, this patch lets nightlies `run-on-projects` everywhere, but
governs their use via either the `include_nightly` parameter or the
`--include-nightly` try option. This lets us filter
nightly-related `target_task_method`s against `run-on-projects` without
losing all nightly tasks.
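As a rough sketch of how that filtering can work (the attribute and
parameter names here are illustrative assumptions, not the exact in-tree
API), a nightly-aware target task check might look like:

```python
def keep_task(task, parameters):
    """Hypothetical sketch: keep a task only if nightlies are requested
    (when it is a nightly task) and its run-on-projects attribute matches
    the current project. Attribute names are assumptions."""
    if task.attributes.get('nightly') and not parameters.get('include_nightly'):
        return False
    run_on = task.attributes.get('run_on_projects', ['all'])
    return 'all' in run_on or parameters['project'] in run_on
```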
Then, enable spidermonkey tests by removing optimization from beta and
release. This patch also enables everything and then disables specific
tasks, rather than disabling everything and enabling specific tasks.
Since we're beginning with a `filter_for_project` call, we should be
able to reduce these `if` blocks to zero over time, if desired.
MozReview-Commit-ID: A9tolynaChF
Various modules under taskcluster are doing ad-hoc URL formatting or
requests to taskcluster services. While we could use the taskcluster
client Python module, it's kind of overkill for the simple requests done
here. So instead of vendoring that module, create a smaller one with
a limited set of functions we need.
This changes the behavior of the get_artifact function to return a
file-like object when the file is neither JSON nor YAML, but that
branch was never used (and was actually returning an unassigned
variable, so it was broken anyway).
At the same time, make the function that does HTTP requests more
error-resistant, using urllib3's Retry with a backoff factor.
Also add a function that retrieves the list of artifacts; while
currently unused, it will be used by `mach artifact` shortly.
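A minimal sketch of the retry-enabled request helper and the get_artifact
behavior described above (the helper names and the queue URL shape are
illustrative assumptions, not the in-tree code):

```python
import io

import requests
import yaml
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def _get_session():
    # Retry transient server errors with an exponential backoff.
    session = requests.Session()
    retry = Retry(total=5, backoff_factor=0.1,
                  status_forcelist=[500, 502, 503, 504])
    session.mount('https://', HTTPAdapter(max_retries=retry))
    return session


def get_artifact(task_id, path):
    # Illustrative queue URL; the real module builds this differently.
    url = 'https://queue.taskcluster.net/v1/task/{}/artifacts/{}'.format(
        task_id, path)
    response = _get_session().get(url)
    response.raise_for_status()
    if path.endswith('.json'):
        return response.json()
    if path.endswith(('.yml', '.yaml')):
        return yaml.safe_load(response.text)
    # Anything else comes back as a file-like object.
    return io.BytesIO(response.content)
```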
Previously, we ran a single "target tasks" function to reduce the full
task graph to a subset based on input parameters (try syntax, the
repository being built for, etc.). The concept is useful, but the
implementation was limiting because we could only have a single
"target tasks" function.
This commit introduces the concept of "filters." They conceptually
do the same thing as "target tasks methods," but you can run more than
one of them.
Filters are simply functions that examine an input graph+parameters
and emit nodes that should be retained. Filters, like target tasks
methods, are defined via decorated functions in a module.
TaskGraphGenerator has been converted to use filters. The list of
filters to run can be specified in the parameters dict passed into
TaskGraphGenerator. A default filter list is provided in decision.py.
The intent is to eventually convert target tasks to filters. Until
that happens, we always run the registered target tasks method via
a filter proxy function.
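Conceptually, registration and application look something like the sketch
below; the decorator, registry, and graph helper names are assumptions for
illustration, not necessarily what landed in-tree.

```python
# Registry of filter functions, keyed by name.
filter_task_functions = {}


def filter_task(name):
    """Decorator: register a filter function under the given name."""
    def wrap(func):
        filter_task_functions[name] = func
        return func
    return wrap


@filter_task('target_tasks_method')
def filter_target_tasks(graph, parameters):
    """Proxy filter wrapping the registered target tasks method.
    This sketch simply keeps every label; the real proxy would look up
    the method named by parameters['target_tasks_method'] and call it."""
    return list(graph.tasks.keys())


def apply_filters(graph, parameters):
    """Run each requested filter, keeping only the labels it emits."""
    for name in parameters.get('filters', ['target_tasks_method']):
        labels = filter_task_functions[name](graph, parameters)
        graph = graph.subgraph(set(labels))  # hypothetical graph helper
    return graph
```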
No new tests have been added because we don't yet have any
functionality relying explicitly on filters. Tests will be added in
a subsequent commit once we add a new filter.
While I was here, I also snuck in some logging on the size of the
graphs.
MozReview-Commit-ID: ERn2hIYbMRp
Before, we'd open files and feed bytes to yaml.load(). When a str
is fed to yaml.load(), it attempts to guess the encoding. It defaults
to UTF-8 unless somebody set us up the BOM. This is probably OK.
Except if the file isn't valid UTF-8, the exception will be raised
in the bowels of YAML parsing, and it may not be obvious that the
failure is due to invalid UTF-8 input versus, say, Python str/unicode
coercion issues.
We change all call sites that load YAML from a file to use
codecs.open() to open the file in UTF-8 and perform UTF-8
decoding/validation at file read time. This should make any UTF-8
failures more obvious. Furthermore, it reinforces that our YAML files
are UTF-8 and not some other encoding.
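In sketch form, the new call-site pattern is simply the following (the
helper name is illustrative; safe_load stands in for the in-tree
yaml.load call to keep the sketch self-contained):

```python
import codecs

import yaml


def load_yaml(path):
    # codecs.open decodes as the file is read, so a bad byte surfaces as a
    # UnicodeDecodeError from the UTF-8 codec instead of failing inside
    # pyyaml's encoding detection.
    with codecs.open(path, 'r', encoding='utf-8') as f:
        return yaml.safe_load(f)
```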
I discovered this issue as part of trying to get emoji symbols to
render on Treeherder. Unfortunately, it appears pyyaml detects
many emoji as unprintable characters and refuses to load them. This
makes me sad and makes me want to abandon pyyaml/YAML in favor of
something that supports emoji :P
MozReview-Commit-ID: AOvAruZFfnK
The `taskgraph` package generates TaskCluster task graphs based on collections
of task "kinds". Initially, there is only one kind, the "legacy" kind, which
reads the YAML files from `testing/taskcluster/tasks` to generate the task
graph.
Try syntax is implemented by filtering the tasks in the taskgraph after it has
been created, then extending the result to include any prerequisite tasks.
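As a sketch of that second step (using a plain dict of label ->
dependency labels instead of the real graph types, with made-up labels),
extending a filtered selection to its prerequisites is a
transitive-dependency walk:

```python
def extend_with_prerequisites(dependencies, wanted_labels):
    """Expand a set of selected task labels to include everything they need.
    `dependencies` maps each label to the labels it depends on."""
    keep = set(wanted_labels)
    frontier = list(keep)
    while frontier:
        for dep in dependencies[frontier.pop()]:
            if dep not in keep:
                keep.add(dep)
                frontier.append(dep)
    return keep


# e.g. a try push that selects only a test task also pulls in its build:
deps = {'test-linux64/opt-mochitest-1': ['build-linux64/opt'],
        'build-linux64/opt': []}
print(extend_with_prerequisites(deps, ['test-linux64/opt-mochitest-1']))
# -> {'test-linux64/opt-mochitest-1', 'build-linux64/opt'}
```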
A collection of `mach taskgraph` subcommands is provided for developers to
extend or debug the task-graph generation process.
MozReview-Commit-ID: 1TJCns4XxZ8