Bug 1749473 - Remove warnings from the generated talos.rst file r=perftest-reviewers,sparky DONTBUILD

Differential Revision: https://phabricator.services.mozilla.com/D170330
ogiorgis
2023-02-21 14:47:07 +00:00
parent d14efa1df7
commit 2c0fccca00
4 changed files with 628 additions and 466 deletions


@@ -110,4 +110,5 @@ redirects:
fatal warnings:
- "WARNING: '([^']*)' reference target not found:((?!.rst).)*$"
max_num_warnings: 6600
max_num_warnings: 6380

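The hunk above lowers the allowed warning budget for the generated docs and keeps the "reference target not found" pattern fatal. As a rough illustration of how such a config might be enforced against Sphinx build output (a minimal sketch; the function, the log file name, and the log handling here are assumptions, not the actual verification code):

.. code-block:: python

   # Minimal sketch: apply a "fatal warnings" regex list and a warning-count
   # cap like the ones configured above to Sphinx build output.
   # The log file name and the function are hypothetical.
   import re

   FATAL_WARNING_PATTERNS = [
       # Same pattern as the config above: a missing reference target is
       # fatal unless the warning text mentions a .rst file.
       r"WARNING: '([^']*)' reference target not found:((?!.rst).)*$",
   ]
   MAX_NUM_WARNINGS = 6380


   def check_sphinx_warnings(log_lines):
       """Return (warning_count, fatal_warning_lines) for a build log."""
       fatal = []
       count = 0
       for line in log_lines:
           if "WARNING:" not in line:
               continue
           count += 1
           for pattern in FATAL_WARNING_PATTERNS:
               if re.search(pattern, line):
                   fatal.append(line)
       return count, fatal


   with open("sphinx-build.log") as f:
       count, fatal = check_sphinx_warnings(f.read().splitlines())
   assert not fatal, f"fatal warnings found: {fatal}"
   assert count <= MAX_NUM_WARNINGS, f"too many warnings: {count} > {MAX_NUM_WARNINGS}"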
File diff suppressed because it is too large.


@@ -28,10 +28,9 @@ suites:
mean <https://searchfox.org/mozilla-central/source/testing/talos/talos/output.py#259>`__
provided by the benchmark
- description:
This is the
`JetStream <http://browserbench.org/JetStream/in-depth.html>`__
javascript benchmark taken verbatim and slightly modified to fit into
our pageloader extension and talos harness.
| This is the `JetStream <http://browserbench.org/JetStream/in-depth.html>`__
javascript benchmark taken verbatim and slightly modified to fit into
our pageloader extension and talos harness.
a11yr: >
- contact: :jamie and accessibility team
- source: `a11y.manifest <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/a11y>`__
@@ -44,7 +43,7 @@ suites:
* suite: `geometric mean`_ of the 2 subtest results.
- reporting: test time in ms (lower is better)
- description:
This test ensures basic a11y tables and permutations do not cause performance regressions.
| This test ensures basic a11y tables and permutations do not cause performance regressions.
- **Example Data**
* 0;dhtml.html;1584;1637;1643;1665;1741;1529;1647;1645;1692;1647;1542;1750;1654;1649;1541;1656;1674;1645;1645;1740;1558;1652;1654;1656;1654 |
* 1;tablemutation.html;398;385;389;391;387;387;385;387;388;385;384;31746;386;387;384;387;389;387;387;387;388;391;386;387;388 |
@@ -59,25 +58,25 @@ suites:
* suite: `geometric mean`_ of the subtest results.
- reporting: test time in ms (lower is better)
- description:
This test measures the performance of the Firefox about:preferences
page. This test is a little different than other pageload tests in that
we are loading one page (about:preferences) but also testing the loading
of that same page's subcategories/panels (i.e. about:preferences#home).
| This test measures the performance of the Firefox about:preferences
page. This test is a little different than other pageload tests in that
we are loading one page (about:preferences) but also testing the loading
of that same page's subcategories/panels (i.e. about:preferences#home).
When simply changing the page's panel/category, that doesn't cause a new
onload event as expected; therefore we had to introduce loading the
'about:blank' page in between each page category; that forces the entire
page to reload with the specified category panel activated.
When simply changing the page's panel/category, that doesn't cause a new
onload event as expected; therefore we had to introduce loading the
'about:blank' page in between each page category; that forces the entire
page to reload with the specified category panel activated.
For that reason, when new panels/categories are added to the
'about:preferences' page, it can be expected that a performance
regression may be introduced, even if a subtest hasn't been added for
that new page category yet.
For that reason, when new panels/categories are added to the
'about:preferences' page, it can be expected that a performance
regression may be introduced, even if a subtest hasn't been added for
that new page category yet.
This test should only ever have 1 pagecycle consisting of the main
about-preferences page and each category, separated by an about:blank in
between. Repeats are then achieved by using 25 cycles (instead of
pagecycles).
This test should only ever have 1 pagecycle consisting of the main
about-preferences page and each category, separated by an about:blank in
between. Repeats are then achieved by using 25 cycles (instead of
pagecycles).
- **Example Data**
* 0;preferences;346;141;143;150;136;143;153;140;154;156;143;154;146;147;151;166;140;146;140;144;144;156;154;150;140
* 2;preferences#search;164;142;133;141;141;141;142;140;131;146;131;140;131;131;139;142;140;144;146;143;143;142;142;137;143
@@ -149,9 +148,9 @@ suites:
* subtest: `ignore first`_ data point, then take the `median`_ of the remaining 24 data points; `source: test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l356>`__
* suite: No value for the suite, only individual subtests are relevant.
- description:
To run this locally, you'll need to pull down the `tp5 page
set <#page-sets>`__ and run it in a local web server. See the `tp5
section <#tp5>`__.
| To run this locally, you'll need to pull down the `tp5 page
set <#page-sets>`__ and run it in a local web server. See the `tp5
section <#tp5>`__.
- **Example Data**
* 0;simple.webconsole.open.DAMP;1198.86;354.38;314.44;337.32;344.73;339.05;345.55;358.37;314.89;353.73;324.02;339.45;304.63;335.50;316.69;341.05;353.45;353.73;342.28;344.63;357.62;375.18;326.08;363.10;357.30
* 1;simple.webconsole.reload.DAMP;44.60;41.21;25.62;29.85;38.10;42.29;38.25;40.14;26.95;39.24;40.32;34.67;34.64;44.88;32.51;42.09;28.04;43.05;40.62;36.56;42.44;44.11;38.69;29.10;42.00
@@ -197,28 +196,28 @@ suites:
- summarization:
* subtest: `ignore first`_ data point, then take the `median`_ of the remaining 4; `source: test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l986>`__
- description:
This measures the amount of time it takes to render a page after
changing its display list. The page has a large number of display list
items (10,000), and mutates one every frame. The goal of the test is to
make displaylist construction a bottleneck, rather than painting or
other factors, and thus improvements or regressions to displaylist
construction will be visible. The test runs in ASAP mode to maximize
framerate, and the result is how quickly the test was able to mutate and
re-paint 600 items, one during each frame.
| This measures the amount of time it takes to render a page after
changing its display list. The page has a large number of display list
items (10,000), and mutates one every frame. The goal of the test is to
make displaylist construction a bottleneck, rather than painting or
other factors, and thus improvements or regressions to displaylist
construction will be visible. The test runs in ASAP mode to maximize
framerate, and the result is how quickly the test was able to mutate and
re-paint 600 items, one during each frame.
dromaeo: >
- description:
Dromaeo suite of tests for JavaScript performance testing. See the
`Dromaeo wiki <https://wiki.mozilla.org/Dromaeo>`__ for more
information.
| Dromaeo suite of tests for JavaScript performance testing. See the
`Dromaeo wiki <https://wiki.mozilla.org/Dromaeo>`__ for more
information.
This suite is divided into several sub-suites.
This suite is divided into several sub-suites.
Each sub-suite is divided into tests, and each test is divided into
sub-tests. Each sub-test takes some (in theory) fixed piece of work and
measures how many times that piece of work can be performed in one
second. The score for a test is then the geometric mean of the
runs/second numbers for its sub-tests. The score for a sub-suite is the
geometric mean of the scores for its tests.
Each sub-suite is divided into tests, and each test is divided into
sub-tests. Each sub-test takes some (in theory) fixed piece of work and
measures how many times that piece of work can be performed in one
second. The score for a test is then the geometric mean of the
runs/second numbers for its sub-tests. The score for a sub-suite is the
geometric mean of the scores for its tests.
dromaeo_css: >
- contact: :emilio, and css/layout team
- source: `css.manifest <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/dromaeo>`__
@@ -238,10 +237,10 @@ suites:
filter.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l527>`__
* suite: `geometric mean`_ of the 6 subtest results.
- description:
Each page in the manifest is part of the dromaeo css benchmark. Each
page measures the performance of searching the DOM for nodes matching
various CSS selectors, using different libraries for the selector
implementation (jQuery, Dojo, Mootools, ExtJS, Prototype, and Yahoo UI).
| Each page in the manifest is part of the dromaeo css benchmark. Each
page measures the performance of searching the DOM for nodes matching
various CSS selectors, using different libraries for the selector
implementation (jQuery, Dojo, Mootools, ExtJS, Prototype, and Yahoo UI).
- **Example Data**
* 0;dojo.html;2209.83;2269.68;2275.47;2278.83;2279.81;4224.43;4344.96;4346.74;4428.69;4459.82;4392.80;4396.38;4412.54;4414.34;4415.62;3909.94;4027.96;4069.08;4099.63;4099.94;4017.70;4018.96;4054.25;4068.74;4081.31;3825.10;3984.20;4053.23;4074.59;4106.63;3893.88;3971.80;4031.15;4046.68;4048.31;3978.24;4010.16;4046.66;4051.68;4056.37;4189.50;4287.98;4390.98;4449.89;4450.20;4536.23;4557.82;4588.40;4662.58;4664.42;4675.51;4693.13;4743.72;4758.12;4764.67;4138.00;4251.60;4346.22;4410.12;4417.23;4677.53;4702.48;4714.62;4802.59;4805.33;4445.07;4539.91;4598.93;4605.45;4618.79;4434.40;4543.09;4618.56;4683.98;4689.51;4485.26;4496.75;4511.23;4600.86;4602.08;4567.52;4608.33;4615.56;4619.31;4622.79;3469.44;3544.11;3605.80;3647.74;3658.56;3101.88;3126.41;3147.73;3159.92;3170.73;3672.28;3686.40;3730.74;3748.89;3753.59;4411.71;4521.50;4633.98;4702.72;4708.76;3626.62;3646.71;3713.07;3713.13;3718.91;3846.17;3846.25;3913.61;3914.63;3916.22;3982.88;4112.98;4132.26;4194.92;4201.54;4472.64;4575.22;4644.74;4645.42;4665.51;4120.13;4142.88;4171.29;4208.43;4211.03;4405.36;4517.89;4537.50;4637.77;4644.28;4548.25;4581.20;4614.54;4658.42;4671.09;4452.78;4460.09;4494.06;4521.30;4522.37;4252.81;4350.72;4364.93;4441.40;4492.78;4251.34;4346.70;4355.00;4358.89;4365.72;4494.64;4511.03;4582.11;4591.79;4592.36;4207.54;4308.94;4309.14;4406.71;4474.46
* 1;ext.html;479.65;486.21;489.61;492.94;495.81;24454.14;33580.33;34089.15;34182.83;34186.15;34690.83;35050.30;35051.30;35071.65;35099.82;5758.22;5872.32;6389.62;6525.38;6555.57;8303.96;8532.96;8540.91;8544.00;8571.49;8360.79;8408.79;8432.96;8447.28;8447.83;5817.71;5932.67;8371.83;8389.20;8643.44;7983.80;8073.27;8073.84;8076.48;8078.15;24596.00;32518.84;32787.34;32830.51;32861.00;2220.87;2853.84;3333.53;3345.17;3445.47;24785.75;24971.75;25044.25;25707.61;25799.00;2464.69;2481.89;2527.57;2534.65;2534.92;217793.00;219347.90;219495.00;220059.00;297168.00;40556.19;53062.47;54275.73;54276.00;54440.37;50636.75;50833.49;50983.49;51028.49;51032.74;10746.36;10972.45;11450.37;11692.18;11797.76;8402.58;8415.79;8418.66;8426.75;8428.16;16768.75;16896.00;16925.24;16945.58;17018.15;7047.68;7263.13;7313.16;7337.38;7383.22;713.88;723.72;751.47;861.35;931.00;25454.36;25644.90;25801.87;25992.61;25995.00;819.89;851.23;852.00;886.59;909.89;14325.79;15064.92;15240.39;15431.23;15510.61;452382.00;458194.00;458707.00;459226.00;459601.00;45699.54;46244.54;46270.54;46271.54;46319.00;1073.94;1080.66;1083.35;1085.84;1087.74;26622.33;27807.58;27856.72;28040.58;28217.86;37229.81;37683.81;37710.81;37746.62;37749.81;220386.00;222903.00;240808.00;247394.00;247578.00;25567.00;25568.49;25610.74;25650.74;25710.23;26466.21;28718.71;36175.64;36529.27;36556.00;26676.00;30757.69;31965.84;34521.83;34622.65;32791.18;32884.00;33194.83;33720.16;34192.66;32150.36;32520.02;32851.18;32947.18;33128.01;29472.85;30214.09;30708.54;30999.23;32879.51;23822.88;23978.28;24358.88;24470.88;24515.51
@@ -256,43 +255,44 @@ suites:
- data: see Dromaeo DOM
- reporting: speed in test runs per second (higher is better)
- description:
Each page in the manifest is part of the dromaeo dom benchmark. These
are the specific areas that Dromaeo DOM covers:
| Each page in the manifest is part of the dromaeo dom benchmark. These
are the specific areas that Dromaeo DOM covers:
* **DOM Attributes**:
Measures performance of getting and setting a DOM attribute, both via
``getAttribute`` and via a reflecting DOM property. Also throws in some
expando getting/setting for good measure.
* **DOM Attributes**:
Measures performance of getting and setting a DOM attribute, both via
``getAttribute`` and via a reflecting DOM property. Also throws in some
expando getting/setting for good measure.
* **DOM Modification**:
Measures performance of various things that modify the DOM tree:
creating element and text nodes and inserting them into the DOM.
* **DOM Modification**:
Measures performance of various things that modify the DOM tree:
creating element and text nodes and inserting them into the DOM.
* **DOM Query**:
Measures performance of various methods of looking for nodes in the DOM:
``getElementById``, ``getElementsByTagName``, and so forth.
* **DOM Query**:
Measures performance of various methods of looking for nodes in the DOM:
``getElementById``, ``getElementsByTagName``, and so forth.
* **DOM Traversal**:
Measures performance of various accessors (``childNodes``,
``firstChild``, etc) that would be used when doing a walk over the DOM
tree.
* **DOM Traversal**:
Measures performance of various accessors (``childNodes``,
``firstChild``, etc) that would be used when doing a walk over the DOM
tree.
Please see `dromaeo_css <#dromaeo_css>`_ for examples of data.
Please see `dromaeo_css <#dromaeo_css>`_ for examples of data.
glterrain: >
- contact: :jgilbert and gfx
- source: `glterrain <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/webgl/benchmarks/terrain>`__
- type: `Page load`_
- data: we load the perftest.html page (which generates 4 metrics to track) 25 times, resulting in 4 sets of 25 data points
- summarization: Measures average frames interval while animating a simple WebGL scene
* subtest: `ignore first`_ data point, then take the `median`_ of the remaining 24; `source: test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l381>`__
* subtest: `ignore first`_ data point, then take the `median`_ of the remaining 24; `source:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l381>`__
* suite: `geometric mean`_ of the 4 subtest results.
- description:
This test animates a simple WebGL scene (static textured landscape, one
moving light source, rotating viewport) and measures the frame
throughput (expressed as an average interval) over 100 frames. It runs in
ASAP mode (vsync off) and measures the same scene 4 times (for all
combinations of antialiasing and alpha), reporting the results as 4
values, one for each combination. Lower results are better.
| This test animates a simple WebGL scene (static textured landscape, one
moving light source, rotating viewport) and measures the frame
throughput (expressed as an average interval) over 100 frames. It runs in
ASAP mode (vsync off) and measures the same scene 4 times (for all
combinations of antialiasing and alpha), reporting the results as 4
values, one for each combination. Lower results are better.
- **Example Data**
* 0;0.WebGL-terrain-alpha-no-AA-no;19.8189;20.57185;20.5069;21.09645;20.40045;20.89025;20.34285;20.8525;20.45845;20.6499;19.94505;20.05285;20.316049;19.46745;19.46135;20.63865;20.4789;19.97015;19.9546;20.40365;20.74385;20.828649;20.78295;20.51685;20.97069
* 1;1.WebGL-terrain-alpha-no-AA-yes;23.0464;23.5234;23.34595;23.40609;22.54349;22.0554;22.7933;23.00685;23.023649;22.51255;23.25975;23.65819;22.572249;22.9195;22.44325;22.95015;23.3567;23.02089;22.1459;23.04545;23.09235;23.40855;23.3296;23.18849;23.273249
@@ -311,11 +311,11 @@ suites:
- **Example Data**
* 0;Mean tick time across 100 ticks: ;54.6916;49.0534;51.21645;51.239650000000005;52.44295
- description:
This test plays back a video file and asks WebGL to draw the video frames as
WebGL textures for 100 ticks. It collects the mean tick time across 100
ticks to measure how much time is spent uploading a video frame as a
WebGL texture (gl.texImage2D). We run it 5 times and ignore
the first run. Lower results are better.
| This test plays back a video file and asks WebGL to draw the video frames as
WebGL textures for 100 ticks. It collects the mean tick time across 100
ticks to measure how much time is spent uploading a video frame as a
WebGL texture (gl.texImage2D). We run it 5 times and ignore
the first run. Lower results are better.
kraken: >
- contact: :sdetar, jandem, and SpiderMonkey Team
- source: `kraken.manifest <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/kraken>`__
@@ -329,9 +329,9 @@ suites:
to report a single number.
* suite: `geometric mean`_ of the 14 subtest results.
- description:
This is the `Kraken <https://wiki.mozilla.org/Kraken>`__ javascript
benchmark taken verbatim and slightly modified to fit into our
pageloader extension and talos harness.
| This is the `Kraken <https://wiki.mozilla.org/Kraken>`__ javascript
benchmark taken verbatim and slightly modified to fit into our
pageloader extension and talos harness.
- **Example Data**
* 0;ai-astar;100;95;98;102;101;99;97;98;98;102
* 1;audio-beat-detection;147;147;191;173;145;139;186;143;183;140
@@ -385,33 +385,33 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l734>`__
* suite: identical to subtest
- description:
**Important note:** This test now requires an 'opt' build. If the
perf-reftest is run on a non-opt build, it will time out (more
specifically on innertext-1.html, and possibly others in the future).
| **Important note:** This test now requires an 'opt' build. If the
perf-reftest is run on a non-opt build, it will time out (more
specifically on innertext-1.html, and possibly others in the future).
Style system performance test suite. The perf-reftest suite is a unique
talos suite where each subtest loads two different test pages: a 'base'
page (i.e. bloom_basic) and a 'reference' page (i.e. bloom_basic_ref),
and then compares each of the page load times against each other to
determine the variance.
Style system performance test suite. The perf-reftest suite is a unique
talos suite where each subtest loads two different test pages: a 'base'
page (i.e. bloom_basic) and a 'reference' page (i.e. bloom_basic_ref),
and then compares each of the page load times against each other to
determine the variance.
Talos runs each of the two pages as if they are stand-alone tests, and
then calculates and reports the variance; the test output 'replicates'
reported from bloom_basic are actually the comparisons between the
'base' and 'reference' pages for each page load cycle. The suite
contains multiple subtests, each of which contains a base page and a
reference page.
Talos runs each of the two pages as if they are stand-alone tests, and
then calculates and reports the variance; the test output 'replicates'
reported from bloom_basic are actually the comparisons between the
'base' and 'reference' pages for each page load cycle. The suite
contains multiple subtests, each of which contains a base page and a
reference page.
If you wish to see the individual 'base' and 'reference' page results
instead of just the reported difference, the 'base_replicates' and
'ref_replicates' can be found in the PERFHERDER_DATA log file output,
and in the 'local.json' talos output file when running talos locally. In
production, both of the page replicates are also archived in the
perfherder-data.json file. The perfherder-data.json file is archived
after each run in production, and can be found on the Treeherder Job
Details tab when the perf-reftest job symbol is selected.
If you wish to see the individual 'base' and 'reference' page results
instead of just the reported difference, the 'base_replicates' and
'ref_replicates' can be found in the PERFHERDER_DATA log file output,
and in the 'local.json' talos output file when running talos locally. In
production, both of the page replicates are also archived in the
perfherder-data.json file. The perfherder-data.json file is archived
after each run in production, and can be found on the Treeherder Job
Details tab when the perf-reftest job symbol is selected.
This test suite was ported over from the `style-perf-tests <https://github.com/heycam/style-perf-tests>`__.
This test suite was ported over from the `style-perf-tests <https://github.com/heycam/style-perf-tests>`__.
- **Example Data**
* "replicates": [1.185, 1.69, 1.22, 0.36, 11.26, 3.835, 3.315, 1.355, 3.185, 2.485, 2.2, 1.01, 0.9, 1.22, 1.9,
0.285, 1.52, 0.31, 2.58, 0.725, 2.31, 2.67, 3.295, 1.57, 0.3], "value": 1.7349999999999999, "unit": "ms",
@@ -432,13 +432,13 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l734>`__
* suite: identical to subtest
- description:
Individual style system performance tests. The perf-reftest-singletons
suite runs the perf-reftest 'base' pages (i.e. bloom_basic) test
individually, and reports the values for that single test page alone,
NOT the comparison of two different pages. There are multiple subtests
in this suite, each just containing the base page on its own.
| Individual style system performance tests. The perf-reftest-singletons
suite runs the perf-reftest 'base' pages (i.e. bloom_basic) test
individually, and reports the values for that single test page alone,
NOT the comparison of two different pages. There are multiple subtests
in this suite, each just containing the base page on its own.
This test suite was ported over from the `style-perf-tests <https://github.com/heycam/style-perf-tests>`__.
This test suite was ported over from the `style-perf-tests <https://github.com/heycam/style-perf-tests>`__.
- **Example Data**
* bloombasic.html;88.34000000000003;88.66499999999999;94.815;92.60500000000002;95.30000000000001;
* 98.80000000000001;91.975;87.73500000000001;86.925;86.965;93.00500000000001;98.93;87.45000000000002;
@@ -453,18 +453,18 @@ suites:
* subtest: `ignore first`_ data point, then take the `median`_ of the remaining 9; `source:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l986>`__
- description:
This page animates some complex gradient patterns in a
requestAnimationFrame callback. However, it also churns the CPU during
each callback, spinning an empty loop for 14ms each frame. The intent is
that, if we consider the rasterization costs to be 0, then the animation
should run close to 60fps. Otherwise it will lag. Since rasterization
costs are not 0, the lower we can get them, the faster the test will
run. The test runs in ASAP mode to maximize framerate.
| This page animates some complex gradient patterns in a
requestAnimationFrame callback. However, it also churns the CPU during
each callback, spinning an empty loop for 14ms each frame. The intent is
that, if we consider the rasterization costs to be 0, then the animation
should run close to 60fps. Otherwise it will lag. Since rasterization
costs are not 0, the lower we can get them, the faster the test will
run. The test runs in ASAP mode to maximize framerate.
The test runs for 10 seconds, and the resulting score is how many frames
we were able to render during that time. Higher is better. Improvements
(or regressions) to general painting performance or gradient rendering
will affect this benchmark.
The test runs for 10 seconds, and the resulting score is how many frames
we were able to render during that time. Higher is better. Improvements
(or regressions) to general painting performance or gradient rendering
will affect this benchmark.
rasterflood_svg: >
- contact: :jrmuizel, :jimm, and gfx
- source: `rasterflood_svg.html <https://searchfox.org/mozilla-central/source/testing/talos/talos/tests/gfx/benchmarks/rasterflood_svg.html>`__
@@ -474,23 +474,23 @@ suites:
* subtest: `ignore first`_ data point, then take the `median`_ of the remaining 9; `source:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l986>`__
- description:
This page animates some complex SVG patterns in a requestAnimationFrame
callback. However, it also churns the CPU during each callback, spinning
an empty loop for 14ms each frame. The intent is that, if we consider
the rasterization costs to be 0, then the animation should run close to
60fps. Otherwise it will lag. Since rasterization costs are not 0, the
lower we can get them, the faster the test will run. The test runs in
ASAP mode to maximize framerate. The result is how quickly the browser
is able to render 600 frames of the animation.
| This page animates some complex SVG patterns in a requestAnimationFrame
callback. However, it also churns the CPU during each callback, spinning
an empty loop for 14ms each frame. The intent is that, if we consider
the rasterization costs to be 0, then the animation should run close to
60fps. Otherwise it will lag. Since rasterization costs are not 0, the
lower we can get them, the faster the test will run. The test runs in
ASAP mode to maximize framerate. The result is how quickly the browser
is able to render 600 frames of the animation.
Improvements (or regressions) to general painting performance or SVG are
likely to affect this benchmark.
Improvements (or regressions) to general painting performance or SVG are
likely to affect this benchmark.
sessionrestore: >
- contact: :dale, :dao, :farre, session restore module owners/peers, and DOM team
- source: `talos/sessionrestore <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/startup_test/sessionrestore>`__
- bug: `bug 936630 <https://bugzilla.mozilla.org/show_bug.cgi?id=936630>`__,
`bug 1331937 <https://bugzilla.mozilla.org/show_bug.cgi?id=1331937>`__,
`bug 1531520 <https://bugzilla.mozilla.org/show_bug.cgi?id=1531520>`__
`bug 1331937 <https://bugzilla.mozilla.org/show_bug.cgi?id=1331937>`__,
`bug 1531520 <https://bugzilla.mozilla.org/show_bug.cgi?id=1531520>`__
- type: Startup_
- measuring: time spent reading and restoring the session.
- reporting: interval in ms (lower is better).
@@ -500,19 +500,19 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l305>`__
* suite: identical to subtest
- description:
Three tests measure the time spent reading and restoring the session
from a valid sessionstore.js. Time is counted from the *process start*
until the *sessionRestored* event.
| Three tests measure the time spent reading and restoring the session
from a valid sessionstore.js. Time is counted from the *process start*
until the *sessionRestored* event.
In *sessionrestore*, this is tested with a configuration that requires
the session to be restored. In *sessionrestore_no_auto_restore*, this is
tested with a configuration that requires the session to not be
restored. Both of the above tests use a sessionstore.js file that
contains one window and roughly 89 tabs. In
*sessionrestore_many_windows*, this is tested with a sessionstore.js
that contains 3 windows and 130 tabs. The first window contains 50 tabs,
80 remaining tabs are divided equally between the second and the third
window.
In *sessionrestore*, this is tested with a configuration that requires
the session to be restored. In *sessionrestore_no_auto_restore*, this is
tested with a configuration that requires the session to not be
restored. Both of the above tests use a sessionstore.js file that
contains one window and roughly 89 tabs. In
*sessionrestore_many_windows*, this is tested with a sessionstore.js
that contains 3 windows and 130 tabs. The first window contains 50 tabs,
80 remaning tabs are divided equally between the second and the third
window.
- **Example Data**
* [2362.0, 2147.0, 2171.0, 2134.0, 2116.0, 2145.0, 2141.0, 2141.0, 2136.0, 2080.0]
sessionrestore_many_windows: >
@@ -531,7 +531,8 @@ suites:
startup_about_home_paint_cached: >
- contact: :mconley, Firefox Desktop Front-end team, :gijs, :fqueze, and :dthayer
- See `startup_about_home_paint <#startup_about_home_paint>`_.
- description: Tests loading about:home on startup with the about:home startup cache enabled.
- description:
| Tests loading about:home on startup with the about:home startup cache enabled.
startup_about_home_paint_realworld_webextensions: >
- contact: :mconley, Firefox Desktop Front-end team, :gijs, :fqueze, and :dthayer
- source: `addon <https://hg.mozilla.org/mozilla-central/file/tip/testing/talos/talos/startup_test/startup_about_home_paint/addon/>`__
@@ -561,9 +562,9 @@ suites:
* The time it takes to animate the tabs. That's the responsibility
of the TART test. tabpaint is strictly concerned with the painting of the web content.
- data: we load the tabpaint trigger page 20 times, each run produces
two values (the time it takes to paint content when opened from the
parent, and the time it takes to paint content when opened from
content), resulting in 2 sets of 20 data points.
two values (the time it takes to paint content when opened from the
parent, and the time it takes to paint content when opened from
content), resulting in 2 sets of 20 data points.
- **Example Data**
* 0;tabpaint-from-parent;105;76;66;64;64;69;65;63;70;68;64;60;65;63;54;61;64;67;61;64
* 1;tabpaint-from-content;129;68;72;72;70;78;86;85;82;79;120;92;76;80;74;82;76;89;77;85
@@ -693,18 +694,19 @@ suites:
* 28;newtab-open-preload-yes.all.TART;5.10;4.60;4.63;8.94;5.01;4.69;4.63;4.67;4.93;5.43;4.78;5.12;4.77;4.65;4.50;4.78;4.75;4.63;4.76;4.45;4.86;4.88;4.69;4.86;4.92
* 29;newtab-open-preload-yes.error.TART;35.90;37.24;38.57;40.60;36.04;38.12;38.78;36.73;36.91;36.69;38.12;36.69;37.79;35.80;36.11;38.01;36.59;38.85;37.14;37.30;38.02;38.95;37.64;37.86;36.43
tart_flex: >
- description: This test was created as a part of a goal to switch away from xul flexbox to css flexbox
- description:
| This test was created as a part of a goal to switch away from xul flexbox to css flexbox
- Contact: No longer being maintained by any team/individual
tp5n: >
- contact: fx-perf@mozilla.com
- description:
The tp5 is an updated web page test set to 100 pages from April 8th, 2011. Effort was made for the pages to no longer be splash screens/login pages/home pages but to be pages that better reflect the actual content of the site in question.
| The tp5 is an updated web page test set to 100 pages from April 8th, 2011. Effort was made for the pages to no longer be splash screens/login pages/home pages but to be pages that better reflect the actual content of the site in question.
tp5: >
- description:
Note that the tp5 test no longer exists (only talos-tp5o) though many
tests still make use of this pageset. Here, we provide an overview of
the tp5 pageset and some information about how data using the tp5
pageset might be used in various suites.
| Note that the tp5 test no longer exists (only talos-tp5o) though many
tests still make use of this pageset. Here, we provide an overview of
the tp5 pageset and some information about how data using the tp5
pageset might be used in various suites.
tp5o: >
- contact: :davehunt, and perftest team
- source: `tp5n.zip <#page-sets>`__
@@ -716,30 +718,28 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l449>`__
* suite: `geometric mean`_ of the 51 subtest results.
- description:
Tests the time it takes Firefox to load the `tp5 web page test
set <#page-sets>`__. The web set was culled from the Alexa top 500 April
8th, 2011 and consists of 100 pages in tp5n and 51 in tp5o. Some suites
use a subset of these (i.e. 48 of the 51 tests) to reduce noise; check with
the owner of the test suite that uses the pageset to see whether this
difference applies there.
| Tests the time it takes Firefox to load the `tp5 web page test
set <#page-sets>`__. The web set was culled from the Alexa top 500 April
8th, 2011 and consists of 100 pages in tp5n and 51 in tp5o. Some suites
use a subset of these (i.e. 48 of the 51 tests) to reduce noise; check with
the owner of the test suite that uses the pageset to see whether this
difference applies there.
Here are the broad steps we use to create the test set:
Here are the broad steps we use to create the test set:
#. Take the Alexa top 500 sites list
#. Remove all sites with questionable or explicit content
#. Remove duplicate sites (e.g. many Google search front pages)
#. Manually select to keep interesting pages (such as pages in different
locales)
#. Select a more representative page from any site presenting a simple
search/login/etc. page
#. Deal with Windows 255 char limit for cached pages
#. Limit test set to top 100 pages
#. Take the Alexa top 500 sites list
#. Remove all sites with questionable or explicit content
#. Remove duplicate sites (e.g. many Google search front pages)
#. Manually select to keep interesting pages (such as pages in different locales)
#. Select a more representative page from any site presenting a simple search/login/etc. page
#. Deal with Windows 255 char limit for cached pages
#. Limit test set to top 100 pages
Note that the above steps did not eliminate all outside network access
so we had to take further action to scrub all the pages so that there
are 0 outside network accesses (this is done so that the tp test is as
deterministic a measurement of our rendering/layout/paint process as
possible).
Note that the above steps did not eliminate all outside network access
so we had to take further action to scrub all the pages so that there
are 0 outside network accesses (this is done so that the tp test is as
deterministic a measurement of our rendering/layout/paint process as
possible).
- **Example Data**
* 0;163.com/www.163.com/index.html;1035;512;542;519;505;514;551;513;554;793;487;528;528;498;503;530;527;490;521;535;521;496;498;564;520
* 1;56.com/www.56.com/index.html;1081;583;580;577;597;580;623;558;572;592;598;580;564;583;596;600;579;580;566;573;566;581;571;600;586
@@ -803,16 +803,16 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l470>`__
* suite: `geometric mean`_ of the 51 subtest results.
- description:
This test is identical to tscrollx, but it scrolls the 50 pages of the
tp5o set (rather than 6 synthetic pages which tscrollx scrolls). There
are two variants for each test page. The "regular" variant waits 500ms
after the page load event fires, then iterates 100 scroll steps of 10px
each (or until the bottom of the page is reached - whichever comes
first), then reports the average frame interval. The "CSSOM" variant is
similar, but uses APZ's smooth scrolling mechanism to do compositor
scrolling instead of main-thread scrolling. So it just requests the
final scroll destination and the compositor handles the scrolling and
reports frame intervals.
| This test is identical to tscrollx, but it scrolls the 50 pages of the
tp5o set (rather than 6 synthetic pages which tscrollx scrolls). There
are two variants for each test page. The "regular" variant waits 500ms
after the page load event fires, then iterates 100 scroll steps of 10px
each (or until the bottom of the page is reached - whichever comes
first), then reports the average frame interval. The "CSSOM" variant is
similar, but uses APZ's smooth scrolling mechanism to do compositor
scrolling instead of main-thread scrolling. So it just requests the
final scroll destination and the compositor handles the scrolling and
reports frame intervals.
- **Example Data**
* 0;163.com/www.163.com/index.html;9.73;8.61;7.37;8.17;7.58;7.29;6.88;7.45;6.91;6.61;8.47;7.12
* 1;56.com/www.56.com/index.html;10.85;10.24;10.75;10.30;10.23;10.10;10.31;10.06;11.10;10.06;9.56;10.30
@@ -883,16 +883,16 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l205>`__
* suite: same as subtest result
- description:
A purer form of paint measurement than tpaint. This test opens a single
window positioned at 10,10 and sized to 300,300, then resizes the window
outward \|max\| times measuring the amount of time it takes to repaint
each resize. Dumps the resulting dataset and average to stdout or
logfile.
| A purer form of paint measurement than tpaint. This test opens a single
window positioned at 10,10 and sized to 300,300, then resizes the window
outward \|max\| times measuring the amount of time it takes to repaint
each resize. Dumps the resulting dataset and average to stdout or
logfile.
In `bug
1102479 <https://bugzilla.mozilla.org/show_bug.cgi?id=1102479>`__
tresize was rewritten to work in e10s mode which involved a full rewrite
of the test.
In `bug
1102479 <https://bugzilla.mozilla.org/show_bug.cgi?id=1102479>`__
tresize was rewritten to work in e10s mode which involved a full rewrite
of the test.
- **Example Data**
* [23.2565333333333, 23.763383333333362, 22.58369999999999, 22.802766666666653, 22.304050000000025, 23.010383333333326, 22.865466666666677, 24.233716666666705, 24.110983333333365, 22.21390000000004, 23.910333333333316, 23.409816666666647, 19.873049999999992, 21.103966666666686, 20.389749999999978, 20.777349999999984, 20.326283333333365, 22.341616666666667, 20.29813333333336, 20.769600000000104]
- **Possible regression causes**
@@ -905,15 +905,15 @@ suites:
- Perfomatic: "Ts, Paint"
- type: Startup_
- data: 20 times we start the browser and time how long it takes to
paint the startup test page, resulting in 1 set of 20 data points.
paint the startup test page, resulting in 1 set of 20 data points.
- summarization:
* subtest: identical to suite
* suite: `ignore first`_ data point, then take the `median`_ of the remaining 19 data points; `source:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l135>`__
- description:
Starts the browser to display tspaint_test.html with the start time in
the url, waits for `MozAfterPaint and onLoad <#paint>`__ to fire, then
records the end time and calculates the time to startup.
| Starts the browser to display tspaint_test.html with the start time in
the url, waits for `MozAfterPaint and onLoad <#paint>`__ to fire, then
records the end time and calculates the time to startup.
- **Example Data**
* [1666.0, 1195.0, 1139.0, 1198.0, 1248.0, 1224.0, 1213.0, 1194.0, 1229.0, 1196.0, 1191.0, 1230.0, 1247.0, 1169.0, 1217.0, 1184.0, 1196.0, 1192.0, 1224.0, 1192.0]
- **Possible regression causes**
@@ -921,7 +921,8 @@ suites:
browser window (e.g. browser.xul) and its frame gets created. Fix
this by ensuring it's display:none by default.
ts_paint_flex: >
- description: This test was created as a part of a goal to switch away from xul flexbox to css flexbox
- description:
| This test was created as a part of a goal to switch away from xul flexbox to css flexbox
- Contact: No longer being maintained by any team/individual
ts_paint_heavy: >
- `ts_paint <#ts_paint>`_ test run against a heavy user profile.
@@ -940,15 +941,15 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l623>`__
* suite: `geometric mean`_ of the 6 subtest results.
- description:
This test scrolls several pages where each represents a different known
"hard" case to scroll (\* needinfo), and measures the average frames
interval (1/FPS) on each. The ASAP test (tscrollx) iterates in unlimited
frame-rate mode thus reflecting the maximum scroll throughput per page.
To turn on ASAP mode, we set these preferences:
| This test scrolls several pages where each represents a different known
"hard" case to scroll (\* needinfo), and measures the average frames
interval (1/FPS) on each. The ASAP test (tscrollx) iterates in unlimited
frame-rate mode thus reflecting the maximum scroll throughput per page.
To turn on ASAP mode, we set these preferences:
``preferences = {'layout.frame_rate': 0, 'docshell.event_starvation_delay_hint': 1}``
``preferences = {'layout.frame_rate': 0, 'docshell.event_starvation_delay_hint': 1}``
See also `tp5o_scroll <#tp5o_scroll>`_ which has relevant information for this test.
See also `tp5o_scroll <#tp5o_scroll>`_ which has relevant information for this test.
- **Example Data**
* 0;tiled.html;5.41;5.57;5.34;5.64;5.53;5.48;5.44;5.49;5.50;5.50;5.49;5.66;5.50;5.37;5.57;5.54;5.46;5.31;5.41;5.57;5.50;5.52;5.71;5.31;5.44
* fixed.html;10.404609053497941;10.47;10.66;10.45;10.73;10.79;10.64;10.64;10.82;10.43;10.92;10.47;10.47;10.64;10.74;10.67;10.40;10.83;10.77;10.54;10.38;10.70;10.44;10.38;10.56
@@ -961,8 +962,7 @@ suites:
- source: `svg_static <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/tests/svg_static/>`__
- type: `Page load`_
- data: we load the 5 svg pages 25 times, resulting in 5 sets of 25 data points
- summarization: An svg-only number that measures SVG rendering
performance of some complex (but static) SVG content.
- summarization: An svg-only number that measures SVG rendering performance of some complex (but static) SVG content.
* subtest: `ignore first`_ **5** data points, then take the `median`_ of the remaining 20; `source:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l623>`__
* suite: `geometric mean`_ of the 5 subtest results.
@@ -986,18 +986,18 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l623>`__
* suite: `geometric mean`_ of the 2 subtest results.
- description:
Renders many semi-transparent, partially overlapping SVG rectangles, and
measures time to completion of this rendering.
| Renders many semi-transparent, partially overlapping SVG rectangles, and
measures time to completion of this rendering.
Note that this test also tends to reflect changes in network efficiency
and navigation bar rendering issues:
Note that this test also tends to reflect changes in network efficiency
and navigation bar rendering issues.
- Most of the page load tests measure from before the location is
changed, until onload + mozafterpaint, therefore any changes in
chrome performance from the location change, or network performance
(the pages load from a local web server) would affect page load
times. SVG opacity is rather quick by itself, so any such
chrome/network/etc performance changes would affect this test more
than other page load tests (relatively, in percentages).
changed, until onload + mozafterpaint, therefore any changes in
chrome performance from the location change, or network performance
(the pages load from a local web server) would affect page load
times. SVG opacity is rather quick by itself, so any such
chrome/network/etc performance changes would affect this test more
than other page load tests (relatively, in percentages).
- **Example Data**
* 0;big-optimizable-group-opacity-2500.svg;170;171;205;249;249;244;192;252;192;431;182;250;189;249;151;168;209;194;247;250;193;250;255;247;247
* 1;small-group-opacity-2500.svg;585;436;387;441;512;438;440;380;443;391;450;386;459;383;445;388;450;436;485;443;383;438;528;444;441
@@ -1011,14 +1011,14 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l623>`__
* suite: `geometric mean`_ of the 7 subtest results.
- description:
An svg-only number that measures SVG rendering performance, with
animations or iterations of rendering. This is an ASAP test --i.e. it
iterates in unlimited frame-rate mode thus reflecting the maximum
rendering throughput of each test. The reported value is the overall
duration the sequence/animation took to complete. To turn on ASAP mode,
we set these preferences:
| An svg-only number that measures SVG rendering performance, with
animations or iterations of rendering. This is an ASAP test --i.e. it
iterates in unlimited frame-rate mode thus reflecting the maximum
rendering throughput of each test. The reported value is the overall
duration the sequence/animation took to complete. To turn on ASAP mode,
we set these preferences:
``preferences = {'layout.frame_rate': 0, 'docshell.event_starvation_delay_hint': 1}``
``preferences = {'layout.frame_rate': 0, 'docshell.event_starvation_delay_hint': 1}``
- **Example Data**
* 0;hixie-001.xml;562;555;508;521;522;520;499;510;492;514;502;504;500;521;510;506;511;505;495;517;520;512;503;504;502
* 1;hixie-002.xml;510;613;536;530;536;522;498;505;500;504;498;529;498;509;493;512;501;506;504;499;496;505;508;511;503
@@ -1044,16 +1044,16 @@ suites:
test.py <https://dxr.mozilla.org/mozilla-central/source/testing/talos/talos/test.py#l190>`__
* suite: identical to subtest
- description:
Tests the amount of time it takes to open a new window from a currently
open browser. This test does not include startup time. Multiple test
windows are opened in succession, results reported are the average
amount of time required to create and display a window in the running
instance of the browser. (Measures ctrl-n performance.)
| Tests the amount of time it takes to open a new window from a currently
open browser. This test does not include startup time. Multiple test
windows are opened in succession, results reported are the average
amount of time required to create and display a window in the running
instance of the browser. (Measures ctrl-n performance.)
- **Example Data**
* [209.219, 222.180, 225.299, 225.970, 228.090, 229.450, 230.625, 236.315, 239.804, 242.795, 244.5, 244.770, 250.524, 251.785, 253.074, 255.349, 264.729, 266.014, 269.399, 326.190]
v8_7: >
- description:
This is the V8 (version 7) javascript benchmark taken verbatim and slightly modified
to fit into our pageloader extension and talos harness. The previous version of this
test is V8 version 5, which was run on selected branches and operating systems.
| This is the V8 (version 7) javascript benchmark taken verbatim and slightly modified
to fit into our pageloader extension and talos harness. The previous version of this
test is V8 version 5, which was run on selected branches and operating systems.
- contact: No longer being maintained by any team/individual

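Throughout the suite descriptions above, replicates are summarized the same way: the first data point is ignored, the median of the remaining replicates gives the subtest value, and the suite value is the `geometric mean`_ of the subtest values. A short illustrative sketch of that arithmetic (not the actual talos filter code), using the a11yr example data shown earlier truncated to the first 10 replicates:

.. code-block:: python

   # Illustrative only: the "ignore first / median / geometric mean"
   # summarization described in the suite entries above.
   import statistics


   def subtest_summary(replicates, ignore_first=1):
       # Drop the first replicate(s), then take the median of the rest.
       return statistics.median(replicates[ignore_first:])


   def suite_summary(subtest_values):
       # Suite value is the geometric mean of the per-subtest summaries.
       return statistics.geometric_mean(subtest_values)


   # First 10 replicates of the a11yr example data shown above.
   dhtml = [1584, 1637, 1643, 1665, 1741, 1529, 1647, 1645, 1692, 1647]
   tablemutation = [398, 385, 389, 391, 387, 387, 385, 387, 388, 385]

   per_subtest = [subtest_summary(dhtml), subtest_summary(tablemutation)]
   print(per_subtest, suite_summary(per_subtest))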

@@ -420,9 +420,10 @@ class TalosGatherer(FrameworkGatherer):
# Example Data for using code block
example_list = [s.strip() for s in description.split("* ")]
result += f" * {example_list[0]}\n"
result += " .. code-block::\n\n"
result += "\n .. code-block::\n\n"
for example in example_list[1:]:
result += f" {example}\n"
result += "\n"
elif " * " in description:
# Sub List
@@ -451,9 +452,9 @@ class TalosGatherer(FrameworkGatherer):
result += r" * " + key + r": " + str(value) + r"\n"
# Command
result += " * Command\n"
result += " * Command\n\n"
result += " .. code-block::\n\n"
result += f" ./mach talos-test -a {title}\n"
result += f" ./mach talos-test -a {title}\n\n"
if self._task_list.get(title, []):
result += " * **Test Task**:\n\n"
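The generator changes above add blank lines around the emitted ``.. code-block::`` directives and the ``./mach talos-test`` command. reST requires a blank line before a directive and around its indented body; without it, docutils flags the generated talos.rst, which is presumably part of the warnings this patch removes. A simplified, hypothetical sketch of the shape of the output (indentation widths here are illustrative, not copied from the real generator):

.. code-block:: python

   # Hypothetical, simplified version of the emission logic; the key detail
   # is the blank line before ".. code-block::" and the trailing blank line
   # after the command, matching the extra "\n" added in the patch above.
   def emit_command_block(title):
       result = " * Command\n\n"
       result += "   .. code-block::\n\n"
       result += f"      ./mach talos-test -a {title}\n\n"
       return result


   print(emit_command_block("a11yr"))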