Bug 1928539 - Add PerfCompare Documentation to firefox source docs. r=sparky,perftest-reviewers DONTBUILD

This patch adds a PerfCompare Documentation page to the performance testing docs.

Differential Revision: https://phabricator.services.mozilla.com/D229411
Carla Severe 2024-11-20 22:42:01 +00:00
parent 17c3904d5b
commit 9ba30504e5
37 changed files with 326 additions and 6 deletions


@@ -13,6 +13,7 @@ Performance Testing
mach-try-perf
mozperftest
perf-sheriffing
perfcompare
performance-infrastructure
perftest-in-a-nutshell
raptor
@@ -30,6 +31,7 @@ For more detailed information about each test suite and project, see their docum
* :doc:`mach-try-perf`
* :doc:`mozperftest`
* :doc:`perf-sheriffing`
* :doc:`perfcompare`
* :doc:`performance-infrastructure`
* :doc:`perftest-in-a-nutshell`
* :doc:`raptor`


@@ -0,0 +1,149 @@
=============
PerfCompare
=============
.. contents::
:depth: 5
:local:

PerfCompare is an improved performance comparison tool that will soon replace Perfherder's Compare View. It allows comparing up to three **new** revisions/patches against a **base** revision of a repository (mozilla-central, autoland, etc.), or against the **base** repository's history over a period of time. Both comparison workflows produce results indicating whether the patches have caused an improvement or a regression. The following documentation describes the app's features and workflows in more detail.
Where can I find PerfCompare?
==============================
Aside from `the website perf.compare <https://perf.compare/>`_, it will be accessible on Perfherder's Compare View search and results pages.
The source code can be viewed in GitHub's `repository <https://github.com/mozilla/perfcompare>`_.
Home / Search Page
====================
Landing on PerfCompare, two search comparison workflows are available: **Compare with a base** or **Compare over time**.
Compare with a base
--------------------
.. image:: ./perfcomparehomescreen.png
:alt: PerfCompare Interface with Three Selected Revisions to Compare with a Base
:scale: 50%
:align: center

PerfCompare allows selecting up to three **new** revisions to compare against a **base** revision. The specific testing framework or harness can also be selected.
Compare over time
------------------
It's also possible to select up to three revisions to compare against a base repository's history over a specified period.
.. image:: ./compareovertime.png
:alt: PerfCompare Selection Interface for Revisions/Pushes to Compare over Time
:scale: 50%
:align: center
Results Page
=============
After pressing the **Compare** button, the Results Page displays information about the selected revisions along with the results table.
Edit the compared revisions
----------------------------
The compared revisions can be edited, and a new comparison can be computed for an updated results table without having to return to the home page. Clicking the **Edit entry** button will open the edit view.
.. image:: ./resultseditentry.png
:alt: PerfCompare Results Page Edit Entry Selection
:scale: 50%
:align: center

In the edit view, it's possible to search for new revisions or delete selected ones. The option to cancel and return to the previous selections is also available. Otherwise, once satisfied with the changes, clicking **Compare** will update the data in the results table.
.. image:: ./resultseditentryviewbase.png
:alt: PerfCompare Results Page Compare with a Base Edit Entry View
:scale: 50%
:align: center

As with Compare with a base, clicking **Edit Entry** on a Compare over time comparison opens the edit view, where the base repository and time range can be changed and selected revisions can be deleted or new ones searched for.
.. image:: ./resultseditentryviewtime.png
:alt: PerfCompare Results Page Compare over Time Edit Entry View
:scale: 50%
:align: center
Results Table
===============
Please refer to the `Understanding the Results <standard-workflow.html#understanding-the-results>`_ section of the Compare View documentation for information on interpreting the results table.
It's possible to search the results table by platform, title, or revision. Other frameworks can be selected to see the results from a different test harness. The **All revisions** dropdown provides options to see the results for a specific new revision.
.. image:: ./resultstable.png
:alt: PerfCompare Results Table
:scale: 50%
:align: center

The **Download JSON** button downloads a JSON file containing the results data.
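
The downloaded file can then be post-processed with a short script, for example to pull out only the rows of interest. The sketch below is illustrative only and assumes the file contains a list of result objects; the file name and the field names (``platform``, ``suite``, ``is_regression``, ``delta_percentage``) are hypothetical placeholders, since the exact JSON schema is not described here.

.. code-block:: python

    import json

    # Load a results file saved via the "Download JSON" button. The path and
    # the field names used below are hypothetical placeholders; inspect a real
    # download and adjust them to the actual schema.
    with open("perfcompare-results.json") as f:
        results = json.load(f)

    # Assuming the file contains a list of result objects, keep only the rows
    # flagged as regressions (hypothetical field name).
    regressions = [row for row in results if row.get("is_regression")]

    for row in regressions:
        print(
            f"{row.get('platform', '?')} - {row.get('suite', '?')}: "
            f"{row.get('delta_percentage', '?')}% change"
        )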

The results table can be filtered according to Platforms, Status (No Changes, Improvement, or Regression), or Confidence (Low, Medium, High).
.. image:: ./resultstablefilters.png
:alt: PerfCompare Results Table with Filters
:scale: 50%
:align: center
Expanded Rows
--------------

Clicking on the **caret down** button expands the row.
.. image:: ./resultstableexpanded.png
:alt: PerfCompare Results Table with Expanded Row
:scale: 50%
:align: center

In the expanded view, hovering over the points or the curve on the graphs shows more information about them.
.. image:: ./resultstableexpandedgraph.png
:alt: PerfCompare Results Table with Hover Over The Graph
:scale: 50%
:align: center
Subtests
---------

When such data is available, clicking on the **subtest icon** opens a new page containing the subtests' information for the selected result.
.. image:: ./resultstablesubtests.png
:alt: PerfCompare Results Table with Subtests View
:scale: 50%
:align: center
Graph view
-----------

Clicking on the **graph icon** opens the graph view of the historical data for the job in a new window on Treeherder.
.. image:: ./resultstableexpandedgraph.png
:alt: PerfCompare Results Table with Graph View
:scale: 50%
:align: center
Here is an example of the graph view after clicking this icon:
.. image:: ./resultstablegraphveiwperfherder.png
:alt: Historical Graph Data on Perfherder
:scale: 50%
:align: center
Retrigger test jobs
===================

It's possible to retrigger jobs within Taskcluster. Clicking on the **retrigger icon** will show a dialog to choose how many new runs should be started. Note that signing in with a valid Taskcluster account is required to retrigger jobs.
.. image:: ./resultstableretrigger.png
:alt: PerfCompare Results Table with Taskcluster Login
:scale: 50%
:align: center
.. image:: ./resultstableretriggerjobs.png
:alt: PerfCompare Results Table with Retrigger Jobs Dialog
:scale: 50%
:align: center


@@ -89,7 +89,7 @@ When you click on the "Alerts Summary" hyperlink it will take you to an alert su
Running Performance Tests
-------------------------
Performance tests can either be run locally, or in CI using try runs. In general, it's recommended to use try runs to verify the performance changes your patch produces (if any). This is because the hardware that we run tests on may not have the same characteristics as local machines so local testing may not always produce the same performance differences. Using try runs also allows you to use our performance comparison tooling such as `Compare View <https://treeherder.mozilla.org/perfherder/comparechooser>`_, and `PerfCompare <https://perf.compare/>`_. See the `Performance Comparisons`_ section for more information on that.
Performance tests can either be run locally, or in CI using try runs. In general, it's recommended to use try runs to verify the performance changes your patch produces (if any). This is because the hardware that we run tests on may not have the same characteristics as local machines so local testing may not always produce the same performance differences. Using try runs also allows you to use our performance comparison tooling such as `Compare View <https://treeherder.mozilla.org/perfherder/comparechooser>`_ and `PerfCompare <https://perf.compare/>`_. See the `Performance Comparisons`_ section for more information on that.
It's still possible that a local test can reproduce a change found in CI though, but it's not guaranteed. To run a test locally, you can look at the tests listed in either of the harness documentation test lists such as this one for `Raptor tests <raptor.html#raptor-tests>`_. There are four main ways that you'll find to run these tests:
@@ -109,7 +109,14 @@ Performance Comparisons
Comparing performance metrics across multiple try runs is an important step in the performance testing process. It's used to ensure that changes don't regress our metrics, to determine if a performance improvement is produced from a patch, and among other things, used to verify that a fix resolves a performance alert.
We currently use the `Compare View <https://treeherder.mozilla.org/perfherder/comparechooser>`_ for comparing performance numbers. The first interface that's seen in that process is the following which is used to select two pushes (based on the revisions) to compare.
We currently use PerfCompare for comparing performance numbers. Landing on PerfCompare, two search comparison workflows are available: Compare with a base or Compare over time. Compare with a base allows up to three new revisions to compare against a base revision. Although talos is set as the default, any other testing framework or harness can also be selected before clicking the Compare button. :ref:`You can find more information about using PerfCompare here <PerfCompare>`.
.. image:: ./perfcomparehomescreen.png
:alt: PerfCompare Selection Interface for Revisions/Pushes to Compare
:scale: 50%
:align: center

Our old tool for comparing performance numbers, `Compare View <https://treeherder.mozilla.org/perfherder/comparechooser>`_, will be replaced by PerfCompare early next year. The first interface that's seen in that process is the following, which is used to select two pushes (based on the revisions) to compare.
.. image:: ./compare_view_selection.png
:alt: Selection Interface for Revisions/Pushes to Compare


@@ -20,7 +20,6 @@ When ``mach try perf`` is run, the exact same interface as seen with ``./mach tr
After selecting some relevant categories and pressing enter in the interface, two pushes will start. The first push that happens is the **new** try run. It contains any patches that may have been made locally. After that, a **base** try run is produced which uses the mozilla-central revision those patches are based on. These two are then used to produce a Compare View link in the command console that can be used to quickly see the performance differences between the **base** and **new** tests. Note that these two pushes can take some time, and there's some work in progress `to reduce this wait time here <https://bugzilla.mozilla.org/show_bug.cgi?id=1845789>`_.
CompareView
-----------




@@ -0,0 +1,8 @@
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
---
name: perfcompare
manifest: None
static-only: True
suites: {}



