web-platform-tests

This directory contains the W3C web-platform-tests. They can be run using mach:

mach web-platform-tests

To limit the test run to certain directories, use the --include option; for example:

mach web-platform-tests --include=dom

The test suite contains a mix of JavaScript tests and reftests. To limit the type of tests that get run, use --test-type=testharness for JavaScript tests or --test-type=reftest for reftests.
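
For example, to run only the reftests under a particular directory (the css directory here is purely illustrative), the two options can be combined:

mach web-platform-tests --test-type=reftest --include=css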

FAQ

  • I fixed a bug and some tests have started to pass. How do I fix the UNEXPECTED-PASS messages when web-platform-tests is run?

    You need to update the expectation data for those tests. See the section on expectations below.

  • I want to write some new tests for the web-platform-tests testsuite. How do I do that?

    See the section on tests below. You can commit the tests directly to the Mozilla repository under testing/web-platform/tests and they will be upstreamed the next time the tests are imported. For this reason, please ensure that any tests you write test the correct per-spec behaviour (even if Gecko does not yet pass them), receive proper review, and have a commit message that makes sense outside of the Mozilla context. If you are writing tests that should not be upstreamed yet for some reason, they must be located under testing/web-platform/mozilla/tests.

    It is important to note that for the tests to run, the manifest file must be updated; this should not be done by hand, but by running mach wpt-manifest-update (or mach web-platform-tests --manifest-update, if you also wish to run some tests).

    mach web-platform-tests-create <path> is a helper command for creating new web-platform-tests. It opens a locally configured editor at <path> with web-platform-tests boilerplate filled in, and in the background runs mach web-platform-tests --manifest-update <path>, so the test being developed is added to the manifest and opened for interactive development (see the example after this list).

  • How do I write a test that requires the use of a Mozilla-specific feature?

    Tests in the mozilla/tests/ directory use the same harness but are not synced with any upstream. Be aware that these tests run on the server with a /_mozilla/ prefix to their URLs.

  • A test is unstable; how do I disable it?

    See the section on disabling tests.
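
As a rough sketch of the workflow described in the answers above (the test path is purely hypothetical):

mach web-platform-tests-create testing/web-platform/tests/dom/my-new-test.html

This scaffolds the new test and updates the manifest in one step; if you instead add or edit test files by hand, run mach wpt-manifest-update afterwards.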

Directories

tests/ contains the tests themselves. This is a copy of a certain revision of web-platform-tests. Any patches modifying this directory will be upstreamed next time the tests are imported.

harness/ contains the wptrunner test runner. Again the contents of this directory will be overwritten on update.

meta/ contains Gecko-specific expectation data. This is explained in the following section.

mozilla/tests contains tests that will not be upstreamed and may make use of Mozilla-specific features.

mozilla/meta contains metadata for the Mozilla-specific tests.
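
Putting these together, the layout looks roughly like this (only the directories described above are shown; the annotations are informal):

testing/web-platform/
    tests/            upstream web-platform-tests, synced on import
    harness/          the wptrunner test runner, overwritten on update
    meta/             Gecko-specific expectation data
    mozilla/tests/    Mozilla-only tests, not upstreamed
    mozilla/meta/     metadata for the Mozilla-only tests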

Expectation Data

With the tests coming from upstream, it is not guaranteed that they all pass in Gecko-based browsers. For this reason it is necessary to provide metadata about the expected results of each test. This is provided in a set of manifest files in the meta/ subdirectories.

There is one manifest file per test with "non-default" expectations. By default tests are expected to PASS, and tests with subtests are expected to have an overall status of OK. The manifest file of a test has the same path as the test file but under the meta directory rather than the tests directory and has the suffix .ini.
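
For example (the test path is hypothetical), the mapping between a test and its expectation file looks like:

tests/dom/example.html  ->  meta/dom/example.html.ini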

The format of these files is similar to ini files, but with a couple of important differences: sections can be nested using indentation, and only : is permitted as a key-value separator. For example, the expectation file for a test with one failing subtest and one erroring subtest might look like:

[filename.html]
    type: testharness

    [Subtest name for failing test]
        expected: FAIL

    [Subtest name for erroring test]
        expected: ERROR

Expectations can also be made platform-specific using a simple Python-like conditional syntax, e.g. for a test that times out on Linux but otherwise fails:

[filename.html]
    type: reftest
    expected:
        if os == "linux": TIMEOUT
        FAIL

The available variables for the conditions are those provided by mozinfo.
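
As a further illustration (a sketch assuming the mozinfo properties os, debug, and bits are available), conditions can be combined with the usual Python boolean operators:

[filename.html]
    type: testharness
    expected:
        if os == "win" and debug: TIMEOUT
        if bits == 32: ERROR
        FAIL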

For more information on manifest files, see the wptrunner documentation.

Autogenerating Expectation Data

After changing some code it may be necessary to update the expectation data for the relevant tests. This can of course be done manually, but tools are available to automate much of the process.

First, run the tests whose status has changed and save the raw log output to a file:

mach web-platform-tests --include=url/of/test.html --log-raw=new_results.log

Then the web-platform-tests-update command may be run using this log data to update the expectation files:

mach web-platform-tests-update --no-check-clean new_results.log

By default this only updates the results data for the current platform. To forcibly overwrite all existing result data, use the --ignore-existing option to the update command.
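
For example, to regenerate the expectations from the log above while discarding the existing result data:

mach web-platform-tests-update --no-check-clean --ignore-existing new_results.log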

Disabling Tests

Tests are disabled using the same manifest files used to set expectation values. For example, if a test is unstable on Windows, it can be disabled using an ini file with the contents:

[filename.html]
    type: testharness
    disabled:
        if os == "win": https://bugzilla.mozilla.org/show_bug.cgi?id=1234567

Enabling Prefs

Some tests require specific prefs to be enabled before running. These prefs can be set in the expectation data using a prefs key with a comma-separated list of pref.name:value items to set, e.g.:

[filename.html]
    prefs: [dom.serviceWorkers.enabled:true,
            dom.serviceWorkers.exemptFromPerDomainMax:true,
            dom.caches.enabled:true]

Disabling Leak Checks

When an imported test leaks, it may be necessary to temporarily disable leak checking for that test in order to allow the import to proceed. This works in basically the same way as disabling a test, but with the key leaks, e.g.:

[filename.html]
    type: testharness
    leaks:
        if os == "linux": https://bugzilla.mozilla.org/show_bug.cgi?id=1234567

Setting per-Directory Metadata

Occasionally it is useful to set metadata for an entire directory of tests, e.g. to disable them all or to enable prefs for every test. In that case it is possible to create a __dir__.ini file in the metadata directory corresponding to the tests for which you want to set this metadata. For example, to disable all the tests in tests/feature/unsupported/, one might create meta/feature/unsupported/__dir__.ini with the contents:

disabled: Feature is unsupported

Settings set in this way are inherited into subdirectories. It is possible to unset a value that has been set in a parent using the special token @Reset (usually used with prefs), or to force a value to true or false using @True and @False. For example, to enable the tests in tests/feature/unsupported/subfeature-supported, one might create meta/feature/unsupported/subfeature-supported/__dir__.ini containing:

disabled: @False
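
Similarly, a child __dir__.ini could drop prefs inherited from a parent directory by resetting that key (a sketch):

prefs: @Reset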

Test Format

JavaScript tests are written using testharness.js. Reftests are similar to standard Gecko reftests without an explicit manifest file, but with in-test or filename conventions for identifying the reference.
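
For example, a reftest typically identifies its reference via a link element inside the test file itself (a minimal sketch; the reference filename is hypothetical):

<link rel="match" href="filename-ref.html">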

Full documentation on test authoring and submission can be found on testthewebforward.org.

Test Manifest

web-platform-tests use a large auto-generated JSON file as their manifest. This stores data about the type of each test, its references (if any), and its timeout, all gathered by inspecting the filenames and the contents of the test files.

In order to update the manifest it is recommended that you run mach web-platform-tests --manifest-update. This rescans the test directory looking for new, removed, or altered tests.
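
For example, to refresh the manifest without running any tests, the dedicated command mentioned in the FAQ above can be used:

mach wpt-manifest-update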

Running Tests In Other Browsers

web-platform-tests is cross-browser, and the runner is compatible with multiple browsers, so it is possible to check the behaviour of tests in other browsers. By default Chrome, Edge, and Servo are supported. To run the tests in these browsers, use the --product argument to wptrunner:

mach wpt --product chrome dom/historical.html

By default these browsers run without expectation metadata, but it can be added in the testing/web-platform/products/<product> directory. To run with the same metadata as for Firefox (so that differences are reported as unexpected results), pass --meta testing/web-platform/meta to the mach command.
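
For example, to run the same test in Chrome against the Firefox expectation data:

mach wpt --product chrome --meta testing/web-platform/meta dom/historical.html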