=pod
=head1 NAME
lit - LLVM Integrated Tester
=head1 SYNOPSIS
B<lit> [I<options>] [I<tests>]
=head1 DESCRIPTION
B<lit> is a portable tool for executing LLVM and Clang style test suites,
summarizing their results, and providing indication of failures. B<lit> is
designed to be a lightweight testing tool with as simple a user interface as
possible.
B<lit> should be run with one or more I<tests> to run specified on the command
line. Tests can be either individual test files or directories to search for
tests (see L<"TEST DISCOVERY">).
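For example, the following invocations (with illustrative paths) run a single
test file and all of the tests found under a directory, respectively:

    % lit ~/llvm/test/Transforms/example.ll
    % lit ~/llvm/test/CodeGen
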
Each specified test will be executed (potentially in parallel) and once all
tests have been run B<lit> will print summary information on the number of tests
which passed or failed (see L<"TEST STATUS RESULTS">). The B<lit> program will
execute with a non-zero exit code if any tests fail.
By default B<lit> will use a succinct progress display and will only print
summary information for test failures. See L<"OUTPUT OPTIONS"> for options
controlling the B<lit> progress display and output.
B<lit> also includes a number of options for controlling how tests are executed
(specific features may depend on the particular test format). See L<"EXECUTION
OPTIONS"> for more information.
Finally, B<lit> also supports additional options for running only a subset of
the tests specified on the command line; see L<"SELECTION OPTIONS"> for
more information.
Users interested in the B<lit> architecture or designing a B<lit> testing
implementation should see L<"LIT INFRASTRUCTURE">.
=head1 GENERAL OPTIONS
=over
=item B<-h>, B<--help>
Show the B<lit> help message.
=item B<-j> I<N>, B<--threads>=I<N>
Run I<N> tests in parallel. By default, this is automatically chosen to match
the number of detected available CPUs.
=item B<--config-prefix>=I<NAME>
Search for I<NAME.cfg> and I<NAME.site.cfg> when searching for test suites,
instead of I<lit.cfg> and I<lit.site.cfg>.
=item B<--param> I<NAME>, B<--param> I<NAME>=I<VALUE>
Add a user defined parameter I<NAME> with the given I<VALUE> (or the empty
string if not given). The meaning and use of these parameters are test suite
dependent; see the example following this list.
=back
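For example, a test suite configuration might consume a parameter passed with
B<--param> along the following lines. This is only a sketch: it assumes the
parameter values are exposed as a dictionary on the global B<lit>
configuration object, and the I<build_mode> parameter name is hypothetical.

    # In lit.cfg (Python): read an optional parameter passed via --param.
    # Assumes "lit.params" is the dictionary of --param values; the
    # "build_mode" name and its default are illustrative only.
    build_mode = lit.params.get('build_mode', 'Debug')
    config.environment['BUILD_MODE'] = build_mode
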
=head1 OUTPUT OPTIONS
=over
=item B<-q>, B<--quiet>
Suppress any output except for test failures.
=item B<-s>, B<--succinct>
Show less output; for example, don't show information on tests that pass.
=item B<-v>, B<--verbose>
Show more information on test failures, for example the entire test output
instead of just the test result.
=item B<--no-progress-bar>
Do not use the curses-based progress bar.
=back
=head1 EXECUTION OPTIONS
=over
=item B<--path>=I<PATH>
Specify an additional I<PATH> to use when searching for executables in tests.
=item B<--vg>
Run individual tests under valgrind (using the memcheck tool). The
I<--error-exitcode> argument for valgrind is used so that valgrind failures will
cause the program to exit with a non-zero status.
=item B<--vg-arg>=I<ARG>
When I<--vg> is used, specify an additional argument to pass to valgrind itself.
=item B<--time-tests>
Track the wall time individual tests take to execute and include the results in
the summary output. This is useful for determining which tests in a test suite
take the most time to execute. Note that this option is most useful with I<-j
1>; see the example following this list.
=back
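For example, the options above might be combined to run a directory of tests
under valgrind with full leak checking while timing each test (an illustrative
invocation; the I<test/> path is hypothetical):

    % lit -j 1 --vg --vg-arg=--leak-check=full --time-tests test/
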
=head1 SELECTION OPTIONS
=over
=item B<--max-tests>=I<N>
Run at most I<N> tests and then terminate.
=item B<--max-time>=I<N>
Spend at most I<N> seconds (approximately) running tests and then terminate.
=item B<--shuffle>
Run the tests in a random order.
=back
=head1 ADDITIONAL OPTIONS
=over
=item B<--debug>
Run B<lit> in debug mode, for debugging configuration issues and B<lit> itself.
=item B<--show-suites>
List the discovered test suites as part of the standard output.
=item B<--no-tcl-as-sh>
Run Tcl scripts internally (instead of converting to shell scripts).
=item B<--repeat>=I<N>
Run each test I<N> times. Currently this is primarily useful for timing tests;
other results are not collated in any reasonable fashion.
=back
=head1 EXIT STATUS
B<lit> will exit with an exit code of 1 if there are any FAIL or XPASS
results. Otherwise, it will exit with status 0. Other exit codes are used for
non-test related failures (for example a user error or an internal program
error).
=head1 TEST DISCOVERY
The inputs passed to B<lit> can be either individual tests, or entire
directories or hierarchies of tests to run. When B<lit> starts up, the first
thing it does is convert the inputs into a complete list of tests to run as part
of I<test discovery>.
In the B<lit> model, every test must exist inside some I<test suite>. B<lit>
resolves the inputs specified on the command line to test suites by searching
upwards from the input path until it finds a I<lit.cfg> or I<lit.site.cfg>
file. These files serve as both a marker of test suites and as configuration
files which B<lit> loads in order to understand how to find and run the tests
inside the test suite.
Once B<lit> has mapped the inputs into test suites, it traverses the list of
inputs, adding tests for individual files and recursively searching for tests
in directories.
This behavior makes it easy to specify a subset of tests to run, while still
allowing the test suite configuration to control exactly how tests are
interpreted. In addition, B<lit> always identifies tests by the test suite they
are in, and their relative path inside the test suite. For appropriately
configured projects, this allows B<lit> to provide convenient and flexible
support for out-of-tree builds.
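For example, in a test suite named I<LLVM> a test might be identified as
something like the following, where the part after the C<::> is the test's
path relative to the test suite root (the particular path shown is
illustrative):

    LLVM :: CodeGen/X86/example.ll
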
=head1 TEST STATUS RESULTS
Each test ultimately produces one of the following six results:
=over
=item B<PASS>
The test succeeded.
=item B<XFAIL>
The test failed, but that is expected. This is used for test formats which
allow specifying that a test does not currently work, but should remain in the
test suite.
=item B<XPASS>
The test succeeded, but it was expected to fail. This is used for tests which
were specified as expected to fail, but are now succeeding (generally because
the feature they test was broken and has been fixed).
=item B<FAIL>
The test failed.
=item B<UNRESOLVED>
The test result could not be determined. For example, this occurs when the test
could not be run, the test itself is invalid, or the test was interrupted.
=item B<UNSUPPORTED>
The test is not supported in this environment. This is used by test formats
which can report unsupported tests.
=back
Depending on the test format, tests may produce additional information about
their status (generally only for failures). See L<"OUTPUT OPTIONS"> for options
controlling how much of this output is displayed.
=head1 LIT INFRASTRUCTURE
This section describes the B<lit> testing architecture for users interested in
creating a new B<lit> testing implementation, or extending an existing one.
B<lit> proper is primarily an infrastructure for discovering and running
arbitrary tests, and for exposing a single convenient interface to these
tests. B<lit> itself doesn't know how to run tests; rather, this logic is
defined by I<test suites>.
=head2 TEST SUITES
As described in L<"TEST DISCOVERY">, tests are always located inside a I<test
suite>. Test suites serve to define the format of the tests they contain, the
logic for finding those tests, and any additional information to run the tests.
B<lit> identifies test suites as directories containing I<lit.cfg> or
I<lit.site.cfg> files (see also B<--config-prefix>). Test suites are initially
discovered by recursively searching up the directory hierarchy for all the input
files passed on the command line. You can use B<--show-suites> to display the
discovered test suites at startup.
Once a test suite is discovered, its config file is loaded. Config files
themselves are just Python modules which will be executed. When the config file
is executed, two important global variables are predefined:
=over
=item B<lit>
The global B<lit> configuration object (a I<LitConfig> instance), which defines
the builtin test formats, global configuration parameters, and other helper
routines for implementing test configurations.
=item B<config>
This is the config object (a I<TestingConfig> instance) for the test suite,
which the config file is expected to populate. The following variables are also
available on the I<config> object; some of them must be set by the config file
and others are optional or predefined:
B<name> I<[required]> The name of the test suite, for use in reports and
diagnostics.
B<test_format> I<[required]> The test format object which will be used to
discover and run tests in the test suite. Generally this will be a builtin test
format available from the I<lit.formats> module.
B<test_source_root> The filesystem path to the test suite root. For out-of-tree
builds this is the directory that will be scanned for tests.
B<test_exec_root> For out-of-tree builds, the path to the test suite root inside
the object directory. This is where tests will be run and temporary output files
placed.
B<environment> A dictionary representing the environment to use when executing
tests in the suite.
B<suffixes> For B<lit> test formats which scan directories for tests, this
variable is a list of suffixes to identify test files. Used by: I<ShTest>,
I<TclTest>.
B<substitutions> For B<lit> test formats which substitute variables into a test
script, the list of substitutions to perform. Used by: I<ShTest>, I<TclTest>.
B<unsupported> Mark an unsupported directory; all tests within it will be
reported as unsupported. Used by: I<ShTest>, I<TclTest>.
B<parent> The parent configuration; this is the config object for the directory
containing the test suite, or None.
B<on_clone> The config is actually cloned for every subdirectory inside a test
suite, to allow local configuration on a per-directory basis. The I<on_clone>
variable can be set to a Python function which will be called whenever a
configuration is cloned (for a subdirectory). The function should take three
arguments: (1) the parent configuration, (2) the new configuration (which the
I<on_clone> function will generally modify), and (3) the test path to the new
directory being scanned.
=back
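Taken together, a minimal I<lit.cfg> might look something like the following
sketch. The suite name, suffixes, and hook are illustrative, and the sketch
assumes the builtin formats are reachable through the global B<lit> object as
I<lit.formats>; I<ShTest> is just one of the builtin format choices.

    # lit.cfg -- illustrative test suite configuration (a sketch, not a
    # drop-in file). "config" and "lit" are the predefined globals
    # described above.

    # Name of the suite, used in reports and diagnostics.
    config.name = 'Example'

    # Builtin test format used to discover and run the tests.
    config.test_format = lit.formats.ShTest()

    # File suffixes that identify test files.
    config.suffixes = ['.ll', '.c']

    # Optional hook called whenever this config is cloned for a
    # subdirectory; it receives the parent config, the new config, and
    # the path to the subdirectory being scanned.
    def on_clone(parent, new_config, path):
        pass
    config.on_clone = on_clone
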
=head2 TEST DISCOVERY
Once test suites are located, B<lit> recursively traverses the source directory
(following I<test_source_root>) looking for tests. When B<lit> enters a
sub-directory, it first checks to see if a nested test suite is defined in that
directory. If so, it loads that test suite recursively; otherwise, it
instantiates a local test config for the directory (see L<"LOCAL CONFIGURATION
FILES">).
Tests are identified by the test suite they are contained within, and the
relative path inside that suite. Note that the relative path may not refer to an
actual file on disk; some test formats (such as I<GoogleTest>) define "virtual
tests" which have a path that contains both the path to the actual test file and
a subpath to identify the virtual test.
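For instance, a virtual test inside a I<GoogleTest> test binary might be
identified by a relative path shaped roughly like the following, where the
trailing component names an individual test inside the binary (this shape is
hypothetical; the exact form depends on the test format):

    path/to/TestBinary/TestCase.TestName
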
=head2 LOCAL CONFIGURATION FILES
When B<lit> loads a subdirectory in a test suite, it instantiates a local test
configuration by cloning the configuration of the parent directory -- the root
of this configuration chain will always be a test suite. Once the test
configuration is cloned B<lit> checks for a I<lit.local.cfg> file in the
subdirectory. If present, this file will be loaded and can be used to specialize
the configuration for each individual directory. This facility can be used to
define subdirectories of optional tests, or to change other configuration
parameters -- for example, to change the test format, or the suffixes which
identify test files.
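For example, a subdirectory could restrict which files are treated as tests
with a I<lit.local.cfg> like the following sketch (the particular suffix is
illustrative):

    # lit.local.cfg -- specializes the inherited configuration for this
    # directory only; "config" is the cloned parent configuration.

    # Only files ending in .test are treated as tests in this subtree.
    config.suffixes = ['.test']
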
=head2 LIT EXAMPLE TESTS
The B<lit> distribution contains several example implementations of test suites
in the I<ExampleTests> directory.
=head1 SEE ALSO
L<valgrind(1)>
=head1 AUTHOR
Written by Daniel Dunbar and maintained by the LLVM Team (L<http://llvm.org>).
=cut