#!/usr/bin/env python
"""
A simple bench runner which delegates to the ./dotest.py test driver to run the
benchmarks defined in the list named 'benches'.
You need to hand edit 'benches' to modify/change the command lines passed to the
test driver.
Use the following to get only the benchmark results in your terminal output:
./bench.py -e /Volumes/data/lldb/svn/regression/build/Debug/lldb -x '-F Driver::MainLoop()' 2>&1 | grep -P '^lldb.*benchmark:'
See also bench-history.
"""
from __future__ import print_function
from __future__ import absolute_import
import os
import sys
import re
from optparse import OptionParser

# A dotest.py invocation with no '-e exe-path' uses lldb itself as the
# inferior program, unless the command line names a custom executable
# program; the '%E'/'%X' expansion is illustrated after the list below.
benches = [
    # Measure startup delays when creating a target, setting a breakpoint,
    # and running to the breakpoint stop.
    './dotest.py -v +b %E %X -n -p TestStartupDelays.py',

    # Measure 'frame variable' response after stopping at a breakpoint.
    './dotest.py -v +b %E %X -n -p TestFrameVariableResponse.py',

    # Measure stepping speed after stopping at a breakpoint.
    './dotest.py -v +b %E %X -n -p TestSteppingSpeed.py',

    # Measure expression command response with a simple custom executable
    # program.
    './dotest.py +b -n -p TestExpressionCmd.py',

    # Attach to a spawned process, then run disassembly benchmarks.
    './dotest.py -v +b -n %E -p TestDoAttachThenDisassembly.py'
]
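
# Each entry above is a template: main() substitutes '%E' with '-e "<exe>"'
# when --executable is given and '%X' with '-x "<break-spec>"' when
# --breakpoint-spec is given; otherwise the placeholders become empty
# strings.  For example (the lldb path here is purely illustrative):
#
#     ./bench.py -e /path/to/lldb -x '-F Driver::MainLoop()'
#
# expands the first entry to:
#
#     ./dotest.py -v +b -e "/path/to/lldb" -x "-F Driver::MainLoop()" -n -p TestStartupDelays.py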


def main():
    """Read the items from 'benches' and run the command lines one by one."""
    parser = OptionParser(usage="""\
%prog [options]
Run the standard benchmarks defined in the list named 'benches'.\
""")
    parser.add_option('-e', '--executable',
                      type='string', action='store',
                      dest='exe',
                      help='The target program launched by lldb.')
    parser.add_option('-x', '--breakpoint-spec',
                      type='string', action='store',
                      dest='break_spec',
                      help='The lldb breakpoint spec for the target program.')

    # Parse the options, if any.
    opts, args = parser.parse_args()

    print("Starting bench runner....")

    for item in benches:
        # Perform the '%E' and '%X' substitutions before running the command.
        command = item.replace('%E',
                               '-e "%s"' % opts.exe if opts.exe else '')
        command = command.replace('%X',
                                  '-x "%s"' % opts.break_spec
                                  if opts.break_spec else '')
        print("Running %s" % command)
        os.system(command)

    print("Bench runner done.")


if __name__ == '__main__':
    main()