Use the MCAsmBackend instance from BinaryContext instead of creating a
new one.
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D152849
Fix an off-by-one error in the handling of the --max-funcs=<N> option.
We used to process N+1 functions when N was requested.
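A minimal sketch of the bug (hypothetical names, not the actual BOLT code): the limit check used a strict comparison, letting one extra function through.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical sketch of the off-by-one, not the actual BOLT code.
int main() {
  const uint64_t MaxFuncs = 3; // as if --max-funcs=3 was passed
  uint64_t NumProcessed = 0;
  for (int Func = 0; Func < 10; ++Func) {
    if (NumProcessed > MaxFuncs) // bug: '>' lets function N+1 through
      break;
    std::printf("processing function %d\n", Func);
    ++NumProcessed;
  }
  // With '>', N+1 = 4 functions are processed; using '>=' processes
  // exactly MaxFuncs functions.
  return 0;
}
```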
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D152751
In lite mode (default for X86), BOLT optimizes and relocates functions
with profile. The rest of the code is preserved, but if it references
relocated code, such references have to be updated. The update is handled
by the scanExternalRefs() function. Note that we cannot solely rely on
relocations written by the linker, as not all code references are
exposed to the linker. Additionally, the linker can modify certain
instructions and relocations will no longer match the code.
With this change, start using a symbolic disassembler for scanning code
for references in scanExternalRefs(). Unlike the previous approach, the
symbolizer properly detects and creates references for instructions with
multiple/ambiguous symbolic operands and handles cases where a
relocation doesn't match any operand. See test cases for examples.
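As a rough illustration of the difference (a generic sketch; BOLT's real implementation is built on LLVM's MCDisassembler/MCSymbolizer interfaces): instead of matching a relocation against a single decoded operand, a symbolizer callback is consulted for every candidate operand, so multiple or ambiguous symbolic operands are handled uniformly.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Generic sketch of symbolizer-driven scanning; illustrative types only.
struct Operand { uint64_t Value; bool MayBeAddress; };
struct Instruction { std::vector<Operand> Operands; };

using SymbolizeFn = std::function<bool(uint64_t /*Value*/)>;

// For every operand that may encode an address, ask the symbolizer to
// create a reference. No relocation-to-operand matching is required, so
// references are created even when a relocation matches no operand.
void scanInstruction(const Instruction &Inst, const SymbolizeFn &Symbolize) {
  for (const Operand &Op : Inst.Operands)
    if (Op.MayBeAddress)
      Symbolize(Op.Value); // may register a reference to relocated code
}
```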
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D152631
This is a first "serious" version of stale profile matching in BOLT. This diff
extends the hash computation for basic blocks so that we can apply a fuzzy
hash-based matching. The idea is to compute several "versions" of a hash value
for a basic block. A loose version of the hash (computed by ignoring instruction
operands) allows us to match blocks in functions whose content has changed,
while stricter hash values (considering instruction opcodes with operands, and
even the hashes of a block's successors/predecessors) allow us to resolve
collisions. In order to save space and build time, individual hash components
are blended into a single uint64_t.
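For illustration, blending might look like the following (hypothetical field names and bit widths; the actual layout in BOLT may differ):

```cpp
#include <cstdint>

// Hypothetical bit layout for a blended block hash; the component names
// and widths are illustrative, not BOLT's exact scheme.
uint64_t blendHashes(uint16_t OffsetHash, uint16_t LooseOpcodeHash,
                     uint16_t StrictInstrHash, uint16_t NeighborHash) {
  return (uint64_t)OffsetHash << 48 | (uint64_t)LooseOpcodeHash << 32 |
         (uint64_t)StrictInstrHash << 16 | (uint64_t)NeighborHash;
}
```

Matching can then compare the loose component first and use the stricter components only to resolve collisions.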
There are likely numerous ways of improving hash computation but already this
simple variant provides significant perf benefits.
**Perf testing** on the clang binary: collecting data on clang-10 and using it
to optimize clang-11 (with ~1 year of commits in between). Next, we compare
- //stale_clang// (clang-11 optimized with profile collected on clang-10 with **infer-stale-profile=0**)
- //opt_clang// (clang-11 optimized with profile collected on clang-11)
- //infer_clang// (clang-11 optimized with profile collected on clang-10 with **infer-stale-profile=1**)
`LTO-only` mode:
//stale_clang// vs //opt_clang//: task-clock [delta(%): 9.4252 ± 1.6582, p-value: 0.000002]
(That is, there is a ~9.5% perf regression)
//infer_clang// vs //opt_clang//: task-clock [delta(%): 2.1834 ± 1.8158, p-value: 0.040702]
(That is, the regression is reduced to ~2%)
Related BOLT logs:
```
BOLT-INFO: identified 2114 (18.61%) stale functions responsible for 30.96% samples
BOLT-INFO: inferred profile for 2101 (18.52% of all profiled) functions responsible for 30.95% samples
```
`LTO+AutoFDO` mode:
//stale_clang// vs //opt_clang//: task-clock [delta(%): 19.1293 ± 1.4131, p-value: 0.000002]
//infer_clang// vs //opt_clang//: task-clock [delta(%): 7.4364 ± 1.3343, p-value: 0.000002]
Related BOLT logs:
```
BOLT-INFO: identified 5452 (50.27%) stale functions responsible for 85.34% samples
BOLT-INFO: inferred profile for 5442 (50.23% of all profiled) functions responsible for 85.33% samples
```
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D146661
The BOLT instrumentation runtime uses a bump allocator with a fixed maximum amount of memory. In some cases, this memory limit is not large enough (https://github.com/llvm/llvm-project/issues/59174). We hit this limit when instrumenting the Rust compiler with BOLT.
This change increases the memory of the bump allocator used for writing the resulting BOLT profile.
Reviewed By: rafauler
Differential Revision: https://reviews.llvm.org/D151891
Align yaml and fdata profiles by applying the same treatment to recursive
calls (direct, indirect, tail). The fdata profile increments the entry count
when handling recursive calls. Make the perf/pre-aggregated perf reader
(DataAggregator) do the same.
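A minimal sketch of the intended treatment (hypothetical names, not DataAggregator's actual code): a branch that lands on a function's entry point bumps the entry count even when it originates inside the same function.

```cpp
#include <cstdint>

// Sketch: a branch landing on the entry point counts as an entry even if
// the source is inside the same function (recursive call).
struct FuncProfile {
  uint64_t EntryAddress;
  uint64_t EntryCount = 0;
};

void recordBranch(FuncProfile &Callee, uint64_t From, uint64_t To,
                  uint64_t Count) {
  (void)From; // a recursive source does not exempt the branch
  if (To == Callee.EntryAddress)
    Callee.EntryCount += Count;
}
```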
Test Plan:
In pre-aggregated-perf.test, add a dummy pre-aggregated branch entry between
an indirect call in the `frame_dummy` function and its entry point.
Check that the YAML profile gets an incremented entry count for this function.
End-to-end test: https://github.com/rafaelauler/bolt-tests/pull/24
Reviewed By: #bolt, maksfb
Differential Revision: https://reviews.llvm.org/D152338
Fix debug printing, making it easier to compare two debug logs side by side:
- `BinaryFunction::addRelocation`: print function name instead of `this` ptr,
- `DataAggregator::doTrace`: remove duplicated function name.
Reviewed By: #bolt, maksfb
Differential Revision: https://reviews.llvm.org/D152314
BOLT often has to deal with profiles collected on binaries built from several
revisions behind release. As a result, a certain percentage of functions is
considered stale and not optimized. This diff adds the ability to match the
profile to functions that are not 100% binary identical, which increases
optimization coverage and boosts application performance.
The algorithm consists of two phases: matching and inference:
- At the matching phase, we try to "guess" as many block and jump counts from
the stale profile as possible. To this end, the content of each basic block
is hashed and stored in the (yaml) profile. When BOLT optimizes a binary,
it computes block hashes and identifies the corresponding entries in the
stale profile. It yields a partial profile for every CFG in the binary.
- At the inference phase, we employ a network flow-based algorithm (profi) to
reconstruct "realistic" block and jump counts from the partial profile
generated at the first stage. In practice, we don't always reconstruct the
exact profile, but the majority (typically >90%) of CFGs get correct counts.
This is the first part of the change; the next stacked diff extends the block
hashing and provides perf evaluation numbers.
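A schematic of the matching phase (a simplified sketch with illustrative names; the real code and the profi inference are more involved):

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

// Simplified sketch of the matching phase; names are illustrative.
struct ProfileBlock { uint64_t Hash; uint64_t Count; };

// Index the stale (yaml) profile by block hash, then look up the hash of
// every block in the freshly built CFG to recover a partial profile.
std::vector<std::optional<uint64_t>>
matchBlocks(const std::vector<ProfileBlock> &StaleProfile,
            const std::vector<uint64_t> &BinaryBlockHashes) {
  std::unordered_map<uint64_t, uint64_t> CountByHash;
  for (const ProfileBlock &PB : StaleProfile)
    CountByHash.emplace(PB.Hash, PB.Count);

  std::vector<std::optional<uint64_t>> Matched;
  for (uint64_t H : BinaryBlockHashes) {
    auto It = CountByHash.find(H);
    Matched.push_back(It == CountByHash.end()
                          ? std::nullopt
                          : std::optional<uint64_t>(It->second));
  }
  return Matched; // unmatched blocks are filled in by the inference phase
}
```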
Reviewed By: maksfb
Differential Revision: https://reviews.llvm.org/D144500
A CU can have only one function, in which case the CU will have low_pc/high_pc.
If this function is GCed by LLD, low_pc becomes 0x0, and BOLT can't map it to
the output. We were kind of getting away with it in monolithic DWARF, but with
split DWARF there is only a skeleton CU, so we end up with a rnglist with a
header and an array, but no body. This caused LLDB to report an error.
Reviewed By: maksfb
Differential Revision: https://reviews.llvm.org/D151845
The existing BOLT install targets are broken on Windows because they
don't properly handle the output extension. We cannot use the existing
LLVM macros since those make assumptions that don't hold for BOLT. This
change instead implements custom macros following the approach used by
Clang and LLD.
Differential Revision: https://reviews.llvm.org/D151595
`DataAggregator::recordTrace` serves two purposes:
- Attaching LBR fallthrough ("trace") information to CFG (`getBranchInfo`),
which eventually gets emitted as YAML profile.
- Populating vector of offsets that gets added to `FuncBranchData`, which
eventually gets emitted as fdata profile.
`recordTrace` is invoked from `getFallthroughsInTrace` which checks its return
status and passes on the collected vector of offsets to `doTrace`.
However, if a malformed trace is passed to `recordTrace`, it might partially
attach the profile to the CFG and return false, not propagating the vector of
offsets to `doTrace`. This leads to a difference between the fdata and yaml
profiles collected from the same binary and the same perf file.
(Skylake LBR errata might produce such malformed traces, where the last entry
is duplicated, resulting in an invalid fallthrough path between the last two
entries.)
There are two ways to handle this mismatch: conservative (aligned with fdata)
or aggressive (aligned with yaml). The conservative approach discards the
trace entirely, buffering the CFG updates until all fallthroughs are confirmed.
The aggressive approach applies the CFG updates and returns the matching
fallthroughs in the vector even if the trace is invalid (doesn't correspond to
a valid fallthrough path). I chose the former (conservative/fdata) approach,
which produces a more accurate profile.
We can't rely on pre-filtering such traces early (in LBR sample processing), as
DataAggregator is used both for perf samples and for pre-aggregated perf
information, which lacks branch stack information.
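Schematically, the conservative approach looks like this (an illustrative sketch, not the actual `recordTrace` signature):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative sketch: validate the whole trace first, attach profile to
// the CFG only if every fallthrough is valid.
struct Fallthrough { uint64_t From, To; };

// Stand-ins for the real CFG-based validation and profile update.
static bool isValidFallthrough(const Fallthrough &FT) { return FT.From <= FT.To; }
static void attachToCFG(const Fallthrough &FT) { (void)FT; /* bump counts */ }

bool recordTraceConservative(const std::vector<Fallthrough> &Trace,
                             std::vector<Fallthrough> &Out) {
  std::vector<Fallthrough> Pending;
  for (const Fallthrough &FT : Trace) {
    if (!isValidFallthrough(FT)) // e.g., duplicated last LBR entry
      return false;              // discard; the CFG was never touched
    Pending.push_back(FT);
  }
  for (const Fallthrough &FT : Pending)
    attachToCFG(FT); // commit only after the full trace checked out
  Out = std::move(Pending);
  return true;
}
```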
Test Plan: https://github.com/rafaelauler/bolt-tests/pull/22
Reviewed By: #bolt, rafauler
Differential Revision: https://reviews.llvm.org/D151614
The existing BOLT install targets are broken on Windows because they
don't properly handle the output extension. Rather than reimplementing
this logic in BOLT, reuse the existing LLVM macros, which already
handle this aspect correctly.
Differential Revision: https://reviews.llvm.org/D151595
This reverts commit d763c6e5e2.
Adds the patch by @hans from
https://github.com/llvm/llvm-project/issues/62719
This patch fixes the Windows build.
d763c6e5e2 reverted the following reviews:
D144509 [CMake] Bumps minimum version to 3.20.0.
This partly undoes D137724.
This change has been discussed on discourse
https://discourse.llvm.org/t/rfc-upgrading-llvms-minimum-required-cmake-version/66193
Note this does not remove work-arounds for older CMake versions, that
will be done in followup patches.
D150532 [OpenMP] Compile assembly files as ASM, not C
Since CMake 3.20, CMake explicitly passes "-x c" (or equivalent)
when compiling a file which has been set as having the language
C. This behaviour change only takes place if "cmake_minimum_required"
is set to 3.20 or newer, or if the policy CMP0119 is set to new.
Attempting to compile assembly files with "-x c" fails. In many cases this
is worked around, as OpenMP overrides it with "-x assembler-with-cpp";
however, that override is only added for non-Windows targets.
Thus, after increasing cmake_minimum_required to 3.20, this breaks
compiling the GNU assembly for Windows targets; the GNU assembly is
used for ARM and AArch64 Windows targets when building with Clang.
This patch unbreaks that.
D150688 [cmake] Set CMP0091 to fix Windows builds after the cmake_minimum_required bump
The build uses a different mechanism to select the runtime.
Fixes #62719
Reviewed By: #libc, Mordante
Differential Revision: https://reviews.llvm.org/D151344
This is an ongoing series of commits that are reformatting our
Python code. This catches the last of the Python files to
reformat. Since they were so few, I bunched them together.
Reformatting is done with `black`.
If you end up having problems merging this commit because you
have made changes to a Python file, the best way to handle that
is to run `git checkout --ours <yourfile>` and then reformat it
with `black`.
If you run into any problems, post to discourse about it and
we will try to help.
RFC Thread below:
https://discourse.llvm.org/t/rfc-document-and-standardize-python-code-style
Reviewed By: jhenderson, #libc, Mordante, sivachandra
Differential Revision: https://reviews.llvm.org/D150784
D151036 adds an assertion that prohibits iterating over sub- and
super-registers of a null register. This is already the case when
iterating over register units of a null register, and worked by
accident for sub- and super-registers.
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D151285
Resolved an issue where, when the number of tasks is fewer than the number of
cores, we ended up creating as many threads as there are cores, making
performance worse than the single-threaded version.
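A minimal sketch of the fix (a hypothetical helper, not the actual code): size the pool by the number of tasks, not just the core count.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>

// Sketch: cap the worker count by the number of tasks so that a handful
// of tasks does not spawn a full machine's worth of threads.
size_t workerCount(size_t NumTasks) {
  size_t NumCores = std::thread::hardware_concurrency();
  if (NumCores == 0)
    NumCores = 1; // hardware_concurrency() may report 0
  return std::min(NumTasks, NumCores);
}
```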
PUSH[16|32|64]i[8|32] are not arithmetic instructions, so I renamed the
functions.
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D151028
Add helper methods and simplify cases where we want to check whether two
functions are in a parent-child (function-fragment) relationship.
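For example, a helper of roughly this shape (hypothetical and simplified; the real BinaryFunction interface differs) lets call sites replace paired checks with a single query:

```cpp
// Hypothetical, simplified shape of such a helper.
struct Function {
  const Function *Parent = nullptr; // set on split fragments

  bool isChildOf(const Function &Other) const { return Parent == &Other; }
  bool isParentOrChildOf(const Function &Other) const {
    return isChildOf(Other) || Other.isChildOf(*this);
  }
};
```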
Reviewed By: #bolt, rafauler
Differential Revision: https://reviews.llvm.org/D142668
This reverts commit 65429b9af6.
Broke several projects, see https://reviews.llvm.org/D144509#4347562 onwards.
Also reverts follow-up commit "[OpenMP] Compile assembly files as ASM, not C"
This reverts commit 4072c8aee4.
Also reverts fix attempt "[cmake] Set CMP0091 to fix Windows builds after the cmake_minimum_required bump"
This reverts commit 7d47dac5f8.
We have mostly harmless data races when running
BinaryContext::calculateEmittedSize() in parallel while performing the
split functions pass. However, it is possible to end up in a state
where some MCSymbols are still registered and our cleanup
failed. This happens rarely, but it does happen, and when it happens,
it is a difficult-to-diagnose heisenbug. To avoid this, add a new
cleanup pass that performs a last check on MCSymbols, before they
undergo our final emission pass, to verify that they are in a sane
state. If we fail to do this, we might resolve some symbols to zero
and produce an output binary that crashes.
Reviewed By: #bolt, Amir
Differential Revision: https://reviews.llvm.org/D137984
find_section used to match offsets equal to file_offset + size, causing
offsets to sometimes be attributed to the wrong section.
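In interval terms (a sketch with hypothetical names): a section covers the half-open range [file_offset, file_offset + size), so the upper bound must be strict.

```cpp
#include <cstdint>

// Sketch: using '<=' for the upper bound wrongly claims the first byte
// of the next section, which is the bug being fixed.
bool sectionContains(uint64_t SectionOffset, uint64_t SectionSize,
                     uint64_t FileOffset) {
  return FileOffset >= SectionOffset &&
         FileOffset < SectionOffset + SectionSize; // strict '<'
}
```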
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D149047
Spotted this one while working on the new DWARF Rewriter: we were using the
wrong check in an assertion.
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D150167
Use MCInst opcode name instead of opcode value in hashing.
Opcode values are unstable with respect to changes in target tablegen
definitions, and we notice that as output mismatches in NFC testing. This ties
the BOLT YAML profile to a particular LLVM revision, which makes it less
portable than the offset-based fdata profile.
Switch to using opcode names, which have a 1:1 mapping with opcode values for
any given LLVM revision and are stable with respect to modifications of .td
files (except, of course, modifications to the names themselves).
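Schematically (a simplified sketch; in BOLT the name would come from MCInstrInfo::getName(Opcode), and the real hash function differs):

```cpp
#include <cstdint>
#include <string_view>

// Sketch: hash the opcode's textual name ("ADD64rr", ...) instead of its
// numeric value; names stay stable when .td changes shift opcode values.
// FNV-1a is used here only to keep the hash deterministic across builds.
uint64_t hashOpcodeName(std::string_view Name) {
  uint64_t H = 1469598103934665603ull; // FNV offset basis
  for (unsigned char C : Name) {
    H ^= C;
    H *= 1099511628211ull; // FNV prime
  }
  return H;
}
```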
Test Plan:
D150154 is a test commit adding a new X86 instruction, which shifts opcode
values. With the current change, pre-aggregated-perf.test passes in NFC check
mode. Without it, pre-aggregated-perf.test fails as expected.
Reviewed By: #bolt, rafauler
Differential Revision: https://reviews.llvm.org/D150005
Make retpoline function names invariant to X86 register numbers.
retpoline-synthetic.test is known to fail NFC testing due to shifting
register numbers. Use canonical register names instead of tablegen
numbers.
Before:
```
__retpoline_r51_
__retpoline_mem_r58+DATAat0x200fe8
__retpoline_mem_r51+0
__retpoline_mem_r132+0+8*53
```
After:
```
__retpoline_%rax_
__retpoline_mem_%rip+DATAat0x200fe8
__retpoline_mem_%rax+0
__retpoline_mem_%r12+0+8*%rbx
```
Test Plan:
- Revert 67bd3c58c0 that touches X86RegisterInfo.td.
- retpoline-synthetic.test passes in NFC mode with this diff, fails without it.
Reviewed By: #bolt, rafauler
Differential Revision: https://reviews.llvm.org/D150138
They don't convey any useful information and make the documentation
unnecessarily hard to read.
Differential Revision: https://reviews.llvm.org/D149641
There are CUs that have DW_AT_loclists_base but no DW_AT_location in child
DIEs. Pre-BOLT, it points to a valid offset. We were not updating it, so it
ended up pointing into the middle of a list and caused LLDB to print out
errors. Changed it to point to the first location list. I don't think it
should matter, since there are no accesses to it anyway.
Reviewed By: maksfb
Differential Revision: https://reviews.llvm.org/D149798
Extend the yaml profile format with block hashes, which are used for stale
profile matching. To avoid code duplication, create a new class with a
collection of utilities for computing hashes.
Reviewed By: Amir
Differential Revision: https://reviews.llvm.org/D144306