This reverts commit r345972. Need to update the description and possibly
update the patch itself after discussion with Eric Christopher.
llvm-svn: 346508
Summary:
lcov tracefiles are used by various coverage reporting tools and build
systems (e.g., Bazel). It is a simple text-based format that is easy to
parse and more convenient to use than the JSON export format, which needs
additional processing to map regions/segments back to line numbers.
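For reference, a minimal lcov tracefile might look like this (path,
function name, and counts are illustrative):

  SF:/path/to/source.c
  FN:10,main
  FNDA:1,main
  DA:10,1
  DA:12,4
  LF:2
  LH:2
  end_of_record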
It's a little unfortunate that "text" format is now overloaded to refer
specifically to JSON for export, but I wanted to avoid making any
breaking changes to the UI of the llvm-cov tool at this time.
Patch by Tony Allevato (@allevato).
Reviewers: Dor1s, vsk
Reviewed By: Dor1s, vsk
Subscribers: mgorny, llvm-commits
Differential Revision: https://reviews.llvm.org/D54266
llvm-svn: 346506
A minor improvement of buildVector() that skips creating an
INSERT_VECTOR_ELT for a Value which has already been used for the
REPLICATE.
Review: Ulrich Weigand
https://reviews.llvm.org/D54315
llvm-svn: 346504
Now that we have mixed type sizes, i1 values need to be explicitly
handled as we want to avoid promoting these values.
Differential Revision: https://reviews.llvm.org/D54308
llvm-svn: 346499
I noticed that we weren't generating broadcasts as much as I thought we
would with D54271, and this is part of the problem.
Widening the shuffle elements means adding bitcasts and hiding the relationship
between a splatted scalar and the vector. If we can form a broadcast, do that
before going through the rest of the shuffle lowering because broadcasts should
be cheap and can often be load-folded.
Differential Revision: https://reviews.llvm.org/D54280
llvm-svn: 346498
Summary:
This simplifies the code and moves everything to tablegen for consistency. This
also prepares the ground for adding issue counters.
Reviewers: gchatelet, john.brawn, jsji
Subscribers: nemanjai, mgorny, javed.absar, kbarton, tschuett, llvm-commits
Differential Revision: https://reviews.llvm.org/D54297
llvm-svn: 346489
Use a simple SmallVector to track the lifetime of simulated instructions.
An ordered map was not needed because instructions are already picked in program
order. It is also much faster if we avoid searching for already retired
instructions at the end of every cycle.
The new policy only triggers a "garbage collection" when the number of retired
instructions becomes a significant fraction of the total size of the
vector.
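A rough sketch of that retirement policy (plain std::vector and an
illustrative threshold; the actual code uses a SmallVector and its own
heuristic):

  #include <cstddef>
  #include <vector>

  struct SimulatedInstruction {
    bool IsRetired = false;
    // ... other per-instruction state ...
  };

  class RetiredInstructionTracker {
    std::vector<SimulatedInstruction> Instructions; // program order.
    std::size_t NumRetired = 0;                     // length of retired prefix.

  public:
    void add(const SimulatedInstruction &I) { Instructions.push_back(I); }

    void onRetire(std::size_t Index) {
      Instructions[Index].IsRetired = true;
      // Instructions retire in program order, so the retired prefix only grows.
      while (NumRetired < Instructions.size() &&
             Instructions[NumRetired].IsRetired)
        ++NumRetired;
      // Only "garbage collect" when the retired prefix is a large fraction of
      // the vector, instead of scanning for retired instructions every cycle.
      if (NumRetired > 0 && NumRetired * 2 >= Instructions.size()) {
        Instructions.erase(Instructions.begin(),
                           Instructions.begin() + NumRetired);
        NumRetired = 0;
      }
    }
  };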
While working on this, I noticed that instructions were correctly retired, but
their internal state was not updated (i.e. there was no transition from the
EXECUTED state, to the RETIRED state). While this was not a problem for the
views, it prevented the EntryStage from correctly garbage collecting already
retired instructions. That was a bad oversight, and this patch fixes it.
The observed speedup on a debug build of llvm-mca after this patch is ~6%.
On a release build of llvm-mca, the observed speedup is ~15%.
llvm-svn: 346487
After D45330, Dominators are required for IPSCCP and can be preserved.
This patch preserves DominatorTreeAnalysis in the new pass manager. AFAIK the legacy pass manager cannot preserve function analyses required by a module analysis.
Reviewers: davide, dberlin, chandlerc, efriedma, kuhar, NutshellySima
Reviewed By: chandlerc, kuhar, NutshellySima
Differential Revision: https://reviews.llvm.org/D47259
llvm-svn: 346486
The DAGCombiner tries to SimplifySelectCC as follows:
select_cc(x, y, 16, 0, cc) -> shl(zext(set_cc(x, y, cc)), 4)
It can't cope with the situation of reordered operands:
select_cc(x, y, 0, 16, cc)
In that case we just need to swap the operands and invert the Condition Code:
select_cc(x, y, 16, 0, ~cc)
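For illustration, the kind of source that produces each form (a hedged
C++ example, not code from the patch):

  // (x < y) ? 16 : 0 matches the handled form:
  //   select_cc(x, y, 16, 0, setlt) -> shl(zext(setcc(x, y, setlt)), 4)
  // (x < y) ? 0 : 16 yields the reordered operands; swapping the constants
  // and inverting the condition code gives back the same shape:
  //   select_cc(x, y, 0, 16, setlt) == select_cc(x, y, 16, 0, setge)
  unsigned handled(int x, int y)   { return (x < y) ? 16u : 0u; }
  unsigned reordered(int x, int y) { return (x < y) ? 0u : 16u; }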
Differential Revision: https://reviews.llvm.org/D53236
llvm-svn: 346484
We can stop recording conditions once we reach the immediate dominator
of the block containing the call site. Conditions in predecessors of that
node will be the same for all paths to the call site, so splitting
is not beneficial.
This patch makes CallSiteSplitting dependent on the DT analysis, because
the immediate dominators seem to be the easiest way of finding the node
to stop at.
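A hedged sketch of the stopping condition using the DominatorTree API
(the function name and the single-predecessor walk are illustrative, not
the pass's actual logic):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Dominators.h"

  // Walk up single-predecessor chains recording branch conditions, but stop
  // once we reach the immediate dominator of the call site's block: every
  // path to the call site goes through it, so its conditions cannot differ
  // per path.
  static void walkUpToIDom(llvm::BasicBlock *CallSiteBB,
                           llvm::DominatorTree &DT) {
    llvm::BasicBlock *StopAt = nullptr;
    if (auto *Node = DT.getNode(CallSiteBB))
      if (auto *IDom = Node->getIDom())
        StopAt = IDom->getBlock();

    for (llvm::BasicBlock *BB = CallSiteBB->getSinglePredecessor();
         BB && BB != StopAt; BB = BB->getSinglePredecessor()) {
      // ... record the condition of BB's conditional branch here ...
    }
  }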
I had to update some existing tests, because they were checking for
conditions that were true/false on all paths to the call site. Those
should now be handled by instcombine/ipsccp.
Reviewers: davide, junbuml
Reviewed By: junbuml
Differential Revision: https://reviews.llvm.org/D44627
llvm-svn: 346483
In SimplifyCFG, when given a conditional branch that goes to BB1 and BB2, hoisting the common terminator instruction from the two blocks caused debug line records associated with subsequent select instructions to become ambiguous. This caused the debugger to display unreachable source lines.
Differential Revision: https://reviews.llvm.org/D53390
llvm-svn: 346481
Previously, during the search, all values had to have the same
'TypeSize', which is equal to number of bits of the integer type of
the icmp operand. All values in the tree had to match this size;
meaning that, if we searched from i16, we wouldn't accept i8s. A
change in type size requires zexts and truncs to perform the casts, so
to allow mixed narrow types, the handling of these instructions is
now slightly different:
- we allow casts if their result or operand is <= TypeSize.
- zexts are sinks if their result > TypeSize.
- truncs are still sinks if their operand == TypeSize.
- truncs are still sources if their result == TypeSize.
The transformation bails on finding an icmp that operates on data
smaller than the current TypeSize.
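Restating these rules as simple predicates (illustrative names, not the
pass's actual data structures):

  // TypeSize is the bit width of the integer type of the icmp operand.
  struct CastInfo {
    bool IsZExt = false;
    bool IsTrunc = false;
    unsigned ResultBits = 0;
    unsigned OperandBits = 0;
  };

  bool isAllowedCast(const CastInfo &C, unsigned TypeSize) {
    return C.ResultBits <= TypeSize || C.OperandBits <= TypeSize;
  }

  bool isSink(const CastInfo &C, unsigned TypeSize) {
    return (C.IsZExt && C.ResultBits > TypeSize) ||
           (C.IsTrunc && C.OperandBits == TypeSize);
  }

  bool isSource(const CastInfo &C, unsigned TypeSize) {
    return C.IsTrunc && C.ResultBits == TypeSize;
  }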
Differential Revision: https://reviews.llvm.org/D54108
llvm-svn: 346480
A few code movement things:
- AreSymmetrical is now a method of BinOpChain.
- Created a lambda in CreateParallelMACPairs to reduce loop nesting.
- A Reduction object now gets passed in a couple of places instead,
including CreateParallelMACPairs so it doesn't need to return a
value.
I've also added RecordSequentialLoads, which is run before the
transformation begins, and caches the interesting loads. This can then
be queried later instead of cross checking many load values.
Differential Revision: https://reviews.llvm.org/D54254
llvm-svn: 346479
Summary:
This change covers a number of things spanning LLVM and compiler-rt,
which are related in a non-trivial way.
In LLVM, we have a library that handles FDR mode event log loading,
which uses C++'s runtime polymorphism feature to more faithfully
represent the events that are written down by the FDR mode runtime. We
do this by interpreting a trace that's serialised in a common format
agreed upon by both the trace loading library and the FDR mode runtime.
This library is under active development, and that work includes features
allowing us to reconstitute a higher-level event log.
This event log is used by the conversion and visualisation tools we have
for interpreting XRay traces.
One of the tools we have is a diagnostic tool in llvm-xray called
`fdr-dump` which we've been using to debug our expectations of what the
FDR runtime should be writing and what the logical FDR event log
structures are. We use this fairly extensively to reason about why some
non-trivial traces we're generating with FDR mode runtimes fail to
convert or fail to parse correctly.
One of these failures we've found in manual debugging of some of the
traces we've seen involves an inconsistency between the buffer extents (a
record indicating how many bytes to follow are part of a logical
thread's event log) and the record of the bytes written into the log --
sometimes it turns out the data could be garbage, due to buffers being
recycled, but sometimes we're seeing the buffer extent indicating a log
is "shorter" than the actual records associated with the buffer. This
case happens particularly with function entry records with a call
argument.
This change for now updates the FDR mode runtime to write the bytes for
the function call and arg record before updating the buffer extents
atomically, allowing multiple threads to see a consistent view of the
data in the buffer using the atomic counter associated with a buffer.
What we're trying to prevent here is partial updates where we see the
intermediary updates to the buffer extents (function record size then
call argument record size) becoming observable from another thread, for
instance, one doing the serialization/flushing.
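A minimal sketch of the write-then-publish ordering described here,
assuming a single writer per buffer (simplified types, not the actual FDR
runtime code):

  #include <atomic>
  #include <cstddef>
  #include <cstring>

  struct Buffer {
    char Data[4096];
    std::atomic<std::size_t> Extents{0}; // bytes readers may interpret.
  };

  // Writer: copy both records into the buffer first, then publish their
  // combined size with one release update, so a reader never sees an extent
  // that accounts for the function record but not its call argument.
  // (Bounds checks omitted for brevity.)
  void writeEntryWithArg(Buffer &B, const void *FuncRec, std::size_t FuncSize,
                         const void *ArgRec, std::size_t ArgSize) {
    std::size_t Offset = B.Extents.load(std::memory_order_relaxed);
    std::memcpy(B.Data + Offset, FuncRec, FuncSize);
    std::memcpy(B.Data + Offset + FuncSize, ArgRec, ArgSize);
    B.Extents.fetch_add(FuncSize + ArgSize, std::memory_order_release);
  }

  // Reader (e.g. the serializing/flushing thread): acquire the extents and
  // interpret only that many bytes of the buffer.
  std::size_t readableBytes(const Buffer &B) {
    return B.Extents.load(std::memory_order_acquire);
  }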
To diagnose this issue properly, we need to be able to honour
the extents being set in the `BufferExtents` records marking the
beginning of the logical buffers when reading an FDR trace. Since LLVM
doesn't use C++'s RTTI mechanism, we instead follow the advice in the
documentation for LLVM Style RTTI
(https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html). We then rely on
this RTTI feature to ensure that our file-based record producer (our
streaming "deserializer") can honour the extents of individual buffers
as we interpret traces.
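For context, the LLVM-style RTTI pattern referenced above looks roughly
like this (class and kind names are illustrative, not the actual XRay
record hierarchy):

  #include <cstddef>

  class Record {
  public:
    enum class RecordKind { BufferExtents, FunctionEntry };
    explicit Record(RecordKind K) : Kind(K) {}
    virtual ~Record() = default;
    RecordKind getRecordKind() const { return Kind; }

  private:
    const RecordKind Kind;
  };

  class BufferExtentsRecord : public Record {
  public:
    explicit BufferExtentsRecord(std::size_t S)
        : Record(RecordKind::BufferExtents), Size(S) {}
    std::size_t size() const { return Size; }

    // Hook consulted by llvm::isa<> / llvm::dyn_cast<> to identify this
    // subclass without relying on C++ RTTI.
    static bool classof(const Record *R) {
      return R->getRecordKind() == RecordKind::BufferExtents;
    }

  private:
    std::size_t Size;
  };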
This also sets us up to be able to eventually do smart
skipping/continuation of FDR logs, seeking instead to find BufferExtents
records in cases where we find potentially recoverable errors. In the
meantime, we make this change to operate in a strict mode when reading
logical buffers with extent records.
Reviewers: mboerger
Subscribers: hiraditya, llvm-commits, jfb
Differential Revision: https://reviews.llvm.org/D54201
llvm-svn: 346473
CMake invokes rc using the joined spelling, which appears to be supported
by Microsoft's rc implementation, so we should support it as well.
Differential Revision: https://reviews.llvm.org/D54191
llvm-svn: 346470
Summary:
Reorders the sections in the SIMD tablegen file to roughly match the
new opcode ordering. Depends on D54126.
Reviewers: aheejin, dschuff
Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D54134
llvm-svn: 346464
CMake generates manifests that contain absolute filenames, and these
currently result in an assertion error. This change ensures that we
handle them correctly.
Differential Revision: https://reviews.llvm.org/D54194
llvm-svn: 346450
Summary:
The pass incorrectly assumed if there's a longjmp declaration in the
module, there is also a setjmp function declaration. Fixed it, and now
the pass only converts longjmp and does not do any other transformation
when there's no setjmp declaration in the module.
Fixes PR39562.
Reviewers: jgravelle-google, sbc100
Subscribers: dschuff, sunfish, llvm-commits
Differential Revision: https://reviews.llvm.org/D54273
llvm-svn: 346445
Use report_fatal_error instead of crashing or miscompiling. (It's
currently easier than it should be to hit this case because we don't
reuse codes across epilogs.)
llvm-svn: 346440
This patch adds logic to detect reductions across the inner and outer
loop by following the incoming values of PHI nodes in the outer loop. If
the incoming values take part in a reduction in the inner loop or come
from outside the outer loop, we have found a reduction spanning the inner
and outer loops.
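A hedged C++ illustration of the kind of loop nest this enables (not a
test case from the patch):

  // The reduction into 'sum' spans both loops: the outer loop's PHI for
  // 'sum' is fed by the inner loop's reduction, which previously blocked
  // interchange of the two loops.
  int sumMatrix(int A[][64], int Rows) {
    int sum = 0;
    for (int i = 0; i < Rows; ++i)
      for (int j = 0; j < 64; ++j)
        sum += A[i][j];
    return sum;
  }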
With this change, ~10% more loops are interchanged in the LLVM
test-suite + SPEC2006.
Fixes https://bugs.llvm.org/show_bug.cgi?id=30472
Reviewers: mcrosier, efriedma, karthikthecool, davide, hfinkel, dmgreen
Reviewed By: efriedma
Differential Revision: https://reviews.llvm.org/D43245
llvm-svn: 346438
Summary:
This fixes PR 37422
In ELF, non-weak symbols can also be non-prevailing. In this particular
PR, the __llvm_profile_* symbols are non-prevailing but weren't getting
dropped - causing multiply-defined errors with lld.
Also add a test, strong_non_prevailing.ll, to ensure that multiple
copies of a strong symbol are dropped.
To fix the test regressions exposed by this fix,
- do not mark prevailing copies for symbols with 'appending' linkage.
There's no one prevailing copy for such symbols.
- fix the prevailing version in dead-strip-fulllto.ll
- explicitly pass exported symbols to llvm-lto in funcimport.ll and
funcimport_var.ll
Reviewers: tejohnson, pcc
Subscribers: mehdi_amini, inglorion, eraman, steven_wu, dexonsmith,
dang, srhines, llvm-commits
Differential Revision: https://reviews.llvm.org/D54125
llvm-svn: 346436
As discussed in D54073, we have a potential regression from more aggressive vector narrowing here, so let's try to avoid that by changing build-vector lowering slightly.
Insert-vector-element lowering always does this since there's no "pinsr" for ymm/zmm:
// If the vector is wider than 128 bits, extract the 128-bit subvector, insert
// into that, and then insert the subvector back into the result.
...but we can sometimes do better for insert-into-constant-vector by using shuffle lowering.
Differential Revision: https://reviews.llvm.org/D54271
llvm-svn: 346433
FindBetterNeighborChains simultaneously improves the chain
dependencies of a chain of related stores, avoiding the generation of
extra token factors. For chains longer than the GatherAllAliasDepths,
stores further down in the chain will necessarily fail, which is a
potentially significant waste and prevents otherwise trivial
parallelization.
This patch directly parallelizes the chains of stores before improving
each store. This generally improves DAG-level parallelism.
Reviewers: courbet, spatel, RKSimon, bogner, efriedma, craig.topper, rnk
Subscribers: sdardis, javed.absar, hiraditya, jrtc27, atanasyan, llvm-commits
Differential Revision: https://reviews.llvm.org/D53552
llvm-svn: 346432
In order to accurately put a type into the correct location in the AST
we construct from debug info, we need to be able to determine what
DeclContext (namespace, global, nested class, etc) that it goes into.
PDB doesn't contain this mapping. It does, however, contain the reverse
mapping. That is, for a given class type T, you can determine all
classes Q1, Q2, ..., Qn that are nested inside of T. We need to know,
for a given class type Q, what type T it is nested inside of.
This patch builds this map as a pre-processing step when we first
load the PDB by scanning every type. Initial tests show that while
this can be slow in debug builds of LLDB, it is quite fast in release
builds (less than 2 seconds for a ~1GB PDB, and it only needs to happen
once).
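A rough sketch of that pre-processing step, with illustrative types
standing in for the real PDB/LLDB APIs:

  #include <cstdint>
  #include <unordered_map>
  #include <vector>

  using TypeIndex = std::uint32_t;

  struct ClassTypeRecord {
    TypeIndex Index;
    std::vector<TypeIndex> NestedTypes; // the forward mapping PDB provides.
  };

  // Scan every type once and record the reverse mapping: for each nested
  // class Q, remember the class T it is nested inside of. Later lookups of
  // a type's parent DeclContext become a single hash lookup.
  std::unordered_map<TypeIndex, TypeIndex>
  buildParentMap(const std::vector<ClassTypeRecord> &AllTypes) {
    std::unordered_map<TypeIndex, TypeIndex> ParentOf;
    for (const ClassTypeRecord &T : AllTypes)
      for (TypeIndex Nested : T.NestedTypes)
        ParentOf[Nested] = T.Index;
    return ParentOf;
  }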
Furthermore, having this pre-processing step in place allows us to
repurpose it for building up other kinds of indexing down the
line. For the time being, this gives us very accurate reconstruction
of the DeclContext hierarchy.
Differential Revision: https://reviews.llvm.org/D54216
llvm-svn: 346429
As noted by Andrea Di Biagio in https://bugs.llvm.org/show_bug.cgi?id=39465
both the loads and stores occupy both the store and load queues.
This is clearly wrong.
llvm-svn: 346425