Summary:
The patch to move metadata linking after global value linking didn't
correctly map unmaterialized global values to null as desired. They
were in fact mapped to the source copy. This largely worked by accident,
since most module linker clients destroyed the source module, causing the
source GVs to be replaced by null, but it caused a failure with LTO
linking on Windows:
http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312869.html
The problem is that a null return value from materializeValueFor is
handled by mapping the value to itself, which is the desired behavior when
materializeValueFor is passed a non-GlobalValue. The difficulty is
distinguishing that case from the case where we really do want to map the
value to null.
This patch addresses this by passing in a new flag to the value mapper
indicating that unmapped global values should be mapped to null. Other
Value types are handled as before.
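A minimal sketch of the intended call pattern (MapValue and both flags are
from llvm/Transforms/Utils/ValueMapper.h of this era; the wrapper function
itself is hypothetical):

  #include "llvm/Transforms/Utils/ValueMapper.h"

  // Map a value for linking: an unmapped, unmaterialized GlobalValue is now
  // mapped to null, while other unmapped Values still map to themselves.
  static llvm::Value *mapForLink(const llvm::Value *V,
                                 llvm::ValueToValueMapTy &VM,
                                 llvm::ValueMaterializer *Materializer) {
    return llvm::MapValue(V, VM,
                          llvm::RF_IgnoreMissingValues |
                              llvm::RF_NullMapMissingGlobalValues,
                          /*TypeMapper=*/nullptr, Materializer);
  }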
Note that the documented behavior of asserting on unmapped values when
the flag RF_IgnoreMissingValues isn't set is currently disabled with
FIXME notes due to bootstrap failures. I modified these disabled asserts
so that when they are eventually re-enabled they won't fire for unmapped
global values when the new RF_NullMapMissingGlobalValues flag is set.
I also considered using a callback into the value materializer, but a
flag seemed cleaner given that there are already existing flags.
I also considered modifying materializeValueFor to return the input
value when we want to map to source and then treat a null return
to mean map to null. However, there are other value materializer
subclasses that implement materializeValueFor, and they would all need
to be audited and the return values possibly changed, which seemed
error-prone.
Reviewers: dexonsmith, joker.eph
Subscribers: pcc, llvm-commits
Differential Revision: http://reviews.llvm.org/D14682
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253170 91177308-0d34-0410-b5e6-96231b3b80d8
Global to local demotion can speed up programs that use globals a lot. It is particularly useful with LTO, when the entire call graph is known and most functions have been internalized.
For a global to be demoted, it must only be accessed by one function and that function:
1. Must never recurse directly or indirectly, else the GV would be clobbered.
2. Must never rely on the value in GV at the start of the function (apart from the initializer).
GlobalOpt can already do this, but it is hamstrung: it only ever tries to demote globals inside "main", because C++ guarantees that main is called once and only once.
In LTO mode, we can often prove the first property (if the function is internal by this point, we know enough about the callgraph to determine if it could possibly recurse). FunctionAttrs now infers the "norecurse" attribute for this reason.
The second property can be proven for a subset of functions by proving that all loads from GV are dominated by a store to GV. This is conservative in the name of compile time: it only requires a DominatorTree, which is fairly cheap in the grand scheme of things. We could do fancier things with MemoryDependenceAnalysis to catch more cases, but this appears to catch most of the useful ones in my testing.
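A hypothetical example of the pattern this targets (not taken from the patch): g is accessed only by test, test cannot recurse, and every load of g is dominated by a store, so g can be demoted to a local inside test.

  static int g;

  static int test(int n) {
    g = n;                      // dominates every later load of g
    for (int i = 0; i < n; ++i)
      g += i;
    return g;                   // never observes g's value from a prior call
  }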
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253168 91177308-0d34-0410-b5e6-96231b3b80d8
The current implementation of the GEP visitor in InstCombine fails an assertion on a vector GEP with a mix of scalar and vector types, like this:
getelementptr double, double* %a, <8 x i32> %i
(It fails to create a "sext" from <8 x i32> to <8 x i64>.)
I fixed it and added some tests.
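A sketch of the shape of the fix (hypothetical helper; the actual patch edits
InstCombine's GEP visitor): when the index operand is a vector, the
pointer-width integer type it is extended to must be vectorized as well.

  #include "llvm/IR/DataLayout.h"
  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/IRBuilder.h"

  // Widen a GEP index to the pointer-width integer type, vectorizing that
  // type when the index is a vector (e.g. <8 x i32> becomes <8 x i64>).
  static llvm::Value *widenGEPIndex(llvm::Value *Idx,
                                    const llvm::DataLayout &DL,
                                    llvm::IRBuilder<> &B) {
    llvm::Type *IntPtrTy = DL.getIntPtrType(B.getContext());
    if (auto *VT = llvm::dyn_cast<llvm::VectorType>(Idx->getType()))
      IntPtrTy = llvm::VectorType::get(IntPtrTy, VT->getNumElements());
    return B.CreateSExtOrTrunc(Idx, IntPtrTy);
  }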
Differential Revision: http://reviews.llvm.org/D14485
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253162 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Document the differences between the isKnownWindowsMSVC() and isWindowsMSVC() methods on Triple.
Also removed the '\brief' Doxygen annotations; now that 'AUTOBRIEF' is enabled, they are unnecessary.
Subscribers: jfb, tberghammer, danalbert, srhines, dschuff
Differential Revision: http://reviews.llvm.org/D14110
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253159 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
There are currently two blocks with the METADATA_BLOCK id at module
scope. The first has the module-level metadata values (consisting of
some combination of METADATA_* record codes except for METADATA_KIND).
The second consists only of METADATA_KIND records. The latter is used
only in the METADATA_ATTACHMENT block within function blocks (for
metadata attached to instructions).
For ThinLTO we want to delay the parsing of module-level metadata
until all functions have been imported from that module (some
bookkeeping is used to suture the metadata up when it is read during a
post-pass).
However, we do need the METADATA_KIND records when parsing the function
body during importing, since those kinds are used as described above.
To simplify identification and parsing of just the block containing
the metadata kinds, use a different block id (METADATA_KIND_BLOCK_ID).
Support older bitcode without the new block id as well.
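A hedged sketch of the resulting dispatch (the block IDs are real; the helper
is illustrative, not the actual BitcodeReader code):

  #include "llvm/Bitcode/LLVMBitCodes.h"

  // Decide whether a metadata-related block must be parsed eagerly while
  // importing function bodies, or can be deferred to the post-pass.
  static bool mustParseEagerly(unsigned BlockID) {
    switch (BlockID) {
    case llvm::bitc::METADATA_KIND_BLOCK_ID:
      return true;   // kinds are needed while parsing imported bodies
    case llvm::bitc::METADATA_BLOCK_ID:
      return false;  // module-level metadata can wait; old bitcode may
                     // still carry METADATA_KIND records in here, though
    default:
      return false;
    }
  }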
Reviewers: dexonsmith, joker.eph
Subscribers: davidxl, llvm-commits
Differential Revision: http://reviews.llvm.org/D14654
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253154 91177308-0d34-0410-b5e6-96231b3b80d8
MCRelaxableFragment previously kept a copy of MCSubtargetInfo and
MCInst to enable re-encoding the MCInst later during relaxation. A copy
of MCSubtargetInfo (instead of a reference or pointer) was needed
because the feature bits could be modified by the parser.
This commit replaces the MCSubtargetInfo copy in MCRelaxableFragment
with a constant reference to MCSubtargetInfo. The copies of
MCSubtargetInfo are kept in MCContext, and the target parsers are now
responsible for asking MCContext to provide a copy whenever the feature
bits of MCSubtargetInfo have to be toggled.
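A minimal sketch, assuming the new hook is exposed as
MCContext::getSubtargetCopy (the helper and its exact shape are illustrative):

  #include "llvm/MC/MCContext.h"
  #include "llvm/MC/MCSubtargetInfo.h"

  // A parser that needs to flip a feature bit asks MCContext for a copy that
  // lives as long as the context, so MCRelaxableFragment can safely keep a
  // reference instead of its own copy.
  static const llvm::MCSubtargetInfo &
  toggleFeature(llvm::MCContext &Ctx, const llvm::MCSubtargetInfo &STI,
                uint64_t FeatureBit) {
    llvm::MCSubtargetInfo &Copy = Ctx.getSubtargetCopy(STI);
    Copy.ToggleFeature(FeatureBit);
    return Copy;
  }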
With this patch, I saw a 4% reduction in peak memory usage when I
compiled verify-uselistorder.lto.bc using llc.
rdar://problem/21736951
Differential Revision: http://reviews.llvm.org/D14346
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253127 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Currently we always recompute LCSSA for outer loops after unrolling an
inner loop. That leads to compile time problem when we have big loop
nests, and we can solve it by avoiding unnecessary work. For instance,
if we only do partial unrolling, we don't break LCSSA, so we don't need
to rebuild it. Also, if all exits from the inner loop are inside the
enclosing loop, then complete unrolling won't break LCSSA either.
I replaced the unconditional LCSSA recomputation with conditional
recomputation plus an unconditional assert, and added several tests that
were failing while I experimented with this.
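A hedged sketch of the condition (names approximate the loop-unrolling code of
this era; the helper is illustrative):

  #include <cassert>
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/Dominators.h"
  #include "llvm/Transforms/Utils/LoopUtils.h"

  // Rebuild LCSSA for the parent loop only when complete unrolling could
  // have broken it, i.e. when some exit of the unrolled loop also leaves
  // the parent loop.
  static void restoreLCSSAIfNeeded(llvm::Loop *ParentL, bool CompletelyUnrolled,
                                   bool AllExitsInParent,
                                   llvm::DominatorTree &DT, llvm::LoopInfo *LI,
                                   llvm::ScalarEvolution *SE) {
    if (CompletelyUnrolled && !AllExitsInParent)
      llvm::formLCSSARecursively(*ParentL, DT, LI, SE);
    else
      assert(ParentL->isLCSSAForm(DT) && "LCSSA was unexpectedly broken");
  }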
Soon I plan to follow up with a similar patch for recalculating the
dominator tree.
Reviewers: hfinkel, dexonsmith, bogner, joker.eph, chandlerc
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14526
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253126 91177308-0d34-0410-b5e6-96231b3b80d8
attribute.
Even if the target supports shrink-wrapping, the prologue and epilogue
must not move because a crash can happen anywhere and sanitizers need
to be able to unwind from the PC of the crash.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253116 91177308-0d34-0410-b5e6-96231b3b80d8
Darwin reserves x18, so it's never ABI compliant to generate code that
uses it. Set the default value based on the OS part of the triple
rather than forcing front-ends to set the +reserve-x18 target feature
in order to build correct code for Darwin.
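A minimal sketch of the new default (Triple::isOSDarwin is the real
predicate; where the check lives in the AArch64 backend is not shown here):

  #include "llvm/ADT/Triple.h"

  // Reserve x18 whenever the OS part of the triple is Darwin-based, instead
  // of requiring front-ends to pass +reserve-x18.
  static bool shouldReserveX18(const llvm::Triple &TT) {
    return TT.isOSDarwin();
  }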
This will make r243310 redundant, so I'll revert that shortly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253102 91177308-0d34-0410-b5e6-96231b3b80d8
When there is only one node left in the ready queue and it is picked, call
the reason "ONLY1" instead of "NOCAND".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253096 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to transform the below loop into a memcpy.
void test(unsigned *__restrict__ a, unsigned *__restrict__ b) {
  for (int i = 2047; i >= 0; --i) {
    a[i] = b[i];
  }
}
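After the transformation, the loop is equivalent to (illustrative, not actual
compiler output):

  memcpy(a, b, 2048 * sizeof(unsigned));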
This is the memcpy version of r251518, which added support for memset on
loops with negative strides.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253091 91177308-0d34-0410-b5e6-96231b3b80d8
The C++ EH personality automatically restores ESP from the C++ EH
registration node after a catchret. I mistakenly thought it was like
SEH, which does not restore ESP.
It makes sense for C++ EH to differ from SEH here because SEH does not
use funclets for catches, and does not allow catching inside of finally.
C++ EH may need to unwind through multiple catch funclets and eventually
catchret to some outer funclet. Therefore, the runtime has to keep track
of which ESP to use with catchret, rather than having the compiler
reload it manually.
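A hypothetical C++ pattern showing why the runtime must own this decision:
control can unwind out of one catch funclet into another, so the ESP the
eventual catchret needs is not statically known to the compiler.

  static void demo() {
    try {
      try {
        throw 1;
      } catch (int) {
        throw 2.0;   // unwinds out of this catch funclet...
      }
    } catch (double) {
      // ...into this one, which eventually performs the catchret
    }
  }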
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253084 91177308-0d34-0410-b5e6-96231b3b80d8
Use ScalarEvolution to calculate memory access bounds.
Handle function calls based on readnone/nocapture attributes.
Handle memory intrinsics with constant size.
This change improves both recall and precision of IsAllocaSafe.
See the new tests (e.g. BitCastWide) for the kind of code that was wrongly
classified as safe.
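A hedged illustration of the SCEV-based bound check (hypothetical helper, not
the actual SafeStack code):

  #include "llvm/Analysis/ScalarEvolution.h"
  #include "llvm/IR/ConstantRange.h"

  // Accept an access only if every offset SCEV can prove possible keeps the
  // access inside the alloca: max(Offset) + AccessSize <= AllocaSize.
  static bool offsetRangeIsSafe(const llvm::SCEV *Offset, uint64_t AllocaSize,
                                uint64_t AccessSize,
                                llvm::ScalarEvolution &SE) {
    if (AccessSize > AllocaSize)
      return false;
    llvm::ConstantRange R = SE.getUnsignedRange(Offset);
    return R.getUnsignedMax().ule(AllocaSize - AccessSize);
  }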
SCEV efficiency seems to be limited by the fact that SafeStack runs late
(in CodeGenPrepare), when many loops have been unrolled or are otherwise
not in LCSSA form.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253083 91177308-0d34-0410-b5e6-96231b3b80d8
This patch enables combining UNPCKL with a vector_shuffle that moves the
upper half of a vector into the lower half into a single UNPCKH instruction.
For example:
t2: v16i8 = vector_shuffle<8,9,10,11,12,13,14,15,u,u,u,u,u,u,u,u> t1, undef:v16i8
t3: v16i8 = X86ISD::UNPCKL undef:v16i8, t2
will be combined to:
t3: v16i8 = X86ISD::UNPCKH undef:v16i8, t1
Differential Revision: http://reviews.llvm.org/D14399
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253067 91177308-0d34-0410-b5e6-96231b3b80d8
This is a recommit of 252842 which was reverted in 252859. The issue was
using %s format specifier for a StringRef - used Format's
left_justify(StringRef, int) instead.
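A hedged illustration of the fix (this call site is hypothetical):

  #include "llvm/ADT/StringRef.h"
  #include "llvm/Support/Format.h"
  #include "llvm/Support/raw_ostream.h"

  static void printName(llvm::raw_ostream &OS, llvm::StringRef Name) {
    // Buggy: StringRef is not null-terminated, so %s is undefined behavior.
    // OS << llvm::format("%-10s", Name);
    OS << llvm::left_justify(Name, 10);  // formats the StringRef directly
  }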
It'd be nice to have __attribute__((format(..))) on llvm::format, but
apparently it's only implemented for C-style variadics, not C++ variadic
templates. Perhaps we could fix that & conditionalize the attribute on
such...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253065 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
PE files are stripped by default, and only contain the names of exported
symbols.
The actual reason we bother to do this override by default is a quirk of
the way -gline-tables-only is implemented, so I phrased the check as "if
we are symbolizing from dwarf, do the symtab override".
This fixes lots of Windows ASan tests that I broke in r250582.
Reviewers: samsonov
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14594
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253051 91177308-0d34-0410-b5e6-96231b3b80d8
ISD::BITREVERSE matches "rbit" completely, so remove ARMISD::RBIT and mark ISD::BITREVERSE as legal, adding a test for lowering.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@253047 91177308-0d34-0410-b5e6-96231b3b80d8