This reverts commit 187198. It broke the bots.
The soft float test probably needs a -triple because of name differences.
On the hard float test I am getting "roundss $1, %xmm0, %xmm0" instead of
"vroundss $1, %xmm0, %xmm0, %xmm0".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187201 91177308-0d34-0410-b5e6-96231b3b80d8
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187198 91177308-0d34-0410-b5e6-96231b3b80d8
robust. It now uses an InstVisitor and a worklist to walk the uses of the
alloca transitively and detect the pattern we can directly promote: loads &
stores of the whole alloca, and instructions we can completely ignore.
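A minimal C++ sketch of the shape of that check, assuming a recent LLVM tree;
it collapses the transitive worklist walk to a single pass over direct users,
and the PromotabilityChecker / isTriviallyPromotable names are invented for
illustration:

  // Sketch only: accept whole-alloca loads and stores, reject everything
  // else. The real code also walks uses transitively via a worklist and
  // handles the "instructions we can completely ignore" category.
  #include "llvm/IR/InstVisitor.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  namespace {
  struct PromotabilityChecker
      : public InstVisitor<PromotabilityChecker, bool> {
    const AllocaInst *AI;
    explicit PromotabilityChecker(const AllocaInst *AI) : AI(AI) {}

    // Loads of the whole alloca are fine as long as they are simple.
    bool visitLoadInst(LoadInst &LI) { return LI.isSimple(); }

    // Stores are fine when the alloca is the pointer operand; storing the
    // alloca's address somewhere would escape it.
    bool visitStoreInst(StoreInst &SI) {
      return SI.isSimple() && SI.getValueOperand() != AI;
    }

    // Anything else blocks promotion in this simplified model.
    bool visitInstruction(Instruction &I) { return false; }
  };
  } // end anonymous namespace

  static bool isTriviallyPromotable(AllocaInst *AI) {
    PromotabilityChecker Checker(AI);
    for (User *U : AI->users())
      if (!Checker.visit(cast<Instruction>(U)))
        return false;
    return true;
  }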
Also, with this new implementation, teach both the predicate for testing
whether we can promote and the promotion engine itself to use the same code,
so we no longer have strange divergence between the two code paths.
I've added some silly test cases to demonstrate that we can now handle
slightly more degenerate code patterns. See below for why this is even
interesting.
Performance impact: roughly 1% regression in the performance of SROA or
ScalarRepl on a large C++-ish test case where most of the allocas are
basically ready for promotion. The reason is silly redundant work that I've
left FIXMEs for and which I'll address in the next commit. I wanted to
separate this commit as it changes the behavior.
Once the redundant work in removing the dead uses of the alloca is
fixed, this code appears to be faster than the old version. =]
So why is this useful? Because the previous check for promotability required
a *specific* visit pattern over the uses of the alloca: we *had* to look for
no more than 1 intervening use. The end goal is to
have SROA automatically detect when an alloca is already promotable and
directly hand it to the mem2reg machinery rather than trying to
partition and rewrite it. This is a 25% or more performance improvement
for SROA, and a significant chunk of the delta between it and
ScalarRepl. To get there, we need to make mem2reg actually capable of
promoting allocas which *look* promotable to SROA without having SROA do
tons of work to massage the code into just the right form.
This is actually the tip of the iceberg. There are tremendous potential
savings we can realize here by de-duplicating work between mem2reg and
SROA.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187191 91177308-0d34-0410-b5e6-96231b3b80d8
The bitcode representation that attribute kinds are encoded into / decoded from
should be independent of the current set of LLVM attributes and their position
in the AttrKind enum. This patch explicitly encodes attributes to fixed bitcode
values.
With this patch applied, LLVM does not silently misread attributes written by
LLVM 3.3. We also enhance the decoding slightly such that an error message is
printed if an unknown AttrKind encoding is detected.
Bonus: Dropping bitcode attributes from AttrKind is now easy, as old AttrKinds
do not need to be kept to support the Bitcode reader.
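A hedged C++ sketch of the fixed-encoding scheme; the numeric values and the
encodeAttrKind name below are illustrative, not the actual encodings from the
patch:

  // Map each AttrKind to an explicit, stable bitcode value instead of its
  // (reorderable) position in the enum.
  #include "llvm/IR/Attributes.h"
  #include "llvm/Support/ErrorHandling.h"
  #include <cstdint>

  static uint64_t encodeAttrKind(llvm::Attribute::AttrKind Kind) {
    switch (Kind) {
    case llvm::Attribute::Alignment:    return 1;  // values illustrative,
    case llvm::Attribute::AlwaysInline: return 2;  // not the real encodings
    case llvm::Attribute::NoInline:     return 11;
    // ... one fixed value per supported kind ...
    default:
      llvm_unreachable("unexpected attribute kind"); // writer knows all kinds
    }
  }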
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187186 91177308-0d34-0410-b5e6-96231b3b80d8
This patch provides basic support for powerpc64le as an LLVM target.
However, use of this target will not actually generate little-endian
code. Instead, use of the target will cause the correct little-endian
built-in defines to be generated, so that code that tests for
__LITTLE_ENDIAN__, for example, will be correctly parsed for
syntax-only testing. Code generation will otherwise be the same as
powerpc64 (big-endian), for now.
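For example (illustrative only), source like the following now preprocesses
down the intended branch under a powerpc64le triple, even though the generated
code is still big-endian:

  // Syntax-only runs see the little-endian path on ppc64le because the
  // built-in defines are correct, independent of code generation.
  #ifdef __LITTLE_ENDIAN__
  typedef unsigned LowByteFirstWord;   // selected on powerpc64le
  #else
  typedef unsigned HighByteFirstWord;  // selected on powerpc64 (big-endian)
  #endif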
The patch leaves open the possibility of creating a little-endian
PowerPC64 back end, but there is no immediate intent to create such a
thing.
The LLVM portions of this patch simply add ppc64le coverage everywhere
that ppc64 coverage currently exists. There is nothing of any import
worth testing until such time as little-endian code generation is
implemented. In the corresponding Clang patch, there is a new test
case variant to ensure that correct built-in defines for little-endian
code are generated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187179 91177308-0d34-0410-b5e6-96231b3b80d8
On PPC32, va_list is a structure, so va_copy needs to copy the whole
structure, not just a pointer. This implements that and thus fixes va_copy
on PPC32. Fixes #15286. Both bug and patch by Florian Zeitz!
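As a small illustration of why this matters (ordinary C++, nothing
PPC-specific in the source): when va_list is a structure, va_copy must copy
the whole structure so the two lists can be traversed independently:

  #include <cstdarg>

  int sum_twice(int n, ...) {
    va_list ap, ap2;
    va_start(ap, n);
    va_copy(ap2, ap); // whole-structure copy on PPC32, not a pointer copy
    int s = 0;
    for (int i = 0; i < n; ++i) s += va_arg(ap, int);  // first traversal
    for (int i = 0; i < n; ++i) s += va_arg(ap2, int); // independent second
    va_end(ap2);
    va_end(ap);
    return s;
  }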
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187158 91177308-0d34-0410-b5e6-96231b3b80d8
Back in r140220 we removed the autoconf code that would set LLVMCC_OPTION
since it was only used by the test-suite. This patch now removes code
that would only be used if LLVMCC_OPTION was set.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187154 91177308-0d34-0410-b5e6-96231b3b80d8
The previous change to local live range allocation also suppressed
eviction of local ranges. In rare cases, this could result in more
expensive register choices. This commit actually revives a feature
that I added long ago: check if live ranges can be reassigned before
eviction. But now it only happens in rare cases of evicting a local
live range because another local live range wants a cheaper register.
The benefit is improved code size for some benchmarks on x86 and armv7.
I measured no significant compile time increase and performance
changes are noise.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187140 91177308-0d34-0410-b5e6-96231b3b80d8
Also avoid locals evicting locals just because they want a cheaper register.
Problem: MI Sched knows exactly how many registers we have and assumes
they can be colored. In cases where we have large blocks, usually from
unrolled loops, greedy coloring fails. This is a source of
"regressions" from the MI Scheduler on x86. I noticed this issue on
x86 where we have long chains of two-address defs in the same live
range. It's easy to see this in matrix multiplication benchmarks like
IRSmk and even the unit test misched-matmul.ll.
A fundamental difference between the LLVM register allocator and
conventional graph coloring is that in our model a live range can't
discover its neighbors; it can only verify them. That's why
we initially went for greedy coloring and added eviction to deal with
the hard cases. However, for singly defined and two-address live
ranges, we can optimally color without visiting neighbors simply by
processing the live ranges in instruction order.
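A toy C++ illustration of that claim (all names invented): when ranges are
processed in instruction order, a register is known to be free the moment its
previous range has ended, so no neighbor inspection is needed:

  #include <algorithm>
  #include <vector>

  struct LiveRange { int Start, End, Reg = -1; };

  static void colorInOrder(std::vector<LiveRange> &Ranges, int NumRegs) {
    std::sort(Ranges.begin(), Ranges.end(),
              [](const LiveRange &A, const LiveRange &B) {
                return A.Start < B.Start;
              });
    std::vector<int> FreeAt(NumRegs, 0); // instruction where each reg frees
    for (LiveRange &LR : Ranges)
      for (int R = 0; R < NumRegs; ++R)
        if (FreeAt[R] <= LR.Start) { // previous occupant already ended
          LR.Reg = R;
          FreeAt[R] = LR.End;
          break;                     // Reg stays -1 if none free: a spill
        }
  }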
Other beneficial side effects:
It is much easier to understand and debug regalloc for large blocks
when the live ranges are allocated in order. Yes, global allocation is
still very confusing, but it's nice to be able to comprehend what
happened locally.
Heuristics could be added to bias register assignment based on
instruction locality (think late register pairing, banks...).
Intuitively this will make some test cases that are on the threshold
of register pressure more stable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187139 91177308-0d34-0410-b5e6-96231b3b80d8
The last patch corrected some issues, but constant-pool entries had actual
codegen bugs in the large memory model (which MCJIT uses).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187126 91177308-0d34-0410-b5e6-96231b3b80d8
This should actually make the MCJIT tests pass again on AArch64. I don't know
how I missed their failure before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187120 91177308-0d34-0410-b5e6-96231b3b80d8
For two intrinsics 'llvm.nvvm.texsurf.handle' and 'llvm.nvvm.texsurf.handle.internal',
TableGen was emitting matching code like:
  if (Name.startswith("llvm.nvvm.texsurf.handle")) ...
  if (Name.startswith("llvm.nvvm.texsurf.handle.internal")) ...
We can never match "llvm.nvvm.texsurf.handle.internal" here because it will
always be erroneously matched by the first condition.
The fix is to sort the intrinsic names and emit them in reverse order.
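A minimal C++ sketch of the emission-order fix (the emitMatchers helper is
invented; the real change lives in the TableGen intrinsic emitter): with the
names sorted and walked in reverse, a name that is a prefix of another is
always tested after it:

  #include <algorithm>
  #include <cstdio>
  #include <string>
  #include <vector>

  static void emitMatchers(std::vector<std::string> Names) {
    std::sort(Names.begin(), Names.end()); // "...handle" < "...handle.internal"
    for (auto I = Names.rbegin(), E = Names.rend(); I != E; ++I)
      std::printf("if (Name.startswith(\"%s\")) ...\n", I->c_str());
  }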
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187119 91177308-0d34-0410-b5e6-96231b3b80d8
Before the patch we took advantage of the fact that the compare and
branch are glued together in the selection DAG and fused them together
(where possible) while emitting them. This seemed to work well in practice.
However, fusing the compare so early makes it harder to remove redundant
compares in cases where CC already has a suitable value. This patch
therefore uses the peephole analyzeCompare/optimizeCompareInstr pair of
functions instead.
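A toy C++ model of what the peephole pair buys us (the types and names below
are simplified stand-ins for the real TargetInstrInfo hooks): if the
instruction just before a compare already set CC with the same operands, the
compare is redundant and can be deleted:

  #include <cstddef>
  #include <vector>

  struct MI { enum Kind { Sub, Cmp } K; int LHS, RHS; };

  // Stand-in for analyzeCompare: does I set CC exactly as "cmp LHS, RHS"?
  static bool setsSameCC(const MI &I, int LHS, int RHS) {
    return I.K == MI::Sub && I.LHS == LHS && I.RHS == RHS;
  }

  // Stand-in for optimizeCompareInstr: drop compares whose CC value is
  // already available from the preceding instruction.
  static void elideRedundantCompares(std::vector<MI> &Block) {
    for (std::size_t I = 1; I < Block.size(); ++I)
      if (Block[I].K == MI::Cmp &&
          setsSameCC(Block[I - 1], Block[I].LHS, Block[I].RHS))
        Block.erase(Block.begin() + I--); // users keep reading the live CC
  }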
No behavioral change intended, but it paves the way for a later patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187116 91177308-0d34-0410-b5e6-96231b3b80d8
As with the stores, these instructions can trap when the condition is false,
so they are only used for things like (cond ? x : *ptr).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187112 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions are allowed to trap even if the condition is false,
so for now they are only used for "*ptr = (cond ? x : *ptr)"-style
constructs.
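For reference, the C++-level shape of the construct this commit describes:
since *ptr is written regardless of the condition, a conditional-store
instruction that may trap even when the condition is false cannot introduce
a fault the program would not already incur:

  // The store to *ptr happens on both paths, so a trapping conditional
  // store is safe here.
  void condStore(bool cond, int x, int *ptr) {
    *ptr = cond ? x : *ptr;
  }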
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187111 91177308-0d34-0410-b5e6-96231b3b80d8
Make sure the context and type fields are MDNodes. We will generate
verification errors if those fields are non-empty strings.
Fix the test cases to make them pass the verifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187106 91177308-0d34-0410-b5e6-96231b3b80d8
The language reference says that:
"If a symbol appears in the @llvm.used list, then the compiler,
assembler, and linker are required to treat the symbol as if there is
a reference to the symbol that it cannot see"
Since even the linker cannot see the reference, we must assume that the
reference may be made through the symbol table. For example, a user can add
__attribute__((used)) to a debug helper function like dump and use it from
a debugger.
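A small C++ illustration of that scenario, using an invented dump helper:

  #include <cstdio>

  // Nothing in the program calls this, but __attribute__((used)) keeps it
  // (and its symbol table entry) alive so a debugger can invoke it by name.
  __attribute__((used))
  static void dump(const int *Data, int N) {
    for (int I = 0; I != N; ++I)
      std::printf("%d ", Data[I]);
    std::printf("\n");
  }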
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187103 91177308-0d34-0410-b5e6-96231b3b80d8
There's no need to specify a flag to omit frame pointer elimination on non-leaf
functions...(Honestly, I can't parse that option out.) Use the function attribute
stuff instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187093 91177308-0d34-0410-b5e6-96231b3b80d8
Prior to this patch, the IfConverter could widen the set of cases in which a
sequence of instructions is executed, because of the way it uses nested
predicates. This results in incorrect execution.
For instance, let A be a basic block that flows conditionally into B, and let
B be a predicated block.
B can be predicated with A.BrToBPredicate into A iff B.Predicate is less
"permissive" than A.BrToBPredicate, i.e., iff A.BrToBPredicate subsumes
B.Predicate.
The IfConverter was checking the opposite: B.Predicate subsumes
A.BrToBPredicate.
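A toy C++ rendering of the two directions (predicates modeled as condition
sets; every name below is invented):

  #include <algorithm>
  #include <set>
  #include <string>

  using Predicate = std::set<std::string>;

  // True iff P subsumes Q, i.e. Q carries every condition of P (Q is at
  // least as restrictive).
  static bool subsumes(const Predicate &P, const Predicate &Q) {
    return std::includes(Q.begin(), Q.end(), P.begin(), P.end());
  }

  // Correct test before predicating B into A:
  //   subsumes(A_BrToBPredicate, B_Predicate)
  // The buggy test was the opposite direction:
  //   subsumes(B_Predicate, A_BrToBPredicate)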
<rdar://problem/14379453>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187071 91177308-0d34-0410-b5e6-96231b3b80d8