This fixes a spurious error when trying to initialize
DwarfNumbers with a !cast<int> of a bits initializer.
getValuesAsListOfInts("DwarfNumbers") would see the cast rather than
an IntInit, and so would give up.
It seems likely that this could be generalized to attempt
the convertInitializerTo for any type. I'm not really sure
why the existing code special-cases the string casts when
convertInitializerTo seems like it should handle this sort of thing
in general.
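A hedged C++ sketch of the idea against llvm/TableGen/Record.h (the
committed fix may differ in detail): if a list element is not already
an IntInit, try converting it before giving up.

  if (auto *II = dyn_cast<IntInit>(Elt))
    Ints.push_back(II->getValue());
  else if (Init *Conv = Elt->convertInitializerTo(IntRecTy::get()))
    Ints.push_back(cast<IntInit>(Conv)->getValue());
  else
    PrintFatalError("expected a list of ints");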
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243722 91177308-0d34-0410-b5e6-96231b3b80d8
For a modulo (remainder) operation,
clang -target armv7-none-linux-gnueabi generates "__modsi3"
clang -target armv7-none-eabi generates "__aeabi_idivmod"
clang -target armv7-linux-androideabi generates "__modsi3"
Android's Bionic libc doesn't provide __modsi3; instead it provides
"__aeabi_idivmod". This patch fixes ARMISelLowering to generate the
correct call whenever there is a modulo operation.
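For illustration, a minimal sketch (function name hypothetical) of
code that hits this path; the remainder has no hardware instruction
on this target, so it becomes a runtime library call:

  extern "C" int rem32(int a, int b) {
    return a % b; // for androideabi this now calls __aeabi_idivmod
  }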
Differential Revision: http://reviews.llvm.org/D11661
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243717 91177308-0d34-0410-b5e6-96231b3b80d8
Fixing MinSize attribute handling was discussed in D11363.
This is a prerequisite patch to doing that.
The handling of OptSize when lowering mem* functions was broken
on Darwin because it wants to ignore -Os for these cases, but the
existing logic also made it ignore -Oz (MinSize).
The Linux change demonstrates a widespread problem. The backend
doesn't usually recognize the MinSize attribute by itself; it
assumes that if the MinSize attribute exists, then the OptSize
attribute must also exist.
Fixing this more generally will be a follow-on patch or two.
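A hedged sketch of the intended logic (the helper name is
hypothetical, not from the patch):

  static bool shouldLowerMemFuncForSize(const Function &F, bool IsDarwin) {
    if (F.hasFnAttribute(Attribute::MinSize)) // -Oz must always win
      return true;
    // Darwin deliberately ignores plain -Os for mem* lowering.
    return !IsDarwin && F.hasFnAttribute(Attribute::OptimizeForSize);
  }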
Differential Revision: http://reviews.llvm.org/D11568
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243693 91177308-0d34-0410-b5e6-96231b3b80d8
The patch changes SLPVectorizer::vectorizeStores to choose the immediately
succeeding or preceding candidate for a store instruction when it has multiple
consecutive candidates. This gives it a better chance of finding more SLP
vectorization opportunities.
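An illustrative input (not taken from the patch): four consecutive
stores that are more likely to fold into a single vector store when
each store is paired with its immediate neighbour:

  void store4(int *p, int a, int b, int c, int d) {
    p[0] = a;
    p[1] = b;
    p[2] = c;
    p[3] = d;
  }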
Differential Revision: http://reviews.llvm.org/D10445
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243666 91177308-0d34-0410-b5e6-96231b3b80d8
Update the debug info in the check-lines because the change in r243638
introduced a constant initialization before the prologue's end as part
of a register spill.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243640 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This hidden option disabled code generation through FastISel by
default. It has been removed from the available options and from the
FastISel tests that required it in order to run.
Reviewers: dsanders
Subscribers: qcolombet, llvm-commits
Differential Revision: http://reviews.llvm.org/D11610
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243638 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Previously, we would sign-extend non-boolean negative constants and
zero-extend otherwise. This was problematic for PHI instructions with
negative values that had a type with bitwidth less than that of the
register used for materialization.
More specifically, ComputePHILiveOutRegInfo() assumes that the constants
present in a PHI node are zero-extended in their container and deduces
the known bits from that assumption.
For example, previously we would materialize an i16 -4 with the
following instruction:
addiu $r, $zero, -4
The register would end up with the 32-bit two's complement representation
of -4. However, ComputePHILiveOutRegInfo() would generate a constant
with the upper 16-bits set to zero. The SelectionDAG builder would use
that information to generate an AssertZero node that would remove any
subsequent trunc & zero_extend nodes.
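A worked illustration of the mismatch in plain C++ (not compiler
code):

  #include <cstdint>
  const uint32_t SignExt = uint32_t(int32_t(int16_t(-4))); // 0xFFFFFFFC, what addiu produces
  const uint32_t ZeroExt = uint32_t(uint16_t(-4));         // 0x0000FFFC, what ComputePHILiveOutRegInfo() assumes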
In theory, we should modify ComputePHILiveOutRegInfo() to consult
target-specific hooks about the way they prefer to materialize the
given constants. However, git-blame reports that this specific code
has not been touched since 2011 and it seems to be working well for every
target so far.
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11592
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243636 91177308-0d34-0410-b5e6-96231b3b80d8
Bonus change to remove emacs major mode marker from SystemZMachineFunctionInfo.cpp because emacs already knows it's C++ from the extension. Also fix typo "appeary" in AMDGPUMCAsmInfo.h.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243585 91177308-0d34-0410-b5e6-96231b3b80d8
The dsymutil-classic -v option dumps the tool version rather than
putting it in verbose mode. Rename -v to -verbose and update the
tests that use it (in the process removing it from a few tests that
didn't require it anymore since the -dump-debug-map option was
introduced).
A followup commit will reintroduce the -v option that dumps the
version.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243582 91177308-0d34-0410-b5e6-96231b3b80d8
It's potentially more efficient on Cyclone, and judging from the
optimization guides and scheduler models it has no effect on Cortex-A53
or A57. In general you'd expect a MOV to be about the most efficient
instruction with its semantics, even though the official "UXTW" alias is
really a UBFX.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243576 91177308-0d34-0410-b5e6-96231b3b80d8
This commit serializes the save and restore machine basic block references from
the machine frame information class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243575 91177308-0d34-0410-b5e6-96231b3b80d8
This patch vectorizes the v2i64/v4i64 ASHR shift operations - the last remaining integer vector shifts that still had to be transferred to/from the scalar unit to be computed.
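The operation in scalar form, for illustration only:

  void ashr_v4i64(long long v[4], int s) {
    for (int i = 0; i < 4; ++i)
      v[i] >>= s; // arithmetic shift right; now stays in the vector unit
  }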
Differential Revision: http://reviews.llvm.org/D11439
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243569 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
returns_twice functions (most importantly, setjmp) are
optimization-hostile: if a local variable is promoted to a register and
is changed between the setjmp() and longjmp() calls, that update will be
undone. This is the reason why "man setjmp" advises marking all such
locals as "volatile".
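A minimal illustration of the hazard (no ASan involved):

  #include <csetjmp>
  static jmp_buf Buf;
  int f() {
    int X = 0;           // without 'volatile', X may live only in a register
    if (setjmp(Buf) == 0) {
      X = 1;
      longjmp(Buf, 1);   // restores registers saved at setjmp time,
    }                    // potentially undoing the X = 1 update
    return X;            // may return 0; 'volatile int X' avoids this
  }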
Even that may not be enough for ASan, though: when it replaces a static
alloca with a dynamic one (optionally taken if UAR mode is enabled), it
adds a whole lot of SSA values and computations of local-variable
addresses that can involve virtual registers and cause unexpected
behavior when those registers are restored from the buffer saved at
setjmp.
To fix this, just disable dynamic alloca and UAR tricks whenever we see
a returns_twice call in the function.
Reviewers: rnk
Subscribers: llvm-commits, kcc
Differential Revision: http://reviews.llvm.org/D11495
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243561 91177308-0d34-0410-b5e6-96231b3b80d8
Given certain shuffle-vector masks, LLVM emits splat instructions
which splat the wrong bytes from the source register. The issue is
that the function PPC::isSplatShuffleMask() in PPCISelLowering.cpp
does not ensure that the splat pattern found is requesting bytes that
are aligned on an EltSize boundary. This patch detects this situation
as not a valid splat mask, resulting in a permute being generated
instead of a splat.
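A hedged sketch of the added guard (the exact form in the patch may
differ): reject a splat whose starting index is not aligned on an
element boundary.

  if (N->getMaskElt(0) % EltSize != 0)
    return false;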
Patch and test case by Tyler Kenney, cleaned up a bit by me.
This is a simple bug fix that would be good to incorporate into 3.7.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243519 91177308-0d34-0410-b5e6-96231b3b80d8
This commit defines subtarget feature strict-align and uses it instead of
cl::opt -aarch64-strict-align to decide whether strict alignment should be
forced.
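A hedged C++ sketch of the intended use (the accessor name is assumed,
not confirmed here): alignment decisions query the subtarget feature
rather than a global cl::opt.

  if (Subtarget->requiresStrictAlign())
    return false; // disallow the misaligned access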
rdar://problem/21529937
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243516 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
As added initially, statepoints required their call targets to be a
constant pointer null if ``numPatchBytes`` was non-zero. This turns out
to be a problem ergonomically, since there is no way to mark patchable
statepoints as calling a (readable) symbolic value.
This change removes the restriction of requiring ``null`` call targets
for patchable statepoints and changes PlaceSafepoints to maintain the
symbolic call target through its transformation.
Reviewers: reames, swaroop.sridhar
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11550
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243502 91177308-0d34-0410-b5e6-96231b3b80d8
PR24141: https://llvm.org/bugs/show_bug.cgi?id=24141
contains a test case where we have duplicate entries in a node's uses() list.
After r241826, we use CombineTo() to delete dead nodes when combining the uses into
reciprocal multiplies, but this fails if we encounter the just-deleted node again in
the list.
The solution in this patch is to not add duplicate entries to the list of users that
we will subsequently iterate over. For the test case, this avoids triggering the
combine divisors logic entirely because there really is only one user of the divisor.
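A hedged sketch of the idea (variable names assumed, not from the
patch):

  SmallPtrSet<SDNode *, 8> Seen;
  for (SDNode *U : N->uses())
    if (Seen.insert(U).second) // insert() reports duplicates in uses()
      Users.push_back(U);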
Differential Revision: http://reviews.llvm.org/D11345
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243500 91177308-0d34-0410-b5e6-96231b3b80d8
This commit defines subtarget feature strict-align and uses it instead of
cl::opt -arm-strict-align to decide whether strict alignment should be
forced. Also, remove the logic that was checking the OS and architecture,
as clang is now responsible for setting strict-align based on the command
line options specified and the target architecture and OS.
rdar://problem/21529937
http://reviews.llvm.org/D11470
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243493 91177308-0d34-0410-b5e6-96231b3b80d8
Reapply r243271 with more fixes; although we do not handle multiple
sources with coalescable copies, we were not properly skipping that
case.
- Teach the ValueTracker in the PeepholeOptimizer to look through PHI
instructions.
- Add a findNextSourceAndRewritePHI method to look up multiple sources
returned by the ValueTracker and rewrite PHIs with the new sources.
With these changes we can find more register sources and rewrite more
copies to allow coalescing of bitcast instructions. Hence, we eliminate
unnecessary VR64 <-> GR64 copies on x86, but this could be extended to
other architectures by marking "isBitcast" on target-specific
instructions. The x86 example follows:
A:
  psllq %mm1, %mm0
  movd %mm0, %r9
  jmp C
B:
  por %mm1, %mm0
  movd %mm0, %r9
  jmp C
C:
  movd %r9, %mm0
  pshufw $238, %mm0, %mm0
Becomes:
A:
  psllq %mm1, %mm0
  jmp C
B:
  por %mm1, %mm0
  jmp C
C:
  pshufw $238, %mm0, %mm0
Differential Revision: http://reviews.llvm.org/D11197
rdar://problem/20404526
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243486 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Currently, we support only the MIPS O32 ABI calling convention for call
lowering. With this change we avoid using the O32 calling convention for
lowering calls marked as using the fast calling convention.
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11515
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243485 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Generate correct code for the select instruction by zero-extending
its boolean/condition operand to GPR-width. This is necessary because
the conditional-move instructions operate on the whole register.
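A hedged sketch of the emission (helper names taken from FastISel and
MipsFastISel; the exact sequence may differ): mask the i1 condition
down to one bit so the conditional move reads a clean, zero-extended
GPR.

  unsigned CondReg = getRegForValue(Cond);
  unsigned ZExtCondReg = createResultReg(&Mips::GPR32RegClass);
  emitInst(Mips::ANDi, ZExtCondReg).addReg(CondReg).addImm(1);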
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11506
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243469 91177308-0d34-0410-b5e6-96231b3b80d8
If the pointer is the store's value operand, this would produce
a broken module. Make sure the use is actually for the pointer operand.
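A hedged sketch of the guard (names assumed): given a use U of the
pointer Ptr, only treat the store as an access through Ptr when Ptr is
the address operand.

  if (auto *SI = dyn_cast<StoreInst>(U)) {
    if (SI->getPointerOperand() != Ptr)
      continue; // Ptr is the stored value here, not the address.
  }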
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243462 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Make Scalar Evolution able to propagate NSW and NUW flags from instructions to SCEVs in some cases. This is based on reasoning about when poison from instructions with these flags would trigger undefined behavior. This gives a 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
There does not seem to be clear agreement about when poison should be considered to propagate through instructions. In this analysis, poison propagates only in cases where that should be uncontroversial.
This change makes LSR able to create induction variables for expressions like &ptr[i + offset] for loops like this:
for (int i = 0; i < limit; ++i) {
  sum += ptr[i + offset];
}
Here ptr is a 64-bit pointer and offset is a 32-bit integer. For NVPTX, LSR currently creates an induction variable for i + offset instead, which is not as fast. Improving this situation is what brings the 13% speed-up on some Eigen3-based Google-internal microbenchmarks for NVPTX.
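Roughly what the improved code amounts to, as illustrative C++ rather
than actual compiler output: a single 64-bit pointer induction
variable.

  for (int *p = ptr + offset, *e = ptr + offset + limit; p != e; ++p)
    sum += *p;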
There are more details in this discussion on llvmdev.
June: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-June/thread.html#87234
July: http://lists.cs.uiuc.edu/pipermail/llvmdev/2015-July/thread.html#87392
Patch by Bjarke Roune
Reviewers: eliben, atrick, sanjoy
Subscribers: majnemer, hfinkel, jingyue, meheff, llvm-commits
Differential Revision: http://reviews.llvm.org/D11212
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243460 91177308-0d34-0410-b5e6-96231b3b80d8
This commit adds a MIR test case for the commit r242191, which was committed
without one. This test case verifies that the ExpandPostRA pass expands the
GR64 <-> VR64 copies into the appropriate MMX_MOV instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@243457 91177308-0d34-0410-b5e6-96231b3b80d8