The tests !defined(__ppc__) && !defined(__powerpc__) are not needed
or helpful when verifying that code is being compiled for a 64-bit
target. The simpler test provided by this revision is sufficient to
tell if the target is 64-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187318 91177308-0d34-0410-b5e6-96231b3b80d8
do in the SDag when lowering references to the GOT: use
ARMConstantPoolSymbol rather than creating a dummy global variable. The
computation of the alignment still feels weird (it uses IR types and
datalayout) but it preserves the exact previous behavior. This change
fixes the memory leak of the global variable detected on the valgrind
leak checking bot.
Thanks to Benjamin Kramer for pointing me at ARMConstantPoolSymbol to
handle this use case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187303 91177308-0d34-0410-b5e6-96231b3b80d8
me) should start watching this bot more, as it's catching lots of bugs.
The fix here is to not construct the global if we aren't going to need
it. That's cheaper anyways, and globals have highly predictable types in
practice. I've added an assert to catch skew between our manual testing
of the type and the actual type just for paranoia's sake.
Note that this pattern is actually fine in most globals because when you
build a global with a module it automatically is moved to be owned by
that module. But here, we're in isel and don't really want to do that.
The solution of not creating a global is simpler anyways.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187302 91177308-0d34-0410-b5e6-96231b3b80d8
than once, and the second time through we leaked memory. Found thanks to
the vg-leak bot, but I can't locally reproduce it with valgrind. The
debugger confirms that it is in fact leaking here.
This whole code is totally gross. Why is initialize being called on each
runOnFunction??? Why aren't these OwningPtr<>s, and why aren't their
lifetimes better defined? Anyways, this is just a surgical change to
help out the leak checking bots.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187299 91177308-0d34-0410-b5e6-96231b3b80d8
Merge consecutive if-regions if they contain identical statements.
Both transformations reduce the number of branches. The transformation
is guarded by a target hook, and is currently enabled only for +R600,
but its correctness has been tested on the X86 target using a variety
of CPU benchmarks.
Patch by: Mei Ye
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187278 91177308-0d34-0410-b5e6-96231b3b80d8
Both GCC and LLVM will implicitly define __ppc__ and __powerpc__ for
all PowerPC targets, whether 32- or 64-bit. They will both implicitly
define __ppc64__ and __powerpc64__ for 64-bit PowerPC targets, and not
for 32-bit targets. We cannot be sure that all other possible
compilers used to compile Clang/LLVM define both __ppc__ and
__powerpc__, for example, so it is best to check for both when relying
on either inside the Clang/LLVM code base.
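The robust pattern, then, is to test both spellings together. For
illustration (not part of this patch), the checks look like this:

    #if defined(__ppc__) || defined(__powerpc__)
    // Compiling for some PowerPC target, 32- or 64-bit.
    #endif

    #if defined(__ppc64__) || defined(__powerpc64__)
    // Compiling specifically for a 64-bit PowerPC target.
    #endif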
This patch makes sure we always check for both variants. In addition,
it fixes one unnecessary check in lib/Target/PowerPC/PPCJITInfo.cpp.
(At least one of __ppc__ and __powerpc__ should always be defined when
compiling for a PowerPC target, no matter which compiler is used, so
testing for them is unnecessary.)
There are some places in the compiler that check for other variants,
like __POWERPC__ and _POWER, and I have left those in place. There is
no need to add them elsewhere. This seems to be in Apple-specific
code, and I won't take a chance on breaking it.
There is no intended change in behavior; thus, no test cases are
added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187248 91177308-0d34-0410-b5e6-96231b3b80d8
to have register FCC0 (the first floating point condition code register) in
their Uses/Defs list.
No intended functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187233 91177308-0d34-0410-b5e6-96231b3b80d8
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
Attempt to fix the buildbots by making the X86 test I just added platform independent
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187202 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit 187198. It broke the bots.
The soft float test probably needs a -triple because of name differences.
On the hard float test I am getting a "roundss $1, %xmm0, %xmm0", instead of
"vroundss $1, %xmm0, %xmm0, %xmm0".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187201 91177308-0d34-0410-b5e6-96231b3b80d8
CustomLowerNode was not being called during SplitVectorOperand,
meaning custom legalization could not be used by targets.
This also adds a test case for NVPTX that depends on this custom
legalization.
Differential Revision: http://llvm-reviews.chandlerc.com/D1195
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187198 91177308-0d34-0410-b5e6-96231b3b80d8
This patch provides basic support for powerpc64le as an LLVM target.
However, use of this target will not actually generate little-endian
code. Instead, use of the target will cause the correct little-endian
built-in defines to be generated, so that code that tests for
__LITTLE_ENDIAN__, for example, will be correctly parsed for
syntax-only testing. Code generation will otherwise be the same as
powerpc64 (big-endian), for now.
The patch leaves open the possibility of creating a little-endian
PowerPC64 back end, but there is no immediate intent to create such a
thing.
The LLVM portions of this patch simply add ppc64le coverage everywhere
that ppc64 coverage currently exists. There is nothing of any import
worth testing until such time as little-endian code generation is
implemented. In the corresponding Clang patch, there is a new test
case variant to ensure that correct built-in defines for little-endian
code are generated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187179 91177308-0d34-0410-b5e6-96231b3b80d8
structure not just a pointer. This implements that and thus fixes va_copy
on PPC32. Fixes #15286. Both bug and patch by Florian Zeitz!
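For illustration (not part of the patch), this is the kind of user code
the fix makes work; on PPC32, va_list is a structure, so va_copy must
duplicate the whole aggregate rather than a single pointer:

    #include <cstdarg>

    // Sums the variadic arguments twice via va_copy; on PPC32 the copy
    // must clone the whole va_list structure, not just a pointer.
    int sum_twice(int n, ...) {
      va_list ap, ap2;
      va_start(ap, n);
      va_copy(ap2, ap);
      int total = 0;
      for (int i = 0; i < n; ++i)
        total += va_arg(ap, int);
      for (int i = 0; i < n; ++i)
        total += va_arg(ap2, int);
      va_end(ap2);
      va_end(ap);
      return total;
    }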
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187158 91177308-0d34-0410-b5e6-96231b3b80d8
The last patch corrected some issues, but constant-pool entries had actual
codegen bugs in the large memory model (which MCJIT uses).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187126 91177308-0d34-0410-b5e6-96231b3b80d8
Before the patch we took advantage of the fact that the compare and
branch are glued together in the selection DAG and fused them together
(where possible) while emitting them. This seemed to work well in practice.
However, fusing the compare so early makes it harder to remove redundant
compares in cases where CC already has a suitable value. This patch
therefore uses the peephole analyzeCompare/optimizeCompareInstr pair of
functions instead.
No behavioral change intended, but it paves the way for a later patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187116 91177308-0d34-0410-b5e6-96231b3b80d8
As with the stores, these instructions can trap when the condition is false,
so they are only used for things like (cond ? x : *ptr).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187112 91177308-0d34-0410-b5e6-96231b3b80d8
These instructions are allowed to trap even if the condition is false,
so for now they are only used for "*ptr = (cond ? x : *ptr)"-style
constructs.
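To make the constraint concrete (illustration only), the first function
below may use a trapping conditional store, because *ptr is dereferenced
on both paths anyway; the second may not:

    // Safe: *ptr is read unconditionally, so the access cannot introduce
    // a new fault even if the conditional store traps when cond is false.
    void store_select(bool cond, int x, int *ptr) {
      *ptr = cond ? x : *ptr;
    }

    // Not convertible: when cond is false, ptr is never accessed, so a
    // trapping conditional store could fault where the original did not.
    void guarded_store(bool cond, int x, int *ptr) {
      if (cond)
        *ptr = x;
    }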
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187111 91177308-0d34-0410-b5e6-96231b3b80d8
There's no need to specify a flag to omit frame pointer elimination on non-leaf
nodes... (Honestly, I can't parse that option out.) Use the function attribute
stuff instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187093 91177308-0d34-0410-b5e6-96231b3b80d8
This removes the need to store the asm variant in each row of the single table that existed before. Shaves ~16K off the size of X86AsmParser.o.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187026 91177308-0d34-0410-b5e6-96231b3b80d8
This commit also implements these functions for R600 and removes a test
case that was relying on the buggy behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187007 91177308-0d34-0410-b5e6-96231b3b80d8
These are really the same address space in hardware. The only
difference is that CONSTANT_ADDRESS uses a special cache for faster
access. When we are unable to use the constant kcache for some reason
(e.g. smaller types or lack of indirect addressing) then the instruction
selector must use GLOBAL_ADDRESS loads instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@187006 91177308-0d34-0410-b5e6-96231b3b80d8
When vectors are built from a single value, the ARM lowering issues a
scalar_to_vector node.
This node is then always morphed into a move from the general purpose unit to
the vector unit.
When the value comes from a load, this can be simplified into a vector load to
the right lane.
This patch changes the lowering of insert_vector_elt to expose a vector
friendly pattern in this situation.
This is a step toward fixing <rdar://problem/14170854>.
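In intrinsics form the pattern looks roughly like this (a sketch using
NEON intrinsics; whether the load-to-lane is actually selected depends
on the subtarget):

    #include <arm_neon.h>

    // Inserting a freshly loaded scalar into a lane: instead of a move
    // from the GPR unit followed by a lane insert, this can become a
    // single vector load to the right lane, e.g. vld1.32 {d0[0]}, [r0].
    float32x4_t insert_loaded(float32x4_t v, const float *p) {
      return vsetq_lane_f32(*p, v, 0);
    }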
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186999 91177308-0d34-0410-b5e6-96231b3b80d8
This increases the number of opportunities we have for folding. With the
previous implementation we were unable to fold into any instructions
other than the first when multiple instructions were selected from a
single SDNode.
Reviewed-by: Vincent Lejeune <vljn at ovi.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186919 91177308-0d34-0410-b5e6-96231b3b80d8
A side-effect of this is that now the compiler expects kernel arguments
to be 4-byte aligned.
Reviewed-by: Vincent Lejeune <vljn at ovi.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186916 91177308-0d34-0410-b5e6-96231b3b80d8
This makes them consistent with 'bt' which already had this handling. gas has the same behavior. There have been discussions on the mailing list about determining size based on the immediate, but my goal here was just to remove the inconsistency.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186904 91177308-0d34-0410-b5e6-96231b3b80d8
It only didn't use it before because InstAlias handling in the asm printer seems to fail to count tied operands, so it tried to find an xor with 2 operands instead of the 3 it actually has.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186900 91177308-0d34-0410-b5e6-96231b3b80d8
Enable parsing all 32 floating point control registers $0-31 and stop trying to
parse floating point condition code register $fcc0. Also, return ParseFail if
the operand being parsed is not in the expected format.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186861 91177308-0d34-0410-b5e6-96231b3b80d8
instructions. With this patch:
1. ldr.n is recognized as mnemonic for the short encoding
2. ldr.w is recognized as mnemonic for the long encoding
3. ldr will map to either short or long encodings depending on the size of the offset
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186831 91177308-0d34-0410-b5e6-96231b3b80d8
After Ulrich's r180677 (thanks!) TableGen is intelligent enough to
handle tied constraints involving complex operands properly, so
virtually all of the ARM custom converters are now unnecessary.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186810 91177308-0d34-0410-b5e6-96231b3b80d8
indirect branches correctly. Under some circumstances, this led to the deletion
of basic blocks that were the destination of indirect branches. In that case it
left indirect branches to nowhere in the code.
This patch replaces, and is more general than either of the previous fixes for
indirect-branch-analysis issues, r181161 and r186461.
For other branches (not indirect) this refactor should have *almost* identical
behavior to the previous version. There are some corner cases where this
refactor is able to analyze blocks that the previous version could not (e.g.
this necessitated the update to thumb2-ifcvt2.ll).
<rdar://problem/14464830>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186735 91177308-0d34-0410-b5e6-96231b3b80d8
Follows the same lines as r186686, but much more limited, since we only
use ADD LOGICAL for multi-i64 additions.
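For reference, the kind of source that produces a multi-i64 addition (a
minimal example using the GCC/Clang __int128 extension):

    // A 128-bit add on a 64-bit target splits into an add producing a
    // carry (ADD LOGICAL on SystemZ) plus an add consuming it.
    unsigned __int128 add128(unsigned __int128 a, unsigned __int128 b) {
      return a + b;
    }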
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186689 91177308-0d34-0410-b5e6-96231b3b80d8
The atomic tests assume the two-operand forms, so I've restricted them to z10.
Running and-01.ll, or-01.ll and xor-01.ll for z196 as well as z10 shows why
using convertToThreeAddress() is better than exposing the three-operand forms
first and then converting back to two operands where possible (which is what
I'd originally tried). Using the three-operand form first stops us from
taking advantage of NG, OG and XG for spills.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186683 91177308-0d34-0410-b5e6-96231b3b80d8
This first step just adds definitions for SLLK, SRLK and SRAK.
The next patch will actually make use of them during codegen.
insn-bad.s tests that some form of error is reported when using these
instructions on z10. More work is needed to get the "instruction requires:
distinct-ops" that we'd ideally like, so I've stubbed that part out for now.
I'll come back and make it mandatory once the necessary changes are in.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186680 91177308-0d34-0410-b5e6-96231b3b80d8
The original code only folded SRA into ROTATE ... SELECTED BITS
if there was no outer shift. This patch splits out that check
and generalises it slightly. The extra cases aren't really that
interesting, but this is paving the way for RNSBG support.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186571 91177308-0d34-0410-b5e6-96231b3b80d8
In hindsight, using "RISBG" for something that can be any type of
R.SBG instruction was a bit confusing, so this renames it to RxSBG.
That might not be the best choice either, since there is an instruction
called RXSBG, but hopefully the lower-case letter stands out enough.
While there I fixed a couple of GNUisms that had crept in --
sorry about that!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186569 91177308-0d34-0410-b5e6-96231b3b80d8
Support for dynamic stack alignments in the PPC backend has been unfinished, in
part because it depends on dynamic stack realignment (which I only just
recently implemented fully). Now we can also support dynamic allocas with
higher than the default target stack alignment (16 bytes).
In order to round up the requested size to the maximum requested alignment, we
need an additional register to hold the rounded-up size. We're already using one
scavenged register to hold the previous stack-pointer value (which needs to be
stored with the signal-safe stdux update), and so when we have dynamic allocas
and a large alignment, we allocate two emergency spill slots for the scavenger.
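For illustration (not part of the patch), this is the shape of source
that now works, sketched with the GCC/Clang alloca-with-alignment
builtin (which takes the alignment in bits):

    void consume(void *);

    // A dynamic alloca with 64-byte alignment: above the 16-byte default,
    // so the requested size must be rounded up to the alignment, which is
    // where the extra scavenged register comes in.
    void dynamic_overaligned(unsigned n) {
      void *p = __builtin_alloca_with_align(n, 512); // 512 bits = 64 bytes
      consume(p);
    }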
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186562 91177308-0d34-0410-b5e6-96231b3b80d8
First, this changes the base-pointer implementation to remove an unnecessary
complication (and one that is incompatible with how builtin SjLj is
implemented): instead of using r31 as the base pointer when it is not needed as
a frame pointer, now the base pointer will always be r30 when needed.
Second, we introduce another pseudo register, BP, which is used just like the FP
pseudo register to refer to the base register before we know for certain what
register it will be.
Third, we now save BP into the jmp_buf, and restore r30 from that slot in
longjmp. If the function that called setjmp did not use a base pointer, then
r30 will be overwritten by the setjmp-calling-function's restore code. FP
restoration (which is restored into r31) works the same way.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186545 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a new class for non-predicable NEON instructions and a
new DecoderNamespace for v8 NEON instructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186504 91177308-0d34-0410-b5e6-96231b3b80d8
My patch 'r183551 - ARM FastISel integer sext/zext improvements' was incorrect when emitting ARM register-immediate ASR, LSL, LSR instructions: they are pseudo-instructions in ARMInstrInfo.td and I should have used MOVsi instead.
This is not an issue when code is generated through a .s file, but is an issue when generated straight to a .o (-filetype=obj).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186489 91177308-0d34-0410-b5e6-96231b3b80d8
Because the builtin longjmp implementation uses a CTR-based indirect jump, when
the control flow arrives at the builtin setjmp call, the CTR register has
necessarily been clobbered. Correspondingly, this adds CTR to the list of
implicit definitions of the builtin setjmp pseudo instruction.
We don't need to add CTR to the implicit definitions of builtin longjmp
because, even though it does clobber the CTR register, the control flow cannot
return to inside the loop unless there is also a builtin setjmp call.
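For reference, the constructs in question are the GCC-style builtins,
not the libc functions (a minimal usage sketch, not part of the patch):

    // __builtin_setjmp/__builtin_longjmp use a five-word buffer; on PPC
    // the longjmp is a CTR-based indirect jump, so CTR has been clobbered
    // by the time control re-enters at the setjmp.
    static void *jmpbuf[5];

    int run() {
      if (__builtin_setjmp(jmpbuf))
        return 1;                    // reached via the longjmp below
      __builtin_longjmp(jmpbuf, 1);  // second argument must be 1
    }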
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186488 91177308-0d34-0410-b5e6-96231b3b80d8
This builds on some frame-lowering code that has existed since 2005 (r24224)
but was disabled in 2008 (r48188) because it needed base pointer support to
function correctly. This implementation follows the strategy suggested by Dale
Johannesen in r48188 where the following comment was added:
This does not currently work, because the delta between old and new stack
pointers is added to offsets that reference incoming parameters after the
prolog is generated, and the code that does that doesn't handle a variable
delta. You don't want to do that anyway; a better approach is to reserve
another register that retains to the incoming stack pointer, and reference
parameters relative to that.
And now we do exactly that. If we don't need a frame pointer, then we use r31
as a base pointer. If we do need a frame pointer, then we use r30 as a base
pointer. The base pointer retains the value of the stack pointer before it was
decremented in the prologue. We then use the base pointer to resolve all
negative frame indices. The basic scheme follows that for base pointers in the
X86 backend.
We use a base pointer when we need to dynamically realign the incoming stack
pointer. This currently applies only to static objects (support for dynamic
allocas with large alignments, and for base pointers in SjLj lowering, will
come in future commits).
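As a reference point (illustration only), this is the sort of function
that requires dynamically realigning the incoming stack pointer:

    void external(void *);

    // The 64-byte requirement exceeds the ABI stack alignment, so the
    // prologue must realign SP; incoming parameters are then reached
    // through the base pointer rather than relative to the realigned SP.
    void needs_realignment(int arg) {
      alignas(64) char buf[256];
      buf[0] = static_cast<char>(arg);
      external(buf);
    }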
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186478 91177308-0d34-0410-b5e6-96231b3b80d8
block. Blocks that have an indirect branch terminator, even if it's not the
last terminator, should still be treated as unanalyzable.
<rdar://problem/14437274>
Reducing a useful regression test case is proving difficult - I hope to have
one soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186461 91177308-0d34-0410-b5e6-96231b3b80d8
This adds an instruction alias to make the assembler recognize the alternate literal form: pli [PC, #+/-<imm>]
See A8.8.129 in the ARM ARM (DDI 0406C.b).
Fixes <rdar://problem/14403733>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186459 91177308-0d34-0410-b5e6-96231b3b80d8
This centralizes the handling of O_BINARY and opens the way for hiding more
differences (like how open behaves with directories).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186447 91177308-0d34-0410-b5e6-96231b3b80d8
Use PMIN/PMAX for UGE/ULE vector comparisons to reduce the number of required
instructions. This trick also works for UGT/ULT, but there is no advantage in
doing so. It wouldn't reduce the number of instructions and it would actually
reduce performance.
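The identity being used is x >=u y iff max_u(x, y) == x, which turns an
unsigned comparison (no direct SSE instruction) into PMAX plus PCMPEQ.
A sketch with SSE2 byte vectors (illustration, not the patch itself):

    #include <emmintrin.h>

    // x >=u y, per byte: unsigned max followed by equality. The direct
    // alternative flips the sign bits and uses a signed greater-than,
    // which costs extra instructions.
    __m128i cmpge_epu8(__m128i x, __m128i y) {
      return _mm_cmpeq_epi8(_mm_max_epu8(x, y), x);
    }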
Reviewer: Ben
radar:5972691
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186432 91177308-0d34-0410-b5e6-96231b3b80d8
Previously an asm operand with no operand modifier would give the error
"invalid operand in inline asm".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186407 91177308-0d34-0410-b5e6-96231b3b80d8
We'd forgotten to provide string representations for the special ARMISD atomic
nodes; this adds them in. No effect on CodeGen, just makes the output of
"-view-whatever-dags" slightly more readable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186406 91177308-0d34-0410-b5e6-96231b3b80d8
Another patch in the series to make more use of R.SBG. This one extends
r186072 and r186073 to handle cases where the AND is inside the shift.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186399 91177308-0d34-0410-b5e6-96231b3b80d8
Intrinsics already existed for the 64-bit variants, so these support operations
of size at most 32 bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186392 91177308-0d34-0410-b5e6-96231b3b80d8
This patch enables calls to __aeabi_idivmod when in EABI mode,
by using the remainder value returned in register R1,
enabled by the ARM triple "none-eabi". Note that Darwin and
GNUEABI triples will continue lowering in the GNU style, that is,
using the stack for the remainder.
SREM/UREM support for 64-bit lowering still needs to be fixed.
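In source terms, a paired divide/remainder like the following can now be
lowered to a single __aeabi_idivmod call, quotient in R0 and remainder in
R1 (a minimal illustration; the function name is hypothetical):

    // On an EABI (non-GNUEABI, non-Darwin) ARM triple, both results come
    // back in registers from one __aeabi_idivmod call, not via the stack.
    int div_and_rem(int a, int b, int *rem) {
      *rem = a % b;
      return a / b;
    }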
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186390 91177308-0d34-0410-b5e6-96231b3b80d8
This change mirrors the changes that were made to the X86 and ARM targets to
support subtarget feature changing. As indicated in r182899, the mechanism is
still undergoing revision, and so as with the X86 and ARM targets, there is no
test case yet (there is no effective functionality change).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186357 91177308-0d34-0410-b5e6-96231b3b80d8
PPCInstrInfo::insertSelect and PPCInstrInfo::canInsertSelect were computing the
common subclass of the true and false inputs, and then selecting either the
32-bit or the 64-bit isel variant based on the result of calling
PPC::GPRCRegClass.hasSubClassEq(RC) and PPC::G8RCRegClass.hasSubClassEq(RC)
(where RC is the common subclass). Unfortunately, this is not quite right: if
we have something like this:
%vreg8<def> = SELECT_CC_I8 %vreg4<kill>, %vreg7<kill>, %vreg6<kill>, 76;
G8RC_and_G8RC_NOX0:%vreg8 CRRC:%vreg4 G8RC_NOX0:%vreg7,%vreg6
then the common subclass of G8RC_and_G8RC_NOX0 and G8RC_NOX0 is G8RC_NOX0, and
G8RC_NOX0 is not a subclass of G8RC (because it also contains the ZERO8
pseudo-register). As a result, we also need to check the common subclass
against GPRC_NOR0 and G8RC_NOX0 explicitly.
This had not been a problem for clients of insertSelect that called
canInsertSelect first (because it had a compensating mistake), but insertSelect
is also used by the PPC pseudo-instruction expander, and this error was causing
a problem in that context.
This problem was found by csmith.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186343 91177308-0d34-0410-b5e6-96231b3b80d8
ARM paired GPR COPY was being lowered to two MOVr without CC. This
patch puts the CC back.
My test is a reduction of the case where I encountered the issue:
64-bit atomics use paired GPRs.
The issue only occurs with SelectionDAG; FastISel doesn't encounter it,
so I didn't bother testing it there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186226 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes a 35% degradation compared to unvectorized code in
MiBench/automotive-susan and an equally serious regression on a private
image processing benchmark.
radar://14351991
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186188 91177308-0d34-0410-b5e6-96231b3b80d8
Address calculation for gather/scatter in vectorized code can incur a
significant cost, making vectorization unprofitable. Add infrastructure to
model this cost.
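For illustration (not part of the patch), a loop of this shape is the
gather case in question: each load address comes from an index load, so
vectorizing the loop must also pay for vectorizing the address
computation:

    // Each iteration loads from in[idx[i]]; the addresses themselves are
    // loaded, which is the gather address-calculation cost being modeled.
    void gather(float *out, const float *in, const int *idx, int n) {
      for (int i = 0; i < n; ++i)
        out[i] = in[idx[i]];
    }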
Tests and cost model for targets will be in follow-up commits.
radar://14351991
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186187 91177308-0d34-0410-b5e6-96231b3b80d8
In particular:
movsbw %al, %ax --> cbtw
movswl %ax, %eax --> cwtl
movslq %eax, %rax --> cltq
According to Intel's manual those have the same performance characteristics but
come with a smaller encoding.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186174 91177308-0d34-0410-b5e6-96231b3b80d8
Normal (sext (setcc ...)) sequences are optimised into
(select_cc ..., -1, 0) by DAGCombiner::visitSIGN_EXTEND.
However, this is deliberately not done for vectors, and after
vector type legalization we have (sext_inreg (setcc ...)) instead.
I wondered about trying to extend DAGCombiner to handle this case too,
but it seemed to be a loss on some other targets I tried, even those for
which SETCC isn't "legal" and SELECT_CC is.
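The vector case comes from source like this (a sketch using the
GCC/Clang vector extension, where a lane compare already yields -1/0;
the type name is made up for the example):

    typedef int v4si __attribute__((vector_size(16)));

    // Lane-wise compare whose -1/0 result is used directly as an integer
    // vector; after vector type legalization this appears as
    // (sext_inreg (setcc ...)).
    v4si select_mask(v4si a, v4si b) {
      return a > b;
    }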
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186149 91177308-0d34-0410-b5e6-96231b3b80d8
GPR and FPR constraints like "{r2}" and "{f2}" weren't handled correctly
because the name-to-regno mapping depends on the value type and
(because of that) the internal names in RegStrings are not the
same as the AsmName.
CC constraints like "{cc}" didn't work either because there was no
associated register class.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186148 91177308-0d34-0410-b5e6-96231b3b80d8
If the source of these instructions is spilled we should load the destination.
If the destination is spilled we should store the source.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186147 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch adds explicit calling convention types for the Win64 and
System V/x86-64 ABIs. This allows code to override the default, and use
the Win64 convention on a target that wants to use SysV (and
vice-versa). This is needed to implement the `ms_abi` and `sysv_abi` GNU
attributes.
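A minimal C-level illustration of the attributes this enables (the
LLVM-side change only adds the calling convention enums):

    // Force the Win64 convention for one function while targeting a SysV
    // platform; sysv_abi is the mirror image on a Win64 target.
    __attribute__((ms_abi)) long long win64_style(long long a, long long b) {
      return a + b;
    }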
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186144 91177308-0d34-0410-b5e6-96231b3b80d8
We had patterns to match v4i32 immAllZerosV -> V_SET0, but not patterns for
v8i16 (which occurs in the test case) or v16i8. The same was true for
V_SETALLONES (so I added the associated patterns for those as well).
Another bug found by llvm-stress.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186108 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes a bug (found by csmith) at -O0 where we attempt to create a RLWIMI
with an out-of-range operand. Most uses of the isRunOfOnes function are guarded
by a condition that the value is not zero. This was not true in two places, and
in both places a zero input would result in an out-of-range MB value (= 32).
To fix this, isRunOfOnes now returns false on a zero input (and I've removed
one now-redundant guard).
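A sketch of the shape of such a predicate, with the zero check this
commit adds (a simplified hypothetical variant; the real PPC isRunOfOnes
also accepts wrapped runs for rotate masks):

    #include <cstdint>

    // True iff the set bits of v form one contiguous run. Zero is
    // rejected up front; previously a zero input fell through and
    // produced an out-of-range MB value (= 32) downstream.
    static bool isRunOfOnes32(uint32_t v) {
      if (v == 0)
        return false;
      uint32_t m = (v - 1) | v;    // fill in the zeros below the run
      return ((m + 1) & m) == 0;   // m must now look like 0...01...1
    }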
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186101 91177308-0d34-0410-b5e6-96231b3b80d8
RISBG can handle some ANDs for which no AND IMMEDIATE exists.
It also acts as a three-operand AND for some cases where an
AND IMMEDIATE could be used instead.
It might be worth adding a pass to replace RISBG with AND IMMEDIATE
in cases where the register operands end up being the same and where
AND IMMEDIATE is smaller.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186072 91177308-0d34-0410-b5e6-96231b3b80d8
RISBG has three 8-bit operands (I3, I4 and I5). I'd originally
restricted all three to 6 bits, since that's the only range we intended
to use at the time. However, the top bit of I4 acts as a "zero" flag for
RISBG, while the top bit of I3 acts as a "test" flag for RNSBG & co.
This patch therefore allows them to have the full 8-bit range.
I've left the fifth operand as a 6-bit value for now since the
upper 2 bits have no defined meaning.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186070 91177308-0d34-0410-b5e6-96231b3b80d8
Enough for the radeonsi driver to use it for calculating derivatives.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186012 91177308-0d34-0410-b5e6-96231b3b80d8
In discussing this change with Bill Schmidt, it was decided that the original
comment about negative FIs was incorrect. We'll still exclude them for now, but
now with a more-accurate explanation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@186005 91177308-0d34-0410-b5e6-96231b3b80d8
Currently ARM is the only backend that supports FMA instructions (for at least some subtargets) but does not implement this virtual, so FMAs are never generated except from explicit fma intrinsic calls. Apparently this is due to the fact that it supports both fused (one rounding step) and unfused (two rounding step) multiply + add instructions. This patch clarifies that this is the case without changing behavior by implementing the virtual function to simply return false, as the default TargetLoweringBase version does.
It is possible that some CPUs perform the fused version faster than the unfused version and vice-versa, so the function implementation should be revisited if hard data is found.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185994 91177308-0d34-0410-b5e6-96231b3b80d8
Propagate the fix from r185712 to Thumb2 codegen as well. Original
commit message applies here as well:
A "pkhtb x, x, y asr #num" uses the lower 16 bits of "y asr #num" and
packs them in the bottom half of "x". Arithmetic and logical shifts are
only equivalent in this context if the shift amount is 16. We would be
shifting ones into the bottom 16 bits instead of zeros if "y" is
negative.
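A quick demonstration of the difference (assuming the usual arithmetic
behavior of >> on negative signed values, which C++20 guarantees):

    #include <cstdint>
    #include <cstdio>

    int main() {
      int32_t y = -4;   // negative, so an arithmetic shift drags in ones
      unsigned n = 20;  // one of the shift amounts where the two differ

      uint16_t asr_lo = static_cast<uint16_t>(y >> n);            // arithmetic
      uint16_t lsr_lo = static_cast<uint16_t>(uint32_t(y) >> n);  // logical

      // Prints asr: 0xffff  lsr: 0x0fff -- the bottom halfword pkhtb
      // would pack differs, which is exactly the miscompile being fixed.
      printf("asr: %#06x  lsr: %#06x\n", (unsigned)asr_lo, (unsigned)lsr_lo);
      return 0;
    }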
rdar://14338767
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185982 91177308-0d34-0410-b5e6-96231b3b80d8
A more complete example of the bug in PR16556 was recently provided,
showing that the previous fix was not sufficient. The previous fix is
reverted herein.
The real problem is that ReplaceNodeResults() uses LowerFP_TO_INT as
custom lowering for FP_TO_SINT during type legalization, without
checking whether the input type is handled by that routine.
LowerFP_TO_INT requires the input to be f32 or f64, so we fail when
the input is ppcf128.
I'm leaving the test case from the initial fix (r185821) in place, and
adding the new test as another crash-only check.
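The crashing construct is essentially this (on a PowerPC target where
long double is the 128-bit IBM double-double format; illustration only):

    // FP_TO_SINT with a ppcf128 source: previously this reached
    // LowerFP_TO_INT, which only handles f32 and f64, via
    // ReplaceNodeResults during type legalization.
    int truncate_to_int(long double x) {
      return static_cast<int>(x);
    }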
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185959 91177308-0d34-0410-b5e6-96231b3b80d8
in-tree implementations of TargetLoweringBase::isFMAFasterThanMulAndAdd in
order to resolve the following issues with fmuladd (i.e. optional FMA)
intrinsics:
1. On X86(-64) targets, ISD::FMA nodes are formed when lowering fmuladd
intrinsics even if the subtarget does not support FMA instructions, leading
to laughably bad code generation in some situations.
2. On AArch64 targets, ISD::FMA nodes are formed for operations on fp128,
resulting in a call to a software fp128 FMA implementation.
3. On PowerPC targets, FMAs are not generated from fmuladd intrinsics on types
like v2f32, v8f32, v4f64, etc., even though they promote, split, scalarize,
etc. to types that support hardware FMAs.
The function has also been slightly renamed for consistency and to force a
merge/build conflict for any out-of-tree target implementing it. To resolve,
see comments and fixed in-tree examples.
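For context, fmuladd intrinsics typically arise from contracted
expressions like this (e.g. under Clang's -ffp-contract=on); whether
they become hardware FMAs is what the renamed hook decides:

    // a*b + c in one expression: the front end may emit @llvm.fmuladd,
    // leaving the fused-vs-separate decision to isFMAFasterThanMulAndAdd.
    double muladd(double a, double b, double c) {
      return a * b + c;
    }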
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185956 91177308-0d34-0410-b5e6-96231b3b80d8
In the commit message to r185476 I wrote:
>The PowerPC-specific modifiers VK_PPC_TLSGD and VK_PPC_TLSLD
>correspond exactly to the generic modifiers VK_TLSGD and VK_TLSLD.
>This causes some confusion with the asm parser, since VK_PPC_TLSGD
>is output as @tlsgd, which is then read back in as VK_TLSGD.
>
>To avoid this confusion, this patch removes the PowerPC-specific
>modifiers and uses the generic modifiers throughout. (The only
>drawback is that the generic modifiers are printed in upper case
>while the usual convention on PowerPC is to use lower-case modifiers.
>But this is just a cosmetic issue.)
This was unfortunately incorrect; there is in fact another,
serious drawback to using the default VK_TLSLD/VK_TLSGD
variant kinds: using these causes ELFObjectWriter::RelocNeedsGOT
to return true, which in turn causes the ELFObjectWriter to emit
an undefined reference to _GLOBAL_OFFSET_TABLE_.
This is a problem on powerpc64, because it uses the TOC instead
of the GOT, and the linker does not provide _GLOBAL_OFFSET_TABLE_,
so the symbol remains undefined. This means shared libraries
using TLS built with the integrated assembler are currently
broken.
While the whole RelocNeedsGOT / _GLOBAL_OFFSET_TABLE_ situation
probably ought to be properly fixed at some point, for now I'm
simply reverting the r185476 commit. Now this in turn exposes
the breakage of handling @tlsgd/@tlsld in the asm parser that
this check-in was originally intended to fix.
To avoid this regression, I'm also adding a different fix for
this problem: while common code now parses @tlsgd as VK_TLSGD,
a special hack in the asm parser translates this code to the
platform-specific VK_PPC_TLSGD that the back-end now expects.
While this is not really pretty, it's self-contained and
shouldn't hurt anything else for now. Once the underlying
problem is fixed, this hack can be reverted again.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185945 91177308-0d34-0410-b5e6-96231b3b80d8
A test is not included, as it would be several thousand lines long.
To test this functionality, a test case must generate at least 2 ALU clauses,
where an ALU clause is ~110 instructions long.
NOTE: This is a candidate for the stable branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@185943 91177308-0d34-0410-b5e6-96231b3b80d8