Using PatLeaf rather than ImmLeaf when defining immediate predicates
prevents simple patterns using those predicates from being recognized
for fast instruction selection. This patch replaces the immSExt16
PatLeaf predicate with two ImmLeaf predicates, imm32SExt16 and
imm64SExt16, allowing a few more patterns to be recognized (ADDI,
ADDIC, MULLI, ADDI8, and ADDIC8). Using the new predicates does not
help for LI, LI8, SUBFIC, and SUBFIC8 because these are rejected for
other reasons, but I see no reason to retain the PatLeaf predicate.
No functional change intended, and thus no test cases yet. This is
preliminary work for enabling fast-isel support for PowerPC. When
that support is ready, we'll be able to test this functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182510 91177308-0d34-0410-b5e6-96231b3b80d8
Addresses a review comment from Ulrich Weigand. No functional change intended.
I'm not sure whether the old TODO that this patch touches still holds,
but that's something we'd get to when adding a targeted scheduling
description.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182474 91177308-0d34-0410-b5e6-96231b3b80d8
The original version of the pass could underestimate the length of a backward
branch in cases like:
alignment to N bytes or more
...
relaxable branch A
...
foo: (aligned to M<N bytes)
...
bar: (aligned to N bytes)
...
relaxable branch B to foo
We don't add any misalignment gap for "bar" because N bytes of alignment
had already been reached earlier in the function. In this case, assuming
that A is relaxed can push "foo" closer to "bar", and make B appear to be
in range. Similar problems can occur for forward branches.
I don't think it's possible to create blocks with mixed alignments as
things stand, not least because we haven't yet defined getPrefLoopAlignment()
for SystemZ (that would need benchmarking). So I don't think we can test
this yet.
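To make the interaction concrete, here is a toy model (made-up sizes and
addresses, not the real layout code):

#include <cstdint>
#include <cstdio>

// Round Addr up to a multiple of Align (a power of two).
static uint64_t alignTo(uint64_t Addr, uint64_t Align) {
  return (Addr + Align - 1) & ~(Align - 1);
}

int main() {
  const unsigned SizesOfA[] = {4, 6}; // short form vs. relaxed form
  for (unsigned SizeA : SizesOfA) {
    uint64_t Foo = 16 + SizeA;          // foo directly follows branch A
    uint64_t Bar = alignTo(Foo + 5, 8); // bar keeps its 8-byte alignment
    printf("A=%u bytes: foo=%llu bar=%llu distance=%llu\n", SizeA,
           (unsigned long long)Foo, (unsigned long long)Bar,
           (unsigned long long)(Bar - Foo));
  }
  // Relaxing A moves foo from 20 to 22, but bar stays at 32 because its
  // alignment gap shrinks to compensate -- so the backward distance to
  // foo gets smaller under the "A is relaxed" assumption, which is what
  // made B appear to be in range.
}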
Thanks to Rafael Espíndola for spotting the bug.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182460 91177308-0d34-0410-b5e6-96231b3b80d8
Allow LLVM to take advantage of shift instructions that set the ZF flag,
making instructions that test the destination superfluous.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182454 91177308-0d34-0410-b5e6-96231b3b80d8
MSVC gets confused between "memcpy" from <cstring> and llvm::Intrinsic::memcpy when llvm::Intrinsic is exposed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182452 91177308-0d34-0410-b5e6-96231b3b80d8
Although I had added some support for the BDZ/BDNZ branches to the selector
(in r158204), I had not correctly adjusted the condition at the top of the
loop. As a result, these branches were still essentially unsupported.
This fixes PR16086. Unfortunately, any test case would be very large (because
it would need to force the loop backedge to exceed the range of the 16-bit
immediate).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182385 91177308-0d34-0410-b5e6-96231b3b80d8
pic calls. These need to be there so we don't try to use helper
functions when we call those.
As part of this, make sure that we properly exclude helper functions in pic
mode when indirect calls are involved.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182343 91177308-0d34-0410-b5e6-96231b3b80d8
By default, a teq instruction is inserted after integer divide. No divide-by-zero
checks are performed if option "-mnocheck-zero-division" is used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182306 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the preheader insertion logic in LoopSimplify is externally exposed,
use it, and remove the copy-and-pasted version.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182300 91177308-0d34-0410-b5e6-96231b3b80d8
As the pairing of this instruction form with the bdnz/bdz branches is now
enforced by the verification pass, make it clear from the name that these
are used only for counter-based loops.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182296 91177308-0d34-0410-b5e6-96231b3b80d8
When asserts are enabled, this adds a verification pass for PPC counter-loop
formation. Unfortunately, without sacrificing code quality, there is no better
way of forming counter-based loops except at the (late) IR level. This means
that we need to recognize, at the IR level, anything which might turn into a
function call (or indirect branch). Because this is currently a finite set of
things, and because SelectionDAG lowering is basic-block local, this can be
done. Nevertheless, it is fragile, and failure results in a miscompile. This
verification pass checks that all (reachable) counter-based branches are
dominated by a loop mtctr instruction, and that no instructions in between
clobber the counter register. If these conditions are not satisfied, then an
ICE will be triggered.
In short, this is to help us sleep better at night.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182295 91177308-0d34-0410-b5e6-96231b3b80d8
R600TextureIntrinsicsReplacer.cpp:232: warning: the address of ‘ArgsType’ will always evaluate as ‘true’
This doesn't have any effect on the output as a vararg intrinsic behaves the
same way as a non-vararg one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182293 91177308-0d34-0410-b5e6-96231b3b80d8
This will simplify the instructions and also the pattern definitions.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182288 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it possible to reorder the operands without breaking the
encoding.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182283 91177308-0d34-0410-b5e6-96231b3b80d8
Before this change, the SystemZ backend would use BRCL for all branches
and only consider shortening them to BRC when generating an object file.
E.g. a branch on equal would use the JGE alias of BRCL in assembly output,
but might be shortened to the JE alias of BRC in ELF output. This was
a useful first step, but it had two problems:
(1) The z assembler isn't traditionally supposed to perform branch shortening
or branch relaxation. We followed this rule by not relaxing branches
in assembler input, but that meant that generating assembly code and
then assembling it would not produce the same result as going directly
to object code; the former would give long branches everywhere, whereas
the latter would use short branches where possible.
(2) Other useful branches, like COMPARE AND BRANCH, do not have long forms.
We would need to do something else before supporting them.
(Although COMPARE AND BRANCH does not change the condition codes,
the plan is to model COMPARE AND BRANCH as a CC-clobbering instruction
during codegen, so that we can safely lower it to a separate compare
and long branch where necessary. This is not a valid transformation
for the assembler proper to make.)
This patch therefore moves branch relaxation to a pre-emit pass.
For now, calls are still shortened from BRASL to BRAS by the assembler,
although this too is not really the traditional behaviour.
The first test takes about 1.5s to run, and there are likely to be
more tests in this vein once further branch types are added. The feeling
on IRC was that 1.5s is a bit much for a single test, so I've restricted
it to SystemZ hosts for now.
The patch exposes (and fixes) some typos in the main CodeGen/SystemZ tests.
A later patch will remove the {{g}}s from that directory.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182274 91177308-0d34-0410-b5e6-96231b3b80d8
This converter currently only handles global variables in address space 0. For
these variables, they are promoted to address space 1 (global memory), and all
uses are updated to point to the result of a cvta.global instruction on the new
variable.
The motivation for this is that address space 0 global variables are illegal,
since we cannot declare variables in the generic address space. Instead, we place the
variables in address space 1 and explicitly convert the pointer to address
space 0. This is primarily intended to help new users who expect to be able to
place global variables in the default address space.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182254 91177308-0d34-0410-b5e6-96231b3b80d8
Introduction:
When the stack alignment is 8 and the size of a parameter's GPR part is not a
multiple of 8, we add padding to the GPR part so that the part's last byte is
recovered at an address of the form K*8-1.
We need to do this because the remaining (stack) part of the parameter starts
at address K*8, and the "GPR head" must attach to it without a gap:
Stack:
|---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
[ [padding] [GPRs head] ] [ ------ Tail passed via stack ------ ...
FIX:
Once we add padding, we must correct *all* argument offsets that come after
the padded one. That's why this fix is needed: argument offsets were never
corrected before this patch. See the new test cases included in the patch.
We also don't need to insert padding for byval parameters that are stored
entirely in GPRs. Only the last byval parameter needs padding, and only when
it spills outside the GPRs and the stack alignment is 8.
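As a sketch of the byte arithmetic involved (illustrative values only; the
real logic lives in the ARM calling-convention lowering):

#include <cstdio>

int main() {
  const unsigned StackAlign = 8;
  // Head of the byval parameter passed in r0-r3; its tail goes on the
  // stack starting at an 8-byte boundary (address K*8).
  for (unsigned HeadBytes = 4; HeadBytes <= 12; HeadBytes += 4) {
    unsigned Pad = (StackAlign - HeadBytes % StackAlign) % StackAlign;
    // With Pad bytes of padding in front, the head's last byte lands at
    // offset Pad + HeadBytes - 1, i.e. at an address of the form K*8-1,
    // so the stack tail attaches with no gap.
    printf("head=%u -> pad=%u, last byte at offset %u\n", HeadBytes, Pad,
           Pad + HeadBytes - 1);
  }
}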
The stack area allocated for recovered byval params must still satisfy the
"size mod 8 == 0" restriction.
This patch also reduces stack usage in some cases: we can shrink the
ArgRegsSaveArea, since inner byval params of size N*4 bytes may be "packed"
with alignment 4.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182237 91177308-0d34-0410-b5e6-96231b3b80d8
Also clean up the arguments to all the MOVCC instructions so the
operands always are (true-val, false-val, cond-code).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182221 91177308-0d34-0410-b5e6-96231b3b80d8
We don't need to reject all inline asm as using the counter register (most does
not). Only those that explicitly clobber the counter register need to prevent
the transformation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182191 91177308-0d34-0410-b5e6-96231b3b80d8
The peephole tries to reorder MOV32r0 instructions such that they are
before the instruction that modifies EFLAGS.
The problem is that the peephole does not consider the case where the
instruction that modifies EFLAGS also depends on the previous state of
EFLAGS.
Instead, walk backwards until we find an instruction that has a def for
EFLAGS but does not have a use.
If we find such an instruction, insert the MOV32r0 before it.
If we cannot find such an instruction, skip the optimization.
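A minimal model of the corrected scan (the instruction representation is
invented for illustration; the real code inspects MachineInstr def/use
operands):

#include <cstdio>
#include <vector>

struct Inst { bool DefsEFLAGS, UsesEFLAGS; };

// Walk backwards from Pos and return the index of the first instruction
// that defines EFLAGS without also using it; MOV32r0 can safely go right
// before that point. Return -1 to skip the optimization.
static int findInsertPos(const std::vector<Inst> &MBB, int Pos) {
  for (int I = Pos; I >= 0; --I)
    if (MBB[I].DefsEFLAGS && !MBB[I].UsesEFLAGS)
      return I;
  return -1; // every candidate also reads EFLAGS
}

int main() {
  // cmp only defines EFLAGS; adc both uses and defines them.
  std::vector<Inst> MBB = {{true, false} /*cmp*/, {true, true} /*adc*/};
  printf("insert before index %d\n", findInsertPos(MBB, 1)); // prints 0
}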
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182184 91177308-0d34-0410-b5e6-96231b3b80d8
This patch matches GCC behavior: the code used to only allow unaligned
load/store on ARM for v6+ Darwin, it will now allow unaligned load/store
for v6+ Darwin as well as for v7+ on Linux and NaCl.
The distinction is made because v6 doesn't guarantee support (but LLVM
assumes that Apple controls hardware+kernel and therefore has
conformant v6 CPUs), whereas v7 does provide this guarantee (and
Linux/NaCl behave sanely).
The patch keeps the -arm-strict-align command line option, and adds
-arm-no-strict-align. They behave similarly to GCC's -mstrict-align and
-mnostrict-align.
I originally encountered this discrepancy in FastIsel tests which expect
unaligned load/store generation. Overall this should slightly improve
performance in most cases because of reduced I$ pressure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182175 91177308-0d34-0410-b5e6-96231b3b80d8
The errors were:
non-constant-expression cannot be narrowed from type 'int64_t' (aka 'long') to 'uint32_t' (aka 'unsigned int') in initializer list
and
non-constant-expression cannot be narrowed from type 'long' to 'uint32_t' (aka 'unsigned int') in initializer list
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182168 91177308-0d34-0410-b5e6-96231b3b80d8
It should increase PV substitution opportunities and lower GPR
usage (pending computation paths are "flushed" sooner).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182128 91177308-0d34-0410-b5e6-96231b3b80d8
Dot4 now uses 8 scalar operands instead of 2 vector ones, which allows the
register coalescer to remove some unneeded COPYs.
This patch also defines some structures/functions that can be used to handle
every vector instruction (CUBE, Cayman special instructions, ...) in a similar
fashion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182126 91177308-0d34-0410-b5e6-96231b3b80d8
Almost all instructions that take a 128-bit reg as input (fetch, export, ...)
have the ability to swizzle their argument and output. Instead of printing a
default swizzle for each 128-bit reg, rename T*.XYZW to T* and let instructions
print potentially optimized swizzles themselves.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182124 91177308-0d34-0410-b5e6-96231b3b80d8
Shuffles that only move an element into position 0 of the vector are common in
the output of the loop vectorizer and often generate suboptimal code when SSSE3
is not available. Lower them to vector shifts if possible.
We still prefer palignr over psrldq because it has higher throughput on
Sandy Bridge.
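To illustrate the kind of shuffle being lowered, a scalar model (PSRLDQ
shifts the full 128-bit register right by a byte count):

#include <cstdio>
#include <cstring>

int main() {
  unsigned V[4] = {10, 20, 30, 40};
  // shufflevector <4 x i32> %v, mask <3, undef, undef, undef> moves
  // element 3 into lane 0 -- the same effect as PSRLDQ $12.
  unsigned R[4] = {0, 0, 0, 0};
  memcpy(R, (const char *)V + 12, sizeof(unsigned));
  printf("lane 0 = %u\n", R[0]); // prints 40
}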
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182102 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the equivalent change to r182091/r182092
in the old-style code emitter. Instead of having two separate
16-bit immediate encoding routines depending on the instruction,
this patch introduces a single encoder that checks the machine
operand flags to decide whether the low or high half of a
symbol address is required.
Since now both encoders make no further distinction between
"symbolLo" and "symbolHi", the .td operand can now use a
single getS16ImmEncoding method.
Tested by running the old-style JIT tests on 32-bit Linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182097 91177308-0d34-0410-b5e6-96231b3b80d8
Now that fixup_ppc_ha16 and fixup_ppc_lo16 are being treated exactly
the same everywhere, it no longer makes sense to have two fixup types.
This patch merges them both into a single type fixup_ppc_half16,
and renames fixup_ppc_lo16_ds to fixup_ppc_half16ds for consistency.
(The half16 and half16ds names are taken from the description of
relocation types in the PowerPC ABI.)
No change in code generation expected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182092 91177308-0d34-0410-b5e6-96231b3b80d8
The current PowerPC MC back end distinguishes between fixup_ppc_ha16
and fixup_ppc_lo16, which are determined by the instruction the fixup
applies to, and uses this distinction to decide whether a fixup ought
to resolve to the high or the low part of a symbol address.
This isn't quite correct, however. It is valid (if unusual) assembler
to use, e.g.
li 1, symbol@ha
or
lis 1, symbol@l
Whether the high or the low part of the address is used depends solely
on the @ suffix, not on the instruction.
In addition, both
li 1, symbol
and
lis 1, symbol
are valid, assuming the symbol address fits into 16 bits; again, both
will then refer to the actual symbol value (so li will load the value
itself, while lis will load the value shifted by 16).
To fix this, two places need to be adapted. If the fixup cannot be
resolved at assembler time, a relocation needs to be emitted via
PPCELFObjectWriter::getRelocType. This routine already looks at
the VK_ type to determine the relocation. The only problem is that it
will reject any _LO modifier in a ha16 fixup and vice versa. This
is simply incorrect; any of those modifiers ought to be accepted
for either fixup type.
If the fixup *can* be resolved at assembler time, adjustFixupValue
currently selects the high bits of the symbol value if the fixup
type is ha16. Again, this is incorrect; see the above example
lis 1, symbol
Now, in theory we'd have to respect a VK_ modifier here. However,
in fact common code never even attempts to resolve symbol references
using any nontrivial VK_ modifier at assembler time; it will always
fall back to emitting a reloc and letting the linker handle it.
If this ever changes, presumably there'd have to be a target callback
to resolve VK_ modifiers. We'd then have to handle @ha etc. there.
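For reference, the arithmetic behind @l and @ha in a self-contained sketch
(@ha is carry-adjusted so that adding the sign-extended low half rebuilds
the full address):

#include <cstdint>
#include <cstdio>

static uint16_t lo(uint32_t V) { return V & 0xffff; }
static uint16_t ha(uint32_t V) {
  // Add 1 if the low half will sign-extend as negative.
  return (V >> 16) + ((V & 0x8000) ? 1 : 0);
}

int main() {
  uint32_t Addr = 0x12348765; // low half has bit 15 set
  uint32_t Rebuilt = ((uint32_t)ha(Addr) << 16) + (int16_t)lo(Addr);
  printf("lo=0x%04x ha=0x%04x rebuilt=0x%08x\n", (unsigned)lo(Addr),
         (unsigned)ha(Addr), (unsigned)Rebuilt);
}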
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182091 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, three instructions were needed:
trunc.w.s $f0, $f2
mfc1 $4, $f0
sw $4, 0($2)
Now we need only two:
trunc.w.s $f0, $f2
swc1 $f0, 0($2)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182053 91177308-0d34-0410-b5e6-96231b3b80d8
This patch removes alias definition for addiu $rs,$imm
and instead uses the TwoOperandAliasConstraint field in
the ArithLogicI instruction class.
This way all instructions that inherit ArithLogicI class
have the same macro defined.
The usage examples are added to test files.
Patch by Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182048 91177308-0d34-0410-b5e6-96231b3b80d8
Some IR-level instructions (such as FP <-> i64 conversions) are not chained
w.r.t. the mtctr intrinsic and yet may become function calls that clobber the
counter register. At the selection-DAG level, these might be reordered with the
mtctr intrinsic causing miscompiles. To avoid this situation, if an existing
preheader has instructions that might use the counter register, create a new
preheader for the mtctr intrinsic. This extra block will be remerged with the
old preheader at the MI level, but will prevent unwanted reordering at the
selection-DAG level.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182045 91177308-0d34-0410-b5e6-96231b3b80d8
invalid instruction sequence.
Rather than emitting an int-to-FP move instruction and an int-to-FP conversion
instruction during instruction selection, we emit a pseudo instruction which gets
expanded post-RA. Without this change, register allocation can possibly insert a
floating point register move instruction between the two instructions, which is not
valid according to the ISA manual.
mtc1 $f4, $4 # int-to-fp move instruction.
mov.s $f2, $f4 # move contents of $f4 to $f2.
cvt.s.w $f0, $f2 # int-to-fp conversion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182042 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds bnez and beqz instructions which represent alias definitions for bne and beq instructions as follows:
bnez $rs,$imm => bne $rs,$zero,$imm
beqz $rs,$imm => beq $rs,$zero,$imm
The corresponding test cases are added.
Patch by Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182040 91177308-0d34-0410-b5e6-96231b3b80d8
This is the second part of the change to always return "true"
offset values from getPreIndexedAddressParts, tackling the
case of "memrix" type operands.
This is about instructions like LD/STD that only have a 14-bit
field to encode immediate offsets, which are implicitly extended
by two zero bits by the machine, so that in effect we can access
16-bit offsets as long as they are a multiple of 4.
The PowerPC back end currently handles such instructions by
carrying the 14-bit value (as it will get encoded into the
actual machine instructions) in the machine operand fields
for such instructions. This means that those values are
in fact not the true offset, but rather the offset divided
by 4 (and then truncated to an unsigned 14-bit value).
Like in the case fixed in r182012, this makes common code
operations on such offset values not work as expected.
Furthermore, there doesn't really appear to be any strong
reason why we should encode machine operands this way.
This patch therefore changes the encoding of "memrix" type
machine operands to simply contain the "true" offset value
as a signed immediate value, while enforcing the rules that
it must fit in a 16-bit signed value and must also be a
multiple of 4.
This change must be made simultaneously in all places that
access machine operands of this type. However, just about
all those changes make the code simpler; in many cases we
can now just share the same code for memri and memrix
operands.
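A sketch of the new invariant and the encoding step that now happens only
at emission time (the function name is illustrative):

#include <cassert>
#include <cstdint>
#include <cstdio>

// The operand now holds the true offset: a signed 16-bit multiple of 4.
// Only when the instruction is encoded is it narrowed to the 14-bit field.
static uint16_t encodeDSField(int32_t Offset) {
  assert(Offset % 4 == 0 && Offset >= -32768 && Offset <= 32764);
  return (uint16_t)((Offset / 4) & 0x3fff);
}

int main() {
  // Prints 0x3ffe: the 14-bit two's-complement representation of -2.
  printf("encodeDSField(-8) = 0x%04x\n", (unsigned)encodeDSField(-8));
}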
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182032 91177308-0d34-0410-b5e6-96231b3b80d8
On PPC32, i64 FP conversions are implemented using runtime calls (which clobber
the counter register). These must be excluded.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182023 91177308-0d34-0410-b5e6-96231b3b80d8
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair. It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.
The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as true displacement value:
- Its type is always MVT::i32, even on 64-bit, where addresses
ought to be i64 ... this causes the optimization to simply
always fail on 64-bit due to this line in DAGCombiner:
// FIXME: In some cases, we can be smarter about this.
if (Op1.getValueType() != Offset.getValueType()) {
- Its value is truncated to an unsigned 16-bit value if negative.
This causes the above optimization to generate wrong code.
This patch fixes both problems by simply returning the true
displacement value (in its original type). This doesn't
affect any other user of the displacement.
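The second problem in miniature (illustrative):

#include <cstdint>
#include <cstdio>

int main() {
  int32_t Disp = -16;                  // true displacement
  uint16_t AsEncoded = (uint16_t)Disp; // TargetConstant-style truncation
  printf("true=%d reused-as=%u\n", Disp, (unsigned)AsEncoded);
  // Prints true=-16 reused-as=65520: common code doing arithmetic on
  // 65520 instead of -16 computes the wrong addresses.
}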
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@182012 91177308-0d34-0410-b5e6-96231b3b80d8
getExceptionHandlingType is not ExceptionHandling::DwarfCFI on xcore, so
getFrameInstructions is never called. There is no point creating cfi
instructions if they are never used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181979 91177308-0d34-0410-b5e6-96231b3b80d8
This creates stubs that help Mips32 functions call Mips16
functions which have floating point parameters that are normally passed
in floating point registers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181972 91177308-0d34-0410-b5e6-96231b3b80d8
Increase the number of instructions LLVM recognizes as setting the ZF
flag. This allows us to remove test instructions that redundantly
recalculate the flag.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181937 91177308-0d34-0410-b5e6-96231b3b80d8
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.
The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.
This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).
The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181927 91177308-0d34-0410-b5e6-96231b3b80d8
We want the order to be deterministic on all platforms. NAKAMURA Takumi
fixed that in r181864. This patch is just two small cleanups:
* Move the function to the cpp file. It is only passed to array_pod_sort.
* Remove the ppc implementation, which is now redundant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181910 91177308-0d34-0410-b5e6-96231b3b80d8
This patch matches GCC behavior: the code used to only allow unaligned
load/store on ARM for v6+ Darwin, it will now allow unaligned load/store for
v6+ Darwin as well as for v7+ on other targets.
The distinction is made because v6 doesn't guarantee support (but LLVM assumes
that Apple controls hardware+kernel and therefore has conformant v6 CPUs),
whereas v7 does provide this guarantee (and Linux behaves sanely).
Overall this should slightly improve performance in most cases because of
reduced I$ pressure.
Patch by JF Bastien
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181897 91177308-0d34-0410-b5e6-96231b3b80d8
Now that applyFixup understands differently-sized fixups, we can define
fixup_ppc_lo16/fixup_ppc_lo16_ds/fixup_ppc_ha16 to properly be 2-byte
fixups, applied at an offset of 2 relative to the start of the
instruction text.
This has the benefit that if we actually need to generate a real
relocation record, its address will come out correctly automatically,
without having to fiddle with the offset in adjustFixupOffset.
Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181894 91177308-0d34-0410-b5e6-96231b3b80d8
The PPCAsmBackend::applyFixup routine handles the case where a
fixup can be resolved within the same object file. However,
this routine is currently hard-coded to assume the size of
any fixup is always exactly 4 bytes.
This is sort-of correct for fixups on instruction text, though it
only works because several fixups that really ought to be 2-byte
are presented as 4-byte fixups instead (requiring another hack
in PPCELFObjectWriter::adjustFixupOffset to clean it up).
However, this assumption breaks down completely for fixups
on data, which legitimately can be of any size (1, 2, 4, or 8).
This patch makes applyFixup aware of fixups of varying sizes,
introducing a new helper routine getFixupKindNumBytes (along
the lines of what the ARM back end does). Note that in order
to handle fixups of size 8, we also need to fix the return type
of adjustFixupValue to uint64_t to avoid truncation.
Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.
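The helper has roughly this shape (the kind names below are placeholders,
not the real PPC enumerators):

#include <cstdlib>

enum FixupKind { Data1, Data2, Data4, Data8, InsnHalf16, InsnWord };

// applyFixup now reads and rewrites exactly this many bytes at the
// fixup's offset instead of assuming a fixed 4.
static unsigned getFixupKindNumBytes(FixupKind Kind) {
  switch (Kind) {
  case Data1:                  return 1;
  case Data2: case InsnHalf16: return 2;
  case Data4: case InsnWord:   return 4;
  case Data8:                  return 8;
  }
  abort(); // unknown kind
}

int main() { return getFixupKindNumBytes(Data8) == 8 ? 0 : 1; }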
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181891 91177308-0d34-0410-b5e6-96231b3b80d8
The transformation happening here is that we want to turn a
"mul(ext(X), ext(X))" into a "vmull(X, X)", stripping off the extension. We have
to make sure that X still has a valid vector type - possibly recreate an
extension to a smaller type. In the case of an extload of a memory type smaller
than 64 bits, we used to create an ext(load()). The problem with doing this - instead of
recreating an extload - is that an illegal type is exposed.
This patch fixes this by creating extloads instead of ext(load()) sequences.
Fixes PR15970.
radar://13871383
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181842 91177308-0d34-0410-b5e6-96231b3b80d8
The changes to CR spill handling missed a case for 32-bit PowerPC.
The code in PPCFrameLowering::processFunctionBeforeFrameFinalized()
checks whether CR spill has occurred using a flag in the function
info. This flag is only set by storeRegToStackSlot and
loadRegFromStackSlot. spillCalleeSavedRegisters does not call
storeRegToStackSlot, but instead produces MI directly. Thus we don't
see the CR is spilled when assigning frame offsets, and the CR spill
ends up colliding with some other location (generally the FP slot).
This patch sets the flag in spillCalleeSavedRegisters for PPC32 so
that the CR spill is properly detected and gets its own slot in the
stack frame.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181800 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Alex Deucher
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
NOTE: This is a candidate for the 3.3 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181792 91177308-0d34-0410-b5e6-96231b3b80d8
The GNU assembler treats things like:
brasl %r14, 100
in the same way as:
brasl %r14, .+100
rather than as a branch to absolute address 100. We implemented this in
LLVM by creating an immediate operand rather than the usual expr operand,
and by handling immediate operands specially in the code emitter.
This was undesirable for (at least) three reasons:
- the specialness of immediate operands was exposed to the backend MC code,
rather than being limited to the assembler parser.
- in disassembly, an immediate operand really is an absolute address.
(Note that this means reassembling printed disassembly can't recreate
the original code.)
- it would interfere with any assembly manipulation that we might
try in future. E.g. operations like branch shortening can change
the relative position of instructions, but any code that updates
sym+offset addresses wouldn't update an immediate "100" operand
in the same way as an explicit ".+100" operand.
This patch changes the implementation so that the assembler creates
a "." label for immediate PC-relative operands, so that the operand
to the MCInst is always the absolute address. The patch also adds
some error checking of the offset.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181773 91177308-0d34-0410-b5e6-96231b3b80d8
Marking instructions as isAsmParserOnly stops them from being disassembled.
However, in cases where separate asm and codegen versions exist, we actually
want to disassemble to the asm ones.
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181772 91177308-0d34-0410-b5e6-96231b3b80d8
The SystemZ port currently relies on the order of the instruction operands
matching the order of the instruction field lists. This isn't desirable
for disassembly, where the two are matched only by name. E.g. the R1 and R2
fields of an RR instruction should have corresponding R1 and R2 operands.
The main complication is that addresses are compound operands,
and as far as I know there is no mechanism to allow individual
suboperands to be selected by name in "let Inst{...} = ..." assignments.
Luckily it doesn't really matter though. The SystemZ instruction
encoding groups all address fields together in a predictable order,
so it's just as valid to see the entire compound address operand as
a single field. That's the approach taken in this patch.
Matching by name in turn means that the operands to COPY SIGN and
CONVERT TO FIXED instructions can be given in natural order.
(It was easier to do this at the same time as the rename,
since otherwise the intermediate step was too confusing.)
No functional change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181771 91177308-0d34-0410-b5e6-96231b3b80d8
Mips16/32 floating point interoperability.
When Mips16 code calls external functions that would normally have some
of its parameters or return values passed in floating point registers,
it needs (Mips32) helper functions to do this because while in Mips16 mode
there is no ability to access the floating point registers.
In Pic mode, this is done with a set of predefined functions in libc.
This case is already handled in llvm for Mips16.
In static relocation mode, for efficiency reasons, the compiler generates
stubs that the linker will use if it turns out that the external function
is a Mips32 function. (If it's Mips16, then it does not need the helper
stubs).
These stubs are identically named; the linker knows about these tricks,
will not create multiple copies, and will delete them if they are not
needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181753 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds an alias for the addiu instruction which enables the following syntax:
addiu $rs,imm
The macro is translated as:
addiu $rs,$rs,imm
Contributor: Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181729 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes warning messages observed in the oggenc application test in
projects/test-suite. Special handling is needed for the 64-bit
PowerPC SVR4 ABI when a constant is initialized with a pointer to a
function in a shared library. Because a function address is
implemented as the address of a function descriptor, the use of copy
relocations can lead to problems with initialization. GNU ld
therefore replaces copy relocations with dynamic relocations to be
resolved by the dynamic linker. This means the constant cannot reside
in the read-only data section, but instead belongs in .data.rel.ro,
which is designed for constants containing dynamic relocations.
The implementation creates a class PPC64LinuxTargetObjectFile
inheriting from TargetLoweringObjectFileELF, which behaves like its
parent except to place constants of this sort into .data.rel.ro.
The test case is reduced from the oggenc application.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181723 91177308-0d34-0410-b5e6-96231b3b80d8
This option is used when the user wants to avoid emitting double precision FP
loads and stores. Double precision FP loads and stores are expanded to single
precision instructions after register allocation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181718 91177308-0d34-0410-b5e6-96231b3b80d8
return values are bitcasts.
The chain had previously been clobbered with the entry node to
the dag, which sometimes caused other code in the function to be
erroneously deleted when tailcall optimization kicked in.
<rdar://problem/13827621>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181696 91177308-0d34-0410-b5e6-96231b3b80d8
It was just a less powerful and more confusing version of
MCCFIInstruction. A side effect is that, since MCCFIInstruction uses
dwarf register numbers, calls to getDwarfRegNum are pushed out, which
should allow further simplifications.
I left the MachineModuleInfo::addFrameMove interface unchanged since
this patch was already fairly big.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181680 91177308-0d34-0410-b5e6-96231b3b80d8
To add a frame move there is now a dedicated addFrameMove which also takes
care of constructing the move itself.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181657 91177308-0d34-0410-b5e6-96231b3b80d8
mips16/mips32 floating point interoperability.
This patch fixes returns from mips16 functions so that if the function
was in fact called by a mips32 hard float routine, then values
that would have been returned in floating point registers are so returned.
Mips16 mode has no floating point instructions so there is no way to
load values into floating point registers.
This is needed when returning float, double, single complex, double complex
in the Mips ABI.
Helper functions in libc for mips16 are available to do this.
For efficiency purposes, these helper functions have a different calling
convention from normal Mips calls.
Registers v0,v1,a0,a1 are used to pass parameters instead of
a0,a1,a2,a3.
This is because v0,v1,a0,a1 are the natural registers used to return
floating point values in soft float. These values can then be moved
to the appropriate floating point registers with no extra cost.
The only register modified by such a call is ra.
The helper functions make sure that the return values are in the floating
point registers that they would be in if soft float was not in effect
(which it is for mips16, though the soft float is implemented using a mips32
library that uses hard float).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181641 91177308-0d34-0410-b5e6-96231b3b80d8
The issue was that the MatchingInlineAsm and VariantID args to the
MatchInstructionImpl function weren't being set properly. Specifically, when
parsing intel syntax, the parser thought it was parsing inline assembly in the
at&t dialect; that will never be the case.
The crash was caused when the emitter tried to emit the instruction, but the
operands weren't set. When parsing inline assembly we only set the opcode, not
the operands; the opcode is what is used to look up the instruction descriptor.
rdar://13854391 and PR15945
Also, this commit reverts r176036. Now that we're correctly parsing the intel
syntax the pushad/popad don't match properly. I've reimplemented that fix using
a MnemonicAlias.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181620 91177308-0d34-0410-b5e6-96231b3b80d8
This commit implements the AsmParser for fnstart, fnend,
cantunwind, personality, handlerdata, pad, setfp, save, and
vsave directives.
This commit fixes some minor issues in the ARMELFStreamer:
* Switch back to the corresponding section after the .fnend
directive.
* Emit the unwind opcode while processing .fnend directive
if there is no .handlerdata directive.
* Emit the unwind opcode to .ARM.extab while processing
.handlerdata even if .personality directive does not exist.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181603 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181579 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes piglit test for OpenCL builtin mul24, and allows mad24 to run.
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181578 91177308-0d34-0410-b5e6-96231b3b80d8
v2: Add v4i32 test
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181577 91177308-0d34-0410-b5e6-96231b3b80d8
v2: Add vselect v4i32 test
Patch by: Aaron Watry
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
NOTE: This is a candidate for the 3.3 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181576 91177308-0d34-0410-b5e6-96231b3b80d8
We generate a `push' of a random register (%rax) if the stack needs to be
aligned by the size of that register. However, this could mess up compact unwind
generation. In particular, we want to still generate compact unwind in the
presence of this monstrosity.
Check if the push is of the %rax/%eax register. If it is and it's marked with
the `FrameSetup' flag, then we can generate a compact unwind encoding for the
function only if the push is the last FrameSetup instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181540 91177308-0d34-0410-b5e6-96231b3b80d8
The compact unwind registers were defined in two different
places. It's better just to place them in the function that uses them
and specify that this is a 64-bit or 32-bit machine.
No functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181529 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we only checked whether the LR required saving if the frame size was
non-zero. However, because the caller reserves 1 word for the callee to use
that doesn't count towards our frame size, it is possible for the LR to need
saving while the frame size is 0.
We didn't hit the case where the LR needed saving because of a function call,
since the 1 word of stack we must allocate for our callee means the frame size
is always non-zero in that case. However, we can hit it if the LR is
clobbered by inline asm.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181520 91177308-0d34-0410-b5e6-96231b3b80d8
The patch I committed as revision 167864 introduced a regression that
causes LLVM to no longer generate appropriate relocs for @ha/@l symbol
references (but fail an assertion instead).
This is fixed here by re-enabling support for the VK_PPC_GAS_HA16/
VK_PPC_GAS_LO16 variant kinds (and their Darwin variants) in
PPCELFObjectWriter.cpp.
Tested by running projects/test-suite in -m32 mode with the integrated
assembler forced on. A standalone test case will be committed shortly
as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181450 91177308-0d34-0410-b5e6-96231b3b80d8
The floating-point record forms on PPC don't set the condition register bits
based on a comparison with zero (like the integer record forms do), but rather
based on the exception status bits.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181423 91177308-0d34-0410-b5e6-96231b3b80d8
createSystemZMCCodeGenInfo was not passing the optimization level to
InitMCCodeGenInfo(), so -O0 would be ignored. Fixes DebugInfo/namespace.ll
after the changes in r181271.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181312 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Michel Dänzer
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181269 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Michel Dänzer
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181268 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Michel Dänzer
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181267 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Michel Dänzer
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181266 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Michel Dänzer
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181265 91177308-0d34-0410-b5e6-96231b3b80d8
Patch by: Michel Dänzer
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181263 91177308-0d34-0410-b5e6-96231b3b80d8
This adds the actual lib/Target/SystemZ target files necessary to
implement the SystemZ target. Note that at this point, the target
cannot yet be built since the configure bits are missing. Those
will be provided shortly by a follow-on patch.
This version of the patch incorporates feedback from reviews by
Chris Lattner and Anton Korobeynikov. Thanks to all reviewers!
Patch by Richard Sandiford.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181203 91177308-0d34-0410-b5e6-96231b3b80d8
As pointed out by Evgeniy Stepanov, assigning a std::string temporary
to a StringRef is not a good idea. Rework MatchRegisterName to avoid
using the .lower routine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181192 91177308-0d34-0410-b5e6-96231b3b80d8
indirect branch at the end of the BB. Otherwise the if-converter and branch
folding passes may incorrectly update its successor info if they consider the
BB to fall through to the next BB.
rdar://13782395
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181161 91177308-0d34-0410-b5e6-96231b3b80d8
Instead operands are treated as negative immediates
where the sign bit is implicit in the instruction
encoding.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181151 91177308-0d34-0410-b5e6-96231b3b80d8
Now even small structures can be passed as byval (small enough
to be stored in GPRs).
The regression tests check the following function prototypes:
PR15293:
%artz = type { i32 }
define void @foo(%artz* byval %s)
define void @foo2(%artz* byval %s, i32 %p, %artz* byval %s2)
foo: "s" stored in R0
foo2: "s" stored in R0, "s2" stored in R2.
The following AAPCS rules are checked (5.5 Parameter Passing, C.4 and C.5;
"ParamSize" is the parameter size in 32-bit words; a toy model of the two
cases follows after the list):
-- NSAA != 0, NCRN < R4 and NCRN+ParamSize > R4.
Parameter should be sent to the stack; NCRN := R4.
-- NSAA != 0, and NCRN < R4, NCRN+ParamSize < R4.
Parameter stored in GPRs; NCRN += ParamSize.
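A toy model of the two cases (illustrative values; not the real lowering
code):

#include <cstdio>

// NCRN counts 32-bit argument GPRs used so far (4 means "past r3");
// NSAA is the byte offset of stacked arguments; ParamSize is in words.
int main() {
  unsigned NCRN = 2, NSAA = 4, ParamSize = 3;
  if (NSAA != 0 && NCRN < 4 && NCRN + ParamSize > 4) {
    NCRN = 4; // C.4: the whole parameter goes to the stack
    printf("on the stack; NCRN := r4\n");
  } else if (NSAA != 0 && NCRN < 4 && NCRN + ParamSize < 4) {
    NCRN += ParamSize; // C.5: the parameter fits in the remaining GPRs
    printf("in GPRs; NCRN now r%u\n", NCRN);
  }
}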
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181148 91177308-0d34-0410-b5e6-96231b3b80d8
X86ISelLowering has support to treat:
(icmp ne (and (xor %flags, -1), (shl 1, flag)), 0)
as if it were actually:
(icmp eq (and %flags, (shl 1, flag)), 0)
However, r179386 has code at the InstCombine level to handle this.
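The two forms really are equivalent; a brute-force check over 8-bit values
(illustrative, not part of the patch):

#include <cstdint>
#include <cstdio>

int main() {
  for (unsigned flags = 0; flags < 256; ++flags)
    for (unsigned flag = 0; flag < 8; ++flag) {
      uint8_t bit = (uint8_t)(1u << flag);
      bool lhs = ((uint8_t)~flags & bit) != 0; // and(xor(flags,-1), bit)
      bool rhs = (flags & bit) == 0;           // eq(and(flags, bit), 0)
      if (lhs != rhs) { printf("mismatch\n"); return 1; }
    }
  printf("identity holds\n");
}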
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181145 91177308-0d34-0410-b5e6-96231b3b80d8
This removes dire warnings about AArch64 being unsupported and enables
the tests when appropriate on this platform.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181135 91177308-0d34-0410-b5e6-96231b3b80d8
R_AARCH64_PCREL32 is present in even trivial .eh_frame sections and so
is required to compile any function without the "nounwind" attribute.
This change implements very basic infrastructure in the RuntimeDyldELF
file and allows (for example) the test-shift.ll MCJIT test to pass
on AArch64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181131 91177308-0d34-0410-b5e6-96231b3b80d8
This lets us remove some custom code that matched constant offsets
from globals at instruction selection time as a special addressing mode.
No intended functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181126 91177308-0d34-0410-b5e6-96231b3b80d8
The code now makes use of ComputeMaskedBits,
SelectionDAG::isBaseWithConstantOffset and TargetLowering::isGAPlusOffset
where appropriate reducing the amount of logic needed in XCoreISelLowering.
No intended functionality change.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181125 91177308-0d34-0410-b5e6-96231b3b80d8
Thread local storage is not supported by the XMOS linker so we handle
thread local variables by lowering the variable to an array of n elements
(where n is the number of hardware threads per core, currently 8
for all XMOS devices) indexed by the current thread ID.
Previously this lowering was spread across the XCoreISelLowering and the
XCoreAsmPrinter classes. Moving this to a separate pass should be much
cleaner.
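Conceptually, the lowering amounts to the following C-level model
(hypothetical names; the real pass rewrites LLVM IR):

#include <cstdio>

static int counter_storage[8]; // lowered form of "__thread int counter;"

static unsigned get_thread_id() { return 0; } // stands in for the XCore
                                              // thread-ID instruction
static int &counter() { return counter_storage[get_thread_id()]; }

int main() {
  counter() = 42;              // each hardware thread sees its own slot
  printf("%d\n", counter());
}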
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181124 91177308-0d34-0410-b5e6-96231b3b80d8
Supporting TLS in the large memory model is rather difficult at the
moment, so make sure no-one gets into difficulties by mistake.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181121 91177308-0d34-0410-b5e6-96231b3b80d8
The MOVZ/MOVK instruction sequence may not be the most efficient (a
literal-pool load could be better) but adding that would require
reinstating the ConstantIslands pass.
For now the sequence is correct, and that's enough. Beware: as of this
commit, GNU ld does not appear to support the relocations needed for
this. Its primary purpose (for now) will be to support JITed code,
since in that case there is no guarantee of where your code will end
up in memory relative to external symbols it references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181117 91177308-0d34-0410-b5e6-96231b3b80d8
This allows us to get rid of a hack in XCoreTargetObjectFile where
the DataRel* sections were overridden.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181116 91177308-0d34-0410-b5e6-96231b3b80d8
PowerPC assemblers are supposed to support a stand-alone '$' symbol
as an alternative of '.' to refer to the current PC. This does not
work in the LLVM assembler parser yet.
To avoid bootstrap failures when using the LLVM assembler as system
assembler, this patch modifies the assembler source code generated
by LLVM to avoid using '$' (and simply use '.' instead).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181054 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds a couple of Book II instructions (isync, icbi) to the
PowerPC assembler parser. These are needed when bootstrapping clang
with the integrated assembler forced on, because they are used in
inline asm statements in the code base.
The test case adds the full list of Book II storage control instructions,
including associated extended mnemonics. Again, those that are not yet
supported are marked as FIXME.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181052 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds infrastructure to support extended mnemonics in the
PowerPC assembler parser. It adds support specifically for those
extended mnemonics that LLVM will itself generate.
The test case lists *all* extended mnemonics according to the
PowerPC ISA v2.06 Book I, but marks those not yet supported
as FIXME.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181051 91177308-0d34-0410-b5e6-96231b3b80d8
This adds assembler parser support to the PowerPC back end.
The parser will run for any powerpc-*-* and powerpc64-*-* triples,
but was tested only on 64-bit Linux. The supported syntax is
intended to be compatible with the GNU assembler.
The parser does not yet support all PowerPC instructions, but
it does support anything that is generated by LLVM itself.
There is no support for testing restricted instruction sets yet,
i.e. the parser will always accept any instructions it knows,
no matter what feature flags are given.
Instruction operands will be checked for validity and errors
generated. (Error handling in general could still be improved.)
The patch adds a number of test cases to verify instruction
and operand encodings. The tests currently cover all instructions
from the following PowerPC ISA v2.06 Book I facilities:
Branch, Fixed-point, Floating-Point, and Vector.
Note that a number of these instructions are not yet supported
by the back end; they are marked with FIXME.
A number of follow-on check-ins will add extra features. When
they are all included, LLVM passes all tests (including bootstrap)
when using clang -cc1as as the system assembler.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181050 91177308-0d34-0410-b5e6-96231b3b80d8
its fields.
This removes false dependencies between DSP instructions which access different
fields of the control register. Implicit register operands are added to
instructions RDDSP and WRDSP after instruction selection, depending on the
value of the mask operand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181041 91177308-0d34-0410-b5e6-96231b3b80d8
Build attribute sections can now be read if they exist via ELFObjectFile, and
the llvm-readobj tool has been extended with an option to dump this information
if requested. Regression tests are also included which exercise these features.
Also update the docs with a fixed ARM ABI link and a new link to the Addenda
which provides the build attributes specification.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@181009 91177308-0d34-0410-b5e6-96231b3b80d8
the "identifier" parsed by the frontend callback by skipping forward
until we've consumed a token that ends at the point dictated by the
callback.
In addition, inform the callback when it's parsing an unevaluated
operand (e.g. mov eax, LENGTH A::x) as opposed to an evaluated one
(e.g. mov eax, [A::x]).
This commit depends on a clang commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180978 91177308-0d34-0410-b5e6-96231b3b80d8
register.
- Define pseudo instructions which store or load ccond field of the DSP
control register.
- Emit the pseudos in MipsSEInstrInfo::storeRegToStack and loadRegFromStack.
- Expand the pseudos before the callee-saved scan.
- Emit instructions RDDSP or WRDSP to copy between ccond field and GPRs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180969 91177308-0d34-0410-b5e6-96231b3b80d8
* lib/Target/Hexagon/HexagonInstrInfo.td: Add patterns to combine a
sequence of a pair of i32->i64 extensions followed by a "bitwise or"
into COMBINE_rr.
* lib/Target/Hexagon/HexagonPeephole.cpp: Copy propagate Rx in the
instruction Rp = COMBINE_Ir_V4(0, Rx) to the uses of Rp:subreg_loreg.
* test/CodeGen/Hexagon/union-1.ll: New test.
* test/CodeGen/Hexagon/combine_ir.ll: Fix test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180946 91177308-0d34-0410-b5e6-96231b3b80d8
* lib/Target/Hexagon/HexagonInstrInfo.cpp (GetDotNewPredOp):
Given a jump opcode return the right pred.new jump opcode with
a taken vs not-taken hint based on branch probabilities provided
by the target independent module.
* lib/Target/Hexagon/HexagonVLIWPacketizer.cpp: Use the above function.
* lib/Target/Hexagon/HexagonNewValueJump.cpp(getNewvalueJumpOpcode):
Enhance the existing function to use branch probabilities, like
HexagonInstrInfo::GetDotNewPredOp but for New Value (GPR) Jumps.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180923 91177308-0d34-0410-b5e6-96231b3b80d8
All but two patterns have been converted to the new syntax. The
remaining two patterns will require COPY_TO_REGCLASS instructions, which
the VLIW DAG Scheduler cannot handle.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180922 91177308-0d34-0410-b5e6-96231b3b80d8
Fortunately this pattern never matched, otherwise
we would have generated incorrect code.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180921 91177308-0d34-0410-b5e6-96231b3b80d8
CodeModel: It's now possible to create an MCJIT instance with any CodeModel you like. Previously it was only possible to
create an MCJIT that used CodeModel::JITDefault.
EnableFastISel: It's now possible to turn on the fast instruction selector.
The CodeModel option required some trickery. The problem is that previously, we were ensuring future binary compatibility in
the MCJITCompilerOptions by mandating that the user bzero's the options struct and passes the sizeof() that he saw; the
bindings then bzero the remaining bits. This works great but assumes that the bitwise zero equivalent of any field is a
sensible default value.
But this is not the case for LLVMCodeModel, or its internal equivalent, llvm::CodeModel::Model. In both of those, the default
for a JIT is CodeModel::JITDefault (or LLVMCodeModelJITDefault), which is not bitwise zero.
Hence this change introduces LLVMInitializeMCJITCompilerOptions(), which will initialize the user's options struct with
defaults. The user will use this in the same way that they would have previously used memset() or bzero(). MCJITCAPITest.cpp
illustrates the change, as does the comment in ExecutionEngine.h.
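The intended usage pattern, sketched against the C API (this assumes the
module is valid and the native target and MCJIT have been initialized):

#include <llvm-c/ExecutionEngine.h>

LLVMExecutionEngineRef createJIT(LLVMModuleRef module) {
  LLVMMCJITCompilerOptions options;
  LLVMInitializeMCJITCompilerOptions(&options, sizeof(options));
  options.CodeModel = LLVMCodeModelLarge; // no longer forced to JITDefault
  options.EnableFastISel = 1;             // the new fast-isel knob
  LLVMExecutionEngineRef ee;
  char *error = 0;
  if (LLVMCreateMCJITCompilerForModule(&ee, module, &options,
                                       sizeof(options), &error))
    return 0; // `error` holds the message
  return ee;
}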
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180893 91177308-0d34-0410-b5e6-96231b3b80d8
the things, and renames it to CBindingWrapping.h. I also moved
CBindingWrapping.h into Support/.
This new file just contains the macros for defining different wrap/unwrap
methods.
The calls to those macros, as well as any custom wrap/unwrap definitions
(like for array of Values for example), are put into corresponding C++
headers.
Doing this required some #include surgery, since some .cpp files relied
on the fact that including Wrap.h implicitly caused the inclusion of a
bunch of other things.
This also now means that the C++ headers will include their corresponding
C API headers; for example Value.h must include llvm-c/Core.h. I think
this is harmless, since the C API headers contain just external function
declarations and some C types, so I don't believe there should be any
nasty dependency issues here.
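For illustration, DEFINE_SIMPLE_CONVERSION_FUNCTIONS(ty, ref) expands to
roughly the following; stand-in types are used here so the snippet is
self-contained, and the exact expansion should be checked against
llvm/Support/CBindingWrapping.h:

struct Thing;                                 // stand-in C++ type
typedef struct LLVMOpaqueThing *LLVMThingRef; // stand-in C API handle

inline Thing *unwrap(LLVMThingRef P) {
  return reinterpret_cast<Thing *>(P);
}
inline LLVMThingRef wrap(const Thing *P) {
  return reinterpret_cast<LLVMThingRef>(const_cast<Thing *>(P));
}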
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180881 91177308-0d34-0410-b5e6-96231b3b80d8
Expand copy instructions between two accumulator registers before the callee-saved
scan is done. Handle copies between integer GPRs and hi/lo registers in
MipsSEInstrInfo::copyPhysReg. Delete pseudo-copy instructions that are not
needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180827 91177308-0d34-0410-b5e6-96231b3b80d8
1. VarArgStyleRegisters: the functionality that emits "store" instructions for byval regs has been moved out into a separate method, "StoreByValRegs". Before this patch, VarArgStyleRegisters had conflated use cases: it was used both for variadic functions and for regular functions with byval parameters. In the latter case it created a new stack frame and registered it as a VarArg frame, which is wrong.
This patch replaces the VarArgStyleRegisters usage for byval parameters with the StoreByValRegs method.
2. In ARMMachineFunctionInfo, "get/setVarArgsRegSaveSize" was renamed to "get/setArgRegsSaveSize" for the same reason: sometimes it was used for variadic functions, and sometimes for byval parameters in regular functions. The property actually describes the size of the register area that holds arguments, hence the rename.
3. In ARMISelLowering.cpp, in the ARMTargetLowering methods computeRegArea and StoreByValRegs, VARegXXXXXX was renamed to ArgRegsXXXXXX, again for the same reason.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180774 91177308-0d34-0410-b5e6-96231b3b80d8
We need to initialize this to something, and since clang does not set
the shader type attribute and clang is used only for compute shaders,
initializing it to COMPUTE seems like the best choice.
Reviewed-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180620 91177308-0d34-0410-b5e6-96231b3b80d8
"hint" space for Thumb actually overlaps the encoding space of the CPS
instruction. In actuality, hints can be defined as CPS instructions where the
imod and M bits are all zero.
Handle decoding of permitted nop-compatible hints (i.e. nop, yield, wfi, wfe,
sev) in DecodeT2CPSInstruction.
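A minimal sketch of the check, assuming the usual Thumb2 CPS field
layout (imod in bits 10:9 and M in bit 8 of the low halfword); the field
positions are an assumption here, not taken from the patch:

#include <cstdint>

// If imod and M are all zero, the encoding is one of the hints rather
// than a real CPS; the low bits then select nop/yield/wfe/wfi/sev.
static bool isHintEncoding(uint16_t LowHalf) {
  unsigned imod = (LowHalf >> 9) & 0x3; // assumed field position
  unsigned M = (LowHalf >> 8) & 0x1;    // assumed field position
  return imod == 0 && M == 0;
}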
This commit adds a proper diagnostic message for Imm0_4 and updates all tests.
Patch by Mihail Popa <Mihail.Popa@arm.com>.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180617 91177308-0d34-0410-b5e6-96231b3b80d8
In the default PowerPC assembler syntax, registers are specified simply
by number, so they cannot be distinguished from immediate values (without
looking at the opcode). This means that the default operand matching logic
for the asm parser does not work, and we need to specify custom matchers.
Since those can only be specified with RegisterOperand classes and not
directly on the RegisterClass, all instructions patterns used by the asm
parser need to use a RegisterOperand (instead of a RegisterClass) for
all their register operands.
This patch adds one RegisterOperand for each RegisterClass, using the
same name as the class, just in lower case, and updates all instruction
patterns to use RegisterOperand instead of RegisterClass operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180611 91177308-0d34-0410-b5e6-96231b3b80d8
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong sub-opcodes).
Tests will be added together with the asm parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180608 91177308-0d34-0410-b5e6-96231b3b80d8
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong sub-opcodes). Note that apparently
the compiler currently never generates pre-inc instructions
for floating point types for some reason ...
Tests will be added together with the asm parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180607 91177308-0d34-0410-b5e6-96231b3b80d8
When testing the asm parser, I noticed wrong encodings for the
above instructions (wrong operand name in rldimi, wrong form
and sub-opcode for rldcl).
Tests will be added together with the asm parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180606 91177308-0d34-0410-b5e6-96231b3b80d8
When testing the asm parser, I ran into an error when using a conditional
branch to an external symbol (this doesn't occur in compiler-generated
code) due to missing support in PPCELFObjectWriter::getRelocTypeInner.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180605 91177308-0d34-0410-b5e6-96231b3b80d8
Mips has delay slots for certain instructions
like jumps and branches. These are the instructions
that follow the branch or jump and are executed
before the jump or branch completes.
Early Mips compilers could not cope with delay slots
and left them up to the assembler. The assembler would
fill the delay slots with appropriate instructions,
usually just a nop, to preserve correct runtime behavior.
The default behavior is selected with .set reorder.
To tell the assembler not to touch the delay slot,
one uses .set noreorder.
For backwards compatibility we need to support
.set reorder and have it be the default behavior in the
assembler.
Our support for it is to insert a NOP directly after an
instruction with a delay slot when in .set reorder mode.
Contributor: Vladimir Medic
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180584 91177308-0d34-0410-b5e6-96231b3b80d8
latency for certain models of the Intel Atom family, by converting
instructions into their equivalent LEA instructions, when it is both
useful and possible to do so.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180573 91177308-0d34-0410-b5e6-96231b3b80d8
This exposed an issue with PowerPC AltiVec where it appears it was setting the wrong vector boolean contents. The included change
fixes the PowerPC tests, and was OK'd by Hal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180129 91177308-0d34-0410-b5e6-96231b3b80d8
AArch64 always demands a register-scavenger, so the pointer should never be
NULL. However, in the spirit of paranoia, we'll assert it before use just in
case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180080 91177308-0d34-0410-b5e6-96231b3b80d8
now taken care of by the frontend, which allows us to parse arbitrary C/C++
variables.
Part of rdar://13663589
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180037 91177308-0d34-0410-b5e6-96231b3b80d8
-- C.4 and C.5 statements, when NSAA is not equal to SP.
-- C.1.cp statement for VA functions. Note: There are no VFP CPRCs in a
variadic procedure.
Before this patch, "NSAA != 0" meant "don't use GPRs anymore". But there are
some exceptions in the AAPCS:
1. For a non-VA function: allocate all VFP regs for CPRCs. When all VFPs are
allocated, CPRCs are sent to the stack, while non-CPRCs may still be allocated
in GPRs.
2. For VA functions, check that all params use GPRs and then the stack.
No exceptions, no CPRCs here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@180011 91177308-0d34-0410-b5e6-96231b3b80d8
Rather than just splitting the input type and hoping for the best, apply
a bit more cleverness. Just splitting the types until the source is
legal often leads to an illegal result type, which is then widened, and a
scalarization step is introduced, which leads to truly horrible code
generation. With the loop vectorizer, these sorts of operations are much
more common, and so it's worth extra effort to do them well.
Add a legalization hook for the operands of a TRUNCATE node, which will
be encountered after the result type has been legalized but while the
operand type is still illegal. If simple splitting of both types
ends up with the result type of each half still being legal, just
do that (v16i16 -> v16i8 on ARM, for example). If, however, that would
result in an illegal result type (v8i32 -> v8i8 on ARM, for example),
we can get more clever with power-of-two vectors. Specifically,
split the input type, but also widen the result element size, then
concatenate the halves and truncate again. For example, on ARM,
to perform "%res = v8i8 trunc v8i32 %in" we transform it to:
%inlo = v4i32 extract_subvector %in, 0
%inhi = v4i32 extract_subvector %in, 4
%lo16 = v4i16 trunc v4i32 %inlo
%hi16 = v4i16 trunc v4i32 %inhi
%in16 = v8i16 concat_vectors v4i16 %lo16, v4i16 %hi16
%res = v8i8 trunc v8i16 %in16
This allows instruction selection to generate three VMOVN instructions
instead of a sequence of moves, stores, and loads.
Update the ARMTargetTransformInfo to take this improved legalization
into account.
Consider the simplified IR:
define <16 x i8> @test1(<16 x i32>* %ap) {
%a = load <16 x i32>* %ap
%tmp = trunc <16 x i32> %a to <16 x i8>
ret <16 x i8> %tmp
}
define <8 x i8> @test2(<8 x i32>* %ap) {
%a = load <8 x i32>* %ap
%tmp = trunc <8 x i32> %a to <8 x i8>
ret <8 x i8> %tmp
}
Previously, we would generate the truly hideous:
.syntax unified
.section __TEXT,__text,regular,pure_instructions
.globl _test1
.align 2
_test1: @ @test1
@ BB#0:
push {r7}
mov r7, sp
sub sp, sp, #20
bic sp, sp, #7
add r1, r0, #48
add r2, r0, #32
vld1.64 {d24, d25}, [r0:128]
vld1.64 {d16, d17}, [r1:128]
vld1.64 {d18, d19}, [r2:128]
add r1, r0, #16
vmovn.i32 d22, q8
vld1.64 {d16, d17}, [r1:128]
vmovn.i32 d20, q9
vmovn.i32 d18, q12
vmov.u16 r0, d22[3]
strb r0, [sp, #15]
vmov.u16 r0, d22[2]
strb r0, [sp, #14]
vmov.u16 r0, d22[1]
strb r0, [sp, #13]
vmov.u16 r0, d22[0]
vmovn.i32 d16, q8
strb r0, [sp, #12]
vmov.u16 r0, d20[3]
strb r0, [sp, #11]
vmov.u16 r0, d20[2]
strb r0, [sp, #10]
vmov.u16 r0, d20[1]
strb r0, [sp, #9]
vmov.u16 r0, d20[0]
strb r0, [sp, #8]
vmov.u16 r0, d18[3]
strb r0, [sp, #3]
vmov.u16 r0, d18[2]
strb r0, [sp, #2]
vmov.u16 r0, d18[1]
strb r0, [sp, #1]
vmov.u16 r0, d18[0]
strb r0, [sp]
vmov.u16 r0, d16[3]
strb r0, [sp, #7]
vmov.u16 r0, d16[2]
strb r0, [sp, #6]
vmov.u16 r0, d16[1]
strb r0, [sp, #5]
vmov.u16 r0, d16[0]
strb r0, [sp, #4]
vldmia sp, {d16, d17}
vmov r0, r1, d16
vmov r2, r3, d17
mov sp, r7
pop {r7}
bx lr
.globl _test2
.align 2
_test2: @ @test2
@ BB#0:
push {r7}
mov r7, sp
sub sp, sp, #12
bic sp, sp, #7
vld1.64 {d16, d17}, [r0:128]
add r0, r0, #16
vld1.64 {d20, d21}, [r0:128]
vmovn.i32 d18, q8
vmov.u16 r0, d18[3]
vmovn.i32 d16, q10
strb r0, [sp, #3]
vmov.u16 r0, d18[2]
strb r0, [sp, #2]
vmov.u16 r0, d18[1]
strb r0, [sp, #1]
vmov.u16 r0, d18[0]
strb r0, [sp]
vmov.u16 r0, d16[3]
strb r0, [sp, #7]
vmov.u16 r0, d16[2]
strb r0, [sp, #6]
vmov.u16 r0, d16[1]
strb r0, [sp, #5]
vmov.u16 r0, d16[0]
strb r0, [sp, #4]
ldm sp, {r0, r1}
mov sp, r7
pop {r7}
bx lr
Now, however, we generate the much more straightforward:
.syntax unified
.section __TEXT,__text,regular,pure_instructions
.globl _test1
.align 2
_test1: @ @test1
@ BB#0:
add r1, r0, #48
add r2, r0, #32
vld1.64 {d20, d21}, [r0:128]
vld1.64 {d16, d17}, [r1:128]
add r1, r0, #16
vld1.64 {d18, d19}, [r2:128]
vld1.64 {d22, d23}, [r1:128]
vmovn.i32 d17, q8
vmovn.i32 d16, q9
vmovn.i32 d18, q10
vmovn.i32 d19, q11
vmovn.i16 d17, q8
vmovn.i16 d16, q9
vmov r0, r1, d16
vmov r2, r3, d17
bx lr
.globl _test2
.align 2
_test2: @ @test2
@ BB#0:
vld1.64 {d16, d17}, [r0:128]
add r0, r0, #16
vld1.64 {d18, d19}, [r0:128]
vmovn.i32 d16, q8
vmovn.i32 d17, q9
vmovn.i16 d16, q8
vmov r0, r1, d16
bx lr
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179989 91177308-0d34-0410-b5e6-96231b3b80d8
With a little help from the frontend, it looks like the standard va_*
intrinsics can do the job.
Also clean up an old bitcast hack in LowerVAARG that dealt with
unaligned double loads. Load SDNodes can specify an alignment now.
Still missing: Calling varargs functions with float arguments.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179961 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, when spilling 64-bit paired registers, an LDMIA with both
a FrameIndex and an offset was produced. This kind of instruction
shouldn't exist, and the extra operand was being confused with the
predicate, causing aborts later on.
This removes the invalid 0-offset from the instruction being
produced.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179956 91177308-0d34-0410-b5e6-96231b3b80d8
I think it's almost impossible to fold atomic fences profitably under
LLVM/C++11 semantics. As a result, this is now unused and just
cluttering up the target interface.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179940 91177308-0d34-0410-b5e6-96231b3b80d8
The getSwappedPredicate function can be used in other places (such as in
improvements to the PPCCTRLoops pass). Instead of keeping it as a static
function in PPCInstrInfo, move it into PPCPredicates with other
predicate-related things.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179926 91177308-0d34-0410-b5e6-96231b3b80d8
trying to move as much FastISel logic as possible out of the main path in
SelectionDAGISel - intermixing them just adds confusion.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179902 91177308-0d34-0410-b5e6-96231b3b80d8
When matching a compare with a subtract where the arguments of the compare are
swapped w.r.t. the arguments of the subtract, we need to negate the predicates
(or CR bit indices) of the users. This, however, is not the same as inverting
the predicate (negating LT -> GT, but inverting LT -> GE, for example). The ARM
backend seems to do this correctly, but when I adapted the code for the PPC
backend, I introduced an error in this logic.
Comparison optimization is now enabled again by default.
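To make the swap-versus-invert distinction concrete, here is a
stand-alone sketch (simplified predicates, not the PPC definitions):

enum Pred { LT, LE, EQ, GE, GT, NE };

// Swapping is what's needed when the comparison operands are exchanged:
// (a < b) is the same test as (b > a).
Pred swapPred(Pred P) {
  switch (P) {
  case LT: return GT;
  case GT: return LT;
  case LE: return GE;
  case GE: return LE;
  default: return P; // EQ and NE are symmetric.
  }
}

// Inverting answers the opposite question: !(a < b) is (a >= b).
Pred invertPred(Pred P) {
  switch (P) {
  case LT: return GE;
  case GE: return LT;
  case GT: return LE;
  case LE: return GT;
  case EQ: return NE;
  default: return EQ; // NE -> EQ
  }
}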
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179899 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds support for recoded (meaning assembly-language compatible with
standard mips32) arithmetic 32-bit instructions.
Patch by Zoran Jovanovic.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@179873 91177308-0d34-0410-b5e6-96231b3b80d8