This should hopefully lead to minor improvements in code generation, and
more accurate spill/reload comments in assembly.
Also fix isLoadFromStackSlotPostFE/isStoreToStackSlotPostFE so they
don't lead to misleading assembly comments for merged memory operands;
this is technically orthogonal, but in practice the relevant memory
operand lists don't show up without this change.
Differential Revision: https://reviews.llvm.org/D59713
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356963 91177308-0d34-0410-b5e6-96231b3b80d8
We currently use only VLDR/VSTR for all 64-bit loads/stores, so the
memory operands must be word-aligned. Mark aligned operations as legal
and narrow non-aligned ones to 32 bits.
While we're here, also mark non-power-of-2 loads/stores as unsupported.
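A minimal standalone sketch (plain C++, little-endian, not the legalizer
code) of what narrowing a non-word-aligned 64-bit store to 32 bits means
in practice:

  #include <cstdint>
  #include <cstring>

  void store64Narrowed(uint8_t *Addr, uint64_t Val) {
    uint32_t Lo = (uint32_t)Val;
    uint32_t Hi = (uint32_t)(Val >> 32);
    std::memcpy(Addr, &Lo, sizeof(Lo));      // first 32-bit store
    std::memcpy(Addr + 4, &Hi, sizeof(Hi));  // second 32-bit store
  }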
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356872 91177308-0d34-0410-b5e6-96231b3b80d8
This takes sequences like "mov r4, sp; str r0, [r4]", and optimizes them
to something like "str r0, [sp]".
For regular stack variables, this optimization was already implemented:
we lower loads and stores using frame indexes, which are expanded later.
However, when constructing a call frame for a call with more than four
arguments, the existing optimization doesn't apply. We need to use
stores which are actually relative to the current value of sp, and don't
have an associated frame index.
This patch adds a special case to handle that construct. At the DAG
level, this is an ISD::STORE where the address is a CopyFromReg from SP
(plus a small constant offset).
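A rough sketch (hypothetical helper, not the code in this patch) of what
recognizing that address shape involves at the DAG level; SPReg is passed
in so the sketch doesn't depend on the ARM register enum:

  #include "llvm/CodeGen/SelectionDAGNodes.h"
  using namespace llvm;

  // Does Addr look like "CopyFromReg(SP) [+ small constant offset]"?
  static bool isSPRelativeAddress(SDValue Addr, unsigned SPReg) {
    SDValue Base = Addr;
    if (Addr.getOpcode() == ISD::ADD &&
        isa<ConstantSDNode>(Addr.getOperand(1)))
      Base = Addr.getOperand(0);
    return Base.getOpcode() == ISD::CopyFromReg &&
           cast<RegisterSDNode>(Base.getOperand(1))->getReg() == SPReg;
  }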
This applies only to Thumb1: in Thumb2 or ARM mode, a regular store
instruction can access SP directly, so the COPY gets eliminated by
existing code.
The change to ARMDAGToDAGISel::SelectThumbAddrModeSP is a related
cleanup: we shouldn't pretend that it can select anything other than
frame indexes.
Differential Revision: https://reviews.llvm.org/D59568
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356601 91177308-0d34-0410-b5e6-96231b3b80d8
This change does two things. One, it ensures compilation will abort
instead of miscompiling if ARMFrameLowering::determineCalleeSaves
chooses not to save LR in a case where it's necessary. Two, it changes
the way we estimate the size of a function to be more conservative in
the presence of constant pool entries and jump tables.
EstimateFunctionSizeInBytes probably still isn't really conservative
enough, but I'm not sure how we can come up with a reliable estimate
before constant islands runs.
Differential Revision: https://reviews.llvm.org/D59439
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356527 91177308-0d34-0410-b5e6-96231b3b80d8
These changes are related to PR37743 and include:
* SelectionDAGBuilder::visitSelect handles the unary
  SelectPatternFlavor::SPF_ABS case to build an ABS node.
* Delete the redundant recognizer of the integer ABS pattern from the
  DAGCombiner.
* Add promotion of the integer ABS node in LegalizeIntegerTypes.
* Expand-based legalization of the integer result for ABS nodes.
* Expand-based legalization of ABS vector operations.
* Add integer abs test cases for different type sizes for the Thumb arch.
* Add custom ABS expansion and change the SAD pattern recognizer for the
  X86 arch: the i64 result of the ABS is expanded to (see the sketch
  after this list):
    tmp = (SRA, Hi, 31)
    Lo = (UADDO tmp, Lo)
    Hi = (XOR tmp, (ADDCARRY tmp, Hi, Lo:1))
    Lo = (XOR tmp, Lo)
* Change the "detectZextAbsDiff" function to recognize the pattern with
  an ABS node. Given an ABS node, detect the following pattern:
    (ABS (SUB (ZERO_EXTEND a), (ZERO_EXTEND b))).
* Update the integer abs test cases for codegen with ABS node support
  for AArch64.
* Indicate that ABS is legal for the i64 type when NEON is supported.
* Update the integer abs test cases to show the codegen changes.
* Add combining and legalization of ABS nodes for the Thumb arch.
* Extend 'matchSelectPattern' to recognize ABS patterns with the
  ICMP_SGE condition.
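A minimal standalone sketch (plain C++, not the DAG code) of the i64
expansion above, assuming Lo/Hi are the little-endian halves of the
64-bit value:

  #include <cstdint>

  uint64_t abs64Expanded(uint32_t Lo, uint32_t Hi) {
    uint32_t Tmp = (uint32_t)((int32_t)Hi >> 31);  // tmp = (SRA Hi, 31)
    uint64_t Sum = (uint64_t)Lo + Tmp;             // UADDO: 32-bit add, carry in bit 32
    uint32_t NewLo = (uint32_t)Sum;
    uint32_t Carry = (uint32_t)(Sum >> 32);
    uint32_t NewHi = (Hi + Tmp + Carry) ^ Tmp;     // (XOR tmp, (ADDCARRY tmp, Hi, Lo:1))
    NewLo ^= Tmp;                                  // (XOR tmp, Lo)
    return ((uint64_t)NewHi << 32) | NewLo;
  }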
For discussion, see https://bugs.llvm.org/show_bug.cgi?id=37743
Patch by: @ikulagin (Ivan Kulagin)
Differential Revision: https://reviews.llvm.org/D49837
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356468 91177308-0d34-0410-b5e6-96231b3b80d8
I am about to introduce some non-power-of-2 width vector MVTs. This
commit fixes a power-of-2 assumption that my forthcoming change would
otherwise break, as shown by test/CodeGen/ARM/vcvt_combine.ll and
vdiv_combine.ll.
Differential Revision: https://reviews.llvm.org/D58927
Change-Id: I56a282e365d3874ab0621e5bdef98a612f702317
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356341 91177308-0d34-0410-b5e6-96231b3b80d8
The constant island pass currently only looks at the instruction immediately
before a branch for a CMP to fold into a CBZ/CBNZ. This extends it to search
backwards for the instruction that defines CPSR. We need to ensure that the
register is not overridden between the CMP and the branch.
Differential Revision: https://reviews.llvm.org/D59317
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356336 91177308-0d34-0410-b5e6-96231b3b80d8
tMOVr and tPUSH/tPOP/tPOP_RET have register constraints which can't be
expressed in TableGen, so check them explicitly. I've unfortunately run
into issues with both of these recently; hopefully this saves some time
for someone else in the future.
Differential Revision: https://reviews.llvm.org/D59383
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356303 91177308-0d34-0410-b5e6-96231b3b80d8
When choosing whether a pair of loads can be combined into a single
wide load, we check that the load only has a sext user and that the sext
also has only one user. But this can prevent the transformation in
cases where parallel MACs use the same loaded data multiple times.
To enable this, we need to fix up any other uses after creating the
wide load: generating a trunc and a shift + trunc pair to recreate
the narrow values. We also need to keep a record of which loads have
already been widened.
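A minimal standalone sketch (plain C++, assuming i16 loads widened to an
i32 load on a little-endian target) of the fix-up described above:

  #include <cstdint>

  // Recreate the two narrow values from the widened load so any other
  // users of the original narrow loads can be rewired to them.
  void recreateNarrowValues(uint32_t Wide, uint16_t &Lo, uint16_t &Hi) {
    Lo = (uint16_t)Wide;           // trunc
    Hi = (uint16_t)(Wide >> 16);   // shift + trunc
  }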
Differential Revision: https://reviews.llvm.org/D59215
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@356132 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds an XCOFF object file format type to the LLVM target triple.
It will be used later by object file and assembly generation for the AIX platform.
Differential Revision: https://reviews.llvm.org/D58930
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355989 91177308-0d34-0410-b5e6-96231b3b80d8
The AMDGPU target has run out of Subtarget feature flags, hitting the limit of 64.
AssemblerPredicates use at most a uint64_t for their representation.
CodeGen, on the other hand, exhausted this limit a long time ago and switched
to a FeatureBitset with a current limit of 192 bits.
This patch completes the transition to the bitset for feature bits, extending
it to the asm matcher and the MC code emitter.
Differential Revision: https://reviews.llvm.org/D59002
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355839 91177308-0d34-0410-b5e6-96231b3b80d8
The indexed variants of the vfmal.f16 and vfmsl.f16
instructions use the upper bits of the indexed
operand to store the index (1 bit for the double
variant, 2 bits for the quad).
This limits the usable registers to d0 - d7 or
s0 - s15. This patch enforces this limitation.
Differential Revision: https://reviews.llvm.org/D59021
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355707 91177308-0d34-0410-b5e6-96231b3b80d8
When lowering a select_cc node where the true and false values are of type f16,
we can't use a general conditional move because the FP16 instructions do not
support conditional execution. Instead, we must ensure that the condition code
is one of the four supported by the VSEL instruction.
Differential revision: https://reviews.llvm.org/D58813
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355385 91177308-0d34-0410-b5e6-96231b3b80d8
The isScaledConstantInRange function takes upper and lower bounds which are
checked after dividing by the scale, so the bounds checks for half, single and
double precision should all be the same. Previously, we had wrong bounds checks
for half precision, so we selected an immediate that the instructions can't
actually represent.
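A simplified standalone sketch (hypothetical signature, not the actual
LLVM helper) of the check described above; since the offset is divided
by the scale before the comparison, the same bounds apply to all three
precisions:

  #include <cstdint>

  // Returns true if Imm is a multiple of Scale and the scaled value lies
  // in [RangeMin, RangeMax).
  bool isScaledConstantInRange(int64_t Imm, int Scale,
                               int RangeMin, int RangeMax) {
    if (Imm % Scale != 0)
      return false;
    int64_t Scaled = Imm / Scale;
    return Scaled >= RangeMin && Scaled < RangeMax;
  }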
Differential revision: https://reviews.llvm.org/D58822
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355305 91177308-0d34-0410-b5e6-96231b3b80d8
1) GCC complains that KnownValid is set but not used.
2) In ARMInstructionSelector::selectGlobal() the code is mixing "enumeral
and non-enumeral type in conditional expression". Solve this by casting
to unsigned which is the final type anyway.
Differential Revision: https://reviews.llvm.org/D58834
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355304 91177308-0d34-0410-b5e6-96231b3b80d8
The new addressing mode added for the v8.2A FP16 instructions uses bit 8 of the
immediate to encode the sign of the offset, like the other FP loads/stores, so
it needs to be treated the same way.
Differential revision: https://reviews.llvm.org/D58816
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355201 91177308-0d34-0410-b5e6-96231b3b80d8
This function was not checking for the condition code variants which are
undefined if either input is NaN, so we were missing selection of the VSEL
instruction in some cases when using -fno-honor-nans or -ffast-math.
Differential revision: https://reviews.llvm.org/D58812
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355199 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The description of KnownBits::zext() and
KnownBits::zextOrTrunc() has confusingly claimed
that the operation is equivalent to zero extending the
value we're tracking. That has not been true; instead,
the user has been forced to explicitly set the extended
bits as known zero afterwards.
This patch adds a second argument to KnownBits::zext()
and KnownBits::zextOrTrunc() to control if the extended
bits should be considered as known zero or as unknown.
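A hedged usage sketch of the new flag (the parameter name here is an
assumption; the patch adds a second bool argument controlling whether
the extended bits are treated as known zero):

  #include "llvm/Support/KnownBits.h"

  // Widen the tracked bits to 64 bits, marking the new high bits as
  // known zero, which callers previously had to do by hand.
  llvm::KnownBits widenToI64(const llvm::KnownBits &Known) {
    return Known.zext(64, /*ExtendedBitsAreKnownZero=*/true);
  }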
Reviewers: craig.topper, RKSimon
Reviewed By: RKSimon
Subscribers: javed.absar, hiraditya, jdoerfert, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D58650
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355099 91177308-0d34-0410-b5e6-96231b3b80d8
This gets rid of some duplication in the TableGen definition, but it
forces us to keep both a pointer and a reference to the subtarget in the
ARMInstructionSelector. That is pretty ugly but it might be a reasonable
trade-off, since the TableGen descriptions should outlive the code in
the selector (or in the worst case we can update to use just the
reference when we get rid of DAGISel).
Differential Revision: https://reviews.llvm.org/D58031
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355083 91177308-0d34-0410-b5e6-96231b3b80d8
Add the same level of support as for ARM mode (i.e. still no TLS
support).
In most cases, it is sufficient to replace the opcodes with their
t2 equivalents, but there are some idiosyncrasies that I decided to
preserve because I don't understand the full implications:
* For ARM we use LDRi12 to load from constant pools, but for Thumb we
use t2LDRpci (I'm not sure if the ideal would be to use t2LDRi12 for
Thumb as well, or to use LDRcp for ARM).
* For Thumb we don't have an equivalent for MOV|LDRLIT_ga_pcrel_ldr, so
we have to generate MOV|LDRLIT_ga_pcrel plus a load from the GOT.
The tests are in separate files because they're hard enough to read even
without doubling the number of checks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@355077 91177308-0d34-0410-b5e6-96231b3b80d8
The --disassembler-options option, or -M, is used to customize
the disassembler and affect its output.
The two implemented options allow selecting register names on ARM:
* With -Mreg-names-raw, the disassembler uses rNN for all registers.
* With -Mreg-names-std it prints sp, lr and pc for r13, r14 and r15,
which is the default behavior of llvm-objdump.
Differential Revision: https://reviews.llvm.org/D57680
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@354870 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a few extra Thumb1 opcodes to improve the peephole optimiser's
ability to remove redundant cmp instructions. tADC and tSBC require
a small fixup to prevent MOVS from being moved past the instruction,
which would give the wrong flags.
Differential Revision: https://reviews.llvm.org/D58281
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@354791 91177308-0d34-0410-b5e6-96231b3b80d8
More or less all the instructions defined in the v8.2a full-fp16
extension are defined as UNPREDICTABLE if you put them in an IT block
(Thumb) or use with any condition other than AL (ARM). LLVM didn't
know that, and was happy to conditionalise them.
In order to force these instructions to count as not predicable, I had
to make a small Tablegen change. The code generation back end mostly
decides whether an instruction is predicable by looking for something it
can identify as a predicate operand; there's an isPredicable bit flag
that overrides that check in the positive direction, but nothing that
overrides it in the negative direction.
(I considered the alternative approach of actually removing the
predicate operand from those instructions, but thought that it would
be more painful overall for instructions differing only in data type
to have different shapes of operand list. This way, the only code that
has to notice the difference is the if-converter.)
So I've added an isUnpredicable bit alongside isPredicable, and set
that bit on the right subset of FP16 instructions, and also on the
VSEL, VMAXNM/VMINNM and VRINT[ANPM] families which should be
unpredicable for all data types.
I've included a couple of representative regression tests, both of
which previously caused an fp16 instruction to be conditionalised in
ARM state and (with -arm-no-restrict-it) to be put in an IT block in
Thumb.
Reviewers: SjoerdMeijer, t.p.northover, efriedma
Reviewed By: efriedma
Subscribers: jdoerfert, javed.absar, kristof.beyls, hiraditya, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D57823
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@354768 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a number of missing Thumb1 opcodes so that the peephole optimiser can
remove redundant CMP instructions.
Reapplying this after the first attempt, which broke non-Thumb1 code because
the t2ADDri instruction can be used with frame indices. In Thumb1 we use
tADDframe instead.
Differential Revision: https://reviews.llvm.org/D57833
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@354667 91177308-0d34-0410-b5e6-96231b3b80d8
This is exactly the same as ARM mode, so for the instruction selector
tests we just extract them to a new file and run with the same checks
for both ARM and Thumb mode.
For the legalizer we need to update the tests for soft float a bit, but
only because BL and tBL are slightly different. We could be pedantic and
check that we get a well-formed BL for ARM mode and a tBL for Thumb, but
for the purposes of the legalizer test it's sufficient to just skip over
the predicate operands in the checks. Also note that we have the
pedantic checks in the divmod test, so we're covered.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@354665 91177308-0d34-0410-b5e6-96231b3b80d8