Whenever an add/sub immediate needs a fixup, we set that immediate field to zero,
which is correct, but we also set the shift bits to zero, which is incorrect for
instructions that use lsl #12. This patch makes sure that if lsl #12 was used,
it will appear in the encoding of the instruction.
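A minimal sketch of the idea, assuming the standard AArch64 add/sub-immediate layout (imm12 in bits [21:10], the lsl #12 "sh" flag in bit 22); illustrative only, not the in-tree patch:
'''
#include <cstdint>
// Zero only the imm12 field so a later fixup can fill it in, while the
// "sh" bit (bit 22, i.e. lsl #12) is preserved in the emitted encoding.
uint32_t zeroImm12KeepShift(uint32_t Insn) {
  const uint32_t Imm12Mask = 0xFFFu << 10; // bits [21:10]: the immediate field
  return Insn & ~Imm12Mask;                // bit 22 (lsl #12) stays untouched
}
'''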
Differential Revision: https://reviews.llvm.org/D23930
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281898 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
When the target of an s_branch instruction is the branch itself, the backend should emit an offset of -1, but instead it emits 0.
'''
label:
s_branch label // should emit [0xff,0xff,0x82,0xbf]
'''
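For reference, a hedged sketch of the offset rule assumed here: the SOPP simm16 field counts dwords relative to the instruction after the branch, so a branch to its own label must encode -1 (0xffff) rather than 0.
'''
#include <cstdint>
// Illustrative only, not the backend's fixup code.
int16_t branchOffsetInDwords(uint64_t BranchAddr, uint64_t TargetAddr) {
  // The offset is measured from the end of the 4-byte s_branch instruction.
  return static_cast<int16_t>(
      (static_cast<int64_t>(TargetAddr) - static_cast<int64_t>(BranchAddr + 4)) / 4);
}
// branchOffsetInDwords(X, X) == -1, matching [0xff,0xff,0x82,0xbf] above.
'''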
Tom, Matt: why are we adjusting fixup values in the applyFixup() method instead of in processFixup()? processFixup() calls adjustFixupValue() but does nothing with its result.
Reviewers: vpykhtin, artem.tamazov, tstellarAMD
Subscribers: arsenm, kzhuravl, wdng, nhaehnle, yaxunl
Differential Revision: https://reviews.llvm.org/D24671
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281896 91177308-0d34-0410-b5e6-96231b3b80d8
The initial mapping symbol state is set from the triple, but we only checked
for the little-endian thumb triple, so we could end up with an ARM mapping symbol
for big-endian thumb.
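A minimal sketch of the kind of check involved, assuming llvm::Triple from this era (not the exact in-tree change):
'''
#include "llvm/ADT/Triple.h"
// Treat both the little-endian and big-endian Thumb triples as Thumb when
// choosing the initial mapping symbol state, instead of only Triple::thumb.
static bool startsInThumbState(const llvm::Triple &TT) {
  return TT.getArch() == llvm::Triple::thumb ||
         TT.getArch() == llvm::Triple::thumbeb;
}
'''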
Differential Revision: https://reviews.llvm.org/D24553
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281894 91177308-0d34-0410-b5e6-96231b3b80d8
ldm and stm instructions always require 4-byte alignment on the pointer, but we
weren't checking this before trying to reduce code-size by replacing a
post-indexed load/store with them. Unfortunately, we were also dropping this
information in DAG ISel too, but that's easy enough to fix.
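A hedged sketch of the alignment guard being described, using the MachineMemOperand interface of this vintage (illustrative, not the in-tree predicate):
'''
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineMemOperand.h"
// Only allow the LDM/STM replacement when every memory operand guarantees at
// least 4-byte alignment on the pointer.
static bool hasWordAlignedMemOperands(const llvm::MachineInstr &MI) {
  if (MI.memoperands_empty())
    return false;
  for (const llvm::MachineMemOperand *MMO : MI.memoperands())
    if (MMO->getAlignment() < 4)
      return false;
  return true;
}
'''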
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281893 91177308-0d34-0410-b5e6-96231b3b80d8
SUBREG_TO_REG is supposed to indicate that the super register has been zeroed, but we can't prove that if we don't know where it came from.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281885 91177308-0d34-0410-b5e6-96231b3b80d8
With D24253 we can now use SelectionDAG::SignBitIsZero with vector operations.
This patch uses SelectionDAG::SignBitIsZero to recognise that a zero sign bit means that we can use a sitofp instead of a uitofp (which is not directly supported on pre-AVX512 hardware).
While AVX512 does provide support for uitofp, the conversion to sitofp should not cause any regressions.
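A rough sketch of the combine's shape under that assumption (not the exact X86 lowering code):
'''
#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;
// If the sign bit of the integer source is known to be zero, the unsigned
// conversion can be emitted as the cheaper signed one.
static SDValue lowerUIntToFP(SelectionDAG &DAG, const SDLoc &DL, SDValue Src,
                             EVT VT) {
  if (DAG.SignBitIsZero(Src))
    return DAG.getNode(ISD::SINT_TO_FP, DL, VT, Src);
  return DAG.getNode(ISD::UINT_TO_FP, DL, VT, Src);
}
'''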
Differential Revision: https://reviews.llvm.org/D24343
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281852 91177308-0d34-0410-b5e6-96231b3b80d8
We were trying, in a convoluted way, to avoid using a FrameIndex operand in
non-pointer operands, and this would break because of the use of
TargetFrameIndex. The TargetFrameIndex should only be used
in the case where it makes sense to fold it as part of the addressing
mode, otherwise it requires materialization like a normal constant.
This wasn't working reliably and failed in the added testcase, hitting
the assert when processing the frame index.
The TargetFrameIndex was coming from trying to produce an AssertZext
limiting the maximum stack size. I'm not sure this was correct to begin
with, because it is apparently possible to have a single workitem
dispatch that requires all 4G of private memory.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281824 91177308-0d34-0410-b5e6-96231b3b80d8
This reduces the number of copies and reg_sequences
when using fp constant vectors. This significantly
reduces the code size in local-stack-alloc-bug.ll
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281822 91177308-0d34-0410-b5e6-96231b3b80d8
These clean up some unnecessary 'or' instructions in
cases with complex loops.
In the original testcase where I noticed this, the same
'or' with exec was repeated 5 or 6 times in a row. With
this change only one is emitted, or sometimes just a copy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281786 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The main challenge in lowering kernel arguments for AMDGPU is determining the
memory type of the argument. The generic calling convention code assumes
that only legal register types can be stored in memory, but this is not the
case for AMDGPU.
This consolidates all the logic AMDGPU uses for deducing memory types into a single
function. This will make it much easier to support different ABIs in the future.
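As an illustration only (the helper name and the widening rule below are assumptions, not the in-tree API), the consolidated logic boils down to asking one place what type an argument occupies in memory:
'''
#include "llvm/CodeGen/ValueTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
// Hypothetical helper: decide the in-memory type of a kernel argument in one
// place, rather than assuming only legal register types are stored in memory.
static llvm::EVT getKernelArgMemType(llvm::Type *ArgTy, llvm::LLVMContext &Ctx) {
  llvm::EVT VT = llvm::EVT::getEVT(ArgTy, /*HandleUnknown=*/true);
  // Assumed rule for illustration: sub-dword integers occupy a full 32 bits.
  if (VT.isSimple() && VT.isInteger() && VT.getSizeInBits() < 32)
    return llvm::EVT::getIntegerVT(Ctx, 32);
  return VT;
}
'''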
Reviewers: arsenm
Subscribers: arsenm, wdng, nhaehnle, llvm-commits, yaxunl
Differential Revision: https://reviews.llvm.org/D24614
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281781 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
mesa3d will use the same kernel calling convention as amdhsa, but it will
handle everything else like the default 'unknown' OS type.
Reviewers: arsenm
Subscribers: arsenm, llvm-commits, kzhuravl
Differential Revision: https://reviews.llvm.org/D22783
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281779 91177308-0d34-0410-b5e6-96231b3b80d8
Recommitting after fixing AsmParser initialization and X86 inline asm
error cleanup.
Allow errors to be deferred and emitted as part of clean up to simplify
and shorten Assembly parser code. This will allow error messages to be
emitted in helper functions and be modified by the caller which has
better context.
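As a rough sketch of the pattern (illustrative types, not the actual AsmParser interface):
'''
#include "llvm/Support/SMLoc.h"
#include "llvm/Support/SourceMgr.h"
#include <string>
// Record an error with its location instead of printing immediately; the
// caller can later refine the message with better context, or emit it as part
// of parser cleanup.
class DeferredErrors {
  llvm::SMLoc Loc;
  std::string Msg;
  bool Pending = false;
public:
  bool error(llvm::SMLoc L, const std::string &M) {
    if (!Pending) { Loc = L; Msg = M; Pending = true; }
    return true; // assumed convention: returning true signals failure
  }
  void emit(llvm::SourceMgr &SM) {
    if (Pending)
      SM.PrintMessage(Loc, llvm::SourceMgr::DK_Error, Msg);
    Pending = false;
  }
};
'''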
As part of this many minor cleanups to the Parser:
* Unify parser cleanup on error
* Add Workaround for incorrect return values in ParseDirective instances
* Tighten checks on error-signifying return values for parser functions
and fix in-tree TargetParsers to be more consistent with the changes.
* Fix AArch64 test cases checking for spurious error messages that are
now fixed.
These changes should be backwards compatible with current Target Parsers
so long as the error statuses are correctly returned in the appropriate
functions.
Reviewers: rnk, majnemer
Subscribers: aemerson, jyknight, llvm-commits
Differential Revision: https://reviews.llvm.org/D24047
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281762 91177308-0d34-0410-b5e6-96231b3b80d8
We used to only support instructions with same-type operands.
Instead, use the per-register type information to map each
operand more accurately.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281734 91177308-0d34-0410-b5e6-96231b3b80d8
When a phi node is finally lowered to a machine instruction it is
important that the lowered "load" instruction is placed before the
associated DEBUG_VALUE entry describing the value loaded.
Renamed the existing SkipPHIsAndLabels to SkipPHIsLabelsAndDebug to
more fully describe that it also skips debug entries. Then used the
"new" SkipPHIsAndLabels function, which does not skip debug information,
when placing the lowered "load" instructions, so that they are
placed before the debug entries.
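A hedged sketch of the placement rule (not the in-tree helper): skip PHIs and labels, but stop at debug entries so the lowered instruction lands before them.
'''
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;
// Find the insertion point after PHIs and labels, without skipping
// DEBUG_VALUEs, so a lowered "load"/copy ends up before the debug entry that
// describes the value it defines.
static MachineBasicBlock::iterator
findPHILoweringInsertPt(MachineBasicBlock &MBB) {
  MachineBasicBlock::iterator I = MBB.begin(), E = MBB.end();
  while (I != E && (I->isPHI() || I->isLabel()))
    ++I;
  return I;
}
'''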
Differential Revision: https://reviews.llvm.org/D23760
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281727 91177308-0d34-0410-b5e6-96231b3b80d8
For compatibility with binutils, define these instructions to take
two registers with a 16-bit unsigned immediate. Both of the registers
have to be the same for dahi and dati.
Reviewers: vkalintiris, dsanders, zoran.jovanovic
Differential Review: https://reviews.llvm.org/D21473
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281724 91177308-0d34-0410-b5e6-96231b3b80d8
This lets generic logic handle the common case, instead of having to
implement applyMappingImpl for each instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281720 91177308-0d34-0410-b5e6-96231b3b80d8
(and the same for SREM)
This was causing buildbot failures earlier (time outs in the LNT suite).
However, we haven't been able to reproduce this and suspect it
was caused by another (reverted) patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281719 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
`TargetLoweringObjectFile` can be re-used and thus `TargetLoweringObjectFile::Initialize()`
can be called multiple times, causing the `Mang` pointer to be leaked.
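A self-contained illustration of the leak pattern (stand-in types, not the in-tree fix):
'''
#include <memory>
struct Mangler {};                        // stand-in for llvm::Mangler
struct ObjectFileLoweringLike {
  std::unique_ptr<Mangler> Mang;
  // Initialize() may run more than once on a reused object; unconditionally
  // doing "Mang = new Mangler()" with a raw pointer leaks the old instance.
  // Owning the pointer (or deleting it first) releases the previous Mangler.
  void Initialize() { Mang.reset(new Mangler()); }
};
'''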
Reviewers: echristo
Subscribers: llvm-commits, mehdi_amini
Differential Revision: https://reviews.llvm.org/D24659
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281718 91177308-0d34-0410-b5e6-96231b3b80d8
If a constant is unnamed_addr and is only used within one function, we can save
on the code size and runtime cost of an indirection by changing the global's storage
to inside the constant pool. For example, instead of:
ldr r0, .CPI0
bl printf
bx lr
.CPI0: &format_string
format_string: .asciz "hello, world!\n"
We can emit:
adr r0, .CPI0
bl printf
bx lr
.CPI0: .asciz "hello, world!\n"
This can cause significant code size savings when many small strings are used in one
function (4 bytes per string).
This recommit contains fixes for a nasty bug related to fast-isel fallback:
because fast-isel doesn't know about this optimization, if it runs and emits
references to a string that we inline (because fast-isel fell back to SDAG),
we will end up with an inlined string and also an out-of-line string, and we
won't emit the out-of-line string, causing backend failures.
It also contains fixes for emitting .text relocations which made the sanitizer
bots unhappy.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281715 91177308-0d34-0410-b5e6-96231b3b80d8
Currently, the machine combiner only proceeds with matching when -ffast-math is on.
It should also match when only -ffp-contract=fast is specified, as was the
case before when DAGCombiner was doing the job.
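A minimal sketch of the intended predicate, in terms of TargetOptions (illustrative, not the exact in-tree check):
'''
#include "llvm/Target/TargetOptions.h"
// Allow the combiner to match either under fast-math or when only
// -ffp-contract=fast was requested, mirroring the old DAGCombiner behaviour.
static bool allowFPCombine(const llvm::TargetOptions &Options) {
  return Options.UnsafeFPMath ||
         Options.AllowFPOpFusion == llvm::FPOpFusion::Fast;
}
'''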
Patch by: Abderrazek Zaafrani <a.zaafrani@samsung.com>.
Differential Revision: https://reviews.llvm.org/D24366
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281649 91177308-0d34-0410-b5e6-96231b3b80d8
Unfortunately we can't enable it for all N64 because it is not yet possible to
distinguish N32 from N64.
N64 has been confirmed to produce identical (within reason) objects to GAS
during stage 2 of compiler recursion on N64-ABI Fedora. Unfortunately,
Fedora's triples do not distinguish N32 from N64 so I can't enable it by
default there. I'm currently repeating this testing for Debian mips64el but
it's very unlikely to produce a different result.
Patch by: Daniel Sanders
Reviewers: sdardis
Differential Review: https://reviews.llvm.org/D22678
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281607 91177308-0d34-0410-b5e6-96231b3b80d8
If a constant is unnamed_addr and is only used within one function, we can save
on the code size and runtime cost of an indirection by changing the global's storage
to inside the constant pool. For example, instead of:
ldr r0, .CPI0
bl printf
bx lr
.CPI0: &format_string
format_string: .asciz "hello, world!\n"
We can emit:
adr r0, .CPI0
bl printf
bx lr
.CPI0: .asciz "hello, world!\n"
This can cause significant code size savings when many small strings are used in one
function (4 bytes per string).
This recommit contains fixes for a nasty bug related to fast-isel fallback:
because fast-isel doesn't know about this optimization, if it runs and emits
references to a string that we inline (because fast-isel fell back to SDAG),
we will end up with an inlined string and also an out-of-line string, and we
won't emit the out-of-line string, causing backend failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281604 91177308-0d34-0410-b5e6-96231b3b80d8
It was only really there as a sentinel when instructions had to have precisely
one type. Now that registers are typed, each register really has to have a type
that is sized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281599 91177308-0d34-0410-b5e6-96231b3b80d8