Adds support for bitcasting a little-endian 'small element' vector to a 'large element' scalar/vector (e.g. v16i8 to v4i32 or v2i32 to i64), which is required for PR30845. We extract the known bits for each 'small element' part and concatenate the results together.
We can add support for big-endian targets and for 'large element' scalar/vector to 'small element' vector bitcasts once we have test cases for them.
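As a rough sketch of the concatenation step for the scalar case (e.g. v2i32 to i64), with hypothetical helper and variable names rather than the actual SelectionDAG code:

  // Concatenate the known bits of NumElts little-endian 'small' elements
  // into one 'large' result; computeKnownBitsForElt is an assumed helper.
  void concatKnownBits(unsigned NumElts, unsigned SmallBits,
                       llvm::APInt &KnownZero, llvm::APInt &KnownOne) {
    unsigned BigBits = NumElts * SmallBits;
    KnownZero = llvm::APInt(BigBits, 0);
    KnownOne = llvm::APInt(BigBits, 0);
    for (unsigned i = 0; i != NumElts; ++i) {
      llvm::APInt SubZero(SmallBits, 0), SubOne(SmallBits, 0);
      computeKnownBitsForElt(i, SubZero, SubOne);  // per-element known bits
      // Element i lands at bit offset i * SmallBits in the wide value.
      KnownZero |= SubZero.zext(BigBits).shl(i * SmallBits);
      KnownOne |= SubOne.zext(BigBits).shl(i * SmallBits);
    }
  }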
Differential Revision: https://reviews.llvm.org/D27129
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289200 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r288916 as it is currently causing a crash in
Halide. Reproducer at llvm.org/PR31323. While it might be that Halide is
generating invalid IR, llc shouldn't crash.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289194 91177308-0d34-0410-b5e6-96231b3b80d8
sse_load_f32/f64 can also match loads that are zero-extended to vectors. We shouldn't match those, because we wouldn't be able to get the instruction to zero the upper bits as the intrinsic semantics would require in that case.
There is a test case that does depend on this behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289193 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we could select an integer that would hit an assertion failure
during pseudo expansion.
The new type will also generate the appropriate fixups when needed, which
wasn't done before.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289192 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Scalar intrinsics have specific semantics about which input's upper bits are passed through to the output. The same input is also supposed to be the one used for the lower element when the mask bit is 0 in a masked operation. We aren't currently preserving these semantics in instruction selection.
This patch corrects this by introducing new scalar FMA ISD nodes that indicate whether operand 1 (one of the multiply inputs) or operand 3 (the addition/subtraction input) should pass through its upper bits.
We use this information to select the 213/132 forms for the operand 1 version and the 231 form for the operand 3 version.
We also use this information to suppress combining FNEG operations on the pass-through input, since semantically the pass-through bits aren't negated. This is stronger than the earlier check added for a SELECTS user, so we can remove that check.
This fixes PR30913.
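As a rough model of the two pass-through behaviors (plain C++ standing in for the vector types; not the actual intrinsics or ISD nodes):

  struct V4 { float e[4]; };

  // Operand 1 supplies the upper elements -> select the 213/132 forms.
  V4 fma_passthru_op1(V4 a, V4 b, V4 c) {
    V4 r = a;                           // upper elements come from operand 1
    r.e[0] = a.e[0] * b.e[0] + c.e[0];  // only the low element is computed
    return r;
  }

  // Operand 3 supplies the upper elements -> select the 231 form.
  V4 fma_passthru_op3(V4 a, V4 b, V4 c) {
    V4 r = c;                           // upper elements come from operand 3
    r.e[0] = a.e[0] * b.e[0] + c.e[0];
    return r;
  }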
Reviewers: delena, zvi, v_klochkov
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D27144
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289190 91177308-0d34-0410-b5e6-96231b3b80d8
These were being selected directly to the VOP2 form instead
of VOP3, unlike the i32 instructions. This fixes regressions in
future commits where an immediate isn't folded because it was
initially used for the second operand.
Because uniform 16-bit operations are promoted to i32, it's
difficult to get a simple testcase where this matters. Fold
failures in SIFoldOperands here tend to be hidden by the commute
and fold done in SIShrinkInstructions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289189 91177308-0d34-0410-b5e6-96231b3b80d8
The motivating example is:
extern int patatino;
int goo() {
  int x = 0;
  for (int i = 0; i < 1000000; ++i) {
    x *= patatino;
  }
  return x;
}
Currently SCCP does not realize that this function always returns zero
(x starts at zero, so the multiplication can never change it), and will
therefore try to unroll and vectorize the loop at -O3, producing an awful
lot of (useless) code. With this change, it will just produce:
0000000000000000 <goo>:
  xor %eax,%eax
  retq
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289175 91177308-0d34-0410-b5e6-96231b3b80d8
This just hoists the check for declarations up a layer, which allows
various sets used in the walk to be smaller. It also moves the relevant
comments to match, and includes a few other cleanups in this code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289163 91177308-0d34-0410-b5e6-96231b3b80d8
Supporting them properly is a reasonably complex chunk of work, so to allow bot
testing before then we should at least be able to fall back to DAG ISel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289150 91177308-0d34-0410-b5e6-96231b3b80d8
Currently SCCP folds the value to -1, while ConstantProp folds it to
0. This changes SCCP to do what ConstantFolding does.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289147 91177308-0d34-0410-b5e6-96231b3b80d8
Before this change, we were falsely claiming that we had an LSDA for the
relevant EH personality, which could lead to the EH machinery
interpreting random adjacent data as an LSDA.
Fixes PR31317.
This change is safe because cleanups can't contain exception handlers
today. We do these things to maintain that invariant:
- C++ destructors are naturally out-of-line
- __finally blocks are outlined in clang
- LLVM's inliner will not inline EH constructs into cleanups
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289101 91177308-0d34-0410-b5e6-96231b3b80d8
This was exposed by some code that used more than one level of sub-
registers. There is no testcase, because there is no such code in the
Hexagon backend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289099 91177308-0d34-0410-b5e6-96231b3b80d8
Not having this legal led to combine failures, resulting
in dumb things like bitcasts of constants not being folded
away.
The only reason I'm leaving the v_mov_b32 hack that f32
already uses is to avoid madak formation test regressions.
PeepholeOptimizer has an ordering issue where the immediate
fold is attempted into the sgpr->vgpr copy instead of the actual
use. Running it twice avoids that problem.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289096 91177308-0d34-0410-b5e6-96231b3b80d8
The commutable opcode was incorrectly set to the instruction itself, so
commuting simply swapped the operands instead of also
changing the opcode to v_subrev_u16.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289093 91177308-0d34-0410-b5e6-96231b3b80d8
Multiple metadata values for records such as opencl.ocl.version, llvm.ident
and similar are created after linking several modules. For some of them, notably
opencl.ocl.version, this creates a semantic problem because we cannot tell which
version of OpenCL the composite module conforms to.
Moreover, such repetitions of identical values often create a huge list of
unneeded metadata, which grows bitcode size both in memory and on disk.
It can amount to several MB when linked against our OpenCL library. Lastly, such
long lists make dumped IR harder to read.
This pass unifies such metadata after linking.
Differential Revision: https://reviews.llvm.org/D25381
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289092 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Attaching !absolute_symbol to a global variable does two things:
1) Marks it as an absolute symbol reference.
2) Specifies the value range of that symbol's address.
Teach the X86 backend to allow absolute symbols to appear in place of
immediates by extending the relocImm and mov64imm32 matchers. Start using
relocImm in more places where it is legal.
As previously proposed on llvm-dev:
http://lists.llvm.org/pipermail/llvm-dev/2016-October/105800.html
Differential Revision: https://reviews.llvm.org/D25878
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289087 91177308-0d34-0410-b5e6-96231b3b80d8
Since all the DWARF classes are in a DWARFYAML namespace, having every class start with DWARF seems like a bit of overkill.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289080 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
LC can currently select a scalar load for a uniform memory access
based only on the read-only memory address space. This restriction
originates from the fact that, in hardware prior to VI, the vector and
scalar caches are not coherent. With MemoryDependenceAnalysis we can check
that the memory location corresponding to the memory operand of the LOAD
is not clobbered along any path from the function entry.
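As a minimal sketch of the idea, with hypothetical helpers rather than the in-tree code:

  // A uniform load may be selected as a scalar load if its location cannot
  // be clobbered on any path from the function entry to the load.
  bool isUniformAccess(const llvm::LoadInst &LI);       // assumed uniformity query
  bool isClobberedFromEntry(const llvm::LoadInst &LI);  // assumed MemDep-based walk

  bool canSelectScalarLoad(const llvm::LoadInst &LI) {
    return isUniformAccess(LI) && !isClobberedFromEntry(LI);
  }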
Reviewers: rampitec, tstellarAMD, arsenm
Subscribers: wdng, arsenm, nhaehnle
Differential Revision: https://reviews.llvm.org/D26917
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289076 91177308-0d34-0410-b5e6-96231b3b80d8
ConstantFolding tried to cast one of the scalar indices to a vector
type. Instead, use the vector type only for the first index (which
is the only one allowed to be a vector) and use its scalar type
otherwise.
Fixes PR31250.
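A hedged sketch of the intended index handling (hypothetical variable names, not the actual ConstantFolding code):

  // Only the first index may be a vector; cast later indices to the scalar
  // pointer-sized integer type instead.
  llvm::Type *IntPtrTy = DL.getIntPtrType(Ptr->getType()->getScalarType());
  llvm::SmallVector<llvm::Constant *, 8> NewIdxs;
  for (unsigned i = 0, e = Idxs.size(); i != e; ++i) {
    llvm::Type *DstTy = IntPtrTy;
    if (i == 0 && ResultTy->isVectorTy())
      DstTy = llvm::VectorType::get(IntPtrTy, ResultTy->getVectorNumElements());
    NewIdxs.push_back(llvm::ConstantExpr::getIntegerCast(Idxs[i], DstTy,
                                                          /*isSigned=*/true));
  }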
Reviewers: majnemer
Differential Revision: https://reviews.llvm.org/D27389
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289073 91177308-0d34-0410-b5e6-96231b3b80d8
The dwarfgen::Generator::StringPool was held in a unique_ptr, but it was also owned by the Allocator member variable, so it was being freed twice.
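The bug pattern, as a minimal hypothetical sketch (made-up names, not the dwarfgen code): the same object ends up with two owners, and both try to release it.

  #include <memory>

  struct StringPool { /* ... */ };

  struct Generator {
    StringPool Pool;                     // owned as a member
    std::unique_ptr<StringPool> Extra;   // also claims ownership of the same object
    Generator() : Extra(&Pool) {}        // ~Generator: Extra deletes a non-heap
                                         // member -> the object is released twice
  };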
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289070 91177308-0d34-0410-b5e6-96231b3b80d8