This enables some early outs that avoid repeatedly using the IsX86 check to qualify names. I hope to continue improving this to shorten some of the string comparisons.
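Roughly, the kind of early out this refers to looks like the sketch below (the helper name and the specific prefixes are illustrative, not the exact AutoUpgrade.cpp code): consume the common "x86." prefix once, so later checks bail out early for non-x86 names and compare only the shorter suffix.

  #include "llvm/ADT/StringRef.h"
  using namespace llvm;

  // Illustrative sketch only: strip the shared "x86." prefix up front so the
  // remaining comparisons match shorter suffixes, and return early for
  // anything that is not an x86 intrinsic name.
  static bool isUpgradableX86Name(StringRef Name) {
    if (!Name.startswith("x86."))      // early out for non-x86 names
      return false;
    Name = Name.substr(4);             // later checks see only the suffix
    return Name.startswith("sse2.pcmpeq.") ||
           Name.startswith("avx2.vbroadcast");
  }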
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295424 91177308-0d34-0410-b5e6-96231b3b80d8
The new 512-bit unmasked intrinsics will make it easy to handle these alongside the SSE/AVX intrinsics in InstCombine, where we currently have a TODO.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295290 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes PR31921
Summary:
PredicateInfo requires an ugly workaround to try to avoid literal
struct types, because intrinsic mangling is not implemented for them.
This workaround does not actually work in all cases (you can hit the
assert by bootstrapping with -print-predicateinfo), and can't be made
to work without DFS'ing the type (i.e., copying getMangledStr and using
a version that detects when it would crash).
Rather than do that, I just implemented the mangling. It seems
simple, since literal structs are uniqued structurally.
Looking at the overloaded-mangling testcase we have, it turns out the
gc intrinsics will *also* crash if you try to use a literal struct.
Thus, the added testcase fails before this patch and passes after it,
without needing to resort to PredicateInfo.
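To give a sense of what the mangling enables, here is a small sketch
(illustrative; the mangled spelling in the comment is approximate):
requesting an intrinsic declaration overloaded on a literal struct now
yields a structurally mangled name instead of asserting.

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Overload llvm.ssa.copy on a literal struct. Before this patch the
  // mangling code could not handle literal structs; with it, the type is
  // mangled structurally (roughly "llvm.ssa.copy.sl_i32i8s").
  Function *declareStructSSACopy(Module &M) {
    LLVMContext &Ctx = M.getContext();
    StructType *Lit =
        StructType::get(Ctx, {Type::getInt32Ty(Ctx), Type::getInt8Ty(Ctx)});
    return Intrinsic::getDeclaration(&M, Intrinsic::ssa_copy, {Lit});
  }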
Reviewers: chandlerc, davide
Subscribers: llvm-commits, sanjoy
Differential Revision: https://reviews.llvm.org/D29925
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295253 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Specifically, we upgrade the following llvm.nvvm. intrinsics:
* brev{32,64}
* clz.{i,ll}
* popc.{i,ll}
* abs.{i,ll}
* {min,max}.{i,ll,u,ull}
* h2f
These either map directly to an existing LLVM target-generic
intrinsic or map to a simple LLVM target-generic idiom.
In all cases, we check that the code we generate is lowered to PTX as we
expect.
These builtins don't need to be backfilled in clang: They're not
accessible to user code from nvcc.
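Roughly, the direct-mapping case looks like the sketch below (not the
verbatim AutoUpgrade code; the helper name is made up). PTX's clz is
defined for a zero input, so is_zero_undef is passed as false.

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  using namespace llvm;

  // Sketch: rewrite a call to llvm.nvvm.clz.i as the target-generic
  // llvm.ctlz.i32. The second ctlz operand (is_zero_undef) is false because
  // the NVVM builtin is well-defined when the input is zero.
  void upgradeNVVMClz(CallInst *CI, IRBuilder<> &Builder) {
    Module *M = CI->getModule();
    Function *Ctlz =
        Intrinsic::getDeclaration(M, Intrinsic::ctlz, {CI->getType()});
    Value *Rep =
        Builder.CreateCall(Ctlz, {CI->getArgOperand(0), Builder.getFalse()});
    CI->replaceAllUsesWith(Rep);
    CI->eraseFromParent();
  }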
Reviewers: tra
Subscribers: majnemer, cfe-commits, llvm-commits, jholewinski
Differential Revision: https://reviews.llvm.org/D28793
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@292694 91177308-0d34-0410-b5e6-96231b3b80d8
The same thing was done for 32-bit and 64-bit element sizes previously.
This will allow us to support these shuffles in InstCombineCalls along with the other variable shift intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287312 91177308-0d34-0410-b5e6-96231b3b80d8
Both the (V)CVTDQ2PD (i32 to f64) and (V)CVTUDQ2PD (u32 to f64) conversion instructions are lossless (every 32-bit integer value is exactly representable in a double) and can be safely represented as generic SINT_TO_FP/UINT_TO_FP nodes instead of x86 intrinsics without affecting final codegen.
LLVM counterpart to D26686
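As a minimal illustration of the equivalence (an IRBuilder sketch of the
256-bit form, <4 x i32> to <4 x double>; the names are illustrative and
this is not the code the patch touches):

  #include "llvm/IR/DerivedTypes.h"
  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // The i32 -> f64 widening is exact (a double's 53-bit significand holds
  // any 32-bit integer), so the conversion is just a plain int-to-FP cast.
  Value *cvtdq2pdLike(IRBuilder<> &Builder, Value *V4I32, bool IsSigned) {
    Type *V4F64 = VectorType::get(Builder.getDoubleTy(), 4);
    return IsSigned ? Builder.CreateSIToFP(V4I32, V4F64)   // (v)cvtdq2pd
                    : Builder.CreateUIToFP(V4I32, V4F64);  // (v)cvtudq2pd
  }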
Differential Revision: https://reviews.llvm.org/D26736
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287108 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: These intrinsics have been unused by clang for a while. This patch removes them. We auto-upgrade them to extractelements, a scalar operation, and then an insertelement. This matches the sequence used by clang's intrinsic file.
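For reference, a minimal IRBuilder sketch of that replacement sequence,
with fadd standing in for the scalar operation (the helper name is
illustrative):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // Extract element 0 of each vector operand, perform the scalar op, and
  // insert the result back into the first operand.
  Value *upgradeScalarFAdd(IRBuilder<> &Builder, Value *Op0, Value *Op1) {
    Value *A = Builder.CreateExtractElement(Op0, (uint64_t)0);
    Value *B = Builder.CreateExtractElement(Op1, (uint64_t)0);
    Value *R = Builder.CreateFAdd(A, B);
    return Builder.CreateInsertElement(Op0, R, (uint64_t)0);
  }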
Reviewers: zvi, delena, RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D26660
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287083 91177308-0d34-0410-b5e6-96231b3b80d8
After this I'll add the unmasked intrinsics to InstCombineCalls to finish making our handling of these types of shuffles consistent between AVX-512 and the legacy intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286725 91177308-0d34-0410-b5e6-96231b3b80d8
This mostly reuses the earlier autoupgrade support for the SSE and AVX equivalents; I just needed to add the code that emits the select.
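The masking step amounts to a lane-wise select between the unmasked
result and the passthru operand, roughly as sketched below (assuming the
scalar mask operand has already been converted to a vector of i1, as the
existing autoupgrade helpers do; the function name is made up).

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // MaskVec is the <N x i1> form of the mask. The masked result keeps the
  // unmasked lanes where the mask is set and the passthru lanes elsewhere.
  Value *emitMaskSelect(IRBuilder<> &Builder, Value *MaskVec,
                        Value *UnmaskedRes, Value *Passthru) {
    return Builder.CreateSelect(MaskVec, UnmaskedRes, Passthru);
  }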
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286092 91177308-0d34-0410-b5e6-96231b3b80d8
It currently fires an assert if you even try. Looking back, I don't think it ever worked because it only changed the name of the function object, but not the intrinsic ID stored in it. Given that, I think it can be removed since no one has noticed or complained in the past 4 years.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@286031 91177308-0d34-0410-b5e6-96231b3b80d8
If the llvm. prefix is dropped, other parts of LLVM don't see this as
an intrinsic. This means that the number of regular symbols depends
on the context the module is loaded into, which causes LTO to abort.
Fixes PR30509.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@283117 91177308-0d34-0410-b5e6-96231b3b80d8
Previously we were issuing an error when linking a module containing
the new Objective-C metadata structure for class properties with an
"old" one.
Now instead we downgrade the module flag so that the Objective-C
runtime does not expect the new metadata structure.
This is consistent with what ld64 is doing on binary files.
Differential Revision: https://reviews.llvm.org/D24620
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281685 91177308-0d34-0410-b5e6-96231b3b80d8
If TBAA is attached to an intrinsic call and the intrinsic gets
upgraded, the upgrade deletes the call instruction that we collected in
a vector. Even if we were to use a WeakVH, the TBAA would be dropped and
we'd hit the assert on the upgrade path.
r263673 took a shot at making sure the TBAA upgrade happens before the
intrinsic upgrade, but failed to account for all cases.
Instead of collecting instructions in a vector, this patch makes it
just upgrade the TBAA on the fly, because the metadata is always
already loaded at this point.
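A sketch of what "on the fly" means here (illustrative, not the exact
BitcodeReader change):

  #include "llvm/IR/AutoUpgrade.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // When !tbaa metadata is attached to an instruction, upgrade the node
  // right away instead of queueing the instruction for a later pass.
  void attachUpgradedTBAA(Instruction *I, MDNode *TBAA) {
    I->setMetadata(LLVMContext::MD_tbaa, UpgradeTBAANode(*TBAA));
  }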
Differential Revision: https://reviews.llvm.org/D24533
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281549 91177308-0d34-0410-b5e6-96231b3b80d8
a sufficiently low alignment for the IR load created.
There is no test case because we don't have any test cases for the *IR*
produced by the autoupgrade, only for the x86 assembly, and the x86
assembly for this intrinsic, as it is exercised on the autoupgrade path,
happens not to produce a separate load instruction where we might have
observed the alignment.
I'm going to follow up on the original commit to suggest adding IR-level
testing in addition to the asm-level testing here so that we can see and
test these kinds of issues. We might never get an x86 instruction out
with an alignment constraint, but we could still miscompile code by
folding against the alignment marked on (or, in this case, inferred for)
the load.
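The intent of the fix, sketched with IRBuilder (illustrative only; the
helper name and the surrounding autoupgrade code are not shown):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  // When an unaligned-load style intrinsic is rewritten as a plain IR load,
  // the load must carry an explicitly low alignment so later folds cannot
  // assume the natural alignment of the loaded type.
  Value *emitUnalignedLoad(IRBuilder<> &Builder, Value *Ptr) {
    return Builder.CreateAlignedLoad(Ptr, /*Align=*/1);
  }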
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278203 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The llvm.invariant.start and llvm.invariant.end intrinsics currently
support specifying invariant memory objects only in the default address
space.
With this change, these intrinsics are overloaded for memory objects in
any address space, and we can use the llvm invariant intrinsics in
non-default address spaces.
Example: llvm.invariant.start.p1i8(i64 4, i8 addrspace(1)* %ptr)
This overloaded intrinsic is needed for representing final or invariant
memory in managed languages.
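For illustration, requesting the declaration from C++ with the overload
in place produces the mangled name shown in the example above (sketch;
the helper name is made up):

  #include "llvm/IR/Intrinsics.h"
  #include "llvm/IR/Module.h"
  #include "llvm/IR/Type.h"
  using namespace llvm;

  // Overload llvm.invariant.start on an i8 pointer in address space 1; the
  // resulting declaration is llvm.invariant.start.p1i8.
  Function *declareInvariantStartAS1(Module &M) {
    Type *I8PtrAS1 = Type::getInt8PtrTy(M.getContext(), /*AddressSpace=*/1);
    return Intrinsic::getDeclaration(&M, Intrinsic::invariant_start,
                                     {I8PtrAS1});
  }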
Reviewers: apilipenko, reames
Subscribers: llvm-commits
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276447 91177308-0d34-0410-b5e6-96231b3b80d8