The AVX2 v16i16 shift lowering works by unpacking to 2 x v8i32, performing the shift, and then truncating the result.
The unpacking places the values in the upper 16 bits so that we can correctly sign-extend for SRA shifts. Unfortunately, we weren't ensuring that the lower 16 bits were zero, which SHL needs in order to correctly shift in zero bits.
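A scalar model of one lane makes the constraint clear (a sketch, not the actual DAG code; the helper names are ours):

#include <cstdint>
#include <cstring>

// One v16i16 lane, unpacked into the *upper* half of a 32-bit lane so a
// 32-bit arithmetic shift sees the correct sign bit.
static uint16_t sraLane(uint16_t v, unsigned amt) {
  uint32_t bits = uint32_t(v) << 16;            // lower 16 bits are zero
  int32_t wide;
  std::memcpy(&wide, &bits, sizeof wide);       // reinterpret as signed
  return uint16_t(uint32_t(wide >> amt) >> 16); // shift, truncate high half
}

// For SHL the zeroed lower half is exactly what this fix guarantees: a
// left shift pulls those bits up into the result, so they must be zero.
static uint16_t shlLane(uint16_t v, unsigned amt) {
  uint32_t wide = uint32_t(v) << 16;
  return uint16_t((wide << amt) >> 16);
}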
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271796 91177308-0d34-0410-b5e6-96231b3b80d8
C++ has a builtin type called wchar_t. In C mode, Clang also provides a
type called __wchar_t, and wchar_t itself can be a typedef for unsigned
short.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271793 91177308-0d34-0410-b5e6-96231b3b80d8
Add the MMX implementation to the SimplifyDemandedUseBits SSE/AVX MOVMSK support added in D19614.
This requires a minor tweak, as llvm.x86.mmx.pmovmskb takes an x86_mmx argument, so we have to be explicit about the implied v8i8 vector type.
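For reference, a scalar model of the implied-v8i8 semantics (a sketch; the real handling lives in SimplifyDemandedUseBits):

#include <cstdint>

// llvm.x86.mmx.pmovmskb packs the sign bit of each of the 8 implied v8i8
// lanes into bits [7:0] of an i32, so bits [31:8] are always known zero --
// the fact the demanded-bits simplification exploits.
static uint32_t pmovmskbModel(uint64_t mmx) {
  uint32_t mask = 0;
  for (int i = 0; i < 8; ++i)
    mask |= uint32_t((mmx >> (8 * i + 7)) & 1) << i; // sign bit of byte i
  return mask; // only the low 8 bits can be set
}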
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271789 91177308-0d34-0410-b5e6-96231b3b80d8
There appears to be a strange exception thrown and a crash when using
call_once on a PPC build bot, and a *really* weird Windows link error for
GCMetadata.obj. Still need to investigate the cause of both problems.
Original change summary:
[LPM] Reinstate r271652 to replace the CALL_ONCE_... macro in the legacy
pass manager with the new llvm::call_once facility.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271788 91177308-0d34-0410-b5e6-96231b3b80d8
[LPM] Reinstate r271652 to replace the CALL_ONCE_... macro in the legacy
pass manager with the new llvm::call_once facility.
This reverts commit r271657 and re-applies r271652 with a fix to
actually work with arguments. In the original version, we just ended up
directly calling std::call_once via ADL because of the std::once_flag
argument. The llvm::call_once never worked with arguments. Now,
llvm::call_once is a variadic template that perfectly forwards
everything. As part of this, it had to move to the header, and we use
a generic functor rather than an explicit function pointer. It would be
nice to use std::invoke here, but we don't have it yet. That means
pointers to members won't work here, but that seems a tolerable
compromise.
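The shape of the facility is roughly this (a simplified sketch of the
std::call_once-backed path described above, not the exact LLVM code):

#include <mutex>
#include <utility>

namespace llvm {
using once_flag = std::once_flag;

// A variadic template that perfectly forwards the callable and its
// arguments; a generic functor is accepted rather than a plain function
// pointer (std::invoke would additionally cover pointers to members).
template <typename Function, typename... Args>
void call_once(once_flag &flag, Function &&F, Args &&...ArgList) {
  std::call_once(flag, std::forward<Function>(F),
                 std::forward<Args>(ArgList)...);
}
} // namespace llvm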
I've also tested this by forcing the fallback path, so hopefully it
sticks this time.
Original commit message:
----
[LPM] Replace the CALL_ONCE_... macro in the legacy pass manager with
the new llvm::call_once facility.
This facility matches the standard APIs and, when the platform supports
it, directly uses the standard-provided functionality. This is both more
efficient on some platforms and much more TSan-friendly.
The only remaining user of the cas_flag and home-rolled atomics is the
fallback implementation of call_once. I have a patch that removes them
entirely, but it needs a Windows patch to land first.
This alone substantially cleans up the macros for the legacy pass
manager, and should subsume some of the work Mehdi was doing to clear
the path for TSan testing of ThinLTO, a really important step to have
reliable upstream testing of ThinLTO in all forms.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271781 91177308-0d34-0410-b5e6-96231b3b80d8
The truncateToSize function already has an assertion to check the
lower boundary for the number of bytes, but it does not check the
upper boundary, which could still lead to usage errors.
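Something along these lines (a sketch with an assumed 8-byte upper bound;
the actual signature and bound may differ):

#include <cassert>
#include <cstdint>

static uint64_t truncateToSize(uint64_t Value, unsigned Bytes) {
  assert(Bytes >= 1 && "lower boundary, already checked before this change");
  assert(Bytes <= 8 && "upper boundary, the check this change adds");
  return Bytes == 8 ? Value : Value & ((1ULL << (Bytes * 8)) - 1);
}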
Differential Revision: http://reviews.llvm.org/D20755
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271773 91177308-0d34-0410-b5e6-96231b3b80d8
This is currently used by clang to lock access to modules; improve the
error message so that clang can emit better diagnostics when locking
fails.
rdar://problem/26529101
Differential Revision: http://reviews.llvm.org/D20942
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271755 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Adds an option -esan-assume-intra-cache-line which causes esan to assume
that a single memory access touches just one cache line, even if it is not
aligned, for better performance at a potential accuracy cost. Experiments
show that the performance difference can be 2x or more, and accuracy loss
is typically negligible, so we turn this on by default. This currently
applies just to the working set tool.
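The trade-off can be modeled like this (a hypothetical helper; esan's real
shadow mapping differs):

#include <cstdint>

constexpr uint64_t CacheLineSize = 64;

// With -esan-assume-intra-cache-line, an access is attributed to a single
// cache line even if it straddles a boundary, saving a second shadow update.
static unsigned linesTouched(uint64_t Addr, uint64_t Size,
                             bool AssumeIntraCacheLine) {
  if (AssumeIntraCacheLine)
    return 1; // one shadow update: faster, slightly less accurate
  uint64_t First = Addr / CacheLineSize;
  uint64_t Last = (Addr + Size - 1) / CacheLineSize;
  return unsigned(Last - First + 1);
}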
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20978
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271743 91177308-0d34-0410-b5e6-96231b3b80d8
My first attempt at this had an overly aggressive assert: chain nodes
will only ever be removed, but we could still hit the assert if a
non-chain node was CSE'd (NodeToMatch, for instance).
This reapplies r271706 by reverting r271713 and fixing an assert.
Original message:
Avoid relying on UB by looking into deleted nodes for a marker value.
Instead, update the list of chain nodes as we go.
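The pattern, in sketch form (hypothetical types; the real code hooks
SelectionDAG's node-deletion listener):

#include <algorithm>
#include <vector>

struct Node { unsigned Opcode; };

// Instead of later inspecting possibly-deleted nodes for a DELETED_NODE
// marker (UB once the allocation is reused), keep the list in sync as
// nodes are deleted.
struct ChainNodeTracker {
  std::vector<Node *> ChainNodes;

  void nodeDeleted(Node *N) { // called from the deletion listener
    ChainNodes.erase(std::remove(ChainNodes.begin(), ChainNodes.end(), N),
                     ChainNodes.end());
  }
};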
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271733 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Previously we would try to load PDBs for every PE executable we tried to
symbolize. If that failed, we would fall back to DWARF. If there wasn't
any DWARF, we'd print mostly useless symbol information using the export
table.
With this change, we only try to load PDBs for executables that claim to
have them. If that fails, we can now print an error rather than falling
back silently. This should make it a lot easier to diagnose and fix
common symbolization issues, such as not having DIA or not having a PDB.
Reviewers: zturner, eugenis
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D20982
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271725 91177308-0d34-0410-b5e6-96231b3b80d8
This is very similar to r271677, but for extracts from i32 with the SIGN_EXTEND
acting on an arithmetic shift.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271717 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Adds a global variable that specifies the tool, to support early
interceptors that invoke instrumented code and require shadow memory to
be initialized before __esan_init() is invoked.
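The intended pattern is roughly this (assumed names, stubbed so the sketch
is self-contained; the real globals and hooks live in the esan runtime):

#include <atomic>

// In reality the instrumentation defines and sets this before anything runs.
extern "C" int __esan_which_tool = 1;

static void initializeShadowForTool(int Tool) {
  (void)Tool; // ... map shadow memory appropriate for the tool ...
}

static std::atomic<bool> ShadowInitialized{false};

// Any early interceptor can call this before __esan_init() has run.
static void ensureShadowInitialized() {
  bool Expected = false;
  if (ShadowInitialized.compare_exchange_strong(Expected, true))
    initializeShadowForTool(__esan_which_tool);
}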
Reviewers: aizatsky
Subscribers: vitalybuka, zhaoqin, kcc, eugenis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20973
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271715 91177308-0d34-0410-b5e6-96231b3b80d8
Avoid relying on UB by looking into deleted nodes for a marker value.
Instead, update the list of chain nodes as we go.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271706 91177308-0d34-0410-b5e6-96231b3b80d8
Under emscripten, C code can take the address of a function implemented
in JavaScript (which is exposed via an import in wasm). Because imports
do not have linear memory addresses in wasm, we need to generate a thunk
to be the target of the indirect call; it calls the import directly.
To make this possible, LLVM needs to emit the type signatures for these
functions, because they may never be called directly or referenced
anywhere other than where the address is taken.
This uses a new .s directive (.functype) which specifies the signature.
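For illustration (hypothetical import name), the C-level scenario that
forces the thunk, and hence the signature:

// A function implemented in JavaScript and exposed as a wasm import.
extern "C" int js_callback(int);

// Taking the address forces a thunk with a linear-memory address, whose
// body simply calls the import; .functype supplies the signature needed
// to emit it.
extern "C" int (*get_handler(void))(int) {
  return &js_callback;
}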
Differential Revision: http://reviews.llvm.org/D20891
Re-apply r271599 but instead of bailing with an error when a declared
function has multiple returns, replace it with a pointer argument. Also
add the test case I forgot to 'git add' last time around.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271703 91177308-0d34-0410-b5e6-96231b3b80d8
We were assuming all SBFX-like operations would have the shl/asr form, but often
when the field being extracted is an i8 or i16, we end up with a
SIGN_EXTEND_INREG acting on a shift instead.
This is a port of r213754 from ARM to AArch64.
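Scalar models of the two equivalent forms (a sketch; Lsb + Width <= 32
assumed):

#include <cstdint>

// The shl/asr form we were matching for a signed bitfield extract.
static int32_t sbfxShlAsr(uint32_t v, unsigned Lsb, unsigned Width) {
  return int32_t(v << (32 - Lsb - Width)) >> (32 - Width);
}

// The form the DAG often produces for i8/i16-sized fields instead:
// SIGN_EXTEND_INREG applied to a single shift (Width == 8 here).
static int32_t sbfxSextInreg(uint32_t v, unsigned Lsb) {
  return int8_t(v >> Lsb);
}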
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271677 91177308-0d34-0410-b5e6-96231b3b80d8
There was concern that creating bitcasts for the simpler potential select pattern:
define <2 x i64> @vecBitcastOp1(<4 x i1> %cmp, <2 x i64> %a) {
%a2 = add <2 x i64> %a, %a
%sext = sext <4 x i1> %cmp to <4 x i32>
%bc = bitcast <4 x i32> %sext to <2 x i64>
%and = and <2 x i64> %a2, %bc
ret <2 x i64> %and
}
might lead to worse code for some targets, so this patch matches the larger
patterns seen in the test cases.
The motivating example for this patch is this IR produced via SSE intrinsics in C:
define <2 x i64> @gibson(<2 x i64> %a, <2 x i64> %b) {
%t0 = bitcast <2 x i64> %a to <4 x i32>
%t1 = bitcast <2 x i64> %b to <4 x i32>
%cmp = icmp sgt <4 x i32> %t0, %t1
%sext = sext <4 x i1> %cmp to <4 x i32>
%t2 = bitcast <4 x i32> %sext to <2 x i64>
%and = and <2 x i64> %t2, %a
%neg = xor <4 x i32> %sext, <i32 -1, i32 -1, i32 -1, i32 -1>
%neg2 = bitcast <4 x i32> %neg to <2 x i64>
%and2 = and <2 x i64> %neg2, %b
%or = or <2 x i64> %and, %and2
ret <2 x i64> %or
}
For an AVX target, this is currently:
vpcmpgtd %xmm1, %xmm0, %xmm2
vpand %xmm0, %xmm2, %xmm0
vpandn %xmm1, %xmm2, %xmm1
vpor %xmm1, %xmm0, %xmm0
retq
With this patch, it becomes:
vpmaxsd %xmm1, %xmm0, %xmm0
Differential Revision: http://reviews.llvm.org/D20774
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271676 91177308-0d34-0410-b5e6-96231b3b80d8
forces having special checks in ArmInstPrinter::printInstruction. This
patch addresses this issue.
Not all special checks could be removed: either they involve elaborate
conditions under which the alias is emitted (e.g. ldm/stm on sp may be
pop/push, but only if the number of registers is >= 2), or the number
of registers is multivalued (as happens again with ldm/stm) and does not
match the InstAlias pattern, which assumes single-valued operands.
Patch by: Roger Ferrer Ibanez
Differential Revision: http://reviews.llvm.org/D20237
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271667 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
There are no tests*, no EABI buildbots, and simple test cases do not work.
* There is a single MIPS16 test using a mips*-gnueabi triple, but this test
doesn't test EABI, and the triple doesn't cause EABI to be used.
Reviewers: sdardis
Subscribers: tberghammer, danalbert, srhines, dsanders, sdardis, llvm-commits
Differential Revision: http://reviews.llvm.org/D20906
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271658 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Added custom converters for SDWA instructions to support optional operands
and modifiers. Added support for _dpp and _sdwa suffixes, which allow
forcing the DPP or SDWA encoding of instructions.
Reviewers: tstellarAMD, vpykhtin, artem.tamazov
Subscribers: arsenm, kzhuravl
Differential Revision: http://reviews.llvm.org/D20625
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271655 91177308-0d34-0410-b5e6-96231b3b80d8
subclasses. These are not passes proper. We don't support registering
them, they can't be constructed with default arguments, and the ID is
actually in a base class.
Only these two targets even had any boilerplate to try to do this, and
it had to be munged out of the INITIALIZE_PASS macros to work. What's
worse, the boilerplate has rotted and the "name" of the pass is
actually the description string now!!! =/ All of this is completely
unnecessary. No other target bothers, and nothing breaks if you don't
initialize them because CodeGen has an entirely separate initialization
path that is somewhat more durable than relying on the implicit
initialization the way the 'opt' tool does for registered passes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271650 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
N32 support will follow in a later patch since the symbol version of 'la'
incorrectly believes N32 to have 64-bit pointers and rejects it early.
This fixes the three incorrectly expanded 'la' macros found in bionic.
Reviewers: sdardis
Subscribers: dsanders, llvm-commits, sdardis
Differential Revision: http://reviews.llvm.org/D20820
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271644 91177308-0d34-0410-b5e6-96231b3b80d8
This patch begins adding support for lowering to the XOP VPERMIL2PD/VPERMIL2PS shuffle instructions - adding the X86ISD::VPERMIL2 opcode and cleaning up the usage.
The internal llvm intrinsics were assuming the shuffle mask operand was the same type as the float/double input operands (I guess to simplify the intrinsic definitions in X86InstrXOP.td to a single value type). These needed changing to integer types (matching the clang builtin and the AMD intrinsics definitions); an auto-upgrade path is added to convert old calls.
Mask decoding/target shuffle support will be added in future patches.
Differential Revision: http://reviews.llvm.org/D20049
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@271633 91177308-0d34-0410-b5e6-96231b3b80d8