142051 Commits

Author SHA1 Message Date
Sanjay Patel
123996f5e7 remove stale FIXME note from test; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289445 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 16:20:21 +00:00
Simon Pilgrim
54121a6ebd [X86] Regenerate vector bitcast/widening tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289443 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 16:15:45 +00:00
Sanjay Patel
bbe630400d [InstCombine] fix bug when offsetting case values of a switch (PR31260)
We could truncate the condition and then try to fold the add into the
original condition value, causing wrong case constants to be used.

Move the offset transform ahead of the truncate transform and return
after each transform, so there's no chance of getting confused values.

Fix for:
https://llvm.org/bugs/show_bug.cgi?id=31260


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289442 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 16:13:52 +00:00
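A hypothetical C++ illustration of the offset transform involved (not the PR31260 reproducer): when a switch condition is `x + C`, InstCombine can instead switch on `x` and subtract `C` from every case value. The bug was that running the truncate transform first could rebase those case constants against a value of the wrong width.

```cpp
// Hypothetical example, not the PR31260 reproducer.
int classify(int x) {
  // InstCombine's offset transform can rewrite this to switch directly on x,
  // with the case values rebased by -5 (i.e. case 5 and case 15).
  switch (x + 5) {
  case 10: return 1;
  case 20: return 2;
  default: return 0;
  }
}
```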
Teresa Johnson
23137e57a7 [ThinLTO] Import only necessary DICompileUnit fields
Summary:
As discussed on the mailing list, for ThinLTO importing we don't need
to import all the fields of the DICompileUnit. Don't import the enums,
macros, or retained types lists. Also, only import locally scoped imported
entities. Since we don't currently import any global variables,
we also don't need to import the list of global variables (added an
assert to verify none are being imported).

This is being done by pre-populating the value map entries to map
the unneeded metadata to nullptr. For the imported entities, we can
simply replace the source module's list with a new list containing
only those needed imported entities. This is done in the IRLinker
constructor so that value mapping automatically does the desired
mapping.

Reviewers: mehdi_amini, dexonsmith, dblaikie, aprantl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D27635

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289441 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 16:09:30 +00:00
Sanjay Patel
d616cee95a [InstCombine] clean up range-for-loops in visitSwitchInst(); NFCI
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289439 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 15:52:56 +00:00
Simon Pilgrim
ad31e861e9 [X86] Regenerate test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289438 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 15:47:53 +00:00
Sanjay Patel
3ca6ce4aa6 [InstCombine] add test to show PR31260 miscompile; NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289437 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 15:28:44 +00:00
Sanjoy Das
650050c7c2 [SCEVExpander] Add a test case related to r289412
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289435 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 14:57:11 +00:00
Simon Pilgrim
f00c869d13 Update inline argument comment. NFCI.
combineX86ShufflesRecursively's 'HasPSHUFB' flag has been the more generic 'HasVariableMask' flag for some time.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289430 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 13:43:15 +00:00
Simon Pilgrim
e5753c5b0d [X86][SSE] Add support for combining SSE VSHLI/VSRLI uniform constant shifts.
Fixes some missed constant folding opportunities and allows us to combine shuffles that end with a logical bit shift.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289429 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 13:33:58 +00:00
Simon Pilgrim
dcc9618afe [X86][SSE] Lower suitably sign-extended mul vXi64 using PMULDQ
PMULDQ returns the 64-bit result of the signed multiplication of the lower 32 bits of vXi64 vector inputs; we can lower with this if the sign bits stretch that far.

Differential Revision: https://reviews.llvm.org/D27657

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289426 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 10:49:15 +00:00
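For reference, PMULDQ is exposed in C++ as the SSE4.1 intrinsic `_mm_mul_epi32`; a minimal sketch of the operation this lowering targets (the intrinsic usage is illustrative, not taken from the patch):

```cpp
#include <immintrin.h>

// PMULDQ: signed multiply of the low 32 bits of each 64-bit lane, producing
// full 64-bit products. A v2i64 multiply whose operands are effectively
// sign-extended from 32 bits can therefore be lowered to this one instruction.
__m128i mul_of_sext32(__m128i a, __m128i b) {
  return _mm_mul_epi32(a, b); // requires SSE4.1
}
```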
Simon Pilgrim
2dc2fca37b [SelectionDAG] Add support for EXTRACT_SUBVECTOR to ComputeNumSignBits
Pre-commit as discussed on D27657

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289425 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 10:29:43 +00:00
Craig Topper
548f86a91b [X86] Teach selectScalarSSELoad to accept full 128-bit vector loads and the X86ISD::VZEXT_LOAD opcode.
Disable peephole on some of the tests that no longer require it to properly fold scalar intrinsics.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289424 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 07:57:24 +00:00
Craig Topper
d7dfe64614 [X86] Change CMPSS/CMPSD intrinsic instructions to use sse_load_f32/f64 as their memory pattern instead of a full vector load.
These intrinsics only load a single element. We should use sse_load_f32/f64 to give more options for which loads can be matched.

Currently these instructions are often only getting their load folded thanks to the load folding in the peephole pass. I plan to add more types of loads to sse_load_f32/f64 so we can match without the peephole.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289423 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 07:57:21 +00:00
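A hypothetical source-level example of the kind of pattern the improved load matching is aimed at (illustrative only; the patch itself changes the instruction's TableGen memory pattern):

```cpp
#include <immintrin.h>

// Only the low float is read from memory here, so the scalar load should fold
// directly into CMPSS via sse_load_f32 rather than relying on a full 128-bit
// vector load or the peephole pass to do the folding.
__m128 cmp_low_eq(__m128 a, const float *p) {
  return _mm_cmpeq_ss(a, _mm_load_ss(p));
}
```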
Craig Topper
4f6cd5addf [X86] Remove some intrinsic instructions from hasPartialRegUpdate
Summary:
These intrinsic instructions are all selected from intrinsics that have well-defined behavior for where the upper bits come from, which is not the same place as the lower bits.

As you can see we were suppressing load folding for these instructions in some cases. In none of the cases was the separate load helping avoid a partial dependency on the destination register. So we should just go ahead and allow the load to be folded.

Only foldMemoryOperand was suppressing folding for these. They all have patterns for folding sse_load_f32/f64 that aren't gated with OptForSize, but sse_load_f32/f64 doesn't allow 128-bit vector loads. It only allows scalar_to_vector and vzmovl of scalar loads to match. There's no reason we can't allow a 128-bit vector load to be narrowed so I would like to fix sse_load_f32/f64 to allow that. And if I do that it changes some of these same test cases to fold the load too.

Reviewers: spatel, zvi, RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D27611

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289419 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 05:07:17 +00:00
Sebastian Pop
b226b5d5c0 [SCEVExpand] do not hoist divisions by zero (PR30935)
SCEVExpand computes the insertion point for the components of a SCEV to be code
generated.  When it comes to generating code for a division, SCEVExpand would
not be able to check (at compilation time) all the conditions necessary to avoid
a division by zero.  The patch disables hoisting of expressions containing
divisions by anything other than non-zero constants in order to avoid hoisting
these expressions past conditions that should hold before doing the division.

The patch passes check-all on x86_64-linux.

Differential Revision: https://reviews.llvm.org/D27216

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289412 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-12 02:52:51 +00:00
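A hypothetical illustration of the hazard (not the PR30935 test case): the division below is only reachable once the guard has proven the divisor non-zero, so hoisting it to a point that is not dominated by the guard can introduce a division by zero at run time.

```cpp
// Hypothetical example of a guarded division that must not be hoisted past
// its guard. The trip-count expression involves n / d, and SCEVExpander may
// want to materialize it in the loop preheader; unless d is a non-zero
// constant, that expansion point must still be controlled by the d != 0 check.
long sum_prefix(const long *a, long n, long d) {
  long s = 0;
  if (d != 0) {                      // guard that makes the division safe
    for (long i = 0; i < n / d; ++i) // division appears in the trip count
      s += a[i];
  }
  return s;
}
```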
Craig Topper
e25a2790d2 [InstCombine][XOP] The instructions for the scalar frcz intrinsics are defined to put 0 in the upper bits, not pass bits through like other intrinsics. So we should return a zero vector instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289411 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 22:32:38 +00:00
Simon Pilgrim
f47a06ee95 [X86][SSE] Add support for combining target shuffles to SHUFPD.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289407 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 21:26:25 +00:00
Davide Italiano
24d39f3563 [SCCP] Use the appropriate helper function. NFCI.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289406 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 21:19:03 +00:00
Ayman Musa
159e7f5aa8 [X86][AVX512] Add missing patterns for broadcast fallback in case load node has multiple uses (for v4i64 and v4f64).
When the load node feeding the broadcast instruction has multiple uses, it cannot be folded.
A fallback pattern is added to catch these cases and provide another solution.

Differential Revision: https://reviews.llvm.org/D27661



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289404 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 20:11:17 +00:00
Sanjoy Das
965b15e108 [TBAA] Don't generate invalid TBAA when merging nodes
Summary:
Fix a corner case in `MDNode::getMostGenericTBAA` where we can sometimes
generate invalid TBAA metadata.

Reviewers: chandlerc, hfinkel, mehdi_amini, manmanren

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D26635

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289403 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 20:07:25 +00:00
Sanjoy Das
a942d77488 [Verifier] Add verification for TBAA metadata
Summary:
This change adds some verification in the IR verifier around struct path
TBAA metadata.

Other than some basic sanity checks (e.g. we get constant integers where
we expect constant integers), this checks:

 - That by the time a struct access tuple `(base-type, offset)` is
   "reduced" to a scalar base type, the offset is `0`.  For instance, in
   C++ you can't start from, say `("struct-a", 16)`, and end up with
   `("int", 4)` -- by the time the base type is `"int"`, the offset
   better be zero.  In particular, a variant of this invariant is needed
   for `llvm::getMostGenericTBAA` to be correct.

 - That there are no cycles in a struct path.

 - That struct type nodes have their offsets listed in an ascending
   order.

 - That when generating the struct access path, you eventually reach the
   access type listed in the tbaa tag node.

Reviewers: dexonsmith, chandlerc, reames, mehdi_amini, manmanren

Subscribers: mcrosier, llvm-commits

Differential Revision: https://reviews.llvm.org/D26438

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289402 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 20:07:15 +00:00
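A hypothetical C++ example to make the first invariant concrete (field offsets assume a typical LP64 layout): the access to `a.i` starts the struct path at `("A", 8)` and must reach the scalar type `"int"` only once the residual offset is `0`; arriving at `"int"` with a non-zero offset left over is the malformed case the verifier now rejects.

```cpp
// Hypothetical example; field offsets assume a common LP64 ABI layout.
struct A {
  double d; // offset 0
  int    i; // offset 8
};

int load_i(const A &a) {
  // Struct-path TBAA for this load conceptually walks ("A", 8) -> ("int", 0):
  // the offset must be fully consumed by the time the scalar base type "int"
  // is reached.
  return a.i;
}
```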
Sanjay Patel
587c78e31a [Constants] don't die processing non-ConstantInt GEP indices in isGEPWithNoNotionalOverIndexing() (PR31262)
This should fix:
https://llvm.org/bugs/show_bug.cgi?id=31262


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289401 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 20:07:02 +00:00
Simon Pilgrim
73f20efb6b [X86][AVX512] Add target shuffle test showing missing PSHUFPD combine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289400 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 19:41:23 +00:00
Sebastian Pop
cfc6ce983e instr-combiner: sum up all latencies of the transformed instructions
We have found that -- when the selected subarchitecture has a scheduling model
and we are not optimizing for size -- the machine-instruction combiner uses a
too-simple algorithm to compute the cost of one of the two alternatives [before
and after running a combining pass on a section of code], and therefore it throws
away the combination results too often.

This fix has the potential to help any ISA with the potential to combine
instructions and for which at least one subarchitecture has a scheduling model.
As of now, this is only known to definitely affect AArch64 subarchitectures with
a scheduling model.

Regression-tested on AMD64/GNU-Linux; the new test case was verified to fail
with an unpatched compiler and to pass with a patched compiler.

Patch by Abe Skolnik and Sebastian Pop.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289399 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 19:39:32 +00:00
Simon Pilgrim
c9fbdfc237 [X86][XOP] Add target shuffle tests showing missing PSHUFPD combine.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289398 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 19:36:25 +00:00
Sanjoy Das
7af3278b6a [SCEVExpander] Explicitly expand AddRec starts into loop preheader
This is NFC today, but won't be once D27216 (or an equivalent patch) is
in.

This change fixes a design problem in SCEVExpander -- it relied on a
hoisting optimization to generate correct code for add recurrences.
This meant changing the hoisting optimization to not kick in under
certain circumstances (to avoid speculating faulting instructions, say)
would break correctness.

The fix is to make the correctness requirements explicit, and have it
not rely on the hoisting optimization for correctness.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289397 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 19:02:21 +00:00
Oren Ben Simhon
c4650e7f1c [X86] Regcall - Adding support for mask types
The regcall calling convention passes mask-type arguments in x86 GPR registers.
The review includes the changes required in order to support v32i1, v16i1 and v8i1.

Differential Revision: https://reviews.llvm.org/D27148



git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289383 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 14:10:52 +00:00
Chandler Carruth
fb88888f75 [FileCheck] Re-implement the logic to find each check prefix in the
check file to not be unreasonably slow in the face of multiple check
prefixes.

The previous logic would repeatedly scan potentially large portions of
the check file looking for alternative prefixes. In the worst case this
would scan most of the file looking for a rare prefix between every
single occurrence of a common prefix. Even if we bounded the scan, this
would do bad things if the order of the prefixes was "unlucky" and the
distant prefix was scanned for first.

None of this is necessary. It is straightforward to build a state
machine that recognizes the first, longest of the set of alternative
prefixes. That is in fact exactly what a regular expression does.

This patch builds a regular expression once for the set of prefixes and
then uses it to search incrementally for the next prefix. This requires
some threading of state but actually makes the code dramatically
simpler. I've also added a big comment describing the algorithm as it
was not at all obvious to me when I started.

With this patch, several previously pathological test cases in
test/CodeGen/X86 are 5x and more faster. Overall, running all tests
under test/CodeGen/X86 uses 10% less CPU after this, and because all the
slowest tests were hitting this, finishes in 40% less wall time on my
system (going from just over 5.38s to just over 3.23s) on a release
build! This patch substantially improves the time of all 7 X86 tests
that were in the top 20 reported by --time-tests, 5 of them are
completely off the list and the remaining 2 are much lower. (Sadly, the
new tests on the list include 2 new X86 ones that are slow for unrelated
reasons, so the count stays at 4 of the top 20.)

It isn't clear how much this helps debug builds in aggregate in part
because of the noise, but it again makes many of the slowest x86 tests
significantly faster (10% or more improvement).

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289382 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 12:49:05 +00:00
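A small sketch of the idea in plain C++ (FileCheck itself uses its own regex support and data structures; the names here are illustrative): join all check prefixes into one alternation, longest first so the longest alternative wins at any given position, and reuse the compiled expression to scan forward incrementally instead of re-searching the buffer once per prefix.

```cpp
#include <algorithm>
#include <regex>
#include <string>
#include <vector>

// Illustrative sketch only -- not FileCheck's actual implementation.
std::regex buildPrefixRegex(std::vector<std::string> Prefixes) {
  // Order alternatives longest-first so that a longer prefix such as
  // "FOO-BAR" is preferred over "FOO" when both could match at a position.
  std::sort(Prefixes.begin(), Prefixes.end(),
            [](const std::string &A, const std::string &B) {
              return A.size() > B.size();
            });
  std::string Pattern;
  for (const std::string &P : Prefixes) {
    if (!Pattern.empty())
      Pattern += '|';
    Pattern += P; // check prefixes are plain identifiers, no escaping needed
  }
  return std::regex(Pattern);
}
```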
Chandler Carruth
9585dcb77c [FileCheck] Remove a parameter that was simply always set to
a command-line flag, so test the flag directly. NFC.

If we ever need this generality it can be added back.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289381 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 10:22:17 +00:00
Chandler Carruth
ac7830ed77 [FileCheck] Clean up doxygen comments throughout. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289380 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 10:16:21 +00:00
Chandler Carruth
b541b81d2d [FileCheck] Run clang-format over this code. NFC.
This fixes one formatting goof I left in my previous commit and *many*
other inconsistencies.

I'm planning to make substantial changes here and so wanted to get to
a clean baseline.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289379 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 09:54:36 +00:00
Chandler Carruth
79aaa0b849 Refactor FileCheck some to reduce memory allocation and copying. Also
make some readability improvements.

Both the check file and input file have to be fully buffered to
normalize their whitespace. But previously this would be done in a stack
SmallString and then copied into a heap allocated MemoryBuffer. That
seems pretty wasteful, especially for something like FileCheck where
there are only ever two such entities.

This just rearranges the code so that we can keep the canonicalized
buffers on the stack of the main function and use reasonably large stack
buffers to reduce allocation. A rough estimate seems to show that about
80% of LLVM's .ll and .s files will fit into a 4k buffer, so this should
completely avoid heap allocation for the buffer in those cases. My
system's malloc is fast enough that the allocations don't directly show
up in timings. However, on some very slow test cases, this saves 1% - 2%
by avoiding the copy into the heap allocated buffer.

This also splits out the code which checks the input into a helper much
like the code to build the checks as that made the code much more
readable to me. Nit picks and suggestions welcome here. It has really
exposed a *bunch* of stuff that could be cleaned up though, so I'm
probably going to go and spring clean all of this code as I have more
changes coming to speed things up.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289378 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 09:50:05 +00:00
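A minimal sketch of the buffering pattern described above (hypothetical helper name; not the actual FileCheck code): the caller owns a SmallString whose inline storage is large enough that typical inputs never touch the heap.

```cpp
#include "llvm/ADT/SmallString.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/ADT/StringRef.h"

// Hypothetical helper: canonicalize into caller-owned storage instead of
// returning a heap-allocated buffer.
static void canonicalizeWhitespace(llvm::StringRef In,
                                   llvm::SmallVectorImpl<char> &Out) {
  for (char C : In)
    if (C != '\r') // e.g. drop carriage returns; the real pass does more
      Out.push_back(C);
}

void example(llvm::StringRef CheckFileText) {
  // 4K of inline storage covers roughly 80% of LLVM's .ll/.s files, so the
  // common case performs no heap allocation for this buffer at all.
  llvm::SmallString<4096> Buf;
  canonicalizeWhitespace(CheckFileText, Buf);
}
```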
Craig Topper
98435b8bdf [X86][InstCombine] Add support for scalar FMA intrinsics to SimplifyDemandedVectorElts.
This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if the upper elements of the first operand aren't used, then we can simplify them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289377 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 08:54:52 +00:00
Craig Topper
70493cff6f [X86][InstCombine] Add the test cases for r289370, r289371, and r289372.
I forgot to add the new files before committing.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289374 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 08:00:51 +00:00
Chandler Carruth
aa58b624e7 Tweak the core loop in StringRef::find to avoid calling memcmp on every
iteration.

Instead, load the byte at the needle length, compare it directly, and
save it to use in the lookup table of lengths we can skip forward.

I also added an annotation to expect that the comparison fails so that
the loop gets laid out contiguously without the call to memcmp (and the
substantial register shuffling that the ABI requires of that call).

Finally, because this behaves especially badly with a needle length of
one (by calling memcmp with a zero length), special-case that to directly
call memchr, which is what we should have been doing anyway.

This was motivated by the fact that there are a large number of test
cases in 'check-llvm' where FileCheck's performance is dominated by
calls to StringRef::find (in a release, no-asserts build). I'm working
on patches to generally improve matters there, but this alone was worth
a 12.5% improvement in one test case where FileCheck spent 92% of its
time in this routine.

I experimented a bunch with different minor variations on this theme,
for example setting the pointer *at* the last byte and indexing
backwards for the call to memcmp. That didn't improve anything on this
version and seemed more complex. I also tried other things to make the
loop flow more nicely and none worked. =/ It is a bit unfortunate that the
generated code here remains pretty gross, but I don't see any obvious
ways to improve it. At this point, most of my ideas would be really
elaborate:

1) While the remainder of the string is long enough, we could load
   a 16-byte or 32-byte vector at the address of the last byte and use
   palignr to rotate that and check the first 15- or 31-bytes at the
   front of the next segment, essentially pre-loading the first several
   bytes of the next iteration so we could quickly detect a mismatch in
   those bytes without an additional memory access. Down side would be
   the code complexity, having a fallback loop, and likely misaligned
   vector load. Plus it would make the common case of the last byte not
   matching somewhat slower (need some extraction from a vector).
2) While we have space, we could do an aligned load of a 16- or 32-byte
   vector that *contains* the end byte, and use any preceding bytes to
   have a more precise "no" test, and any subsequent bytes could be
   saved for the next iteration. This removes any unaligned load penalty,
   but still requires us to pay the overhead of vector extraction for
   the cases where we didn't need to do anything other than load and
   compare the last byte.
3) Try to walk from the last byte in a way that is more friendly to
   cache and/or memory pre-fetcher considering we have to poke the last
   byte anyways.

No idea if any of these are really worth pursuing though. They all seem
somewhat unlikely to yield big wins in practice and to be a lot of work
and complexity. So I settled here, which at least seems like a strict
improvement over the previous version.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289373 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 07:46:21 +00:00
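A self-contained sketch of the loop shape described above (simplified; not LLVM's exact StringRef::find): test the last byte of the current window before paying for memcmp, reuse that byte to index a skip table, and special-case single-byte needles with memchr.

```cpp
#include <cstddef>
#include <cstring>

// Simplified illustration of the search-loop shape; not LLVM's exact code.
static size_t findSubstr(const char *Hay, size_t HayLen,
                         const char *Needle, size_t N) {
  const size_t npos = (size_t)-1;
  if (N == 0)
    return 0;
  if (N > HayLen)
    return npos;
  if (N == 1) { // avoid memcmp with length zero; memchr is the right tool
    const void *P = std::memchr(Hay, Needle[0], HayLen);
    return P ? (size_t)((const char *)P - Hay) : npos;
  }

  // Horspool-style table: how far the window may shift when its last byte
  // is a given character.
  size_t Skip[256];
  for (size_t &S : Skip)
    S = N;
  for (size_t I = 0; I + 1 < N; ++I)
    Skip[(unsigned char)Needle[I]] = N - 1 - I;

  for (const char *P = Hay, *LastStart = Hay + HayLen - N; P <= LastStart;) {
    unsigned char C = (unsigned char)P[N - 1];
    // Cheap, expected-to-fail test first; only call memcmp on a likely match.
    if (C == (unsigned char)Needle[N - 1] &&
        std::memcmp(P, Needle, N - 1) == 0)
      return (size_t)(P - Hay);
    P += Skip[C];
  }
  return npos;
}
```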
Craig Topper
d07981b634 [X86][InstCombine] Teach InstCombineCalls to simplify demanded elements for scalar FMA intrinsics.
These intrinsics don't read the upper bits of their second and third inputs, so we can try to simplify them.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289372 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 07:42:06 +00:00
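For context, the scalar FMA semantics the combine relies on, shown with the corresponding C intrinsic (illustrative; the patch operates on the LLVM intrinsics directly): only the low lanes of the second and third operands participate, and the upper lanes of the result come from the first operand.

```cpp
#include <immintrin.h>

// _mm_fmadd_ss computes a[0]*b[0]+c[0] in the low lane and copies a's upper
// three lanes into the result, so the upper lanes of b and c are never read
// and shuffles that only feed those lanes can be simplified away.
__m128 fma_low_lane(__m128 a, __m128 b, __m128 c) {
  return _mm_fmadd_ss(a, b, c); // requires FMA3
}
```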
Craig Topper
0a2fc781e3 [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar cmp intrinsics with masking and rounding.
These intrinsics don't read the upper elements of their first and second input. These are slightly different from the SSE version, which does use the upper bits of its first operand as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289371 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 07:42:04 +00:00
Craig Topper
d469865e61 [AVX-512][InstCombine] Teach InstCombineCalls how to simplify demanded elements for scalar add,div,mul,sub,max,min intrinsics with masking and rounding.
These intrinsics don't read the upper bits of their second input, and the third input is the passthru for masking, which only uses the lower element as well.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289370 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 07:42:01 +00:00
Dylan McKay
15d7b5d79e [AVR] Add calling convention CodeGen tests
This adds CodeGen tests for the AVR C calling convention.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289369 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 07:09:45 +00:00
Kostya Serebryany
d123ac5f20 [libFuzzer] don't depend on time in a test
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289368 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 06:28:09 +00:00
Dylan McKay
b966884b95 [AVR] Add a test to validate a simple 'blinking led' program
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289362 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 04:59:39 +00:00
Craig Topper
f3e3617e77 [AVX-512][InstCombine] Add 512-bit vpermilvar intrinsics to InstCombineCalls to match 128 and 256-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289354 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 01:59:36 +00:00
Craig Topper
dbbc2b8fbd [X86] Fix a comment to say 'an FMA' instead of 'a FMA'. NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289352 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 01:28:08 +00:00
Craig Topper
2d270b3115 [X86] Remove masking from 512-bit VPERMIL intrinsics in preparation for being able to constant fold them in InstCombineCalls like we do for 128/256-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289350 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 01:26:44 +00:00
Dylan McKay
a6dba14e5c [AVR] Fix a signed vs unsigned compiler warning
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289349 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 00:24:13 +00:00
Craig Topper
7c8796bdf5 [X86][InstCombine] Teach InstCombineCalls to turn pshufb intrinsic into a shufflevector if the indices are constant.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289348 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-11 00:23:50 +00:00
Dylan McKay
8c11e2cda6 [AVR] Remove incorrect comment
This should've been removed in r289323.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289346 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-10 23:50:30 +00:00
Craig Topper
df9e980077 [X86] Remove masking from 512-bit PSHUFB intrinsics in preparation for being able to constant fold it in InstCombineCalls like we do for 128/256-bit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289344 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-10 23:09:43 +00:00
Sanjay Patel
6f7d6747b6 [InstCombine] add helper for shift-by-shift folds; NFCI
These are currently limited to integer types, but we should
be able to extend to splat vectors and possibly general vectors.


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289343 91177308-0d34-0410-b5e6-96231b3b80d8
2016-12-10 22:16:29 +00:00