pattern.
Also hoist the creation routine out of the generic header and into the
pass header now that we have one.
I've deliberately avoided making any changes here, even formatting ones.
I'll clean up the formatting and other issues in a follow-up patch now
that the code is in the right place.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@245004 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch implements my promised optimization to reunite certain sexts from
operands after we extract the constant offset. See the header comment of
reuniteExts for its motivation.
One key building block that enables this optimization is Bjarke's poison value
analysis (D11212), which helps prove that "a +nsw b" can't overflow.
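As a rough illustration (value names are mine, not from the patch), reuniteExts
turns a pair of sign extensions feeding a wide add back into a narrow add
followed by a single sext, assuming the narrow add is proven not to overflow:
  %a.ext = sext i32 %a to i64
  %b.ext = sext i32 %b to i64
  %sum   = add i64 %a.ext, %b.ext
=>
  %ab  = add nsw i32 %a, %b  ; provably no signed overflow
  %sum = sext i32 %ab to i64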
Reviewers: broune
Subscribers: jholewinski, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D12016
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@245003 91177308-0d34-0410-b5e6-96231b3b80d8
AliasAnalysis in LoopIdiomRecognize.
The previous commit to LIR, r244879, exposed a scary bug in the loop pass
pipeline, with an assert failure that showed up on several bots.
This patch got reverted as part of getting that revision reverted, but
they're actually independent and unrelated. This patch has no functional
change and should be completely safe. It is also useful for my current
work on the AA infrastructure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244993 91177308-0d34-0410-b5e6-96231b3b80d8
This commit modifies the way machine basic blocks are serialized: they are now
serialized using a custom syntax instead of relying on YAML primitives. Instead
of using YAML mappings to represent the individual machine basic blocks in a
machine function's body, the new syntax uses a single YAML block scalar which
contains all of the machine basic blocks and instructions for that function.
This is an example of a function's body that uses the old syntax:
body:
  - id: 0
    name: entry
    instructions:
      - '%eax = MOV32r0 implicit-def %eflags'
      - 'RETQ %eax'
...
The same body is now written like this:
body: |
  bb.0.entry:
    %eax = MOV32r0 implicit-def %eflags
    RETQ %eax
...
This syntax change is motivated by the fact that the bundled machine
instructions didn't map that well to the old syntax, which used a single YAML
sequence to store all of the machine instructions in a block. The bundled
machine instructions internally use flags like BundledPred and BundledSucc to
determine the bundles, and serializing them as MI flags using the old syntax
would have had a negative impact on the readability and the ease of editing
for MIR files. The new syntax allows me to serialize the bundled machine
instructions using a block construct without relying on the internal flags,
for example:
BUNDLE implicit-def dead %itstate, implicit-def %s1 ... {
  t2IT 1, 24, implicit-def %itstate
  %s1 = VMOVS killed %s0, 1, killed %cpsr, implicit killed %itstate
}
This commit also converts the MIR testcases to the new syntax. I developed
a script that can convert from the old syntax to the new one. I will post the
script on the llvm-commits mailing list in the thread for this commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244982 91177308-0d34-0410-b5e6-96231b3b80d8
We used to just say "invalid type suffix for instruction", which is
misleading. This is because we fall back to the long-form matcher if the
short-form matcher failed, losing the error information along the way.
Save it, so that we can provide a little better diagnostics when the
long-form matcher thinks a suffix is the cause of the error.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244955 91177308-0d34-0410-b5e6-96231b3b80d8
Follow-up to D10947: D9746 added general SMAX/SMIN/UMAX/UMIN pattern matching to SelectionDAGBuilder::visitSelect.
This patch removes the X86 implementation and improves the AVX1/AVX2 support to correctly lower 256-bit integer vectors.
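For reference, this is a sketch of the canonical signed-max select pattern on a
256-bit integer vector that the generic matcher recognizes (names are
illustrative):
  %cmp = icmp sgt <8 x i32> %a, %b
  %max = select <8 x i1> %cmp, <8 x i32> %a, <8 x i32> %b  ; matched as SMAX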
Differential Revision: http://reviews.llvm.org/D12006
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244949 91177308-0d34-0410-b5e6-96231b3b80d8
If <src> is known to be non-zero we can safely set the flag to true, and this
results in less code generated for, e.g., ffs(x) + 1 on FreeBSD.
Thanks to majnemer for suggesting the fix and reviewing.
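In IR terms the change amounts to flipping the second (is_zero_undef) operand
of the cttz intrinsic (a sketch, with illustrative names):
  %c = call i32 @llvm.cttz.i32(i32 %x, i1 false)
=>
  %c = call i32 @llvm.cttz.i32(i32 %x, i1 true)  ; valid only when %x is known non-zero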
Code generated before the patch was applied:
   0:  0f bc c7              bsf    %edi,%eax
   3:  b9 20 00 00 00        mov    $0x20,%ecx
   8:  0f 45 c8              cmovne %eax,%ecx
   b:  83 c1 02              add    $0x2,%ecx
   e:  b8 01 00 00 00        mov    $0x1,%eax
  13:  85 ff                 test   %edi,%edi
  15:  0f 45 c1              cmovne %ecx,%eax
  18:  c3                    retq
Code generated after the patch was applied:
   0:  0f bc cf              bsf    %edi,%ecx
   3:  83 c1 02              add    $0x2,%ecx
   6:  85 ff                 test   %edi,%edi
   8:  b8 01 00 00 00        mov    $0x1,%eax
   d:  0f 45 c1              cmovne %ecx,%eax
  10:  c3                    retq
It seems we can still use cmove and save another 'test' instruction, but
that can be tackled separately.
Differential Revision: http://reviews.llvm.org/D11989
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244947 91177308-0d34-0410-b5e6-96231b3b80d8
This commit extracts the code that parses the memory operand's alignment into
a new method named 'parseAlignment' so that it can be reused when parsing the
basic block's alignment attribute.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244945 91177308-0d34-0410-b5e6-96231b3b80d8
This commit renames the method 'diagFromLLVMAssemblyDiag' to
'diagFromBlockStringDiag'. This method will be used when converting diagnostics
from other YAML block strings, and not just the LLVM module block string, so
the new name should reflect that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244943 91177308-0d34-0410-b5e6-96231b3b80d8
We used to be over-conservative about preserving inbounds. Actually, the second
GEP (which applies the constant offset) can inherit the inbounds attribute of
the original GEP, because the resultant pointer is equivalent to that of the
original GEP. For example,
  x = GEP inbounds a, i+5
    =>
  y = GEP a, i          // inbounds removed
  x = GEP inbounds y, 5 // inbounds preserved
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244937 91177308-0d34-0410-b5e6-96231b3b80d8
After r244870, flush() will only compare two null pointers and return, doing
nothing but wasting run time. The call is no longer required, as the stream
and its SmallString are always in sync.
Thanks to David Blaikie for reviewing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244928 91177308-0d34-0410-b5e6-96231b3b80d8
This patch corresponds to review:
http://reviews.llvm.org/D11471
It improves the code generated for converting a scalar to a vector value. With
direct moves from GPRs to VSRs, we no longer require expensive stack operations
for this. Subsequent patches will handle the reverse case and more general
operations between vectors and their scalar elements.
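As an illustration (not taken from the patch), the kind of IR that benefits is
a plain scalar-to-vector insertion, which previously had to round-trip through
the stack:
  %v = insertelement <4 x i32> undef, i32 %a, i32 0  ; now a direct GPR-to-VSR move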
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244921 91177308-0d34-0410-b5e6-96231b3b80d8
This was my error. We've got f32 marked as legal because f32 operations are simulated using a v2f32 instruction, but there's no equivalent for f64.
This will get test coverage imminently when D12015 lands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244916 91177308-0d34-0410-b5e6-96231b3b80d8
This overrides the default to more closely resemble the hand-crafted matching logic in ISelLowering. It makes sense to use vmin/vmax when they're available, as there is no VFP equivalent, even though VFP ops should generally be preferred.
This should be NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244915 91177308-0d34-0410-b5e6-96231b3b80d8
They rely on global fast-math options, but soon ISel will rely only on fast-math flags on the instructions themselves. Rip the fast checks out into their own file so we can mark their instructions as fast.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244914 91177308-0d34-0410-b5e6-96231b3b80d8
These tests relied on -enable-no-nans-fp-math, whereas soon they'll take their no-nans hint
from the FCMP instruction itself, so split the no-nans stuff out into its own test.
Also do a slight rejig of instruction order. The old FMIN/MAX backend matching had to deal with looking through casts, which it never did particularly well. Now, instcombine will recognize such patterns and canonicalize the cast outside the select. So modify the test inputs to assume that instcombine has already run.
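For illustration, a sketch of the canonicalization the new test inputs assume
(names are mine):
  %a.x = fpext float %a to double
  %b.x = fpext float %b to double
  %cmp = fcmp olt float %a, %b
  %sel = select i1 %cmp, double %a.x, double %b.x
=>
  %cmp = fcmp olt float %a, %b
  %sel = select i1 %cmp, float %a, float %b
  %res = fpext float %sel to double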
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244913 91177308-0d34-0410-b5e6-96231b3b80d8
DeadStoreElimination eliminates a store if it stores a value which was loaded
from the same memory location. So far this worked only if the store was in the
same block as the load; now we can also handle stores in a different block
than the load.
Example:
define i32 @test(i1, i32*) {
entry:
  %l2 = load i32, i32* %1, align 4
  br i1 %0, label %bb1, label %bb2

bb1:
  br label %bb3

bb2:
  ; This store is redundant
  store i32 %l2, i32* %1, align 4
  br label %bb3

bb3:
  ret i32 0
}
Differential Revision: http://reviews.llvm.org/D11854
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244901 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Update the demotion logic in WinEHPrepare to avoid creating new cleanups by
walking predecessors as necessary to insert stores for EH-pad PHIs.
Also avoid creating stores for EH-pad PHIs that have no uses.
The store/load placement is still pretty naive. Likely future improvements
(at least for optimized compiles) include:
- Share loads for related uses where possible
- Coalesce non-interfering use/def-related PHIs
- Store at definition point rather than each PHI pred for non-interfering
lifetimes.
Reviewers: rnk, majnemer
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11955
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244894 91177308-0d34-0410-b5e6-96231b3b80d8
Recent mesa/llvmpipe crashes on SystemZ due to a failed assertion when
attempting to compile a routine with a return type of
{ <4 x float>, <4 x float>, <4 x float>, <4 x float> }
on a system without vector instruction support.
This is because after legalizing the vector type, we get a return value
consisting of 16 floats, which cannot all be returned in registers.
Usually, what should happen in this case is that the target's CanLowerReturn
routine rejects the return type, in which case SelectionDAG falls back to
implementing a structure return in memory via implicit reference.
However, the SystemZ target never actually implemented any CanLowerReturn
routine, and thus would accept any struct return type.
This patch fixes the crash by implementing CanLowerReturn. As a side effect,
this also handles fp128 return values, fixing a todo that was noted in
SystemZCallingConv.td.
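Conceptually (a sketch, not the literal SelectionDAG mechanism), rejecting the
type demotes the return value to a hidden pointer argument, as if the function
had been written as:
  define void @f({ <4 x float>, <4 x float>, <4 x float>, <4 x float> }* sret %result) {
    ...
    ret void
  }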
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244889 91177308-0d34-0410-b5e6-96231b3b80d8
Consider this code:
BB:
  %i = phi i32 [ 0, %if.then ], [ %c, %if.else ]
  %add = add nsw i32 %i, %b
  ...
In this common case the add can be moved to the %if.else basic block, because
adding zero is an identity operation. If we go through the %if.then branch it's
always a win, because the add is not executed; if not, the number of
instructions stays the same.
This pattern also applies to other instructions: sub, shl, shr, and ashr with
0, and mul, sdiv, and div with 1.
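After the transformation the add only executes on the path where it does real
work (a sketch of the result; the original %add becomes a phi):
if.else:
  %add.else = add nsw i32 %c, %b
  br label %BB
BB:
  %add = phi i32 [ %b, %if.then ], [ %add.else, %if.else ]
  ...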
Patch by Jakub Kuderski!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244887 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r244879, as it broke the test-suite on
SingleSource/Regression/C/2004-03-15-IndirectGoto in AArch64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244885 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r244880, as it broke the test-suite on
SingleSource/Regression/C/2004-03-15-IndirectGoto in AArch64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244884 91177308-0d34-0410-b5e6-96231b3b80d8
Other than PC-relative loads/stores, the patterns that match the various
load/store addressing modes have the same complexity, so the order in which
they are matched is the order in which they appear in the .td file.
Rearrange the instruction definitions in ARMInstrThumb.td, and make use of
AddedComplexity for PC-relative loads, so that the instruction matching order
is the order that results in the simplest selection logic. This also makes
register-offset loads/stores be selected when they should be; previously they
were selected only for too-large immediate offsets.
Differential Revision: http://reviews.llvm.org/D11800
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244882 91177308-0d34-0410-b5e6-96231b3b80d8
in LoopIdiomRecognize. This is what started me staring at this code. Now
migrating it with the new AA stuff will be trivial.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244880 91177308-0d34-0410-b5e6-96231b3b80d8
simplified form to remove redundant checks and simplify the code for
popcount recognition. We don't actually need to handle all of these
cases.
I've left a FIXME for one case in particular until I finish inspecting the
code to make sure we don't actually *rely* on the predicate in any way.
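For context, this is a sketch of the classic clear-lowest-set-bit popcount
idiom that this code recognizes (names are illustrative):
loop:
  %cnt = phi i32 [ 0, %entry ], [ %cnt.next, %loop ]
  %x   = phi i32 [ %x0, %entry ], [ %x.next, %loop ]
  %x.m1     = add i32 %x, -1
  %x.next   = and i32 %x, %x.m1      ; clears the lowest set bit
  %cnt.next = add i32 %cnt, 1
  %done = icmp eq i32 %x.next, 0
  br i1 %done, label %exit, label %loop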
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244879 91177308-0d34-0410-b5e6-96231b3b80d8
Most SSE/AVX (non-constant) vector shift instructions only use the lower 64 bits of the 128-bit shift-amount vector operand; this patch calls SimplifyDemandedVectorElts to optimize for this.
I had to refactor some of my recent InstCombiner work on the vector shifts to avoid quite a bit of duplicate code; as a result, SimplifyX86immshift now (re)decodes the type of shift.
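A hedged example of what this enables (the shuffle is illustrative): since only
the lower 64 bits of %amt are read, only elements 0 and 1, taken from %a, are
demanded, so the blend of %b can be dropped and the shuffle folded away:
  %amt = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 1, i32 6, i32 7>
  %res = call <4 x i32> @llvm.x86.sse2.psll.d(<4 x i32> %v, <4 x i32> %amt)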
Differential Revision: http://reviews.llvm.org/D11938
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244872 91177308-0d34-0410-b5e6-96231b3b80d8
This is faster and avoids the stream and SmallString state synchronization issue.
resync() is a no-op and may be safely deleted. I'll do so in a follow-up commit.
Reviewed by Rafael Espindola.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244870 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: This patch moves the OptimizeForSize check before the traversal over all basic blocks in the current loop. If OptimizeForSize is set to true, no non-trivial unswitch is ever allowed, so the early exit helps reduce compilation time. This patch should be NFC.
Reviewers: reames, weimingz, broune
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D11997
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244868 91177308-0d34-0410-b5e6-96231b3b80d8
Now that we can properly promote mismatched FCOPYSIGNs (r244858), we
can mark the FP_ROUND on the result as truncating, to expose folding.
FCOPYSIGN doesn't change anything but the sign bit, so
  (fp_round (fcopysign (fpext a), b))
is equivalent to (modulo the sign bit):
  (fp_round (fpext a))
which is a no-op.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244862 91177308-0d34-0410-b5e6-96231b3b80d8
We can lower them using our cool tricks if we fpext/fptrunc the second
input, like we do for f32/f64.
Follow-up to r243924, r243926, and r244858.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@244860 91177308-0d34-0410-b5e6-96231b3b80d8