The Printer will now print instructions with the correct alignment specifier syntax, like
vld1.8 {d16}, [r0:64]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175884 91177308-0d34-0410-b5e6-96231b3b80d8
It was incorrectly checking whether a Function* was an IntrinsicInst*, which
isn't possible. It should always have been checking the CallInst* instead.
Added test case for x86 which ensures we only get one constant load.
It was 2 before this change.
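A minimal sketch of the corrected pattern (illustrative only; the helper name
and shape are assumptions, not code from this patch):

  #include "llvm/IR/IntrinsicInst.h"
  using namespace llvm;

  // An IntrinsicInst is-a CallInst, so the cast must be tried on the call
  // instruction itself; casting the callee Function* can never succeed.
  static bool callsIntrinsic(const CallInst *CI, Intrinsic::ID ID) {
    if (const IntrinsicInst *II = dyn_cast<IntrinsicInst>(CI))
      return II->getIntrinsicID() == ID;
    return false;
  }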
rdar://problem/13267920
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175853 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes some problems with overly conservative checking, where we were
marking all aliases of a register as used and then also checking all
aliases when allocating a register.
<rdar://problem/13249625>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175782 91177308-0d34-0410-b5e6-96231b3b80d8
Large code model is identical to medium code model except that the
addis/addi sequence for "local" accesses is never used. All accesses
use the addis/ld sequence.
The coding changes are straightforward; most of the patch is taken up
with creating variants of the medium model tests for large model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175767 91177308-0d34-0410-b5e6-96231b3b80d8
A legal BUILD_VECTOR goes in and gets constant folded into another legal
BUILD_VECTOR so we don't lose any legality here. The problematic PPC
optimization that made this check necessary was fixed recently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175759 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes for-loop.cl piglit test
Patch By: Vincent Lejeune
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
NOTE: This is a candidate for the Mesa stable branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175742 91177308-0d34-0410-b5e6-96231b3b80d8
NOTE: This is a candidate for the Mesa stable branch.
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175733 91177308-0d34-0410-b5e6-96231b3b80d8
there were inline br .+4 instructions. Soon everything can enjoy the
full instruction scheduling experience.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175718 91177308-0d34-0410-b5e6-96231b3b80d8
This patch implements the PPCDAGToDAGISel::PostprocessISelDAG virtual
method to perform post-selection peephole optimizations on the DAG
representation.
One optimization is implemented here: folds to clean up complex
addressing expressions for thread-local storage and medium code
model. It will also be useful for large code model sequences when
those are added later. I originally thought about doing this on the
MI representation prior to register assignment, but it's difficult to
do effective global dead code elimination at that point. DCE is
trivial on the DAG representation.
A typical example of a candidate code sequence in assembly:
  addis 3, 2, globalvar@toc@ha
  addi 3, 3, globalvar@toc@l
  lwz 5, 0(3)
When the final instruction is a load or store with an immediate offset
of zero, the offset from the add-immediate can replace the zero,
provided the relocation information is carried along:
  addis 3, 2, globalvar@toc@ha
  lwz 5, globalvar@toc@l(3)
Since the addi can in general have multiple uses, we need to delete the
instruction only when the last use is removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175697 91177308-0d34-0410-b5e6-96231b3b80d8
(2xi32) (truncate ((2xi64) bitcast (buildvector i32 a, i32 x, i32 b, i32 y)))
can be folded into a (2xi32) (buildvector i32 a, i32 b).
Such a DAG would cause unnecessary vdup instructions followed by vmovn
instructions.
We generate this code on ARM NEON for a setcc olt, 2xf64, 2xf64, for example
in the vectorized version of the code below:
  double A[N];
  double B[N];
  void test_double_compare_to_double() {
    int i;
    for (i = 0; i < N; i++)
      A[i] = (double)(A[i] < B[i]);
  }
radar://13191881
Fixes bug 15283.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175670 91177308-0d34-0410-b5e6-96231b3b80d8
This handles the cases where the 6-bit splat element is odd, converting
to a three-instruction sequence to add or subtract two splats. With this
fix, the XFAIL in test/CodeGen/PowerPC/vec_constants.ll is removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175663 91177308-0d34-0410-b5e6-96231b3b80d8
- When extloading from a vector with non-byte-addressable elements, e.g.
<4 x i1>, the current logic breaks. Extend it to handle the case where the
element type is not byte-addressable by loading all bytes and then
bit-extracting/packing each element.
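As a scalar illustration of the bit-extraction step (bit order within the
byte is an assumption, and this is not the legalizer code), each i1 element
can be recovered from the single loaded byte like this:

  #include <cstdint>

  // Illustrative only: all four i1 elements of a <4 x i1> share one loaded
  // byte; element Idx is recovered with a shift and a mask.
  static bool extractPackedI1(uint8_t LoadedByte, unsigned Idx) {
    return (LoadedByte >> Idx) & 1u;
  }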
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175642 91177308-0d34-0410-b5e6-96231b3b80d8
The PPC backend doesn't handle these correctly. This patch uses logic
similar to that in the X86 and ARM backends to track these arguments
properly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175635 91177308-0d34-0410-b5e6-96231b3b80d8
Add the HexagonMCInst class, which adds various Hexagon VLIW annotations.
In addition, this class includes some APIs related to the
constant extenders.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175634 91177308-0d34-0410-b5e6-96231b3b80d8
During lowering of a BUILD_VECTOR, we look for opportunities to use a
vector splat. When the splatted value fits in 5 signed bits, a single
splat does the job. When it doesn't fit in 5 bits but does fit in 6,
and is an even value, we can splat half the value and add the result
to itself.
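A tiny worked sketch of the even 6-bit case (the example value, helper name,
and element width are assumptions, not taken from the patch):

  #include <array>
  #include <cstdint>

  // Illustrative only: 24 does not fit in 5 signed bits, but 24/2 = 12 does,
  // so splat 12 and add the splat to itself to get 24 in every lane.
  static std::array<int32_t, 4> splatEvenValue(int32_t Val) {
    std::array<int32_t, 4> Lanes;
    Lanes.fill(Val / 2);        // the single 5-bit splat
    for (int32_t &L : Lanes)
      L += L;                   // add the splat to itself
    return Lanes;
  }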
This last optimization hasn't been working recently because of improved
constant folding. To circumvent this, create a pseudo VADD_SPLAT that
can be expanded during instruction selection.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175632 91177308-0d34-0410-b5e6-96231b3b80d8
sext <4 x i1> to <4 x i64>
sext <4 x i8> to <4 x i64>
sext <4 x i16> to <4 x i64>
I'm running a combine on SIGN_EXTEND_IN_REG that reverts SEXT patterns:
(sext_in_reg (v4i64 anyext (v4i32 x )), ExtraVT) -> (v4i64 sext (v4i32 sext_in_reg (v4i32 x , ExtraVT)))
The sext_in_reg (v4i32 x) may be lowered to shl+sar operations.
The "sar" does not exist for 64-bit elements, so lowering sext_in_reg (v4i64 x) has no vector solution.
I also added the cost of these operations to the AVX cost table.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175619 91177308-0d34-0410-b5e6-96231b3b80d8
It is possible that the frame pointer is not found in the
callee-saved info, so FramePtrSpillFI may be incorrect
if we don't check the result of hasFP(MF).
In addition, if we enable the stack coloring algorithm, there
will be an assertion to ensure the slot is live. But in
the test case, %var1 is not live in the prologue of the
function, and we would get the assertion failure.
Note: There is similar code in ARMFrameLowering.cpp.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175616 91177308-0d34-0410-b5e6-96231b3b80d8
SltCCRxRy16, SltiCCRxImmX16, SltiuCCRxImmX16, SltuCCRxRy16
$T8 shows up as register $24 when emitted from C++ code, so we had
to change some tests that already existed for this functionality.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175593 91177308-0d34-0410-b5e6-96231b3b80d8
MS-style inline assembly.
This is a follow-on to r175334. Forcing an FP to be emitted doesn't ensure it
will be used. Therefore, force the base pointer as well. We now treat MS
inline assembly in the same way we treat functions with dynamic stack
realignment and VLAs. This guarantees the BP will be used to reference
parameters and locals.
rdar://13218191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175576 91177308-0d34-0410-b5e6-96231b3b80d8
When creating an allocation hint for a register pair, make sure the hint
for the physical register reference is still in the allocation order.
rdar://13240556
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175541 91177308-0d34-0410-b5e6-96231b3b80d8
Due to the execution order of doFinalization functions, the GC information was
deleted before AsmPrinter::doFinalization was executed. Thus,
GCMetadataPrinter::finishAssembly was never called.
The patch fixes that by moving the code of GCInfoDeleter::doFinalization to
Printer::doFinalization.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175528 91177308-0d34-0410-b5e6-96231b3b80d8
A vectorized sitofp on doubles will get scalarized to a sequence of an
extract_element of <2 x i32>, a bitcast to f32 and a sitofp.
Due to the extract_element and the bitcast, we will unnecessarily generate
moves between scalar and vector registers.
The patch fixes this by using a COPY_TO_REGCLASS and an EXTRACT_SUBREG to extract
the element from the vector instead.
radar://13191881
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175520 91177308-0d34-0410-b5e6-96231b3b80d8
If the memcpy has an odd length with an alignment of 2, this would incorrectly
assert on the last 1-byte copy.
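For illustration (the helper name and shape are assumptions, not the backend
code): a 2-aligned copy of odd length necessarily ends with a single 1-byte
copy after the 2-byte chunks.

  #include <cstddef>
  #include <cstdint>
  #include <cstring>

  // Illustrative only: 2-byte chunks followed by the odd 1-byte tail, the
  // copy that used to hit the assertion.
  static void copyWithAlign2(uint8_t *Dst, const uint8_t *Src, size_t Len) {
    size_t I = 0;
    for (; I + 2 <= Len; I += 2)
      std::memcpy(Dst + I, Src + I, 2);
    if (I < Len)
      Dst[I] = Src[I];
  }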
rdar://13202135
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175459 91177308-0d34-0410-b5e6-96231b3b80d8
This expansion will be moved to expandISelPseudos as soon as I can figure
out how to do that. There are other instructions which use
ExpandFEXT_T8I816_ins, and as soon as I have finished expanding them all
I will delete the macro asm string text so it can no longer be used
in the future.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175413 91177308-0d34-0410-b5e6-96231b3b80d8
If the frame pointer is omitted, and any stack changes occur in the inline
assembly, e.g.: "pusha", then any C local variable or C argument references
will be incorrect.
I pass no judgement on anyone who would do such a thing. ;)
rdar://13218191
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175334 91177308-0d34-0410-b5e6-96231b3b80d8
When we're recalculating the feature set of the subtarget, we need to have the
ivars in their initial state.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175320 91177308-0d34-0410-b5e6-96231b3b80d8
- add sincos to the runtime library if the target triple environment is GNU
- added canCombineSinCosLibcall(), which checks that sincos is in the RTL and,
if the environment is GNU, that unsafe fp-math is enabled (required to
preserve errno); see the sketch after this list
- extended sincos-opt lit test
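A sketch of the check in the second bullet (the boolean parameters stand in
for the real target queries; this is not the actual canCombineSinCosLibcall
signature):

  // Illustrative only: combining sin+cos into sincos changes errno behaviour
  // on GNU environments, so it is only allowed there under unsafe fp-math.
  static bool canCombineSinCos(bool SincosInRTL, bool IsGNUEnv,
                               bool UnsafeFPMath) {
    if (!SincosInRTL)
      return false;
    return !IsGNUEnv || UnsafeFPMath;
  }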
Reviewed by: Hal Finkel
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175283 91177308-0d34-0410-b5e6-96231b3b80d8
This implements the review suggestion to simplify the AArch64 backend. If we
later discover that we *really* need the extra complexity of the
ConstantIslands pass for performance reasons it can be resurrected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@175258 91177308-0d34-0410-b5e6-96231b3b80d8