Commit Graph

95290 Commits

Daniel Sanders
91c40d80de [mips][msa] Added insve
llvm-svn: 188777
2013-08-20 09:22:54 +00:00
Richard Sandiford
6a0b1638b4 Fix test typo and add usual "br %r14" test
llvm-svn: 188775
2013-08-20 09:14:46 +00:00
Richard Sandiford
fcd54a3b89 Fix overly pessimistic shortcut in post-RA MachineLICM
Post-RA LICM keeps three sets of registers: PhysRegDefs, PhysRegClobbers
and TermRegs.  When it sees a definition of R it adds all aliases of R
to the corresponding set, so that when it needs to test for membership
it only needs to test a single register, rather than worrying about
aliases there too.  E.g. the final candidate loop just has:

    unsigned Def = Candidates[i].Def;
    if (!PhysRegClobbers.test(Def) && ...) {

to test whether register Def is multiply defined.

However, there was also a shortcut in ProcessMI to make sure we didn't
add candidates if we already knew that they would fail the final test.
This shortcut was more pessimistic than the final one because it
checked whether _any alias_ of the defined register was multiply defined.
This is too conservative for targets that define register pairs.
E.g. on z, R0 and R1 are sometimes used as a pair, so there is a
128-bit register that aliases both R0 and R1.  If a loop used
R0 and R1 independently, and the definition of R0 came first,
we would be able to hoist the R0 assignment (because that used
the final test quoted above) but not the R1 assignment (because
that meant we had two definitions of the paired R0/R1 register
and would fail the shortcut in ProcessMI).

This patch just uses the same check for the ProcessMI shortcut as
we use in the final candidate loop.
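
To make the difference concrete, here is a small standalone sketch using
simplified, made-up types (RegSet, AliasTable); it is not the actual
MachineLICM code, just an illustration of the old alias-based shortcut
versus the single-register test now shared with the final candidate loop:

    #include <bitset>
    #include <vector>

    constexpr unsigned NumRegs = 64;
    using RegSet = std::bitset<NumRegs>;
    // AliasTable[R] lists every register that aliases R (e.g. on z the
    // 128-bit pair register aliases both R0 and R1).
    using AliasTable = std::vector<std::vector<unsigned>>;

    // Old ProcessMI shortcut: reject a candidate if *any alias* of Def is
    // marked as multiply defined -- too conservative for paired registers.
    bool rejectCandidateOld(unsigned Def, const RegSet &PhysRegClobbers,
                            const AliasTable &Aliases) {
      if (PhysRegClobbers.test(Def))
        return true;
      for (unsigned A : Aliases[Def])
        if (PhysRegClobbers.test(A))
          return true;
      return false;
    }

    // New shortcut: the same single-register test the final candidate loop
    // uses; aliases were already folded in when PhysRegClobbers was built.
    bool rejectCandidateNew(unsigned Def, const RegSet &PhysRegClobbers) {
      return PhysRegClobbers.test(Def);
    }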

llvm-svn: 188774
2013-08-20 09:11:13 +00:00
Tim Northover
cec1079024 ARM: implement some simple f64 materializations.
Previously we used a const-pool load for virtually all 64-bit floating-point values.
Actually, we can get quite a few common values (including 0.0, 1.0) via "vmov"
instructions of one stripe or another.

llvm-svn: 188773
2013-08-20 08:57:11 +00:00
Michael Gottesman
7bcc2da0c0 [stackprotector] Small cleanup.
llvm-svn: 188772
2013-08-20 08:56:28 +00:00
Michael Gottesman
2bc9b68fb8 [stackprotector] Small bit of computation hoisting.
llvm-svn: 188771
2013-08-20 08:56:26 +00:00
Michael Gottesman
1bad711e58 [stackprotector] Added significantly longer comment to FindPotentialTailCall to make clear its relationship to llvm::isInTailCallPosition.
llvm-svn: 188770
2013-08-20 08:56:23 +00:00
Michael Gottesman
708165a1b3 Removed trailing whitespace.
llvm-svn: 188769
2013-08-20 08:46:16 +00:00
Michael Gottesman
780a56c825 [stackprotector] Removed stale TODO.
llvm-svn: 188768
2013-08-20 08:46:13 +00:00
Daniel Sanders
15341e9a12 [mips][msa] Added and.v, bmnz.v, bmz.v, bsel.v, nor.v, or.v, xor.v
llvm-svn: 188767
2013-08-20 08:38:21 +00:00
Michael Gottesman
ba9e36102b [stackprotector] Added support for emitting the llvm intrinsic stack protector check.
rdar://13935163

llvm-svn: 188766
2013-08-20 08:36:53 +00:00
Michael Gottesman
d2ede9d0a0 [stackprotector] Refactor out the end of isInTailCallPosition into the function returnTypeIsEligibleForTailCall.
This allows me to use returnTypeIsEligibleForTailCall in the stack protector pass.

rdar://13935163

llvm-svn: 188765
2013-08-20 08:36:50 +00:00
Michael Gottesman
c760b9c3d7 Remove unused variables that crept in.
llvm-svn: 188761
2013-08-20 07:17:27 +00:00
Michael Gottesman
9da49efd0a Teach SelectionDAG how to handle the stackprotectorcheck intrinsic.
Previously, generation of stack protectors was done exclusively in the
pre-SelectionDAG Codegen LLVM IR Pass "Stack Protector". This necessitated
splitting basic blocks at the IR level to create the success/failure basic
blocks in the tail of the basic block in question. As a result of this,
calls that would have qualified for the sibling call optimization were no
longer eligible for it, since said calls were no longer right in the
"tail position" (i.e. the immediate predecessor of a ReturnInst).

Then it was noticed that since the sibling call optimization causes the
callee to reuse the caller's stack, if we could delay the generation of
the stack protector check until later in CodeGen, after the sibling call
decision was made, we would get both the tail call optimization and the
stack protector check!

A few goals in solving this problem were:

  1. Preserve the architecture independence of stack protector generation.

  2. Preserve the normal IR level stack protector check for platforms like
     OpenBSD for which we support platform specific stack protector
     generation.

The main problem that guided the present solution is that one cannot
solve this problem in an architecture-independent manner at the IR level
alone. This is because:

  1. The decision on whether or not to perform a sibling call on certain
     platforms (for instance i386) requires lower level information
     related to available registers that cannot be known at the IR level.

  2. Even if the previous point were not true, the decision on whether to
     perform a tail call is done in LowerCallTo in SelectionDAG, which
     occurs after the Stack Protector Pass. As a result, one would need to
     put the relevant callinst into the stack protector check success
     basic block (where the return inst is placed) and then move it back
     later at SelectionDAG/MI time before the stack protector check if the
     tail call optimization failed. The MI level option was nixed
     immediately since it would require platform specific pattern
     matching. The SelectionDAG level option was nixed because
     SelectionDAG only processes one IR level basic block at a time,
     implying one could not create a DAG Combine to move the callinst.

To get around this problem a few things were realized:

  1. While one cannot handle multiple IR level basic blocks at the
     SelectionDAG level, one can generate multiple machine basic blocks
     for one IR level basic block. This is how we handle bit tests and
     switches.

  2. At the MI level, tail calls are represented via a special return
     MIInst called "tcreturn". Thus if we know the basic block in which we
     wish to insert the stack protector check, we get the correct behavior
     by always inserting the stack protector check right before the return
     statement. This is a "magical transformation" since no matter where
     the stack protector check intrinsic is, we always insert the stack
     protector check code at the end of the BB.

Given the aforementioned constraints, the following solution was devised:

  1. On platforms that do not support SelectionDAG stack protector check
     generation, allow for the normal IR level stack protector check
     generation to continue.

  2. On platforms that do support SelectionDAG stack protector check
     generation:

    a. Use the IR level stack protector pass to decide whether a stack
       protector is required and which BB to insert the stack protector
       check in, reusing the logic already therein. If we wish to generate
       a stack protector check in a basic block, we place a special IR
       intrinsic called llvm.stackprotectorcheck right before the BB's
       ReturnInst or, if there is a CallInst that could potentially be
       sibling call optimized, before that call.

    b. Then when a BB with said intrinsic is processed, we codegen the BB
       normally via SelectBasicBlock. In said process, when we visit the
       stack protector check, we do not actually emit anything into the
       BB. Instead, we just initialize the stack protector descriptor
       class (which involves stashing information/creating the success
       mbb and the failure mbb if we have not created one for this
       function yet) and export the guard variable that we are going to
       compare.

    c. After we finish selecting the basic block, in FinishBasicBlock, if
       the StackProtectorDescriptor attached to the SelectionDAGBuilder is
       initialized, we first find a splice point in the parent basic block
       before the terminator and then splice the terminator of said basic
       block into the success basic block. Then we code-gen a new tail for
       the parent basic block consisting of the two loads, the comparison,
       and finally two branches to the success/failure basic blocks. We
       conclude by code-gening the failure basic block if we have not
       code-gened it already (all stack protector checks we generate in
       the same function use the same failure basic block). A simplified
       sketch of this final step follows below.
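
A minimal standalone sketch of step (c) above, using an invented ToyMBB
type and pseudo-instruction strings rather than the real SelectionDAGISel
code, to show the terminator splice and the new compare-and-branch tail:

    #include <cassert>
    #include <string>
    #include <vector>

    // Toy model of a machine basic block: a list of instruction strings,
    // where the last entry is the block terminator.
    struct ToyMBB {
      std::string Name;
      std::vector<std::string> Insts;
    };

    // Illustrative version of step (c): move the parent block's terminator
    // into the success block, then give the parent a new tail that loads
    // the two guard values, compares them, and branches to success/failure.
    void emitStackProtectorTail(ToyMBB &Parent, ToyMBB &Success,
                                ToyMBB &Failure) {
      assert(!Parent.Insts.empty() && "expected a terminator to splice");
      // Splice the terminator (e.g. the return/tcreturn) into the success block.
      Success.Insts.push_back(Parent.Insts.back());
      Parent.Insts.pop_back();
      // New tail: load the stack guard and the stored canary, compare,
      // then branch on the result.
      Parent.Insts.push_back("load  %guard, <__stack_chk_guard>");
      Parent.Insts.push_back("load  %canary, <stack slot>");
      Parent.Insts.push_back("cmp   %guard, %canary");
      Parent.Insts.push_back("beq   " + Success.Name);
      Parent.Insts.push_back("b     " + Failure.Name);
      // The failure block is shared by every check in the function and
      // typically just calls __stack_chk_fail.
      if (Failure.Insts.empty())
        Failure.Insts.push_back("call  __stack_chk_fail");
    }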

llvm-svn: 188755
2013-08-20 07:00:16 +00:00
Craig Topper
3bda7ba32d Fix formatting. No functional change.
llvm-svn: 188746
2013-08-20 05:23:59 +00:00
Craig Topper
0c2625083b Add AVX-512 and related features to the CPUID detection code.
llvm-svn: 188745
2013-08-20 05:22:42 +00:00
Craig Topper
94afe57f94 Move AVX and non-AVX replication inside a couple multiclasses to avoid repeating each instruction for both individually.
llvm-svn: 188743
2013-08-20 04:24:14 +00:00
Craig Topper
77fb9fd06a Add an error check for a typo I accidentally made in a td file that caused an assert to fire.
llvm-svn: 188742
2013-08-20 04:22:09 +00:00
Bill Schmidt
178a1ca418 [PowerPC] More refactoring prior to real PPC emitPrologue/Epilogue changes.
(Patch committed on behalf of Mark Minich, whose log entry follows.)

This is a continuation of the refactorings performed in svn rev 188573
(see that rev's comments for more detail).

This is my stage 2 refactoring: I combined the emitPrologue() &
emitEpilogue() PPC32 & PPC64 code into a single flow, simplifying a
lot of the code since in essence the PPC32 & PPC64 code generation
logic is the same, only the instruction forms are different (in most
cases). This simplification is necessary because my functional changes
(yet to come) add significant complexity, and without the
simplification of my stage 2 refactoring, the overall complexity of
both emitPrologue() & emitEpilogue() would have become almost
intractable for most mortal programmers (like me).

This submission was intended to be a pure refactoring (no functional
changes whatsoever). However, in the process of combining the PPC32 &
PPC64 flows, I spotted a difference that I believe is a bug (see svn
rev 186478 line 863, or svn rev 188573 line 888): This line appears to
be restoring the BP with the original FP content, not the original BP
content. When I merged the 32-bit and 64-bit code, I used the
corresponding code from the 64-bit flow, which I believe uses the
correct offset (BPOffset) for this operation.

llvm-svn: 188741
2013-08-20 03:12:23 +00:00
Andrew Kaylor
745a59aca5 Marking MCJIT PIC tests as XFAIL on AArch64
llvm-svn: 188740
2013-08-20 01:50:50 +00:00
Venkatraman Govindaraju
87eaa9ee17 [Sparc] Use HWEncoding instead of unused Num field in Sparc register definitions. Also, correct the definitions of RETL and RET instructions.
llvm-svn: 188738
2013-08-20 01:26:14 +00:00
Andrew Kaylor
17dcfe14bc Fixing XPASSes among MCJIT PIC tests on i686
llvm-svn: 188736
2013-08-20 00:37:33 +00:00
Andrew Kaylor
3653cb94e4 Second attempt to mark Large/PIC MCJIT test as XFAIL for PowerPC64
llvm-svn: 188735
2013-08-20 00:22:03 +00:00
Andrew Kaylor
bb483e15f1 Marking two MCJIT PIC tests as XFAIL on Darwin
llvm-svn: 188734
2013-08-20 00:14:50 +00:00
Andrew Kaylor
895491fcf0 Trying again with PIC tests for MCJIT
llvm-svn: 188730
2013-08-19 23:52:53 +00:00
Hal Finkel
8f395a803a Add a llvm.copysign intrinsic
This adds an llvm.copysign intrinsic; we already have LibFunc recognition for
copysign (which is turned into the FCOPYSIGN SDAG node). In order to
autovectorize calls to copysign in the loop vectorizer, we need a corresponding
intrinsic as well.

In addition to the expected changes to the language reference, the loop
vectorizer, BasicTTI, and the SDAG builder (the intrinsic is transformed into
an FCOPYSIGN node, just like the function call), this also adds FCOPYSIGN to a
few lists in LegalizeVector{Ops,Types} so that vector copysigns can be
expanded.

In TargetLoweringBase::initActions, I've made the default action for FCOPYSIGN
be Expand for vector types. This seems correct for all in-tree targets, and I
think it is the right thing to do because, previously, there was no way to
generate vector-valued FCOPYSIGN nodes (and most targets don't specify an
action for vector-typed FCOPYSIGN).
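
As a usage sketch only (assuming LLVM's C++ IRBuilder and Intrinsic APIs;
treat the exact signatures as illustrative), a pass could emit a call to the
new intrinsic for f64 like this, and equally well with a vector type:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    // Emit a call to llvm.copysign.f64.  Passing a vector type instead is
    // exactly why the intrinsic (plus the Expand default for vector
    // FCOPYSIGN) is needed by the loop vectorizer.
    static Value *emitCopySignF64(IRBuilder<> &B, Module *M,
                                  Value *Mag, Value *Sign) {
      Function *Decl =
          Intrinsic::getDeclaration(M, Intrinsic::copysign, B.getDoubleTy());
      return B.CreateCall(Decl, {Mag, Sign}, "copysign");
    }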

llvm-svn: 188728
2013-08-19 23:35:46 +00:00
Hal Finkel
4bb40e7c8d Don't form PPC CTR-based loops around a copysignl call
copysign/copysignf never become function calls (because the SDAG expansion code
does not lower to the corresponding function call, but rather directly
implements the associated logic), but copysignl is almost always lowered into a
call to the requested libm function (and, thus, might clobber CTR).

llvm-svn: 188727
2013-08-19 23:35:24 +00:00
Andrew Kaylor
baeae94794 Adding PIC support for ELF on x86_64 platforms
llvm-svn: 188726
2013-08-19 23:27:43 +00:00
Peter Collingbourne
5c5e108012 Introduce non-const overloads for GlobalAlias::{get,resolve}AliasedGlobal.
llvm-svn: 188725
2013-08-19 23:13:33 +00:00
Jakub Staszak
b4a62fef02 Use pop_back_val() instead of both back() and pop_back().
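
For illustration only (hypothetical worklist, not the code touched here),
the idiom replaces a back()/pop_back() pair with a single call:

    #include "llvm/ADT/SmallVector.h"

    // pop_back_val() reads and removes the last element in one call,
    // instead of:  int Item = Worklist.back(); Worklist.pop_back();
    static void drain(llvm::SmallVectorImpl<int> &Worklist) {
      while (!Worklist.empty()) {
        int Item = Worklist.pop_back_val();
        (void)Item; // ... process Item ...
      }
    }
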
llvm-svn: 188723
2013-08-19 22:47:55 +00:00
Matt Arsenault
77bbadbfcb Teach InstCombine visitGetElementPtr about address spaces
llvm-svn: 188721
2013-08-19 22:17:40 +00:00
Matt Arsenault
2c55fe773b Cleanup visitGetElementPtr to make address space change easier
llvm-svn: 188720
2013-08-19 22:17:34 +00:00
Matt Arsenault
14a3f7be8d commonPointerCast cleanups to make address space change easier
llvm-svn: 188719
2013-08-19 22:17:18 +00:00
Jakub Staszak
ef4f3f31d3 Make sure that pop_back_val() result is used.
llvm-svn: 188717
2013-08-19 22:12:00 +00:00
Andrew Kaylor
37ddd6bf63 Reverting r188709 until I can figure out the proper way to XFAIL it.
llvm-svn: 188715
2013-08-19 22:05:07 +00:00
Matt Arsenault
1bfa3222e1 Fix assert with GEP ptr vector indexing structs
Also fix a case where it calculated the wrong value: the struct index
is not a ConstantInt, so it was being interpreted as an array
index.

llvm-svn: 188713
2013-08-19 21:43:16 +00:00
Eric Christopher
2cc884bd3b Use less verbose code and update comments.
llvm-svn: 188711
2013-08-19 21:41:38 +00:00
Matt Arsenault
22098dc46b Revert non-test parts of r188507
Re-add the inboundsless tests I didn't add originally

llvm-svn: 188710
2013-08-19 21:40:31 +00:00
Andrew Kaylor
9fab6e549f Adding tests for PIC with MCJIT
llvm-svn: 188709
2013-08-19 21:08:35 +00:00
Eric Christopher
fe60d7311c Turn on pubnames by default on linux.
Until gdb supports the new accelerator tables, we should add the
pubnames section so that gdb_index can be generated from gold
at link time. On Darwin we already emit the accelerator tables
and so don't need to worry about pubnames.

llvm-svn: 188708
2013-08-19 21:07:38 +00:00
Reid Kleckner
9ea33f4b8c Suppress an annoying CMake warning in ChooseMSVCCRT.cmake
Warning was:
  Argument not separated from preceding token by whitespace.

llvm-svn: 188701
2013-08-19 20:25:26 +00:00
Paul Redmond
404ef5af36 Improve the widening of integral binary vector operations
- split WidenVecRes_Binary into WidenVecRes_Binary and WidenVecRes_BinaryCanTrap
  - WidenVecRes_BinaryCanTrap preserves the original behaviour for operations
    that can trap
  - WidenVecRes_Binary simply widens the operation and improves codegen for
    3-element vectors by allowing widening and promotion on x86 (matches the
    behaviour of unary and ternary operation widening)
- use WidenVecRes_Binary for operations on integers (sketched below).
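
A conceptual stand-alone sketch (plain std::vector stand-in, not the real
type legalizer) of why widening is safe for non-trapping operations:

    #include <cstddef>
    #include <vector>

    // Widen a 3-element integer add to, say, 4 lanes: the padding lane's
    // result is never read, and an integer add cannot trap, so operating
    // on it is harmless.  Trapping ops (e.g. integer divide) must keep the
    // old element-by-element path (WidenVecRes_BinaryCanTrap).
    std::vector<int> widenedAdd(std::vector<int> A, std::vector<int> B,
                                std::size_t WidenedLen) {
      A.resize(WidenedLen);
      B.resize(WidenedLen);
      std::vector<int> R(WidenedLen);
      for (std::size_t I = 0; I != WidenedLen; ++I)
        R[I] = A[I] + B[I];
      return R;
    }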

Reviewed by: nrotem

llvm-svn: 188699
2013-08-19 20:01:35 +00:00
Andrew Kaylor
e94cdad560 Adding comments to document RuntimeDyld relocation handling
llvm-svn: 188697
2013-08-19 19:38:06 +00:00
Akira Hatanaka
5d76bb1499 [mips] Fix instruction definitions that were incorrectly marked as code-gen-only.
llvm-svn: 188690
2013-08-19 19:08:03 +00:00
Jakub Staszak
5838af6924 Add definition of __warn_unused_result__ attribute. It will be used in
further commits.

llvm-svn: 188689
2013-08-19 19:02:33 +00:00
Peter Collingbourne
aa48e92de9 Introduce SpecialCaseList::isIn overload for GlobalAliases.
Differential Revision: http://llvm-reviews.chandlerc.com/D1437

llvm-svn: 188688
2013-08-19 19:00:35 +00:00
Mihai Popa
db3c6274ef Thumb2 add immediate alias for SP
The Thumb2 add immediate is in fact defined for SP. The manual is misleading,
as it points to a different section for add immediate with SP; however, the
encoding is the same as for add immediate with register, only with the SP
operand hard-coded. As such, add immediate with SP and add immediate with
register can safely be treated as the same instruction.

All the patch does is adjust a register constraint on an instruction alias.

llvm-svn: 188676
2013-08-19 15:02:25 +00:00
Elena Demikhovsky
f1afd2e4db AVX-512: added arithmetic and logical operations.
ADD, SUB, MUL for integer and FP types; OR, AND, XOR.
Added an embedded broadcast form for these instructions.

llvm-svn: 188673
2013-08-19 13:26:14 +00:00
Richard Sandiford
5e32ef0acd [SystemZ] Add negative integer absolute (load negative)
For now this matches the equivalent of (neg (abs ...)), which did hit a few
times in projects/test-suite.  We should probably also match cases where
absolute-like selects are used with reversed arguments.
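
In scalar form, the matched pattern is just the negated absolute value; a
small illustration (not backend code; the mnemonic in the comment is only
an example):

    #include <cstdint>

    // The matched pattern, in scalar form: -(abs x), so the result is
    // always <= 0.  On SystemZ the whole expression can become a single
    // "load negative" instruction (e.g. LNGR for 64-bit GPRs).
    int64_t negAbs(int64_t X) {
      int64_t Abs = X < 0 ? -X : X;
      return -Abs;
    }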

llvm-svn: 188671
2013-08-19 12:56:58 +00:00
Richard Sandiford
aee0958460 [SystemZ] Add integer absolute (load positive)
llvm-svn: 188670
2013-08-19 12:48:54 +00:00