The register scavenger maintains a DistanceMap that maps MI pointers to their
distance from the top of the current MBB. The DistanceMap is built
incrementally in forward() and in bulk in findFirstUse(). It is used by
scavengeRegister() to determine which candidate register has the longest
unused interval.
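As a rough standalone sketch (mock types only, not the actual RegScavenger API), the idea looks like this:

#include <climits>
#include <map>
#include <vector>

struct Instr { unsigned UsedReg; };       // mock machine instruction

// Distance of the first use of Reg at or after CurDist; UINT_MAX if unused.
static unsigned findFirstUse(const std::vector<Instr> &MBB,
                             const std::map<const Instr*, unsigned> &DistanceMap,
                             unsigned CurDist, unsigned Reg) {
  unsigned Best = UINT_MAX;
  for (const Instr &MI : MBB) {
    unsigned Dist = DistanceMap.at(&MI);
    if (Dist >= CurDist && MI.UsedReg == Reg && Dist < Best)
      Best = Dist;
  }
  return Best;
}

Each instruction's distance is simply its index from the top of the block;
scavengeRegister() prefers the candidate whose findFirstUse() result is largest.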
Unfortunately the DistanceMap contents can become outdated. The first time
scavengeRegister() is called, the DistanceMap is filled to cover the MBB. If
instructions are then inserted in the MBB (as they always are following
scavengeRegister()), the recorded distances are too short. This causes bad
behaviour in the included test case where a register use /after/ the current
position is ignored because findFirstUse() thinks it is /before/ the current
position. A "using an undefined register" assertion follows promptly.
The fix is to build a fresh DistanceMap at the top of scavengeRegister(), and
discard it after use. This means that DistanceMap is no longer needed as a
RegScavenger member variable, and forward() doesn't need to update it.
The fix then discloses issue number two in the same test case: The candidate
search in scavengeRegister() finds a CSR that has been saved in the prologue,
but is currently unused. It would be both inefficient and wrong to spill such
a register in the emergency spill slot. In the present case, the emergency
slot restore is placed immediately before the normal epilogue restore, leading
to a "Redefining a live register" assertion.
Fix number two: When scavengeRegister() stumbles upon an unused register that
is overwritten later in the MBB, return that register early. It is important
to verify that the register is defined later in the MBB, otherwise it might be
an unspilled CSR.
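A standalone sketch of that check (hypothetical helper, simplified types):

#include <vector>

// Mock instruction: which register it uses and which it defines (0 = none).
struct Instr { unsigned Use, Def; };

// True if Reg's current value is dead from position Pos on: it is redefined
// before any further use, so it can be returned without an emergency spill.
static bool unusedAndRedefinedLater(const std::vector<Instr> &MBB,
                                    unsigned Pos, unsigned Reg) {
  for (unsigned i = Pos, e = MBB.size(); i != e; ++i) {
    if (MBB[i].Use == Reg)
      return false;          // current value is still needed
    if (MBB[i].Def == Reg)
      return true;           // overwritten before any use
  }
  return false;              // never redefined: possibly an unspilled CSR
}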
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78650 91177308-0d34-0410-b5e6-96231b3b80d8
the overloaded vector types allowed floating-point or integer vector elements.
Most of these operations actually depend on the element type, so bitcasting
was not an option.
If you include the vpadd intrinsics that I updated earlier, this gets rid
of 20 intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78646 91177308-0d34-0410-b5e6-96231b3b80d8
MERGE_VALUES nodes. Replacing the result values with the
operands in one MERGE_VALUES node may cause another
MERGE_VALUES node to be CSE'd with the first one, and bring
its uses along, so that the first one isn't dead, as this
code expects. Fix this by iterating until the node is
really dead. This fixes PR4699.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78619 91177308-0d34-0410-b5e6-96231b3b80d8
instead of syntactically as a string. This means that it keeps track of the
segment, section, flags, etc directly and asmprints them in the right format.
This also includes parsing and validation support for llvm-mc and
"attribute(section)", so we should now start getting errors about invalid
section attributes from the compiler instead of the assembler on darwin.
Still todo:
1) Uniquing of darwin mcsections
2) Move all the Darwin stuff out to MCSectionMachO.[cpp|h]
3) There are a few FIXMEs, for example, what is the syntax to get the
S_GB_ZEROFILL segment type?
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78547 91177308-0d34-0410-b5e6-96231b3b80d8
bytes for F2 0F 38 and propagate. Add a FIXME for a set
of possibilities which correspond to intrinsics already used.
New test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78508 91177308-0d34-0410-b5e6-96231b3b80d8
Blackfin supports and/or/xor on i32 but not on i16. Teach
DAGCombiner::SimplifyBinOpWithSameOpcodeHands to not produce illegal nodes
after legalize ops.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78497 91177308-0d34-0410-b5e6-96231b3b80d8
Verify that early clobber registers and their aliases are not used.
All changes to RegsAvailable are now done as a transaction so the order of
operands makes no difference.
The included test case is from PR4686. It has behaviour that was dependent on the order of operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78465 91177308-0d34-0410-b5e6-96231b3b80d8
- This doesn't actually improve the algorithm (it's still linear), but the
generated (match) code is now fairly compact and table driven. Still need a
generic string matcher.
- The table still needs to be compressed; this is quite simple to do and should
shrink it to under 16k.
- This also simplifies and restructures the code to make the match classes more
explicit, in anticipation of resolving ambiguities.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78461 91177308-0d34-0410-b5e6-96231b3b80d8
I can clean this up a bit more and do away with TheCondState and just use
the top element of TheCondStack if not empty. Also may tweak the code
around ParseConditionalAssemblyDirectives() to simplify the AsmParser code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78423 91177308-0d34-0410-b5e6-96231b3b80d8
- Still not very sane, but at least it's not 60k lines on X86. :)
- In terms of correctness, currently some things are hard wired for X86, and we
still don't properly resolve ambiguities (this is ignoring the instructions
we don't even match due to funny .td stuff or other corner cases).
The high level changes:
1. Represent tokens which are significant for matching explicitly as separate
operands. This uniformly handles not only the instruction mnemonic, but
also 'significant' syntax like the '*' in "call * ...".
2. Separate the matching of operands to an instruction from the construction of
the MCInst. In theory this can be done during matching, but since the number
of variations is small I think it makes sense to decompose the problems.
3. Improved a few of the mechanisms to at least successfully flatten / tokenize
the assembly strings for PowerPC and ARM.
4. The comment at the top of AsmMatcherEmitter.cpp explains the approach I'm
moving towards for handling ambiguous instructions. The high-order bit is to infer
a partial ordering of the operand classes (and force the user to specify one
if we can't) and use that to resolve ambiguities.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78378 91177308-0d34-0410-b5e6-96231b3b80d8
This patch takes pains to ensure all the PEI lowering code does the right thing when lowering frame indices, inserting code to manipulate stack pointers, etc. It also custom lowers dynamic stack allocation into pseudo instructions so we can insert the right instructions at scheduling time.
This fixes PR4659 and PR4682.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78361 91177308-0d34-0410-b5e6-96231b3b80d8
by aggressive chain operand optimization. UpdateNodeOperands
does not modify the node in place if it would result in
a node identical to an existing node.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78297 91177308-0d34-0410-b5e6-96231b3b80d8
and high-bits values in ways that weren't correct for integer
types wider than 64 bits. This fixes a miscompile in
PPMacroExpansion.cpp in clang on x86-64.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78295 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerReturn, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.
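Schematically (simplified stand-in types and signatures, not the real ones):

#include <vector>

struct SDValue {};                    // stand-in for the real DAG value type
struct CallInfo {};                   // stand-in for call lowering state

// Simplified picture of the three hooks; the real signatures also carry the
// calling convention, argument flags, debug location, and the SelectionDAG.
class TargetLoweringSketch {
public:
  virtual ~TargetLoweringSketch() {}
  virtual SDValue LowerFormalArguments(const std::vector<SDValue> &Ins) = 0;
  virtual SDValue LowerCall(const CallInfo &CI) = 0;
  virtual SDValue LowerReturn(const std::vector<SDValue> &Outs) = 0;
};

// A target overrides the hooks instead of custom-lowering the special nodes.
class MyTargetLowering : public TargetLoweringSketch {
  SDValue LowerFormalArguments(const std::vector<SDValue> &Ins) override { return SDValue(); }
  SDValue LowerCall(const CallInfo &CI) override { return SDValue(); }
  SDValue LowerReturn(const std::vector<SDValue> &Outs) override { return SDValue(); }
};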
This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between the target-independent portions and the target-dependent portion
in IsEligibleForTailCallOptimization.
This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78142 91177308-0d34-0410-b5e6-96231b3b80d8
When LowerExtract eliminates an EXTRACT_SUBREG with a kill flag, it moves the
kill flag to the place where the sub-register is killed. This can accidentally
overlap with the use of a sibling sub-register, and we have trouble.
In the test case we have this code:
Live Ins: %R0 %R1 %R2
%R2L<def> = EXTRACT_SUBREG %R2<kill>, 1
%R2H<def> = LOAD16fi <fi#-1>, 0, Mem:LD(2,4) [FixedStack-1 + 0]
%R1L<def> = EXTRACT_SUBREG %R1<kill>, 1
%R0L<def> = EXTRACT_SUBREG %R0<kill>, 1
%R0H<def> = ADD16 %R2H<kill>, %R2L<kill>, %AZ<imp-def>, %AN<imp-def>, %AC0<imp-def>, %V<imp-def>, %VS<imp-def>
subreg: CONVERTING: %R2L<def> = EXTRACT_SUBREG %R2<kill>, 1
subreg: eliminated!
subreg: killed here: %R0H<def> = ADD16 %R2H, %R2L, %R2<imp-use,kill>, %AZ<imp-def>, %AN<imp-def>, %AC0<imp-def>, %V<imp-def>, %VS<imp-def>
The kill flag on %R2 is moved to the last instruction, and the live range overlaps with the definition of %R2H:
*** Bad machine code: Redefining a live physical register ***
- function: f
- basic block: 0x18358c0 (#0)
- instruction: %R2H<def> = LOAD16fi <fi#-1>, 0, Mem:LD(2,4) [FixedStack-1 + 0]
Register R2H was defined but already live.
The fix is to replace EXTRACT_SUBREG with IMPLICIT_DEF instead of eliminating
it completely:
subreg: CONVERTING: %R2L<def> = EXTRACT_SUBREG %R2<kill>, 1
subreg: replace by: %R2L<def> = IMPLICIT_DEF %R2<kill>
Note that these IMPLICIT_DEF instructions survive to the asm output. It is
necessary to fix the stack-color-with-reg test case because of that.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78093 91177308-0d34-0410-b5e6-96231b3b80d8
killed by another operand.
There is probably a better fix. Either 1) scavenger can look at other operands, or
2) livevariables can be smarter about kill markers. Patches welcome.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78072 91177308-0d34-0410-b5e6-96231b3b80d8
few places in InstCombine to use it, to fix problems handling pointer
types. This fixes the recent llvm-gcc bootstrap error.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@78005 91177308-0d34-0410-b5e6-96231b3b80d8
When LowerSubregsInstructionPass::LowerInsert eliminates an INSERT_SUBREG
instruction because it is an identity copy, make sure that the same registers
are alive before and after the elimination.
When the super-register is marked <undef> this requires inserting an
IMPLICIT_DEF instruction to make sure the super register is live.
Fix a related bug where a kill flag on the inserted sub-register was not transferred properly.
Finally, clear the undef flag in MachineInstr::addRegisterKilled. Undef implies dead and kill implies live, so they can't both be valid.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77989 91177308-0d34-0410-b5e6-96231b3b80d8
This is not just a matter of passing in the target triple from the module;
currently backends are making decisions based on the build and host
architecture. The goal is to migrate to making these decisions based off of the
triple (in conjunction with the feature string). Thus most clients pass in the
target triple, or the host triple if that is empty.
This makes one important change to the behavior of the JIT and llc.
For the JIT, it was previously selecting the Target based on the host
(naturally), but it was setting the target machine features based on the triple
from the module. Now it is setting the target machine features based on the
triple of the host.
For llc, -march was previously only used to select the target; the target
machine features were initialized from the module's triple (which may have been
empty). Now the target triple is taken from the module, or the host's triple is
used if that is empty. Then the triple is adjusted to match -march.
The takeaway is that -march for llc is now used in conjunction with the host
triple to initialize the subtarget. If users want more deterministic behavior
from llc, they should use -mtriple, or set the triple in the input module.
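For example (hypothetical invocations):

llc -march=x86 foo.bc                   # target from -march; subtarget features from the host triple
llc -mtriple=i386-apple-darwin9 foo.bc  # deterministic: target and features from the explicit triple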
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77946 91177308-0d34-0410-b5e6-96231b3b80d8
__builtin_bfin_ones does the same as ctpop, so it can be implemented in the front-end.
__builtin_bfin_loadbytes loads from an unaligned pointer with the disalignexcpt instruction. It does the same as loading from a pointer with the low bits masked. It is better if the front-end creates a masked load. We can always instruction-select the masked load to disalignexcpt+load.
We keep csync/ssync/idle. These intrinsics represent instructions that need workarounds for some silicon revisions. We may even want to convert inline assembler to intrinsics to enable the workarounds.
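For illustration, the masked-load form the front-end would emit might look like this (a plain C/C++ sketch, not Blackfin-specific code):

#include <stdint.h>

// Load a 32-bit word from the aligned address containing p, i.e. from p with
// the low two bits masked off.  Blackfin can select this to disalignexcpt +
// load; other targets just see an ordinary aligned load.
int32_t load_bytes(const void *p) {
  uintptr_t addr = (uintptr_t)p & ~(uintptr_t)3;
  return *(const int32_t *)addr;
}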
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77917 91177308-0d34-0410-b5e6-96231b3b80d8
Allow imp-def and imp-use of anything in the scavenger asserts, just like the machine code verifier.
Allow redefinition of a sub-register of a live register.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77904 91177308-0d34-0410-b5e6-96231b3b80d8
to:
.quad X
even on a 32-bit system, where X is not 64-bits. There isn't much that
we can do here, so we just print:
.quad ((X) & 4294967295)
instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77818 91177308-0d34-0410-b5e6-96231b3b80d8
myself because I'm getting tired of seeing the red buildbots, which have
been red since 5:30PM PDT last night.
Proposed supplement to developer policy: committers should make sure to
be around to watch for buildbot failures after committing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77785 91177308-0d34-0410-b5e6-96231b3b80d8
instructions for calls since BL and BLX are always 32 bits long and BX is always
16 bits long.
Also, we should be using BLX to call external function stubs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77756 91177308-0d34-0410-b5e6-96231b3b80d8
- Operands which are just a label should be parsed as immediates, not memory
operands (from the assembler perspective).
- Match a few more flavors of immediates.
- Distinguish match functions for memory operands which don't take a segment
register.
- We match the .s for "hello world" now!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77745 91177308-0d34-0410-b5e6-96231b3b80d8
padding is disabled, tabs get replaced by spaces except in the case of
the first operand, where the tab is output to line up the operands after
the mnemonics.
Add some better comments and eliminate redundant code.
Fix some testcases to not assume tabs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77740 91177308-0d34-0410-b5e6-96231b3b80d8
- Uses MCAsmToken::getIdentifier which returns the (sub)string representing the
meaningful contents of a string or identifier token.
- Directives aren't done yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77739 91177308-0d34-0410-b5e6-96231b3b80d8
to ensure the instruction that follows a TBB (when the number of table entries
is odd) is 2-byte aligned.
Patch by Sandeep Patel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77705 91177308-0d34-0410-b5e6-96231b3b80d8
into the mergable section if it is one of our special cases. This could
obviously be improved, but this is the minimal fix and restores us to the
previous behavior.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77679 91177308-0d34-0410-b5e6-96231b3b80d8
- This is "experimental" code, I am feeling my way around and working out the
best way to do things (and learning tblgen in the process). Comments welcome,
but keep in mind this stuff will change radically.
- This is enough to match "subb" and friends, but not much else. The next step is to
automatically generate the matchers for individual operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77657 91177308-0d34-0410-b5e6-96231b3b80d8
When the return value is not used (i.e., we only care about the value in memory), x86 does not have to use xadd to implement these. Instead, it can use add, sub, inc, dec instructions with the "lock" prefix.
This is currently implemented using a bit of an instruction selection trick. The issue is that the target-independent pattern produces one output and a chain, and we want to map it into one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variant. The DAG combiner can then transform the node before it gets to target node selection.
Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-lock versions. We need a way to add target-specific information to target nodes and have this information carried over to machine instructions. The asm printer (or JIT) can use this information to add the "lock" prefix.
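For example, using the GCC-style builtins (a sketch; the exact instructions depend on the target and on this optimization being enabled):

void bump(volatile int *counter) {
  // The result is ignored, so the value never needs to come back in a
  // register; a "lock add"/"lock inc" suffices instead of "lock xadd".
  __sync_fetch_and_add(counter, 1);
}

int bump_and_read(volatile int *counter) {
  // The result is used, so the old value must be returned: "lock xadd".
  return __sync_fetch_and_add(counter, 1) + 1;
}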
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77582 91177308-0d34-0410-b5e6-96231b3b80d8
due to x86 encoding restrictions. This is currently off by default
because it may cause code quality regressions. This is for PR4572.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77565 91177308-0d34-0410-b5e6-96231b3b80d8
wide vectors. Likewise, change VSTn intrinsics to take separate arguments
for each vector in a multi-vector struct. Adjust tests accordingly.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77468 91177308-0d34-0410-b5e6-96231b3b80d8
- This change also makes it possible to switch between ARM / Thumb on a
per-function basis.
- Fixed the thumb2 routine which expands reg + arbitrary immediate. It was
using ARM so_imm logic.
- Use movw and movt to do reg + imm when profitable.
- Other code clean ups and minor optimizations.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77300 91177308-0d34-0410-b5e6-96231b3b80d8
for now. Make the section switching directives more consistent
by not including \n and including \t for them all.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77107 91177308-0d34-0410-b5e6-96231b3b80d8
and make it more aggressive, we now put:
const int G2 __attribute__((weak)) = 42;
into the text (readonly) segment like gcc; previously we put
it into the data (readwrite) segment.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77104 91177308-0d34-0410-b5e6-96231b3b80d8
the step value as unsigned, the start value and the addrec
itself still need to be treated as signed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77078 91177308-0d34-0410-b5e6-96231b3b80d8
on darwin with ".cstring" instead of ".section __TEXT,__cstring". They
are the same and the former is better. Remove this because this is no longer
magic pixie dust in the frontend.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77055 91177308-0d34-0410-b5e6-96231b3b80d8
Before:
adr r12, #LJTI3_0_0
ldr pc, [r12, +r0, lsl #2]
LJTI3_0_0:
.long LBB3_24
.long LBB3_30
.long LBB3_31
.long LBB3_32
After:
adr r12, #LJTI3_0_0
add pc, r12, +r0, lsl #2
LJTI3_0_0:
b.w LBB3_24
b.w LBB3_30
b.w LBB3_31
b.w LBB3_32
This has several advantages.
1. This will make it easier to optimize this to a TBB / TBH instruction +
(smaller) table.
2. This eliminates the need for the ugly asm printer hack to force the address
into thumb addresses (bit 0 is one).
3. Same codegen for pic and non-pic.
4. This eliminates the need to align the table so the constant pool island pass
won't have to over-estimate the size.
Based on my calculation, the latter is probably slightly faster as well since
ldr pc with shifter address is very slow. That is, it should be a win as long
as the HW implementation can do a reasonable job of branch predicting the second
branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77024 91177308-0d34-0410-b5e6-96231b3b80d8
There's still a strict-aliasing violation here, but I don't feel like
dealing with that right now...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@77005 91177308-0d34-0410-b5e6-96231b3b80d8
also apply to vectors. This allows us to compile this:
#include <emmintrin.h>
__m128i a(__m128 a, __m128 b) { return a==a & b==b; }
__m128i b(__m128 a, __m128 b) { return a!=a | b!=b; }
to:
_a:
cmpordps %xmm1, %xmm0
ret
_b:
cmpunordps %xmm1, %xmm0
ret
with clang instead of to a ton of horrible code.
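For reference, the same ordered/unordered masks can be written directly with the SSE intrinsics that these instructions implement:

#include <xmmintrin.h>

// a==a && b==b is true exactly when neither operand is NaN, i.e. "ordered".
__m128 ordered(__m128 a, __m128 b)   { return _mm_cmpord_ps(a, b); }
// a!=a || b!=b is true when either operand is NaN, i.e. "unordered".
__m128 unordered(__m128 a, __m128 b) { return _mm_cmpunord_ps(a, b); }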
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76863 91177308-0d34-0410-b5e6-96231b3b80d8
with negative tests: this test wasn't checking what it thought it was
because it was grepping .bc, not .ll.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76861 91177308-0d34-0410-b5e6-96231b3b80d8
dumping ground of various SSE4.1 tests, since filecheck can reasonably
handle them all in one file. Generalize it to check x86-64 stuff as
well since it has a different ABI (a convenient way to test both the
reg and mem forms of these instructions).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76848 91177308-0d34-0410-b5e6-96231b3b80d8
(x pred y) with more thorough code that does more complete canonicalization
before resorting to range checks. This helps it find more cases where
the canonicalized expressions match.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76671 91177308-0d34-0410-b5e6-96231b3b80d8
as 32-bit code by default, and if gcc defaults to 64-bit code then ocamlc
requires a -cc "gcc -arch i386" option. We were hardcoding -cc g++
and throwing away any other compiler options that were determined when
ocamlc was configured and built.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@76658 91177308-0d34-0410-b5e6-96231b3b80d8