The R0 register can now be allocated because instructions
that cannot use R0 as a GPR have been appropriately marked.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178123 91177308-0d34-0410-b5e6-96231b3b80d8
The register parameter in these instructions becomes the base register in an
r+i ld instruction (and, thus, cannot be r0).
This is not yet testable because we don't yet allocate r0 (and even then any
test would be very fragile).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178121 91177308-0d34-0410-b5e6-96231b3b80d8
Either operand of these pseudo instructions can be transformed into the first
operand of an isel instruction (and this operand cannot be r0).
This is not yet testable because we don't yet allocate r0 (and even when we do,
any test would be very fragile).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178119 91177308-0d34-0410-b5e6-96231b3b80d8
Like the addi/addis instructions themselves, these pseudo instructions also
cannot have r0 as their register parameter (because it will be interpreted as
the value 0).
This is not yet testable because we don't yet allocate r0 (and even when we do,
any regression test would be very fragile because it would depend on the
register allocator heuristics).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178118 91177308-0d34-0410-b5e6-96231b3b80d8
Some implementation detail in the forgotten past required the link
register to be placed in the GPRC and G8RC register classes. This is
just wrong on the face of it, and causes several extra intersection
register classes to be generated. I found this was having evil
effects on instruction scheduling, by causing the wrong register class
to be consulted for register pressure decisions.
No code generation changes are expected, other than some minor changes
in instruction order. Seven tests in the test bucket required minor
tweaks to adjust to the new normal.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178114 91177308-0d34-0410-b5e6-96231b3b80d8
This is just the basic groundwork for supporting DW_TAG_imported_module but I
wanted to commit this before pushing support further into Clang or LLVM so that
this rather churny change is isolated from the rest of the work. The major
churn here is obviously adding another field (within the common DIScope prefix)
to all DIScopes (files, classes, namespaces, lexical scopes, etc). This should
be the last big churny change needed for DW_TAG_imported_module/using directive
support/PR14606.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178099 91177308-0d34-0410-b5e6-96231b3b80d8
As Bill Schmidt pointed out to me, only on Darwin do we need to spill/restore
VRSAVE in the SjLj code. For non-Darwin, don't spill/restore VRSAVE (and I've
added some asserts to make sure that we're not).
As it turns out, we're not currently handling the Darwin case correctly (I've
added a FIXME in the test case). I've tried adding various implied register
definitions/uses to force the spill without success, so I'll need to address
this later.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178096 91177308-0d34-0410-b5e6-96231b3b80d8
A new parameter to ExecuteAndWait indicates if execution failed. ExecuteAndWait returns -1 upon an execution failure, but
checking the return value isn't sufficient because the wait command may
return -1 as well. This new parameter is to be used by the clang driver in a
subsequent commit.
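As a caller-side sketch of the intended pattern (the function shape below is an assumption for illustration only; see llvm/Support for the real declaration):

    #include <string>

    // Assumed, simplified shape of the updated interface (illustration only).
    int ExecuteAndWait(const char *Program, const char **Args,
                       std::string *ErrMsg, bool *ExecutionFailed);

    void runTool(const char *Program, const char **Args) {
      std::string ErrMsg;
      bool ExecutionFailed = false;
      int RC = ExecuteAndWait(Program, Args, &ErrMsg, &ExecutionFailed);
      if (ExecutionFailed) {
        // The process never ran (e.g. the binary was not found). RC is -1
        // here, but -1 can also come from the wait path, hence the flag.
      } else if (RC != 0) {
        // The program did run and returned a non-zero (possibly -1) status.
      }
    }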
Part of rdar://13362359
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178087 91177308-0d34-0410-b5e6-96231b3b80d8
If we compile a single source program, the `.gcda' file will be generated in the
directory where the program was executed. This isn't desirable, because that
location may be unpredictable (the program could call `chdir', for instance).
Instead, we will output the `.gcda' file in the same place we output the `.gcno'
file. I.e., the directory where the executable was generated. This matches GCC's
behavior.
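A small sketch of the naming rule only (hypothetical helper, not the in-tree code): the `.gcda' path is derived from the `.gcno' path, so both land next to the compiled output.

    #include <string>

    // Emit foo.gcda alongside foo.gcno instead of in whatever directory the
    // instrumented process happens to be in when it exits.
    std::string gcdaPathFor(const std::string &GCNOPath) {
      std::string::size_type Ext = GCNOPath.rfind(".gcno");
      if (Ext == std::string::npos)
        return GCNOPath + ".gcda";              // fallback: append the suffix
      return GCNOPath.substr(0, Ext) + ".gcda"; // foo.gcno -> foo.gcda
    }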
<rdar://problem/13061072> & PR11809
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178084 91177308-0d34-0410-b5e6-96231b3b80d8
All Intel CPUs since Yonah look a lot alike, at least at the granularity
of the scheduling models. We can add more accurate models for
processors that aren't Sandy Bridge if required. Haswell will probably
need its own.
The Atom processor and anything based on NetBurst are completely
different. So are the non-Intel chips.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178080 91177308-0d34-0410-b5e6-96231b3b80d8
This will be used to factor out some uses of magic number operand offsets
inside Clang where these fields were updated in an effort to resolve forward
declarations/circular references.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178078 91177308-0d34-0410-b5e6-96231b3b80d8
As suggested by Bill Schmidt (in reviewing r178067), use the real register
number bit lengths (which is self-documenting, and prevents using illegal
numbers), and set only the relevant bits in HWEncoding (which defaults to 0).
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178077 91177308-0d34-0410-b5e6-96231b3b80d8
As pointed out by Richard Sandiford, my recent updates to the register
scavenger broke targets that use custom spilling (because the new code assumed
that if there were no valid spill slots, then spilling would be impossible).
I don't have a test case, but it should be possible to create one for Thumb 1,
Mips 16, etc.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178073 91177308-0d34-0410-b5e6-96231b3b80d8
As pointed out by Jakob, we don't need to maintain a separate
register-numbering table. Instead we should let TableGen generate the table for
us from the information (already present) in PPCRegisterInfo.td.
TRI->getEncodingValue is now used to access register-encoding values.
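A minimal usage sketch (the wrapper name is hypothetical; getEncodingValue itself is the MCRegisterInfo accessor):

    #include "llvm/Target/TargetRegisterInfo.h"

    // Ask TRI for the hardware encoding generated from PPCRegisterInfo.td
    // instead of consulting a hand-maintained numbering table.
    static unsigned getRegisterEncoding(const llvm::TargetRegisterInfo *TRI,
                                        unsigned Reg) {
      return TRI->getEncodingValue(Reg);
    }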
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178067 91177308-0d34-0410-b5e6-96231b3b80d8
Now that the register scavenger can support multiple spill slots, and PEI can
use virtual-register-based scavenging for multiple simultaneous registers, we
can use a virtual register for the transfer register in the CR spilling code.
This should eliminate the last place (outside of the prologue/epilogue) where
we depend on the unconditional availability of the r0 register. We will soon be
able to allocate it (in a somewhat restricted sense) as a GPR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178060 91177308-0d34-0410-b5e6-96231b3b80d8
PPC's use of PEI's virtual-register-based scavenging functionality had
redefined the virtual registers (it was non-SSA). Now that PEI supports
dealing with instructions with multiple virtual registers, this can be
cleaned up to use multiple virtual registers and keep SSA form.
No functionality change intended.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178059 91177308-0d34-0410-b5e6-96231b3b80d8
The previous algorithm could not deal properly with scavenging multiple virtual
registers because it kept only one live virtual -> physical mapping (and
iterated through operands in order). Now we don't maintain a current mapping,
but rather use replaceRegWith to completely remove the virtual register as
soon as the mapping is established.
In order to allow the register scavenger to return a physical register killed
by an instruction for definition by that same instruction, we now call
RS->forward(I) prior to eliminating virtual registers defined in I. This
requires a minor update to forward to ignore virtual registers.
These new features will be tested in forthcoming commits.
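A simplified sketch of the new per-block loop (iterator bookkeeping, asserts and statistics omitted; BB, MRI, RS and SPAdj are the usual PEI locals and are assumed here, and in PEI these virtual registers are introduced as definitions):

    for (MachineBasicBlock::iterator I = BB->begin(); I != BB->end(); ++I) {
      // Process I first so that registers killed by I are available for a
      // definition in the same instruction; forward() now skips virtual regs.
      RS->forward(I);

      for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
        MachineOperand &MO = I->getOperand(i);
        if (!MO.isReg() || !TargetRegisterInfo::isVirtualRegister(MO.getReg()))
          continue;

        unsigned VReg = MO.getReg();
        const TargetRegisterClass *RC = MRI.getRegClass(VReg);
        unsigned ScratchReg = RS->scavengeRegister(RC, llvm::next(I), SPAdj);

        // No virtual -> physical map is kept; replaceRegWith rewrites every
        // reference to VReg at once, so later operands simply stop matching.
        MRI.replaceRegWith(VReg, ScratchReg);
      }
    }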
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178058 91177308-0d34-0410-b5e6-96231b3b80d8
Now all x86 instructions that have itinerary classes also have SchedRW
lists. This is required before the new scheduling models can be used.
There are still unannotated instructions remaining, but they don't have
itinerary classes either.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178051 91177308-0d34-0410-b5e6-96231b3b80d8
This is a compile-time optimization. Before the patch we would do two traversals
on each call to aliasGEP: one with a set size parameter and one with UnknownSize.
We can do better by first checking the result of the alias query with
UnknownSize.
Only if this one returns MayAlias do we query a second time using size and type.
This recovers about a 7% compile-time regression on spec/ammp.
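The shape of the change, as a standalone model (names are illustrative, not the BasicAA internals):

    #include <cstdint>

    enum AliasResult { NoAlias, MayAlias, PartialAlias, MustAlias };
    static const uint64_t UnknownSize = ~UINT64_C(0);

    // Stand-in for the expensive, size- and type-aware traversal.
    AliasResult aliasCheck(const void *V1, uint64_t V1Size,
                           const void *V2, uint64_t V2Size);

    AliasResult aliasGEPModel(const void *GEP, uint64_t GEPSize,
                              const void *V2, uint64_t V2Size) {
      // Phase 1: one traversal that ignores the access sizes.
      AliasResult R = aliasCheck(GEP, UnknownSize, V2, UnknownSize);
      if (R != MayAlias)
        return R;
      // Phase 2: only on MayAlias pay for a second traversal with size/type.
      return aliasCheck(GEP, GEPSize, V2, V2Size);
    }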
radar://12349960
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178045 91177308-0d34-0410-b5e6-96231b3b80d8
The OptimizeIntToFloatBitCast converts shift-truncate sequences
into extractelement operations. The computation of the element
index to be used in the resulting operation is currently only
correct for little-endian targets.
This commit fixes the element index computation to be correct
for big-endian targets as well. If the target byte order is
unknown, the optimization cannot be performed at all.
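A standalone sketch of the index computation (the helper name is illustrative):

    #include <cassert>

    // For (trunc (lshr (bitcast <N x Elt> %v to iBig), Shift)), the extracted
    // element is Shift/EltBits counting from the low end of the integer; on a
    // big-endian target element 0 sits at the high end, so mirror the index.
    unsigned extractedElementIndex(unsigned ShiftBits, unsigned EltBits,
                                   unsigned NumElts, bool IsBigEndian) {
      assert(EltBits != 0 && ShiftBits % EltBits == 0 &&
             "shift must be a multiple of the element size");
      unsigned Idx = ShiftBits / EltBits;   // little-endian element index
      return IsBigEndian ? NumElts - 1 - Idx : Idx;
    }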
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178031 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r177968. It is causing failures in a local build bot.
"fatal error: error in backend: Expected a variant SchedClass"
Original commit message:
Move the CortexA9 resources into the CortexA9 SchedModel namespace. Define
resource mappings under the CortexA9 SchedModel. Define resources and mappings
for the SwiftModel.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178028 91177308-0d34-0410-b5e6-96231b3b80d8
Not only fold immediates, but avoid unnecessary copies as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178024 91177308-0d34-0410-b5e6-96231b3b80d8
Just define the address as unknown instead of VReg_32.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178022 91177308-0d34-0410-b5e6-96231b3b80d8
They read from constant register space anyway.
v2: fix lit tests
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178020 91177308-0d34-0410-b5e6-96231b3b80d8
Just enable WQM when we see an LDS interpolation instruction.
Signed-off-by: Christian König <christian.koenig@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178019 91177308-0d34-0410-b5e6-96231b3b80d8
Restore the EXEC mask early, otherwise a copy might end up not being executed.
Candidate for the mesa stable branch.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Tested-by: Michel Dänzer <michel.daenzer@amd.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178018 91177308-0d34-0410-b5e6-96231b3b80d8
If PC or SP is the destination, the disassembler erroneously rejected the
encoding as invalid, despite the manual saying that both are fine.
This patch addresses the failure to decode encoding T4 of LDR (A8.8.62), which is
a post-indexed load where the offset 0xc is applied to SP after the load occurs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178017 91177308-0d34-0410-b5e6-96231b3b80d8
There remain a number of patterns that cannot (and should not)
be handled by the asm parser, in particular all the Pseudo patterns.
This commit marks those patterns as isCodeGenOnly.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178008 91177308-0d34-0410-b5e6-96231b3b80d8
MCTargetDesc/PPCMCCodeEmitter.cpp currently has code like:
  if (isSVR4ABI() && is64BitMode())
    Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                     (MCFixupKind)PPC::fixup_ppc_toc16));
  else
    Fixups.push_back(MCFixup::Create(0, MO.getExpr(),
                                     (MCFixupKind)PPC::fixup_ppc_lo16));
This is a problem for the asm parser, since it requires knowledge of
the ABI / 64-bit mode to be set up. However, more fundamentally,
at this point we shouldn't make such distinctions anyway; in an assembler
file, it always ought to be possible to e.g. generate TOC relocations even
when the main ABI is one that doesn't use TOC.
Fortunately, this is actually completely unnecessary; that code was added
to decide whether to generate TOC relocations, but that information is in
fact already encoded in the VariantKind of the underlying symbol.
This commit therefore merges those fixup types into one, and then decides
which relocation to use based on the VariantKind.
No changes in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178007 91177308-0d34-0410-b5e6-96231b3b80d8
As part of the sequence generated to implement long double -> int
conversions, we need to perform an FADD in round-to-zero mode. This is
problematical since the FPSCR is not at all modeled at the SelectionDAG
level, and thus there is a risk of getting floating point instructions
generated out of sequence with the instructions to modify FPSCR.
The current code handles this by somewhat "special" patterns that in part
have dummy operands, and/or duplicate existing instructions, making them
awkward to handle in the asm parser.
This commit changes this by leaving the "FADD in round-to-zero mode"
as an atomic operation at the SelectionDAG level, and only splitting it up into
real instructions at the MI level (via custom inserter). Since at *this*
level the FPSCR *is* modeled (via the "RM" hard register), much of the
"special" stuff can just go away, and the resulting patterns can be used by
the asm parser.
No significant change in generated code expected.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178006 91177308-0d34-0410-b5e6-96231b3b80d8
The LDrs pattern is a duplicate of LD, except that it accepts memory
addresses where the displacement is a symbolLo64. An operand type
"memrs" is defined for just that purpose.
However, this wouldn't be necessary if the default "memrix" operand
type were to simply accept 64-bit symbolic addresses directly.
The only problem with that is that it uses "symbolLo", which is
hardcoded to 32-bit.
To fix this, this commit changes "memri" and "memrix" to use new
operand types for the memory displacement, which allow iPTR
instead of i32. This will also make address parsing easier to
implement in the asm parser.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178005 91177308-0d34-0410-b5e6-96231b3b80d8
The ADDI/ADDI8 patterns are currently duplicated into ADDIL/ADDI8L,
which describe the same instruction, except that they accept a
symbolLo[64] operand instead of a s16imm[64] operand.
This duplication confuses the asm parser, and it is actually not needed,
since symbolLo[64] already accepts immediate operands anyway.
So this commit removes the duplicate patterns.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178004 91177308-0d34-0410-b5e6-96231b3b80d8
This commit changes the ISEL patterns to use a CCBITRC operand
instead of a "pred" operand. This matches the actual instruction
text more directly, and simplifies use of ISEL with the asm parser.
In addition, this change allows some simplification of handling
the "pred" operand, as this is now only used by BCC.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178003 91177308-0d34-0410-b5e6-96231b3b80d8
The BLR pattern cannot be recognized by the asm parser in its current form.
This complexity is due to an apparent attempt to enable conditional BLR
variants. However, none of those can ever be generated by current code;
the pattern is only ever created using the default "pred" operand.
To simplify the pattern and allow it to be recognized by the parser,
this commit removes those attempts at conditional BLR support.
When we later come back to actually add real conditional BLR, this
should probably be done via a fully generic conditional branch pattern.
No change in generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178002 91177308-0d34-0410-b5e6-96231b3b80d8
In PPCInstr64Bit.td, some branch patterns appear in a different sequence
than the corresponding 32-bit patterns in PPCInstrInfo.td.
To simplify future changes that affect both files, this commit rearranges
those patterns into a similar sequence.
No effect on generated code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@178001 91177308-0d34-0410-b5e6-96231b3b80d8