Before this patch, a -fprofile-instr-generate compile would fail when the
integrated assembler was not used and the file contained any static
functions (the -S output was also invalid).
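A minimal reproducer sketch (the file name, function names, and exact
driver invocation here are hypothetical):
// repro.cpp -- any file-local (static) function triggers the failure,
// e.g. when built with:
//   clang -fprofile-instr-generate -no-integrated-as -S repro.cpp
static int helper(int x) { return x + 1; }
int entry(int x) { return helper(x); }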
This patch fixes the issue. With this change, the index format version is
bumped up by 1; backward compatibility is preserved.
Differential Revision: http://reviews.llvm.org/D15243
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255365 91177308-0d34-0410-b5e6-96231b3b80d8
computeRegisterLiveness() was broken in that it reported dead for a
register even if a subregister was alive. I assume this was because the
results of analyzePhysRegs() were hard to understand with respect to
subregisters.
This commit changes the results of analyzePhysRegs() (= struct
PhysRegInfo) to be clearly understandable, and also renames the fields to
avoid silent breakage of third-party code (and to improve the grammar).
It fixes both users of computeRegisterLiveness() in llvm by re-enabling
the call and removing the workarounds for the bug.
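For illustration, a rough sketch of a caller using the cleaned-up query
(variable names are hypothetical; the LQR_* values follow the in-tree
MachineBasicBlock interface):
// Ask whether Reg is live just before MI; with this change, LQR_Dead is
// only returned when no subregister of Reg is alive either.
MachineBasicBlock::LivenessQueryResult LQR =
    MBB->computeRegisterLiveness(TRI, Reg, MI);
if (LQR == MachineBasicBlock::LQR_Dead) {
  // Safe to clobber Reg: neither it nor any subregister is live here.
}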
This fixes http://llvm.org/PR24535 and http://llvm.org/PR25033
Differential Revision: http://reviews.llvm.org/D15320
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255362 91177308-0d34-0410-b5e6-96231b3b80d8
These are redundant pairs of nodes defined for
INSERT_VECTOR_ELEMENT/EXTRACT_VECTOR_ELEMENT.
insertelement/extractelement are slightly closer to the corresponding
C++ node names and have stricter type checking, so prefer them.
This updates targets to use these nodes only where it is trivial to do so.
AArch64, ARM, and Mips all hit various type errors on simple replacement,
so they will need further work to fix.
Example from AArch64:
def : Pat<(sext_inreg (vector_extract (v16i8 V128:$Rn), VectorIndexB:$idx), i8),
(i32 (SMOVvi8to32 V128:$Rn, VectorIndexB:$idx))>;
This pattern is trying to do sext_inreg i8, i8.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255359 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
ADJCALLSTACK{DOWN,UP} (aka CALLSEQ_{START,END}) MIs are supposed to use
and def the stack pointer. Since they do not, they are all eliminated by
DeadMachineInstructionElim, so they are gone by the time
PrologEpilogInserter/eliminateCallFramePseudo needs them.
This change fixes that, but since RegStackify will not stackify across
them (and it runs early, before PEI), it also changes LowerCall to emit
them only when the call frame size is > 0. That keeps the current code
working the same way and makes code handled by D15344 work the same way
as well. We can expand the condition beyond NumBytes > 0 in the future
if needed.
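An illustrative sketch of the LowerCall guard (simplified; the helper
signature follows the 2015-era SelectionDAG API, and only the START side
is shown):
// Skip the call-frame pseudos entirely for zero-sized call frames so
// RegStackify is not blocked when no stack adjustment is needed.
if (NumBytes > 0)
  Chain = DAG.getCALLSEQ_START(
      Chain, DAG.getIntPtrConstant(NumBytes, DL, /*isTarget=*/true), DL);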
Reviewers: sunfish, jfb
Subscribers: jfb, dschuff, llvm-commits
Differential Revision: http://reviews.llvm.org/D15459
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255356 91177308-0d34-0410-b5e6-96231b3b80d8
Revert "[DSE] Disable non-local DSE to see if the bots go green."
Revert "[DeadStoreElimination] Use range-based loops. NFC."
Revert "[DeadStoreElimination] Add support for non-local DSE."
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255354 91177308-0d34-0410-b5e6-96231b3b80d8
The access function has a short entry and a short exit; the
initialization block is only run the first time. To improve performance,
we want a short frame at the entry and exit.
We explicitly handle most of the CSRs via copies; only the CSRs that are
not handled via copies will be in CSR_SaveList. Frame lowering and
prologue/epilogue insertion will generate a short frame in the entry and
exit according to CSR_SaveList. The majority of the CSRs will be handled
by the register allocator, which will try to spill and reload them in
the initialization block.
We add CSRsViaCopy; it will be explicitly handled during lowering:
1> We first set FunctionLoweringInfo->SplitCSR if conditions are met (the
target supports it for the given calling convention and the function has
only return exits). We also call TLI->initializeSplitCSR to perform
initialization.
2> We call TLI->insertCopiesSplitCSR to insert copies from CSRsViaCopy to
virtual registers at the beginning of the entry block and copies from
virtual registers to CSRsViaCopy at the beginning of the exit blocks (see
the sketch below).
3> We also make sure the explicit copies will not be eliminated.
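A simplified sketch of step 2> (CSRsViaCopy stands in for however the
target exposes its copy-handled CSR list, and error handling is omitted):
// Copy each copy-handled CSR into a fresh virtual register on entry and
// copy it back before each return; the register allocator can then spill
// and reload the vreg wherever it is cheapest (e.g. the init block).
void MyTargetLowering::insertCopiesSplitCSR(
    MachineBasicBlock *Entry,
    const SmallVectorImpl<MachineBasicBlock *> &Exits) const {
  MachineFunction &MF = *Entry->getParent();
  const TargetInstrInfo *TII = MF.getSubtarget().getInstrInfo();
  const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
  MachineRegisterInfo &MRI = MF.getRegInfo();
  for (MCPhysReg CSR : CSRsViaCopy) { // hypothetical per-target list
    const TargetRegisterClass *RC = TRI->getMinimalPhysRegClass(CSR);
    unsigned VReg = MRI.createVirtualRegister(RC);
    Entry->addLiveIn(CSR);
    BuildMI(*Entry, Entry->begin(), DebugLoc(),
            TII->get(TargetOpcode::COPY), VReg).addReg(CSR);
    for (MachineBasicBlock *Exit : Exits)
      BuildMI(*Exit, Exit->getFirstTerminator(), DebugLoc(),
              TII->get(TargetOpcode::COPY), CSR).addReg(VReg);
  }
}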
rdar://problem/23557469
Differential Revision: http://reviews.llvm.org/D15340
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255353 91177308-0d34-0410-b5e6-96231b3b80d8
Quoting from the comment added to the code:
// Objective-C on i386 uses artificial absolute symbols to
// perform some link time checks. Those symbols have a fixed 0
// address that might conflict with real symbols in the object
// file. As I cannot see a way for absolute symbols to find
// their way into the debug information, let's just ignore those.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255350 91177308-0d34-0410-b5e6-96231b3b80d8
AsmWriterEmitter will generate a getRegisterName function with an alternate
register name index as its second argument if the target makes use of them. The
enum of these values is generated in RegisterInfoEmitter. The getRegisterName
generator would assume the namespace could always be found by reading index 1
of the list of AltNameIndices, but this will fail if this list is sorted such
that the NoRegAltName is at index 1. Because this list is sorted by record name
(in CodeGenTarget::ReadRegAltNameIndices), you only run into problems if your
MyTargetRegisterInfo.td defines a single RegAltNameIndex that sorts lexically
before NoRegAltName.
For example, if a target has something like
def AnAltNameIndex : RegAltNameIndex
and defines RegAltNameIndices for some registers, then, prior to this
change, AsmWriterEmitter would generate references to the unqualified
::AnAltNameIndex and ::NoRegAltName (i.e., with an empty target namespace).
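For illustration, a loose sketch of the accessor shape once the namespace
is picked up correctly (the MyTarget name and the string-table details
are hypothetical):
// Generated by AsmWriterEmitter; note the properly qualified enum values.
const char *MyTargetInstPrinter::getRegisterName(unsigned RegNo,
                                                 unsigned AltIdx) {
  switch (AltIdx) {
  default: llvm_unreachable("Invalid register alt name index!");
  case MyTarget::NoRegAltName:
    return NoRegAltNameStrings[RegNo - 1]; // hypothetical string table
  case MyTarget::AnAltNameIndex:
    return AnAltNameStrings[RegNo - 1];    // hypothetical string table
  }
}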
Patch by Alex Bradbury!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255344 91177308-0d34-0410-b5e6-96231b3b80d8
Mem2Reg shouldn't be optimizing a function that is marked
optnone. There is a test checking this that fails when mem2reg is
explicitly added to the standard pass pipeline.
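A hedged sketch of the kind of guard this implies (exactly where it lives
in the pass is an assumption):
// Leave optnone functions untouched.
if (F.hasFnAttribute(Attribute::OptimizeNone))
  return false;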
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255336 91177308-0d34-0410-b5e6-96231b3b80d8
MatchBSwap already has most of the functionality needed to match bit
reversals. If we switch it from looking at bytes to individual bits and
remove a few early exits, we can extend the main recursive function to
match any sequence of ORs, ANDs, and shifts that assemble a value from
different parts of another base value. Once we have this bit->bit
mapping, we can very simply detect whether it is appropriate for a bswap
or a bitreverse.
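For example, here is a byte-reversal idiom assembled purely from shifts,
ANDs, and ORs that such bit-level matching can recognize (a standalone
C++ illustration, not code from this patch):
#include <cstdint>

// Classic bit reversal of a byte: each step swaps progressively smaller
// groups of bits, yielding a complete bit->bit permutation.
uint8_t reverse8(uint8_t b) {
  b = (b & 0xF0) >> 4 | (b & 0x0F) << 4; // swap nibbles
  b = (b & 0xCC) >> 2 | (b & 0x33) << 2; // swap bit pairs
  b = (b & 0xAA) >> 1 | (b & 0x55) << 1; // swap adjacent bits
  return b;
}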
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255334 91177308-0d34-0410-b5e6-96231b3b80d8
Before this patch, each function's on-disk VP data was 'pointed' to by
the Value field of the per-function ProfileData structure, and the reader
relied on this field (relocated with the ValueDataDelta field) to read
the value data. However, this meant the Value field needed to be updated
at runtime before dumping, which created undesirable data races.
With this patch, reading VP data no longer depends on the Value field.
There is no format change. The ValueDataDelta header field becomes
obsolete but is kept for compatibility reasons (it will be removed the
next time a raw format change is needed).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255329 91177308-0d34-0410-b5e6-96231b3b80d8
Reduce LazyValueInfo memory usage.
Previously, LazyValueInfoCache inserted overdefined lattice values into
both ValueCache and OverDefinedCache. This wasn't necessary and was
causing LazyValueInfo to use an excessive amount of memory in some cases.
This patch changes LazyValueInfoCache to insert overdefined values only
into OverDefinedCache. The memory usage decreases by 70 to 75% when one
of the files in llvm is compiled.
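A hedged sketch of the new insertion logic (the container shapes here are
assumptions, not the verbatim code):
// Overdefined results now live only in OverDefinedCache instead of
// being mirrored into ValueCache as well.
if (Result.isOverdefined())
  OverDefinedCache.insert({BB, Val});
else
  ValueCache[Val][BB] = Result;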
rdar://problem/11388615
Differential revision: http://reviews.llvm.org/D15391
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255320 91177308-0d34-0410-b5e6-96231b3b80d8
Access to aligned globals gives us a chance to peephole optimize nonzero
offsets. If a struct is 4 byte aligned, then accesses to bytes 0-3 won't
overflow the available displacement. For example:
addis 3, 2, b4v@toc@ha
addi 4, 3, b4v@toc@l
lbz 5, b4v@toc@l(3) ; This is the result of the current peephole
lbz 6, 1(4) ; optimizer
lbz 7, 2(4)
lbz 8, 3(4)
If b4v is 4-byte aligned, we can skip using register 4 because we know
that b4v@toc@l+{1,2,3} won't overflow 32K, and instead generate:
addis 3, 2, b4v@toc@ha
lbz 4, b4v@toc@l(3)
lbz 5, b4v@toc@l+1(3)
lbz 6, b4v@toc@l+2(3)
lbz 7, b4v@toc@l+3(3)
This saves a register and an addition.
Larger alignments allow larger structures/arrays to be optimized.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255319 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, the conversion cost table had no entries for
integer-to-integer conversions on SSE2. This resulted in imprecise costs
for certain vectorized operations. This patch adds those entries for SSE2
and SSE4.1. The cost numbers are counted from the output of running llc
on the new test case in this patch.
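For reference, a rough sketch of the table shape these entries extend
(the opcodes, types, and cost values below are illustrative, not the
measured numbers from this patch):
// X86TTIImpl-style conversion cost table rows: {opcode, dst, src, cost}.
// This patch adds integer-to-integer rows like these for SSE2/SSE4.1.
static const TypeConversionCostTblEntry SSE2ConversionTbl[] = {
  { ISD::SIGN_EXTEND, MVT::v4i32, MVT::v4i16, 2 },
  { ISD::ZERO_EXTEND, MVT::v4i32, MVT::v4i16, 1 },
};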
Differential revision: http://reviews.llvm.org/D15132
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255315 91177308-0d34-0410-b5e6-96231b3b80d8
This was causing bad code generation and assembly that won't assemble, as
mixed Altivec and VSX code would end up with a VSX high register
assigned to an Altivec instruction, which won't work. Constraining the
register classes allows the optimization to proceed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255299 91177308-0d34-0410-b5e6-96231b3b80d8
This is the first step in supporting PGO data generation via CMake. I've marked the option as advanced and experimental until it is fleshed out further.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255298 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: As a follow-up to rL255054, I wasn't able to convince myself that the code did what I thought, so I wrote more tests.
Reviewers: reames
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15371
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255295 91177308-0d34-0410-b5e6-96231b3b80d8
PR25763 demonstrated an issue with D14683: vector comparison constant folding only works for i1 results, so we need to split off the sign-extension of the result to the required type. Luckily, this can be done with the existing type legalization code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255289 91177308-0d34-0410-b5e6-96231b3b80d8
Avoid O(N^2) behaviour when checking for bad bitcasts in `ConstantExpr`s
buried inside of aggregate initializers to `GlobalVariable`s. I've:
- centralized the "visited" set for recursing through `ConstantExpr`s so
that expressions are only visited once per Verifier run (sketched below),
- removed the duplicate logic for the stack visit, and
- avoided recursing into other `GlobalValue`s.
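A hedged sketch of the centralized walk (modeled on the Verifier's style;
the exact names and signature are assumptions):
// One visited set per Verifier run, so each constant expression is
// examined at most once no matter how many initializers reference it.
SmallPtrSet<const Constant *, 32> ConstantExprVisited;

void visitConstantExprsRecursively(const Constant *EntryC) {
  if (!ConstantExprVisited.insert(EntryC).second)
    return;
  SmallVector<const Constant *, 16> Stack = {EntryC};
  while (!Stack.empty()) {
    const Constant *C = Stack.pop_back_val();
    if (const auto *CE = dyn_cast<ConstantExpr>(C))
      visitConstantExpr(CE); // e.g. the bad-bitcast check
    for (const Value *Op : C->operands()) {
      const auto *OpC = dyn_cast<Constant>(Op);
      if (!OpC || isa<GlobalValue>(OpC)) // don't recurse into globals
        continue;
      if (ConstantExprVisited.insert(OpC).second)
        Stack.push_back(OpC);
    }
  }
}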
This recovers roughly a 100x time difference in clang compiles of a
particular input file (filled with large cross-referencing tables) that
depends on whether `-disable-llvm-verifier` is on. This slowdown was
caused by r187506, which introduced these checks.
Now, avoiding `-disable-llvm-verifier` only causes a 2x slowdown for
this case.
(Interestingly, dumping the textual IR for this file produces at least
50GB of global variable initializers; I don't know the total, since I
killed the dump...)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255269 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Adds support for in-memory round-trip of sample profile data, along with
basic round-trip unit tests. This will also make it easier to include
unit tests for future changes to sample profiling.
Reviewers: davidxl, dnovillo, silvas
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D15211
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255264 91177308-0d34-0410-b5e6-96231b3b80d8
This is a redo of r255137 (reverted at r255227), which was a redo of
r255124 (reverted at r255126), with a fixed check for a scalar source
type and an added test for the failure that caused the revert.
Original commit message:
Example:
bitcast (extractelement (bitcast <2 x float> %X to <2 x i32>), 1) to float
--->
extractelement <2 x float> %X, i32 1
This is part of fixing PR25543:
https://llvm.org/bugs/show_bug.cgi?id=25543
The next step will be to generalize this fold:
trunc ( lshr ( bitcast X) ) -> extractelement (X)
I.e., I'm hoping to replace the existing transform of:
bitcast ( trunc ( lshr ( bitcast X)))
added by:
http://reviews.llvm.org/rL112232
with two less-specific transforms to catch the case in the bug report.
Differential Revision: http://reviews.llvm.org/D14879
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255261 91177308-0d34-0410-b5e6-96231b3b80d8
Added some missing spaces between the module identifier and the start of
the debug message. Also added a ":" after the module identifier to make
this look a little nicer.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@255259 91177308-0d34-0410-b5e6-96231b3b80d8