I think these were affected by an old change that stopped printing newlines in Value::dump() by default. This change simply makes the debug output readable.
NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251517 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch handles assembly and disassembly, but not yet codegen.
Additionally, it fixes a bug whereby SP and PC as shifted-reg operands
were treated as predictable in ARMv7 Thumb; and it enables the tests
for invalid and unpredictable instructions to run on both ARMv7 and ARMv8.
Reviewers: jmolloy, rengolin
Subscribers: aemerson, rengolin, llvm-commits
Differential Revision: http://reviews.llvm.org/D14141
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251516 91177308-0d34-0410-b5e6-96231b3b80d8
When checking whether an indirect global (a global of pointer type) is assigned only by allocation functions, first check whether the global is itself initialized. If it is, it is not assigned only by allocation functions.
This fixes PR25309. Thanks to David Majnemer for reducing the test case!
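A rough sketch of the added check in plain LLVM API terms (hypothetical helper name, not the actual GlobalsAA/GlobalOpt code):

  #include "llvm/IR/Constants.h"
  #include "llvm/IR/GlobalVariable.h"
  #include "llvm/Support/Casting.h"

  // Hypothetical helper: a global that already carries a non-null initializer
  // is itself initialized, so it cannot be assigned only by allocation
  // functions.
  static bool maybeAssignedOnlyByAllocs(const llvm::GlobalVariable &GV) {
    if (GV.hasInitializer() &&
        !llvm::isa<llvm::ConstantPointerNull>(GV.getInitializer()))
      return false;
    return true; // the real analysis would go on to inspect the stores
  }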
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251508 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch adds support for checking whether a loop has loop-invariant conditions that lead to loop exits. If so, we know that if such an exit path is taken, it is taken on the first loop iteration. If an induction variable used on that exit path has not yet been updated, it still holds its initial value passed in from the loop preheader, so we can rewrite the exit value with
that initial value. This helps remove phis created by LCSSA and enables other optimizations such as loop unswitch.
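A small source-level illustration of the idea (hypothetical example, not the test case):

  // If `flag` is loop-invariant, the early exit can only be taken on the
  // first iteration, so the induction variable `i` still holds its initial
  // value (0, from the preheader) on that exit path. The LCSSA phi for `i`
  // at the early exit can therefore be rewritten to 0.
  int example(bool flag, int n, int *a) {
    int i = 0;
    for (; i < n; ++i) {
      if (flag)
        break; // loop-invariant condition leading to a loop exit
      a[i] = i;
    }
    return i; // on the `break` path, i is known to be 0
  }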
Reviewers: sanjoy
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D13974
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251492 91177308-0d34-0410-b5e6-96231b3b80d8
Change InstrProfReaderIndex from a typedef into a wrapper
class with helper methods. This makes the indexed profile
reader code more readable. It also hides the implementation
details of the index format and makes it flexible enough to
support different (or more than one) formats in the future.
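The general shape of the change, as a hedged sketch with made-up member names, using a std::map as a stand-in for the real on-disk index:

  #include <cstdint>
  #include <map>
  #include <string>
  #include <utility>
  #include <vector>

  // Hypothetical illustration only: callers go through helper methods and
  // never see the concrete index type, so the underlying format can change
  // (or several formats can coexist) without touching the reader code.
  class InstrProfReaderIndexSketch {
    std::map<std::string, std::vector<uint64_t>> Index; // stand-in format
  public:
    bool haveFunction(const std::string &Name) const {
      return Index.count(Name) != 0;
    }
    void addFunction(const std::string &Name, std::vector<uint64_t> Counts) {
      Index[Name] = std::move(Counts);
    }
  };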
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251491 91177308-0d34-0410-b5e6-96231b3b80d8
This also lets us remove the versions of the functions that took a statically sized array, since we can now rely on ArrayRef's implicit conversions.
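A sketch of why the array overloads become redundant (hypothetical function, not the one touched by this commit):

  #include "llvm/ADT/ArrayRef.h"
  #include <vector>

  static int sum(llvm::ArrayRef<int> Vals) {
    int S = 0;
    for (int V : Vals)
      S += V;
    return S;
  }

  void caller() {
    int A[3] = {1, 2, 3};
    std::vector<int> V = {4, 5, 6};
    sum(A); // implicit conversion from a statically sized array
    sum(V); // implicit conversion from std::vector
  }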
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251490 91177308-0d34-0410-b5e6-96231b3b80d8
cntlz is the old POWER mnemonic. cntlzw is the PowerPC mnemonic.
This change fixes an issue with -no-integrated-as: the cntlz opcode is
not recognized by gas.
Alias the POWER mnemonic cntlz[.] to the PowerPC mnemonic cntlzw[.]. This is
done because the POWER cntlz mnemonic has been used by LLVM for a very long
time, and we need to make sure that assembly programs that use cntlz[.] do
not break with this change.
Change the PowerPC tests to reflect the instruction change from cntlz to cntlzw.
Add an assembly test to verify that cntlz[.] is encoded correctly.
Patch by Tom Rix!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251489 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Don't call `computeKnownBitsFromRangeMetadata` for extended loads --
this can cause a mismatch between the width of the !range metadata and
the width of the accumulating `KnownZero` APInt (and `KnownOne` in the
future). This isn't a problem now, but will be after a future change.
Note: this can be made more aggressive in the future.
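As an illustration of the width mismatch, using plain ADT types (hypothetical values; this is not the DAG combiner code):

  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/ConstantRange.h"

  void widthMismatchExample() {
    // !range metadata on an i8 load is expressed in the 8-bit memory type...
    llvm::ConstantRange Range(llvm::APInt(8, 1), llvm::APInt(8, 6));
    // ...while KnownZero/KnownOne track the i32 result of an extending load.
    llvm::APInt KnownZero(32, 0), KnownOne(32, 0);
    // Feeding the 8-bit range into 32-bit known-bit computation mixes widths.
    bool Mismatch = Range.getBitWidth() != KnownZero.getBitWidth(); // true
    (void)Mismatch;
    (void)KnownOne;
  }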
Reviewers: nlewycky
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D14107
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251486 91177308-0d34-0410-b5e6-96231b3b80d8
The singleton !range metadata gets simplified more aggressively after a
later change, so change the !range metadata to contain more than one
element.
While at it, turn some `; CHECK`s into `; CHECK-LABEL:`s.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251485 91177308-0d34-0410-b5e6-96231b3b80d8
was causing builder failures.
The bindings were originally added in r251472, and reverted in r251473 due to
the builder failures.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251482 91177308-0d34-0410-b5e6-96231b3b80d8
This allows the function to be easily reused and also simplifies the
code, since the keyword list now sits next to the keyword handling.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251478 91177308-0d34-0410-b5e6-96231b3b80d8
r248010 changed the -debug output to use short ids, but did not
similarly modify the graph printer. Change to be consistent, for ease of
cross-reference.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251465 91177308-0d34-0410-b5e6-96231b3b80d8
CatchReturnInst has side-effects: it runs a destructor. This destructor
could conceivably run forever/call exit/etc. and should not be removed.
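A source-level sketch of where such a destructor comes from (hypothetical C++; the exact destructor that runs depends on the frontend's EH lowering):

  // Returning from the catch handler (the catchret in the IR for this
  // function under Windows EH) ends the lifetime of the caught exception,
  // and its destructor -- arbitrary user code -- runs as part of that, so
  // the catchret cannot be treated as removable.
  struct Exn {
    ~Exn(); // could conceivably run forever, call exit(), etc.
  };

  void mayThrow();

  void handler() {
    try {
      mayThrow();
    } catch (Exn E) {
      // ~Exn runs for the caught object when this handler returns.
    }
  }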
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251461 91177308-0d34-0410-b5e6-96231b3b80d8
We used automated tools to update our IR to its current syntax in commit
21f77df7 (r247378). While it correctly updated the CHECK lines in our
compatibility tests, the IR should have remained untouched. This commit
fixes the syntax errors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251458 91177308-0d34-0410-b5e6-96231b3b80d8
Use 10 bits to represent calling convention IDs instead of 13, and
update the bitcode compatibility tests accordingly. We now error out in
the bitcode reader when we see bad calling convention IDs.
Thanks to rnk and dexonsmith for feedback!
Differential Revision: http://reviews.llvm.org/D13826
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251452 91177308-0d34-0410-b5e6-96231b3b80d8
AliasSetTracker does not need to convert the access mode to ModRefAccess if
the newly visited UnknownInst has only 'Ref' ModRefInfo with respect to the
existing pointers in the sets.
Patch by Andrew Zhogin!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251451 91177308-0d34-0410-b5e6-96231b3b80d8
This is a usage of the IR-level fast-math-flags now that they are propagated to SDNodes.
This was originally part of D8900.
Removing the global 'enable-unsafe-fp-math' checks will require auto-upgrade and
possibly other changes.
Differential Revision: http://reviews.llvm.org/D9708
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251450 91177308-0d34-0410-b5e6-96231b3b80d8
A PHI on a catchpad might be used by both edges out of the catchpad,
feeding back into a loop. In this case, just use the insertion point.
Anything more clever would require new basic blocks or PHI placement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251442 91177308-0d34-0410-b5e6-96231b3b80d8
This recommits r250719, which caused a failure in SPEC2000.gcc
because of an incorrect insertion point for the new, wider load.
Convert two halfword loads into a single 32-bit word load with bitfield extract
instructions. For example:
  ldrh w0, [x2]
  ldrh w1, [x2, #2]
becomes
  ldr  w0, [x2]
  ubfx w1, w0, #16, #16
  and  w0, w0, #0xffff
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251438 91177308-0d34-0410-b5e6-96231b3b80d8
When optimization is disabled, the edge weights stored in an MBB won't be used, so we don't have to store them. Currently this is done by adding successors with a default weight of 0, and if all successors have default weights, the weight list is empty. But an empty weight list doesn't necessarily mean optimization is disabled (as is stated several times in MachineBasicBlock.cpp): it may also mean that all successors simply have default weights.
We should discourage using default weights when adding successors, because it is very easy for users to forget to update the edge weights to correct values and to end up with the default ones instead (one exception is when the MBB has only one successor). In order to detect such usages, it is better to differentiate the use of default weights from the case where optimization is disabled.
In this patch, a new interface addSuccessorWithoutWeight(MBB*) is created for use when optimization is disabled. In this case, the MBB tries to maintain an empty weight list, but it cannot guarantee this, because many uses of addSuccessor() do not check whether optimization is disabled. It can guarantee, however, that if optimization is enabled, the weight list always has the same size as the successor list.
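A hedged sketch of the intent (simplified types and signatures, not the actual MachineBasicBlock API):

  #include <cassert>
  #include <cstdint>
  #include <vector>

  class MBBSketch {
    std::vector<MBBSketch *> Successors;
    std::vector<uint32_t> Weights; // stays empty when optimization is disabled
  public:
    // Optimization enabled: the caller supplies a real weight, keeping the
    // weight list the same size as the successor list.
    void addSuccessor(MBBSketch *Succ, uint32_t Weight) {
      Successors.push_back(Succ);
      Weights.push_back(Weight);
    }
    // Optimization disabled: make the "no weight tracked" case explicit
    // instead of silently passing a default weight.
    void addSuccessorWithoutWeight(MBBSketch *Succ) {
      assert(Weights.empty() && "mixing weighted and unweighted successors");
      Successors.push_back(Succ);
    }
  };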
Differential revision: http://reviews.llvm.org/D13963
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251429 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This change could be way off-piste; I'm looking for any feedback on whether it's an acceptable approach.
It never seems to be a problem to gobble up as many reduction values as can be found and then attempt to reduce the resulting tree. Some of the workloads I'm looking at have been aggressively unrolled by hand, and by selecting reduction widths that are not constrained by the vector register size, it becomes possible to vectorize them profitably. My test case shows such an unrolling, which SLP was not vectorizing (on either ARM or X86) before this patch, but does vectorize with it.
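For reference, the kind of hand-unrolled reduction meant here looks roughly like this (hypothetical source, not the actual test case):

  // Six reduction values feed one sum; the reduction width is not tied to
  // the vector register size, which is the case the patch lets SLP gobble
  // up and reduce.
  int sum6(const int *a) {
    return a[0] + a[1] + a[2] + a[3] + a[4] + a[5];
  }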
I measure no significant compile time impact of this change when combined with D13949 and D14063. There are also no significant performance regressions on ARM/AArch64 in SPEC or LNT.
The more principled approach I thought of was to generate several candidate trees and use the cost model to pick the cheapest one. That seemed like quite a big design change (the algorithms seem very much one-shot), and would likely be costly in compile time. This seemed to do the job at very little cost, but I'm worried I've misunderstood something!
Reviewers: nadav, jmolloy
Subscribers: mssimpso, llvm-commits, aemerson
Differential Revision: http://reviews.llvm.org/D14116
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251428 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Currently, when the SLP vectorizer considers whether a phi is part of a reduction, it dismisses phis whose incoming blocks are not the same as the block containing the phi. For the patterns I'm looking at, extending this rule to allow phis whose incoming block is the containing loop's latch lets me vectorize certain workloads.
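A rough source-level picture of the pattern (hypothetical code; the exact block layout depends on earlier passes):

  // With the `if` in the body, the add that updates `sum` ends up in the
  // block that rejoins the paths, i.e. the loop latch, so the reduction
  // phi's incoming value arrives from the latch rather than from the phi's
  // own block. The matcher now accepts that shape.
  int reduceWithLatch(const int *a, const int *b, int n, bool scale) {
    int sum = 0;
    for (int i = 0; i + 3 < n; i += 4) {
      int s = a[i] * b[i] + a[i + 1] * b[i + 1] +
              a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3];
      if (scale)
        s *= 2;
      sum += s;
    }
    return sum;
  }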
There is no significant compile-time impact, and combined with D13949, no performance improvement measured in ARM/AArch64 in any of SPEC2000, SPEC2006 or LNT.
Reviewers: jmolloy, mcrosier, nadav
Subscribers: mssimpso, nadav, aemerson, llvm-commits
Differential Revision: http://reviews.llvm.org/D14063
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251425 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Certain workloads, in particular sum-of-absdiff loops, can be vectorized using SLP if it can treat select instructions as reduction values.
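A sketch of the shape of such a workload (hypothetical source):

  // Each |a[i] - b[i]| lowers to a compare-and-select feeding the sum, so
  // the reduction values are selects rather than plain adds or multiplies.
  int sad4(const int *a, const int *b) {
    int d0 = a[0] - b[0], d1 = a[1] - b[1];
    int d2 = a[2] - b[2], d3 = a[3] - b[3];
    d0 = d0 > 0 ? d0 : -d0;
    d1 = d1 > 0 ? d1 : -d1;
    d2 = d2 > 0 ? d2 : -d2;
    d3 = d3 > 0 ? d3 : -d3;
    return d0 + d1 + d2 + d3;
  }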
The test case is a bit awkward. The AArch64 cost model needs some tuning to not be so pessimistic about selects. I've had to tweak the SLP threshold here.
Reviewers: jmolloy, mzolotukhin, spatel, nadav
Subscribers: nadav, mssimpso, aemerson, llvm-commits
Differential Revision: http://reviews.llvm.org/D13949
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@251424 91177308-0d34-0410-b5e6-96231b3b80d8