Splitting a basic block will create a new ALU clause, so we need to make
sure we aren't moving uses of registers that are local to their
current clause into a new one.
I had a test case for this, but unfortunately unrelated schedule changes
invalidated it, and I wasn't able to come up with another one.
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195399 91177308-0d34-0410-b5e6-96231b3b80d8
The legalizer can now do this type of expansion for more
type combinations without loading and storing to and
from the stack.
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195398 91177308-0d34-0410-b5e6-96231b3b80d8
This patch is a rewrite of the original patch committed in r194542. Instead of
relying on the type legalizer to do the splitting for us, we now perform the
splitting ourselves in the DAG combiner. This is necessary for the case where
the vector mask is a legal type after promotion and still wouldn't require
splitting.
Patch by: Juergen Ributzka
NOTE: This is a candidate for the 3.4 branch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195397 91177308-0d34-0410-b5e6-96231b3b80d8
In Dwarf 3, attributes whose values are offsets into other debug sections
use the form DW_FORM_data4, whilst in Dwarf 4 and later they
use the form DW_FORM_sec_offset.
This patch updates the places where such attributes are generated to
use the appropriate form depending on the Dwarf version. The DIE entries
affected have the following tags:
DW_AT_stmt_list, DW_AT_ranges, DW_AT_location, DW_AT_GNU_pubnames,
DW_AT_GNU_pubtypes, DW_AT_GNU_addr_base, DW_AT_GNU_ranges_base
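A minimal sketch of the version-dependent form selection this implies
(illustrative only; the helper name is an assumption, not the actual
DwarfDebug code):

  #include "llvm/Support/Dwarf.h"

  // Section-offset attributes use DW_FORM_data4 before Dwarf 4 and
  // DW_FORM_sec_offset from Dwarf 4 onwards.
  unsigned getSectionOffsetForm(unsigned DwarfVersion) {
    return DwarfVersion >= 4 ? llvm::dwarf::DW_FORM_sec_offset
                             : llvm::dwarf::DW_FORM_data4;
  }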
It also adds a hidden command line option "--dwarf-version=<uint>"
to llc which allows the Dwarf version being generated to override
what is specified in the metadata; this makes it possible to update
existing tests to check the debugging information generated for both
Dwarf 4 (the default) and Dwarf 3 using the same metadata.
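For instance (an illustrative invocation, not taken from the commit), a test
could run something like "llc --dwarf-version=3 foo.ll -o -" to check the
Dwarf 3 output from metadata that would otherwise produce Dwarf 4.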
Patch (slightly modified) by Keith Walker!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195391 91177308-0d34-0410-b5e6-96231b3b80d8
AMD's K7, K8, K10, K12, K15 and K16 processor families are known to have SHLD/SHRD instructions with very poor latency. Optimization guides for these processors recommend using an alternative sequence of instructions. For these AMD processors, I disabled folding (or (x << c) | (y >> (64 - c))) when we are not optimizing for size.
It might be beneficial to disable this folding for some of Intel's processors as well. However, since I couldn't find specific recommendations regarding the use of SHLD/SHRD instructions on Intel's processors, I haven't disabled this peephole for Intel.
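For reference, the pattern in question corresponds to C++ code along these
lines (a hedged illustration; the function name and the precondition
0 < c < 64 are mine, not from the commit):

  #include <cstdint>

  // With x == y this is a 64-bit rotate left by c; the general two-operand
  // form is what can be matched to SHLD/SHRD on x86-64.
  uint64_t funnel_shift_left(uint64_t x, uint64_t y, unsigned c) {
    return (x << c) | (y >> (64 - c));   // requires 0 < c < 64
  }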
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195383 91177308-0d34-0410-b5e6-96231b3b80d8
The new command line flags are -dfsan-ignore-pointer-label-on-store and -dfsan-ignore-pointer-label-on-load. Their default value matches the current labelling scheme.
Additionally, the function __dfsan_union_load is marked as readonly.
Patch by Lorenzo Martignoni!
Differential Revision: http://llvm-reviews.chandlerc.com/D2187
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195382 91177308-0d34-0410-b5e6-96231b3b80d8
Mask == ~InvMask asserts if the widths of Mask and InvMask differ.
The combine isn't valid (with two exceptions, see below) if the widths differ,
so test for this before testing Mask == ~InvMask.
In the specific cases of Mask=~0 and InvMask=0, as well as Mask=0 and
InvMask=~0, the combine is still valid. However, there are more appropriate
combines that could be used in these cases such as folding x & 0 to 0, or
x & ~0 to x.
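A minimal sketch of the guard described above, assuming Mask and InvMask are
APInts (the surrounding combine code is elided):

  // APInt equality asserts on mismatched bit widths, so check the widths
  // first; the combine is only attempted when they agree.
  if (Mask.getBitWidth() == InvMask.getBitWidth() && Mask == ~InvMask) {
    // ... perform the combine ...
  }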
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195364 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
LegalizeSetCCCondCode can now legalize SETEQ and SETNE by returning the inverse
condition and requesting that the caller invert the result of the condition.
Callers of LegalizeSetCCCondCode must handle the inverted CC, and they do
so as follows:
SETCC, BR_CC:
Invert the result of the SETCC with SelectionDAG::getNOT()
SELECT_CC:
Swap the true/false operands.
This is necessary for MSA which lacks an integer SETNE instruction.
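A hedged sketch of the SETCC case (DAG, DL, VT and the operand names are
assumptions; only SelectionDAG::getNOT is named above):

  // Legalize with the inverted condition code, then invert the boolean
  // result if the legalizer asked us to.
  SDValue Cond = DAG.getSetCC(DL, VT, LHS, RHS, CC);
  if (NeedInvert)
    Cond = DAG.getNOT(DL, Cond, VT);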
Reviewers: resistor
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2229
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195355 91177308-0d34-0410-b5e6-96231b3b80d8
It broke, at least, the i686 target. It is reproducible with "llc -mtriple=i686-unknown".
FYI, it didn't appear to add either "-O0" or "-fast-isel".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195339 91177308-0d34-0410-b5e6-96231b3b80d8
it is completely optional, and sink the logic for handling the preserved
analysis set into it.
This allows us to implement the delegation logic desired in the proxy
module analysis for the function analysis manager where if the proxy
itself is preserved we assume the set of functions hasn't changed and we
do a fine grained invalidation by walking the functions in the module
and running the invalidate for them all at the manager level and letting
it try to invalidate any passes.
This in turn makes it blindingly obvious why we should hoist the
invalidate trait and have two collections of results. That allows
handling invalidation for almost all analyses without indirect calls, and
it allows short-circuiting when the preserved set is "all".
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195338 91177308-0d34-0410-b5e6-96231b3b80d8
clang optimizes tail calls, as in this example:
  int foo(void);
  int bar(void) {
    return foo();
  }
where the call is transformed to:
  calll   .L0$pb
.L0$pb:
  popl    %eax
.Ltmp0:
  addl    $_GLOBAL_OFFSET_TABLE_+(.Ltmp0-.L0$pb), %eax
  movl    foo@GOT(%eax), %eax
  popl    %ebp
  jmpl    *%eax                   # TAILCALL
However, the GOT references must all be resolved at dlopen() time, and so this
approach cannot be used with lazy dynamic linking (e.g. using RTLD_LAZY), which
usually populates the PLT with stubs that perform the actual resolving.
This patch changes X86TargetLowering::LowerCall() to skip tail call
optimization if the called function is a global or external symbol.
Patch by Dimitry Andric!
PR15086
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195318 91177308-0d34-0410-b5e6-96231b3b80d8
This proxy will fill the role of proxying invalidation events down IR
unit layers so that when a module changes we correctly invalidate
function analyses. Currently this is a very coarse solution -- any
change blows away the entire thing -- but the next step is to make
invalidation handling more nuanced so that we can propagate specific
amounts of invalidation from one layer to the next.
The test is extended to place a module pass between two function pass
managers, each of which has preserved function analyses which get
correctly invalidated by the module pass that might have changed which
functions are even in the module.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195304 91177308-0d34-0410-b5e6-96231b3b80d8
The instruction definitions incorrectly specified that popcntd and popcntw have
record forms; they do not. This mistake was causing invalid code generation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195272 91177308-0d34-0410-b5e6-96231b3b80d8
We now only allow breaking source order if the exit block frequency is
significantly higher than the other exit block. The actual bias is
currently under a flag so the best cut-off can be found; the flag
defaults to the old behavior. The idea is to get some benchmark coverage
over different values for the flag and pick the best one.
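As an illustrative reading of the bias (numbers invented for the example):
with a 20% threshold, an exit block with frequency 120 may be moved ahead of
an alternative exit with frequency 100, whereas one with frequency 110 may
not.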
When we require the new frequency to be at least 20% higher than the old
frequency I see a 5% speedup on zlib's deflate when compressing a random
file on x86_64/westmere. Hal reported a small speedup on Fhourstones on
a BG/Q and no regressions in the test suite.
The test case is the full long_match function from zlib's deflate. I was
reluctant to add it for previous tweaks to branch probabilities because
it's large and potentially fragile, but changed my mind since it's an
important use case and more likely to break with all the current work
going into the PGO infrastructure.
Differential Revision: http://llvm-reviews.chandlerc.com/D2202
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195265 91177308-0d34-0410-b5e6-96231b3b80d8
While not strictly necessary (the class has an invariant that
"setDebugInfoOffset" is called before "getDebugInfoOffset"; any
client that actually gets the default zero offset is buggy/broken), this
is consistent with the code as originally written, and the removal of the
initialization was an accident in r195166.
Suggested by Manman Ren.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195263 91177308-0d34-0410-b5e6-96231b3b80d8
Enhance the tests to actually require moves in C++11 mode, in addition
to testing the moved-from state. Further enhance the tests to cover
copy-assignment into a moved-from object and moving a large-state
object. (Note that we can't really test small-state vs. large-state, as
that isn't an observable property of the API.) This should finish
addressing review on r195239.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195261 91177308-0d34-0410-b5e6-96231b3b80d8
There's no test case for this commit. This is because it is doubtful that the
incorrect behaviour can actually trigger. When MSA is not enabled, the type
legalizer should have eliminated all occurrences of patterns the affected
pseudo-instruction could possibly match before instruction selection occurs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195252 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a new set-like type which represents a set of preserved
analysis passes. The set is managed via the opaque PassT::ID() void*s.
The expected convenience templates for interacting with specific passes
are provided. It also supports a symbolic "all" state which is
represented by an invalid pointer in the set. This state is nicely
saturating as it comes up often. Finally, it supports intersection which
is used when finding the set of preserved passes after N different
transforms.
The pass API is then changed to return the preserved set rather than
a bool. This is much more self-documenting than the previous system.
Returning "none" is a conservatively correct solution just like
returning "true" from todays passes and not marking any passes as
preserved. Passes can also be dynamically preserved or not throughout
the run of the pass, and whatever gets returned is the binding state.
Finally, preserving "all" the passes is allowed for no-op transforms
that simply can't harm such things.
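A hedged sketch of what a transform might return under this API (the pass,
helper and analysis names are hypothetical; exact signatures in the commit
may differ):

  // Hypothetical transform pass illustrating the return convention.
  struct MyFunctionPass {
    PreservedAnalyses run(llvm::Function &F) {
      if (!tryToSimplify(F))              // hypothetical helper: made no changes
        return PreservedAnalyses::all();  // a no-op run can preserve everything
      PreservedAnalyses PA;
      PA.preserve<SomeCheapAnalysis>();   // hypothetical analysis known to survive
      return PA;                          // everything else will be invalidated
    }
  };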
Finally, the analysis managers are changed to invalidate only those
analyses which were not preserved, instead of blindly invalidating all of
them. This should rig up all of the basic preservation functionality.
This also correctly combines the preservation moving up from one IR
layer to another and the preservation aggregation across N pass runs.
Still to go is incrementally correct invalidation and preservation
across IR layers during N pass runs. That will wait until we have
a device for even exposing analyses across IR layers.
While the core of this change is obvious, I'm not happy with the current
testing, so will improve it to cover at least some of the invalidation
that I can test easily in a subsequent commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195241 91177308-0d34-0410-b5e6-96231b3b80d8
Somehow, this ADT got missed, which is moderately terrifying considering
how much move semantics matter for its efficiency.
The code to implement move semantics for it is currently pretty horrible,
but was written to reasonably closely match the rest of the code.
Unittests that cover both copying and moving (at a basic level) have been
added.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195239 91177308-0d34-0410-b5e6-96231b3b80d8
The FunctionPassManager is now itself a function pass. When run over
a function, it runs all N of its passes over that function. This is the
1:N mapping in the pass dimension only. This allows it to be used in
either a ModulePassManager or potentially some other manager that
works on IR units which are supersets of Functions.
This commit also adds the obvious adaptor to map from a module pass to
a function pass, running the function pass across every function in the
module.
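A minimal, self-contained sketch of the adaptor concept (the name, the bool
return convention, and the template shape are assumptions for illustration,
not the exact code added here):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Module.h"
  #include <utility>

  template <typename FunctionPassT>
  struct ModuleToFunctionAdaptorSketch {
    explicit ModuleToFunctionAdaptorSketch(FunctionPassT P) : Pass(std::move(P)) {}

    // Run the wrapped function pass over every function in the module.
    bool run(llvm::Module &M) {
      bool Changed = false;
      for (llvm::Function &F : M)
        Changed |= Pass.run(F);
      return Changed;
    }

    FunctionPassT Pass;
  };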
The test has been updated to use this new pattern.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195192 91177308-0d34-0410-b5e6-96231b3b80d8
Instead of permanently outputting "MVLL" as the file checksum, clang
will create gcno and gcda checksums by hashing the destination block
numbers of every arc. This allows for llvm-cov to check if the two gcov
files are synchronized.
Regenerated the test files so they contain the checksum. Also added a
negative test to ensure an error is reported when the checksums don't match.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195191 91177308-0d34-0410-b5e6-96231b3b80d8
Split the analysis manager into a function-specific interface and
a module-specific interface. This is the first of many steps necessary
to generalize the infrastructure such that we can support both
Module-to-Function and Module-to-SCC-to-Function pass manager
nestings.
After a *lot* of attempts that never worked and didn't even make it to
a committable state, it became clear that I had gotten the layering
design of analyses flat out wrong. Four days later, I think I have most
of the plan for how to correct this, and I'm starting to reshape the
code into it. This is just a baby step I'm afraid, but starts separating
the fundamentally distinct concepts of function analysis passes and
module analysis passes so that in subsequent steps we can effectively
layer them, and have a consistent design for the eventual SCC layer.
As part of this, I've started some interface changes to make passes more
regular. The module pass accepts the module in the run method, and some
of the constructor parameters are gone. I'm still working out exactly
where constructor parameters vs. method parameters will be used, so
I expect this to fluctuate a bit.
This actually makes the invalidation less "correct" at this phase,
because now function passes don't invalidate module analysis passes, but
that was actually somewhat of a misfeature. It will return in a better
factored form which can scale to other units of IR. The documentation
has gotten less verbose and helpful.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195189 91177308-0d34-0410-b5e6-96231b3b80d8
Masking operations (where only some number of the low bits are being kept) are
selected to rldicl(x, 0, mb). If x is a logical right shift (which would become
rldicl(y, 64-n, n)), we might be able to fold the two instructions together:
rldicl(rldicl(x, 64-n, n), 0, mb) -> rldicl(x, 64-n, mb) for n <= mb
The right shift is really a left rotate followed by a mask, and if the explicit
mask is a more-restrictive sub-mask of the mask implied by the shift, only one
rldicl is needed.
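As a hedged worked example (values chosen purely for illustration): a logical
right shift by 16 followed by keeping the low 32 bits is
rldicl(rldicl(x, 48, 16), 0, 32); since n = 16 <= mb = 32, this folds to the
single instruction rldicl(x, 48, 32).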
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@195185 91177308-0d34-0410-b5e6-96231b3b80d8