Summary:
This directive is used to reset the assembler options to their initial values.
Assembly programmers use it in conjunction with the ".set mipsX" directives.
This patch depends on the .set push/pop directive (http://reviews.llvm.org/D4821).
Contains work done by Matheus Almeida.
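A minimal usage sketch (GNU C inline asm for a MIPS target; the function name
is hypothetical):

  void raise_isa_locally(void) {
    __asm__(".set mips64r2\n\t"     /* temporarily assemble for MIPS64r2 */
            "dadd $4, $5, $6\n\t"   /* needs the raised ISA */
            ".set mips0"            /* reset the options to their initial values */
            : : : "$4");            /* $4 is written by the dadd */
  }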
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D4957
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217438 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch moves the profile reading logic out of the Sample Profile
transformation into a generic profile reader facility in
lib/ProfileData.
The intent is to use this new reader to implement a sample profile
reader/writer that can be used to convert sample profiles from external
sources into LLVM's own format.
This first patch introduces no functional changes. It moves the profile
reading code from lib/Transforms/SampleProfile.cpp into
lib/ProfileData/SampleProfReader.cpp.
In subsequent patches I will:
- Add a bitcode format for sample profiles to allow for more efficient
encoding of the profile.
- Add a writer for both text and bitcode format profiles.
- Add a 'convert' command to llvm-profdata to be able to convert between
the two (and serve as entry point for other sample profile formats).
Reviewers: bogner, echristo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5250
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217437 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The GPR size is more a property of the subtarget than of the ABI, so move
this information to the MipsSubtarget.
No functional change.
Reviewers: vmedic
Reviewed By: vmedic
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D5009
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217436 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
In AT&T syntax, calls should be printed as callq for both x86_64 and x32.
It's only a matter of using the correct mnemonic; the object file output is
already correct.
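A minimal sketch of what the fix affects (the compile command is an assumed
illustration): compiling the call below for the x32 ABI, e.g. with
-target x86_64-unknown-linux-gnux32 -S, should now print the call with the
callq mnemonic, matching x86_64.

  void bar(void);
  void foo(void) { bar(); }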
Test Plan: trivial test added
Reviewers: nadav, dschuff, craig.topper
Subscribers: llvm-commits, zinovy.nis
Differential Revision: http://reviews.llvm.org/D5213
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217435 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
These directives save the current assembler options (".set push") and restore
the previously saved options (".set pop").
Contains work done by Matheus Almeida.
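A minimal usage sketch (GNU C inline asm for a MIPS target; the function name
is hypothetical):

  void noreorder_region(void) {
    __asm__(".set push\n\t"         /* save the current assembler options */
            ".set noreorder\n\t"    /* local change: no delay-slot reordering */
            "nop\n\t"
            ".set pop");            /* restore the options saved by .set push */
  }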
Reviewers: dsanders
Reviewed By: dsanders
Differential Revision: http://reviews.llvm.org/D4821
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217432 91177308-0d34-0410-b5e6-96231b3b80d8
This solves the problem of having a kill flag inside a loop
with a definition of the register prior to the loop:

  %vreg368<def> ...

  Inside the loop:
    %vreg520<def> = COPY %vreg368
    %vreg568<def,tied1> = add %vreg341<tied0>, %vreg520<kill>

  which was coalesced into:
    %vreg568<def,tied1> = add %vreg341<tied0>, %vreg368<kill>

MachineVerifier then complained:

  *** Bad machine code: Virtual register killed in block, but needed live out. ***

The kill flag for %vreg368 is now incorrect, and is cleared by this patch.
This is similar to the clearing done at the end of
MachineSinking::SinkInstruction().
Patch provided by Jonas Paulsson.
Reviewed by Quentin Colombet and Juergen Ributzka.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217427 91177308-0d34-0410-b5e6-96231b3b80d8
llvm-cov had a SourceRange type that was nearly identical to a
CountedRegion except that it shaved off a couple of fields. There
aren't likely to be enough of these for the minor memory savings to be
worth the extra complexity here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217417 91177308-0d34-0410-b5e6-96231b3b80d8
This name was too similar to CoverageMappingRegion, and the type
really belongs in the coverage library anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217416 91177308-0d34-0410-b5e6-96231b3b80d8
When compiling without SSE2, isTruncStoreLegal(F64, F32) would return Legal, whereas with SSE2 it would return Expand. And since the target doesn't actually handle a truncating store for double -> float, it would just emit a store of the full double into the space reserved for a float, overwriting other data on the stack.
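A minimal sketch of the miscompiled pattern: a truncating f64 -> f32 store,
which must write only 4 bytes.

  void store_float(float *p, double d) {
    *p = (float)d;   /* without SSE2 this wrote a full 8-byte double */
  }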
Patch by Luqman Aden!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217410 91177308-0d34-0410-b5e6-96231b3b80d8
Previously, fast-isel would not clean up after failing to select a call
instruction, because it would have called flushLocalValueMap() which moves
the insertion point, making SavedInsertPt in selectInstruction() invalid.
Fix this by making SavedInsertPt a member variable, and having
flushLocalValueMap() update it.
This removes some redundant code at -O0, and more importantly fixes PR20863.
Differential Revision: http://reviews.llvm.org/D5249
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217401 91177308-0d34-0410-b5e6-96231b3b80d8
FunctionCoverageMapping::PrettyName was from a version of the tool
during review, and isn't actually used currently.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217398 91177308-0d34-0410-b5e6-96231b3b80d8
UBSan complained about using static_cast on the invalid (tombstone, etc.)
pointers used by DenseMap. Use a reinterpret_cast instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217397 91177308-0d34-0410-b5e6-96231b3b80d8
to make sure we don't do an invalid load of an enum. Share the
conversion code between the llvm::Module implementation and the
verifier.
This bug was reported by UBSan.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217395 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes an assert in the scheduler caused by an inserted copy_to_regclass
from a constant. It only seems to break sometimes, when a constant initializer
address is forced into VGPRs in a non-entry block. No test, since the only
case I've managed to hit happens only with a future patch, and that case will
also not be a problem once scalar instructions are used in non-entry blocks.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217380 91177308-0d34-0410-b5e6-96231b3b80d8
This only handles LDS atomics for now, and will be used to replace atomics
whose results are unused with their no-return versions.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217378 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a basic (but important) use of @llvm.assume calls in ScalarEvolution.
When SE is attempting to validate a condition guarding a loop (such as whether
or not the loop count can be zero), this check should also include dominating
assumptions.
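A minimal sketch (hypothetical function name; assumes __builtin_assume lowers
to @llvm.assume): the dominating assumption lets SE prove that the trip count
cannot be zero.

  void init(int *a, int n) {
    __builtin_assume(n > 0);       /* dominates the loop guard */
    for (int i = 0; i < n; ++i)    /* SE can now prove the loop is entered */
      a[i] = 0;
  }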
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217348 91177308-0d34-0410-b5e6-96231b3b80d8
From a combination of @llvm.assume calls (and perhaps through other means, such
as range metadata), it is possible that all bits of a return value might be
known. Previously, InstCombine did not check for this (which is understandable
given assumptions of constant propagation), but it means that we'd miss simple
cases where assumptions are involved.
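A minimal sketch (hypothetical function name; assumes __builtin_assume lowers
to @llvm.assume): every bit of the return value is known, so InstCombine can
fold it to a constant.

  int ret42(int a) {
    __builtin_assume(a == 42);   /* fixes all bits of 'a' */
    return a;                    /* can now fold to 'return 42' */
  }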
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217346 91177308-0d34-0410-b5e6-96231b3b80d8
This change teaches LazyValueInfo to use the @llvm.assume intrinsic. Like with
the known-bits change (r217342), this requires feeding a "context" instruction
pointer through many functions. Aside from a little refactoring to reuse the
logic that turns predicates into constant ranges in LVI, the only new code is
that which can 'merge' the range from an assumption into that otherwise
computed. There is also a small addition to JumpThreading so that it can have
LVI use assumptions in the same block as the comparison feeding a conditional
branch.
With this patch, we can now simplify this as expected:

  int foo(int a) {
    __builtin_assume(a > 5);
    if (a > 3) {
      bar();
      return 1;
    }
    return 0;
  }

Since a > 5 implies a > 3, LVI proves the branch is always taken and the
'return 0' path is folded away.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217345 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a ScalarEvolution-powered transformation that updates load, store and
memory intrinsic pointer alignments based on invariant((a+q) & b == 0)
expressions. Many of the simple cases can be handled with ValueTracking, but
we still need something like this for the more complicated cases (such as
those with an offset) that require some algebra. Note that gcc's
__builtin_assume_aligned's optional third argument provides exactly this kind
of 'misalignment' offset, for which this kind of logic is necessary.
The primary motivation is to fixup alignments for vector loads/stores after
vectorization (and unrolling). This pass is added to the optimization pipeline
just after the SLP vectorizer runs (which, admittedly, does not preserve SE,
although I imagine it could). Regardless, I actually don't think that the
preservation matters too much in this case: SE computes lazily, and this pass
won't issue any SE queries unless there are assume intrinsics, so there
should be no real additional cost in the common case (SLP does preserve DT and
LoopInfo).
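A minimal sketch of the misalignment case (hypothetical function name): the
third argument of __builtin_assume_aligned states that (p - 8) is 32-byte
aligned, i.e. p == 8 (mod 32), which is the sort of fact this pass can turn
into improved load/store alignments after vectorization.

  void zero(double *p, long n) {
    double *q = __builtin_assume_aligned(p, 32, 8);
    for (long i = 0; i < n; ++i)
      q[i] = 0.0;
  }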
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217344 91177308-0d34-0410-b5e6-96231b3b80d8
This builds on r217342, which added the infrastructure to compute known bits
using assumptions (@llvm.assume calls). That original commit added only a few
patterns (to catch common cases related to determining pointer alignment); this
change adds several other patterns for simple cases.
r217342 established that, for assume(v & b = a), for those bits in the mask b
that are known to be one, we can propagate known bits from a to v. It also
had a known-bits transfer for assume(a = b). This patch adds the following
(a C sketch of one rule follows the list):

assume(~(v & b) = a): For those bits in the mask that are known to be one, we
  can propagate inverted known bits from a to v.
assume(v | b = a): For those bits in b that are known to be zero, we can
  propagate known bits from a to v.
assume(~(v | b) = a): For those bits in b that are known to be zero, we can
  propagate inverted known bits from a to v.
assume(v ^ b = a): For those bits in b that are known to be zero, we can
  propagate known bits from a to v. For those bits in b that are known to be
  one, we can propagate inverted known bits from a to v.
assume(~(v ^ b) = a): For those bits in b that are known to be zero, we can
  propagate inverted known bits from a to v. For those bits in b that are
  known to be one, we can propagate known bits from a to v.
assume(v << c = a): For those bits in a that are known, we can propagate them
  to known bits in v shifted to the right by c.
assume(~(v << c) = a): For those bits in a that are known, we can propagate
  them inverted to known bits in v shifted to the right by c.
assume(v >> c = a): For those bits in a that are known, we can propagate them
  to known bits in v shifted to the left by c.
assume(~(v >> c) = a): For those bits in a that are known, we can propagate
  them inverted to known bits in v shifted to the left by c.
assume(v >=_s c) where c is non-negative: The sign bit of v is zero.
assume(v >_s c) where c is at least -1: The sign bit of v is zero.
assume(v <=_s c) where c is negative: The sign bit of v is one.
assume(v <_s c) where c is non-positive: The sign bit of v is one.
assume(v <=_u c): Transfer the known high zero bits of c.
assume(v <_u c): Transfer the known high zero bits of c (if c is known to be
  a power of 2, transfer one more).
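As an illustration, a minimal C sketch of the assume(v | b = a) rule
(hypothetical function name; assumes __builtin_assume lowers to @llvm.assume):

  unsigned low_bits(unsigned v) {
    __builtin_assume((v | 0xf0u) == 0xf3u);  /* bits 0-3 of b = 0xf0 are zero,
                                                so bits 0-3 of v equal 0x3 */
    return v & 0xfu;                         /* can fold to 0x3 */
  }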
A small addition to InstCombine was necessary for some of the test cases. The
problem is that when InstCombine was simplifying and, or, etc. it would fail to
check the 'do I know all of the bits' condition before checking less specific
conditions and would not fully constant-fold the result. I'm not sure how to
trigger this aside from using assumptions, so I've just included the change
here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217343 91177308-0d34-0410-b5e6-96231b3b80d8
This change, which allows @llvm.assume to be used from within computeKnownBits
(and other associated functions in ValueTracking), adds some (optional)
parameters to computeKnownBits and friends. These functions now (optionally)
take a "context" instruction pointer, an AssumptionTracker pointer, and also a
DomTree pointer, and most of the changes are just to pass this new information
when it is easily available from InstSimplify, InstCombine, etc.
As explained below, the significant conceptual change is that known properties
of a value might depend on the control-flow location of the use (we require
that the @llvm.assume dominate the use, because assumptions have control-flow
dependencies). This means that, when we ask if bits are known in a
value, we might get different answers for different uses.
The significant changes are all in ValueTracking. Two main changes: First, as
with the rest of the code, new parameters need to be passed around. To make
this easier, I grouped them into a structure, and I made internal static
versions of the relevant functions that take this structure as a parameter. The
new code does what you might expect: it looks for @llvm.assume calls that make
use of the value we're trying to learn something about (often indirectly),
attempts to pattern match that expression, and uses the result if successful.
By making use of the AssumptionTracker, the process of finding @llvm.assume
calls is not expensive.
Part of the structure being passed around inside ValueTracking is a set of
already-considered @llvm.assume calls. This is to prevent a query using, for
example, assume(a == b), from recursing on itself. The context and DT params
are used to find applicable assumptions. An assumption needs to dominate the
context instruction, or come after it deterministically. In this latter case we
only handle the specific case where both the assumption and the context
instruction are in the same block, and we need to exclude assumptions from
being used to simplify their own ephemeral values (those which contribute only
to the assumption) because otherwise the assumption would prove its feeding
comparison trivial and would be removed.
This commit adds the plumbing and the logic for a simple masked-bit propagation
(just enough to write a regression test). Future commits add more patterns
(and, correspondingly, more regression tests).
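A minimal sketch of the masked-bit propagation (hypothetical function name;
assumes __builtin_assume lowers to @llvm.assume):

  int masked(int a) {
    __builtin_assume((a & 0xf0) == 0x10);  /* bits 4-7 of 'a' become known */
    return a & 0x30;                       /* computeKnownBits folds this to 0x10 */
  }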
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217342 91177308-0d34-0410-b5e6-96231b3b80d8
It's probably not a huge deal to not do this - if we could, maybe the
address could be reused by a subprogram low_pc and avoid an extra
relocation, but it's just one per CU at best.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217338 91177308-0d34-0410-b5e6-96231b3b80d8
This adds a set of utility functions for collecting 'ephemeral' values. These
are LLVM IR values that are used only by @llvm.assume intrinsics (directly or
indirectly), and thus will be removed prior to code generation, implying that
they should be considered free for certain purposes (like inlining). The
inliner's cost analysis, and a few other passes, have been updated to account
for ephemeral values using the provided functionality.
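A minimal sketch of an ephemeral value (hypothetical function name): 't' and
the comparison feeding the assume are used only by the @llvm.assume call, so
they vanish before codegen and should not be charged by the inline cost
analysis.

  int scaled_sum(int a, int b) {
    int t = a * b;            /* used only by the assume below: ephemeral */
    __builtin_assume(t > 0);
    return a + b;
  }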
This functionality is important for the usability of @llvm.assume, because it
limits the "non-local" side-effects of adding llvm.assume on inlining, loop
unrolling, etc. (these are hints, and do not generate code, so they should not
directly contribute to estimates of execution cost).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217335 91177308-0d34-0410-b5e6-96231b3b80d8
This adds an immutable pass, AssumptionTracker, which keeps a cache of
@llvm.assume call instructions within a module. It uses callback value handles
to keep stale functions and intrinsics out of the map, and it relies on any
code that creates new @llvm.assume calls to notify it of the new instructions.
The benefit is that code needing to find @llvm.assume intrinsics can do so
directly, without scanning the function, thus allowing the cost of @llvm.assume
handling to be negligible when none are present.
The current design is intended to be lightweight. We don't keep track of
anything until we need a list of assumptions in some function. The first time
this happens, we scan the function. After that, we add/remove @llvm.assume
calls from the cache in response to registration calls and ValueHandle
callbacks.
There are no new direct test cases for this pass, but because it calls its
validation function upon module finalization, we'll pick up detectable
inconsistencies from the other tests that touch @llvm.assume calls.
This pass will be used by follow-up commits that make use of @llvm.assume.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217334 91177308-0d34-0410-b5e6-96231b3b80d8
I hadn't actually run all the tests yet, and these combines have somewhat
surprisingly far-reaching effects.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217333 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support for MOVDDUP, which is really important for
matrix-multiply-style operations that do lots of non-vector-aligned loads and
splats.
The original motivation was to add support for MOVDDUP as the lack of it
regresses matmul_f64_4x4 by 5% or so. However, all of the rules here
were somewhat suspicious.
First, we should always be using the floating-point domain shuffles,
regardless of how many copies we have to make, as a movapd is *crazy* fast
compared to the domain-switching cost on some chips. (Mostly because movapd
is crazy cheap.) Because SHUFPD can't do the copy-for-free trick of the PSHUF
instructions, there is no need to avoid canonicalizing on UNPCK variants, so
do that canonicalizing. This also ensures we have the chance to form
MOVDDUP. =]
Second, we assume SSE2 support when doing any vector lowering, and given that,
we should just use UNPCKLPD and UNPCKHPD as they can operate on registers or
memory. If vectors get spilled or come from memory at all, this is going to
allow the load to be folded into the operation. If we want to optimize for
encoding size (the only difference, and only a 2-byte difference), it should
be done *much* later, likely after RA.
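A minimal sketch (SSE2 intrinsics; hypothetical function name) of the splat
pattern this lowering targets: the unpack can become a MOVDDUP where
available, and otherwise stays an UNPCKLPD that can fold its load.

  #include <emmintrin.h>

  __m128d splat_double(const double *p) {
    __m128d x = _mm_load_sd(p);     /* scalar load of one double */
    return _mm_unpacklo_pd(x, x);   /* duplicate it into both lanes */
  }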
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@217332 91177308-0d34-0410-b5e6-96231b3b80d8