APInt::slt was copying the LHS and RHS into temporaries and then making
them unsigned so that it could use an unsigned comparison. It did this
even on the paths where the result was trivial to determine, such as when
the sign bit of the LHS is set but the sign bit of the RHS is not.
This changes the logic to return immediately in the trivial cases and to
use an unsigned comparison in the remaining cases, but this time the
unsigned comparison is done directly without creating any temporaries.
This works because, for example:
true = (-2 slt -1) = (0xFE ult 0xFF)
Also added some tests explicitly for slt with APInts wider than 64 bits
so that this new code is exercised.
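For reference, a minimal sketch of the new logic, assuming the existing
APInt::isNegative() and APInt::ult() helpers (not the exact implementation):
```
#include "llvm/ADT/APInt.h"
using namespace llvm;

static bool signedLessThan(const APInt &LHS, const APInt &RHS) {
  bool LHSNeg = LHS.isNegative();
  bool RHSNeg = RHS.isNegative();
  // Trivial cases: the operands have different signs.
  if (LHSNeg != RHSNeg)
    return LHSNeg; // negative < non-negative
  // Same sign: two's-complement ordering matches unsigned ordering,
  // e.g. (-2 slt -1) == (0xFE ult 0xFF), so compare the bits directly.
  return LHS.ult(RHS);
}
```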
Measuring memory usage for 'opt -O2 verify-uselistorder.lto.opt.bc -o opt.bc'
(see r236629 for details), this reduces the number of allocations from
26.8M to 23.9M.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270881 91177308-0d34-0410-b5e6-96231b3b80d8
Fixes a build error on Windows where MSVC does not support list
initialization inside a member initializer list.
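For illustration only, a hypothetical example of the construct class MSVC
rejects (not the code touched by this fix):
```
#include <array>

struct Widget {
  std::array<int, 3> Vals;
  // Braced-init-list inside the member initializer list; affected MSVC
  // versions reject this, so the workaround is to initialize the member
  // in the constructor body (or avoid the braced list) instead.
  Widget() : Vals{{1, 2, 3}} {}
};
```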
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270877 91177308-0d34-0410-b5e6-96231b3b80d8
NVVMIntrRange adds !range metadata to calls of NVVM intrinsics
that return values within a known, limited range.
This allows LLVM to generate optimal code for indexing arrays
based on tid/ctaid, which is a frequently used pattern in CUDA code.
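As a rough sketch (not the actual pass code), the annotation amounts to
attaching a [Low, High) !range node to such a call; the helper name and
bounds below are illustrative assumptions:
```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Attach !range [Low, High) to a call known to return a bounded value,
// e.g. a tid/ctaid intrinsic whose result is limited by the launch bounds.
static void annotateRange(CallInst *CI, uint64_t Low, uint64_t High) {
  Type *Ty = CI->getType();
  Metadata *Bounds[] = {
      ConstantAsMetadata::get(ConstantInt::get(Ty, Low)),   // inclusive
      ConstantAsMetadata::get(ConstantInt::get(Ty, High))}; // exclusive
  CI->setMetadata(LLVMContext::MD_range,
                  MDNode::get(CI->getContext(), Bounds));
}
```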
Differential Revision: http://reviews.llvm.org/D20644
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270872 91177308-0d34-0410-b5e6-96231b3b80d8
At some point we're going to need libObject to have this dependency, but as things stand it is causing too many headaches. This commit reduces the linkage to just llvm-objdump, where it is strictly needed, and we'll cross the libObject bridge later when we need it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270866 91177308-0d34-0410-b5e6-96231b3b80d8
More often than not this is what it started out as; the extraction is zero-cost on AVX and the PMOVZX/PMOVSX folding logic is built around 128-bit loads.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270858 91177308-0d34-0410-b5e6-96231b3b80d8
The exit-on-error flag is needed to avoid an assert when
llvm::SelectionDAGISel::LowerArguments doesn't create enough arguments; we
fill up with zeroes to reach the right number of args.
Fixes PR27767.
Differential Revision: http://reviews.llvm.org/D20571
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270855 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Ensure we keep the prevailing copy of a LinkOnceAny symbol by converting
it to WeakAny.
Rename the odr_resolution test to the now more appropriate weak_resolution
(weak in the linker sense includes linkonce).
Reviewers: joker.eph
Subscribers: llvm-commits, joker.eph
Differential Revision: http://reviews.llvm.org/D20634
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270850 91177308-0d34-0410-b5e6-96231b3b80d8
This combine fires if and only if the value being inserted sets only
known-zero bits. It transforms things like
and w8, w0, #0xfffffff0
movz w9, #5
orr w0, w8, w9
into
movz w8, #5
bfxil w0, w8, #0, #4
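At the source level this corresponds roughly to inserting a constant into
bits that the AND has already cleared; a hypothetical example (not from the
patch):
```
// The low 4 bits of X are known zero after the mask, then set to 5,
// so the AND+MOVZ+ORR sequence above can become MOVZ+BFXIL.
unsigned insertLowNibble(unsigned X) {
  return (X & ~0xFu) | 5u;
}
```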
The combine is tuned to make sure we always reduce the number of instructions.
We avoid churning code for changes that are expected to be performance
neutral (e.g., converting AND+OR to OR+BFI).
Differential Revision: http://reviews.llvm.org/D20387
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270846 91177308-0d34-0410-b5e6-96231b3b80d8
The problem with plugins on Windows is that when building a plugin DLL it needs
to explicitly link against something (an exe or DLL) if it uses symbols from
that thing, and that thing must explicitly export those symbols. Also there's a
limit of 65535 symbols that can be exported. This means that currently plugins
only work on Windows when using BUILD_SHARED_LIBS, and that doesn't work with
MSVC.
This patch adds an LLVM_EXPORT_SYMBOLS_FOR_PLUGINS option which, when enabled,
automatically exports from all LLVM tools the symbols that a plugin could want
to use, so that a plugin can link against a tool directly. Plugins can specify
which tool they link against by using the PLUGIN_TOOL argument to
llvm_add_library. The option can also be enabled on Linux, though there all it
should do is restrict the set of exported symbols, since by default all
symbols are exported.
This option is currently OFF by default because, while I've verified that it
works with MSVC, Linux gcc, and Cygwin gcc, I haven't tried MinGW gcc and I
have no idea what will happen on OS X. Also, unfortunately we can't turn on
LLVM_ENABLE_PLUGINS when the option is ON, as bugpoint-passes needs to be
loaded by both bugpoint.exe and opt.exe, which is incompatible with this
approach. Clang plugins also don't currently work with this approach; that
will be fixed in future patches.
Differential Revision: http://reviews.llvm.org/D18826
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270839 91177308-0d34-0410-b5e6-96231b3b80d8
It is unsafe to hoist a load before a function call which may throw: the
throw might be what prevents the pointer dereference from ever executing.
Likewise, it is unsafe to sink a store after a call which may throw, since
the caller might be able to observe the difference.
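A minimal illustration of the load case (not from the patch; mayThrow is a
hypothetical opaque call):
```
void mayThrow(); // opaque call that may throw

// If the load of *P is hoisted above the call, an invalid P faults even
// though the original program never reaches the dereference when
// mayThrow() throws.
int readAfterCall(int *P) {
  mayThrow();
  return *P;
}
```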
This fixes PR27858.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270828 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
If an index for a vector or array type is out of range, GEP constant
folding tries to factor it into preceding dimensions. The code, however,
does not consider addressing of structure field padding, which should not
qualify as an out-of-range index.
As demonstrated by the testcase, this can occur if the indexing is
performed on a vector type and the preceding index is of array type.
SROA, for example, generates such GEPs involving padding bytes as it
slices an alloca.
My fix disables this folding if the element type is a vector type. I
believe that this is the only way we can end up with padding. (We have
no access to DataLayout, so I am not sure there is a robust way of
actually checking for the presence of padding.)
Reviewers: majnemer
Subscribers: llvm-commits, Gerolf
Differential Revision: http://reviews.llvm.org/D20663
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270826 91177308-0d34-0410-b5e6-96231b3b80d8
It turns out that too many passes are relying on alias analysis results
for control dependencies. Until we fix that by introducing a more accurate
modelling of control dependencies, special-case assume in MemorySSA instead.
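As a rough sketch (not the exact MemorySSA code), the special case amounts
to treating llvm.assume as having no memory effects when creating accesses:
```
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

// Calls to llvm.assume get neither a MemoryDef nor a MemoryUse, so they
// neither clobber nor depend on memory in the MemorySSA graph.
static bool ignoredByMemorySSA(const Instruction *I) {
  if (const auto *II = dyn_cast<IntrinsicInst>(I))
    return II->getIntrinsicID() == Intrinsic::assume;
  return false;
}
```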
Also introduce tests to ensure we don't regress the FunctionAttrs or LICM
passes.
Differential Revision: http://reviews.llvm.org/D20658
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270823 91177308-0d34-0410-b5e6-96231b3b80d8
There is only one caller of MemorySSA::createNewAccess, and it passes true
as the IgnoreNonMemory argument. Remove that argument and fold its behavior
into createNewAccess.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270812 91177308-0d34-0410-b5e6-96231b3b80d8
If there is already debug info in the assembly file and the user passes the
-g option when assembling, we should not simply report an error.
The GNU assembler in this case just reuses the debug info already present in
the assembly file and turns off the DEBUG_TYPE option so that no new debug
info is emitted by the assembler; this fix follows the same behavior.
The remaining concern is the situation where there are two .text sections in
the assembly file, one with debug info and the other without. With this fix,
the assembler will no longer generate any debug info for the second .text
section. This is exactly what the GNU assembler does in this situation, so
the behavior still makes sense.
Patch by Zhizhou Yang!
Differential Revision: http://reviews.llvm.org/D20002
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270806 91177308-0d34-0410-b5e6-96231b3b80d8
After this change, we do the expected thing for cases like
```
Check0Passed = /* range check IRCE can optimize */
Check1Passed = /* range check IRCE can optimize */
if (!(Check0Passed && Check1Passed))
throw_Exception();
```
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270804 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This allows the linker to discard unused symbol information for comdat
functions that were discarded during the link. Before this change,
searching for the name of an inline function in the debugger would
return multiple results, one per symbol subsection in the object file.
After this change, there is only one result, the result for the function
chosen by the linker.
Reviewers: zturner, majnemer
Subscribers: aaboud, amccarth, llvm-commits
Differential Revision: http://reviews.llvm.org/D20642
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270792 91177308-0d34-0410-b5e6-96231b3b80d8
When we have "Image Info Version" module flag but don't have "Class Properties"
module flag, set "Class Properties" module flag to 0, so we can correctly emit
errors when one module has the flag set and another module does not.
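A hedged sketch of the idea (not the actual patch; the exact merge behavior
used may differ):
```
#include "llvm/IR/Module.h"
using namespace llvm;

// If a module carries "Image Info Version" but no "Class Properties"
// flag, give it an explicit value of 0 so that a mismatch with another
// module's flag is diagnosed instead of silently ignored.
static void addDefaultClassPropertiesFlag(Module &M) {
  if (M.getModuleFlag("Image Info Version") &&
      !M.getModuleFlag("Class Properties"))
    M.addModuleFlag(Module::Error, "Class Properties", 0u);
}
```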
rdar://26469641
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270791 91177308-0d34-0410-b5e6-96231b3b80d8
If we have (a) a GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points to would have a negative offset with
respect to the alloca, then the GEP cannot alias pointer (b).
For example, consider code like:
struct foo { int f0; int f1; ... };
...
foo alloca;
foo *random = bar(alloca);
int *f0 = &alloca.f0;
int *f1 = &random->f1;
Which is lowered, approximately, to:
%alloca = alloca %struct.foo
%random = call %struct.foo* @random(%struct.foo* %alloca)
%f0 = getelementptr inbounds %struct, %struct.foo* %alloca, i32 0, i32 0
%f1 = getelementptr inbounds %struct, %struct.foo* %random, i32 0, i32 1
Assume %f1 and %f0 alias. Then %f1 would point into the object allocated
by %alloca. Since the %f1 GEP is inbounds, that means %random must also
point into the same object. But since %f0 points to the beginning of %alloca,
the highest %f1 can be is (%alloca + 3). This means %random can be no higher
than (%alloca - 1), so it cannot point into the same object, a contradiction.
Differential Revision: http://reviews.llvm.org/D20495
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270777 91177308-0d34-0410-b5e6-96231b3b80d8
This patch modifies the LiveDebugValues pass to use more efficient set
data structures, as outlined in PR26055. Both VarLocSet and VarLocList are
now SparseBitVectors, which allows us to perform much faster bitvector
arithmetic on them.
The speedup can be on the order of minutes, especially on ASANified code.
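For context, a hedged sketch (not the pass code) of the kind of dataflow set
arithmetic that becomes cheap with SparseBitVector, where each variable
location is identified by a small integer ID:
```
#include "llvm/ADT/SparseBitVector.h"
using namespace llvm;

// Joining the locations live into a block with those live out of a
// predecessor reduces to a sparse bitwise intersection.
static void joinIntoBlock(SparseBitVector<> &InLocs,
                          const SparseBitVector<> &PredOutLocs) {
  InLocs &= PredOutLocs;
}
```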
The change is not NFC in the assembler output because the inserted
DBG_VALUEs are now sorted by variable and location.
Many thanks to Daniel Berlin for helping design the improved algorithm and
reviewing the patch.
https://llvm.org/bugs/show_bug.cgi?id=26055
http://reviews.llvm.org/D20178
rdar://problem/24091200
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270776 91177308-0d34-0410-b5e6-96231b3b80d8
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, often the first line of the
loop body instead of the loop itself. This is confusing because that line might
itself be another loop, or might be somewhere else completely if the body was
an inlined function call. This happens because of the way we find the loop's
starting location. First, we look for a preheader, and if we find one, and its
terminator has a debug location, then we use that. Otherwise, we look for a
location on an instruction in the loop header.
The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there's often not a preheader, and even when there is a
preheader, depending on how it was formed, it sometimes carries the location of
some preceding code.
I don't see any good theoretical way to fix this problem. On the other hand,
this seems like a straightforward solution: Put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.
Differential Revision: http://reviews.llvm.org/D19738
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270771 91177308-0d34-0410-b5e6-96231b3b80d8
Since r268966 the modern Verifier pass defaults to stripping invalid debug info
in non-asserts builds. This patch ports this behavior back to the legacy
Verifier pass as well. The primary motivation is that the clang frontend
accepts bitcode files as input but is still using the legacy pass pipeline.
Background: The problem I'm trying to solve with this sequence of patches is
that historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about breaking
bitcode compatibility with existing producers. For example, we don't necessarily
want IR produced by an older version of clang to be rejected by an LTO link just
because of malformed debug info; we would rather provide an option to strip it. Note
that merely outdated (but well-formed) debug info would continue to be
auto-upgraded in this scenario.
http://reviews.llvm.org/D20629
<rdar://problem/26448800>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270768 91177308-0d34-0410-b5e6-96231b3b80d8
As a result of D18634 we no longer infer certain attributes on linkonce_odr
functions at compile time, and may only infer them at LTO time. The readnone
attribute in particular is required for virtual constant propagation (part
of whole-program virtual call optimization) to work correctly.
This change moves the whole-program virtual call optimization pass after
the function attribute inference passes, and enables the attribute inference
passes at opt level 1, so that virtual constant propagation has a chance to
work correctly for linkonce_odr functions.
Differential Revision: http://reviews.llvm.org/D20643
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270765 91177308-0d34-0410-b5e6-96231b3b80d8