Summary:
This allows the linker to discard unused symbol information for comdat
functions that were discarded during the link. Before this change,
searching for the name of an inline function in the debugger would
return multiple results, one per symbol subsection in the object file.
After this change, there is only one result, the result for the function
chosen by the linker.
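For illustration, a minimal IR sketch (function name hypothetical) of the kind of
comdat function involved; when the linker keeps another translation unit's copy,
the symbol subsection for this one can now be dropped along with it:
$_Z3foov = comdat any
define linkonce_odr void @_Z3foov() comdat {
  ret void
}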
Reviewers: zturner, majnemer
Subscribers: aaboud, amccarth, llvm-commits
Differential Revision: http://reviews.llvm.org/D20642
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270792 91177308-0d34-0410-b5e6-96231b3b80d8
When we have "Image Info Version" module flag but don't have "Class Properties"
module flag, set "Class Properties" module flag to 0, so we can correctly emit
errors when one module has the flag set and another module does not.
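For illustration, a minimal module-flags sketch of the result (flag names are
abbreviated as in this message; the merge-behavior code 1, i.e. Error, is an
assumption):
!llvm.module.flags = !{!0, !1}
!0 = !{i32 1, !"Image Info Version", i32 0}  ; was already present
!1 = !{i32 1, !"Class Properties", i32 0}    ; now set explicitly when missing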
rdar://26469641
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270791 91177308-0d34-0410-b5e6-96231b3b80d8
If we have (a) a GEP and (b) a pointer based on an alloca, and the
beginning of the object the GEP points into would have a negative offset
with respect to the alloca, then the GEP cannot alias pointer (b).
For example, consider code like:
struct foo { int f0; int f1; ... };
...
foo alloca;
foo *random = bar(&alloca);
int *f0 = &alloca.f0;
int *f1 = &random->f1;
Which is lowered, approximately, to:
%alloca = alloca %struct.foo
%random = call %struct.foo* @random(%struct.foo* %alloca)
%f0 = getelementptr inbounds %struct.foo, %struct.foo* %alloca, i32 0, i32 0
%f1 = getelementptr inbounds %struct.foo, %struct.foo* %random, i32 0, i32 1
Assume %f1 and %f0 alias. Then %f1 would point into the object allocated
by %alloca. Since the %f1 GEP is inbounds, that means %random must also
point into the same object. But since %f0 points to the beginning of %alloca,
the highest %f1 can be is (%alloca + 3). Because %f1 is 4 bytes past %random,
%random can be no higher than (%alloca - 1), which lies before the start of
the object, so the inbounds requirement is violated, a contradiction.
Differential Revision: http://reviews.llvm.org/D20495
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270777 91177308-0d34-0410-b5e6-96231b3b80d8
This patch modifies the LiveDebugValues pass to use more efficient set
data structures as outlined in PR26055. Both VarLocSet and VarLocList are
now SparseBitVectors, which allows us to perform much faster bitvector
arithmetic on them.
The speedup can be on the order of minutes, especially on ASANified code.
The change is not NFC in the assembler output because the inserted
DBG_VALUEs are now sorted by variable and location.
Many thanks to Daniel Berlin for helping design the improved algorithm and
reviewing the patch.
https://llvm.org/bugs/show_bug.cgi?id=26055
http://reviews.llvm.org/D20178
rdar://problem/24091200
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270776 91177308-0d34-0410-b5e6-96231b3b80d8
Getting accurate locations for loops is important, because those locations are
used by the frontend to generate optimization remarks. Currently, optimization
remarks for loops often appear on the wrong line, typically the first line of
the loop body instead of the loop itself. This is confusing because that line
might itself be another loop, or might be somewhere else entirely if the body
was an inlined function call. This happens because of the way we find the
loop's starting location. First, we look for a preheader; if we find one and
its terminator has a debug location, then we use that. Otherwise, we look for a
location on an instruction in the loop header.
The fallback heuristic is not bad, but will almost always find the beginning of
the body, and not the loop statement itself. The preheader location search
often fails because there's often not a preheader, and even when there is a
preheader, depending on how it was formed, it sometimes carries the location of
some preceding code.
I don't see any good theoretical way to fix this problem. On the other hand,
there is a straightforward practical solution: put the debug location in the
loop's llvm.loop metadata. A companion Clang patch will cause Clang to insert
llvm.loop metadata with appropriate locations when generating debugging
information. With these changes, our loop remarks have much more accurate
locations.
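Roughly, the intended IR shape is sketched below (the exact operand layout,
line numbers, and scope are assumptions for illustration):
  br i1 %cond, label %loop.body, label %loop.end, !llvm.loop !0
; The loop ID is a distinct, self-referential node; the loop statement's own
; source location is attached as an operand. !2 would be the enclosing scope
; (e.g. the DISubprogram), omitted here.
!0 = distinct !{!0, !1}
!1 = !DILocation(line: 5, column: 3, scope: !2)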
Differential Revision: http://reviews.llvm.org/D19738
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270771 91177308-0d34-0410-b5e6-96231b3b80d8
Since r268966 the modern Verifier pass defaults to stripping invalid debug info
in non-assert builds. This patch ports this behavior back to the legacy
Verifier pass as well. The primary motivation is that the clang frontend
accepts bitcode files as input but is still using the legacy pass pipeline.
Background: The problem I'm trying to solve with this sequence of patches is
that historically we've done a really bad job at verifying debug info. We want
to be able to make the verifier stricter without having to worry about breaking
bitcode compatibility with existing producers. For example, we don't necessarily
want IR produced by an older version of clang to be rejected by an LTO link just
because of malformed debug info; we would rather provide an option to strip it. Note
that merely outdated (but well-formed) debug info would continue to be
auto-upgraded in this scenario.
http://reviews.llvm.org/D20629
<rdar://problem/26448800>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270768 91177308-0d34-0410-b5e6-96231b3b80d8
As a result of D18634 we no longer infer certain attributes on linkonce_odr
functions at compile time, and may only infer them at LTO time. The readnone
attribute in particular is required for virtual constant propagation (part
of whole-program virtual call optimization) to work correctly.
This change moves the whole-program virtual call optimization pass after
the function attribute inference passes, and enables the attribute inference
passes at opt level 1, so that virtual constant propagation has a chance to
work correctly for linkonce_odr functions.
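As a hedged sketch (name and return value invented), this is the kind of
function affected; the readnone shown here is what the function attribute
inference passes add, and only once they have run can virtual constant
propagation evaluate calls to it:
define linkonce_odr i32 @_ZNK3Foo4kindEv(i8* %this) readnone {
  ret i32 7
}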
Differential Revision: http://reviews.llvm.org/D20643
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270765 91177308-0d34-0410-b5e6-96231b3b80d8
They were originally separated to handle the co-recursion between
the ValueMapper and the ValueMaterializer. This recursion does not
exist anymore: the ValueMapper now uses a Worklist and the
ValueMaterializer schedules jobs on the Worklist.
Differential Revision: http://reviews.llvm.org/D20593
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270758 91177308-0d34-0410-b5e6-96231b3b80d8
We need to reuse this functionality, including making
additional generic stream types that are smarter about how and
when they copy memory versus referencing the original memory.
So all of these structures belong in the common library
rather than being PDB-specific.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270751 91177308-0d34-0410-b5e6-96231b3b80d8
There was a typo in r267758. It caused invalid accesses when
given something like "void @free(...)": NumParams == 0, but we then
tried to look at the 0th parameter.
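For reference, the offending declaration next to a well-formed prototype,
sketched as two separate cases (they would not coexist in one module):
declare void @free(...)   ; NumParams == 0, so there is no 0th parameter to inspect
declare void @free(i8*)   ; a well-formed prototype for the libfunc checks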
Turns out, most of these were untested; add both attribute
and missing-prototype checks for all libc libfuncs.
Differential Revision: http://reviews.llvm.org/D20543
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270750 91177308-0d34-0410-b5e6-96231b3b80d8
This is probably correct for all uses except cross-module IR linking,
where we need to move the comdat from the source module to the
destination module.
Fixes PR27870.
Reviewers: majnemer
Differential Revision: http://reviews.llvm.org/D20631
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270743 91177308-0d34-0410-b5e6-96231b3b80d8
While here, convert the logic of the pass to use static function(s).
This is in preparation for porting this pass to the new PM.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270734 91177308-0d34-0410-b5e6-96231b3b80d8
f32 vectors would use a sequence of BFI instructions instead
of unrolled cmp + select. This was better in the case of a VALU
select with SGPR inputs, but we don't have a way of dealing with that
in the DAG.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270731 91177308-0d34-0410-b5e6-96231b3b80d8
By making pointer extraction from a vector more expensive in the cost model,
we avoid the vectorization of a loop that is very likely to be memory-bound:
https://llvm.org/bugs/show_bug.cgi?id=27826
There are still bugs related to this, so we may need a more general solution
to avoid vectorizing obviously memory-bound loops when we don't have HW gather
support.
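For illustration (a hedged sketch, not taken from the bug's test case), this is
the kind of scalarized gather whose extraction cost is now modeled as more
expensive:
%p0 = extractelement <4 x i32*> %ptrs, i32 0
%v0 = load i32, i32* %p0
%p1 = extractelement <4 x i32*> %ptrs, i32 1
%v1 = load i32, i32* %p1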
Differential Revision: http://reviews.llvm.org/D20601
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270729 91177308-0d34-0410-b5e6-96231b3b80d8
This should actually address PR27855: it adds references to the system libs inside generated dylibs so that they get correctly pulled in when linking against the dylib.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@270723 91177308-0d34-0410-b5e6-96231b3b80d8