In practice, only a few well-known appending-linkage variables work.
Currently, if codegen sees an unknown appending-linkage variable, it
just prints it as a regular global. That is wrong, since the symbol in
the produced object file has different semantics from the one implied
by appending linkage.
This just errors early instead of producing a broken .o.
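For reference, one of the well-known appending-linkage variables that
codegen does understand is @llvm.global_ctors. A minimal sketch of what
such a variable looks like (the @ctor function is just a placeholder):

  @llvm.global_ctors = appending global [1 x { i32, void ()*, i8* }]
      [{ i32, void ()*, i8* } { i32 65535, void ()* @ctor, i8* null }]

  ; placeholder constructor referenced above
  define internal void @ctor() {
    ret void
  }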
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@269706 91177308-0d34-0410-b5e6-96231b3b80d8
This new verifier rule lets us unambiguously pick a calling convention
when creating a new declaration for
`@llvm.experimental.deoptimize.<ty>`. It is also congruent with our
lowering strategy -- since all calls to `@llvm.experimental.deoptimize`
are lowered to calls to `__llvm_deoptimize`, it is reasonable to enforce
a unique calling convention.
Some of the tests that were breaking this verifier rule have had to be
split up into different .ll files.
The inliner was violating this rule as well, and has been fixed to avoid
producing invalid IR.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@269261 91177308-0d34-0410-b5e6-96231b3b80d8
An oddity of the .ll syntax is that the "@var = " in
@var = global i32 42
is optional. Writing just
global i32 42
is equivalent to
@0 = global i32 42
This means that there is a pretty big First set at the top level. The
current implementation maintains it manually. I was trying to refactor
it, but then started wondering why keep it at all. I personally find the
above syntax confusing. It looks like something is missing.
This patch removes the feature and simplifies the parser.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@269096 91177308-0d34-0410-b5e6-96231b3b80d8
Seems like my Sphinx version is different from the one on the bot, as it
accepted everything locally. I think this is the right fix...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@269062 91177308-0d34-0410-b5e6-96231b3b80d8
HowToCrossCompile was outdated and was generating too much traffic on the
mailing list with similar queries. This change addresses most of the
problems that were reported recently, including:
* Removing the -ccc-gcc-name option, adding --sysroot
* Making references to Debian's multiarch for target libraries
* Expanding -DCMAKE_CXX_FLAGS for both GCC and Clang
* Some formatting and clarifications in the text
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@269054 91177308-0d34-0410-b5e6-96231b3b80d8
This is a step towards removing the rampant undefined behaviour in
SelectionDAG, which is a part of llvm.org/PR26808.
We rename SelectionDAGISel::Select to SelectImpl and update targets to
match, and then change Select to return void and consolidate the
sketchy behaviour we're trying to get away from there.
Next, we'll update backends to implement `void Select(...)` instead of
SelectImpl and eventually drop the base Select implementation.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268693 91177308-0d34-0410-b5e6-96231b3b80d8
This backend was supposed to generate C++ code that would reconstruct
the LLVM IR passed as input. This seems to me to have very marginal
usefulness in the first place.
However, the code has never been updated to use IRBuilder, which makes
its current value negative -- people who look at the output may be
steered to use the *wrong* C++ APIs to construct IR.
Furthermore, it has generated code that doesn't compile since at least
2013.
Differential Revision: http://reviews.llvm.org/D19942
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268631 91177308-0d34-0410-b5e6-96231b3b80d8
If a guard call being lowered by LowerGuardIntrinsics has the
`!make.implicit` metadata attached, then reattach the metadata to the
branch in the resulting expanded form of the intrinsic. This allows us
to implement null checks as guards and still get the benefit of implicit
null checks.
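For illustration, the reattached metadata in the expanded form looks
roughly like this (a minimal sketch; the function and block names are
made up):

  define i32 @f(i32* %p) {
  entry:
    %c = icmp eq i32* %p, null
    br i1 %c, label %deopt, label %not_null, !make.implicit !0

  deopt:                                   ; guard failure path
    ret i32 0

  not_null:                                ; candidate for an implicit null check
    %v = load i32, i32* %p
    ret i32 %v
  }

  !0 = !{}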
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@268148 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
D19403 adds a new pragma for loop distribution. This change adds
support for the corresponding metadata that the pragma is translated to
by the FE.
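For reference, the metadata looks roughly like this on the loop's latch
branch (a minimal sketch; labels and metadata IDs are made up):

  for.body:
    ; ... loop body ...
    br i1 %exitcond, label %for.end, label %for.body, !llvm.loop !0

  !0 = distinct !{!0, !1}
  !1 = !{!"llvm.loop.distribute.enable", i1 true}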
As part of this I had to rethink the flag -enable-loop-distribute. My
goal was to be backward compatible with the existing behavior:
A1. pass is off by default from the optimization pipeline
unless -enable-loop-distribute is specified
A2. pass is on when invoked directly from opt (e.g. for unit-testing)
The new pragma/metadata overrides these defaults so the new behavior is:
B1. A1 + enable distribution for individual loop with the pragma/metadata
B2. A2 + disable distribution for individual loop with the pragma/metadata
The default value whether the pass is on or off comes from the initiator
of the pass. From the PassManagerBuilder the default is off, from opt
it's on.
I moved -enable-loop-distribute under the pass. If the flag is
specified, it overrides the default from above. Then the
pragma/metadata can further modify this per loop.
As a side-effect, we can now also use -enable-loop-distribute=0 from opt
to emulate the default from the optimization pipeline. So to be precise
this is the new behavior:
C1. pass is off by default from the optimization pipeline
unless -enable-loop-distribute or the pragma/metadata enables it
C2. pass is on when invoked directly from opt
unless -enable-loop-distribute=0 or the pragma/metadata disables it
Reviewers: hfinkel
Subscribers: joker.eph, mzolotukhin, llvm-commits
Differential Revision: http://reviews.llvm.org/D19431
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267672 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This tries to pin down the concept of domains a bit better. I had
trouble initially relating this to anything. Also, talking to David
Majnemer on IRC suggested that I wasn't the only one.
Reviewers: hfinkel
Subscribers: llvm-commits, majnemer
Differential Revision: http://reviews.llvm.org/D18799
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267647 91177308-0d34-0410-b5e6-96231b3b80d8
I really thought we were doing this already, but we were not. Given this input:
  void Test(int *res, int *c, int *d, int *p) {
    for (int i = 0; i < 16; i++)
      res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
  }
we did not vectorize the loop. Even with "assume_safety" the check that we
don't if-convert conditionally-executed loads (to protect against
data-dependent dereferenceability) was not elided.
One subtlety: As implemented, it will still prefer to use a masked-load
intrinsic (given target support) over the speculated load. The choice here
seems architecture specific; the best option depends on how expensive the
masked load is compared to a regular load. Ideally, using the masked load still
reduces unnecessary memory traffic, and so should be preferred. If we'd rather
do it the other way, flipping the order of the checks is easy.
The LangRef is updated to make explicit that llvm.mem.parallel_loop_access also
implies that if conversion is okay.
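For example, a conditionally-executed load inside such a loop would be
tagged along these lines (a minimal sketch; value names and metadata IDs
are made up):

  for.body:
    %v = load i32, i32* %addr, align 4, !llvm.mem.parallel_loop_access !0
    ; ...
    br i1 %exitcond, label %for.end, label %for.body, !llvm.loop !0

  !0 = distinct !{!0}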
Differential Revision: http://reviews.llvm.org/D19512
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267514 91177308-0d34-0410-b5e6-96231b3b80d8
Eliminate DITypeIdentifierMap and make DITypeRef a thin wrapper around
DIType*. It is no longer legal to refer to a DICompositeType by its
'identifier:', and DIBuilder no longer retains all types with an
'identifier:' automatically.
Aside from the bitcode upgrade, this is mainly removing logic to resolve
an MDString-based reference to an actual DIType. The commits leading
up to this have made the implicit type map in DICompileUnit's
'retainedTypes:' field superfluous.
This does not remove DITypeRef, DIScopeRef, DINodeRef, and
DITypeRefArray, or stop using them in DI-related metadata. Although as
of this commit they aren't serving a useful purpose, there are patches
under review to reuse them for CodeView support.
The tests in LLVM were updated with deref-typerefs.sh, which is attached
to the thread "[RFC] Lazy-loading of debug info metadata":
http://lists.llvm.org/pipermail/llvm-dev/2016-April/098318.html
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267296 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic takes two arguments, ``%ptr`` and ``%offset``. It loads
a 32-bit value from the address ``%ptr + %offset``, adds ``%ptr`` to that
value and returns it. The constant folder specifically recognizes the form of
this intrinsic and the constant initializers it may load from; if a loaded
constant initializer is known to have the form ``i32 trunc(x - %ptr)``,
the intrinsic call is folded to ``x``.
LLVM provides that the calculation of such a constant initializer will
not overflow at link time under the medium code model if ``x`` is an
``unnamed_addr`` function. However, it does not provide this guarantee for
a constant initializer folded into a function body. This intrinsic can be
used to avoid the possibility of overflows when loading from such a constant.
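A minimal sketch of how the intrinsic might be used, assuming the
i32-offset variant and a made-up @get_entry wrapper:

  declare i8* @llvm.load.relative.i32(i8*, i32)

  define i8* @get_entry(i8* %table) {
    ; loads the i32 at %table + 4, adds %table, and returns the result
    %entry = call i8* @llvm.load.relative.i32(i8* %table, i32 4)
    ret i8* %entry
  }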
Differential Revision: http://reviews.llvm.org/D18367
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@267223 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
LLVMAttribute has outlived its utility and is becoming a problem for C API
users that want to use all the LLVM attributes. In order to help move away
from LLVMAttribute in a smooth manner, this diff introduces
LLVMGetAttrKindIDInContext, which can be used instead of the enum values.
See D18749 for reference.
Reviewers: Wallbraker, whitequark, joker.eph, echristo, rafael
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D19081
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@266842 91177308-0d34-0410-b5e6-96231b3b80d8
Both AArch64 and ARM support llvm.<arch>.thread.pointer intrinsics that
just return the thread pointer. I have a pending patch that does the same
for SystemZ (D19054), and there are many more targets that could benefit
from one.
This patch merges the ARM and AArch64 intrinsics into a single target
independent one that will also be used by subsequent targets.
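A minimal sketch of the merged intrinsic (the wrapper function is made
up):

  declare i8* @llvm.thread.pointer()

  define i8* @get_tp() {
    %tp = call i8* @llvm.thread.pointer()
    ret i8* %tp
  }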
Differential Revision: http://reviews.llvm.org/D19098
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@266818 91177308-0d34-0410-b5e6-96231b3b80d8
With this change, ideally an IR pass can always generate an
llvm.stackguard call to get the stack guard; but for now there are
still IR-form stack guard customizations around (see getIRStackGuard()).
Future SSP customization should go through LOAD_STACK_GUARD.
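A minimal sketch of what an IR pass would emit (the wrapper function is
made up):

  declare i8* @llvm.stackguard()

  define i8* @get_guard() {
    %g = call i8* @llvm.stackguard()
    ret i8* %g
  }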
There is a behavior change: stack guard values are not CSEed anymore,
since we should never reuse the value in case it has been spilled (and
corrupted). See ssp-guard-spill.ll. This also causes changes in stack
size and codegen in the X86 and AArch64 test cases.
Ideally we'd like to know if the guard created in llvm.stackprotector() gets
spilled or not. If the value is spilled, discard the value and reload the
stack guard; otherwise reuse the value. This can be done by teaching the
register allocator how to rematerialize LOAD_STACK_GUARD and force a
rematerialization (which seems hard), or by checking for spilling in
expandPostRAPseudo. It only makes sense when the stack guard is a global
variable, which requires more instructions to load. Anyway, this seems to be
out of the scope of the current patch.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@266806 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The `"patchable-function"` attribute can be used by an LLVM client to
influence LLVM's code generation in ways that makes the generated code
easily patchable at runtime (for instance, to redirect control).
Right now only one patchability scheme is supported,
`"prologue-short-redirect"`, but this can be expanded in the future.
Reviewers: joker.eph, rnk, echristo, dberris
Subscribers: joker.eph, echristo, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D19046
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@266715 91177308-0d34-0410-b5e6-96231b3b80d8
Rather than relying on the structural equivalence of DICompositeType to
merge type definitions, use an explicit map on the LLVMContext that
LLParser and BitcodeReader consult when constructing new nodes.
Each non-forward-declaration DICompositeType with a non-empty
'identifier:' field is stored/loaded from the type map, and the first
definition will "win".
This map is opt-in: clients that expect ODR types from different modules
to be merged must call LLVMContext::ensureDITypeMap.
- Clients that just happen to load more than one Module in the same
LLVMContext won't magically merge types.
- Clients (like LTO) that want to continue to merge types based on ODR
identifiers should opt in immediately.
I have updated LTOCodeGenerator.cpp, the two "linking" spots in
gold-plugin.cpp, and llvm-link (unless -disable-debug-info-type-map) to
set this.
With this in place, it will be straightforward to remove the DITypeRef
concept (i.e., referencing types by their 'identifier:' string rather
than pointing at them directly).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@266549 91177308-0d34-0410-b5e6-96231b3b80d8
Merge members that are describing the same member of the same ODR type,
even if other bits differ. If the file or line differ, we don't care;
if anything else differs, it's an ODR violation (and we still don't
really care).
For DISubprogram declarations, this looks at the LinkageName and Scope.
For DW_TAG_member instances of DIDerivedType, this looks at the Name and
Scope. In both cases, we know that the Scope follows ODR rules if it
has a non-empty identifier.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@266548 91177308-0d34-0410-b5e6-96231b3b80d8