GCC supports the target armv7ve, which is armv7-a with the
virtualization extensions. This change adds support for it in LLVM for
GCC compatibility.
Also remove the redundant FeatureHWDiv and FeatureHWDivARM from a few
CPU models, as these are implied automatically by FeatureVirtualization.
Patch by Manoj Gupta.
Differential Revision: https://reviews.llvm.org/D29472
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294661 91177308-0d34-0410-b5e6-96231b3b80d8
This teaches the LazyCallGraph to replace a function without
disturbing the graph or having to update edges.
This is motivated by porting argument promotion to the new pass manager.
Because of how LLVM IR Function objects work, in order to change their
signature a new object needs to be created. This is efficient and
straight forward in the IR but previously was very hard to implement in
LCG. We could easily replace the function a node in the graph
represents. The challenging part is how to handle updating the edges in
the graph.
LCG previously used an edge to a raw function to represent a node that
had not yet been scanned for calls and references. This was the core
of its laziness. However, that model causes this kind of update to be
very hard:
1) The keys used to look up an edge need to be `Function*`s that would all
need to be updated when we update the node.
2) There will be some unknown number of edges that haven't transitioned
from `Function*` edges to `Node*` edges.
All of this complexity isn't necessary. Instead, we can always build
a node around any function, always pointing edges at it and always using
it as the key to look up an edge. To maintain the laziness, we need to
sink the *edges* of a node into a secondary object and explicitly model
transitioning a node from empty to populated by scanning the function.
This design seems much cleaner in a number of ways, but importantly
there is now exactly *one* place where the `Function*` has to be
updated!
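To make the shape of this concrete, here is a minimal, self-contained
sketch of a lazily populated node. The names (`Node`, `EdgeSequence`,
`populate`) mirror the ones discussed here, but the bodies are
illustrative, not the actual LazyCallGraph implementation:

  #include <optional>
  #include <vector>

  struct Function; // stand-in for llvm::Function

  struct Node;

  // The edges live in a secondary object so a Node can exist for any
  // Function before that function has ever been scanned.
  struct EdgeSequence {
    std::vector<Node *> Edges;
  };

  struct Node {
    Function *F;                    // the *one* place the Function* lives
    std::optional<EdgeSequence> ES; // empty until populated

    explicit Node(Function &F) : F(&F) {}

    bool isPopulated() const { return ES.has_value(); }

    // Transition from empty to populated by scanning the function.
    EdgeSequence &populate() {
      if (!ES) {
        ES.emplace();
        // ...scan *F for calls and references, appending Node*s to
        // ES->Edges (elided in this sketch)...
      }
      return *ES;
    }

    // Replacing the function touches only this node: edges point at the
    // Node itself, so none of them need updating.
    void replaceFunction(Function &NewF) { F = &NewF; }
  };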
Some other cleanups that fall out of this include having something to
model the *entry* edges more accurately. Rather than hand-rolling parts
of the node in the graph itself, we have an explicit `EdgeSequence`
object that gives us exactly the functionality needed. We also have
a consistent place to define the edge iterators and can use them for
both the entry edges and the internal edges of the graph.
The API used to model the separation between a node and its edges is
intentionally very thin as most clients are expected to deal with nodes
that have populated edges. We model this exactly the way an optional
behaves, with an additional method to populate the edges when that is
a reasonable thing for a client to do. This is based on API design
suggestions from Richard Smith and David Blaikie, credit goes to them
for helping pick how to model this without it being either too explicit
or too implicit.
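In terms of the sketch above, a client walking a node looks roughly
like this (again illustrative, not the exact LazyCallGraph API):

  #include <cstdio>

  void visitCallees(Node &N) {
    // Most clients want populated edges; populate() scans on first use
    // and is a no-op afterwards.
    for (Node *Callee : N.populate().Edges)
      std::printf("edge to %p\n", (void *)Callee);

    // A client that must not force a scan can test first, exactly as
    // with an optional.
    if (!N.isPopulated()) {
      // skip, defer, or populate explicitly
    }
  }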
The patch is somewhat noisy due to shifting around iterator types and
new syntax for walking the edges of a node, but most of the
functionality change is in the `Edge`, `EdgeSequence`, and `Node` types.
Differential Revision: https://reviews.llvm.org/D29577
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294653 91177308-0d34-0410-b5e6-96231b3b80d8
This fold already existed for vectors but only when 'C1' was a splat
constant (but 'C2' could be any constant).
There were no tests for any vector constants, so I'm adding a test
that shows non-splat constants for both operands.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294650 91177308-0d34-0410-b5e6-96231b3b80d8
This requires that we communicate to X86InstrInfo::optimizeCompareInstr
that the second operand is neither a register nor an immediate. The way we
do that is by setting CmpMask to zero.
Note that there were already instructions where the second operand was not a
register nor an immediate, namely X86::SUB*rm, so also set CmpMask to zero
for those instructions. This seems like a latent bug, but I was unable to
trigger it.
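For illustration, a hedged sketch of the convention inside the
analyzeCompare hook; the opcodes are real, but treat the surrounding
code and exact signature as approximate rather than the verbatim patch:

  // Sketch of the relevant piece of X86InstrInfo::analyzeCompare.
  // (In-tree sketch: needs the X86 target's private headers.)
  bool analyzeCompareSketch(const MachineInstr &MI, unsigned &SrcReg,
                            unsigned &SrcReg2, int &CmpMask, int &CmpValue) {
    switch (MI.getOpcode()) {
    case X86::SUB64rm: // second operand is a memory reference: neither a
    case X86::SUB32rm: // register nor an immediate, so report
    case X86::SUB16rm: // CmpMask == CmpValue == 0 and let
    case X86::SUB8rm:  // optimizeCompareInstr know not to reason about it.
      SrcReg = MI.getOperand(1).getReg();
      SrcReg2 = 0;
      CmpMask = 0;
      CmpValue = 0;
      return true;
    }
    return false;
  }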
Differential Revision: https://reviews.llvm.org/D28621
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294634 91177308-0d34-0410-b5e6-96231b3b80d8
This is a stub for a new concrete implementation of IPDBRawSymbol.
Nothing uses this implementation yet. My plan is to
locally switch llvm-pdbdump from the DIA reader to the Native one
and flesh out the implementations of these method stubs in the order
they're needed.
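The pattern being set up is the usual one for implementing a large
interface incrementally; a reduced, self-contained sketch (the real
IPDBRawSymbol surface is much larger, and these two methods are just
examples):

  #include <cstdint>
  #include <string>

  // Stand-in for an interface like IPDBRawSymbol.
  class RawSymbolBase {
  public:
    virtual ~RawSymbolBase() = default;
    virtual uint32_t getSymIndexId() const = 0;
    virtual std::string getName() const = 0;
  };

  // The "native" stub: every override has a trivial body so the class
  // compiles and links, and each is fleshed out only when a caller in
  // the dumper actually needs it.
  class NativeRawSymbolStub : public RawSymbolBase {
  public:
    uint32_t getSymIndexId() const override { return 0; } // TODO
    std::string getName() const override { return {}; }   // TODO
  };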
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294633 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Convert all obvious node_begin/node_end and child_begin/child_end
pairs to range based for.
Sending for review in case someone has a good idea how to make
graph_children inferable. It looks like it would require
changing GraphTraits to take two arguments or something. I presume
inference does not happen because it would have to check every
GraphTraits in the world to see if the noderef types matched.
Note: This change was 3-staged with clang as well, which uses
Dominators/etc from LLVM.
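For reference, the mechanical shape of the conversion, using a toy
graph type (nodes() here is a hypothetical range accessor; in LLVM the
equivalent ranges are typically built with llvm::make_range):

  #include <vector>

  struct ToyGraph {
    std::vector<int> NodeStorage;
    using node_iterator = std::vector<int>::iterator;
    node_iterator node_begin() { return NodeStorage.begin(); }
    node_iterator node_end() { return NodeStorage.end(); }
    std::vector<int> &nodes() { return NodeStorage; } // hypothetical
  };

  void example(ToyGraph &G) {
    // Before: explicit begin/end iterator pair.
    for (ToyGraph::node_iterator I = G.node_begin(), E = G.node_end();
         I != E; ++I)
      (void)*I;

    // After: range-based for.
    for (int N : G.nodes())
      (void)N;
  }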
Reviewers: chandlerc, tstellarAMD, dblaikie, rsmith
Subscribers: arsenm, llvm-commits, nhaehnle
Differential Revision: https://reviews.llvm.org/D29767
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294620 91177308-0d34-0410-b5e6-96231b3b80d8
r288399 introduced the DIEUnit class, and in the process broke
the corner case where dsymutil generates an empty CU during an
LTO link. This restores the logic and adds a test for the corner
case.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294618 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch allows JumpThreading to also thread through guards.
In effect, guard(cond) is equivalent to the following construct:
if (cond) { do something } else { deoptimize }
Yet it is not explicitly converted into ifs before lowering.
This patch enables early threading through guards in simple cases.
Currently it covers the following situation:
if (cond1) {
  // code A
} else {
  // code B
}
// code C
guard(cond2)
// code D
If there is an implication cond1 => cond2 or !cond1 => cond2, we can transform
this construction into the following:
if (cond1) {
  // code A
  // code C
} else {
  // code B
  // code C
  guard(cond2)
}
// code D
Thus, the guard is removed from one of the execution branches.
Patch by Max Kazantsev!
Reviewers: reames, apilipenko, igor-laevsky, anna, sanjoy
Reviewed By: sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29620
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294617 91177308-0d34-0410-b5e6-96231b3b80d8
Passing the --restrict flag to the coverage prep script before other
positional arguments is wrong, because it prevents the argparse module
from distinguishing arguments to --restrict from positional arguments.
Pointed out by Sean Callanan!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294616 91177308-0d34-0410-b5e6-96231b3b80d8
ld64 requires its archive members to be 8-byte aligned for 64-bit
content and 4-byte aligned for 32-bit content. Opt for the larger
alignment requirement. This ensures that ld64 can consume archives
generated by llvm-ar.
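A minimal sketch of the padding computation this implies when laying
out each member (llvm-ar's actual code differs in detail):

  #include <cstdint>

  // ld64 wants 8-byte alignment for 64-bit content and 4-byte for
  // 32-bit content; always padding to 8 satisfies both.
  constexpr uint64_t MemberAlignment = 8;

  // Bytes of padding needed so the next member starts aligned.
  uint64_t paddingFor(uint64_t Pos) {
    return (MemberAlignment - (Pos % MemberAlignment)) % MemberAlignment;
  }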
Thanks to Kevin Enderby for the hint about the ld64/cctools behaviours!
Resolves PR28361!
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294615 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Fix two bugs in SelectionDAGBuilder::FindMergedConditions reported by
Mikael Holmen. Handle a non-canonicalized xor-based not operation
correctly (the code assumed operand 0 was always the non-constant
operand), and check that the negated condition is in the same block as
the original and/or instruction (as is already done for and/or
operands) before proceeding with the optimization.
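A hedged sketch of the first fix at the IR level; FindMergedConditions
does this matching by hand, but PatternMatch shows the intent (m_Not
matches an xor against an all-ones constant, i.e. true for i1, in
either operand position):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/PatternMatch.h"

  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Return the negated condition X from (xor X, true), accepting the
  // constant in either operand position, but only when the xor lives in
  // the block currently being lowered.
  static Value *getNegatedCondition(Value *Cond, const BasicBlock *CurBB) {
    Value *X;
    if (match(Cond, m_Not(m_Value(X))))
      if (auto *I = dyn_cast<Instruction>(Cond))
        if (I->getParent() == CurBB)
          return X;
    return nullptr;
  }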
Reviewers: bogner, MatzeB, qcolombet
Subscribers: mcrosier, uabelho, llvm-commits
Differential Revision: https://reviews.llvm.org/D29680
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294605 91177308-0d34-0410-b5e6-96231b3b80d8
This patch sets the global property indicating that target registration is complete for standalone sub-project builds.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294602 91177308-0d34-0410-b5e6-96231b3b80d8
Fix the ObjC metadata handling so that it works when the ObjC metadata sections end up in the
__DATA_CONST or __DATA_DIRTY segments.
rdar://26315238
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294599 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Documentation update to reflect the changes that occurred in the allocator:
- additional architectures support;
- modification of the header;
- default values of the options for 32- and 64-bit.
Reviewers: kcc, alekseyshl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D29592
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294595 91177308-0d34-0410-b5e6-96231b3b80d8
Add a note about the reason for the divergence from the specification
for ld64. Addresses post-commit review comments from Davide. NFC.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294594 91177308-0d34-0410-b5e6-96231b3b80d8
If some of the trailing or leading bytes of a load combine pattern are zeroes, we can combine the pattern to a load + zext and shift. Currently we don't support this, so the tests check the current codegen without load combine. This change will make the upcoming patch that supports this kind of combine a bit clearer.
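As a concrete little-endian illustration of the kind of pattern meant
here (the combined form is what the follow-up patch would produce):

  #include <cstdint>
  #include <cstring>

  // Trailing bytes are zero: only the two low bytes of the i32 result
  // come from memory.
  uint32_t beforeCombine(const uint8_t *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8);
  }

  // Combined form: one narrow load plus zext (plus a shift if the
  // *leading* bytes had been the zero ones instead).
  uint32_t afterCombine(const uint8_t *p) {
    uint16_t v;
    std::memcpy(&v, p, sizeof(v)); // i16 load
    return v;                      // zext i16 -> i32
  }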
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294591 91177308-0d34-0410-b5e6-96231b3b80d8
Stack Smash Protection is not completely free, so in hot code the overhead it introduces can cause performance issues. By adding diagnostic information about which functions have SSP and why, a user can quickly determine what they can do to stop SSP from being applied to a specific hot function.
This change adds an SSP-specific DiagnosticInfo class and uses of it to the Stack Protection code. A subsequent change to clang will cause the remarks to be emitted when enabled.
Patch by: James Henderson
Differential Revision: https://reviews.llvm.org/D29023
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294590 91177308-0d34-0410-b5e6-96231b3b80d8
1. Added missing substitutions to the documentation in docs/TestingGuide.rst
2. Modified docs/CommandGuide/lit.rst to only document the "base" set of substitutions and to refer the reader to docs/TestingGuide.rst for more detailed info on substitutions.
Patch by bd1976llvm
Differential Revision: https://reviews.llvm.org/D29281
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294586 91177308-0d34-0410-b5e6-96231b3b80d8
This applies to both aapcscc and aapcs_vfpcc. We currently filter out soft float targets
because we don't support libcalls yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294584 91177308-0d34-0410-b5e6-96231b3b80d8
LowerBuildVectorv16i8/LowerBuildVectorv8i16 insert values into an UNDEF vector if the build vector doesn't contain any zero elements, resulting in register dependencies with a previous use of the register.
This patch attempts to break the register dependency by either always zeroing the vector beforehand or (if we're inserting to the 0th element) by using VZEXT_MOVL(SCALAR_TO_VECTOR(i32 AEXT(Elt))) which lowers to (V)MOVD and performs a similar function. Additionally, (V)MOVD is a shorter instruction than PINSRB/PINSRW. We already do something similar for SSE41 PINSRD.
On pre-SSE41 LowerBuildVectorv16i8 we go a little further and use VZEXT_MOVL(SCALAR_TO_VECTOR(i32 ZEXT(Elt))) if the build vector contains zeros to avoid the vector zeroing at the cost of a scalar zero extension, which can probably be brought over to the other cases in a future patch in some cases (load folding etc.)
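At the intrinsics level the lane-0 case looks roughly like this;
_mm_cvtsi32_si128 compiles to (V)MOVD, which zeroes the upper lanes and
inserts the element in one shot (an illustration only; the patch works
on the DAG, not on intrinsics):

  #include <immintrin.h>
  #include <cstdint>

  // Instead of PINSRB into an undef (or separately zeroed) vector,
  // (V)MOVD both zeroes the register and inserts the element.
  __m128i insertLane0(uint8_t Elt) {
    return _mm_cvtsi32_si128((int)Elt); // VZEXT_MOVL(SCALAR_TO_VECTOR(...))
  }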
Differential Revision: https://reviews.llvm.org/D29720
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294581 91177308-0d34-0410-b5e6-96231b3b80d8
We only implemented it for one of the three HLE instructions, and that instruction is also covered by the RTM flag. Clang only exposes the RTM flag on its command line.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294562 91177308-0d34-0410-b5e6-96231b3b80d8
If we implement intrinsics for their instructions in the future, the feature flags can be added back with proper testing.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@294561 91177308-0d34-0410-b5e6-96231b3b80d8