GenericIndirectStubsInfo.
This will allow architecture support classes for other architectures to re-use
this code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259549 91177308-0d34-0410-b5e6-96231b3b80d8
CodeView requires us to accurately describe the extent of the inlined
code. We did this by grabbing the next debug location in source order
and using *that* to denote where we stopped inlining. However, this is
not sufficient or correct in instances where there is no next debug
location or the next debug location belongs to the start of another
function.
To get this correct, use the end symbol of the function to denote the
last possible place the inlining could have stopped at.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259548 91177308-0d34-0410-b5e6-96231b3b80d8
When promoting allocas to LDS, we know we are indexing
into a specific area just created, and the calculation
will also never overflow.
Also emit some of the muls with nsw and nuw, because instcombine
already infers this from the range metadata. I think
putting this on the other adds and muls might be OK too,
but I'm not 100% sure.
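For reference, a rough sketch (not the pass itself) of what emitting that
index math with the no-wrap flags looks like through IRBuilder; the function
and value names below are made up:

  #include "llvm/IR/IRBuilder.h"

  using namespace llvm;

  // Illustrative only: build index = id * stride + offset with both no-wrap
  // flags set, which is legal once the pass has proven the result stays in
  // bounds of the newly created LDS area.
  static Value *buildLDSIndex(IRBuilder<> &B, Value *Id, Value *Stride,
                              Value *Offset) {
    Value *Mul = B.CreateMul(Id, Stride, "lds.idx.mul",
                             /*HasNUW=*/true, /*HasNSW=*/true);
    return B.CreateAdd(Mul, Offset, "lds.idx",
                       /*HasNUW=*/true, /*HasNSW=*/true);
  }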
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259545 91177308-0d34-0410-b5e6-96231b3b80d8
This directive emits the binary annotations that describe line and code
deltas in inlined call sites. Single-stepping through inlined frames in
WinDbg now works.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259535 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Enables eip-based addressing, e.g.,
lea constant(%eip), %rax
lea constant(%eip), %eax
in MC (used for the x32 ABI). EIP-based addressing is also valid in x86_64,
so it is left enabled for that architecture as well.
Patch by João Porto
Differential Revision: http://reviews.llvm.org/D16581
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259528 91177308-0d34-0410-b5e6-96231b3b80d8
Re-commit of r258951 after fixing a layering violation.
The BPF and WebAssembly backends had identical code for emitting errors
for unsupported features, and AMDGPU had very similar code. This merges
them all into one DiagnosticInfo subclass, that can be used by any
backend.
There should be minimal functional changes here, but some AMDGPU tests
have been updated for the new format of errors (AMDGPU previously used a
slightly different format from BPF and WebAssembly). The AMDGPU error messages will
now benefit from having precise source locations when debug info is
available.
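As a rough illustration of the intended usage (the exact constructor
arguments below are an assumption, not copied from the header), a backend
could report an unsupported construct like this:

  #include "llvm/ADT/Twine.h"
  #include "llvm/IR/DiagnosticInfo.h"
  #include "llvm/IR/Function.h"

  using namespace llvm;

  // Sketch only: hand the shared DiagnosticInfo subclass to the context,
  // which dispatches it to whatever diagnostic handler the frontend installed.
  static void reportUnsupported(const Function &Fn, const Twine &Msg,
                                DebugLoc DL) {
    Fn.getContext().diagnose(DiagnosticInfoUnsupported(Fn, Msg, DL));
  }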
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259498 91177308-0d34-0410-b5e6-96231b3b80d8
description and changed the regression test accordingly.
The default configuration of a Cortex-R7 is to implement the
VFPv3-D16 architecture, and the feature line as previously written
was too restrictive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259480 91177308-0d34-0410-b5e6-96231b3b80d8
When rematerializing a computation by replacing the copy, use the copy's
location. The location of the copy is more representative of the
original program.
This partially fixes PR10003.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259469 91177308-0d34-0410-b5e6-96231b3b80d8
differentiate between indirect references to functions and direct calls.
This doesn't do a whole lot yet other than change the output produced
by the analysis, but it lays the groundwork for a very major change I'm
working on next: teaching the call graph to actually be a call graph,
modeling *both* the indirect reference graph and the call graph
simultaneously. More details on that in the next patch though.
The rest of this is essentially a bunch of over-engineering that won't
be interesting until the next patch. But this also isolates essentially
all of the churn necessary to introduce the edge abstraction from the
very important behavior change necessary in order to separately model
the two graphs. So it should make review of the subsequent patch a bit
easier at the cost of making this patch seem poorly motivated. ;]
Differential Revision: http://reviews.llvm.org/D16038
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259463 91177308-0d34-0410-b5e6-96231b3b80d8
LVI has several separate sources of facts - edge-local conditions, recursive queries, assumes, and control-independent value facts - which all apply to the same value at the same location. The existing implementation was very conservative about exploiting all of these facts at once.
This change introduces an "intersect" function specifically to abstract the action of picking a good set of facts from all of the separate facts given. At the moment, this function is relatively simple (i.e. mostly just reuses the bits which were already there), but even the minor additions reveal the inherent power. For example, JumpThreading is now capable of doing an inductive proof that a particular value is always positive and removing a half range check.
I'm currently only using the new intersect function in one place. If folks are happy with the direction of the work, I plan on making a series of small changes without review to replace mergeIn with intersect at all the appropriate places.
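To make the idea concrete (this is not the LVI code itself), combining two
independently derived facts about the same value is just a range
intersection, and the result is never less precise than either input:

  #include "llvm/IR/ConstantRange.h"

  using namespace llvm;

  // Sketch only: e.g. "x is non-negative" (from an assume) intersected with
  // "x < 100" (from an edge condition) yields [0, 100).
  static ConstantRange intersectFacts(const ConstantRange &FromAssume,
                                      const ConstantRange &FromEdge) {
    return FromAssume.intersectWith(FromEdge);
  }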
Differential Revision: http://reviews.llvm.org/D14476
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259461 91177308-0d34-0410-b5e6-96231b3b80d8
Fix a crash in `getMemOpBaseRegImmOfs` that happens if the base of
`MemOp` is a frame index memory operand. The fix is to have
`getMemOpBaseRegImmOfs` bail out in such cases. We can possibly be more
clever here, if needed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259456 91177308-0d34-0410-b5e6-96231b3b80d8
Officially, we don't acknowledge non-default configurations of MXCSR,
as getting there would require usage of the FENV_ACCESS pragma (at
least insofar as rounding mode is concerned).
We don't support the pragma, so we can assume that the default
rounding mode - round to nearest, ties to even - is always used.
However, it's inconsistent with the rest of the instruction set,
where MXCSR is always effective (unless otherwise specified).
Also, it's an unnecessary obstacle to the few brave souls that use
fenv.h with LLVM.
Avoid the hard-coded rounding mode for fp_to_f16; use MXCSR instead.
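For illustration, the same distinction exists at the intrinsic level; a
sketch assuming an F16C-capable target and -mf16c (function names are made
up):

  #include <immintrin.h>

  // Defers to the dynamic rounding mode held in MXCSR, which fesetround()
  // (under #pragma STDC FENV_ACCESS) would change.
  static __m128i FloatsToHalves(__m128 V) {
    return _mm_cvtps_ph(V, _MM_FROUND_CUR_DIRECTION);
  }

  // Hard-codes round to nearest, ties to even, ignoring MXCSR - the behavior
  // the old lowering baked in unconditionally.
  static __m128i FloatsToHalvesNearest(__m128 V) {
    return _mm_cvtps_ph(V, _MM_FROUND_TO_NEAREST_INT);
  }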
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259448 91177308-0d34-0410-b5e6-96231b3b80d8
This routine was returning Undefined for most queries. This was utterly wrong. Amusingly, we do not appear to have any callers of this which are actually trying to exploit unreachable code or this would have broken the world.
A better approach would be to explicitly describe the intersection of facts. That's blocked behind http://reviews.llvm.org/D14476 and I wanted to fix the current bug.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259446 91177308-0d34-0410-b5e6-96231b3b80d8
I'll submit a test case shortly which covers this, but it's causing clang self-host problems in the builders so I wanted to get it removed.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259432 91177308-0d34-0410-b5e6-96231b3b80d8
Teach LVI to handle select instructions in the exact same way it handles PHI nodes. This is useful since various parts of the optimizer convert PHI nodes into selects and we don't want these transformations to cause inferior optimization.
Note that this patch does nothing to exploit the implied constraint on the inputs represented by the select condition itself. That will be a later patch and is blocked on http://reviews.llvm.org/D14476
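As a sketch of the idea (not the implementation itself), treating
`select %c, %a, %b` like a two-input PHI just means merging what is known
about the two operands:

  #include "llvm/IR/ConstantRange.h"

  using namespace llvm;

  // Sketch only: the select's range is the union of the ranges known for its
  // two operands, exactly as a PHI merges its incoming values.
  static ConstantRange mergeSelectOperands(const ConstantRange &TrueRange,
                                           const ConstantRange &FalseRange) {
    return TrueRange.unionWith(FalseRange);
  }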
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259429 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
If the normal destination of the invoke or the parent block of the call site is unreachable-terminated, there is little point in inlining the call site unless there is literally zero cost. Unlike my previous change (D15289), this change specifically handles call sites that are followed by an unreachable: in the same basic block for a call, or in the normal destination for an invoke. This change could be a reasonable first step to conservatively inline call sites leading to an unreachable-terminated block while BFI / BPI is not yet available in the inliner.
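As a rough sketch (not the actual inliner change), the property being checked
is simply whether the code after the call site is unreachable-terminated:

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  // Sketch only: for an invoke look at the normal destination, for a plain
  // call look at the terminator of the block containing it.
  static bool leadsToUnreachable(const Instruction *Call) {
    if (const auto *II = dyn_cast<InvokeInst>(Call))
      return isa<UnreachableInst>(II->getNormalDest()->getTerminator());
    return isa<UnreachableInst>(Call->getParent()->getTerminator());
  }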
Reviewers: manmanren, majnemer, hfinkel, davidxl, mcrosier, dblaikie, eraman
Subscribers: dblaikie, davidxl, mcrosier, llvm-commits
Differential Revision: http://reviews.llvm.org/D16616
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259403 91177308-0d34-0410-b5e6-96231b3b80d8
- ScalarEvolution::isKnownPredicateViaConstantRanges duplicates some
logic already present in ConstantRange; use ConstantRange for those
bits (a sketch of the range-based check follows the list).
- In some cases ScalarEvolution::isKnownPredicateViaConstantRanges
returns `false` to mean "definitely false" (e.g. see the
`LHSRange.getSignedMin().sge(RHSRange.getSignedMax())` case for
`ICmpInst::ICMP_SLT`), but for `isKnownPredicateViaConstantRanges`,
`false` actually means "don't know". Get rid of this extra bit of
code to avoid confusion.
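A rough sketch of that range-based check (not the SCEV code verbatim): the
predicate is known to hold when every value the LHS can take satisfies it
against every value the RHS can take.

  #include "llvm/IR/ConstantRange.h"
  #include "llvm/IR/InstrTypes.h"

  using namespace llvm;

  // Sketch only: makeSatisfyingICmpRegion(Pred, RHS) is the set of values
  // that satisfy `Pred RHS` for every element of RHS; containment of the LHS
  // range in that set proves the predicate. Note that false here means
  // "don't know", matching the second point above.
  static bool isKnownViaRanges(CmpInst::Predicate Pred,
                               const ConstantRange &LHS,
                               const ConstantRange &RHS) {
    return ConstantRange::makeSatisfyingICmpRegion(Pred, RHS).contains(LHS);
  }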
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259401 91177308-0d34-0410-b5e6-96231b3b80d8
Make it obvious that it uses constant ranges, and use `Via` instead of
`With`, like other similar functions in SCEV.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259400 91177308-0d34-0410-b5e6-96231b3b80d8
If a target can only emit 8-bit data, we would loop forever in
EmitValueImpl, since it tries to split a 32-bit value into one chunk of 32 bits.
No test, since all current targets can emit 32 bits at a time.
Patch by Alexandru Guduleasa!
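As a standalone illustration of the fix (not the MCStreamer code itself),
capping each chunk at the largest size the target can emit directly is what
guarantees the loop makes progress:

  #include <algorithm>
  #include <cstdint>
  #include <cstdio>

  // Sketch only: split a Size-byte little-endian value into chunks of at
  // most MaxChunk bytes, the largest size the target can emit directly. The
  // bug being fixed was asking for a single 32-bit chunk on a target that
  // can only emit 8 bits at a time.
  static void emitValueChunked(uint64_t Value, unsigned Size,
                               unsigned MaxChunk) {
    for (unsigned Done = 0; Done != Size;) {
      unsigned Chunk = std::min(MaxChunk, Size - Done);
      uint64_t Mask = Chunk >= 8 ? ~0ULL : ((1ULL << (8 * Chunk)) - 1);
      std::printf("emit %u byte(s): 0x%llx\n", Chunk,
                  (unsigned long long)((Value >> (8 * Done)) & Mask));
      Done += Chunk;
    }
  }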
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@259399 91177308-0d34-0410-b5e6-96231b3b80d8