Just like the existing find_as() method, the new insert_as() accepts
an extra parameter which is used as a key to find the bucket in the
map.
When creating a Constant, we want to check the map before actually
creating the object. In this case we have to perform two queries to
the map, and this extra parameter can save recomputing the hash value
for the second query.
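As a self-contained illustration (the container and method signatures below are simplified stand-ins for this sketch, not the actual LLVM uniquing map), the check-then-create pattern looks roughly like this, with the hash computed once and reused by both queries:

  #include <cstdio>
  #include <functional>
  #include <string>
  #include <unordered_map>

  // Toy uniquing set: values keyed by a precomputed hash (collisions are
  // ignored here for brevity).
  struct SimpleUniqueSet {
    std::unordered_map<size_t, std::string> Buckets;
    const std::string *find_as(size_t Hash) const {
      auto It = Buckets.find(Hash);
      return It == Buckets.end() ? nullptr : &It->second;
    }
    const std::string &insert_as(std::string V, size_t Hash) {
      return Buckets.emplace(Hash, std::move(V)).first->second;
    }
  };

  int main() {
    SimpleUniqueSet Set;
    std::string Key = "i32 42";
    size_t Hash = std::hash<std::string>{}(Key); // computed once

    // First query: is the "constant" already uniqued?
    if (const std::string *Existing = Set.find_as(Hash)) {
      std::printf("reused %s\n", Existing->c_str());
      return 0;
    }
    // Second query: insert the newly created object, reusing Hash
    // instead of recomputing it from Key.
    std::printf("created %s\n", Set.insert_as(Key, Hash).c_str());
  }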
This is a reapply of r260458, that was reverted because it was
suspected to be the cause of instability of an internal bot, but
wasn't confirmed.
Differential Revision: http://reviews.llvm.org/D16268
From: Mehdi Amini <mehdi.amini@apple.com>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262812 91177308-0d34-0410-b5e6-96231b3b80d8
We may assume the type of the 1st argument of PENUMLOADED_MODULES_CALLBACK to be PCSTR; PSTR was only used in the ancient mingw32 headers.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262810 91177308-0d34-0410-b5e6-96231b3b80d8
When div+rem calls on the same arguments are found, the ARM back-end merges the
two calls into one __aeabi_divmod call for values of up to 32 bits. However,
for 64-bit values, which also have a lib call (__aeabi_ldivmod), the calls
were not being merged, so ldivmod was called twice and the temporary results
were spilled, which generated pretty bad code.
This patch legalises 64-bit lib calls for divmod, so that now all the spilling
and the second call are gone. It also relaxes the DivRem combiner a bit on the
legal type check: it was already checking for isLegalOrCustom on every
value, so the extra check for isTypeLegal was redundant.
Second attempt, creating TLI.isOperationCustom like isOperationExpand, to make
sure we only emit valid types or the ones that were explicitly marked as custom.
Now, passing check-all and test-suite on x86, ARM and AArch64.
This patch fixes PR17193 (and a long time FIXME in the tests).
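For reference, here is a source-level example of the pattern this patch targets (illustrative only; whether the combine fires depends on target and options). Both operations use the same operands, so the ARM back-end should now be able to emit a single __aeabi_ldivmod call instead of two calls plus spills:

  #include <cstdint>

  struct DivMod64 {
    int64_t Quot;
    int64_t Rem;
  };

  // '/' and '%' on the same 64-bit operands: a DivRem combine candidate.
  DivMod64 divmod64(int64_t A, int64_t B) {
    return {A / B, A % B};
  }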
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262738 91177308-0d34-0410-b5e6-96231b3b80d8
We have known UB in some ilists where we static cast half nodes to
(larger) derived types and use the address. See llvm.org/PR26753.
This needs to be fixed, but in the meantime it'd be nice if running
ubsan didn't complain. This adds annotations in the two places where
ubsan complains while running check-all of a sanitized clang build.
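For illustration, one generic way to annotate such a function is shown below; whether the tree uses this raw attribute or a wrapper macro is an assumption here, and the toy node types stand in for the real ilist classes.

  #if defined(__clang__)
  // Assumption: plain clang attribute; the tree may use a wrapper macro.
  #define NO_UBSAN __attribute__((no_sanitize("undefined")))
  #else
  #define NO_UBSAN
  #endif

  struct NodeBase { NodeBase *Prev = nullptr; NodeBase *Next = nullptr; };
  struct Node : NodeBase { int Value = 0; };

  // The known-UB pattern: treating a bare half node (sentinel) as the
  // larger derived type just to take its address.
  NO_UBSAN Node *sentinelAsNode(NodeBase &Sentinel) {
    return static_cast<Node *>(&Sentinel);
  }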
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262683 91177308-0d34-0410-b5e6-96231b3b80d8
The vast majority of LiveRanges (ie, 4/5) have exactly 1 segment and 1
value number, and a good chunk of the rest have 2 of each, so
allocating space for 4 is wasteful. This is especially noticeable when
dealing with a very large number of vregs, and I have an internal case
where dropping this to 2 shaves over 5% off of peak memory when
compiling a particularly large function.
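To see why, here is a toy model (sizes and types are illustrative, not LLVM's actual LiveRange layout): inline slots are paid for in every object whether used or not, so shrinking the inline count from 4 to 2 trims every LiveRange.

  #include <cstdio>

  struct Segment { unsigned Start, End, ValNo; };

  // Toy stand-in for SmallVector-style inline storage (no heap spill).
  template <typename T, unsigned N> struct InlineStorage {
    alignas(T) unsigned char Buf[sizeof(T) * N];
    unsigned Size = 0;
  };

  template <unsigned N> struct ToyLiveRange {
    InlineStorage<Segment, N> Segments;
  };

  int main() {
    std::printf("inline 4: %zu bytes, inline 2: %zu bytes\n",
                sizeof(ToyLiveRange<4>), sizeof(ToyLiveRange<2>));
  }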
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262681 91177308-0d34-0410-b5e6-96231b3b80d8
Exploit ScalarEvolution::getRange's newly acquired smartness (since
r262438) by using that to infer nsw and nuw when possible.
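For example, a loop of this flavor is the kind of code that benefits (whether SCEV fires on this exact snippet is not guaranteed; it only sketches the idea): the induction variable's range is bounded by N, which can justify nuw/nsw on the add recurrence.

  // {0,+,1}: bounded by N, so the increment provably cannot wrap.
  int sumPrefix(const int *A, unsigned N) {
    int Sum = 0;
    for (unsigned I = 0; I < N; ++I)
      Sum += A[I];
    return Sum;
  }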
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262639 91177308-0d34-0410-b5e6-96231b3b80d8
This will be used in a later patch to ScalarEvolution. Right now only
the unit tests exercise the newly added code.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262637 91177308-0d34-0410-b5e6-96231b3b80d8
This patch provides the following infrastructure for PGO enhancements in the inliner:
- Enable the use of block-level profile information in the inliner
- Incrementally update block frequency information during inlining
- Update the function entry counts of callees when they get inlined into callers (see the sketch below).
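A toy illustration of that last item (the helper name is made up, not LLVM's actual update code): when a call site with profile count CallSiteCount is inlined, that many entries no longer reach the out-of-line copy of the callee.

  #include <algorithm>
  #include <cstdint>
  #include <cstdio>

  // Hypothetical helper: the remaining out-of-line copy is only entered
  // by the call sites that were not inlined.
  uint64_t entryCountAfterInlining(uint64_t CalleeEntryCount,
                                   uint64_t CallSiteCount) {
    return CalleeEntryCount - std::min(CalleeEntryCount, CallSiteCount);
  }

  int main() {
    // Callee entered 1000 times overall; the inlined call site accounted
    // for 400 of those entries.
    std::printf("%llu\n",
                (unsigned long long)entryCountAfterInlining(1000, 400));
  }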
Differential Revision: http://reviews.llvm.org/D16381
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262636 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: With the discriminator, LineLocation can uniquely identify a callsite without the need to specify the callee name. Remove the callee function name from the key, and put it in the value (FunctionSamples).
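Roughly, the data layout change looks like this (simplified stand-ins for the real sample-profile classes, not their actual definitions):

  #include <cstdint>
  #include <map>
  #include <string>
  #include <tuple>

  struct LineLocation {
    uint32_t LineOffset;
    uint32_t Discriminator; // makes the key unique without the callee name
    bool operator<(const LineLocation &O) const {
      return std::tie(LineOffset, Discriminator) <
             std::tie(O.LineOffset, O.Discriminator);
    }
  };

  struct FunctionSamplesSketch {
    std::string CalleeName; // moved out of the key, into the value
    uint64_t TotalSamples = 0;
  };

  using CallsiteSampleMap = std::map<LineLocation, FunctionSamplesSketch>;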
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17827
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262634 91177308-0d34-0410-b5e6-96231b3b80d8
Catch objects with a displacement of zero do not initialize a catch
object. The displacement is relative to %rsp at the end of the
function's prologue for x86_64 targets.
If we place an object at the top of the stack, we will end up with a
displacement of zero, resulting in our catch object remaining
uninitialized.
Address this by creating our catch objects as fixed objects. We will
ensure that the UnwindHelp object is created after the catch objects so
that no catch object will have a displacement of zero.
Differential Revision: http://reviews.llvm.org/D17823
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262546 91177308-0d34-0410-b5e6-96231b3b80d8
The placement new calls here were all calling the allocation function
in RecyclingAllocator/Recycler for SDNode, instead of the function for
the specific subclass we were constructing.
Since this particular allocator always overallocates, it more or less
worked, but it would hide what we're actually doing from any memory
tools. Also, if you tried to change this allocator to something like a
BumpPtrAllocator or MallocAllocator, the compiler would crash horribly
all the time.
Part of llvm.org/PR26808.
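A self-contained sketch of the pattern (toy allocator and node types, not the real SelectionDAG classes): allocating for the base class but placement-new'ing the derived class only appears to work when the allocator overallocates, as the commit notes the real one does.

  #include <new>

  struct ToyAllocator {
    template <typename T> void *Allocate() { return ::operator new(sizeof(T)); }
  };

  struct SDNodeBase { int Opcode = 0; };
  struct BigSDNode : SDNodeBase { long Extra[4] = {}; };

  BigSDNode *makeNode(ToyAllocator &A) {
    // Buggy pattern: void *Mem = A.Allocate<SDNodeBase>();  // too small
    // Fixed pattern: allocate for the type actually being constructed.
    void *Mem = A.Allocate<BigSDNode>();
    return new (Mem) BigSDNode();
  }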
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262500 91177308-0d34-0410-b5e6-96231b3b80d8
This hoists the logic that reformulates AA queries in terms of other
parts of the AA interface out of the base class of every single AA
result object.
Because this logic reformulates the query in terms of some other aspect
of the API, it would easily cause O(n^2) query patterns in alias
analysis. These could in turn be magnified further based on the number
of call arguments, and then further based on the number of AA queries
made for a particular call. This ended up causing problems for Rust that
were actually noticeable enough to get a bug (PR26564) and probably other
places as well.
When originally re-working the AA infrastructure, the desire was to
regularize the pattern of refinement without losing any generality.
While I think it was successful, that is clearly proving to be too
costly. And the cost is needless: we gain no actual improvement from the
generality of letting a direct query to tbaa re-use some other alias
analysis's refinement logic for one of the other APIs, or some such. In
short, this is entirely wasted work.
To the extent possible, delegation to other API surfaces should be done
at the aggregation layer so that we can avoid re-walking the
aggregation. In fact, this significantly simplifies the logic as we no
longer need to smuggle the aggregation layer into each alias analysis
(or the TargetLibraryInfo into each alias analysis just so we can form
argument memory locations!).
However, we also have some delegation logic inside of BasicAA and some
of it even makes sense. When the delegation logic is baking in specific
knowledge of aliasing properties of the LLVM IR, as opposed to simply
reformulating the query to utilize a different alias analysis interface
entry point, it makes a lot of sense to restrict that logic to
a different layer such as BasicAA. So one aspect of the delegation that
was in every AA base class is that when we don't have operand bundles,
we re-use function AA results as a fallback for callsite alias results.
This relies on the IR properties of calls and functions w.r.t. aliasing,
and so seems a better fit to BasicAA. I've lifted the logic up to that
point where it seems to be a natural fit. This still does a bit of
redundant work (we query function attributes twice, once via the
callsite and once via the function AA query) but it is *exactly* twice
here, no more.
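Purely as a sketch with made-up names (not LLVM's AA classes), the fallback described above amounts to something like this, now living in one BasicAA-like place rather than in every result's base class:

  enum class ToyModRef { NoModRef, Ref, Mod, ModRef };

  struct ToyFunction;

  struct ToyCallSite {
    const ToyFunction *Callee = nullptr;
    bool HasOperandBundles = false;
  };

  struct ToyBasicAA {
    ToyModRef functionBehavior(const ToyFunction *) const {
      return ToyModRef::ModRef; // conservative default for the sketch
    }
    ToyModRef callSiteBehavior(const ToyCallSite &CS) const {
      // Without operand bundles, the call site is no worse than the
      // callee's function-level summary.
      if (!CS.HasOperandBundles && CS.Callee)
        return functionBehavior(CS.Callee);
      return ToyModRef::ModRef;
    }
  };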
The end result is that all of the delegation logic is hoisted out of the
base class and into either the aggregation layer when it is a pure
retargeting to a different API surface, or into BasicAA when it relies
on the IR's aliasing properties. This should fix the quadratic query
pattern reported in PR26564, although I don't have a stand-alone test
case to reproduce it.
It also seems like general goodness. Now the numerous AAs that don't need
target library info don't carry it around and depend on it. I think
I can even rip out the general access to the aggregation layer and only
expose that in BasicAA as it is the only place where we re-query in that
manner.
However, this is a non-trivial change to the AA infrastructure so I want
to get some additional eyes on this before it lands. Sadly, it can't
wait long because we should really cherry pick this into 3.8 if we're
going to go this route.
Differential Revision: http://reviews.llvm.org/D17329
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262490 91177308-0d34-0410-b5e6-96231b3b80d8
Have ScalarEvolution::getRange re-consider cases like "{C?A:B,+,C?P:Q}"
by factoring out "C" and computing RangeOf({A,+,P}) union RangeOf({B,+,Q})
instead.
The latter can be easier to compute precisely in cases like
"{C?0:N,+,C?1:-1}", where N is the backedge-taken count of the loop, since
in such cases the latter form simplifies to [0,N+1) union [0,N+1).
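A toy numeric check of that simplification (stand-in interval type, not ConstantRange): bound each branch of the factored recurrence separately over the trip count and union the results.

  #include <algorithm>
  #include <cstdio>
  #include <utility>

  using Range = std::pair<long, long>; // [Lo, Hi), toy stand-in

  Range rangeOfAffine(long Start, long Step, long Trips) {
    long Last = Start + Step * Trips;
    return {std::min(Start, Last), std::max(Start, Last) + 1};
  }

  int main() {
    long N = 16; // pretend backedge-taken count
    Range A = rangeOfAffine(0, +1, N); // C true:  {0,+,1}  -> [0, N+1)
    Range B = rangeOfAffine(N, -1, N); // C false: {N,+,-1} -> [0, N+1)
    Range U = {std::min(A.first, B.first), std::max(A.second, B.second)};
    std::printf("[%ld, %ld)\n", U.first, U.second); // prints [0, 17)
  }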
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262438 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: SampleProfile pass needs to be performed after InstructionCombiningPass, which helps eliminate un-inlinable function calls.
Reviewers: davidxl, dnovillo
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17742
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262419 91177308-0d34-0410-b5e6-96231b3b80d8
TableGen checks at compile time that for scheduling models with
"CompleteModel = 1" one of the following holds for every instruction:
- The instruction is marked with the hasNoSchedulingInfo flag
- The instruction is a subclass of Sched
- There are InstRW definitions in the scheduling model
Typical steps necessary to complete a model:
- Ensure all pseudo instructions that are expanded before machine
scheduling (usually everything handled with EmitYYY() functions in
XXXTargetLowering) are marked with the hasNoSchedulingInfo flag.
- If a CPU does not support some instructions, mark the corresponding
resource as unsupported: "WriteRes<WriteXXX, []> { let Unsupported = 1; }".
- Add missing scheduling information.
Differential Revision: http://reviews.llvm.org/D17747
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262384 91177308-0d34-0410-b5e6-96231b3b80d8
This introduces a new flag that indicates that a specific instruction
will never be present when the MachineScheduler runs and therefore needs
no scheduling information.
This is in preparation for an upcoming commit which checks completeness
of a scheduling model when tablegen runs.
Differential Revision: http://reviews.llvm.org/D17728
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262383 91177308-0d34-0410-b5e6-96231b3b80d8
Most portions of InstCombine properly propagate fast math flags, but
apparently the vector scalarization section was overlooked.
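A sketch of the kind of fix described, written against current LLVM headers (this is illustrative glue, not the actual InstCombine change): the scalar replacement inherits the original vector op's fast-math flags instead of dropping them.

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/Operator.h"

  using namespace llvm;

  static Instruction *scalarizeBinOpPreservingFMF(BinaryOperator &VecOp,
                                                  Value *ScalarLHS,
                                                  Value *ScalarRHS) {
    BinaryOperator *Scalar =
        BinaryOperator::Create(VecOp.getOpcode(), ScalarLHS, ScalarRHS);
    if (isa<FPMathOperator>(Scalar))
      Scalar->copyFastMathFlags(&VecOp); // keep nnan/ninf/nsz/fast, etc.
    return Scalar;
  }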
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262376 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Calls sometimes need to be convergent. This is already handled at the
LLVM IR level, but it also needs to be handled at the MI level.
Ideally we'd propagate convergence from instructions, down through the
selection DAG, and into MIs. But this is Hard, and would affect
optimizations in the SDNs -- right now only SDNs with two operands have
any flags at all.
Instead, here's a much simpler hack: Add new opcodes for NVPTX for
convergent calls, and generate these when lowering convergent LLVM
calls.
Reviewers: jholewinski
Subscribers: jholewinski, chandlerc, joker.eph, jhen, tra, llvm-commits
Differential Revision: http://reviews.llvm.org/D17423
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262373 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This adds the beginning of an update API to preserve MemorySSA. In particular,
this patch adds a way to remove memory SSA accesses when instructions are
deleted.
It also adds relevant unit testing infrastructure for MemorySSA's API.
(There is an actual user of this API; I will make that diff dependent on this one. In practice, a ton of opt passes remove memory instructions, so it's hopefully an obviously useful API :P)
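As a hedged usage sketch (written against current LLVM's MemorySSAUpdater helper, which may differ from the interface this patch originally added): a pass deleting a store removes its memory access first so MemorySSA's def-use chains stay consistent.

  #include "llvm/Analysis/MemorySSA.h"
  #include "llvm/Analysis/MemorySSAUpdater.h"
  #include "llvm/IR/Instructions.h"

  using namespace llvm;

  static void eraseDeadStore(StoreInst *SI, MemorySSA &MSSA) {
    MemorySSAUpdater Updater(&MSSA);
    // Drop the store's MemoryDef before deleting the IR instruction.
    if (MemoryUseOrDef *MA = MSSA.getMemoryAccess(SI))
      Updater.removeMemoryAccess(MA);
    SI->eraseFromParent();
  }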
Reviewers: hfinkel, reames, george.burgess.iv
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D17157
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262362 91177308-0d34-0410-b5e6-96231b3b80d8
This adds support to convert ProfileSummary object to Metadata and create a
ProfileSummary object from metadata. This would allow attaching profile summary
information to a Module, allowing optimization passes to use it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262360 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch implements DS_PERMUTE/DS_BPERMUTE instruction definitions and intrinsics,
which are new since VI.
Reviewers: tstellarAMD, arsenm
Subscribers: llvm-commits, arsenm
Differential Revision: http://reviews.llvm.org/D17614
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262356 91177308-0d34-0410-b5e6-96231b3b80d8
In the code below on 32-bit targets, x would previously get forwarded to g()
without sign-extension to 32 bits as required by the parameter attribute.
  void g(signed short);
  void f(unsigned short x) {
    g(x);
  }
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262352 91177308-0d34-0410-b5e6-96231b3b80d8
Function lto_module_create_in_local_context() would previously
rely on the default LLVMContext being created for it by
LTOModule::makeLTOModule(). This context exits the program on
error and is not arranged to update sLastStringError in
tools/lto/lto.cpp.
Function lto_module_create_in_local_context() now creates an
LLVMContext by itself, sets it up correctly to its needs and then
passes it to LTOModule::createInLocalContext() which takes
ownership of the context and keeps it present for the lifetime of
the returned LTOModule.
Function LTOModule::makeLTOModule() is modified to take a
reference to LLVMContext (instead of a pointer) and no longer
creates a default context when nullptr is passed to it. Method
LTOModule::createInContext() that takes a pointer to LLVMContext
is removed because it allowed passing a nullptr to it. Instead,
LTOModule::createFromBuffer() (which takes a reference to
LLVMContext) should be used.
Differential Revision: http://reviews.llvm.org/D17715
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262330 91177308-0d34-0410-b5e6-96231b3b80d8
This wires up optimization levels and canned pipeline construction in the PassBuilder.
These are really just stubs for now, but they give a nice API surface
that Clang or other tools can start learning about and enabling for
experimentation.
I've also wired up parsing various synthetic module pass names to
generate these set pipelines. This allows the pipelines to be combined
with other passes and have their order controlled, with clear separation
between the *kind* of canned pipeline, and the *level* of optimization
to be used within that canned pipeline.
The most interesting part of this patch is almost certainly the spec for
the different optimization levels. I don't think we can ever have hard
and fast rules that would make it easy to determine whether a particular
optimization makes sense at a particular level -- it will always be in
large part a judgement call. But hopefully this will outline the
expected rationale that should be used, and the direction that the
pipelines should be taken. Much of this was based on a long llvm-dev
discussion I started years ago to try and crystallize the intent behind
these pipelines, and now, at long long last I'm returning to the task of
actually writing it down somewhere that we can cite and try to be
consistent with.
Differential Revision: http://reviews.llvm.org/D12826
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@262196 91177308-0d34-0410-b5e6-96231b3b80d8