metadata out of InstCombine and into helpers.
NFC, this just exposes the logic used by InstCombine when propagating
metadata from one load instruction to another. The plan is to use this
in SROA to address PR32902.
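For context, a minimal sketch of the kind of helper this exposes, written
against the ordinary Instruction metadata API; the function name and the
exact set of metadata kinds handled below are illustrative, not the
interface added here.

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"

  using namespace llvm;

  // Copy the metadata kinds that remain valid when one load of an address is
  // replaced by another load of the same address and type.
  static void copyLoadMetadataSketch(LoadInst &NewLI, const LoadInst &OldLI) {
    // !nonnull and !range describe the loaded value, so they transfer
    // directly when the loaded type is unchanged.
    if (MDNode *N = OldLI.getMetadata(LLVMContext::MD_nonnull))
      NewLI.setMetadata(LLVMContext::MD_nonnull, N);
    if (MDNode *N = OldLI.getMetadata(LLVMContext::MD_range))
      NewLI.setMetadata(LLVMContext::MD_range, N);
    // !invariant.load describes the memory location rather than the value.
    if (MDNode *N = OldLI.getMetadata(LLVMContext::MD_invariant_load))
      NewLI.setMetadata(LLVMContext::MD_invariant_load, N);
  }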
If anyone has better ideas about how to factor this or name variables,
I'm all ears, but this seemed like a pretty good start and lets us make
progress on the PR.
This is based on a patch by Ariel Ben-Yehuda (D34285).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306267 91177308-0d34-0410-b5e6-96231b3b80d8
This was reverted in r306252, but I already had the bug fixed and was
just trying to form a test case.
The original commit factored the logic for forming dedicated exits
inside of LoopSimplify into a helper that could be used elsewhere and
with an approach that required fewer intermediate data structures. See
that commit for full details including the change to the statistic, etc.
The code looked fine to me and my reviewers, but in fact didn't handle
indirectbr correctly -- it left the 'InLoopPredecessors' vector dirty.
If you have code that looks *just* right, you can end up leaking these
predecessors into a subsequent rewrite, and crash deep down when trying
to update PHI nodes for predecessors that don't exist.
I've added an assert that makes the bug much more obvious, and then
changed the code to reliably clear the vector so we don't get this bug
again in some other form as the code changes.
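Roughly, the fixed shape of the loop looks like the sketch below; the names
and surrounding details are illustrative, not the exact code in the pass.

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/CFG.h"
  #include "llvm/IR/Instructions.h"
  #include <cassert>

  using namespace llvm;

  static void rewriteExitsSketch(Loop &L, ArrayRef<BasicBlock *> ExitBlocks) {
    SmallVector<BasicBlock *, 8> InLoopPredecessors;
    for (BasicBlock *ExitBB : ExitBlocks) {
      // The new assert: stale entries here mean a previous iteration leaked
      // its predecessors.
      assert(InLoopPredecessors.empty() &&
             "Must start with an empty predecessors list!");

      bool CanRewrite = true;
      for (BasicBlock *Pred : predecessors(ExitBB)) {
        // Edges from an indirectbr cannot be split, so give up on this exit.
        if (isa<IndirectBrInst>(Pred->getTerminator())) {
          CanRewrite = false;
          break;
        }
        if (L.contains(Pred))
          InLoopPredecessors.push_back(Pred);
      }

      if (!CanRewrite || InLoopPredecessors.empty()) {
        // The bug: bailing out here without this clear left the vector dirty
        // and leaked these predecessors into a later exit's rewrite.
        InLoopPredecessors.clear();
        continue;
      }

      // ... split ExitBB so its only predecessors are InLoopPredecessors ...
      InLoopPredecessors.clear();
    }
  }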
I've also added a test case that *does* manage to catch this while also
giving some nice positive coverage in the face of indirectbr.
The real code that found this came out of what I think is CPython's
interpreter loop, but any code with really "creative" interpreter loops
mixing indirectbr and other exit paths could manage to tickle the bug.
It was hard to reduce the original test case because, in addition to
having a particular pattern of IR, the whole thing depends on the order
of the predecessors, which in turn depends on use list order. The test
case added here was designed so that in multiple different predecessor
orderings it should always end up going down the same path and tripping
the same bug. I hope. At least, it tripped it for me without
manipulating the use list order which is better than anything bugpoint
could do...
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306257 91177308-0d34-0410-b5e6-96231b3b80d8
This leads to a segfault. Chandler already has a test case and should be
able to recommit with a fix soon.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306252 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch changes getRange to getRangeRef and returns a reference to the ConstantRange object stored inside the DenseMap caches. We then take advantage of that to add new helper methods that can return the min/max value of a signed or unsigned ConstantRange using that reference without first copying the ConstantRange.
getRangeRef calls itself recursively and I believe the reference return is fine for those calls.
I've left getSignedRange and getUnsignedRange returning a ConstantRange object so they will make a copy now. This is to ensure safety since the reference will be invalidated if the DenseMap changes.
I'm sure there are still more places that can take advantage of the reference and I'll submit future patches as I find them.
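As a toy illustration of the caching pattern (the real code lives in
ScalarEvolution; the class and method names below are made up for the
sketch): the reference returned by getRangeRef points into the DenseMap, so
it is invalidated if the map grows, which is exactly why the value-returning
wrappers are kept for callers that need a stable copy.

  #include "llvm/ADT/APInt.h"
  #include "llvm/ADT/DenseMap.h"
  #include "llvm/IR/ConstantRange.h"

  using namespace llvm;

  class RangeCacheSketch {
    DenseMap<const void *, ConstantRange> Cache;

  public:
    const ConstantRange &getRangeRef(const void *Expr, unsigned BitWidth) {
      auto It = Cache.find(Expr);
      if (It != Cache.end())
        return It->second;
      // ... the real code computes the range (recursively) here; the sketch
      // just caches a conservative full set ...
      return Cache.insert({Expr, ConstantRange(BitWidth, /*isFullSet=*/true)})
          .first->second;
    }

    // Helper that reads the minimum straight from the cached object, without
    // copying the whole ConstantRange first.
    APInt getSignedRangeMin(const void *Expr, unsigned BitWidth) {
      return getRangeRef(Expr, BitWidth).getSignedMin();
    }

    // Value-returning wrapper: safe to hold across further cache updates.
    ConstantRange getSignedRange(const void *Expr, unsigned BitWidth) {
      return getRangeRef(Expr, BitWidth);
    }
  };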
Reviewers: sanjoy, davide
Reviewed By: sanjoy
Subscribers: zzheng, llvm-commits, mzolotukhin
Differential Revision: https://reviews.llvm.org/D32978
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306229 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Turns out creating matchers with combineOr isn't very efficient, as we have to build matcher objects for both sides of the OR. Those objects aren't free: the trees usually contain several objects holding a reference to a Value *, ConstantInt *, APInt *, or some such thing. The compiler isn't always willing to inline all the matcher code to get rid of these member variables, so we end up with loads and stores of these variables.
Using combineOr ends up creating two complete copies of the tree and the associated stores. I believe we're also paying for the opcode check twice.
This patch adds a commutable mode to several of the matcher objects as a bool template parameter that can be used to enable commutable support directly in the match functions of the corresponding objects. This avoids the duplicate object creation and the opcode checks.
This shows a reduction of roughly 7-8k in the opt binary size on my local build.
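As a sketch of the idea, modeled loosely on the PatternMatch style rather
than copied from it: one matcher object checks the opcode once and tries
both operand orders, instead of m_CombineOr building and evaluating two
full trees.

  #include "llvm/IR/InstrTypes.h"
  #include "llvm/Support/Casting.h"

  using namespace llvm;

  template <typename LHS_t, typename RHS_t, unsigned Opcode, bool Commutable>
  struct BinOpMatchSketch {
    LHS_t L;
    RHS_t R;

    BinOpMatchSketch(const LHS_t &L_, const RHS_t &R_) : L(L_), R(R_) {}

    template <typename OpTy> bool match(OpTy *V) {
      auto *BO = dyn_cast<BinaryOperator>(V);
      if (!BO || static_cast<unsigned>(BO->getOpcode()) != Opcode)
        return false;
      // Non-commutable mode checks one operand order; commutable mode also
      // tries the swapped order, reusing the same sub-matchers rather than
      // a second copy of the matcher tree.
      if (L.match(BO->getOperand(0)) && R.match(BO->getOperand(1)))
        return true;
      return Commutable && L.match(BO->getOperand(1)) &&
             R.match(BO->getOperand(0));
    }
  };

A commutable entry point (something along the lines of m_c_Add) would then
just instantiate such a template with Commutable = true.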
Reviewers: spatel, majnemer, davide
Reviewed By: majnemer
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34592
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306226 91177308-0d34-0410-b5e6-96231b3b80d8
When SelectionDAG expands a memcpy (or memmove) call into a sequence of load and store instructions, it disregards the dereferenceable flag even when the source pointer is known to be dereferenceable.
This results in an assertion failure if SelectionDAG commonizes a load instruction generated for memcpy with another load instruction for the source pointer.
This patch makes SelectionDAG set the dereferenceable flag for the load instructions properly to avoid the assertion failure.
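As a hedged sketch of the flag computation involved (the helper and its
boolean parameters below are hypothetical; the real change threads the
property through SelectionDAG's memcpy expansion):

  #include "llvm/CodeGen/MachineMemOperand.h"

  using namespace llvm;

  // Compute the memory-operand flags for one of the loads emitted for the
  // expanded memcpy. 'SrcIsDereferenceable' stands in for however the caller
  // established that the source pointer is dereferenceable.
  static MachineMemOperand::Flags
  memcpyLoadFlagsSketch(bool SrcIsVolatile, bool SrcIsDereferenceable) {
    MachineMemOperand::Flags Flags = MachineMemOperand::MOLoad;
    if (SrcIsVolatile)
      Flags |= MachineMemOperand::MOVolatile;
    // The fix described above: carry the dereferenceable property over to
    // the generated load instead of dropping it.
    if (SrcIsDereferenceable)
      Flags |= MachineMemOperand::MODereferenceable;
    return Flags;
  }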
Differential Revision: https://reviews.llvm.org/D34467
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306209 cdac9f57-aa62-4fd3-8940-286f4534e8a0
Summary:
m_CombineOr isn't very efficient. The code using it is also quite verbose.
This patch adds m_Shift and m_BitwiseLogic matchers to make the calling code more concise and to improve match efficiency.
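As a usage sketch of the difference, assuming the new matchers take the
usual (LHS, RHS) arguments like the individual shift matchers do:

  #include "llvm/IR/PatternMatch.h"
  #include "llvm/IR/Value.h"

  using namespace llvm;
  using namespace llvm::PatternMatch;

  // Before: build three matcher trees and OR them together.
  static bool isShiftOfXByY(Value *V, Value *&X, Value *&Y) {
    return match(V, m_CombineOr(m_CombineOr(m_Shl(m_Value(X), m_Value(Y)),
                                            m_LShr(m_Value(X), m_Value(Y))),
                                m_AShr(m_Value(X), m_Value(Y))));
  }

  // After: one matcher object and a single opcode-class check.
  static bool isShiftOfXByYNew(Value *V, Value *&X, Value *&Y) {
    return match(V, m_Shift(m_Value(X), m_Value(Y)));
  }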
Reviewers: spatel, davide
Reviewed By: davide
Subscribers: davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D34593
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306206 cdac9f57-aa62-4fd3-8940-286f4534e8a0
processFixupValue is called on every relaxation iteration. applyFixup
is only called once at the very end. applyFixup is then the correct
place to do last minute changes and value checks.
While here, do proper range checks again for fixup_arm_thumb_bl. We
used to do this, but dropped it because of thumb2. We now do it again, but
use the thumb2 range.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306177 91177308-0d34-0410-b5e6-96231b3b80d8
Revert "[ORC] Remove redundant semicolons from DEFINE_SIMPLE_CONVERSION_FUNCTIONS uses."
Revert "[ORC] Move ORC IR layer interface from addModuleSet to addModule and fix the module type as std::shared_ptr<Module>."
They broke ExecutionEngine/OrcMCJIT/test-global-ctors.ll on linux.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306176 91177308-0d34-0410-b5e6-96231b3b80d8
The trailing bit methods will early out if they find a bit of the opposite value, while popcount must always look at all bits. I also assume that more CPUs implement trailing bit counting with native instructions than population count.
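A self-contained illustration of the early-out argument in plain C++ (these
are not the LLVM helpers this change touches):

  #include <cstdint>

  // Popcount-style check: always walks all 64 bit positions.
  static unsigned popcountAllBits(uint64_t V) {
    unsigned Count = 0;
    for (unsigned I = 0; I < 64; ++I)
      Count += (V >> I) & 1;
    return Count;
  }

  // Trailing-ones-style check: stops at the first zero bit it sees, and
  // hardware tzcnt/ctz-style instructions never look past that bit either.
  static unsigned countTrailingOnesEarlyOut(uint64_t V) {
    unsigned Count = 0;
    while (Count < 64 && ((V >> Count) & 1))
      ++Count;
    return Count;
  }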
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306154 91177308-0d34-0410-b5e6-96231b3b80d8
This patch dumps the raw bytes of the pdb name map which contains
the mapping of stream name to stream index for the string table
and other reserved streams.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306148 91177308-0d34-0410-b5e6-96231b3b80d8
The goal here is to make it possible to display absolute
file offsets when dumping bytes from an MSF. The problem is
that when dumping bytes from an MSF, often the bytes will
cross a block boundary and encounter a discontinuity. We
can't use the normal formatBinary() function for this because
it would just treat the sequence as entirely ascending and would
not account for out-of-order blocks.
This patch adds a formatMsfData() function to our printer, and
then uses this function to improve the output of the -stream-data
command line option for dumping bytes from a particular stream.
Test coverage is also expanded to make sure it includes all possible
scenarios of offsets, sizes, and crossing block boundaries.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306141 91177308-0d34-0410-b5e6-96231b3b80d8
This is useful when an upper limit on the cache size needs to be
controlled independently of the amount of free space.
One use case is a machine with a large number of cache directories
(e.g. a buildbot slave hosting a large number of independent build
jobs). By imposing an upper size limit on each cache directory,
users can more easily estimate the server's capacity.
Differential Revision: https://reviews.llvm.org/D34547
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306126 91177308-0d34-0410-b5e6-96231b3b80d8
This is essentially just a BinaryStreamRef packaged with an
offset, and the logic for reading one is no different from the
logic for reading a BinaryStreamRef, except that we save the
current offset.
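A hypothetical sketch of the struct being described and of how a reader
might produce one (member and function names are illustrative, not the exact
declarations):

  #include "llvm/Support/BinaryStreamReader.h"
  #include "llvm/Support/BinaryStreamRef.h"
  #include "llvm/Support/Error.h"
  #include <cstdint>

  using namespace llvm;

  struct SubstreamRefSketch {
    uint32_t Offset = 0;        // Where this substream starts in its parent.
    BinaryStreamRef StreamData; // The bytes of the substream itself.
  };

  static Error readSubstreamSketch(BinaryStreamReader &Reader, uint32_t Length,
                                   SubstreamRefSketch &Out) {
    // Identical to reading a BinaryStreamRef, except we remember the offset
    // at which the read started.
    Out.Offset = Reader.getOffset();
    return Reader.readStreamRef(Out.StreamData, Length);
  }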
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306122 91177308-0d34-0410-b5e6-96231b3b80d8
It was trying to do too many things. The basic lumping together of values for
legalization purposes is now handled by G_MERGE_VALUES. More complex things
involving gaps and odd sizes are handled by G_INSERT sequences.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306120 91177308-0d34-0410-b5e6-96231b3b80d8
G_SEQUENCE is going away soon so as a first step the MachineIRBuilder needs to
be taught how to emulate it with alternatives. We use G_MERGE_VALUES where
possible, and a sequence of G_INSERTs if not.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306119 91177308-0d34-0410-b5e6-96231b3b80d8
It seems some targets don't have std::strtof and friends. Hopefully,
dropping the std:: will be fine, as that's what the compiler recommends.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306098 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
The function matches the interface of llvm::to_integer, but as we are
calling out to a C library function, I let it take a Twine argument, so
we can avoid a string copy at least in some cases.
I add a test and replace a couple of existing uses of strtod with this
function.
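A usage sketch under the interface described above; the header location is
an assumption, but the calling convention mirrors to_integer (return false
on parse failure, write the result through the out-parameter on success):

  #include "llvm/ADT/StringExtras.h"
  #include "llvm/ADT/Twine.h"

  using namespace llvm;

  static bool parseScale(const Twine &Text, double &Scale) {
    // Mirrors to_integer: no exceptions and no errno checks at the call site.
    if (!to_float(Text, Scale))
      return false;
    return Scale > 0.0;
  }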
Reviewers: zturner
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34518
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306096 91177308-0d34-0410-b5e6-96231b3b80d8
Currently JumpThreading can use LazyValueInfo to analyze an 'and' or 'or' of a compare if the compare is fed by a livein of a basic block. This can be used to prove the condition can't be met for some predecessor, and the jump from that predecessor can be moved to the false path of the condition.
But if the condition is something that InstCombine turns into an add and a single compare (for example, a range check), it can't be analyzed because the livein is now an input to the add and not the compare.
This patch adds a new method to LVI to get a ConstantRange on an edge. Then we teach jump threading to detect a livein feeding an add that feeds a compare, get the ConstantRange for that livein on the edge, and propagate it through the add.
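A toy version of the propagation step, using ConstantRange directly rather
than the actual JumpThreading/LVI plumbing (the helper below is
hypothetical):

  #include "llvm/ADT/APInt.h"
  #include "llvm/IR/ConstantRange.h"
  #include "llvm/IR/InstrTypes.h"

  using namespace llvm;

  // Given the range of the livein on an edge, push it through
  // 'add %livein, AddConst' and decide whether 'icmp ult <add>, CmpBound'
  // is known false on that edge.
  static bool compareIsKnownFalseOnEdge(const ConstantRange &LiveinOnEdge,
                                        const APInt &AddConst,
                                        const APInt &CmpBound) {
    ConstantRange AddRange = LiveinOnEdge.add(ConstantRange(AddConst));
    ConstantRange Allowed =
        ConstantRange::makeExactICmpRegion(CmpInst::ICMP_ULT, CmpBound);
    // If no value in AddRange can satisfy the compare, the jump from this
    // predecessor can only go down the false path.
    return AddRange.intersectWith(Allowed).isEmptySet();
  }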
Differential Revision: https://reviews.llvm.org/D33262
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306085 91177308-0d34-0410-b5e6-96231b3b80d8
X86_64 COFF only has support for 32-bit pcrel relocations. Produce an
error on all others.
Note that GNU as has extended the relocation values to support
this. It is not clear if we should support the GNU extension.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306082 91177308-0d34-0410-b5e6-96231b3b80d8
I want to use the same logic as LoopSimplify to form dedicated exits in
another pass (SimpleLoopUnswitch) so I wanted to factor it out here.
I also noticed that there is a significantly more efficient way
to implement this than the way the code in LoopSimplify worked. We don't
need to actually retain the set of unique exit blocks, we can just
rewrite them as we find them and use only a set to deduplicate.
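Illustratively, the structure is something like this sketch (not the exact
code added alongside this change): walk the exiting blocks' successors,
rewrite each exit as it is found, and keep a small set purely to deduplicate.

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/Analysis/LoopInfo.h"
  #include "llvm/IR/CFG.h"

  using namespace llvm;

  static bool formDedicatedExitsSketch(Loop &L) {
    bool Changed = false;
    SmallPtrSet<BasicBlock *, 4> Visited;
    SmallVector<BasicBlock *, 8> ExitingBlocks;
    L.getExitingBlocks(ExitingBlocks);
    for (BasicBlock *ExitingBB : ExitingBlocks)
      for (BasicBlock *Succ : successors(ExitingBB)) {
        // Successors inside the loop are not exits; the set only serves to
        // avoid visiting the same exit block twice.
        if (L.contains(Succ) || !Visited.insert(Succ).second)
          continue;
        // ... rewrite Succ into a dedicated exit right here, no second pass ...
        Changed = true;
      }
    return Changed;
  }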
This did require changing one part of LoopSimplify to not re-use the
unique set of exits, but it only used it to check that there was
a single unique exit. That part of the code is about to walk the exiting
blocks anyway, so it seemed better to rewrite it to use those exiting
blocks to compute this property on-demand.
I also had to ditch a statistic, but it doesn't seem terribly valuable.
Differential Revision: https://reviews.llvm.org/D34049
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306081 91177308-0d34-0410-b5e6-96231b3b80d8
move the ObjectCache from the IRCompileLayer to SimpleCompiler.
This is the first in a series of patches aimed at cleaning up and improving the
robustness and performance of the ORC APIs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306058 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
These intrinsics aren't used by clang and haven't been for a while.
There's some really terrible codegen in the 32-bit target for avx512bw due to i64 not being legal. But as I said, these intrinsics weren't used by clang even before this patch, so this codegen reflects our clang behavior today.
Reviewers: spatel, RKSimon, zvi, igorb
Reviewed By: RKSimon
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D34389
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306047 91177308-0d34-0410-b5e6-96231b3b80d8
All NativeRawSymbols will have a unique symbol ID (retrievable via
getSymIndexId). For now, these are initialized to 0, but soon the
NativeSession will be responsible for creating the raw symbols, and it will
assign unique IDs.
The symbol cache in the NativeSession will also require the ability to clone
raw symbols, so I've provided implementations for that as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306042 91177308-0d34-0410-b5e6-96231b3b80d8