Handle chains in which the same offset is used for both loads and
stores to the same array.
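As a purely illustrative sketch (the committed test case is not reproduced
here), the pattern in question looks roughly like this at the C level, where
the same offset into one array feeds both a load and a store in the chain:

    // Hypothetical reduction of the pattern described above; names and the
    // exact shape of the loop are illustrative, not taken from the test case.
    void bump(int *a, int n) {
      for (int i = 0; i < n; ++i)
        a[i] = a[i] + 1; // the same offset a[i] is both loaded and stored
    }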
Fixes rdar://11410078.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174789 91177308-0d34-0410-b5e6-96231b3b80d8
isn't using the default calling convention. However, if the transformation is
from a call to inline IR, then the calling convention doesn't matter.
rdar://13157990
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174724 91177308-0d34-0410-b5e6-96231b3b80d8
In the loop vectorizer cost model, we used to ignore stores/loads of a pointer
type when computing the widest type within a loop. This meant that if we had
only stores/loads of pointers in a loop, we would return a widest type of 8 bits
(instead of 32 or 64 bits) and therefore a vectorization factor that was too big.
Now, if we see a consecutive store/load of pointers, we use the size of a pointer
(from the data layout).
This problem occurred in SingleSource/Benchmarks/Shootout-C++/hash.cpp (reduced
test case is the first test in vector_ptr_load_store.ll).
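As a hedged illustration (not the actual hash.cpp source), a loop whose only
memory accesses are pointer-typed looks like this; under the old computation it
contributed no scalar load/store types, so the widest type defaulted to 8 bits:

    // Hypothetical reduction: every load and store in the loop is of pointer
    // type, so the widest-type computation should use the pointer size from
    // the data layout instead of ignoring these accesses.
    void copy_ptrs(int **dst, int **src, int n) {
      for (int i = 0; i < n; ++i)
        dst[i] = src[i]; // consecutive loads/stores of pointers
    }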
radar://13139343
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174377 91177308-0d34-0410-b5e6-96231b3b80d8
The main lists of debug info metadata attached to the compile_unit had an extra
layer of metadata nodes they went through for no apparent reason. This patch
removes that (& still passes just as much of the GDB 7.5 test suite). If anyone
can show evidence as to why these extra metadata nodes are there, I'm open to
reverting this patch & documenting why they're there.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174266 91177308-0d34-0410-b5e6-96231b3b80d8
Prepare it for vectors of pointers and handle simple cases. We don't handle
complicated cases because accumulateConstantOffset bails on pointer vectors.
Fixes selfhost on i386.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174179 91177308-0d34-0410-b5e6-96231b3b80d8
remaining use of AliasAnalysis concepts such as isIdentifiedObject to
prove pointer inequality.
@external_compare in test/Transforms/InstSimplify/compare.ll shows a simple
case where a noalias argument can be equal to a global variable address, and
while AliasAnalysis can get away with saying that these pointers don't alias,
instsimplify cannot say that they are not equal.
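A rough C-level analogue of that test (the function and variable names below
are illustrative, not the IR from compare.ll): restrict/noalias constrains how
accesses may alias, but it does not stop the caller from passing the global's
own address, so the comparison can legitimately be true:

    // Illustrative analogue only, not the committed IR test.
    int g;
    int same_as_global(int *__restrict p) {
      return p == &g; // may be true, e.g. for the call same_as_global(&g)
    }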
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174122 91177308-0d34-0410-b5e6-96231b3b80d8
be equal, since there's nothing preventing a caller from correctly predicting
the stack location of an alloca.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174119 91177308-0d34-0410-b5e6-96231b3b80d8
The AttrBuilder is for building a collection of attributes. The Attribute object
holds only one attribute. So it's not really useful for the Attribute object to
have a creator which takes an AttrBuilder.
This has two consequences:
1. The AttrBuilder no longer holds its internal attributes in a bit-mask form.
2. The attributes are now ordered alphabetically (which is why the tests have changed).
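A rough usage sketch of the intended split, assuming the in-flux
Attribute/AttrBuilder API of this period (the exact signatures below are an
assumption, not quoted from the patch):

    #include "llvm/IR/Attributes.h"
    #include "llvm/IR/LLVMContext.h"
    using namespace llvm;

    // Sketch only: an AttrBuilder accumulates a collection of attributes and
    // is materialized into an attribute set, while a single Attribute is
    // created directly from its kind, without going through an AttrBuilder.
    void sketch(LLVMContext &Ctx) {
      AttrBuilder B;
      B.addAttribute(Attribute::NoUnwind);
      B.addAttribute(Attribute::ReadOnly);
      AttributeSet FnAttrs =
          AttributeSet::get(Ctx, AttributeSet::FunctionIndex, B);
      Attribute A = Attribute::get(Ctx, Attribute::NoUnwind); // one attribute
      (void)FnAttrs;
      (void)A;
    }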
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@174110 91177308-0d34-0410-b5e6-96231b3b80d8
Changing ARMBaseTargetMachine to return ARMTargetLowering instead of
the generic one (similar to x86 code).
Tests show which instructions are added for casts when necessary, or a cost of
zero when none are needed. Downcasts to 16 bits are not lowered in NEON, so
their costs are not included yet.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173849 91177308-0d34-0410-b5e6-96231b3b80d8
The AttributeSetNode contains all of the attributes. This removes one (hopefully
last) use of the Attribute class as a container of multiple attributes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173761 91177308-0d34-0410-b5e6-96231b3b80d8
These tests in particular try to use escaped square brackets as an
argument to grep, which is failing for me with native win32 python. It
appears the backslash is being lost near the CreateProcess*() call.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173506 91177308-0d34-0410-b5e6-96231b3b80d8
loops over instructions in the basic block or the use-def list of the
value, neither of which is really efficient when repeatedly querying
about values in the same basic block.
What's more, we already know that the CondBB is small, and so we can do
a much more efficient test by counting the uses in CondBB, and seeing if
those account for all of the uses.
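A minimal sketch of that counting test, written against current LLVM iterator
APIs rather than the exact code that landed:

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/IR/Value.h"

    // Sketch only: walk the (small) CondBB once, count the operands that use
    // V, and check whether that accounts for every use of V.
    static bool usedOnlyInBlock(const llvm::Value *V,
                                const llvm::BasicBlock *CondBB) {
      unsigned UsesInBB = 0;
      for (const llvm::Instruction &I : *CondBB)
        for (const llvm::Use &U : I.operands())
          if (U.get() == V)
            ++UsesInBB;
      return UsesInBB == V->getNumUses();
    }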
Finally, we shouldn't blanket-fail on any such instruction; instead, we
should conservatively assume that those instructions are part of the
cost.
Note that this actually fixes a bug in the pass because
isUsedInBasicBlock has a really terrible bug in it. I'll fix that in my
next commit, but the fix for it would make this code suddenly take the
compile time hit I thought it already was taking, so I wanted to go
ahead and migrate this code to a faster & better pattern.
The bug in isUsedInBasicBlock was also causing other tests to test the
wrong thing entirely: for example we weren't actually disabling
speculation for floating point operations as intended (and tested), but
the test passed because we failed to speculate them due to the
isUsedInBasicBlock failure.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173417 91177308-0d34-0410-b5e6-96231b3b80d8
Original commit message:
Plug TTI into the speculation logic, giving it a real cost interface
that can be specialized by targets.
The goal here is not to be more aggressive, but to just be more accurate
with very obvious cases. There are instructions which are known to be
truly free and which were not being modeled as such in this code -- see
the regression test which is distilled from an inner loop of zlib.
Everywhere the TTI cost model is insufficiently conservative, I've added
explicit checks with FIXME comments to go add proper modelling of these
cost factors.
If this causes regressions, the likely solution is to make TTI even more
conservative in its cost estimates, but test cases will help here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173357 91177308-0d34-0410-b5e6-96231b3b80d8
We use constant folding to see if an intrinsic evaluates to the same value as a
constant that we know. If we don't take the undefinedness into account, we get a
value that doesn't match the actual implementation, resulting in miscompiled code.
This was uncovered by Chandler's simplifycfg changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173356 91177308-0d34-0410-b5e6-96231b3b80d8
that can be specialized by targets.
The goal here is not to be more aggressive, but to just be more accurate
with very obvious cases. There are instructions which are known to be
truly free and which were not being modeled as such in this code -- see
the regression test which is distilled from an inner loop of zlib.
Everywhere the TTI cost model is insufficiently conservative, I've added
explicit checks with FIXME comments to go add proper modelling of these
cost factors.
If this causes regressions, the likely solution is to make TTI even more
conservative in its cost estimates, but test cases will help here.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173342 91177308-0d34-0410-b5e6-96231b3b80d8
a cost function that seems both a bit ad hoc and poorly suited to
evaluating constant expressions.
Notably, it is missing any support for trivial expressions such as
'inttoptr'. I could fix this routine, but it isn't clear to me all of
the constraints its other users are operating under.
The core protection that seems relevant here is avoiding the formation
of a select instruction with a further chain of select operations in
a constant expression operand. Just explicitly encode that constraint.
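A rough sketch of how that constraint can be encoded (not the committed code;
select constant expressions still existed at this point in LLVM's history):

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Instruction.h"

    // Sketch only: rather than running the operand through a generic cost
    // function, explicitly reject a constant-expression operand that is
    // itself a select, since that is the chain being guarded against.
    static bool isSafeSpeculationOperand(const llvm::Constant *C) {
      if (const auto *CE = llvm::dyn_cast<llvm::ConstantExpr>(C))
        return CE->getOpcode() != llvm::Instruction::Select;
      return true;
    }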
Also, update the comments and organization here to make it clear where
this needs to go -- this should be driven off of real cost measurements
which take into account the number of constant expressions and the
depth of the constant expression tree.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@173340 91177308-0d34-0410-b5e6-96231b3b80d8