is itself a bitcast. Since we have gep(bitcast(bitcast(y))) in this
case, just wait for the two bitcasts to get zapped. This prevents
instcombine from confusing some aliasing stuff, and allows it to
directly eliminate the load in the testcase.
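To make the pattern concrete, here is a rough C++ sketch of the check being
described; this is not the actual InstCombine code from the patch, and the
helper name and header path are illustrative:

    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Sketch: when visiting gep(bitcast(X)), look through X first. If X is
    // itself a bitcast, defer and let the two bitcasts be collapsed before
    // the GEP is touched.
    static bool shouldDeferGEPOfBitCast(GetElementPtrInst *GEP) {
      BitCastInst *BC = dyn_cast<BitCastInst>(GEP->getPointerOperand());
      if (!BC)
        return false;                        // not gep(bitcast(...)) at all
      return isa<BitCastInst>(BC->getOperand(0));  // gep(bitcast(bitcast(Y)))
    }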
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80508 91177308-0d34-0410-b5e6-96231b3b80d8
make check in a non-clean directory causes it to fail (for example when running
make check twice), since execution counts will differ.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80362 91177308-0d34-0410-b5e6-96231b3b80d8
- I'm still trying to figure out the cleanest way to implement this and match the assembler; currently there are some substantial differences.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80347 91177308-0d34-0410-b5e6-96231b3b80d8
calls into a function and if the calls bring in arrays, try to merge
them together to reduce stack size. For example, in the testcase
we'd previously end up with 4 allocas, now we end up with 2 allocas.
As described in the comments, this is not really the ideal solution
to this problem, but it is surprisingly effective. For example, on
176.gcc, we end up eliminating 67 arrays at "gccas" time and another
24 at "llvm-ld" time.
One concern that I didn't look into: at -O0 -g with
forced inlining this will almost certainly result in worse debug
info. I think this is acceptable, though, given that this is a case
of "debugging optimized code", and we don't want debug info to
prevent the optimizer from doing things anyway.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80215 91177308-0d34-0410-b5e6-96231b3b80d8
moves. This avoids the need to promote the operands (or implicitly
extend them, a partial register update condition), and can reduce
i8 register pressure. This substantially speeds up code such as
write_hex in lib/Support/raw_ostream.cpp.
subclass-coalesce.ll is too trivial and no longer tests what it was
originally intended to test.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80184 91177308-0d34-0410-b5e6-96231b3b80d8
sections, etc.
- The quick and dirty way: just clone the TargetLoweringObjectFile
code. Eventually this should be shared... somehow.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80168 91177308-0d34-0410-b5e6-96231b3b80d8
- I moved section creation back into AsmParser. I think policy decisions like
this should be pushed higher, not lower, when possible (in addition, the
assembler has flags which change this behavior, for example).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80162 91177308-0d34-0410-b5e6-96231b3b80d8
leads to partial-register definitions. To help avoid redundant
zero-extensions, also teach the h-register matching patterns that
use movzbl to match anyext as well as zext.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80099 91177308-0d34-0410-b5e6-96231b3b80d8
This is a simple AliasAnalysis implementation which works by making
ScalarEvolution queries. ScalarEvolution has a more complete understanding
of arithmetic than BasicAA's collection of ad-hoc checks, so it handles
some cases that BasicAA misses, for example p[i] and p[i+1] within the
same iteration of a loop.
This is currently experimental. It may be that the main use for this pass
will be to help find cases where BasicAA can be profitably extended, or
to help in the development of the overall AliasAnalysis infrastructure;
however, it's also possible that it could grow into a directly useful pass.
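As a hypothetical illustration of the p[i] and p[i+1] case mentioned above
(the code is made up, not taken from the pass's tests):

    // Within a single iteration, the load from p[i+1] and the store to p[i]
    // touch different elements. BasicAA's ad-hoc checks may not prove that,
    // but ScalarEvolution sees that the two addresses always differ by one
    // element, so the SCEV-based AA can answer NoAlias for the pair.
    void shiftLeft(int *p, int n) {
      for (int i = 0; i + 1 < n; ++i)
        p[i] = p[i + 1];
    }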
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80098 91177308-0d34-0410-b5e6-96231b3b80d8
- I haven't really tried to find the "right" way to store the fixups or apply
them yet. This works, but isn't particularly elegant or fast.
- Still no evaluation support, so at the moment every fixup ends up being
turned into a relocation entry.
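For what it's worth, here is a very rough sketch of the "store them, then
patch bytes" scheme described above; the struct and function are hypothetical
and do not correspond to the actual MC data structures:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct PendingFixup {
      uint64_t Offset;  // byte offset within the emitted section data
      unsigned Size;    // number of bytes to patch (1, 2, 4, or 8)
      int64_t Value;    // resolved value, filled in if evaluation succeeds
      bool Resolved;    // false: leave it for the relocation writer
    };

    // Patch resolved fixups directly into the section bytes; anything left
    // unresolved becomes a relocation entry (which, per the note above, is
    // currently everything).
    static void applyFixups(std::vector<uint8_t> &Data,
                            const std::vector<PendingFixup> &Fixups) {
      for (const PendingFixup &F : Fixups) {
        if (!F.Resolved)
          continue;
        std::memcpy(&Data[F.Offset], &F.Value, F.Size); // little-endian patch
      }
    }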
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@80089 91177308-0d34-0410-b5e6-96231b3b80d8