We have found that -- when the selected subarchitecture has a scheduling model
and we are not optimizing for size -- the machine-instruction combiner uses
too simple an algorithm to compute the cost of one of the two alternatives [before
and after running a combining pass on a section of code], and therefore it throws
away the combination results too often.
This fix has the potential to help any ISA that can combine instructions and
for which at least one subarchitecture has a scheduling model.
As of now, this is only known to definitely affect AArch64 subarchitectures with
a scheduling model.
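To make the tradeoff concrete, the decision the combiner makes is roughly the
comparison sketched below. This is a hypothetical, heavily simplified
illustration of the kind of cost comparison involved; the types and helper
names are invented for the sketch and are not the actual MachineCombiner APIs.

    #include <cstdint>
    #include <vector>

    // Invented stand-in for one machine instruction plus the latency the
    // subtarget's scheduling model would assign to it.
    struct Instr {
      unsigned Opcode;
      unsigned Latency; // cycles, per the scheduling model (assumed known)
    };

    // Total latency of a straight-line sequence under the scheduling model.
    static uint64_t sequenceCost(const std::vector<Instr> &Seq) {
      uint64_t Cost = 0;
      for (const Instr &I : Seq)
        Cost += I.Latency;
      return Cost;
    }

    // Keep the combined sequence only if it is no more expensive than the
    // original one. A cost model that mis-computes either side of this
    // comparison will throw away profitable combinations.
    static bool shouldKeepCombination(const std::vector<Instr> &Original,
                                      const std::vector<Instr> &Combined) {
      return sequenceCost(Combined) <= sequenceCost(Original);
    }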
Regression tested on AMD64/GNU-Linux; the new test case was verified to fail
on an unpatched compiler and to pass on a patched compiler.
Patch by Abe Skolnik and Sebastian Pop.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289399 91177308-0d34-0410-b5e6-96231b3b80d8
This is NFC today, but won't be once D27216 (or an equivalent patch) is
in.
This change fixes a design problem in SCEVExpander -- it relied on a
hoisting optimization to generate correct code for add recurrences.
This meant changing the hoisting optimization to not kick in under
certain circumstances (to avoid speculating faulting instructions, say)
would break correctness.
The fix is to make the correctness requirements explicit, and have it
not rely on the hoisting optimization for correctness.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289397 91177308-0d34-0410-b5e6-96231b3b80d8
check file to not be unreasonably slow in the face of multiple check
prefixes.
The previous logic would repeatedly scan potentially large portions of
the check file looking for alternative prefixes. In the worst case this
would scan most of the file looking for a rare prefix between every
single occurrence of a common prefix. Even if we bounded the scan, this
would do bad things if the order of the prefixes was "unlucky" and the
distant prefix was scanned for first.
None of this is necessary. It is straightforward to build a state
machine that recognizes the first, longest of the set of alternative
prefixes. That is in fact exactly what a regular expression does.
This patch builds a regular expression once for the set of prefixes and
then uses it to search incrementally for the next prefix. This requires
some threading of state but actually makes the code dramatically
simpler. I've also added a big comment describing the algorithm as it
was not at all obvious to me when I started.
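As a minimal standalone sketch of the idea (using std::regex here purely for
illustration; FileCheck's actual implementation and identifiers differ): sort
the prefixes longest-first, join them into one alternation, and resume each
search where the previous match ended instead of rescanning for every prefix.

    #include <algorithm>
    #include <iostream>
    #include <regex>
    #include <string>
    #include <vector>

    int main() {
      std::vector<std::string> Prefixes = {"CHECK", "CHECK-NEXT", "CHECK-DAG"};

      // Longest prefix first so the alternation prefers "CHECK-NEXT" over
      // "CHECK" when both match at the same position.
      std::sort(Prefixes.begin(), Prefixes.end(),
                [](const std::string &A, const std::string &B) {
                  return A.size() > B.size();
                });

      std::string Pattern;
      for (const std::string &P : Prefixes) {
        if (!Pattern.empty())
          Pattern += "|";
        Pattern += P; // real code would escape regex metacharacters here
      }
      std::regex PrefixRE(Pattern);

      std::string CheckFile =
          "; CHECK: foo\n; CHECK-NEXT: bar\n; CHECK-DAG: baz\n";

      // Scan the file once, resuming each search where the previous match
      // ended, instead of rescanning for every prefix separately.
      auto It = std::sregex_iterator(CheckFile.begin(), CheckFile.end(),
                                     PrefixRE);
      for (auto End = std::sregex_iterator(); It != End; ++It)
        std::cout << It->str() << " at offset " << It->position() << "\n";
    }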
With this patch, several previously pathological test cases in
test/CodeGen/X86 are 5x or more faster. Overall, running all tests
under test/CodeGen/X86 uses 10% less CPU after this, and because all the
slowest tests were hitting this, finishes in 40% less wall time on my
system (going from just over 5.38s to just over 3.23s) on a release
build! This patch substantially improves the time of all 7 X86 tests
that were in the top 20 reported by --time-tests; 5 of them are
completely off the list and the remaining 2 are much lower. (Sadly, the
new tests on the list include 2 new X86 ones that are slow for unrelated
reasons, so the count stays at 4 of the top 20.)
It isn't clear how much this helps debug builds in aggregate in part
because of the noise, but it again makes many of the slowest X86 tests
significantly faster (10% or more improvement).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289382 91177308-0d34-0410-b5e6-96231b3b80d8
a commandline flag and test the flag directly. NFC.
If we ever need this generality it can be added back.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289381 91177308-0d34-0410-b5e6-96231b3b80d8
This fixes one formatting goof I left in my previous commit and *many*
other inconsistencies.
I'm planning to make substantial changes here and so wanted to get to
a clean baseline.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289379 91177308-0d34-0410-b5e6-96231b3b80d8
make some readability improvements.
Both the check file and input file have to be fully buffered to
normalize their whitespace. But previously this would be done in a stack
SmallString and then copied into a heap allocated MemoryBuffer. That
seems pretty wasteful, especially for something like FileCheck where
there are only ever two such entities.
This just rearranges the code so that we can keep the canonicalized
buffers on the stack of the main function and use reasonably large stack
buffers to reduce allocation. A rough estimate seems to show that about
80% of LLVM's .ll and .s files will fit into a 4k buffer, so this should
completely avoid heap allocation for the buffer in those cases. My
system's malloc is fast enough that the allocations don't directly show
up in timings. However, on some very slow test cases, this saves 1% - 2%
by avoiding the copy into the heap allocated buffer.
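The shape this takes is roughly the following; the function name and the exact
canonicalization are simplified placeholders rather than FileCheck's real
code, but SmallString and StringRef are the actual LLVM types:

    #include "llvm/ADT/SmallString.h"
    #include "llvm/ADT/StringRef.h"

    using namespace llvm;

    // Copy Input into Out with runs of horizontal whitespace collapsed to a
    // single space. Out is supplied by the caller, so when it is a stack
    // SmallString with a 4K inline capacity, the ~80% of inputs that fit
    // never touch the heap at all.
    static StringRef canonicalizeWhitespace(StringRef Input,
                                            SmallVectorImpl<char> &Out) {
      bool InSpace = false;
      for (char C : Input) {
        if (C == ' ' || C == '\t') {
          InSpace = true;
          continue;
        }
        if (InSpace) {
          Out.push_back(' ');
          InSpace = false;
        }
        Out.push_back(C);
      }
      return StringRef(Out.data(), Out.size());
    }

    int main() {
      SmallString<4096> CheckFileBuffer; // lives on main's stack
      StringRef Canonical =
          canonicalizeWhitespace("FOO:   bar\t baz\n", CheckFileBuffer);
      (void)Canonical;
    }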
This also splits out the code which checks the input into a helper much
like the code to build the checks as that made the code much more
readable to me. Nitpicks and suggestions are welcome here. It has really
exposed a *bunch* of stuff that could be cleaned up though, so I'm
probably going to go and spring clean all of this code as I have more
changes coming to speed things up.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289378 91177308-0d34-0410-b5e6-96231b3b80d8
This teaches SimplifyDemandedElts that the FMA can be removed if the lower element isn't used. It also teaches it that if the upper elements of the first operand aren't used, then we can simplify them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289377 91177308-0d34-0410-b5e6-96231b3b80d8
iteration.
Instead, load the byte at the needle length, compare it directly, and
reuse it as the index into the lookup table of lengths we can skip forward.
I also added an annotation to expect that the comparison fails so that
the loop gets laid out contiguously without the call to memcpy (and the
substantial register shuffling that the ABI requires of that call).
Finally, because this behaves especially badly with a needle length of
one (it would call memcmp with a zero length), special-case that to call
memchr directly, which is what we should have been doing anyways.
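The loop ends up having roughly the shape below. This is a simplified sketch
of the technique (a Boyer-Moore-Horspool-style bad-character skip plus the
last-byte fast path and the memchr special case described above), with
invented names, not a verbatim copy of StringRef::find:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Return the offset of the first occurrence of Needle (length N) in
    // Haystack (length Size), or SIZE_MAX if there is none.
    static size_t findSub(const char *Haystack, size_t Size,
                          const char *Needle, size_t N) {
      if (N == 0)
        return 0;
      if (N > Size)
        return SIZE_MAX;
      if (N == 1) {
        // Degenerate needle: memchr is both simpler and faster than the loop.
        const void *P = std::memchr(Haystack, Needle[0], Size);
        return P ? size_t(static_cast<const char *>(P) - Haystack) : SIZE_MAX;
      }
      if (N > 255) {
        // Keep the sketch simple: the skip distances below must fit in a
        // byte, so fall back to a naive scan for very long needles.
        for (size_t I = 0; I + N <= Size; ++I)
          if (std::memcmp(Haystack + I, Needle, N) == 0)
            return I;
        return SIZE_MAX;
      }

      // Bad-character table: how far the window may slide when the byte
      // under its last position does not lead to a match.
      uint8_t Skip[256];
      std::memset(Skip, int(N), sizeof(Skip));
      for (size_t I = 0; I + 1 < N; ++I)
        Skip[uint8_t(Needle[I])] = uint8_t(N - 1 - I);

      const char *Start = Haystack;
      const char *Stop = Haystack + (Size - N) + 1;
      do {
        uint8_t Last = uint8_t(Start[N - 1]);
        // Expect the last byte not to match; only then pay for memcmp.
        if (Last == uint8_t(Needle[N - 1]) &&
            std::memcmp(Start, Needle, N - 1) == 0)
          return size_t(Start - Haystack);
        Start += Skip[Last];
      } while (Start < Stop);
      return SIZE_MAX;
    }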
This was motivated by the fact that there are a large number of test
cases in 'check-llvm' where FileCheck's performance is dominated by
calls to StringRef::find (in a release, no-asserts build). I'm working
on patches to generally improve matters there, but this alone was worth
a 12.5% improvement in one test case where FileCheck spent 92% of its
time in this routine.
I experimented a bunch with different minor variations on this theme,
for example setting the pointer *at* the last byte and indexing
backwards for the call to memcmp. That didn't improve anything on this
version and seemed more complex. I also tried other things to make the
loop flow more nicely and none worked. =/ It is a bit unfortunate: the
generated code here remains pretty gross, but I don't see any obvious
ways to improve it. At this point, most of my ideas would be really
elaborate:
1) While the remainder of the string is long enough, we could load
a 16-byte or 32-byte vector at the address of the last byte and use
palignr to rotate that and check the first 15 or 31 bytes at the
front of the next segment, essentially pre-loading the first several
bytes of the next iteration so we could quickly detect a mismatch in
those bytes without an additional memory access. The downside would be
the code complexity, having a fallback loop, and a likely misaligned
vector load. Plus it would make the common case of the last byte not
matching somewhat slower (need some extraction from a vector).
2) While we have space, we could do an aligned load of a 16- or 32-byte
vector that *contains* the end byte, and use any preceding bytes to
have a more precise "no" test, and any subsequent bytes could be
saved for the next iteration. This removes the unaligned load penalty,
but still requires us to pay the overhead of vector extraction for
the cases where we didn't need to do anything other than load and
compare the last byte.
3) Try to walk from the last byte in a way that is more friendly to
the cache and/or memory pre-fetcher, considering we have to poke the
last byte anyways.
No idea if any of these are really worth pursuing though. They all seem
somewhat unlikely to yield big wins in practice and to be a lot of work
and complexity. So I settled here, which at least seems like a strict
improvement over the previous version.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289373 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics don't read the upper bits of their second and third inputs so we can try to simplify them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289372 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics don't read the upper elements of their first and second inputs. These are slightly different from the SSE version, which does use the upper bits of its first element as passthru bits since the result goes to an XMM register. For AVX-512 the result goes to a mask register instead.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289371 91177308-0d34-0410-b5e6-96231b3b80d8
These intrinsics don't read the upper bits of their second input. The third input is the passthru for masking, and that only uses the lower element as well.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289370 91177308-0d34-0410-b5e6-96231b3b80d8
These are currently limited to integer types, but we should
be able to extend to splat vectors and possibly general vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289343 91177308-0d34-0410-b5e6-96231b3b80d8
This was failing when trying to fold immediates into operand 1 of a
phi, which only has one statically known operand.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289337 91177308-0d34-0410-b5e6-96231b3b80d8
Extension to D27129 which already supported bitcasts from 'small element' vector to 'large element' scalar/vector types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289329 91177308-0d34-0410-b5e6-96231b3b80d8
There was a bug where we would hit an assertion if 'Q' was used as a
constraint.
I also replaced hardcoded register names with regexes so the tests
don't break when the register allocator changes.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289325 91177308-0d34-0410-b5e6-96231b3b80d8
It looks like at some point in the past, constraint codes were changed
from being passed around as chars to being passed around as enums.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289323 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This never really got implemented, and was very hard to test before
a lot of the refactoring changes to make things more robust. But now we
can test it thoroughly and cleanly, especially at the CGSCC level.
The core idea is that when an inner analysis manager proxy receives the
invalidation event for the outer IR unit, it needs to walk the inner IR
units and propagate it to the inner analysis manager for each of those
units. For example, each function in the SCC needs to get an
invalidation event when the SCC gets one.
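In sketch form, the pattern looks like the following. The types here are
deliberately simplified, hypothetical stand-ins for the real proxy and
analysis-manager classes; only the propagation structure is the point:

    #include <vector>

    // Hypothetical, heavily simplified stand-ins for the real IR units and
    // analysis managers.
    struct Function {};
    struct SCC { std::vector<Function *> Nodes; };

    struct FunctionAnalysisManager {
      // Drop all cached analysis results for F (details elided).
      void invalidate(Function &F) { (void)F; }
    };

    // The inner-manager proxy: when the outer IR unit (the SCC) receives an
    // invalidation event, walk the inner IR units and forward the event to
    // the inner analysis manager for each of them.
    struct FunctionAnalysisManagerSCCProxyResult {
      FunctionAnalysisManager *InnerAM;

      bool invalidate(SCC &C) {
        for (Function *F : C.Nodes)
          InnerAM->invalidate(*F);
        // The proxy result itself stays valid; what changed is the set of
        // cached function analyses behind it.
        return false;
      }
    };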
The function / module interaction is somewhat boring here. This really
becomes interesting in the face of analysis-backed IR units. This patch
effectively handles all of the CGSCC layer's needs -- both invalidating
SCC analysis and invalidating function analysis when an SCC gets
invalidated.
However, this second aspect doesn't really handle the
LoopAnalysisManager well at this point. That one will need some change
of design in order to fully integrate, because unlike the call graph,
the entire function behind a LoopAnalysis's results can vanish out from
under us, and we won't even have a cached API to access. I'd like to
separate solving the loop problems into a subsequent patch, though, in
order to keep this one more focused, so I've adapted them to the API and
updated the tests that immediately fail, but I've not added the level of
testing and validation at that layer that I have at the CGSCC layer.
An important aspect of this change is that the proxy for the
FunctionAnalysisManager at the SCC pass layer doesn't work like the
other proxies for an inner IR unit, as it doesn't directly manage the
FunctionAnalysisManager or the invalidation and clearing of it. This would
create an ever-worsening problem of dual ownership of this
responsibility, split between the module-level FAM proxy and this
SCC-level FAM proxy. Instead, this patch changes the SCC-level FAM proxy
to work in terms of the module-level proxy and defer to it to handle
much of the updates. It only does SCC-specific invalidation. This will
become more important in subsequent patches that support more complex
invalidation scenarios.
Reviewers: jlebar
Subscribers: mehdi_amini, mcrosier, mzolotukhin, llvm-commits
Differential Revision: https://reviews.llvm.org/D27197
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289317 91177308-0d34-0410-b5e6-96231b3b80d8
Ideally ISD::FP_TO_SINT and ISD::FP_TO_UINT would only be used for cases with the same number of input and output elements.
Similar things have already been done for other convert intrinsics.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@289316 91177308-0d34-0410-b5e6-96231b3b80d8