Turns out udivrem can write its output to the same location as one of its inputs so the extra temporary isn't needed.
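For illustration, a minimal sketch of the pattern this enables inside toString (the helper name here is assumed, not the exact in-tree code): the quotient is written straight back into the value being decomposed, so no scratch APInt is needed per digit.

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // Quotient aliases the dividend; Digit receives the remainder.
  static void takeDigit(APInt &Tmp, const APInt &Divisor, APInt &Digit) {
    APInt::udivrem(Tmp, Divisor, /*Quotient=*/Tmp, /*Remainder=*/Digit);
  }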
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302772 91177308-0d34-0410-b5e6-96231b3b80d8
This lets toString take advantage of the degenerate case checks in udivrem and is just generally cleaner.
One minor downside of this is that the divisor APInt now needs to be the same size as Tmp which requires an additional allocation. But we were doing a poor job of reusing allocations before so the new code should still be an improvement.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302704 91177308-0d34-0410-b5e6-96231b3b80d8
The description says it returns the number of words needed to represent the result, but as coded it always returned (lhsWords + rhsWords) or (lhsWords + rhsWords - 1). The actual result could be even smaller than that, and the function wouldn't tell you.
No one uses the result today so rather than try to fix it, just remove it.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302551 91177308-0d34-0410-b5e6-96231b3b80d8
The value of 'i' is always the smaller of DstParts and SrcParts so we can just use that fact to write all the code in terms of SrcParts and DstParts.
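Roughly, the simplification amounts to the following shape (an illustrative sketch only; the helper name and surrounding code are assumed): copy the overlapping words, then zero whatever the destination has left over.

  #include <algorithm>
  #include <cstdint>
  #include <cstring>

  static void copyParts(uint64_t *Dst, unsigned DstParts,
                        const uint64_t *Src, unsigned SrcParts) {
    unsigned N = std::min(DstParts, SrcParts); // the old loop's final 'i'
    std::memcpy(Dst, Src, N * sizeof(uint64_t));
    if (DstParts > N)
      std::memset(Dst + N, 0, (DstParts - N) * sizeof(uint64_t));
  }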
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302408 91177308-0d34-0410-b5e6-96231b3b80d8
Currently multiply is implemented in operator*=. Operator* makes a copy and uses operator*= to modify the copy.
Operator*= itself allocates a temporary buffer to hold the multiply result as it computes it, then copies it into the buffer in *this.
Operator*= attempts to bound the size of the result based on the number of active bits in its inputs. It also has a couple of special cases to handle zero inputs without any memory allocations or multiply operations. The best case is that it calculates only a single word regardless of the input bit width. The worst case is that it calculates a result twice the input width and drops the upper bits.
Since operator* uses operator*=, it incurs two allocations: one for the copy of *this and one for the temporary buffer. Neither allocation is kept after the operation is done.
The main usage in the backend appears to be ConstantRange::multiply which uses operator* rather than operator*=.
This patch moves the multiply operation to operator* and implements operator*= using it. This avoids the copy in operator*. operator* now always allocates a result buffer of the same width as its inputs, and that buffer becomes the buffer of the returned APInt. Finally, we reuse tcMultiply to implement the multiply operation. That function is capable of not calculating the additional upper words that would be discarded.
This change does lose the special optimizations for inputs that use fewer words than their size implies, but it also removes the getActiveBits calls from all multiplies. If we think those optimizations are important, we could look at providing additional bounds to tcMultiply to limit the computation.
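A minimal sketch of the new structure, assuming APInt's tcMultiply and its internal word-storage helpers (this is my reading of the patch, not the literal code):

  APInt APInt::operator*(const APInt &RHS) const {
    assert(BitWidth == RHS.BitWidth && "Bit widths must be the same");
    if (isSingleWord())
      return APInt(BitWidth, U.VAL * RHS.U.VAL);
    APInt Result(getMemory(getNumWords()), BitWidth);
    // tcMultiply only produces getNumWords() result words, so the upper half
    // of the full double-width product is never computed.
    tcMultiply(Result.U.pVal, U.pVal, RHS.U.pVal, getNumWords());
    Result.clearUnusedBits();
    return Result;
  }

  APInt &APInt::operator*=(const APInt &RHS) {
    *this = *this * RHS; // reuse operator*; the extra copy of *this is gone
    return *this;
  }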
Differential Revision: https://reviews.llvm.org/D32830
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302171 91177308-0d34-0410-b5e6-96231b3b80d8
Currently several places assume the VAL member is always at least the same size as pVal, in particular a memcpy in the move assignment operator. While that assumption is currently true, it isn't good practice to rely on it.
This patch gives the union a name so we can write the memcpy in terms of the union itself. This also adds a similar memcpy to the move constructor where we previously just copied using VAL directly.
This patch is mostly just a mechanical addition of the U in front of VAL and pVal everywhere. But several constructors had to be modified since we can't directly initialize a field of a named union from the initializer list.
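Roughly the shape involved (a simplified sketch, not the in-tree class definition):

  #include <cstdint>
  #include <cstring>

  class APIntSketch {
    union {
      uint64_t VAL;   // used when the value fits in a single word
      uint64_t *pVal; // pointer to the word array otherwise
    } U;
    unsigned BitWidth;

  public:
    // A field of the named union is assigned in the constructor body rather
    // than from the initializer list.
    APIntSketch(unsigned BW, uint64_t V) : BitWidth(BW) { U.VAL = V; }

    // Move operations copy the whole union instead of assuming VAL covers it.
    APIntSketch(APIntSketch &&That) : BitWidth(That.BitWidth) {
      std::memcpy(&U, &That.U, sizeof(U));
      That.BitWidth = 0;
    }
  };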
Differential Revision: https://reviews.llvm.org/D30629
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@302040 91177308-0d34-0410-b5e6-96231b3b80d8
This replaces a hand written copy loop with a call to memcpy for both zext and sext.
For sext, it replaces multiple if/else blocks propagating sign information forward. Now we just do a copy, a sign extension on the last copied word, a memset, and clearUnusedBits.
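As an illustration of that approach on raw words (names and the DstWords >= SrcWords precondition are assumed here):

  #include "llvm/Support/MathExtras.h"
  #include <cstdint>
  #include <cstring>

  static void sextWords(uint64_t *Dst, unsigned DstWords,
                        const uint64_t *Src, unsigned SrcWords,
                        unsigned SrcBits) {
    // Copy the source words wholesale.
    std::memcpy(Dst, Src, SrcWords * sizeof(uint64_t));
    // Sign extend the last copied word up to a word boundary.
    unsigned TopBits = ((SrcBits - 1) % 64) + 1;
    Dst[SrcWords - 1] = uint64_t(llvm::SignExtend64(Dst[SrcWords - 1], TopBits));
    // Fill the remaining words with the sign.
    bool Negative = int64_t(Dst[SrcWords - 1]) < 0;
    std::memset(Dst + SrcWords, Negative ? 0xFF : 0x00,
                (DstWords - SrcWords) * sizeof(uint64_t));
    // A real APInt::sext would finish with clearUnusedBits() on the result.
  }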
Differential Revision: https://reviews.llvm.org/D32417
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301201 91177308-0d34-0410-b5e6-96231b3b80d8
This patch adds an in place version of ashr to match lshr and shl which were recently added.
I've tried to make this similar to the lshr code, with additions to handle the sign extension. I've also tried to do this with fewer if checks than the current ashr code by sign extending the original value to a word boundary before doing any of the shifting. This removes a lot of the complexity of determining where to fill in sign bits after the shifting.
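A rough sketch of that approach on a raw word array (illustrative only, not the in-tree ashrInPlace):

  #include <cstdint>

  static void ashrWords(uint64_t *Words, unsigned NumWords, unsigned BitWidth,
                        unsigned ShiftAmt) {
    const unsigned WordBits = 64;
    bool Negative = (Words[NumWords - 1] >> ((BitWidth - 1) % WordBits)) & 1;

    // Sign extend the top word out to a word boundary first, so the bits
    // shifted down from above are already correct.
    unsigned TopBits = BitWidth % WordBits;
    if (Negative && TopBits != 0)
      Words[NumWords - 1] |= ~uint64_t(0) << TopBits;

    // Now the shift itself is a plain word-wise logical shift; words beyond
    // the top are conceptually all sign bits.
    unsigned WordShift = ShiftAmt / WordBits;
    unsigned BitShift = ShiftAmt % WordBits;
    uint64_t Fill = Negative ? ~uint64_t(0) : 0;
    for (unsigned I = 0; I != NumWords; ++I) {
      uint64_t Lo = I + WordShift < NumWords ? Words[I + WordShift] : Fill;
      uint64_t Hi = I + WordShift + 1 < NumWords ? Words[I + WordShift + 1] : Fill;
      Words[I] = BitShift ? (Lo >> BitShift) | (Hi << (WordBits - BitShift)) : Lo;
    }
    // A real implementation would clear the bits above BitWidth again here,
    // as APInt::clearUnusedBits() does.
  }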
Differential Revision: https://reviews.llvm.org/D32415
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301198 91177308-0d34-0410-b5e6-96231b3b80d8
Previously the single-word case would always return 0 regardless of the original sign, while the multiword case would return all 0s or all 1s based on the original sign. Now the single-word case takes the sign into account as well.
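For example (values here are just an illustration of the intended behaviour when shifting by the full bit width):

  #include "llvm/ADT/APInt.h"

  void ashrByWidthExample() {
    llvm::APInt Neg(8, 0x80);  // sign bit set
    llvm::APInt Pos(8, 0x10);
    (void)Neg.ashr(8);         // now all ones (0xFF), matching the multiword case
    (void)Pos.ashr(8);         // still zero
  }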
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301159 91177308-0d34-0410-b5e6-96231b3b80d8
The current code is trying to be clever with shifts to avoid needing to clear unused bits. But it looks like the compiler is unable to optimize out the unused bit handling in the APInt constructor. Given this, it's better to just use SignExtend64 and have more readable code.
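For reference, the kind of rewrite involved (the surrounding context is assumed):

  #include "llvm/Support/MathExtras.h"
  #include <cstdint>

  // Instead of shift tricks, sign extend the low BitWidth bits directly.
  static int64_t signExtendWord(uint64_t Val, unsigned BitWidth) {
    return llvm::SignExtend64(Val, BitWidth);
  }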
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301133 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commits r301105, r301104, r301103, and r301101, as a follow-up to the
previous revert, which broke even more bots.
For reference:
Revert "[APInt] Use operator<<= where possible. NFC"
Revert "[APInt] Use operator<<= instead of shl where possible. NFC"
Revert "[APInt] Use ashInPlace where possible."
PR32754.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301111 91177308-0d34-0410-b5e6-96231b3b80d8
For the single-word case, a shift by BitWidth was always returning 0, but for the multiword case it was based on the original sign. Now the single-word case matches the multiword one.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301094 91177308-0d34-0410-b5e6-96231b3b80d8
The unused upper bits are guaranteed to be 0 so we don't need to worry about accidentally counting them.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301091 91177308-0d34-0410-b5e6-96231b3b80d8
Currently sle and ule have to call slt/ult and eq to get the proper answer. This results in extra code for both calls and additional scans of multiword APInts.
This patch replaces slt/ult with a compareSigned/compare that can return -1, 0, or 1 so we can cover all the comparison functions with a single call.
While I was there I removed the activeBits calls and other checks at the start of the slow part of ult. Both of the activeBits calls potentially scan through each of the APInts separately. I can't imagine that's any better than just scanning them in parallel and doing the compares. Now we just share the code with tcCompare.
These changes seem to be good for about a 7-8k reduction on the size of the opt binary on my local x86-64 build.
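To illustrate the single-pass idea on raw words (this is a sketch with assumed names, not the in-tree tcCompare/compareSigned):

  #include <cstddef>
  #include <cstdint>

  // Returns -1, 0, or 1. TopBit is the index of the sign bit within the top
  // word, i.e. (BitWidth - 1) % 64.
  static int compareWordsSigned(const uint64_t *L, const uint64_t *R,
                                size_t NumWords, unsigned TopBit) {
    bool LNeg = (L[NumWords - 1] >> TopBit) & 1;
    bool RNeg = (R[NumWords - 1] >> TopBit) & 1;
    if (LNeg != RNeg)
      return LNeg ? -1 : 1;
    // Same sign: one scan from the most significant word down covers it.
    for (size_t I = NumWords; I-- > 0;)
      if (L[I] != R[I])
        return L[I] < R[I] ? -1 : 1;
    return 0;
  }

  // All six signed predicates then reduce to one call, e.g.
  //   sle: compareWordsSigned(...) <= 0
  //   sgt: compareWordsSigned(...) >  0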
Differential Revision: https://reviews.llvm.org/D32339
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300995 91177308-0d34-0410-b5e6-96231b3b80d8
This question comes up in many places in SimplifyDemandedBits. This makes it easy to ask without allocating additional temporary APInts.
The BitVector class provides similar functionality through its (IMHO badly named) test(const BitVector&) method, though its output polarity is reversed.
I've provided one example use case in this patch. I plan to do more as a follow up.
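Usage looks roughly like this (my reading of the new isSubsetOf predicate's semantics, where every bit set in *this must also be set in the argument):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  static bool allDemandedBitsKnownZero(const APInt &Demanded,
                                       const APInt &KnownZero) {
    // Old form, which materializes a temporary APInt for the AND:
    //   return (Demanded & KnownZero) == Demanded;
    return Demanded.isSubsetOf(KnownZero);
  }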
Differential Revision: https://reviews.llvm.org/D32258
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300851 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: This is a simple question we should be able to answer without creating a temporary to hold the AND result. We can also get an early out as soon as we find a word that intersects.
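In sketch form, the multiword path becomes an early-out scan (names here are assumed):

  #include <cstdint>

  static bool intersectsSlowCase(const uint64_t *LHS, const uint64_t *RHS,
                                 unsigned NumWords) {
    for (unsigned I = 0; I != NumWords; ++I)
      if (LHS[I] & RHS[I])
        return true; // early out on the first word that intersects
    return false;
  }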
Reviewers: RKSimon, hans, spatel, davide
Reviewed By: hans, davide
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D32253
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300812 91177308-0d34-0410-b5e6-96231b3b80d8
This patch uses lshrInPlace to replace code where the object that lshr is called on is being overwritten with the result.
This adds an lshrInPlace(const APInt &) version as well.
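A typical before/after, for illustration:

  #include "llvm/ADT/APInt.h"

  static void halve(llvm::APInt &X) {
    // Before: X = X.lshr(1);
    X.lshrInPlace(1);
  }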
Differential Revision: https://reviews.llvm.org/D32155
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300566 91177308-0d34-0410-b5e6-96231b3b80d8
This merges the two different multiword shift right implementations into a single version located in tcShiftRight. lshrInPlace now calls tcShiftRight for the multiword case.
I retained the memmove fast path from lshrInPlace and used a memset for the zeroing. The for loop is basically tcShiftRight's implementation with the zeroing and the intra-shift of 0 removed.
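The whole-word fast path, in sketch form (intra-word shifting omitted; names assumed):

  #include <cstdint>
  #include <cstring>

  static void shiftRightWholeWords(uint64_t *Words, unsigned NumWords,
                                   unsigned WordShift) {
    if (WordShift >= NumWords) {
      std::memset(Words, 0, NumWords * sizeof(uint64_t));
      return;
    }
    // Move the surviving words down, then zero the vacated top words.
    std::memmove(Words, Words + WordShift,
                 (NumWords - WordShift) * sizeof(uint64_t));
    std::memset(Words + NumWords - WordShift, 0, WordShift * sizeof(uint64_t));
  }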
Differential Revision: https://reviews.llvm.org/D32114
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300503 91177308-0d34-0410-b5e6-96231b3b80d8
This was throwing an assert because we determined the intra-word shift amount by subtracting the size of the full word shift from the total shift amount. But we failed to account for the fact that we clipped the full word shifts by total words first. To fix this just calculate the intra-word shift as the remainder of dividing by bits per word.
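In other words (a sketch with assumed names):

  #include <algorithm>

  static void splitShift(unsigned ShiftAmt, unsigned NumWords,
                         unsigned BitsPerWord, unsigned &WordShift,
                         unsigned &BitShift) {
    WordShift = std::min(ShiftAmt / BitsPerWord, NumWords); // clamped
    // Take the remainder directly; deriving it by subtracting the clamped
    // WordShift is what tripped the assert.
    BitShift = ShiftAmt % BitsPerWord;
  }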
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300405 91177308-0d34-0410-b5e6-96231b3b80d8
Switch from Euclid's algorithm to Stein's algorithm for computing GCD. This
avoids the (expensive) APInt division operation in favour of bit operations.
Remove all memory allocation from within the GCD loop by tweaking our `lshr`
implementation so it can operate in-place.
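For reference, a minimal sketch of Stein's algorithm in terms of APInt operations (not the exact in-tree GreatestCommonDivisor):

  #include "llvm/ADT/APInt.h"
  #include <algorithm>
  #include <utility>
  using namespace llvm;

  static APInt binaryGCD(APInt A, APInt B) {
    if (A == 0) return B;
    if (B == 0) return A;
    // Factor out the common power of two.
    unsigned Pow2 = std::min(A.countTrailingZeros(), B.countTrailingZeros());
    A.lshrInPlace(A.countTrailingZeros()); // in-place shift: no allocation
    do {
      B.lshrInPlace(B.countTrailingZeros());
      if (A.ugt(B))
        std::swap(A, B);
      B -= A; // B is now even (or zero)
    } while (B != 0);
    return A.shl(Pow2);
  }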
Differential Revision: https://reviews.llvm.org/D31968
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300252 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
APInt is currently implemented with an unsigned BitWidth field first and then a uint64_t/pointer union. Due to the 64-bit alignment of the union, this leaves a 4-byte hole after the BitWidth.
Putting the union first allows the class to be packed, making it 12 bytes instead of 16. An APSInt goes from 20 bytes to 16 bytes.
This shows a 4k reduction in the size of the opt binary on my local x86-64 build, so it appears to enable some other improvements in the code as well.
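Schematically (field names taken from the description; this is only a layout illustration, not the in-tree definition):

  #include <cstdint>

  struct BitWidthFirst {                        // old layout
    unsigned BitWidth;                          // 4 bytes
    union { uint64_t VAL; uint64_t *pVal; } U;  // 8-byte aligned
  }; // 4-byte hole after BitWidth; sizeof == 16 on x86-64

  struct UnionFirst {                           // new layout
    union { uint64_t VAL; uint64_t *pVal; } U;
    unsigned BitWidth;
  }; // no interior hole, so a packed attribute can shrink this to 12 bytes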
Reviewers: dblaikie, RKSimon, hans, davide
Reviewed By: davide
Subscribers: davide, llvm-commits
Differential Revision: https://reviews.llvm.org/D32001
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@300171 91177308-0d34-0410-b5e6-96231b3b80d8