Disable bypassing if one of the operands looks like a hash value. Slow
division often occurs in hashtable implementations, and the fast-division
path is never taken there because a hash value is extremely unlikely to
have enough of its upper bits clear.
A value is considered hash-like if it is produced by one of the following
(see the sketch after this list):
1) an XOR operation
2) a multiplication by a constant wider than the shorter type
3) a PHI node whose incoming values are all hash-like
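A minimal sketch of that test, assuming LLVM's C++ API; the helper name, the
cycle-breaking visited set, and the ShortBitWidth constant are illustrative
rather than the patch's exact code:
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Illustrative width of the "short" type the bypass would divide in.
static const unsigned ShortBitWidth = 32;

static bool isHashLikeValue(const Value *V,
                            SmallPtrSetImpl<const Value *> &Visited) {
  const auto *I = dyn_cast<Instruction>(V);
  if (!I)
    return false;
  // Break PHI cycles conservatively.
  if (!Visited.insert(I).second)
    return false;

  switch (I->getOpcode()) {
  case Instruction::Xor:
    return true;
  case Instruction::Mul: {
    // Multiplication by a constant wider than the shorter type.
    const auto *C = dyn_cast<ConstantInt>(I->getOperand(1));
    return C && C->getValue().getActiveBits() > ShortBitWidth;
  }
  case Instruction::PHI:
    // Hash-like only if every incoming value is hash-like.
    return all_of(cast<PHINode>(I)->incoming_values(), [&](const Value *Op) {
      return isHashLikeValue(Op, Visited);
    });
  default:
    return false;
  }
}
```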
Differential Revision: https://reviews.llvm.org/D28200
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299329 91177308-0d34-0410-b5e6-96231b3b80d8
The code already allowed vector types in via "isInteger" (which might want
a more specific name), so use splat-friendly constant predicates to match
those types.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299304 91177308-0d34-0410-b5e6-96231b3b80d8
This can only happen when we have a mix of zero and undef elements and the two vectors have a different arrangement of zeros/undefs. The shuffle should eventually be constant folded to all zeros.
Fixes PR32484.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299291 91177308-0d34-0410-b5e6-96231b3b80d8
REG_SEQUENCE falls into the same category as COPY for operand mapping:
- They don't have an MCInstrDesc with register constraints
- The input variables could use any register class
- A register class may already be assigned to the operands
In particular, REG_SEQUENCE instructions are always target specific because
of the subreg indices. Those indices must apply to the register class of
the definition of the REG_SEQUENCE, and therefore the target must set a
register class on that definition. As a result, the generic code can
always use that register class to derive a valid mapping for a
REG_SEQUENCE.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299285 91177308-0d34-0410-b5e6-96231b3b80d8
This patch refactors the code used in llc so that all users of the
addPassesToEmitFile API get a homogeneous way of handling the
start/stop-after/before options right out of the box.
Previously, each user would have had to duplicate this logic and set up
its own options.
NFC
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299282 91177308-0d34-0410-b5e6-96231b3b80d8
This is helpful when extracting objects from archives produced by MSVC's
lib.exe, which uses absolute paths to describe the archive members.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299264 91177308-0d34-0410-b5e6-96231b3b80d8
(and (setlt X, 0), (setlt Y, 0)) --> (setlt (and X, Y), 0)
We have 7 similar folds, but this one got away. The fact that the
x86 test with a branch didn't change is probably a separate bug. We
may also be missing this and the related folds in instcombine.
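A scalar C++ illustration of the source pattern this fold targets (the
function is hypothetical, not taken from the tests); both comparisons only
inspect the sign bit, so ANDing the values first and testing once is
equivalent:
```
bool bothNegative(int X, int Y) {
  // (X < 0) & (Y < 0) is true exactly when both sign bits are set,
  // which is the same as the sign bit of (X & Y) being set.
  return (X < 0) & (Y < 0);   // folds to: (X & Y) < 0
}
```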
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299252 91177308-0d34-0410-b5e6-96231b3b80d8
A common way to implement nearbyint is by fiddling with the floating
point environment and calling rint. This is used at least by the BSD
libm and musl. As such, canonicalizing rint to nearbyint creates
infinite loops in libm and generally pessimizes performance, at least
when the generic C versions are used.
This change preserves the rint in the libcall translation and also
handles the domain truncation logic, so that rint with a float argument
will be reduced to rintf etc.
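For reference, a sketch of the nearbyint-on-top-of-rint scheme described
above (not the exact BSD/musl source): the inexact flag is saved around the
rint call, so canonicalizing the inner rint into nearbyint would make the
function call itself forever.
```
#include <cfenv>
#include <cmath>

double my_nearbyint(double x) {
  // Save whether FE_INEXACT was already raised, round via rint, then clear
  // the flag again if rint was the one that raised it.
  int had_inexact = std::fetestexcept(FE_INEXACT);
  x = std::rint(x);
  if (!had_inexact)
    std::feclearexcept(FE_INEXACT);
  return x;
}
```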
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299247 91177308-0d34-0410-b5e6-96231b3b80d8
Add a new node to act as a fancy bitcast from f16 operations to
i32 that implicitly zeroes the high 16 bits of the result.
Alternatively, we could try making v2f16 legal and canonicalizing
on build_vectors.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299246 91177308-0d34-0410-b5e6-96231b3b80d8
These are the same tests added for x86 with r299238,
but PPC doesn't specify all branches as cheap, so we
see different patterns in tests with branches.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299244 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This feature enables folding logical shifts of up to 3 places into the addressing mode on Kryo and Falkor, which have a fast-path LSL.
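A small example of the kind of source that benefits (assumed, not taken from
the patch): the index scaling for the 64-bit element folds into the load's
addressing mode as an LSL #3, e.g. "ldr x0, [x0, x1, lsl #3]".
```
long load_elem(const long *p, long i) {
  // The index scaling (<< 3 for 8-byte elements) can be folded into the
  // load's addressing mode instead of a separate shift instruction.
  return p[i];
}
```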
Reviewers: mcrosier, rengolin, t.p.northover
Subscribers: junbuml, gberry, llvm-commits, aemerson
Differential Revision: https://reviews.llvm.org/D31113
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299240 91177308-0d34-0410-b5e6-96231b3b80d8
This wasn't covering the case where there are multiple latches, and hence
multiple uses of the same loop-id, which all need to be mapped to the same
loop-id.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299237 91177308-0d34-0410-b5e6-96231b3b80d8
Our _MM_HINT_T0/T1 constant values are 3/2, which matches gcc but not icc or the Intel documentation. Interestingly, gcc had this same bug in its implementation of the gather/scatter builtins at one point too.
Fixes PR32411.
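The practical takeaway for users (a usage note, not part of the patch):
always pass the named macro rather than a literal, since the numeric value
behind _MM_HINT_T0/_MM_HINT_T1 differs between compilers.
```
#include <xmmintrin.h>

void warm(const char *p) {
  // Prefetch into all cache levels; the macro expands to the right
  // compiler-specific constant.
  _mm_prefetch(p, _MM_HINT_T0);
}
```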
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299234 91177308-0d34-0410-b5e6-96231b3b80d8
Summary: Currently, the VP metadata is dropped when InstCombine converts an indirect call to a direct call. This patch converts the VP metadata to branch_weights so that the call's hotness is recorded.
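A rough sketch of the conversion (assumed shape and helper name; the actual
patch code differs): take the value-profile count for the now-direct callee
and attach it as branch_weights profile metadata on the call.
```
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/MDBuilder.h"
using namespace llvm;

static void annotateDirectCall(CallInst &CI, uint32_t CalleeCount) {
  // Record the callee's hotness as a single branch_weights entry, replacing
  // the VP metadata that used to be dropped here.
  MDBuilder MDB(CI.getContext());
  CI.setMetadata(LLVMContext::MD_prof,
                 MDB.createBranchWeights({CalleeCount}));
}
```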
Reviewers: eraman, davidxl
Reviewed By: davidxl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31344
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299228 91177308-0d34-0410-b5e6-96231b3b80d8
Implementation of TargetInstrInfo::findCommutedOpIndices for the MIPS target,
restricting commutativity to the second and third operands of the
dpadd_[su].df instructions.
Prior to this change, there were cases where the vector that is to be added
to the dot product of the other two could take a position other than the
first one in the instruction, producing incorrect output in the destination
vector.
Such behavior has been noticed in the two functions generating v2i64 output
values so far. Others may exhibit it as well, just not for the vector
operands present in the test at the moment.
The tests were altered so that each function's first operand is a constant
splat, so that it can be loaded with an ldi instruction, since that is the
case in which the erroneous operand placement has occurred. We check that
the register defined by the ldi instruction is used as the first operand of
the corresponding dpadd instruction.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D30827
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299223 91177308-0d34-0410-b5e6-96231b3b80d8
Since LOCR only accepts GR32 virtual registers, its operands must be copied
into this regclass in insertSelect(), when an LOCR is built. Otherwise, the
case where the source operand was GRX32 will produce invalid IR.
Review: Ulrich Weigand
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299220 91177308-0d34-0410-b5e6-96231b3b80d8
Currently ComputeNumSignBits returns the minimum number of sign bits for all elements of vector data, when we may only be interested in one/some of the elements.
This patch adds a DemandedElts argument that allows us to specify the elements we actually care about. The original ComputeNumSignBits entry point calls the new implementation with a DemandedElts mask demanding all elements, to match current behaviour. Scalar types set this to 1.
I've only added support for BUILD_VECTOR and EXTRACT_VECTOR_ELT so far; all others default to demanding all elements but can be updated in due course.
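A hypothetical caller sketch, assuming a SelectionDAG DAG and an SDValue Op
of a 4-element vector type are in scope: demand only lane 0 instead of the
conservative minimum over all lanes.
```
// Demand only element 0 of the 4-element vector value Op.
APInt DemandedElts = APInt::getOneBitSet(/*numBits=*/4, /*BitNo=*/0);
unsigned SignBits = DAG.ComputeNumSignBits(Op, DemandedElts);
```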
Followup to D25691.
Differential Revision: https://reviews.llvm.org/D31311
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299219 91177308-0d34-0410-b5e6-96231b3b80d8
Even on older subtargets that lack vector support, there may be vector values
with just one element in the input program. These are converted during DAG
legalization to scalar values.
The pre-legalization SystemZ DAGCombiner methods should not touch these
nodes in this case. This patch adds a check for this in
SystemZTargetLowering::combineEXTRACT_VECTOR_ELT().
Review: Ulrich Weigand
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299213 91177308-0d34-0410-b5e6-96231b3b80d8
The patch rL298481 was reverted due to a crash on the clang-with-lto-ubuntu build.
The reason for the crash was a type mismatch between either a or b and RHS in the following situation:
LHS = sext(a +nsw b) > RHS.
This is a rare but still possible situation. Normally we need to cast all of {a, b, RHS} to their widest type,
but we try to avoid creating new non-constant SCEVs, to avoid initiating recursive analyses that
can take a lot of time and/or cache a bad value for the iteration count. To deal with this, this patch
rejects the case and does not try to analyze it if the type of the sum doesn't match the type of RHS. In this
situation we don't need to create any non-constant SCEVs.
This patch also adds an assertion to the method IsProvedViaContext so that we fail there rather than
going further into range analysis etc. (in some situations those analyses succeed even when the passed
arguments have wrong types, which should not normally happen).
The patch also contains a fix for a problem with a too-narrow scope of the analysis, caused by incorrect
usage of predicates in recursive invocations.
The regression test for the said failure: test/Analysis/ScalarEvolution/implied-via-addition.ll
Reviewers: reames, apilipenko, anna, sanjoy
Reviewed By: sanjoy
Subscribers: mzolotukhin, mehdi_amini, llvm-commits
Differential Revision: https://reviews.llvm.org/D31238
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299205 91177308-0d34-0410-b5e6-96231b3b80d8
Previously the compiler often extracted common immediates into a specific register, e.g.:
```
%vreg0 = S_MOV_B32 0xff;
%vreg2 = V_AND_B32_e32 %vreg0, %vreg1
%vreg4 = V_AND_B32_e32 %vreg0, %vreg3
```
Because of this, the SDWA peephole failed to find SDWA-convertible patterns. E.g. in the previous example this could be converted into 2 SDWA src operands:
```
SDWA src: %vreg2 src_sel:BYTE_0
SDWA src: %vreg4 src_sel:BYTE_0
```
With this change, the peephole checks whether an operand is either an immediate or a register that is a copy of an immediate.
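A sketch of the relaxed check (hypothetical helper using era-appropriate
APIs, not the patch's exact code): accept either an immediate operand or a
virtual register whose single definition is a move-immediate like the
S_MOV_B32 above.
```
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
#include "llvm/Target/TargetRegisterInfo.h"
using namespace llvm;

static bool isImmOrCopyOfImm(const MachineOperand &Op,
                             const MachineRegisterInfo &MRI) {
  if (Op.isImm())
    return true;
  if (!Op.isReg() || !TargetRegisterInfo::isVirtualRegister(Op.getReg()))
    return false;
  const MachineInstr *Def = MRI.getUniqueVRegDef(Op.getReg());
  // Treat a register defined by a single move-immediate as an immediate.
  return Def && Def->isMoveImmediate() && Def->getNumOperands() > 1 &&
         Def->getOperand(1).isImm();
}
```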
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299202 91177308-0d34-0410-b5e6-96231b3b80d8
Adding some test-cases demonstrating cases that need to be improved.
To be followed by patches that improve these cases.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299189 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Triggered by commit r298620: "[LV] Vectorize GEPs".
If we encounter a vector GEP with scalar arguments, we splat the scalar
into a vector of appropriate size before we scatter the argument.
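A minimal IRBuilder sketch of the idea (assumed helper name, not the exact
patch code): if an operand of the vector GEP is still scalar, splat it to
the vectorization factor before building the scattered GEP.
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

static Value *widenGEPOperand(IRBuilder<> &B, Value *Op, unsigned VF) {
  // Already vector-shaped: nothing to do.
  if (Op->getType()->isVectorTy())
    return Op;
  // Splat the scalar to a vector of VF identical lanes.
  return B.CreateVectorSplat(VF, Op, "splat");
}
```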
Reviewers: arsenm, mehdi_amini, bkramer
Reviewed By: arsenm
Subscribers: bjope, mssimpso, wdng, llvm-commits
Differential Revision: https://reviews.llvm.org/D31416
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299186 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Assertions assuming that function calls may not have zero durations do
not seem to hold in the wild. There are valid cases where the conversion
of the TSC counters ends up producing zero-length durations. These
assertions don't really hold, and the algorithms don't need them to be
true in order to work.
Reviewers: dblaikie, echristo
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D31519
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299150 91177308-0d34-0410-b5e6-96231b3b80d8
BIND_OPCODE_DONE/REBASE_OPCODE_DONE may appear at the end of the opcode array,
but they are not required to. The linker only adds them as padding to align the
opcodes to pointer size.
This fixes rdar://problem/31285560.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299104 91177308-0d34-0410-b5e6-96231b3b80d8
Mostly this change adds support for converting to and from
YAML, which will allow us to write more test cases for
the WebAssembly MC and lld ports.
Better support for objdump, readelf, and nm will be in
followup CLs.
I had to update the two wasm test binaries because they
used the old style 'name' section which is no longer
supported.
Differential Revision: https://reviews.llvm.org/D31099
Patch by Sam Clegg
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299101 91177308-0d34-0410-b5e6-96231b3b80d8
As an alternative to the TargetOptions.AllowFPOpFusion global flag, FMUL->FADD
fusion can now also be enabled by the per-operation FMF.
The idea here is not to port everything to the new scheme (e.g. fused
multiply-and-subtract will be ported later) but to make this work all the way
from clang.
The transformation is conditional on *both* the FADD and the FMUL having
the FMF contract flag.
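A source-level illustration (the flag and function are assumptions, not part
of the patch): building with clang's -ffp-contract=fast attaches the
'contract' fast-math flag to both the fmul and the fadd, which is exactly
what the new per-operation check looks for.
```
float muladd(float a, float b, float c) {
  // With 'contract' on both operations this may be fused into a single fma.
  return a * b + c;
}
```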
Differential Revision: https://reviews.llvm.org/D31169
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299096 91177308-0d34-0410-b5e6-96231b3b80d8
In the long-term, we want to replace statistics with something
finer-grained that lets us gather per-function data.
Remarks are that replacement.
Create an ORE instance in SelectionDAGISel, and pass it to
SelectionDAG.
SelectionDAG was used so that we can emit remarks from all
SelectionDAG-related code, including TargetLowering and DAGCombiner.
This isn't used in the current patch, but Adam tells me he's interested
in it for the fp-contract combines.
Use the ORE instance to emit FastISel failures as remarks (instead of
the mix of dbgs() dumps and statistics that we currently have).
Eventually, we want to have an API that tells us whether remarks are
enabled (http://llvm.org/PR32352) so that we don't emit expensive
remarks (in this case, dumping IR) when it's not needed. For now, use
'isEnabled' as a crude replacement.
This does mean that the replacement for '-fast-isel-verbose' is now
'-pass-remarks-missed=isel'. Additionally, clang users also need to
enable remark diagnostics, using '-Rpass-missed=isel'.
This also removes '-fast-isel-verbose2': there are no static statistics
that we want to only enable in asserts builds, so we can always use
the remarks regardless of the build type.
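A hedged sketch of what emitting such a remark through ORE looks like (the
pass name, remark name, and message are illustrative): this is the kind of
remark that -pass-remarks-missed=isel / -Rpass-missed=isel now surfaces.
```
#include "llvm/Analysis/OptimizationRemarkEmitter.h"
#include "llvm/IR/DiagnosticInfo.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

static void reportISelMiss(OptimizationRemarkEmitter &ORE,
                           const Instruction &I) {
  // Emit a missed-optimization remark tied to the instruction's location.
  ORE.emit(OptimizationRemarkMissed("isel", "FastISelFailure",
                                    I.getDebugLoc(), I.getParent())
           << "instruction was not selected by FastISel: "
           << ore::NV("Inst", &I));
}
```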
Differential Revision: https://reviews.llvm.org/D31405
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@299093 91177308-0d34-0410-b5e6-96231b3b80d8