Add a couple of tests to increase coverage for the TableGen'erated code,
in particular for rules where 2 generic instructions may be combined
into a single machine instruction.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304971 91177308-0d34-0410-b5e6-96231b3b80d8
The zero heuristic assumes that integers are more likely positive than negative,
but this also has the effect of assuming that strcmp return values are more
likely positive than negative. Given that, for nonzero strcmp return values,
it's the ordering of the arguments that determines the sign of the result,
there's no reason to assume that's true.
Fix this by inspecting the LHS of the compare and using TargetLibraryInfo to
decide if it's strcmp-like; if so, only assume that nonzero is more likely
than zero, i.e. strings are more often different than the same. This causes a
slight code generation change in the spec2006 benchmark 403.gcc, but with no
noticeable performance impact. The intent of this patch is to allow better
optimisation of dhrystone on Cortex-M CPUs, but currently it doesn't, as there
are also some changes that need to be made to if-conversion.
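As a hedged illustration (hypothetical IR with made-up value and label names,
not a test from this patch), the shape the heuristic now recognizes is a
strcmp-like call whose result is equality-compared against zero:

%res = call i32 @strcmp(i8* %a, i8* %b)
%cmp = icmp eq i32 %res, 0
br i1 %cmp, label %same, label %different

Only the nonzero-over-zero bias is applied to such a branch; no
positive-over-negative bias is inferred for %res.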
Differential Revision: https://reviews.llvm.org/D33934
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304970 91177308-0d34-0410-b5e6-96231b3b80d8
The FirePro and Radeon versions of Hawaii have different 64-bit floating point configurations, so use distinct target names for them. Rename the target name for Kabini to accommodate.
Differential Revision: https://reviews.llvm.org/D34016
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304959 91177308-0d34-0410-b5e6-96231b3b80d8
This was discussed in D33338. We have larger pattern-matching that ends in a
truncate, which we can reduce or remove by handling these smaller patterns first.
Further motivation is that narrower shift ops are easier for value tracking, and
zext is better than sext.
http://rise4fun.com/Alive/rhh
Name: boolshift
%sext = sext i1 %x to i8
%r = lshr i8 %sext, 7
=>
%r = zext i1 %x to i8
Name: noboolshift
%sext = sext i3 %x to i8
%r = lshr i8 %sext, 7
=>
%sh = lshr i3 %x, 2
%r = zext i3 %sh to i8
Differential Revision: https://reviews.llvm.org/D33879
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304939 91177308-0d34-0410-b5e6-96231b3b80d8
When considering merging stores whose values are the results of loads, only
consider stores whose values come from loads from the same base.
This fixes much of the longer compile times in PR33330.
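For illustration (hypothetical IR, not taken from the patch), a pair of stores
that remains a merging candidate because both stored values load from the same
base (%src):

%v0 = load i32, i32* %src
%p1 = getelementptr inbounds i32, i32* %src, i64 1
%v1 = load i32, i32* %p1
store i32 %v0, i32* %dst
%q1 = getelementptr inbounds i32, i32* %dst, i64 1
store i32 %v1, i32* %q1

Stores fed by loads from unrelated bases are no longer considered together,
which shrinks the set of candidates to search.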
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304934 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Check that the first access before the one being tested is valid.
Before this patch, if there was no definition prior to the Use being tested,
the first time Iter was dereferenced, it hit the sentinel.
Reviewers: dberlin, gbiv
Subscribers: sanjoy, Prazek, llvm-commits
Differential Revision: https://reviews.llvm.org/D33950
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304926 91177308-0d34-0410-b5e6-96231b3b80d8
This could be viewed as another shortcoming of the DAGCombiner:
when both operands of a compare are zexted from the same source
type, we should be able to compare the values in the original,
narrower type.
The effect on PowerPC perf is likely unnoticeable, but there's a
visible regression for x86 if we feed the suboptimal IR for memcmp
expansion to the DAG:
_cmp_eq4_zexted_to_i64:
movl (%rdi), %ecx
movl (%rsi), %edx
xorl %eax, %eax
cmpq %rdx, %rcx
sete %al
_cmp_eq4_better:
movl (%rdi), %ecx
xorl %eax, %eax
cmpl (%rsi), %ecx
sete %al
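For reference, a rough sketch (hypothetical reconstruction, assuming i32 loads
through both pointers) of the IR shapes behind the two sequences above:

; suboptimal: widen both loaded values, then compare as i64
%lx = load i32, i32* %x
%ly = load i32, i32* %y
%zx = zext i32 %lx to i64
%zy = zext i32 %ly to i64
%eq = icmp eq i64 %zx, %zy

; better: compare the original i32 values directly
%eq = icmp eq i32 %lx, %ly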
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304923 91177308-0d34-0410-b5e6-96231b3b80d8
This makes it so that the code quality for CFI checks when compiling
with -O2 and linking with --lto-O0 is similar to that of the rest of
the code.
Reduces the size of a Chrome binary built with -O2/--lto-O0 by
about 750KB.
Differential Revision: https://reviews.llvm.org/D33925
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304921 91177308-0d34-0410-b5e6-96231b3b80d8
Changed the immediate type for repl.ph from uimm10 to simm10, as per the specs.
repl.qb still accepts uimm8. Both instructions now mimic the behaviour of
GNU as.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D33594
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304918 91177308-0d34-0410-b5e6-96231b3b80d8
If we know that both operands of an unsigned integer vector comparison are non-negative,
then it's safe to directly use a signed-compare-greater-than instruction (the only non-equality
integer vector compare predicate provided by SSE/AVX).
We're intentionally not changing the condition code to signed in order to preserve the
existing transforms that use min/max/psubus below this point in the code.
This should solve PR33276:
https://bugs.llvm.org/show_bug.cgi?id=33276
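An illustrative (hypothetical) case where the fold applies, since the masking
clears the sign bits of both operands:

; both operands are known non-negative, so this unsigned compare can be
; lowered directly to the signed pcmpgtd
%xa = and <4 x i32> %x, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
%ya = and <4 x i32> %y, <i32 2147483647, i32 2147483647, i32 2147483647, i32 2147483647>
%r = icmp ugt <4 x i32> %xa, %ya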
Differential Revision: https://reviews.llvm.org/D33862
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304909 91177308-0d34-0410-b5e6-96231b3b80d8
In the special (but likely common) case, we can avoid
the multi-block complexity of the general algorithm, so moving
this part off on its own will make it re-usable.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304908 91177308-0d34-0410-b5e6-96231b3b80d8
The clang compiler by default uses FastISel when invoked with -O0, which
is itself the default optimization level. In that case, -mxgot does not
get honored, i.e. the code path that deals with a large GOT is not taken,
and clang produces the same output regardless of whether -mxgot is present.
This change checks whether -mxgot is passed as an option, and turns off
FastISel if it is.
Patch by Stefan Maksimovic.
Differential Revision: https://reviews.llvm.org/D33593
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304906 91177308-0d34-0410-b5e6-96231b3b80d8
According to the commit message from r296921, G_MERGE_VALUES and
G_INSERT are to be preferred over G_SEQUENCE. Therefore, stop generating
G_SEQUENCE in the ARM backend and remove the code dealing with it.
This boils down to the code breaking up double values for the soft-float
calling convention. Use G_MERGE_VALUES + G_UNMERGE_VALUES instead of
G_SEQUENCE + G_EXTRACT for it. This maps very nicely to VMOVDRR +
VMOVRRD and simplifies the code in the instruction selector.
There's one occurrence of G_SEQUENCE left in arm-irtranslator.ll, but
that is part of the target-independent code for translating constant
structs. Therefore, it is beyond the scope of this commit.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304902 91177308-0d34-0410-b5e6-96231b3b80d8
If there's enough data in front of it, the skipped region would just
become arbitrarily large, and we would scan for the CHECK-NOT everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304900 91177308-0d34-0410-b5e6-96231b3b80d8
Same as the other binary operators:
- legalize to 32 bits
- map to GPRs
- select to EORrr via TableGen'erated code
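A minimal sketch of the kind of function this covers (hypothetical test,
using s16 to show the widening):

define i16 @test_xor_s16(i16 %a, i16 %b) {
  ; the s16 xor is widened to s32, mapped to GPRs, and selected to EORrr
  %res = xor i16 %a, %b
  ret i16 %res
}

The G_OR and G_AND commits below follow the same recipe.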
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304898 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit r301394. It broke some internal buildbots, reverting
while the issue is being investigated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304896 91177308-0d34-0410-b5e6-96231b3b80d8
We were checking that the index was in range of the destination vector type, not the (larger) source vector type.
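As a purely hypothetical illustration of the distinction (not the exact pattern
from the bug), an index can exceed the narrower destination type while still
being valid for the wider source:

; index 6 is out of range for the <4 x i16> destination but perfectly
; valid for the <8 x i16> source
%r = shufflevector <8 x i16> %x, <8 x i16> undef, <4 x i32> <i32 0, i32 2, i32 4, i32 6>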
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304894 91177308-0d34-0410-b5e6-96231b3b80d8
Same as the other binary operators:
- legalize to 32 bits
- map to GPRs
- select ORRrr thanks to TableGen'erated code
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304890 91177308-0d34-0410-b5e6-96231b3b80d8
This is identical to the support for the other binary operators:
- widen to s32
- map into GPR
- select ANDrr (via TableGen'erated code)
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304885 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch updates Triple::isCompatibleWith to make armxx and thumbxx
triples compatible, as long as the subarch, vendor, OS, environment and
object format match. Thumb/ARM code generation should be controlled
using the thumb-mode per-function target feature rather than by the
triple, to allow mixing Thumb and ARM functions.
D33448 updates Clang's codegen to add thumb-mode for all functions with
armxx or thumbxx triples.
Reviewers: echristo, t.p.northover, rafael, kristof.beyls, rengolin, tejohnson
Reviewed By: tejohnson
Subscribers: rinon, eugenis, pcc, srhines, aemerson, mehdi_amini, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D33287
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304884 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Relocations are required for unconditional branches to function symbols with
a different execution mode. Without this patch, incorrect branches are
generated for tail calls between functions with different execution
modes.
Reviewers: peter.smith, rafael, echristo, kristof.beyls
Reviewed By: peter.smith
Subscribers: aemerson, javed.absar, llvm-commits
Differential Revision: https://reviews.llvm.org/D33898
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304882 91177308-0d34-0410-b5e6-96231b3b80d8
These methods are specifically optimized to only count leading zeros without an additional uint64_t compare.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304876 91177308-0d34-0410-b5e6-96231b3b80d8
I believe this code used to use APInt references, which would have worked. But then they were changed to pointers to allow m_APInt to be used.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@304875 91177308-0d34-0410-b5e6-96231b3b80d8