Schedule more conservatively to account for stalls on floating point
resources and latency. Use the AGU resource to model latency stalls
since it's shared between FP and LD/ST instructions. This might not be
completely accurate but should work well in practice.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198125 91177308-0d34-0410-b5e6-96231b3b80d8
Many vector operations never had itineraries. Since the new machine
model was a mapping from existing itinerary classes, we don't have a
model for these. We still want to migrate A9 even though no one has
invested in a complete model, so mark it incomplete to avoid the
scheduler asserting.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198123 91177308-0d34-0410-b5e6-96231b3b80d8
PostGenericScheduler uses either the new machine model or the hazard
checker for top-down scheduling. Most of the infrastructure for PreRA
machine scheduling is reused.
With some tuning, this should allow MachineScheduler to be the default
for all ARM targets, including cortex-A9, using the new machine
model. Likewise, with additional tuning, it should be able to replace
PostRAScheduler for all targets.
The PostMachineScheduler pass does not currently run the
AntiDepBreaker. There is less need for it on targets that are already
running preRA MachineScheduler. I want to prove it's necessary before
committing to the maintenance burden.
The PostMachineScheduler also currently removes kill flags and adds
them all back later. This is a bit ridiculous. I'd prefer passes to
use a liveness utility directly rather than rely on kill flags.
A test case that enables this scheduler will be included in a
subsequent checkin that updates the A9 model.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198122 91177308-0d34-0410-b5e6-96231b3b80d8
Factor the MachineFunctionPass into MachineSchedulerBase.
Split the DAG class into ScheduleDAGMI and SchedulerDAGMILive.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198119 91177308-0d34-0410-b5e6-96231b3b80d8
vector shift by immediate count (VSHLI/VSRLI/VSRAI) into a build_vector when
the input vector to the shift is a build_vector of all constants or UNDEFs.
Target specific nodes for packed shifts by immediate count are in
general introduced by function 'getTargetVShiftByConstNode' (in
X86ISelLowering.cpp) when lowering shift operations, SSE/AVX immediate
shift intrinsics and (only in very few cases) SIGN_EXTEND_INREG dag
nodes.
This patch adds extra rules for simplifying vector shifts inside
function 'getTargetVShiftByConstNode'.
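For example (an illustrative case only; the constants and the shift count
are arbitrary), given the following sequence of dag nodes:
i32 C = Constant<1>
v4i32 V = BUILD_VECTOR C, C, C, C
v4i32 Result = VSHLI V, 2
the VSHLI node can be folded into a build_vector of the shifted constants:
i32 C = Constant<4>
v4i32 Result = BUILD_VECTOR C, C, C, C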
Added file test/CodeGen/X86/vec_shift5.ll to verify that packed
shifts by immediate are correctly folded into a build_vector when the
input vector to the shift dag node is a vector of constants or undefs.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198113 91177308-0d34-0410-b5e6-96231b3b80d8
The GNU assembler supports .rep as an alias for .rept. This simply creates the
alias for it and introduces a test for both .rept and .rep.
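For example (illustrative only), both of the following now emit two 0x1 bytes:
.rept 2
.byte 0x1
.endr
.rep 2
.byte 0x1
.endr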
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198097 91177308-0d34-0410-b5e6-96231b3b80d8
widespread glibc bugs.
The glibc implementation of exp10 has a very serious precision bug in
version 2.15 (and older versions). This is still very widely used (the
current Ubuntu LTS for example uses it) and so it isn't reasonable to
make transforms that produce these functions. This fixes many
miscompiles introduced when we started transforming pow(10.0, ...) into
exp10, and it may have fixed other latent miscompiles where exp10
provided sufficient precision but exp10f did not.
This is all really horrible. The primary bug has been fixed for over
a year and glibc 2.18 works correctly for the test cases I have, but it
will be 2017 before the LTS using 2.15 is no longer supported by Ubuntu
(and thus reasonable for folks to be relying on). =[ We're either going
to need to live without these optimizations, or find a way to switch
behavior more dynamically than simply using the fact that the OS is
"Linux".
To make matters worse, there appears to be significant ongoing testing
and fixing of numerous other bugs in the exp10 family of functions in
glibc. While those haven't caused problems I've seen in the wild, it
gives me concern that we may need to wait until an even later
release of glibc before we can reliably transform code into exp10.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198093 91177308-0d34-0410-b5e6-96231b3b80d8
just calling into MAI and is only abstracting for a single interface that
we actually need to check in multiple places.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198092 91177308-0d34-0410-b5e6-96231b3b80d8
ConstantSDNodes (or UNDEFs) into a simple BUILD_VECTOR.
For example, given the following sequence of dag nodes:
i32 C = Constant<1>
v4i32 V = BUILD_VECTOR C, C, C, C
v4i32 Result = SIGN_EXTEND_INREG V, ValueType:v4i1
The SIGN_EXTEND_INREG node can be folded into a build_vector since
the input vector is a BUILD_VECTOR of constants.
The optimized sequence is:
i32 C = Constant<-1>
v4i32 Result = BUILD_VECTOR C, C, C, C
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198084 91177308-0d34-0410-b5e6-96231b3b80d8
It's no longer necessary to lazily add members to the DICompositeType
member list. Instead any lazy members (special member functions and
member template instantiations) are added to the parent late based on
their context link, the same way that nested types have always been
handled (never being in the member list - just added to the parent DIE
lazily based on context).
Clang's been updated not to use this function anymore as it improves
type unit consistency by never emitting lazy members in type units.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198079 91177308-0d34-0410-b5e6-96231b3b80d8
much more clear to me. I meant to make this change before committing the
original patch, but forgot to merge it in. Sorry.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198069 91177308-0d34-0410-b5e6-96231b3b80d8
This is an iterator which you can build around a MemoryBuffer. It will
iterate through the non-empty, non-comment lines of the buffer as
a forward iterator. It should be small and reasonably fast (although it
could be made much faster if anyone cares, I don't really...).
This will be used to more simply support the text-based sample
profile file format, and is largely based on the original patch by
Diego. I've re-worked the style of it and separated it from the work of
producing a MemoryBuffer from a file which both simplifies the interface
and makes it easier to test.
The style of the API follows the C++ standard naming conventions to fit
in better with iterators in general, much like the Path and FileSystem
interfaces follow standard-based naming conventions.
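A minimal usage sketch (the constructor and accessor shapes are assumed from
the description above: a default-constructed line_iterator is assumed to act
as the end iterator, and comment skipping is driven by an optional
comment-marker constructor argument not shown here):

#include "llvm/Support/LineIterator.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/raw_ostream.h"

// Print every line the iterator visits; blank lines are skipped for us.
static void dumpSignificantLines(const llvm::MemoryBuffer &Buffer) {
  for (llvm::line_iterator I(Buffer), E; I != E; ++I)
    llvm::errs() << "line " << I.line_number() << ": " << *I << "\n";
}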
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198068 91177308-0d34-0410-b5e6-96231b3b80d8
The .even directive aligns content to an even-numbered address. This is an ARM
specific directive applicable to any section.
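For example (illustrative only):
.byte 0x1
.even @ pads, if needed, so the next value starts at an even address
.short 0x2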
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198031 91177308-0d34-0410-b5e6-96231b3b80d8
E.g. the codegen result is
fmls v1.2s, v0.2s, v2.s[3]
which is expected to be
fmls v0.2s, v1.2s, v2.s[3]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@198001 91177308-0d34-0410-b5e6-96231b3b80d8
...namely LOAD AND ADD, LOAD AND AND, LOAD AND OR and LOAD AND EXCLUSIVE OR.
LOAD AND ADD LOGICAL isn't really separately useful for LLVM.
I'll look at reusing the CC results in the new year.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197985 91177308-0d34-0410-b5e6-96231b3b80d8
DAG.getVectorShuffle() doesn't always return a vector_shuffle node.
If the mask is the identity sequence for one of its operands (for
example, operand_0 is v8i8 and the mask is 0, 1, 2, 3, 4, 5, 6, 7), it
will return that operand directly. So a check is added here.
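A minimal sketch of the kind of check involved (the surrounding function and
names are hypothetical, and the exact getVectorShuffle signature is assumed):

#include "llvm/CodeGen/SelectionDAG.h"

// Build a shuffle, but cope with getVectorShuffle folding an identity mask
// and handing back one of the operands directly.
static llvm::SDValue buildShuffle(llvm::SelectionDAG &DAG, const llvm::SDLoc &DL,
                                  llvm::EVT VT, llvm::SDValue V1,
                                  llvm::SDValue V2, llvm::ArrayRef<int> Mask) {
  llvm::SDValue Shuffle = DAG.getVectorShuffle(VT, DL, V1, V2, Mask);
  if (Shuffle.getOpcode() != llvm::ISD::VECTOR_SHUFFLE)
    return Shuffle; // Not a vector_shuffle node; don't inspect it as one.
  // Only here is it safe to treat Shuffle as a ShuffleVectorSDNode.
  return Shuffle;
}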
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197967 91177308-0d34-0410-b5e6-96231b3b80d8
This failure was caused by an improper condition when lowering
shuffle_vector to scalar_to_vector. After this patch, NEON_VDUP with
v1i64 will no longer be generated.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197966 91177308-0d34-0410-b5e6-96231b3b80d8
Check for single use of fmul node in fused multiply patterns
to allow generation of fused multiply add/sub instructions.
Otherwise the fmul operation ends up being repeated more than
once, which does not help performance on targets with
only one MAC unit, such as cortex-a53.
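A hypothetical C++-level illustration of the predicate (the commit's check
lives in the fused-multiply selection patterns; this only sketches the idea):
fuse only when the multiply has a single user, since otherwise the fmul still
has to be emitted for its other uses and the multiply runs more than once.

#include "llvm/CodeGen/SelectionDAGNodes.h"

// Sketch only: decide whether an FMUL feeding an fadd/fsub may be fused.
static bool mayFuseMultiply(llvm::SDValue Mul) {
  return Mul.getOpcode() == llvm::ISD::FMUL && Mul.hasOneUse();
}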
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197929 91177308-0d34-0410-b5e6-96231b3b80d8
The correct pattern matching should be:
- fnmadd is (-Ra) + (-Rn)*Rm which should be matched as:
fma (fneg node:$Rn), node:$Rm, (fneg node:$Ra) and as
(f32 (fsub (f32 (fneg FPR32:$Ra)), (f32 (fmul FPR32:$Rn, FPR32:$Rm))))
- fnmsub is (-Ra) + Rn*Rm which should be matched as
fma node:$Rn, node:$Rm, (fneg node:$Ra) and as
(f32 (fsub (f32 (fmul FPR32:$Rn, FPR32:$Rm)), FPR32:$Ra))
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197928 91177308-0d34-0410-b5e6-96231b3b80d8
Split sadd.with.overflow into add + sadd.with.overflow to allow
analysis and optimization. This should ideally be done after
InstCombine, which can perform code motion (eventually indvars should
run after all canonical instcombines). We want ISEL to recombine the
add and the check, at least on x86.
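An illustrative IR-level sketch of the split (names and the increment are
arbitrary): the arithmetic result is rewritten to use a plain add, while the
overflow check keeps using the intrinsic so ISEL can recombine them later:
%s   = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %iv, i32 1)
%val = extractvalue { i32, i1 } %s, 0
%ovf = extractvalue { i32, i1 } %s, 1
becomes
%val = add i32 %iv, 1
%s   = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %iv, i32 1)
%ovf = extractvalue { i32, i1 } %s, 1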
This is currently under an option for reducing live induction
variables: -liv-reduce. The next step is reducing liveness of IVs that
are live out of the overflow check paths. Once the related
optimizations are fully developed, reviewed and tested, I do expect
this to become default.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197926 91177308-0d34-0410-b5e6-96231b3b80d8
(optional) DWARF sections, so compiling with -g does not result in
different code being generated.
rdar://problem/15623193
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197922 91177308-0d34-0410-b5e6-96231b3b80d8
The bkpt mnemonic has an implicit immediate constant of 0 unless otherwise
specified. Add an instruction alias so that the breakpoint mnemonic without an
operand is treated as an immediate of 0. This improves compatibility with GNU AS.
Signed-off-by: Saleem Abdulrasool <compnerd@compnerd.org>
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197913 91177308-0d34-0410-b5e6-96231b3b80d8
If the Scalarizer scalarized a vector PHI but could not scalarize
all uses of it, it would insert a series of insertelements to reconstruct
the vector PHI value from the scalar ones. The problem was that it would
emit these insertelements immediately after the PHI, even if there were
other PHIs after it.
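A hypothetical sketch of the fix (variable names are invented;
getFirstInsertionPt is the usual BasicBlock helper): emit the reconstruction
after the whole PHI sequence rather than directly after the scalarized PHI.

#include "llvm/IR/IRBuilder.h"

// Rebuild the vector value from its scalar pieces at a legal insertion point,
// i.e. after every PHI node in the block.
static llvm::Value *rebuildVector(llvm::BasicBlock *BB, llvm::Type *VecTy,
                                  llvm::ArrayRef<llvm::Value *> Scalars) {
  llvm::IRBuilder<> Builder(BB, BB->getFirstInsertionPt());
  llvm::Value *Vec = llvm::UndefValue::get(VecTy);
  for (unsigned I = 0, E = Scalars.size(); I != E; ++I)
    Vec = Builder.CreateInsertElement(Vec, Scalars[I], Builder.getInt32(I));
  return Vec;
}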
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197909 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Before this change the instrumented code before Ret instructions looked like:
  <Unpoison Frame Redzones>
  if (Frame != OriginalFrame) // I.e. Frame is fake
    <Poison Complete Frame>
Now the instrumented code looks like:
  if (Frame != OriginalFrame) // I.e. Frame is fake
    <Poison Complete Frame>
  else
    <Unpoison Frame Redzones>
Reviewers: eugenis
Reviewed By: eugenis
CC: llvm-commits
Differential Revision: http://llvm-reviews.chandlerc.com/D2458
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@197907 91177308-0d34-0410-b5e6-96231b3b80d8