This ensures that all of the various pieces are working. The next patch
will wire up command-line-driven alias analysis chain building and allow
BasicAA to work with the AAManager.
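For reference, a minimal sketch of what that wiring eventually looks
like against the current new-pass-manager API (names as in today's tree,
not in this patch; real pipelines also register the analyses BasicAA
itself depends on):

    #include "llvm/Analysis/AliasAnalysis.h"
    #include "llvm/Analysis/BasicAliasAnalysis.h"
    #include "llvm/IR/PassManager.h"
    using namespace llvm;

    FunctionAnalysisManager makeFAMWithBasicAA() {
      FunctionAnalysisManager FAM;
      FAM.registerPass([] { return BasicAA(); });
      FAM.registerPass([] {
        AAManager AA;
        // Entries are consulted in registration order when building the
        // aggregated AAResults for a function.
        AA.registerFunctionAnalysis<BasicAA>();
        return AA;
      });
      return FAM;
    }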
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260838 91177308-0d34-0410-b5e6-96231b3b80d8
into the new pass manager and fix the latent bugs there.
This lets everything live together nicely, but it isn't really useful
yet. I never finished wiring the AA layer up for the new pass manager,
and so subsequent patches will change this to do that wiring and get AA
stuff more fully integrated into the new pass manager. Turns out this is
necessary even to get functionattrs ported over. =]
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260836 91177308-0d34-0410-b5e6-96231b3b80d8
r180893 added an indirect include of llvm/Config/Targets.def to
llvm/Support/CodeGen.h, which in turn is included by things like
llvm/IR/Module.h. After a full build of LLVM and Clang, ninja had to
rebuild 1274 files after reconfiguring.
This commit strips CodeGen.h back down to just a pile of enums and moves
the expensive includes over to CodeGenCWrappers.h (which is only
included in two places). This gets ninja down to 88 files if you
reconfigure with, e.g., -DLLVM_TARGETS_TO_BUILD=X86.
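The pattern, as an illustrative sketch (simplified names, not the actual
headers):

    // Cheap, widely included header (the CodeGen.h role): only enums live
    // here, so a reconfigure that regenerates Targets.def no longer
    // invalidates everything that includes llvm/IR/Module.h.
    namespace sketch {
    enum class OptLevel { None, Less, Default, Aggressive };
    } // namespace sketch

    // Expensive wrapper header (the CodeGenCWrappers.h role): the only
    // place that pays for the generated include, pulled in by the couple
    // of C-API files that actually need it.
    // #include "llvm/Config/Targets.def"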
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260835 91177308-0d34-0410-b5e6-96231b3b80d8
This patch attempts to represent a shuffle as a repeating shuffle (recognisable by is128BitLaneRepeatedShuffleMask) with the source input(s) in their original lanes, followed by a single permutation of the 128-bit lanes to their final destinations.
On AVX2 we can additionally attempt to match using 64-bit sub-lane permutation. AVX2 can also now match a similar 'broadcasted' repeating shuffle.
This patch has several benefits:
* Avoids prematurely matching with lowerVectorShuffleByMerging128BitLanes which can require both inputs to have their input lanes permuted before shuffling.
* Can replace PERMPS/PERMD instructions; although these are useful for cross-lane unary shuffling, they require their shuffle mask to be pre-loaded (and increase register pressure).
* Matching the repeating shuffle makes use of a lot of existing shuffle lowering.
There is an outstanding minor AVX1 regression (combine_unneeded_subvector1 in vector-shuffle-combining.ll) where a previously 128-bit shuffle + subvector splat is now converted to a subvector splat + (2 instruction) 256-bit shuffle; I intend to fix this in a follow-up patch.
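As a rough illustration of the lane-repetition test, here is a
simplified standalone sketch; the real is128BitLaneRepeatedShuffleMask
additionally tracks which input each element comes from:

    #include <cstdio>
    #include <vector>

    // Mask entries: -1 is undef, otherwise an element index. NumLaneElts
    // is the number of elements per 128-bit lane.
    static bool isLaneRepeatedMask(const std::vector<int> &Mask,
                                   int NumLaneElts) {
      std::vector<int> Repeated(NumLaneElts, -1);
      for (int i = 0, e = (int)Mask.size(); i != e; ++i) {
        if (Mask[i] < 0)
          continue;                         // undef matches anything
        int InLane = Mask[i] % NumLaneElts; // position within its lane
        int &R = Repeated[i % NumLaneElts];
        if (R < 0)
          R = InLane;                       // first definition of this slot
        else if (R != InLane)
          return false;                     // lanes disagree: not repeating
      }
      return true;
    }

    int main() {
      // v8f32 mask swapping adjacent pairs: the same <1,0,3,2> per lane.
      std::vector<int> M = {1, 0, 3, 2, 5, 4, 7, 6};
      std::printf("%s\n", isLaneRepeatedMask(M, 4) ? "repeats" : "no");
    }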
Differential Revision: http://reviews.llvm.org/D16537
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260834 91177308-0d34-0410-b5e6-96231b3b80d8
GCC 4.7.2-4 does not provide "emplace" in its std::map implementation.
This should fix the build failure on polly-amd64-linux.
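A hedged illustration of the portable workaround (std::map::emplace only
arrived in libstdc++ with GCC 4.8):

    #include <map>
    #include <string>
    #include <utility>

    int main() {
      std::map<int, std::string> M;
      // M.emplace(1, "one");              // not available on GCC 4.7
      M.insert(std::make_pair(1, std::string("one"))); // works everywhere
    }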
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260816 91177308-0d34-0410-b5e6-96231b3b80d8
than the SCC object, and have it scan the instruction stream directly
rather than relying on call records.
This makes the behavior of this routine consistent between libc routines
and LLVM intrinsics for libc routines. We can start teaching it that
these are norecurse, but it should treat the intrinsic and the libc
routine the same rather than differently. I chatted with James Molloy,
and the inconsistency doesn't seem intentional; it is likely due to
intrinsic calls not being modelled in the call graph analyses.
This also fixes a bug where we would deduce norecurse on optnone
functions, when generally we try to handle optnone functions as if they
were replaceable and thus unanalyzable.
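A simplified sketch of the shape of the deduction after this change,
written against current LLVM API names rather than the patch itself:

    #include "llvm/IR/Function.h"
    #include "llvm/IR/InstIterator.h"
    #include "llvm/IR/InstrTypes.h"
    using namespace llvm;

    static bool mayDeduceNoRecurse(const Function &F) {
      // Treat optnone bodies as if they were replaceable: deduce nothing.
      if (F.hasFnAttribute(Attribute::OptimizeNone))
        return false;
      // Scan the instruction stream directly instead of using call
      // records, so intrinsic calls (absent from the call graph) count.
      for (const Instruction &I : instructions(F))
        if (const auto *CB = dyn_cast<CallBase>(&I)) {
          const Function *Callee = CB->getCalledFunction();
          if (!Callee || Callee == &F || !Callee->doesNotRecurse())
            return false; // indirect, self-recursive, or maybe-recursive
        }
      return true;
    }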
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260813 91177308-0d34-0410-b5e6-96231b3b80d8
This requirement was a huge hack to keep LiveVariables alive because it
was optionally used by TwoAddressInstructionPass and PHIElimination.
However, we have AnalysisUsage::addUsedIfAvailable(), which we can use
in those passes.
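For illustration, the pattern in a legacy-pass-manager pass (API as of
this era; ExamplePass is hypothetical):

    #include "llvm/CodeGen/LiveVariables.h"
    #include "llvm/CodeGen/MachineFunctionPass.h"
    using namespace llvm;

    namespace {
    struct ExamplePass : MachineFunctionPass {
      static char ID;
      ExamplePass() : MachineFunctionPass(ID) {}

      void getAnalysisUsage(AnalysisUsage &AU) const override {
        // Use LiveVariables when a previous pass computed it, but do not
        // force it to be (re)run or kept alive.
        AU.addUsedIfAvailable<LiveVariables>();
        MachineFunctionPass::getAnalysisUsage(AU);
      }

      bool runOnMachineFunction(MachineFunction &MF) override {
        // Null when LiveVariables was not available.
        LiveVariables *LV = getAnalysisIfAvailable<LiveVariables>();
        (void)LV;
        return false;
      }
    };
    } // namespace
    char ExamplePass::ID = 0;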
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260806 91177308-0d34-0410-b5e6-96231b3b80d8
Tests for the new scalarize all private access options will be
included with a future commit.
The only functional change is to make the split/scalarize behavior
for private access of > 4 element vectors to be consistent
with the flat/global handling. This makes the spilling worse
in the two changed tests.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260804 91177308-0d34-0410-b5e6-96231b3b80d8
This intrinsic will be used to expose dpp functionality to higher-level
languages. It will map to the dpp version of v_mov_b32.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260792 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
Export the CloneDebugInfoMetadata utility, which clones all debug info
associated with a function into the first module. Also use this function
in CloneModule on each function we clone (the CloneFunction entrypoint
already does this).
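A hedged sketch of the affected entry point (CloneModule's exact
signature has shifted over time):

    #include <memory>
    #include "llvm/IR/Module.h"
    #include "llvm/Transforms/Utils/Cloning.h"
    using namespace llvm;

    std::unique_ptr<Module> cloneWithDebugInfo(const Module &M) {
      // With this change, each cloned function's debug info metadata is
      // cloned as well, instead of being silently dropped.
      return CloneModule(M);
    }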
Without this, cloning a module will lead to DI quality regressions,
especially since r252219 reversed the Function <-> DISubprogram edge
(previously we could get lucky and have this edge preserved whenever the
DISubprogram itself was preserved, e.g. due to location metadata).
This was verified to fix missing debug information in julia, and a
unit test verifying the new behavior is included.
Patch by Yichao Yu! Thanks!
Reviewers: loladiro, pcc
Differential Revision: http://reviews.llvm.org/D17165
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260791 91177308-0d34-0410-b5e6-96231b3b80d8
As support expands to more runtimes, we'll need to
distinguish between more than just HSA and unknown.
This also lets us stop using unknown everywhere.
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260790 91177308-0d34-0410-b5e6-96231b3b80d8
These provide direct access to the hardware instruction, without the
scaling of the input to the unit range that llvm.sin/llvm.cos lowering
requires.
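For context, a hedged sketch of that scaling step (assuming the hardware
op takes its input as a fraction of a full turn rather than radians);
the new intrinsics let code skip it:

    // The step the llvm.sin/llvm.cos lowering inserts and the new
    // intrinsics omit: convert radians to full turns.
    double toUnitInput(double Rad) {
      const double TwoPi = 6.283185307179586;
      return Rad / TwoPi;
    }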
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260782 91177308-0d34-0410-b5e6-96231b3b80d8
Summary:
This patch skips DAG combine of fp_round (fp_round x) if it results in
an fp_round from f80 to f16.
fp_round from f80 to f16 always generates an expensive (and, as yet,
unimplemented) libcall to __truncxfhf2. This prevents selection of
native f16 conversion instructions from f32 or f64. Moreover, the first
(value-preserving) fp_round from f80 to either f32 or f64 may become a
NOP on platforms like x86.
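The guard, as a simplified standalone sketch (the enum and helper are
illustrative, not the DAGCombiner code):

    // Decide whether fp_round (fp_round x) may be folded into one round.
    enum FPType { F16, F32, F64, F80 };

    static bool shouldMergeFPRounds(FPType Src, FPType Dst) {
      // Folding an f80 -> f32/f64 -> f16 chain into a single f80 -> f16
      // round would force the __truncxfhf2 libcall; keeping the two
      // rounds lets the second use native f32/f64 -> f16 instructions.
      return !(Src == F80 && Dst == F16);
    }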
Reviewers: ab
Subscribers: srhines, llvm-commits
Differential Revision: http://reviews.llvm.org/D17221
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260769 91177308-0d34-0410-b5e6-96231b3b80d8
- Remove a comment that was clearly copy pasted from Android.cmake and
isn't relevant.
- Remove the toolchain's sensitivity to the environment. It's less
  error-prone to just allow users to set CMAKE_OSX_SYSROOT if they
  want to use a custom SDK.
- Stop explicitly setting -mios-version-min to the default value. It
just adds needless complexity.
This makes building the native tablegen work for me even when SDKROOT
is set in the environment (or passed in as -DCMAKE_OSX_SYSROOT).
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260763 91177308-0d34-0410-b5e6-96231b3b80d8
Replace spills to memory with spills to registers, if possible. This
applies mostly to predicate registers (both scalar and vector), since
they are very limited in number. A spill of a predicate register may
happen even if there is a general-purpose register available. In cases
like this the stack spill/reload may be eliminated completely.
This optimization considers all stack objects, regardless of where
they came from, and tries to match the live range of the stack slot
with a dead range of a register from an appropriate register class.
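A high-level sketch of the matching idea with toy types (not the actual
pass):

    #include <vector>

    struct Range { int Begin, End; }; // half-open instruction index range

    static bool overlaps(Range A, Range B) {
      return A.Begin < B.End && B.Begin < A.End;
    }

    // Return the first candidate register whose live ranges are all
    // disjoint from the stack slot's live range, or -1 if the slot must
    // stay in memory.
    static int pickSpillRegister(
        Range SlotLive, const std::vector<std::vector<Range>> &RegLive) {
      for (int R = 0, E = (int)RegLive.size(); R != E; ++R) {
        bool Dead = true;
        for (Range LR : RegLive[R])
          if (overlaps(SlotLive, LR)) { Dead = false; break; }
        if (Dead)
          return R; // spill/reload become cheap register copies
      }
      return -1;
    }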
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@260758 91177308-0d34-0410-b5e6-96231b3b80d8