28 Commits

Author SHA1 Message Date
Farhana Aleen
9f7f417dbb [AMDGPU] Re-enabled 128-bit wide-vector generation for local addr space by default.
Summary: The bug reported at https://bugs.freedesktop.org/show_bug.cgi?id=105464 was
         found to be resolved by other fixes.

Author: FarhanaAleen

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@333380 91177308-0d34-0410-b5e6-96231b3b80d8
2018-05-28 18:15:11 +00:00
Benjamin Kramer
5b7ab74398 [LoadStoreVectorizer] Ignore interleaved invariant loads.
The memory location an invariant load is using can never be clobbered by
any store, so it's safe to move the load ahead of the store.
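
A minimal IR sketch of the pattern (illustrative only; the names and the placement of the !invariant.load tag are made up, not taken from the patch):

define void @interleaved_invariant(i32* %in, i32* %out) {
  %in1 = getelementptr i32, i32* %in, i64 1
  %v0 = load i32, i32* %in, align 8
  store i32 %v0, i32* %out, align 4
  %v1 = load i32, i32* %in1, align 4, !invariant.load !0
  store i32 %v1, i32* %out, align 4
  ; The !invariant.load load can never be clobbered by the intervening store,
  ; so it may safely be moved up and vectorized together with %v0.
  ret void
}
!0 = !{}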

Differential Revision: https://reviews.llvm.org/D46011

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@330725 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-24 15:28:47 +00:00
Marek Olsak
0a5f9fa9c2 AMDGPU: enable 128-bit for local addr space under an option
Author: Samuel Pitoiset

ds_read_b128 and ds_write_b128 have been recently enabled
under the amdgpu-ds128 option because the performance benefit
is unclear.

However, using 128-bit loads/stores for the local address space
appears to introduce regressions in tessellation shaders. It is not
clear what is broken, but as ds_read_b128/ds_write_b128 are not
enabled by default, just introduce a global option and enable
128-bit only if requested (until it is fixed/used correctly).

v2: - fix regressions in merge-stores.ll and multiple_tails.ll

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105464

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@329764 91177308-0d34-0410-b5e6-96231b3b80d8
2018-04-10 22:48:23 +00:00
Farhana Aleen
2006e6286b [AMDGPU] Supported ds_read_b128 generation; Widened vector length for local address-space.
Summary: Starting from the 2nd generation of GCN, the ISA supports ds_read_b128 on top of ds_read_b64.
         This patch adds the ds_read_b128 instruction pattern and the generation of this instruction.
         In the vectorizer, this patch also widens the vector length so that the vectorizer generates
         128-bit loads for the local address space, which get translated to ds_read_b128.
         Since the performance benefit is not clear, the compiler generates ds_read_b128 only under -amdgpu-ds128.
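
Roughly, and purely as an illustration (not taken from the patch), the kind of local-address-space access this enables is:

@lds = internal addrspace(3) global [8 x i32] undef, align 16

define amdgpu_kernel void @read_b128(<4 x i32> addrspace(1)* %out) {
  %p = bitcast [8 x i32] addrspace(3)* @lds to <4 x i32> addrspace(3)*
  ; A single 16-byte LDS load; with -amdgpu-ds128 this can select ds_read_b128.
  %v = load <4 x i32>, <4 x i32> addrspace(3)* %p, align 16
  store <4 x i32> %v, <4 x i32> addrspace(1)* %out, align 16
  ret void
}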

Author: FarhanaAleen

Reviewed By: rampitec, arsenm

Subscribers: llvm-commits, AMDGPU

Differential Revision: https://reviews.llvm.org/D44210

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@327153 91177308-0d34-0410-b5e6-96231b3b80d8
2018-03-09 17:41:39 +00:00
Yaxun Liu
a2386c3b26 [AMDGPU] Switch to the new addr space mapping by default
This requires a corresponding clang change.
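
As a rough illustration (the numbering is recalled from the AMDGPU target documentation, not taken from this patch), the new mapping uses addrspace(0) for flat, 1 for global, 3 for local and 5 for private, so IR carries the numbers directly:

define amdgpu_kernel void @spaces(i32 addrspace(1)* %global, i32 addrspace(3)* %local) {
  %g = load i32, i32 addrspace(1)* %global, align 4   ; addrspace(1): global
  %l = load i32, i32 addrspace(3)* %local, align 4    ; addrspace(3): local (LDS)
  %s = add i32 %g, %l
  store i32 %s, i32 addrspace(1)* %global, align 4
  ret void
}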

Differential Revision: https://reviews.llvm.org/D40955


git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@324101 91177308-0d34-0410-b5e6-96231b3b80d8
2018-02-02 16:07:16 +00:00
Bjorn Pettersson
4ff96e5fbb [LSV] Avoid adding vectors of pointers as candidates
Summary:
We no longer add vectors of pointers as candidates for
load/store vectorization. It does not seem to work anyway,
but without this patch we can end up with asserts when trying
to create casts between an integer type and a vector-of-pointers
type.

The test case I've added used to assert like this when trying to
cast between i64 and <2 x i16*>:
opt: ../lib/IR/Instructions.cpp:2565: Assertion `castIsValid(op, S, Ty) && "Invalid cast!"' failed.
#0 PrintStackTraceSignalHandler(void*)
#1 SignalHandler(int)
#2 __restore_rt
#3 __GI_raise
#4 __GI_abort
#5 __GI___assert_fail
#6 llvm::CastInst::Create(llvm::Instruction::CastOps, llvm::Value*, llvm::Type*, llvm::Twine const&, llvm::Instruction*)
#7 llvm::IRBuilder<llvm::ConstantFolder, llvm::IRBuilderDefaultInserter>::CreateBitOrPointerCast(llvm::Value*, llvm::Type*, llvm::Twine const&)
#8 Vectorizer::vectorizeStoreChain(llvm::ArrayRef<llvm::Instruction*>, llvm::SmallPtrSet<llvm::Instruction*, 16u>*)
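
A reduced, hypothetical example of the kind of IR involved (the actual added test case differs):

define void @store_vectors_of_pointers(<2 x i16*>* %out, <2 x i16*> %a, <2 x i16*> %b) {
  %out1 = getelementptr <2 x i16*>, <2 x i16*>* %out, i64 1
  store <2 x i16*> %a, <2 x i16*>* %out, align 16
  store <2 x i16*> %b, <2 x i16*>* %out1, align 16
  ; With this patch, stores of pointer vectors are simply skipped as candidates.
  ret void
}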

Reviewers: arsenm

Reviewed By: arsenm

Subscribers: nhaehnle, llvm-commits

Differential Revision: https://reviews.llvm.org/D39296

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@316665 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-26 13:59:15 +00:00
Stanislav Mekhanoshin
dff33b262e Skip bitcasts while looking for GEP in LoadStoreVectorizer
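
Illustrative sketch (made-up example, not from this commit) of a GEP hidden behind bitcasts:

define void @gep_behind_bitcast(float* %p) {
  %gep = getelementptr float, float* %p, i64 1
  %c0 = bitcast float* %p to i32*
  %c1 = bitcast float* %gep to i32*
  %v0 = load i32, i32* %c0, align 4
  %v1 = load i32, i32* %c1, align 4
  ; The two loads are consecutive, but their GEPs sit behind bitcasts.
  ret void
}
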
Differential Revision: https://reviews.llvm.org/D32101

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@301343 91177308-0d34-0410-b5e6-96231b3b80d8
2017-04-25 18:00:08 +00:00
Matt Arsenault
d706d030af AMDGPU: Mark all unspecified CC functions in tests as amdgpu_kernel
Currently the default C calling convention functions are treated
the same as compute kernels. Make this explicit so the default
calling convention can be changed to a non-kernel.

Converted with perl -pi -e 's/define void/define amdgpu_kernel void/'
on the relevant test directories (and undoing in one place that actually
wanted a non-kernel).
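
A converted test entry point therefore looks like, for example:

define amdgpu_kernel void @simple_test() {
  ret void
}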

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@298444 91177308-0d34-0410-b5e6-96231b3b80d8
2017-03-21 21:39:51 +00:00
Matt Arsenault
210095c5c9 LoadStoreVectorizer: Split even sized illegal chains properly
Implement isLegalToVectorizeLoadChain for AMDGPU to avoid
producing private address space accesses that will need to be
split up later. This was doing the wrong thing in the case
where the queried chain had an even number of elements.

A possible <4 x i32> store was being split into
store <2 x i32>
store i32
store i32

rather than
store <2 x i32>
store <2 x i32>

when legal.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@295933 91177308-0d34-0410-b5e6-96231b3b80d8
2017-02-23 03:58:53 +00:00
Alina Sbirlea
106bcf1838 [LoadStoreVectorizer] Enable vectorization of stores in the presence of an aliasing load
Summary:
The "getVectorizablePrefix" method would give up if it found an aliasing load for a store chain.
In practice, the aliasing load can be treated as a memory barrier and all stores that precede it
are a valid vectorizable prefix.
Issue found by volkan in D26962. Testcase is a pruned version of the one in the original patch.
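
A minimal sketch (illustrative only, not the pruned test case):

define void @prefix_before_aliasing_load(i32* %p, i32* %q) {
  %p1 = getelementptr i32, i32* %p, i64 1
  %p2 = getelementptr i32, i32* %p, i64 2
  store i32 0, i32* %p, align 8
  store i32 1, i32* %p1, align 4
  %x = load i32, i32* %q, align 4      ; may alias the stores above
  store i32 %x, i32* %p2, align 4
  ; The two stores before the load form a valid vectorizable prefix;
  ; the load acts as a barrier that ends the chain.
  ret void
}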

Reviewers: jlebar, arsenm, tstellarAMD

Subscribers: mzolotukhin, wdng, nhaehnle, anna, volkan, llvm-commits

Differential Revision: https://reviews.llvm.org/D27008

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@287781 91177308-0d34-0410-b5e6-96231b3b80d8
2016-11-23 17:43:15 +00:00
Tom Stellard
a9c6165732 AMDGPU/SI: Don't allow unaligned scratch access
Summary: The hardware doesn't support this.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, llvm-commits, tony-tye

Differential Revision: https://reviews.llvm.org/D25523

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@284257 91177308-0d34-0410-b5e6-96231b3b80d8
2016-10-14 18:10:39 +00:00
Matt Arsenault
7474f828e2 LSV: Fix incorrectly increasing alignment
If the unaligned access has a dynamic offset, the offset may be odd,
which would make the adjusted alignment incorrect to use.
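
A sketch of the problem, with made-up names:

define void @dynamic_offset(i16* %base, i64 %i) {
  %p0 = getelementptr i16, i16* %base, i64 %i
  %p1 = getelementptr i16, i16* %p0, i64 1
  store i16 0, i16* %p0, align 1
  store i16 0, i16* %p1, align 1
  ; %i is unknown and may be odd, so the merged access cannot be assumed to
  ; have any alignment larger than the declared align 1.
  ret void
}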

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281110 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-09 22:20:14 +00:00
Justin Lebar
600740bf18 [LSV] Use the original loads' names for the extractelement instructions.
Summary:
LSV replaces multiple adjacent loads with one vectorized load and a
bunch of extractelement instructions.  This patch makes the
extractelement instructions' names match those of the original loads,
for (hopefully) improved readability.
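
Roughly, and only as an illustration (names invented):

define <2 x i32> @name_preserving(i32* %p) {
  %p1 = getelementptr i32, i32* %p, i64 1
  %x = load i32, i32* %p, align 8
  %y = load i32, i32* %p1, align 4
  ; After LSV this becomes roughly one <2 x i32> load followed by
  ;   %x0 = extractelement <2 x i32> %wide, i32 0
  ;   %y1 = extractelement <2 x i32> %wide, i32 1
  ; so the extracts carry the original loads' names.
  %r0 = insertelement <2 x i32> undef, i32 %x, i32 0
  %r1 = insertelement <2 x i32> %r0, i32 %y, i32 1
  ret <2 x i32> %r1
}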

Reviewers: asbirlea, tstellarAMD

Subscribers: arsenm, mzolotukhin

Differential Revision: https://reviews.llvm.org/D23748

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280818 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-07 15:49:48 +00:00
Alina Sbirlea
74a597b31a [LoadStoreVectorizer] Change VectorSet to Vector to match head and tail positions. Resolves PR29148.
Summary:
LSV was using two vector sets (heads and tails) to track pairs of adjacent positions to vectorize.
A recent optimization tries to obtain the longest chain to vectorize and assumes the positions
in heads (H) and tails (T) match, which is not the case if there are multiple tails for the same head.

e.g.:
i1: store a[0]
i2: store a[1]
i3: store a[1]
Leads to:
H: i1
T: i2 i3
Instead of:
H: i1 i1
T: i2 i3
So the positions for instructions that follow i3 will have different indexes in H/T.
This patch resolves PR29148.

This issue also surfaced the fact that if the chain is too long, and TLI
returns a "not-fast" answer, the whole chain will be abandoned for
vectorization, even though a smaller one would be beneficial.
Added a testcase and FIXME for this.

Reviewers: tstellarAMD, arsenm, jlebar

Subscribers: mzolotukhin, wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D24057

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@280179 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-30 23:53:59 +00:00
Justin Lebar
dbf6ad39ae [LSV] Don't assume that loads/stores appear in address order in the BB.
Summary:
getVectorizablePrefix previously didn't work properly in the face of
aliasing loads/stores.  It unwittingly assumed that the loads/stores
appeared in the BB in address order.  If they didn't, it would do the
wrong thing.
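
For instance (made-up IR), nothing forces the accesses to appear in the block in address order:

define void @reverse_program_order(i32* %p) {
  %p1 = getelementptr i32, i32* %p, i64 1
  store i32 1, i32* %p1, align 4   ; the higher address comes first in the block
  store i32 0, i32* %p, align 8
  ret void
}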

Reviewers: asbirlea, tstellarAMD

Subscribers: arsenm, llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D22535

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276072 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-20 00:55:12 +00:00
Justin Lebar
85ee7e9487 [LSV] Insert stores at the right point.
Summary:
Previously, the insertion point for stores was the last instruction in
Chain *before calling getVectorizablePrefixEndIdx*.  Thus if
getVectorizablePrefixEndIdx didn't return Chain.size(), we still would
insert at the last instruction in Chain.

This patch changes our internal API a bit in an attempt to make it less
prone to this sort of error.  As a result, we end up recalculating the
Chain's boundary instructions, but I think worrying about the speed hit
of this is a premature optimization right now.

Reviewers: asbirlea, tstellarAMD

Subscribers: mzolotukhin, arsenm, llvm-commits

Differential Revision: https://reviews.llvm.org/D22534

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@276056 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-19 23:19:20 +00:00
Alina Sbirlea
334778a70b Correct ordering of loads/stores.
Summary:
This patch aims to correct the ordering of loads/stores. It changes the
insert point for loads to the position of the first load and
updates the ordering method for loads to insert before, rather than after.

Before this patch the following sequence:
"load a[1], store a[1], store a[0], load a[2]"
would incorrectly vectorize to "store a[0,1], load a[1,2]".
The correctness check was assuming the insertion point for loads is at
the position of the first load, when in practice it was at the last
load. An alternative fix would have been to invert the correctness check.
The current fix changes insert position but also requires reordering of
instructions before the vectorized load.
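
In IR form, that shorthand sequence corresponds roughly to (illustrative names only):

define void @ordering(i32* %a, i32* %out) {
  %a1 = getelementptr i32, i32* %a, i64 1
  %a2 = getelementptr i32, i32* %a, i64 2
  %v1 = load i32, i32* %a1, align 4      ; load a[1]
  store i32 %v1, i32* %a1, align 4       ; store a[1]
  store i32 0, i32* %a, align 8          ; store a[0]
  %v2 = load i32, i32* %a2, align 4      ; load a[2]
  store i32 %v2, i32* %out, align 4
  ret void
}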

Updated testcases to reflect the changes.

Reviewers: tstellarAMD, llvm-commits, jlebar, arsenm

Subscribers: mzolotukhin

Differential Revision: http://reviews.llvm.org/D22071

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275117 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-11 22:34:29 +00:00
Alina Sbirlea
87c883dc04 Add TLI.allowsMisalignedMemoryAccesses to LoadStoreVectorizer
Summary: Extend TTI to access TLI.allowsMisalignedMemoryAccesses() and check this condition when vectorizing load and store chains.
Add the additional parameters AddressSpace, Alignment, and Fast.

Reviewers: llvm-commits, jlebar

Subscribers: arsenm, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21935

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@275100 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-11 20:46:17 +00:00
Alina Sbirlea
1e53a5fcec Address two correctness issues in LoadStoreVectorizer
Summary:
GetBoundryInstruction now returns, for the last instruction, the instruction which follows it (or end()); otherwise the last instruction in the boundary set is not tested by isVectorizable().
Partially solves reordering of instructions; a more extensive solution will follow.

Reviewers: tstellarAMD, llvm-commits, jlebar

Subscribers: escha, arsenm, mzolotukhin

Differential Revision: http://reviews.llvm.org/D21934

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274389 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 21:44:12 +00:00
Matt Arsenault
2623a06878 LoadStoreVectorizer: Don't increase alignment with no align set
If no alignment was set on the loads/stores, the pass would vectorize
to the new type even though this increases the default alignment.
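
A sketch of the situation (made-up example):

define void @no_explicit_align(i32* %p) {
  %p1 = getelementptr i32, i32* %p, i64 1
  store i32 0, i32* %p
  store i32 0, i32* %p1
  ; Neither store carries an explicit alignment, so only the natural ABI
  ; alignment of i32 (4 bytes) is known; merging them into a <2 x i32> store
  ; without an explicit align would implicitly assume 8-byte alignment.
  ret void
}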

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274323 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 02:09:38 +00:00
Matt Arsenault
d8f310cdf9 LoadStoreVectorizer: Check TTI for vec reg bit width
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274322 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 02:07:22 +00:00
Matt Arsenault
0b5646a61a LoadStoreVectorizer: Fix assert when merging pointer ops
This needs to use inttoptr/ptrtoint when combining an int and a pointer
load. If a pointer is used, always do an integer load.
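
A rough illustration, assuming 64-bit pointers (example invented for this note):

define i8* @mixed_int_and_pointer(i64* %base) {
  %slot1 = getelementptr i64, i64* %base, i64 1
  %pslot = bitcast i64* %slot1 to i8**
  %v0 = load i64, i64* %base, align 8    ; integer element
  %v1 = load i8*, i8** %pslot, align 8   ; pointer element of the same width
  ; Merging these into one <2 x i64> load requires an inttoptr for %v1
  ; (and a ptrtoint in the store direction).
  ret i8* %v1
}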

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274321 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 01:55:52 +00:00
Matt Arsenault
7e961b6fb8 LoadStoreVectorizer: Use AA metadata
This was not passing the full instruction with metadata
to the alias query.
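
For instance (made-up example), scoped-noalias metadata like the following only helps if the full instruction reaches the alias query:

define void @scoped_noalias(i32* %p, i32* %q) {
  %p1 = getelementptr i32, i32* %p, i64 1
  store i32 0, i32* %p, align 8, !alias.scope !2
  %x = load i32, i32* %q, align 4, !noalias !2   ; declared not to alias the stores
  store i32 %x, i32* %p1, align 4, !alias.scope !2
  ret void
}
!0 = distinct !{!0}      ; alias domain
!1 = distinct !{!1, !0}  ; alias scope
!2 = !{!1}               ; scope list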

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274318 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 01:47:46 +00:00
Matt Arsenault
378e1693a1 LoadStoreVectorizer: if one element of a vector is integer, default to
integer.

Fixes issues on some architectures where we use arithmetic ops to build
vectors, which can cause bad things to happen for loads/stores of mixed
types.
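
An invented example of such a mixed-type pair:

define void @mixed_float_int(float* %p) {
  %pi = bitcast float* %p to i32*
  %pf1 = getelementptr float, float* %p, i64 1
  store i32 0, i32* %pi, align 8           ; integer store
  store float 1.0, float* %pf1, align 4    ; float store at the next slot
  ; With this change the pair is merged using an integer vector type
  ; (<2 x i32>), bitcasting the float element, rather than a float vector.
  ret void
}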

Patch by Fiona Glaser

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274307 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 00:37:01 +00:00
Matt Arsenault
9de35f8460 LoadStoreVectorizer: Fix crashes on sub-byte types
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274306 91177308-0d34-0410-b5e6-96231b3b80d8
2016-07-01 00:36:54 +00:00
Matt Arsenault
531c987fec LoadStoreVectorizer: Check skipFunction first.
Also add test I forgot to add to r274296.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274299 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-30 23:50:18 +00:00
Matt Arsenault
34f70b6118 LoadStoreVectorizer: Skip optnone functions
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274296 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-30 23:30:29 +00:00
Matt Arsenault
8b86030deb Add LoadStoreVectorizer pass
This was contributed by Apple, and I've been working on
minimal cleanups and generalizing it.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@274293 91177308-0d34-0410-b5e6-96231b3b80d8
2016-06-30 23:11:38 +00:00