Commit Graph

6 Commits

Francis Visoiu Mistrih
ca0df55065 [CodeGen] Unify MBB reference format in both MIR and debug output
As part of the unification of the debug format and the MIR format, print
MBB references as '%bb.5'.

The MIR printer prints the IR name of an MBB only for block definitions.

* find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" \) -type f -print0 | xargs -0 sed -i '' -E 's/BB#" << ([a-zA-Z0-9_]+)->getNumber\(\)/" << printMBBReference(*\1)/g'
* find . \( -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" \) -type f -print0 | xargs -0 sed -i '' -E 's/BB#" << ([a-zA-Z0-9_]+)\.getNumber\(\)/" << printMBBReference(\1)/g'
* find . \( -name "*.txt" -o -name "*.s" -o -name "*.mir" -o -name "*.cpp" -o -name "*.h" -o -name "*.ll" \) -type f -print0 | xargs -0 sed -i '' -E 's/BB#([0-9]+)/%bb.\1/g'
* grep -nr 'BB#' and fix
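
In practice the replacement boils down to calling the printMBBReference helper instead of hand-formatting the block number. A minimal C++ sketch, assuming LLVM's post-commit CodeGen API (dumpSuccessors is a hypothetical helper, not code from this commit):

  #include "llvm/CodeGen/MachineBasicBlock.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void dumpSuccessors(const MachineBasicBlock &MBB) {
    // Before: OS << "BB#" << Succ->getNumber();   // printed "BB#5"
    // After: printMBBReference emits "%bb.5", matching MIR syntax.
    for (const MachineBasicBlock *Succ : MBB.successors())
      errs() << printMBBReference(*Succ) << ' ';
    errs() << '\n';
  }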

Differential Revision: https://reviews.llvm.org/D40422

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@319665 91177308-0d34-0410-b5e6-96231b3b80d8
2017-12-04 17:18:51 +00:00
Craig Topper
406408a5da [X86] Redefine MOVSS/MOVSD instructions to take VR128 regclass as input instead of FR32/FR64
This patch redefines the MOVSS/MOVSD instructions to take VR128 as their second input. This allows the MOVSS/SD->BLEND commute to work without requiring a COPY to be inserted.

This should fix PR33079

Overall this looks to be an improvement in the generated code. I haven't checked the EXPENSIVE_CHECKS build but I'll do that and update with results.
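
The commute works because MOVSS is semantically a one-lane blend; once its second input is a full VR128, no COPY is needed to widen an FR32/FR64 value first. An illustrative, self-contained intrinsics example of the equivalence (hypothetical function names, not code from this patch):

  #include <smmintrin.h>  // SSE4.1, for _mm_blend_ps

  // Both return { b[0], a[1], a[2], a[3] }: MOVSS is the imm = 0x1 blend.
  __m128 via_movss(__m128 a, __m128 b) { return _mm_move_ss(a, b); }
  __m128 via_blend(__m128 a, __m128 b) { return _mm_blend_ps(a, b, 0x1); }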

Differential Revision: https://reviews.llvm.org/D38449

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@314914 91177308-0d34-0410-b5e6-96231b3b80d8
2017-10-04 17:20:12 +00:00
Simon Pilgrim
723ef71dbf [X86][SSE] Dropped -mcpu from vector shift tests
Use triple and attributes only, for consistency.

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@306662 91177308-0d34-0410-b5e6-96231b3b80d8
2017-06-29 11:09:53 +00:00
Simon Pilgrim
9320a1273e [X86][SSE] Don't blend vector shifts with MOVSS/MOVSD directly, lower from generic shuffle
Shuffle lowering will correctly lower to MOVSS/MOVSD/PBLEND, improving commutation opportunities
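
As an illustration, a one-lane shuffle such as the following now reaches generic shuffle lowering and comes out as MOVSS or a blend (a hypothetical example using clang's __builtin_shufflevector extension, not a test from this commit):

  #include <xmmintrin.h>

  // Indices 0-3 pick lanes of a, 4-7 pick lanes of b, so <4, 1, 2, 3>
  // selects lane 0 from b and lanes 1-3 from a: exactly the MOVSS pattern.
  __m128 select_low(__m128 a, __m128 b) {
    return __builtin_shufflevector(a, b, 4, 1, 2, 3);
  }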

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@281471 91177308-0d34-0410-b5e6-96231b3b80d8
2016-09-14 14:08:18 +00:00
Simon Pilgrim
de70ff26c5 [X86][SSE] Regenerate vector shift lowering tests
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278232 91177308-0d34-0410-b5e6-96231b3b80d8
2016-08-10 15:13:49 +00:00
Andrea Di Biagio
749e8fee34 [X86] Improve the lowering of packed shifts by constant build_vector.
This patch teaches the backend how to efficiently lower logical and
arithmetic packed shifts on both SSE and AVX/AVX2 machines.

When possible, instead of scalarizing a vector shift, the backend should try
to expand the shift into a sequence of two packed shifts by immediate count
followed by a MOVSS/MOVSD.

Example:
  (v4i32 (srl A, (build_vector <X, Y, Y, Y>)))

can be rewritten as:
  (v4i32 (MOVSS (srl A, <Y, Y, Y, Y>), (srl A, <X, X, X, X>)))

with X and Y ConstantInts.

The advantage is that the two new shifts from the example would be lowered into
X86ISD::VSRLI nodes. This is always cheaper than scalarizing the vector into
four scalar shifts plus four pairs of vector insert/extract.
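
In intrinsics form, the rewrite for a concrete count vector <2, 5, 5, 5> looks roughly like this (an illustrative sketch with a hypothetical function name, not code from this commit):

  #include <emmintrin.h>

  // (v4i32 (srl A, <2, 5, 5, 5>)) as two immediate shifts plus a MOVSS:
  // shift all lanes by 5, shift all lanes by 2, then take lane 0 from the
  // by-2 result and lanes 1-3 from the by-5 result.
  __m128i srl_by_2555(__m128i a) {
    __m128i byY = _mm_srli_epi32(a, 5);  // the <Y, Y, Y, Y> shift
    __m128i byX = _mm_srli_epi32(a, 2);  // the <X, X, X, X> shift
    return _mm_castps_si128(
        _mm_move_ss(_mm_castsi128_ps(byY), _mm_castsi128_ps(byX)));
  }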

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@206316 91177308-0d34-0410-b5e6-96231b3b80d8
2014-04-15 19:30:48 +00:00