X86ISelLowering: Fix cmov(cmov) special lowering bug

There's a special case in EmitLoweredSelect() that produces an improved
lowering for cmov(cmov) patterns. However, this special lowering is
currently broken if the inner cmov has multiple users, so this patch
stops using it in that case.
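
For context, the special lowering collapses both cmovs into a single
branch chain with one PHI, roughly like this (a pseudo-MIR sketch with
illustrative block and register names, not actual compiler output; note
the pattern requires both cmovs to use the same "true" register %t):

  ; input
  %inner = CMOV_GR8 %f, %t ... (cc1)
  %outer = CMOV_GR8 %inner, %t ... (cc2)

  ; special lowering: one diamond, one PHI
  BB#0:  Jcc1 BB#3
  BB#1:  Jcc2 BB#3
  BB#2:  ; fall through: both conditions false
  BB#3:  %outer = phi(%f, BB#2, %t, BB#0, %t, BB#1)

No PHI is created for %inner itself, which is only correct if the outer
cmov is its sole user.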

In case you wonder why this wasn't instead fixed by keeping the special
lowering and inserting a second PHI for the inner cmov: I believe that
would incur additional copies/register pressure, so the special lowering
would no longer improve upon the normal one in this case.
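
To illustrate, a fixed cascaded lowering would need a second PHI roughly
like this (again a sketch, not real output):

  BB#3:  %inner = phi(%f, BB#2, %t, BB#0, %f, BB#1)
         %outer = phi(%f, BB#2, %t, BB#0, %t, BB#1)

Now both %f and %t must stay live into BB#3 to feed two PHIs, so the
cascaded form ends up no cheaper than lowering each cmov separately.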

This fixes http://llvm.org/PR26256 (= rdar://24329747)

llvm-svn: 258729
Matthias Braun 2016-01-25 22:08:25 +00:00
parent a52cfbb4e3
commit 69950e68ba
2 changed files with 51 additions and 1 deletion


@@ -22050,7 +22050,8 @@ X86TargetLowering::EmitLoweredSelect(MachineInstr *MI,
   if (LastCMOV == MI &&
       NextMIIt != BB->end() && NextMIIt->getOpcode() == MI->getOpcode() &&
       NextMIIt->getOperand(2).getReg() == MI->getOperand(2).getReg() &&
-      NextMIIt->getOperand(1).getReg() == MI->getOperand(0).getReg()) {
+      NextMIIt->getOperand(1).getReg() == MI->getOperand(0).getReg() &&
+      NextMIIt->getOperand(1).isKill()) {
     CascadedCMOV = &*NextMIIt;
   }
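
Note: the added isKill() check is how "no other users" is detected at
this point. A <kill> flag on the outer cmov's first operand, e.g.

  %vreg13<def> = CMOV_GR8 %vreg12<kill>, %vreg11 ... (gt)

means the inner result %vreg12 dies at the outer cmov, so no later
instruction reads it. Kill flags are conservative: if the flag is
missing, the special lowering is skipped even when it might have been
safe, which costs performance at most, not correctness.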


@@ -224,3 +224,52 @@ entry:
 }
 attributes #0 = { nounwind }
+
+@g8 = global i8 0
+
+; The following test failed because LLVM had a bug where a structure like:
+;
+;   %vreg12<def> = CMOV_GR8 %vreg7, %vreg11 ... (lt)
+;   %vreg13<def> = CMOV_GR8 %vreg12, %vreg11 ... (gt)
+;
+; was lowered to:
+;
+;   BB#0:
+;     JL_1 BB#9
+;   BB#7:
+;     JG_1 BB#9
+;   BB#8:
+;   BB#9:
+;     vreg12 = phi(vreg7, BB#8, vreg11, BB#0, vreg12, BB#7)
+;     vreg13 = COPY vreg12
+;
+; This was invalid because %vreg12 is not the same value as %vreg13.
+; CHECK-LABEL: no_cascade_opt:
+; CMOV-DAG: cmpl %edx, %esi
+; CMOV-DAG: movb $20, %al
+; CMOV-DAG: movb $20, %dl
+; CMOV: jl [[BB0:.LBB[0-9_]+]]
+; CMOV: movb %cl, %dl
+; CMOV: [[BB0]]:
+; CMOV: jg [[BB1:.LBB[0-9_]+]]
+; CMOV: movb %dl, %al
+; CMOV: [[BB1]]:
+; CMOV: testl %edi, %edi
+; CMOV: je [[BB2:.LBB[0-9_]+]]
+; CMOV: movb %dl, %al
+; CMOV: [[BB2]]:
+; CMOV: movb %al, g8(%rip)
+; CMOV: retq
+define void @no_cascade_opt(i32 %v0, i32 %v1, i32 %v2, i32 %v3) {
+entry:
+  %c0 = icmp eq i32 %v0, 0
+  %c1 = icmp slt i32 %v1, %v2
+  %c2 = icmp sgt i32 %v1, %v2
+  %trunc = trunc i32 %v3 to i8
+  %sel0 = select i1 %c1, i8 20, i8 %trunc
+  %sel1 = select i1 %c2, i8 20, i8 %sel0
+  %sel2 = select i1 %c0, i8 %sel1, i8 %sel0
+  store volatile i8 %sel2, i8* @g8
+  ret void
+}
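
In @no_cascade_opt above, %sel0 is used by both %sel1 and %sel2, so the
inner cmov's result stays live past the outer cmov, its operand carries
no kill flag, and the cascaded lowering is now correctly skipped. That
is why the CHECK lines expect three separate branch diamonds instead of
a single cascaded one.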