[TableGen] Support multi-alternative pattern fragments

A TableGen instruction record usually contains a DAG pattern that
describes the SelectionDAG operation implemented by this instruction.
However, there are cases where several different DAG patterns can all
be implemented by the same instruction. The way to represent this
today is to write additional patterns in the Pattern (or usually Pat)
class that map those extra DAG patterns to the instruction. This
usually works fine.

However, I've noticed cases where the current setup seems to require
quite a bit of extra (and duplicated) text in the target .td files.
For example, in the SystemZ back-end, there are quite a number of
instructions that can implement an "add-with-overflow" operation.
The same instructions also need to be used to implement just plain
addition (simply ignoring the extra overflow output). The current
solution requires creating an extra Pat pattern for every such
instruction, duplicating the information about which particular add
operands map best to which particular instruction.
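
As an illustration of the existing duplication (taken from the SystemZ
definitions that this patch removes below), the add instruction is
defined against the overflow operation, and a separate Pat entry then
repeats the same operand mapping for plain addition:

  defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_saddo, GR32, GR32>;
  def : Pat<(add GR32:$src1, GR32:$src2),
            (AR GR32:$src1, GR32:$src2)>;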

This patch enhances TableGen to support a new PatFrags class, which
can be used to encapsulate multiple alternative patterns that may
all match to the same instruction.  It operates the same way as the
existing PatFrag class, except that it accepts a list of DAG patterns
to match instead of just a single one.  As an example, we can now define
a PatFrags to match either an "add-with-overflow" or a regular add
operation:

  def z_sadd : PatFrags<(ops node:$src1, node:$src2),
                        [(z_saddo node:$src1, node:$src2),
                         (add node:$src1, node:$src2)]>;

and then use this in the add instruction pattern:

  defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_sadd, GR32, GR32>;

The corresponding SystemZ target changes are included in this patch as well.


Note that PatFrag is now defined as a subclass of PatFrags, which
means that some users of PatFrag internals need to be updated.
(E.g. instead of using PatFrag.Fragment you now need to use
!head(PatFrag.Fragments).)
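
For instance, the Hexagon Su wrapper (updated below in this patch)
now reads:

  class Su<PatFrag Op>
    : PatFrag<Op.Operands, !head(Op.Fragments), [{ return hasOneUse(N); }],
              Op.OperandTransform>;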


The implementation is based on the following main ideas:
- InlinePatternFragments may now replace each original pattern
  with several result patterns, not just one.
- parseInstructionPattern delays calling InlinePatternFragments
  and InferAllTypes.  Instead, it extracts a single DAG match
  pattern from the main instruction pattern.
- Processing of the DAG match pattern part of the main instruction
  pattern now shares most code with processing match patterns from
  the Pattern class.
- Direct use of main instruction patterns in InferFromPattern and
  EmitResultInstructionAsOperand is removed; everything now operates
  solely on DAG match patterns.
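
Conceptually, inlining the z_sadd fragment means that the single AR
instruction pattern above now behaves (ignoring the implicit CC
result) as if both of these match patterns had been written by hand,
the second of which replaces the explicit Pat removed by this patch:

  def : Pat<(z_saddo GR32:$src1, GR32:$src2),
            (AR GR32:$src1, GR32:$src2)>;
  def : Pat<(add GR32:$src1, GR32:$src2),
            (AR GR32:$src1, GR32:$src2)>;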


Reviewed by: hfinkel

Differential Revision: https://reviews.llvm.org/D48545

llvm-svn: 336999
Ulrich Weigand 2018-07-13 13:18:00 +00:00
parent 1bd8a9908e
commit 535942804d
11 changed files with 483 additions and 518 deletions


@ -615,13 +615,16 @@ class CodePatPred<code predicate> : PatPred {
// compact and readable.
//
/// PatFrag - Represents a pattern fragment. This can match something on the
/// DAG, from a single node to multiple nested other fragments.
/// PatFrags - Represents a set of pattern fragments. Each single fragment
/// can match something on the DAG, from a single node to multiple nested other
/// fragments. The whole set of fragments matches if any of the single
/// fragments match. This allows e.g. matching an "add with overflow" and
/// a regular "add" with the same fragment set.
///
class PatFrag<dag ops, dag frag, code pred = [{}],
SDNodeXForm xform = NOOP_SDNodeXForm> : SDPatternOperator {
class PatFrags<dag ops, list<dag> frags, code pred = [{}],
SDNodeXForm xform = NOOP_SDNodeXForm> : SDPatternOperator {
dag Operands = ops;
dag Fragment = frag;
list<dag> Fragments = frags;
code PredicateCode = pred;
code GISelPredicateCode = [{}];
code ImmediateCode = [{}];
@ -682,6 +685,11 @@ class PatFrag<dag ops, dag frag, code pred = [{}],
ValueType ScalarMemoryVT = ?;
}
// PatFrag - A version of PatFrags matching only a single fragment.
class PatFrag<dag ops, dag frag, code pred = [{}],
SDNodeXForm xform = NOOP_SDNodeXForm>
: PatFrags<ops, [frag], pred, xform>;
// OutPatFrag is a pattern fragment that is used as part of an output pattern
// (not an input pattern). These do not have predicates or transforms, but are
// used to avoid repeated subexpressions in output patterns.


@ -258,7 +258,7 @@ class Not2<PatFrag P>
: PatFrag<(ops node:$A, node:$B), (P node:$A, (not node:$B))>;
class Su<PatFrag Op>
: PatFrag<Op.Operands, Op.Fragment, [{ return hasOneUse(N); }],
: PatFrag<Op.Operands, !head(Op.Fragments), [{ return hasOneUse(N); }],
Op.OperandTransform>;
// Main selection macros.
@ -538,7 +538,7 @@ def: Pat<(i1 (setult I32:$Rs, u32_0ImmPred:$u9)),
// Patfrag to convert the usual comparison patfrags (e.g. setlt) to ones
// that reverse the order of the operands.
class RevCmp<PatFrag F>
: PatFrag<(ops node:$rhs, node:$lhs), F.Fragment, F.PredicateCode,
: PatFrag<(ops node:$rhs, node:$lhs), !head(F.Fragments), F.PredicateCode,
F.OperandTransform>;
def: OpR_RR_pat<C2_cmpeq, seteq, i1, I32>;
@ -2190,7 +2190,7 @@ class Stoream_pat<PatFrag Store, PatFrag Value, PatFrag Addr, PatFrag ValueMod,
// swapped. This relies on the knowledge that the F.Fragment uses names
// "ptr" and "val".
class AtomSt<PatFrag F>
: PatFrag<(ops node:$val, node:$ptr), F.Fragment, F.PredicateCode,
: PatFrag<(ops node:$val, node:$ptr), !head(F.Fragments), F.PredicateCode,
F.OperandTransform> {
let IsAtomic = F.IsAtomic;
let MemoryVT = F.MemoryVT;


@ -7468,7 +7468,7 @@ class WMMA_LOAD_INTR_HELPER<string Geometry, string Abc, string Layout,
}];
let Operands = !if(WithStride, (ops node:$src, node:$ldm), (ops node:$src));
let Fragment = !foreach(tmp, Operands, !subst(ops, Intr, tmp));
let Fragments = [!foreach(tmp, Operands, !subst(ops, Intr, tmp))];
let PredicateCode = !if(!eq(Space, ".shared"), match_shared,
!if(!eq(Space, ".global"), match_global, match_generic));
}
@ -7608,7 +7608,7 @@ class WMMA_STORE_INTR_HELPER<string Geometry, string Layout, string Space,
node:$r4, node:$r5, node:$r6, node:$r7));
dag StrideArg = !if(WithStride, (ops node:$ldm), (ops));
let Operands = !con(Args, StrideArg);
let Fragment = !foreach(tmp, Operands, !subst(ops, Intr, tmp));
let Fragments = [!foreach(tmp, Operands, !subst(ops, Intr, tmp))];
let PredicateCode = !if(!eq(Space, ".shared"), match_shared,
!if(!eq(Space, ".global"), match_global, match_generic));
}


@ -895,8 +895,8 @@ def : Pat<(or (zext32 GR32:$src), imm64hf32:$imm),
let Defs = [CC], CCValues = 0xF, CompareZeroCCMask = 0x8 in {
// Addition of a register.
let isCommutable = 1 in {
defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_saddo, GR32, GR32>;
defm AGR : BinaryRREAndK<"agr", 0xB908, 0xB9E8, z_saddo, GR64, GR64>;
defm AR : BinaryRRAndK<"ar", 0x1A, 0xB9F8, z_sadd, GR32, GR32>;
defm AGR : BinaryRREAndK<"agr", 0xB908, 0xB9E8, z_sadd, GR64, GR64>;
}
def AGFR : BinaryRRE<"agfr", 0xB918, null_frag, GR64, GR32>;
@ -907,38 +907,38 @@ let Defs = [CC], CCValues = 0xF, CompareZeroCCMask = 0x8 in {
Requires<[FeatureHighWord]>;
// Addition of signed 16-bit immediates.
defm AHIMux : BinaryRIAndKPseudo<"ahimux", z_saddo, GRX32, imm32sx16>;
defm AHI : BinaryRIAndK<"ahi", 0xA7A, 0xECD8, z_saddo, GR32, imm32sx16>;
defm AGHI : BinaryRIAndK<"aghi", 0xA7B, 0xECD9, z_saddo, GR64, imm64sx16>;
defm AHIMux : BinaryRIAndKPseudo<"ahimux", z_sadd, GRX32, imm32sx16>;
defm AHI : BinaryRIAndK<"ahi", 0xA7A, 0xECD8, z_sadd, GR32, imm32sx16>;
defm AGHI : BinaryRIAndK<"aghi", 0xA7B, 0xECD9, z_sadd, GR64, imm64sx16>;
// Addition of signed 32-bit immediates.
def AFIMux : BinaryRIPseudo<z_saddo, GRX32, simm32>,
def AFIMux : BinaryRIPseudo<z_sadd, GRX32, simm32>,
Requires<[FeatureHighWord]>;
def AFI : BinaryRIL<"afi", 0xC29, z_saddo, GR32, simm32>;
def AIH : BinaryRIL<"aih", 0xCC8, z_saddo, GRH32, simm32>,
def AFI : BinaryRIL<"afi", 0xC29, z_sadd, GR32, simm32>;
def AIH : BinaryRIL<"aih", 0xCC8, z_sadd, GRH32, simm32>,
Requires<[FeatureHighWord]>;
def AGFI : BinaryRIL<"agfi", 0xC28, z_saddo, GR64, imm64sx32>;
def AGFI : BinaryRIL<"agfi", 0xC28, z_sadd, GR64, imm64sx32>;
// Addition of memory.
defm AH : BinaryRXPair<"ah", 0x4A, 0xE37A, z_saddo, GR32, asextloadi16, 2>;
defm A : BinaryRXPair<"a", 0x5A, 0xE35A, z_saddo, GR32, load, 4>;
def AGH : BinaryRXY<"agh", 0xE338, z_saddo, GR64, asextloadi16, 2>,
defm AH : BinaryRXPair<"ah", 0x4A, 0xE37A, z_sadd, GR32, asextloadi16, 2>;
defm A : BinaryRXPair<"a", 0x5A, 0xE35A, z_sadd, GR32, load, 4>;
def AGH : BinaryRXY<"agh", 0xE338, z_sadd, GR64, asextloadi16, 2>,
Requires<[FeatureMiscellaneousExtensions2]>;
def AGF : BinaryRXY<"agf", 0xE318, z_saddo, GR64, asextloadi32, 4>;
def AG : BinaryRXY<"ag", 0xE308, z_saddo, GR64, load, 8>;
def AGF : BinaryRXY<"agf", 0xE318, z_sadd, GR64, asextloadi32, 4>;
def AG : BinaryRXY<"ag", 0xE308, z_sadd, GR64, load, 8>;
// Addition to memory.
def ASI : BinarySIY<"asi", 0xEB6A, null_frag, imm32sx8>;
def AGSI : BinarySIY<"agsi", 0xEB7A, null_frag, imm64sx8>;
def ASI : BinarySIY<"asi", 0xEB6A, add, imm32sx8>;
def AGSI : BinarySIY<"agsi", 0xEB7A, add, imm64sx8>;
}
defm : SXB<z_saddo, GR64, AGFR>;
defm : SXB<z_sadd, GR64, AGFR>;
// Addition producing a carry.
let Defs = [CC] in {
// Addition of a register.
let isCommutable = 1 in {
defm ALR : BinaryRRAndK<"alr", 0x1E, 0xB9FA, z_uaddo, GR32, GR32>;
defm ALGR : BinaryRREAndK<"algr", 0xB90A, 0xB9EA, z_uaddo, GR64, GR64>;
defm ALR : BinaryRRAndK<"alr", 0x1E, 0xB9FA, z_uadd, GR32, GR32>;
defm ALGR : BinaryRREAndK<"algr", 0xB90A, 0xB9EA, z_uadd, GR64, GR64>;
}
def ALGFR : BinaryRRE<"algfr", 0xB91A, null_frag, GR64, GR32>;
@ -949,29 +949,29 @@ let Defs = [CC] in {
Requires<[FeatureHighWord]>;
// Addition of signed 16-bit immediates.
def ALHSIK : BinaryRIE<"alhsik", 0xECDA, z_uaddo, GR32, imm32sx16>,
def ALHSIK : BinaryRIE<"alhsik", 0xECDA, z_uadd, GR32, imm32sx16>,
Requires<[FeatureDistinctOps]>;
def ALGHSIK : BinaryRIE<"alghsik", 0xECDB, z_uaddo, GR64, imm64sx16>,
def ALGHSIK : BinaryRIE<"alghsik", 0xECDB, z_uadd, GR64, imm64sx16>,
Requires<[FeatureDistinctOps]>;
// Addition of unsigned 32-bit immediates.
def ALFI : BinaryRIL<"alfi", 0xC2B, z_uaddo, GR32, uimm32>;
def ALGFI : BinaryRIL<"algfi", 0xC2A, z_uaddo, GR64, imm64zx32>;
def ALFI : BinaryRIL<"alfi", 0xC2B, z_uadd, GR32, uimm32>;
def ALGFI : BinaryRIL<"algfi", 0xC2A, z_uadd, GR64, imm64zx32>;
// Addition of signed 32-bit immediates.
def ALSIH : BinaryRIL<"alsih", 0xCCA, null_frag, GRH32, simm32>,
Requires<[FeatureHighWord]>;
// Addition of memory.
defm AL : BinaryRXPair<"al", 0x5E, 0xE35E, z_uaddo, GR32, load, 4>;
def ALGF : BinaryRXY<"algf", 0xE31A, z_uaddo, GR64, azextloadi32, 4>;
def ALG : BinaryRXY<"alg", 0xE30A, z_uaddo, GR64, load, 8>;
defm AL : BinaryRXPair<"al", 0x5E, 0xE35E, z_uadd, GR32, load, 4>;
def ALGF : BinaryRXY<"algf", 0xE31A, z_uadd, GR64, azextloadi32, 4>;
def ALG : BinaryRXY<"alg", 0xE30A, z_uadd, GR64, load, 8>;
// Addition to memory.
def ALSI : BinarySIY<"alsi", 0xEB6E, null_frag, imm32sx8>;
def ALGSI : BinarySIY<"algsi", 0xEB7E, null_frag, imm64sx8>;
}
defm : ZXB<z_uaddo, GR64, ALGFR>;
defm : ZXB<z_uadd, GR64, ALGFR>;
// Addition producing and using a carry.
let Defs = [CC], Uses = [CC] in {
@ -988,54 +988,6 @@ let Defs = [CC], Uses = [CC] in {
def ALSIHN : BinaryRIL<"alsihn", 0xCCB, null_frag, GRH32, simm32>,
Requires<[FeatureHighWord]>;
// Map plain addition to either arithmetic or logical operation.
def : Pat<(add GR32:$src1, GR32:$src2),
(AR GR32:$src1, GR32:$src2)>;
def : Pat<(add GR64:$src1, GR64:$src2),
(AGR GR64:$src1, GR64:$src2)>;
defm : SXB<add, GR64, AGFR>;
defm : ZXB<add, GR64, ALGFR>;
def : Pat<(add GRX32:$src1, imm32sx16:$src2),
(AHIMux GRX32:$src1, imm32sx16:$src2)>, Requires<[FeatureHighWord]>;
def : Pat<(add GR32:$src1, imm32sx16:$src2),
(AHI GR32:$src1, imm32sx16:$src2)>;
def : Pat<(add GR64:$src1, imm64sx16:$src2),
(AGHI GR64:$src1, imm64sx16:$src2)>;
def : Pat<(add GRX32:$src1, simm32:$src2),
(AFIMux GRX32:$src1, simm32:$src2)>, Requires<[FeatureHighWord]>;
def : Pat<(add GR32:$src1, simm32:$src2),
(AFI GR32:$src1, simm32:$src2)>;
def : Pat<(add GRH32:$src1, simm32:$src2),
(AIH GRH32:$src1, simm32:$src2)>, Requires<[FeatureHighWord]>;
def : Pat<(add GR64:$src1, imm64sx32:$src2),
(AGFI GR64:$src1, imm64sx32:$src2)>;
def : Pat<(add GR64:$src1, imm64zx32:$src2),
(ALGFI GR64:$src1, imm64zx32:$src2)>;
def : Pat<(add GR32:$src1, (asextloadi16 bdxaddr12pair:$addr)),
(AH GR32:$src1, bdxaddr12pair:$addr)>;
def : Pat<(add GR32:$src1, (asextloadi16 bdxaddr20pair:$addr)),
(AHY GR32:$src1, bdxaddr20pair:$addr)>;
def : Pat<(add GR32:$src1, (load bdxaddr12pair:$addr)),
(A GR32:$src1, bdxaddr12pair:$addr)>;
def : Pat<(add GR32:$src1, (load bdxaddr20pair:$addr)),
(AY GR32:$src1, bdxaddr20pair:$addr)>;
def : Pat<(add GR64:$src1, (asextloadi16 bdxaddr20only:$addr)),
(AGH GR64:$src1, bdxaddr20only:$addr)>,
Requires<[FeatureMiscellaneousExtensions2]>;
def : Pat<(add GR64:$src1, (asextloadi32 bdxaddr20only:$addr)),
(AGF GR64:$src1, bdxaddr20only:$addr)>;
def : Pat<(add GR64:$src1, (azextloadi32 bdxaddr20only:$addr)),
(ALGF GR64:$src1, bdxaddr20only:$addr)>;
def : Pat<(add GR64:$src1, (load bdxaddr20only:$addr)),
(AG GR64:$src1, bdxaddr20only:$addr)>;
def : Pat<(store (add (load bdaddr20only:$addr), imm32sx8:$src2), bdaddr20only:$addr),
(ASI bdaddr20only:$addr, imm32sx8:$src2)>;
def : Pat<(store (add (load bdaddr20only:$addr), imm64sx8:$src2), bdaddr20only:$addr),
(AGSI bdaddr20only:$addr, imm64sx8:$src2)>;
//===----------------------------------------------------------------------===//
// Subtraction
@ -1044,9 +996,9 @@ def : Pat<(store (add (load bdaddr20only:$addr), imm64sx8:$src2), bdaddr20only:$
// Subtraction producing a signed overflow flag.
let Defs = [CC], CCValues = 0xF, CompareZeroCCMask = 0x8 in {
// Subtraction of a register.
defm SR : BinaryRRAndK<"sr", 0x1B, 0xB9F9, z_ssubo, GR32, GR32>;
defm SR : BinaryRRAndK<"sr", 0x1B, 0xB9F9, z_ssub, GR32, GR32>;
def SGFR : BinaryRRE<"sgfr", 0xB919, null_frag, GR64, GR32>;
defm SGR : BinaryRREAndK<"sgr", 0xB909, 0xB9E9, z_ssubo, GR64, GR64>;
defm SGR : BinaryRREAndK<"sgr", 0xB909, 0xB9E9, z_ssub, GR64, GR64>;
// Subtraction from a high register.
def SHHHR : BinaryRRFa<"shhhr", 0xB9C9, null_frag, GRH32, GRH32, GRH32>,
@ -1055,39 +1007,39 @@ let Defs = [CC], CCValues = 0xF, CompareZeroCCMask = 0x8 in {
Requires<[FeatureHighWord]>;
// Subtraction of memory.
defm SH : BinaryRXPair<"sh", 0x4B, 0xE37B, z_ssubo, GR32, asextloadi16, 2>;
defm S : BinaryRXPair<"s", 0x5B, 0xE35B, z_ssubo, GR32, load, 4>;
def SGH : BinaryRXY<"sgh", 0xE339, z_ssubo, GR64, asextloadi16, 2>,
defm SH : BinaryRXPair<"sh", 0x4B, 0xE37B, z_ssub, GR32, asextloadi16, 2>;
defm S : BinaryRXPair<"s", 0x5B, 0xE35B, z_ssub, GR32, load, 4>;
def SGH : BinaryRXY<"sgh", 0xE339, z_ssub, GR64, asextloadi16, 2>,
Requires<[FeatureMiscellaneousExtensions2]>;
def SGF : BinaryRXY<"sgf", 0xE319, z_ssubo, GR64, asextloadi32, 4>;
def SG : BinaryRXY<"sg", 0xE309, z_ssubo, GR64, load, 8>;
def SGF : BinaryRXY<"sgf", 0xE319, z_ssub, GR64, asextloadi32, 4>;
def SG : BinaryRXY<"sg", 0xE309, z_ssub, GR64, load, 8>;
}
defm : SXB<z_ssubo, GR64, SGFR>;
defm : SXB<z_ssub, GR64, SGFR>;
// Subtracting an immediate is the same as adding the negated immediate.
let AddedComplexity = 1 in {
def : Pat<(z_ssubo GR32:$src1, imm32sx16n:$src2),
def : Pat<(z_ssub GR32:$src1, imm32sx16n:$src2),
(AHIMux GR32:$src1, imm32sx16n:$src2)>,
Requires<[FeatureHighWord]>;
def : Pat<(z_ssubo GR32:$src1, simm32n:$src2),
def : Pat<(z_ssub GR32:$src1, simm32n:$src2),
(AFIMux GR32:$src1, simm32n:$src2)>,
Requires<[FeatureHighWord]>;
def : Pat<(z_ssubo GR32:$src1, imm32sx16n:$src2),
def : Pat<(z_ssub GR32:$src1, imm32sx16n:$src2),
(AHI GR32:$src1, imm32sx16n:$src2)>;
def : Pat<(z_ssubo GR32:$src1, simm32n:$src2),
def : Pat<(z_ssub GR32:$src1, simm32n:$src2),
(AFI GR32:$src1, simm32n:$src2)>;
def : Pat<(z_ssubo GR64:$src1, imm64sx16n:$src2),
def : Pat<(z_ssub GR64:$src1, imm64sx16n:$src2),
(AGHI GR64:$src1, imm64sx16n:$src2)>;
def : Pat<(z_ssubo GR64:$src1, imm64sx32n:$src2),
def : Pat<(z_ssub GR64:$src1, imm64sx32n:$src2),
(AGFI GR64:$src1, imm64sx32n:$src2)>;
}
// Subtraction producing a carry.
let Defs = [CC] in {
// Subtraction of a register.
defm SLR : BinaryRRAndK<"slr", 0x1F, 0xB9FB, z_usubo, GR32, GR32>;
defm SLR : BinaryRRAndK<"slr", 0x1F, 0xB9FB, z_usub, GR32, GR32>;
def SLGFR : BinaryRRE<"slgfr", 0xB91B, null_frag, GR64, GR32>;
defm SLGR : BinaryRREAndK<"slgr", 0xB90B, 0xB9EB, z_usubo, GR64, GR64>;
defm SLGR : BinaryRREAndK<"slgr", 0xB90B, 0xB9EB, z_usub, GR64, GR64>;
// Subtraction from a high register.
def SLHHHR : BinaryRRFa<"slhhhr", 0xB9CB, null_frag, GRH32, GRH32, GRH32>,
@ -1096,26 +1048,30 @@ let Defs = [CC] in {
Requires<[FeatureHighWord]>;
// Subtraction of unsigned 32-bit immediates.
def SLFI : BinaryRIL<"slfi", 0xC25, z_usubo, GR32, uimm32>;
def SLGFI : BinaryRIL<"slgfi", 0xC24, z_usubo, GR64, imm64zx32>;
def SLFI : BinaryRIL<"slfi", 0xC25, z_usub, GR32, uimm32>;
def SLGFI : BinaryRIL<"slgfi", 0xC24, z_usub, GR64, imm64zx32>;
// Subtraction of memory.
defm SL : BinaryRXPair<"sl", 0x5F, 0xE35F, z_usubo, GR32, load, 4>;
def SLGF : BinaryRXY<"slgf", 0xE31B, z_usubo, GR64, azextloadi32, 4>;
def SLG : BinaryRXY<"slg", 0xE30B, z_usubo, GR64, load, 8>;
defm SL : BinaryRXPair<"sl", 0x5F, 0xE35F, z_usub, GR32, load, 4>;
def SLGF : BinaryRXY<"slgf", 0xE31B, z_usub, GR64, azextloadi32, 4>;
def SLG : BinaryRXY<"slg", 0xE30B, z_usub, GR64, load, 8>;
}
defm : ZXB<z_usubo, GR64, SLGFR>;
defm : ZXB<z_usub, GR64, SLGFR>;
// Subtracting an immediate is the same as adding the negated immediate.
let AddedComplexity = 1 in {
def : Pat<(z_usubo GR32:$src1, imm32sx16n:$src2),
def : Pat<(z_usub GR32:$src1, imm32sx16n:$src2),
(ALHSIK GR32:$src1, imm32sx16n:$src2)>,
Requires<[FeatureDistinctOps]>;
def : Pat<(z_usubo GR64:$src1, imm64sx16n:$src2),
def : Pat<(z_usub GR64:$src1, imm64sx16n:$src2),
(ALGHSIK GR64:$src1, imm64sx16n:$src2)>,
Requires<[FeatureDistinctOps]>;
}
// And vice versa in one special case (but we prefer addition).
def : Pat<(add GR64:$src1, imm64zx32n:$src2),
(SLGFI GR64:$src1, imm64zx32n:$src2)>;
// Subtraction producing and using a carry.
let Defs = [CC], Uses = [CC] in {
// Subtraction of a register.
@ -1127,35 +1083,6 @@ let Defs = [CC], Uses = [CC] in {
def SLBG : BinaryRXY<"slbg", 0xE389, z_subcarry, GR64, load, 8>;
}
// Map plain subtraction to either arithmetic or logical operation.
def : Pat<(sub GR32:$src1, GR32:$src2),
(SR GR32:$src1, GR32:$src2)>;
def : Pat<(sub GR64:$src1, GR64:$src2),
(SGR GR64:$src1, GR64:$src2)>;
defm : SXB<sub, GR64, SGFR>;
defm : ZXB<sub, GR64, SLGFR>;
def : Pat<(add GR64:$src1, imm64zx32n:$src2),
(SLGFI GR64:$src1, imm64zx32n:$src2)>;
def : Pat<(sub GR32:$src1, (asextloadi16 bdxaddr12pair:$addr)),
(SH GR32:$src1, bdxaddr12pair:$addr)>;
def : Pat<(sub GR32:$src1, (asextloadi16 bdxaddr20pair:$addr)),
(SHY GR32:$src1, bdxaddr20pair:$addr)>;
def : Pat<(sub GR32:$src1, (load bdxaddr12pair:$addr)),
(S GR32:$src1, bdxaddr12pair:$addr)>;
def : Pat<(sub GR32:$src1, (load bdxaddr20pair:$addr)),
(SY GR32:$src1, bdxaddr20pair:$addr)>;
def : Pat<(sub GR64:$src1, (asextloadi16 bdxaddr20only:$addr)),
(SGH GR64:$src1, bdxaddr20only:$addr)>,
Requires<[FeatureMiscellaneousExtensions2]>;
def : Pat<(sub GR64:$src1, (asextloadi32 bdxaddr20only:$addr)),
(SGF GR64:$src1, bdxaddr20only:$addr)>;
def : Pat<(sub GR64:$src1, (azextloadi32 bdxaddr20only:$addr)),
(SLGF GR64:$src1, bdxaddr20only:$addr)>;
def : Pat<(sub GR64:$src1, (load bdxaddr20only:$addr)),
(SG GR64:$src1, bdxaddr20only:$addr)>;
//===----------------------------------------------------------------------===//
// AND


@ -646,6 +646,20 @@ def z_inegabs64 : PatFrag<(ops node:$src), (ineg (z_iabs64 node:$src))>;
def z_muladd : PatFrag<(ops node:$src1, node:$src2, node:$src3),
(add (mul node:$src1, node:$src2), node:$src3)>;
// Alternatives to match operations with or without an overflow CC result.
def z_sadd : PatFrags<(ops node:$src1, node:$src2),
[(z_saddo node:$src1, node:$src2),
(add node:$src1, node:$src2)]>;
def z_uadd : PatFrags<(ops node:$src1, node:$src2),
[(z_uaddo node:$src1, node:$src2),
(add node:$src1, node:$src2)]>;
def z_ssub : PatFrags<(ops node:$src1, node:$src2),
[(z_ssubo node:$src1, node:$src2),
(sub node:$src1, node:$src2)]>;
def z_usub : PatFrags<(ops node:$src1, node:$src2),
[(z_usubo node:$src1, node:$src2),
(sub node:$src1, node:$src2)]>;
// Fused multiply-subtract, using the natural operand order.
def fms : PatFrag<(ops node:$src1, node:$src2, node:$src3),
(fma node:$src1, node:$src2, (fneg node:$src3))>;


@ -1663,21 +1663,31 @@ static unsigned GetNumNodeResults(Record *Operator, CodeGenDAGPatterns &CDP) {
if (Operator->isSubClassOf("SDNode"))
return CDP.getSDNodeInfo(Operator).getNumResults();
if (Operator->isSubClassOf("PatFrag")) {
if (Operator->isSubClassOf("PatFrags")) {
// If we've already parsed this pattern fragment, get it. Otherwise, handle
// the forward reference case where one pattern fragment references another
// before it is processed.
if (TreePattern *PFRec = CDP.getPatternFragmentIfRead(Operator))
return PFRec->getOnlyTree()->getNumTypes();
if (TreePattern *PFRec = CDP.getPatternFragmentIfRead(Operator)) {
// The number of results of a fragment with alternative records is the
// maximum number of results across all alternatives.
unsigned NumResults = 0;
for (auto T : PFRec->getTrees())
NumResults = std::max(NumResults, T->getNumTypes());
return NumResults;
}
// Get the result tree.
DagInit *Tree = Operator->getValueAsDag("Fragment");
Record *Op = nullptr;
if (Tree)
if (DefInit *DI = dyn_cast<DefInit>(Tree->getOperator()))
Op = DI->getDef();
assert(Op && "Invalid Fragment");
return GetNumNodeResults(Op, CDP);
ListInit *LI = Operator->getValueAsListInit("Fragments");
assert(LI && "Invalid Fragment");
unsigned NumResults = 0;
for (Init *I : LI->getValues()) {
Record *Op = nullptr;
if (DagInit *Dag = dyn_cast<DagInit>(I))
if (DefInit *DI = dyn_cast<DefInit>(Dag->getOperator()))
Op = DI->getDef();
assert(Op && "Invalid Fragment");
NumResults = std::max(NumResults, GetNumNodeResults(Op, CDP));
}
return NumResults;
}
if (Operator->isSubClassOf("Instruction")) {
@ -1843,30 +1853,81 @@ void TreePatternNode::SubstituteFormalArguments(
/// InlinePatternFragments - If this pattern refers to any pattern
/// fragments, inline them into place, giving us a pattern without any
/// PatFrag references.
TreePatternNodePtr TreePatternNode::InlinePatternFragments(TreePatternNodePtr T,
TreePattern &TP) {
if (TP.hasError())
return nullptr;
/// fragments, return the set of inlined versions (this can be more than
/// one if a PatFrags record has multiple alternatives).
void TreePatternNode::InlinePatternFragments(
TreePatternNodePtr T, TreePattern &TP,
std::vector<TreePatternNodePtr> &OutAlternatives) {
if (TP.hasError())
return;
if (isLeaf()) {
OutAlternatives.push_back(T); // nothing to do.
return;
}
if (isLeaf())
return T; // nothing to do.
Record *Op = getOperator();
if (!Op->isSubClassOf("PatFrag")) {
// Just recursively inline children nodes.
if (!Op->isSubClassOf("PatFrags")) {
if (getNumChildren() == 0) {
OutAlternatives.push_back(T);
return;
}
// Recursively inline children nodes.
std::vector<std::vector<TreePatternNodePtr> > ChildAlternatives;
ChildAlternatives.resize(getNumChildren());
for (unsigned i = 0, e = getNumChildren(); i != e; ++i) {
TreePatternNodePtr Child = getChildShared(i);
TreePatternNodePtr NewChild = Child->InlinePatternFragments(Child, TP);
Child->InlinePatternFragments(Child, TP, ChildAlternatives[i]);
// If there are no alternatives for any child, there are no
// alternatives for this expression as whole.
if (ChildAlternatives[i].empty())
return;
assert((Child->getPredicateFns().empty() ||
NewChild->getPredicateFns() == Child->getPredicateFns()) &&
"Non-empty child predicate clobbered!");
setChild(i, std::move(NewChild));
for (auto NewChild : ChildAlternatives[i])
assert((Child->getPredicateFns().empty() ||
NewChild->getPredicateFns() == Child->getPredicateFns()) &&
"Non-empty child predicate clobbered!");
}
return T;
// The end result is an all-pairs construction of the resultant pattern.
std::vector<unsigned> Idxs;
Idxs.resize(ChildAlternatives.size());
bool NotDone;
do {
// Create the variant and add it to the output list.
std::vector<TreePatternNodePtr> NewChildren;
for (unsigned i = 0, e = ChildAlternatives.size(); i != e; ++i)
NewChildren.push_back(ChildAlternatives[i][Idxs[i]]);
TreePatternNodePtr R = std::make_shared<TreePatternNode>(
getOperator(), NewChildren, getNumTypes());
// Copy over properties.
R->setName(getName());
R->setPredicateFns(getPredicateFns());
R->setTransformFn(getTransformFn());
for (unsigned i = 0, e = getNumTypes(); i != e; ++i)
R->setType(i, getExtType(i));
// Register alternative.
OutAlternatives.push_back(R);
// Increment indices to the next permutation by incrementing the
// indices from last index backward, e.g., generate the sequence
// [0, 0], [0, 1], [1, 0], [1, 1].
int IdxsIdx;
for (IdxsIdx = Idxs.size() - 1; IdxsIdx >= 0; --IdxsIdx) {
if (++Idxs[IdxsIdx] == ChildAlternatives[IdxsIdx].size())
Idxs[IdxsIdx] = 0;
else
break;
}
NotDone = (IdxsIdx >= 0);
} while (NotDone);
return;
}
// Otherwise, we found a reference to a fragment. First, look up its
@ -1877,38 +1938,42 @@ TreePatternNodePtr TreePatternNode::InlinePatternFragments(TreePatternNodePtr T,
if (Frag->getNumArgs() != Children.size()) {
TP.error("'" + Op->getName() + "' fragment requires " +
Twine(Frag->getNumArgs()) + " operands!");
return {nullptr};
return;
}
TreePatternNodePtr FragTree = Frag->getOnlyTree()->clone();
TreePredicateFn PredFn(Frag);
if (!PredFn.isAlwaysTrue())
FragTree->addPredicateFn(PredFn);
// Resolve formal arguments to their actual value.
if (Frag->getNumArgs()) {
// Compute the map of formal to actual arguments.
std::map<std::string, TreePatternNodePtr> ArgMap;
for (unsigned i = 0, e = Frag->getNumArgs(); i != e; ++i) {
const TreePatternNodePtr &Child = getChildShared(i);
ArgMap[Frag->getArgName(i)] = Child->InlinePatternFragments(Child, TP);
}
FragTree->SubstituteFormalArguments(ArgMap);
// Compute the map of formal to actual arguments.
std::map<std::string, TreePatternNodePtr> ArgMap;
for (unsigned i = 0, e = Frag->getNumArgs(); i != e; ++i) {
const TreePatternNodePtr &Child = getChildShared(i);
ArgMap[Frag->getArgName(i)] = Child;
}
FragTree->setName(getName());
for (unsigned i = 0, e = Types.size(); i != e; ++i)
FragTree->UpdateNodeType(i, getExtType(i), TP);
// Loop over all fragment alternatives.
for (auto Alternative : Frag->getTrees()) {
TreePatternNodePtr FragTree = Alternative->clone();
// Transfer in the old predicates.
for (const TreePredicateFn &Pred : getPredicateFns())
FragTree->addPredicateFn(Pred);
TreePredicateFn PredFn(Frag);
if (!PredFn.isAlwaysTrue())
FragTree->addPredicateFn(PredFn);
// The fragment we inlined could have recursive inlining that is needed. See
// if there are any pattern fragments in it and inline them as needed.
return FragTree->InlinePatternFragments(FragTree, TP);
// Resolve formal arguments to their actual value.
if (Frag->getNumArgs())
FragTree->SubstituteFormalArguments(ArgMap);
// Transfer types. Note that the resolved alternative may have fewer
// (but not more) results than the PatFrags node.
FragTree->setName(getName());
for (unsigned i = 0, e = FragTree->getNumTypes(); i != e; ++i)
FragTree->UpdateNodeType(i, getExtType(i), TP);
// Transfer in the old predicates.
for (const TreePredicateFn &Pred : getPredicateFns())
FragTree->addPredicateFn(Pred);
// The fragment we inlined could have recursive inlining that is needed. See
// if there are any pattern fragments in it and inline them as needed.
FragTree->InlinePatternFragments(FragTree, TP, OutAlternatives);
}
}
/// getImplicitType - Check to see if the specified record has an implicit
@ -1955,7 +2020,7 @@ static TypeSetByHwMode getImplicitType(Record *R, unsigned ResNo,
return TypeSetByHwMode(T.getRegisterClass(R).getValueTypes());
}
if (R->isSubClassOf("PatFrag")) {
if (R->isSubClassOf("PatFrags")) {
assert(ResNo == 0 && "FIXME: PatFrag with multiple results?");
// Pattern fragment types will be resolved when they are inlined.
return TypeSetByHwMode(); // Unknown.
@ -2207,35 +2272,6 @@ bool TreePatternNode::ApplyTypeConstraints(TreePattern &TP, bool NotRegisters) {
return false;
}
// special handling for set, which isn't really an SDNode.
if (getOperator()->getName() == "set") {
assert(getNumTypes() == 0 && "Set doesn't produce a value");
assert(getNumChildren() >= 2 && "Missing RHS of a set?");
unsigned NC = getNumChildren();
TreePatternNode *SetVal = getChild(NC-1);
bool MadeChange = SetVal->ApplyTypeConstraints(TP, NotRegisters);
for (unsigned i = 0; i < NC-1; ++i) {
TreePatternNode *Child = getChild(i);
MadeChange |= Child->ApplyTypeConstraints(TP, NotRegisters);
// Types of operands must match.
MadeChange |= Child->UpdateNodeType(0, SetVal->getExtType(i), TP);
MadeChange |= SetVal->UpdateNodeType(i, Child->getExtType(0), TP);
}
return MadeChange;
}
if (getOperator()->getName() == "implicit") {
assert(getNumTypes() == 0 && "Node doesn't produce a value");
bool MadeChange = false;
for (unsigned i = 0; i < getNumChildren(); ++i)
MadeChange |= getChild(i)->ApplyTypeConstraints(TP, NotRegisters);
return MadeChange;
}
if (const CodeGenIntrinsic *Int = getIntrinsicInfo(CDP)) {
bool MadeChange = false;
@ -2546,7 +2582,7 @@ TreePatternNodePtr TreePattern::ParseTreePattern(Init *TheInit,
// Direct reference to a leaf DagNode or PatFrag? Turn it into a
// TreePatternNode of its own. For example:
/// (foo GPR, imm) -> (foo GPR, (imm))
if (R->isSubClassOf("SDNode") || R->isSubClassOf("PatFrag"))
if (R->isSubClassOf("SDNode") || R->isSubClassOf("PatFrags"))
return ParseTreePattern(
DagInit::get(DI, nullptr,
std::vector<std::pair<Init*, StringInit*> >()),
@ -2619,7 +2655,7 @@ TreePatternNodePtr TreePattern::ParseTreePattern(Init *TheInit,
}
// Verify that this is something that makes sense for an operator.
if (!Operator->isSubClassOf("PatFrag") &&
if (!Operator->isSubClassOf("PatFrags") &&
!Operator->isSubClassOf("SDNode") &&
!Operator->isSubClassOf("Instruction") &&
!Operator->isSubClassOf("SDNodeXForm") &&
@ -2941,17 +2977,17 @@ void CodeGenDAGPatterns::ParseComplexPatterns() {
/// inside a pattern fragment to a pattern fragment.
///
void CodeGenDAGPatterns::ParsePatternFragments(bool OutFrags) {
std::vector<Record*> Fragments = Records.getAllDerivedDefinitions("PatFrag");
std::vector<Record*> Fragments = Records.getAllDerivedDefinitions("PatFrags");
// First step, parse all of the fragments.
for (Record *Frag : Fragments) {
if (OutFrags != Frag->isSubClassOf("OutPatFrag"))
continue;
DagInit *Tree = Frag->getValueAsDag("Fragment");
ListInit *LI = Frag->getValueAsListInit("Fragments");
TreePattern *P =
(PatternFragments[Frag] = llvm::make_unique<TreePattern>(
Frag, Tree, !Frag->isSubClassOf("OutPatFrag"),
Frag, LI, !Frag->isSubClassOf("OutPatFrag"),
*this)).get();
// Validate the argument list, converting it to set, to discard duplicates.
@ -2999,13 +3035,15 @@ void CodeGenDAGPatterns::ParsePatternFragments(bool OutFrags) {
// this fragment uses it.
TreePredicateFn PredFn(P);
if (!PredFn.isAlwaysTrue())
P->getOnlyTree()->addPredicateFn(PredFn);
for (auto T : P->getTrees())
T->addPredicateFn(PredFn);
// If there is a node transformation corresponding to this, keep track of
// it.
Record *Transform = Frag->getValueAsDef("OperandTransform");
if (!getSDNodeTransform(Transform).second.empty()) // not noop xform?
P->getOnlyTree()->setTransformFn(Transform);
for (auto T : P->getTrees())
T->setTransformFn(Transform);
}
// Now that we've parsed all of the tree fragments, do a closure on them so
@ -3117,6 +3155,9 @@ static bool HandleUse(TreePattern &I, TreePatternNodePtr Pat,
// Ensure that the inputs agree if we've already seen this input.
if (Rec != SlotRec)
I.error("All $" + Pat->getName() + " inputs must agree with each other");
// Ensure that the types can agree as well.
Slot->UpdateNodeType(0, Pat->getExtType(0), I);
Pat->UpdateNodeType(0, Slot->getExtType(0), I);
if (Slot->getExtTypes() != Pat->getExtTypes())
I.error("All $" + Pat->getName() + " inputs must agree with each other");
return true;
@ -3130,6 +3171,17 @@ void CodeGenDAGPatterns::FindPatternInputsAndOutputs(
std::map<std::string, TreePatternNodePtr> &InstInputs,
std::map<std::string, TreePatternNodePtr> &InstResults,
std::vector<Record *> &InstImpResults) {
// The instruction pattern still has unresolved fragments. For *named*
// nodes we must resolve those here. This may not result in multiple
// alternatives.
if (!Pat->getName().empty()) {
TreePattern SrcPattern(I.getRecord(), Pat, true, *this);
SrcPattern.InlinePatternFragments();
SrcPattern.InferAllTypes();
Pat = SrcPattern.getOnlyTree();
}
if (Pat->isLeaf()) {
bool isUse = HandleUse(I, Pat, InstInputs);
if (!isUse && Pat->getTransformFn())
@ -3181,6 +3233,12 @@ void CodeGenDAGPatterns::FindPatternInputsAndOutputs(
unsigned NumDests = Pat->getNumChildren()-1;
for (unsigned i = 0; i != NumDests; ++i) {
TreePatternNodePtr Dest = Pat->getChildShared(i);
// For set destinations we also must resolve fragments here.
TreePattern DestPattern(I.getRecord(), Dest, false, *this);
DestPattern.InlinePatternFragments();
DestPattern.InferAllTypes();
Dest = DestPattern.getOnlyTree();
if (!Dest->isLeaf())
I.error("set destination should be a register!");
@ -3223,18 +3281,17 @@ public:
bool mayLoad;
bool isBitcast;
bool isVariadic;
bool hasChain;
InstAnalyzer(const CodeGenDAGPatterns &cdp)
: CDP(cdp), hasSideEffects(false), mayStore(false), mayLoad(false),
isBitcast(false), isVariadic(false) {}
void Analyze(const TreePattern *Pat) {
// Assume only the first tree is the pattern. The others are clobber nodes.
AnalyzeNode(Pat->getTree(0).get());
}
isBitcast(false), isVariadic(false), hasChain(false) {}
void Analyze(const PatternToMatch &Pat) {
AnalyzeNode(Pat.getSrcPattern());
const TreePatternNode *N = Pat.getSrcPattern();
AnalyzeNode(N);
// These properties are detected only on the root node.
isBitcast = IsNodeBitcast(N);
}
private:
@ -3242,20 +3299,12 @@ private:
if (hasSideEffects || mayLoad || mayStore || isVariadic)
return false;
if (N->getNumChildren() != 2)
if (N->isLeaf())
return false;
if (N->getNumChildren() != 1 || !N->getChild(0)->isLeaf())
return false;
const TreePatternNode *N0 = N->getChild(0);
if (!N0->isLeaf() || !isa<DefInit>(N0->getLeafValue()))
return false;
const TreePatternNode *N1 = N->getChild(1);
if (N1->isLeaf())
return false;
if (N1->getNumChildren() != 1 || !N1->getChild(0)->isLeaf())
return false;
const SDNodeInfo &OpInfo = CDP.getSDNodeInfo(N1->getOperator());
const SDNodeInfo &OpInfo = CDP.getSDNodeInfo(N->getOperator());
if (OpInfo.getNumResults() != 1 || OpInfo.getNumOperands() != 1)
return false;
return OpInfo.getEnumName() == "ISD::BITCAST";
@ -3281,17 +3330,12 @@ public:
for (unsigned i = 0, e = N->getNumChildren(); i != e; ++i)
AnalyzeNode(N->getChild(i));
// Ignore set nodes, which are not SDNodes.
if (N->getOperator()->getName() == "set") {
isBitcast = IsNodeBitcast(N);
return;
}
// Notice properties of the node.
if (N->NodeHasProperty(SDNPMayStore, CDP)) mayStore = true;
if (N->NodeHasProperty(SDNPMayLoad, CDP)) mayLoad = true;
if (N->NodeHasProperty(SDNPSideEffect, CDP)) hasSideEffects = true;
if (N->NodeHasProperty(SDNPVariadic, CDP)) isVariadic = true;
if (N->NodeHasProperty(SDNPHasChain, CDP)) hasChain = true;
if (const CodeGenIntrinsic *IntInfo = N->getIntrinsicInfo(CDP)) {
// If this is an intrinsic, analyze it.
@ -3354,7 +3398,13 @@ static bool InferFromPattern(CodeGenInstruction &InstInfo,
InstInfo.mayLoad |= PatInfo.mayLoad;
// These flags are silently added without any verification.
InstInfo.isBitcast |= PatInfo.isBitcast;
// FIXME: To match historical behavior of TableGen, for now add those flags
// only when we're inferring from the primary instruction pattern.
if (PatDef->isSubClassOf("Instruction")) {
InstInfo.isBitcast |= PatInfo.isBitcast;
InstInfo.hasChain |= PatInfo.hasChain;
InstInfo.hasChain_Inferred = true;
}
// Don't infer isVariadic. This flag means something different on SDNodes and
// instructions. For example, a CALL SDNode is variadic because it has the
@ -3425,20 +3475,13 @@ static bool checkOperandClass(CGIOperandList::OperandInfo &OI,
return false;
}
const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
void CodeGenDAGPatterns::parseInstructionPattern(
CodeGenInstruction &CGI, ListInit *Pat, DAGInstMap &DAGInsts) {
assert(!DAGInsts.count(CGI.TheDef) && "Instruction already parsed!");
// Parse the instruction.
auto I = llvm::make_unique<TreePattern>(CGI.TheDef, Pat, true, *this);
// Inline pattern fragments into it.
I->InlinePatternFragments();
// Infer as many types as possible. If we cannot infer all of them, we can
// never do anything with this instruction pattern: report it to the user.
if (!I->InferAllTypes())
I->error("Could not infer all types in pattern!");
TreePattern I(CGI.TheDef, Pat, true, *this);
// InstInputs - Keep track of all of the inputs of the instruction, along
// with the record they are declared as.
@ -3453,9 +3496,9 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
// Verify that the top-level forms in the instruction are of void type, and
// fill in the InstResults map.
SmallString<32> TypesString;
for (unsigned j = 0, e = I->getNumTrees(); j != e; ++j) {
for (unsigned j = 0, e = I.getNumTrees(); j != e; ++j) {
TypesString.clear();
TreePatternNodePtr Pat = I->getTree(j);
TreePatternNodePtr Pat = I.getTree(j);
if (Pat->getNumTypes() != 0) {
raw_svector_ostream OS(TypesString);
for (unsigned k = 0, ke = Pat->getNumTypes(); k != ke; ++k) {
@ -3463,13 +3506,13 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
OS << ", ";
Pat->getExtType(k).writeToStream(OS);
}
I->error("Top-level forms in instruction pattern should have"
I.error("Top-level forms in instruction pattern should have"
" void types, has types " +
OS.str());
}
// Find inputs and outputs, and verify the structure of the uses/defs.
FindPatternInputsAndOutputs(*I, Pat, InstInputs, InstResults,
FindPatternInputsAndOutputs(I, Pat, InstInputs, InstResults,
InstImpResults);
}
@ -3479,31 +3522,31 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
unsigned NumResults = InstResults.size();
// Parse the operands list from the (ops) list, validating it.
assert(I->getArgList().empty() && "Args list should still be empty here!");
assert(I.getArgList().empty() && "Args list should still be empty here!");
// Check that all of the results occur first in the list.
std::vector<Record*> Results;
SmallVector<TreePatternNodePtr, 2> ResNodes;
for (unsigned i = 0; i != NumResults; ++i) {
if (i == CGI.Operands.size())
I->error("'" + InstResults.begin()->first +
I.error("'" + InstResults.begin()->first +
"' set but does not appear in operand list!");
const std::string &OpName = CGI.Operands[i].Name;
// Check that it exists in InstResults.
TreePatternNodePtr RNode = InstResults[OpName];
if (!RNode)
I->error("Operand $" + OpName + " does not exist in operand list!");
I.error("Operand $" + OpName + " does not exist in operand list!");
Record *R = cast<DefInit>(RNode->getLeafValue())->getDef();
ResNodes.push_back(std::move(RNode));
if (!R)
I->error("Operand $" + OpName + " should be a set destination: all "
I.error("Operand $" + OpName + " should be a set destination: all "
"outputs must occur before inputs in operand list!");
if (!checkOperandClass(CGI.Operands[i], R))
I->error("Operand $" + OpName + " class mismatch!");
I.error("Operand $" + OpName + " class mismatch!");
// Remember the return type.
Results.push_back(CGI.Operands[i].Rec);
@ -3522,7 +3565,7 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
CGIOperandList::OperandInfo &Op = CGI.Operands[i];
const std::string &OpName = Op.Name;
if (OpName.empty())
I->error("Operand #" + Twine(i) + " in operands list has no name!");
I.error("Operand #" + Twine(i) + " in operands list has no name!");
if (!InstInputsCheck.count(OpName)) {
// If this is an operand with a DefaultOps set filled in, we can ignore
@ -3533,7 +3576,7 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
if (!getDefaultOperand(Op.Rec).DefaultOps.empty())
continue;
}
I->error("Operand $" + OpName +
I.error("Operand $" + OpName +
" does not appear in the instruction pattern");
}
TreePatternNodePtr InVal = InstInputsCheck[OpName];
@ -3542,7 +3585,7 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
if (InVal->isLeaf() && isa<DefInit>(InVal->getLeafValue())) {
Record *InRec = static_cast<DefInit*>(InVal->getLeafValue())->getDef();
if (!checkOperandClass(Op, InRec))
I->error("Operand $" + OpName + "'s register class disagrees"
I.error("Operand $" + OpName + "'s register class disagrees"
" between the operand and pattern");
}
Operands.push_back(Op.Rec);
@ -3566,36 +3609,37 @@ const DAGInstruction &CodeGenDAGPatterns::parseInstructionPattern(
}
if (!InstInputsCheck.empty())
I->error("Input operand $" + InstInputsCheck.begin()->first +
" occurs in pattern but not in operands list!");
I.error("Input operand $" + InstInputsCheck.begin()->first +
" occurs in pattern but not in operands list!");
TreePatternNodePtr ResultPattern = std::make_shared<TreePatternNode>(
I->getRecord(), ResultNodeOperands,
GetNumNodeResults(I->getRecord(), *this));
I.getRecord(), ResultNodeOperands,
GetNumNodeResults(I.getRecord(), *this));
// Copy fully inferred output node types to instruction result pattern.
for (unsigned i = 0; i != NumResults; ++i) {
assert(ResNodes[i]->getNumTypes() == 1 && "FIXME: Unhandled");
ResultPattern->setType(i, ResNodes[i]->getExtType(0));
}
// FIXME: Assume only the first tree is the pattern. The others are clobber
// nodes.
TreePatternNodePtr Pattern = I.getTree(0);
TreePatternNodePtr SrcPattern;
if (Pattern->getOperator()->getName() == "set") {
SrcPattern = Pattern->getChild(Pattern->getNumChildren()-1)->clone();
} else{
// Not a set (store or something?)
SrcPattern = Pattern;
}
// Create and insert the instruction.
// FIXME: InstImpResults should not be part of DAGInstruction.
Record *R = I->getRecord();
DAGInstruction &TheInst =
DAGInsts.emplace(std::piecewise_construct, std::forward_as_tuple(R),
std::forward_as_tuple(std::move(I), Results, Operands,
InstImpResults)).first->second;
Record *R = I.getRecord();
DAGInsts.emplace(std::piecewise_construct, std::forward_as_tuple(R),
std::forward_as_tuple(Results, Operands, InstImpResults,
SrcPattern, ResultPattern));
// Use a temporary tree pattern to infer all types and make sure that the
// constructed result is correct. This depends on the instruction already
// being inserted into the DAGInsts map.
TreePattern Temp(TheInst.getPattern()->getRecord(), ResultPattern, false,
*this);
Temp.InferAllTypes(&TheInst.getPattern()->getNamedNodesMap());
TheInst.setResultPattern(Temp.getOnlyTree());
return TheInst;
LLVM_DEBUG(I.dump());
}
/// ParseInstructions - Parse all of the instructions, inlining and resolving
@ -3635,44 +3679,26 @@ void CodeGenDAGPatterns::ParseInstructions() {
// Create and insert the instruction.
std::vector<Record*> ImpResults;
Instructions.insert(std::make_pair(Instr,
DAGInstruction(nullptr, Results, Operands, ImpResults)));
DAGInstruction(Results, Operands, ImpResults)));
continue; // no pattern.
}
CodeGenInstruction &CGI = Target.getInstruction(Instr);
const DAGInstruction &DI = parseInstructionPattern(CGI, LI, Instructions);
(void)DI;
LLVM_DEBUG(DI.getPattern()->dump());
parseInstructionPattern(CGI, LI, Instructions);
}
// If we can, convert the instructions to be patterns that are matched!
for (auto &Entry : Instructions) {
DAGInstruction &TheInst = Entry.second;
TreePattern *I = TheInst.getPattern();
if (!I) continue; // No pattern.
if (PatternRewriter)
PatternRewriter(I);
// FIXME: Assume only the first tree is the pattern. The others are clobber
// nodes.
TreePatternNodePtr Pattern = I->getTree(0);
TreePatternNodePtr SrcPattern;
if (Pattern->getOperator()->getName() == "set") {
SrcPattern = Pattern->getChild(Pattern->getNumChildren()-1)->clone();
} else{
// Not a set (store or something?)
SrcPattern = Pattern;
}
Record *Instr = Entry.first;
ListInit *Preds = Instr->getValueAsListInit("Predicates");
int Complexity = Instr->getValueAsInt("AddedComplexity");
AddPatternToMatch(
I,
PatternToMatch(Instr, makePredList(Preds), SrcPattern,
TheInst.getResultPattern(), TheInst.getImpResults(),
Complexity, Instr->getID()));
DAGInstruction &TheInst = Entry.second;
TreePatternNodePtr SrcPattern = TheInst.getSrcPattern();
TreePatternNodePtr ResultPattern = TheInst.getResultPattern();
if (SrcPattern && ResultPattern) {
TreePattern Pattern(Instr, SrcPattern, true, *this);
TreePattern Result(Instr, ResultPattern, false, *this);
ParseOnePattern(Instr, Pattern, Result, TheInst.getImpResults());
}
}
}
@ -3758,27 +3784,11 @@ void CodeGenDAGPatterns::InferInstructionFlags() {
ArrayRef<const CodeGenInstruction*> Instructions =
Target.getInstructionsByEnumValue();
// First try to infer flags from the primary instruction pattern, if any.
SmallVector<CodeGenInstruction*, 8> Revisit;
unsigned Errors = 0;
for (unsigned i = 0, e = Instructions.size(); i != e; ++i) {
CodeGenInstruction &InstInfo =
const_cast<CodeGenInstruction &>(*Instructions[i]);
// Get the primary instruction pattern.
const TreePattern *Pattern = getInstruction(InstInfo.TheDef).getPattern();
if (!Pattern) {
if (InstInfo.hasUndefFlags())
Revisit.push_back(&InstInfo);
continue;
}
InstAnalyzer PatInfo(*this);
PatInfo.Analyze(Pattern);
Errors += InferFromPattern(InstInfo, PatInfo, InstInfo.TheDef);
}
// Second, look for single-instruction patterns defined outside the
// instruction.
// Try to infer flags from all patterns in PatternToMatch. These include
// both the primary instruction patterns (which always come first) and
// patterns defined outside the instruction.
for (const PatternToMatch &PTM : ptms()) {
// We can only infer from single-instruction patterns, otherwise we won't
// know which instruction should get the flags.
@ -3802,9 +3812,11 @@ void CodeGenDAGPatterns::InferInstructionFlags() {
if (Errors)
PrintFatalError("pattern conflicts");
// Revisit instructions with undefined flags and no pattern.
// If requested by the target, guess any undefined properties.
if (Target.guessInstructionProperties()) {
for (CodeGenInstruction *InstInfo : Revisit) {
for (unsigned i = 0, e = Instructions.size(); i != e; ++i) {
CodeGenInstruction *InstInfo =
const_cast<CodeGenInstruction *>(Instructions[i]);
if (InstInfo->InferredFrom)
continue;
// The mayLoad and mayStore flags default to false.
@ -3816,7 +3828,9 @@ void CodeGenDAGPatterns::InferInstructionFlags() {
}
// Complain about any flags that are still undefined.
for (CodeGenInstruction *InstInfo : Revisit) {
for (unsigned i = 0, e = Instructions.size(); i != e; ++i) {
CodeGenInstruction *InstInfo =
const_cast<CodeGenInstruction *>(Instructions[i]);
if (InstInfo->InferredFrom)
continue;
if (InstInfo->hasSideEffects_Unset)
@ -3928,6 +3942,122 @@ static bool ForceArbitraryInstResultType(TreePatternNode *N, TreePattern &TP) {
return false;
}
void CodeGenDAGPatterns::ParseOnePattern(Record *TheDef,
TreePattern &Pattern, TreePattern &Result,
const std::vector<Record *> &InstImpResults) {
// Inline pattern fragments and expand multiple alternatives.
Pattern.InlinePatternFragments();
Result.InlinePatternFragments();
if (Result.getNumTrees() != 1)
Result.error("Cannot use multi-alternative fragments in result pattern!");
// Infer types.
bool IterateInference;
bool InferredAllPatternTypes, InferredAllResultTypes;
do {
// Infer as many types as possible. If we cannot infer all of them, we
// can never do anything with this pattern: report it to the user.
InferredAllPatternTypes =
Pattern.InferAllTypes(&Pattern.getNamedNodesMap());
// Infer as many types as possible. If we cannot infer all of them, we
// can never do anything with this pattern: report it to the user.
InferredAllResultTypes =
Result.InferAllTypes(&Pattern.getNamedNodesMap());
IterateInference = false;
// Apply the type of the result to the source pattern. This helps us
// resolve cases where the input type is known to be a pointer type (which
// is considered resolved), but the result knows it needs to be 32- or
// 64-bits. Infer the other way for good measure.
for (auto T : Pattern.getTrees())
for (unsigned i = 0, e = std::min(Result.getOnlyTree()->getNumTypes(),
T->getNumTypes());
i != e; ++i) {
IterateInference |= T->UpdateNodeType(
i, Result.getOnlyTree()->getExtType(i), Result);
IterateInference |= Result.getOnlyTree()->UpdateNodeType(
i, T->getExtType(i), Result);
}
// If our iteration has converged and the input pattern's types are fully
// resolved but the result pattern is not fully resolved, we may have a
// situation where we have two instructions in the result pattern and
// the instructions require a common register class, but don't care about
// what actual MVT is used. This is actually a bug in our modelling:
// output patterns should have register classes, not MVTs.
//
// In any case, to handle this, we just go through and disambiguate some
// arbitrary types to the result pattern's nodes.
if (!IterateInference && InferredAllPatternTypes &&
!InferredAllResultTypes)
IterateInference =
ForceArbitraryInstResultType(Result.getTree(0).get(), Result);
} while (IterateInference);
// Verify that we inferred enough types that we can do something with the
// pattern and result. If these fire the user has to add type casts.
if (!InferredAllPatternTypes)
Pattern.error("Could not infer all types in pattern!");
if (!InferredAllResultTypes) {
Pattern.dump();
Result.error("Could not infer all types in pattern result!");
}
// Promote the xform function to be an explicit node if set.
const TreePatternNodePtr &DstPattern = Result.getOnlyTree();
std::vector<TreePatternNodePtr> ResultNodeOperands;
for (unsigned ii = 0, ee = DstPattern->getNumChildren(); ii != ee; ++ii) {
TreePatternNodePtr OpNode = DstPattern->getChildShared(ii);
if (Record *Xform = OpNode->getTransformFn()) {
OpNode->setTransformFn(nullptr);
std::vector<TreePatternNodePtr> Children;
Children.push_back(OpNode);
OpNode = std::make_shared<TreePatternNode>(Xform, Children,
OpNode->getNumTypes());
}
ResultNodeOperands.push_back(OpNode);
}
TreePatternNodePtr DstShared =
DstPattern->isLeaf()
? DstPattern
: std::make_shared<TreePatternNode>(DstPattern->getOperator(),
ResultNodeOperands,
DstPattern->getNumTypes());
for (unsigned i = 0, e = Result.getOnlyTree()->getNumTypes(); i != e; ++i)
DstShared->setType(i, Result.getOnlyTree()->getExtType(i));
TreePattern Temp(Result.getRecord(), DstShared, false, *this);
Temp.InferAllTypes();
ListInit *Preds = TheDef->getValueAsListInit("Predicates");
int Complexity = TheDef->getValueAsInt("AddedComplexity");
if (PatternRewriter)
PatternRewriter(&Pattern);
// A pattern may end up with an "impossible" type, i.e. a situation
// where all types have been eliminated for some node in this pattern.
// This could occur for intrinsics that only make sense for a specific
// value type, and use a specific register class. If, for some mode,
// that register class does not accept that type, the type inference
// will lead to a contradiction, which is not an error however, but
// a sign that this pattern will simply never match.
if (Temp.getOnlyTree()->hasPossibleType())
for (auto T : Pattern.getTrees())
if (T->hasPossibleType())
AddPatternToMatch(&Pattern,
PatternToMatch(TheDef, makePredList(Preds),
T, Temp.getOnlyTree(),
InstImpResults, Complexity,
TheDef->getID()));
}
void CodeGenDAGPatterns::ParsePatterns() {
std::vector<Record*> Patterns = Records.getAllDerivedDefinitions("Pattern");
@ -3940,74 +4070,16 @@ void CodeGenDAGPatterns::ParsePatterns() {
TreePattern Pattern(CurPattern, Tree, true, *this);
// Inline pattern fragments into it.
Pattern.InlinePatternFragments();
ListInit *LI = CurPattern->getValueAsListInit("ResultInstrs");
if (LI->empty()) continue; // no pattern.
// Parse the instruction.
TreePattern Result(CurPattern, LI, false, *this);
// Inline pattern fragments into it.
Result.InlinePatternFragments();
if (Result.getNumTrees() != 1)
Result.error("Cannot handle instructions producing instructions "
"with temporaries yet!");
bool IterateInference;
bool InferredAllPatternTypes, InferredAllResultTypes;
do {
// Infer as many types as possible. If we cannot infer all of them, we
// can never do anything with this pattern: report it to the user.
InferredAllPatternTypes =
Pattern.InferAllTypes(&Pattern.getNamedNodesMap());
// Infer as many types as possible. If we cannot infer all of them, we
// can never do anything with this pattern: report it to the user.
InferredAllResultTypes =
Result.InferAllTypes(&Pattern.getNamedNodesMap());
IterateInference = false;
// Apply the type of the result to the source pattern. This helps us
// resolve cases where the input type is known to be a pointer type (which
// is considered resolved), but the result knows it needs to be 32- or
// 64-bits. Infer the other way for good measure.
for (unsigned i = 0, e = std::min(Result.getTree(0)->getNumTypes(),
Pattern.getTree(0)->getNumTypes());
i != e; ++i) {
IterateInference = Pattern.getTree(0)->UpdateNodeType(
i, Result.getTree(0)->getExtType(i), Result);
IterateInference |= Result.getTree(0)->UpdateNodeType(
i, Pattern.getTree(0)->getExtType(i), Result);
}
// If our iteration has converged and the input pattern's types are fully
// resolved but the result pattern is not fully resolved, we may have a
// situation where we have two instructions in the result pattern and
// the instructions require a common register class, but don't care about
// what actual MVT is used. This is actually a bug in our modelling:
// output patterns should have register classes, not MVTs.
//
// In any case, to handle this, we just go through and disambiguate some
// arbitrary types to the result pattern's nodes.
if (!IterateInference && InferredAllPatternTypes &&
!InferredAllResultTypes)
IterateInference =
ForceArbitraryInstResultType(Result.getTree(0).get(), Result);
} while (IterateInference);
// Verify that we inferred enough types that we can do something with the
// pattern and result. If these fire the user has to add type casts.
if (!InferredAllPatternTypes)
Pattern.error("Could not infer all types in pattern!");
if (!InferredAllResultTypes) {
Pattern.dump();
Result.error("Could not infer all types in pattern result!");
}
// Validate that the input pattern is correct.
std::map<std::string, TreePatternNodePtr> InstInputs;
std::map<std::string, TreePatternNodePtr> InstResults;
@ -4016,53 +4088,7 @@ void CodeGenDAGPatterns::ParsePatterns() {
FindPatternInputsAndOutputs(Pattern, Pattern.getTree(j), InstInputs,
InstResults, InstImpResults);
// Promote the xform function to be an explicit node if set.
const TreePatternNodePtr &DstPattern = Result.getOnlyTree();
std::vector<TreePatternNodePtr> ResultNodeOperands;
for (unsigned ii = 0, ee = DstPattern->getNumChildren(); ii != ee; ++ii) {
TreePatternNodePtr OpNode = DstPattern->getChildShared(ii);
if (Record *Xform = OpNode->getTransformFn()) {
OpNode->setTransformFn(nullptr);
std::vector<TreePatternNodePtr> Children;
Children.push_back(OpNode);
OpNode = std::make_shared<TreePatternNode>(Xform, Children,
OpNode->getNumTypes());
}
ResultNodeOperands.push_back(OpNode);
}
TreePatternNodePtr DstShared =
DstPattern->isLeaf()
? DstPattern
: std::make_shared<TreePatternNode>(DstPattern->getOperator(),
ResultNodeOperands,
DstPattern->getNumTypes());
for (unsigned i = 0, e = Result.getOnlyTree()->getNumTypes(); i != e; ++i)
DstShared->setType(i, Result.getOnlyTree()->getExtType(i));
TreePattern Temp(Result.getRecord(), DstShared, false, *this);
Temp.InferAllTypes();
// A pattern may end up with an "impossible" type, i.e. a situation
// where all types have been eliminated for some node in this pattern.
// This could occur for intrinsics that only make sense for a specific
// value type, and use a specific register class. If, for some mode,
// that register class does not accept that type, the type inference
// will lead to a contradiction, which is not an error however, but
// a sign that this pattern will simply never match.
if (Pattern.getTree(0)->hasPossibleType() &&
Temp.getOnlyTree()->hasPossibleType()) {
ListInit *Preds = CurPattern->getValueAsListInit("Predicates");
int Complexity = CurPattern->getValueAsInt("AddedComplexity");
if (PatternRewriter)
PatternRewriter(&Pattern);
AddPatternToMatch(&Pattern,
PatternToMatch(CurPattern, makePredList(Preds),
Pattern.getTree(0), Temp.getOnlyTree(),
std::move(InstImpResults), Complexity,
CurPattern->getID()));
}
ParseOnePattern(CurPattern, Pattern, Result, InstImpResults);
}
}

@@ -718,10 +718,11 @@ public: // Higher level manipulation routines.
SubstituteFormalArguments(std::map<std::string, TreePatternNodePtr> &ArgMap);
/// InlinePatternFragments - If this pattern refers to any pattern
/// fragments, inline them into place, giving us a pattern without any
/// PatFrag references.
TreePatternNodePtr InlinePatternFragments(TreePatternNodePtr T,
TreePattern &TP);
/// fragments, return the set of inlined versions (this can be more than
/// one if a PatFrags record has multiple alternatives).
void InlinePatternFragments(TreePatternNodePtr T,
TreePattern &TP,
std::vector<TreePatternNodePtr> &OutAlternatives);
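// Illustrative sketch only (variable names are assumptions, not part of the
// patch): a caller passes its own vector and receives one expanded tree per
// matching alternative.
//
//   std::vector<TreePatternNodePtr> Alternatives;
//   Tree->InlinePatternFragments(Tree, TP, Alternatives);
//   // A plain PatFrag yields a single entry; a PatFrags record can yield
//   // one entry per alternative in its Fragments list.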
/// ApplyTypeConstraints - Apply all of the type constraints relevant to
/// this node and its children in the tree. This returns true if it makes a
@@ -845,10 +846,13 @@ public:
/// InlinePatternFragments - If this pattern refers to any pattern
/// fragments, inline them into place, giving us a pattern without any
/// PatFrag references.
/// PatFrags references. This may increase the number of trees in the
/// pattern if a PatFrags has multiple alternatives.
void InlinePatternFragments() {
for (unsigned i = 0, e = Trees.size(); i != e; ++i)
Trees[i] = Trees[i]->InlinePatternFragments(Trees[i], *this);
std::vector<TreePatternNodePtr> Copy = Trees;
Trees.clear();
for (unsigned i = 0, e = Copy.size(); i != e; ++i)
Copy[i]->InlinePatternFragments(Copy[i], *this, Trees);
}
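// Rough illustration (hypothetical operator names, not from the patch): if a
// parsed pattern refers to a PatFrags record with two alternatives, the loop
// above replaces the single original tree with one tree per alternative:
//
//   Before: Trees = { (set R:$dst, (my_frag R:$a, R:$b)) }
//   After:  Trees = { (set R:$dst, (opA R:$a, R:$b)),
//                     (set R:$dst, (opB R:$a, R:$b)) }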
/// InferAllTypes - Infer/propagate as many types throughout the expression
@@ -911,28 +915,26 @@ struct DAGDefaultOperand {
};
class DAGInstruction {
std::unique_ptr<TreePattern> Pattern;
std::vector<Record*> Results;
std::vector<Record*> Operands;
std::vector<Record*> ImpResults;
TreePatternNodePtr SrcPattern;
TreePatternNodePtr ResultPattern;
public:
DAGInstruction(std::unique_ptr<TreePattern> &&TP,
const std::vector<Record*> &results,
DAGInstruction(const std::vector<Record*> &results,
const std::vector<Record*> &operands,
const std::vector<Record*> &impresults)
: Pattern(std::move(TP)), Results(results), Operands(operands),
ImpResults(impresults), ResultPattern(nullptr) {}
const std::vector<Record*> &impresults,
TreePatternNodePtr srcpattern = nullptr,
TreePatternNodePtr resultpattern = nullptr)
: Results(results), Operands(operands), ImpResults(impresults),
SrcPattern(srcpattern), ResultPattern(resultpattern) {}
TreePattern *getPattern() const { return Pattern.get(); }
unsigned getNumResults() const { return Results.size(); }
unsigned getNumOperands() const { return Operands.size(); }
unsigned getNumImpResults() const { return ImpResults.size(); }
const std::vector<Record*>& getImpResults() const { return ImpResults; }
void setResultPattern(TreePatternNodePtr R) { ResultPattern = R; }
Record *getResult(unsigned RN) const {
assert(RN < Results.size());
return Results[RN];
@@ -948,6 +950,7 @@ public:
return ImpResults[RN];
}
TreePatternNodePtr getSrcPattern() const { return SrcPattern; }
TreePatternNodePtr getResultPattern() const { return ResultPattern; }
};
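// Minimal usage sketch (variable names are assumptions): the reworked
// constructor takes the source and result match patterns up front, so a
// fully populated entry can be built in one step:
//
//   DAGInstruction TheInst(Results, Operands, ImpResults,
//                          SrcPattern, ResultPattern);
//   assert(TheInst.getSrcPattern() && TheInst.getResultPattern());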
@@ -1007,7 +1010,7 @@ public:
std::vector<Record *> dstregs, int complexity,
unsigned uid, unsigned setmode = 0)
: SrcRecord(srcrecord), SrcPattern(src), DstPattern(dst),
Predicates(std::move(preds)), Dstregs(std::move(dstregs)),
Predicates(preds), Dstregs(dstregs),
AddedComplexity(complexity), ID(uid), ForceMode(setmode) {}
Record *SrcRecord; // Originating Record for the pattern.
@@ -1158,7 +1161,7 @@ public:
/// Parse the Pattern for an instruction, and insert the result in DAGInsts.
typedef std::map<Record*, DAGInstruction, LessRecordByID> DAGInstMap;
const DAGInstruction &parseInstructionPattern(
void parseInstructionPattern(
CodeGenInstruction &CGI, ListInit *Pattern,
DAGInstMap &DAGInsts);
@@ -1195,6 +1198,9 @@ private:
std::vector<Predicate> makePredList(ListInit *L);
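/// ParseOnePattern - Shared helper that runs type inference on a single
/// source/result pattern pair, builds the final result pattern, and adds a
/// PatternToMatch entry if the inferred types are feasible. Used both for
/// Pattern records and for DAG match patterns extracted from instruction
/// patterns.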
void ParseOnePattern(Record *TheDef,
TreePattern &Pattern, TreePattern &Result,
const std::vector<Record *> &InstImpResults);
void AddPatternToMatch(TreePattern *Pattern, PatternToMatch &&PTM);
void FindPatternInputsAndOutputs(
TreePattern &I, TreePatternNodePtr Pat,

@@ -346,6 +346,10 @@ CodeGenInstruction::CodeGenInstruction(Record *R)
ImplicitDefs = R->getValueAsListOfDefs("Defs");
ImplicitUses = R->getValueAsListOfDefs("Uses");
// This flag is only inferred from the pattern.
hasChain = false;
hasChain_Inferred = false;
// Parse Constraints.
ParseConstraints(R->getValueAsString("Constraints"), Operands);

@@ -260,6 +260,8 @@ template <typename T> class ArrayRef;
bool isConvergent : 1;
bool hasNoSchedulingInfo : 1;
bool FastISelShouldIgnore : 1;
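// Whether the instruction's main pattern takes a chain, and whether that
// could actually be inferred from a pattern at all; consulted by the
// DAGISel matcher generator when emitting result instructions.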
bool hasChain : 1;
bool hasChain_Inferred : 1;
std::string DeprecatedReason;
bool HasComplexDeprecationPredicate;

@@ -110,9 +110,11 @@ struct PatternSortingPredicate {
if (LHSPatSize < RHSPatSize) return true;
if (LHSPatSize > RHSPatSize) return false;
// Sort based on the UID of the pattern, giving us a deterministic ordering
// if all other sorting conditions fail.
assert(LHS == RHS || LHS->ID != RHS->ID);
// Sort based on the UID of the pattern, to reflect source order.
// Note that this is not guaranteed to be unique, since a single source
// pattern may have been resolved into multiple match patterns due to
// alternative fragments. To ensure deterministic output, always use
// std::stable_sort with this predicate.
return LHS->ID < RHS->ID;
}
};
@@ -156,7 +158,8 @@ void DAGISelEmitter::run(raw_ostream &OS) {
// We want to process the matches in order of minimal cost. Sort the patterns
// so the least cost one is at the start.
llvm::sort(Patterns.begin(), Patterns.end(), PatternSortingPredicate(CGP));
std::stable_sort(Patterns.begin(), Patterns.end(),
PatternSortingPredicate(CGP));
// Convert each variant of each pattern into a Matcher.

@@ -130,10 +130,6 @@ namespace {
return VarMapEntry-1;
}
/// GetInstPatternNode - Get the pattern for an instruction.
const TreePatternNode *GetInstPatternNode(const DAGInstruction &Ins,
const TreePatternNode *N);
void EmitResultOperand(const TreePatternNode *N,
SmallVectorImpl<unsigned> &ResultOps);
void EmitResultOfNamedOperand(const TreePatternNode *N,
@@ -661,28 +657,6 @@ void MatcherGen::EmitResultLeafAsOperand(const TreePatternNode *N,
N->dump();
}
/// GetInstPatternNode - Get the pattern for an instruction.
///
const TreePatternNode *MatcherGen::
GetInstPatternNode(const DAGInstruction &Inst, const TreePatternNode *N) {
const TreePattern *InstPat = Inst.getPattern();
// FIXME2?: Assume actual pattern comes before "implicit".
TreePatternNode *InstPatNode;
if (InstPat)
InstPatNode = InstPat->getTree(0).get();
else if (/*isRoot*/ N == Pattern.getDstPattern())
InstPatNode = Pattern.getSrcPattern();
else
return nullptr;
if (InstPatNode && !InstPatNode->isLeaf() &&
InstPatNode->getOperator()->getName() == "set")
InstPatNode = InstPatNode->getChild(InstPatNode->getNumChildren()-1);
return InstPatNode;
}
static bool
mayInstNodeLoadOrStore(const TreePatternNode *N,
const CodeGenDAGPatterns &CGP) {
@@ -720,25 +694,6 @@ EmitResultInstructionAsOperand(const TreePatternNode *N,
CodeGenInstruction &II = CGT.getInstruction(Op);
const DAGInstruction &Inst = CGP.getInstruction(Op);
// If we can, get the pattern for the instruction we're generating. We derive
// a variety of information from this pattern, such as whether it has a chain.
//
// FIXME2: This is extremely dubious for several reasons, not the least of
// which is that it gives special status to instructions with patterns that
// Pat<> nodes can't duplicate.
const TreePatternNode *InstPatNode = GetInstPatternNode(Inst, N);
// NodeHasChain - Whether the instruction node we're creating takes chains.
bool NodeHasChain = InstPatNode &&
InstPatNode->TreeHasProperty(SDNPHasChain, CGP);
// Instructions which load and store from memory should have a chain,
// regardless of whether they happen to have an internal pattern saying so.
if (Pattern.getSrcPattern()->TreeHasProperty(SDNPHasChain, CGP) &&
(II.hasCtrlDep || II.mayLoad || II.mayStore || II.canFoldAsLoad ||
II.hasSideEffects))
NodeHasChain = true;
bool isRoot = N == Pattern.getDstPattern();
// TreeHasOutGlue - True if this tree has glue.
@@ -892,6 +847,26 @@ EmitResultInstructionAsOperand(const TreePatternNode *N,
NumNodesThatLoadOrStore != 1));
}
// Determine whether we need to attach a chain to this node.
bool NodeHasChain = false;
if (Pattern.getSrcPattern()->TreeHasProperty(SDNPHasChain, CGP)) {
// For some instructions, we were able to infer from the pattern whether
// they should have a chain. Otherwise, attach the chain to the root.
//
// FIXME2: This is extremely dubious for several reasons, not the least of
// which is that it gives special status to instructions with patterns that
// Pat<> nodes can't duplicate.
if (II.hasChain_Inferred)
NodeHasChain = II.hasChain;
else
NodeHasChain = isRoot;
// Instructions which load and store from memory should have a chain,
// regardless of whether they happen to have a pattern saying so.
if (II.hasCtrlDep || II.mayLoad || II.mayStore || II.canFoldAsLoad ||
II.hasSideEffects)
NodeHasChain = true;
}
assert((!ResultVTs.empty() || TreeHasOutGlue || NodeHasChain) &&
"Node has no result");