[mlir] NFC: fix trivial typo

Differential Revision: https://reviews.llvm.org/D79065
Kazuaki Ishizaki authored on 2020-04-29 14:47:35 +09:00
parent 628829254d
commit b2f5fd84e8
12 changed files with 31 additions and 31 deletions


@@ -153,7 +153,7 @@ bound symbol, for example, `def : Pat<(AOp $a, F32Attr), ...>`.
#### Matching DAG of operations
-To match an DAG of ops, use nested `dag` objects:
+To match a DAG of ops, use nested `dag` objects:
```tablegen
@@ -530,7 +530,7 @@ the `TwoResultOp`'s two results, respectively.
The above example also shows how to replace a matched multi-result op.
-To replace a `N`-result op, the result patterns must generate at least `N`
+To replace an `N`-result op, the result patterns must generate at least `N`
declared values (see [Declared vs. actual value](#declared-vs-actual-value) for
definition). If there are more than `N` declared values generated, only the
last `N` declared values will be used to replace the matched op. Note that
@@ -668,12 +668,12 @@ directive to provide finer control.
`location` is of the following syntax:
-```tablgen
+```tablegen
(location $symbol0, $symbol1, ...)
```
where all `$symbol` should be bound previously in the pattern and one optional
-string may be specified as an attribute. The following locations are creted:
+string may be specified as an attribute. The following locations are created:
* If only 1 symbol is specified then that symbol's location is used,
* If multiple are specified then a fused location is created;
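A minimal DRR sketch combining the two constructs touched in this file (nested `dag` matching and the `location` directive); `AOp`, `BOp`, and `COp` are hypothetical single-result ops, not taken from this change:

```tablegen
// Source pattern: a nested `dag` matches a BOp feeding an AOp; `BOp:$b`
// binds the matched op so its location can be reused below.
// Result pattern: builds a COp and attaches BOp's location via the
// trailing `(location $b)` directive.
def : Pat<(AOp (BOp:$b $input), $attr),
          (COp $input, (location $b))>;
```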


@@ -72,7 +72,7 @@ argument or a memref where the corresponding dimension is either static or a
dynamic one in turn bound to a symbolic identifier. Dimensions may be bound not
only to anything that a symbol is bound to, but also to induction variables of
enclosing [`affine.for`](#affinefor-affineforop) and
-[`afffine.parallel`](#affineparallel-affineparallelop) operations, and the
+[`affine.parallel`](#affineparallel-affineparallelop) operations, and the
result of an
[`affine.apply` operation](#affineapply-operation) (which recursively may use
other dimensions and symbols).


@@ -146,7 +146,7 @@ template parameter to the `Op` class.
### Operation documentation
-This includes both an one-line `summary` and a longer human-readable
+This includes both a one-line `summary` and a longer human-readable
`description`. They will be used to drive automatic generation of dialect
documentation. They need to be provided in the operation's definition body:
@@ -863,7 +863,7 @@ significantly involve writing constraints. We have the `Constraint` class in
An operation's constraint can cover different range; it may
-* Only concern a single attribute (e.g. being an 32-bit integer greater than 5),
+* Only concern a single attribute (e.g. being a 32-bit integer greater than 5),
* Multiple operands and results (e.g., the 1st result's shape must be the same
as the 1st operand), or
* Intrinsic to the operation itself (e.g., having no side effect).
@@ -1039,13 +1039,13 @@ optionality, default values, etc.:
* `DefaultValuedAttr`: specifies the
[default value](#attributes-with-default-values) for an attribute.
-* `OptionalAttr`: specfies an attribute as [optional](#optional-attributes).
+* `OptionalAttr`: specifies an attribute as [optional](#optional-attributes).
* `Confined`: adapts an attribute with
[further constraints](#confining-attributes).
### Enum attributes
-Some attributes can only take values from an predefined enum, e.g., the
+Some attributes can only take values from a predefined enum, e.g., the
comparison kind of a comparison op. To define such attributes, ODS provides
several mechanisms: `StrEnumAttr`, `IntEnumAttr`, and `BitEnumAttr`.
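A hedged ODS sketch of the features named in these hunks (summary/description, single-attribute constraints, `DefaultValuedAttr`, `OptionalAttr`, `Confined`, and a string enum attribute); the `Foo_Op` base class and dialect are assumptions for the example and are not part of this change:

```tablegen
// Hypothetical string enum attribute with two allowed cases.
def Foo_CmpEQ : StrEnumAttrCase<"eq">;
def Foo_CmpNE : StrEnumAttrCase<"ne">;
def Foo_CmpKindAttr : StrEnumAttr<"FooCmpKind", "comparison kind",
                                  [Foo_CmpEQ, Foo_CmpNE]>;

// Hypothetical op; Foo_Op is assumed to be the usual Op<Foo_Dialect, ...>
// base class defined elsewhere.
def Foo_ExampleOp : Foo_Op<"example"> {
  let summary = "one-line summary used for generated documentation";
  let description = [{
    Longer human-readable description, also emitted into the generated
    dialect documentation.
  }];

  let arguments = (ins
    // Single-attribute constraint: a 32-bit integer greater than 5.
    Confined<I32Attr, [IntMinValue<6>]>:$count,
    // Attribute with a default value.
    DefaultValuedAttr<F32Attr, "1.0f">:$scale,
    // Optional attribute.
    OptionalAttr<StrAttr>:$label,
    // The enum attribute defined above.
    Foo_CmpKindAttr:$kind
  );
}
```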


@@ -382,7 +382,7 @@ static PassPipelineRegistration<> pipeline(
```
Pipeline registration also allows for simplified registration of
-specifializations for existing passes:
+specializations for existing passes:
```c++
static PassPipelineRegistration<> foo10(


@@ -232,7 +232,7 @@ which tensors can have fake_quant applied are somewhat involved), then
TensorFlow Lite would use the attributes of the fake_quant operations to make a
judgment about how to convert to use kernels from its quantized operations subset.
-In MLIR-based quantization, fake_quant_\* operationss are handled by converting them to
+In MLIR-based quantization, fake_quant_\* operations are handled by converting them to
a sequence of *qcast* (quantize) followed by *dcast* (dequantize) with an
appropriate *UniformQuantizedType* as the target of the qcast operation.
@@ -242,7 +242,7 @@ flexibility to move the casts as it simplifies the computation and converts it
to a form based on integral arithmetic.
This scheme also naturally allows computations that are *partially quantized*
-where the parts which could not be reduced to integral operationss are still carried out
+where the parts which could not be reduced to integral operations are still carried out
in floating point with appropriate conversions at the boundaries.
## TFLite native quantization


@@ -67,7 +67,7 @@ their layouts, and subscripted accesses to these tensors in memory.
The information captured in the IR allows a compact expression of all loop
transformations, data remappings, explicit copying necessary for explicitly
-addressed memory in accelerators, mapping to pre-tuned expert written
+addressed memory in accelerators, mapping to pre-tuned expert-written
primitives, and mapping to specialized vector instructions. Loop transformations
that can be easily implemented include the body of affine transformations: these
subsume all traditional loop transformations (unimodular and non-unimodular)
@@ -229,7 +229,7 @@ specifically abstracts the target-specific aspects that intersect with the
code-generation-related/lowering-related concerns explained above. In fact, the
`tensor` type even allows dialect-specific types as element types.
-### Bit width of a non-primitive types and `index` is undefined
+### Bit width of a non-primitive type and `index` is undefined
The bit width of a compound type is not defined by MLIR, it may be defined by a
specific lowering pass. In MLIR, bit width is a property of certain primitive
@@ -259,7 +259,7 @@ abstraction, especially closer to source language, might want to differentiate
signedness with integer types; while others, especially closer to machine
instruction, might want signless integers. Instead of forcing each abstraction
to adopt the same integer modelling or develop its own one in house, Integer
-types provides this as an option to help code reuse and consistency.
+type provides this as an option to help code reuse and consistency.
For the standard dialect, the choice is to have signless integer types. An
integer value does not have an intrinsic sign, and it's up to the specific op


@@ -45,7 +45,7 @@ However, as the design of Linalg co-evolved with the design of MLIR, it became
apparent that it could extend to larger application domains than just machine
learning on dense tensors.
-The design and evolution of Linalg follows a *codegen-friendly* approach where
+The design and evolution of Linalg follow a *codegen-friendly* approach where
the IR and the transformations evolve hand-in-hand.
The key idea is that op semantics *declare* and transport information that is
traditionally obtained by compiler analyses.
@@ -77,7 +77,7 @@ https://drive.google.com/drive/u/0/folders/1sRAsgsd8Bvpm_IxREmZf2agsGU2KvrK-),
with Linalg becoming its incarnation on tensors and buffers.
It is complemented by the
[Vector dialect](https://mlir.llvm.org/docs/Dialects/Vector/),
-which define structured operations on vectors, following the same rationale and
+which defines structured operations on vectors, following the same rationale and
design principles as Linalg. (Vector dialect includes the higher-level
operations on multi-dimensional vectors and abstracts away the lowering to
single-dimensional vectors).
@@ -191,7 +191,7 @@ Linalg builds on, and helps separate concerns in the LIFT approach as follows:
structure abstractions) potentially reusable across different dialects in the
MLIR's open ecosystem.
-LIFT is expected to further influence the design of Linalg as it evolve. In
+LIFT is expected to further influence the design of Linalg as it evolves. In
particular, extending the data structure abstractions to support non-dense
tensors can use the experience of LIFT abstractions for
[sparse](https://www.lift-project.org/publications/2016/harries16sparse.pdf)
@@ -255,9 +255,9 @@ Linalg hopes to additionally address the following:
transformations. But it's still too hard for newcomers to use or extend. The
level of performance you get from Halide is very different depending on
whether one is a seasoned veteran or a newcomer. This is especially true as
-the number of transformations grow.
+the number of transformations grows.
- Halide raises rather than lowers in two ways, going counter-current to the
-design goals we set for high-level codegen abstractions in in MLIR. First,
+design goals we set for high-level codegen abstractions in MLIR. First,
canonical Halide front-end code uses explicit indexing and math on scalar
values, so to target BLAS/DNN libraries one needs to add pattern matching
which is similarly brittle as in the affine case. While Halide's performance
@@ -425,7 +425,7 @@ The problem at hand is fundamentally driven by compilation of domain-specific
workloads for high-performance and parallel hardware architectures: **this is
an HPC compilation problem**.
-The selection of relevant transformations follows a codesign approach and
+The selection of relevant transformations follows a co-design approach and
involves considerations related to:
- concrete current and future needs of the application domain,
- concrete current and future hardware properties and ISAs,
@@ -462,7 +462,7 @@ levels of abstraction led to the following 2 principles.
#### Declarative Specification: Avoid Raising<a name="declarative_specification"></a>
Compiler transformations need static structural information (e.g. loop-nests,
-graphs of basic blocks, pure functions etc). When that structural information
+graphs of basic blocks, pure functions, etc). When that structural information
is lost, it needs to be reconstructed.
A good illustration of this phenomenon is the notion of *raising* in polyhedral
@@ -518,7 +518,7 @@ declaratively. In turn this allows using local pattern rewrite rules in MLIR
- Allow creating customizable passes declaratively by simply selecting rewrite
rules. This allows mixing transformations, canonicalizations, constant folding
and other enabling rewrites in a single pass. The result is a system where pass
-fusion is very simple to obtain and gives hope to solving certain
+fusion is very simple to obtain and gives hope for solving certain
[phase ordering issues](https://dl.acm.org/doi/10.1145/201059.201061).
### Suitability for Search and Machine Learning<a name="ml"></a>
@@ -551,7 +551,7 @@ ragged, sparse and mixed dens/sparse tensors as well as to trees, hash tables,
tables of records and maybe even graphs.
For such more advanced data types, the control-flow required to traverse the
-data structures, termination conditions etc are much less simple to analyze and
+data structures, termination conditions, etc are much less simple to analyze and
characterize statically. As a consequence we need to also design solutions that
stand a chance of evolving into runtime-adaptive computations (e.g.
inspector-executor in which an *inspector* runs a cheap runtime
@@ -582,7 +582,7 @@ occurred,
### The Dialect Need not be Closed Under Transformations<a name="dialect_not_closed"></a>
This is probably the most surprising and counter-intuitive
observation. When one designs IR for transformations, closed-ness is
-often a nonnegotiable property.
+often a non-negotiable property.
This is a key design principle of polyhedral IRs such as
[URUK](http://icps.u-strasbg.fr/~bastoul/research/papers/GVBCPST06-IJPP.pdf)
and


@@ -117,7 +117,7 @@ impose a particular shape inference approach here.
is, these two type systems differ and both should be supported, but the
intersection of the two should not be required. As a particular example,
if a compiler only wants to differentiate exact shapes vs dynamic
-shapes, then it need not consider a more generic shape latice even
+shapes, then it need not consider a more generic shape lattice even
though the shape description supports it.
* Declarative (e.g., analyzable at compile time, possible to generate


@@ -134,8 +134,8 @@ using target_link_libraries() and the PUBLIC keyword. For instance:
add_mlir_conversion_library(MLIRBarToFoo
BarToFoo.cpp
-ADDITIONAL_HEADER_DIRS
-${MLIR_MAIN_INCLUDE_DIR}/mlir/Conversion/BarToFoo
+ADDITIONAL_HEADER_DIRS
+${MLIR_MAIN_INCLUDE_DIR}/mlir/Conversion/BarToFoo
)
target_link_libraries(MLIRBarToFoo
PUBLIC


@@ -46,7 +46,7 @@ PROJECT_NUMBER = @PACKAGE_VERSION@
PROJECT_BRIEF =
-# With the PROJECT_LOGO tag one can specify an logo or icon that is included in
+# With the PROJECT_LOGO tag one can specify a logo or icon that is included in
# the documentation. The maximum height of the logo should not exceed 55 pixels
# and the maximum width should not exceed 200 pixels. Doxygen will copy the logo
# to the output directory.


@@ -260,7 +260,7 @@ def MatmulOp : LinalgStructured_Op<"matmul", [NInputs<2>, NOutputs<1>]> {
/// OptionalAttr<I64ArrayAttr>:$strides
/// OptionalAttr<I64ArrayAttr>:$dilations
/// OptionalAttr<I64ElementsAttr>:$padding
-/// `stirdes` denotes the step of each window along the dimension.
+/// `strides` denotes the step of each window along the dimension.
class PoolingBase_Op<string mnemonic, list<OpTrait> props>
: LinalgStructured_Op<mnemonic, props> {
let description = [{

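As a rough sketch only (hypothetical dialect and op, not the actual Linalg definition), an op carrying the optional window attributes documented in the comment above could declare them as:

```tablegen
// Hypothetical op: declares the three optional window attributes; as the
// doc comment above says, `strides` is the per-dimension window step.
def Example_PoolOp : Op<Example_Dialect, "pool"> {
  let arguments = (ins
    AnyMemRef:$input,
    OptionalAttr<I64ArrayAttr>:$strides,
    OptionalAttr<I64ArrayAttr>:$dilations,
    OptionalAttr<I64ElementsAttr>:$padding
  );
}
```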

@@ -1440,7 +1440,7 @@ class DerivedAttr<code ret, code b, code convert = ""> :
let returnType = ret;
code body = b;
-// Specify how to convert from the derived attribute to an attibute.
+// Specify how to convert from the derived attribute to an attribute.
//
// ## Special placeholders
//
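For illustration, a record instantiating `DerivedAttr` might look like the sketch below; the return type and the C++ body are assumptions made for the example and do not come from this change:

```tablegen
// Hypothetical derived attribute: `returnType` is the C++ type of the
// derived value and `body` is the C++ snippet that computes it; the
// optional `convert` code (defaulted above) would turn the value back
// into an Attribute.
def ExampleRankAttr : DerivedAttr<
    /*returnType=*/"int64_t",
    /*body=*/"return getResult().getType().cast<ShapedType>().getRank();">;
```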