Commit Graph

6969 Commits

Author SHA1 Message Date
Alex Zinenko
78f3fb4f46 [mlir] Update comments in ArmNeon dialect. NFC
These were not updated when squashing the LLVM_ArmNeon and ArmNeon dialects.
2021-03-10 13:35:57 +01:00
Alex Zinenko
a776942ba1 [mlir] squash LLVM_AVX512 dialect into AVX512
The dialect separation was introduced to demarcate ops operating in different
type systems. This is no longer the case after the LLVM dialect migrated to
using built-in vector types, so the original reason for the separation is no
longer valid. Squash the two dialects into one.

The code size decrease isn't very large: the ops originally in LLVM_AVX512 are
preserved because they match LLVM IR intrinsics specialized for vector element
bitwidth. However, it is still conceptually beneficial to have only one
dialect. I originally considered using Tablegen multiclasses to define both
the type-polymorphic op and its two intrinsic-related instantiations, but
decided against it given both the complexity of the required Tablegen input and
its dissimilarity with the rest of the ODS-defined ops, both of which could
result in very poor maintainability.

Depends On D98327

Reviewed By: nicolasvasilache, springerm

Differential Revision: https://reviews.llvm.org/D98328
2021-03-10 13:07:26 +01:00
Alex Zinenko
0af53de369 [mlir] simplify type constraints in AVX512 dialect
VectorOfLengthAndType accepts a Cartesian product of the given lengths and
types rather than types produced by co-indexed values in the corresponding
lists. Update the definitions accordingly. Type validity is already enforced
by op traits.

Reviewed By: nicolasvasilache, springerm

Differential Revision: https://reviews.llvm.org/D98327
2021-03-10 13:07:25 +01:00
Inho Seo
2ce4caf414 Moved getStaticLoopRanges and getStaticShape methods to LinalgInterfaces.td to add static shape verification
This makes the methods available in LinalgInterfaces.cpp for additional static shape verification that matches the shaped operands against the loops of linalg ops. Using the existing methods would have created a circular linking dependency; now they are available as methods of LinalgOp.

Reviewed By: hanchung

Differential Revision: https://reviews.llvm.org/D98163
2021-03-10 04:06:22 -08:00
Christian Sigg
4d295cf5b5 [mlir] Add base class for GpuKernelToBlobPass
Instead of configuring kernel-to-cubin/rocdl lowering through callbacks, introduce a base class that target-specific passes can derive from.

Put the base class in GPU/Transforms, according to the discussion in D98203.

The mlir-cuda-runner will go away shortly, and the mlir-rocdl-runner as well at some point. I therefore kept the existing code path working and will remove it in a separate step.

Depends On D98168

Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D98279
2021-03-10 12:14:43 +01:00
Vladislav Vinogradov
f3bf5c053b [mlir] Model MemRef memory space as Attribute
Based on the following discussion:
https://llvm.discourse.group/t/rfc-memref-memory-shape-as-attribute/2229

The goal of the change is to give the memory space property a more
expressive representation than "magic" integer values.

It allows for a cleaner ASM form:

```
gpu.func @test(%arg0: memref<100xf32, "workgroup">)

// instead of

gpu.func @test(%arg0: memref<100xf32, 3>)
```

Explanation for the `Attribute` choice instead of a plain `string`:

* `Attribute` classes allow the use of a more type-safe API based on RTTI.
* `Attribute` classes provide a faster comparison operator based on
  pointer comparison, in contrast to generic string comparison.
* `Attribute` can store more complex things, like structs or dictionaries.
  This allows for a more complex memory space hierarchy.

This commit preserves the old integer-based API and implements it on top
of the new one.
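
For illustration, a minimal C++ sketch of querying the memory space under the new model (not part of this patch; it assumes `getMemorySpace()` now returns an `Attribute`, with null meaning the default space):

```
// Illustrative sketch only: inspect a MemRefType's memory space as an
// Attribute, handling both the new string form and the legacy integer form.
#include "mlir/IR/BuiltinAttributes.h"
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

static bool isWorkgroupMemory(MemRefType type) {
  Attribute space = type.getMemorySpace(); // May be null for the default space.
  // New form: memref<100xf32, "workgroup">.
  if (auto str = space.dyn_cast_or_null<StringAttr>())
    return str.getValue() == "workgroup";
  // Legacy form: memref<100xf32, 3>.
  if (auto num = space.dyn_cast_or_null<IntegerAttr>())
    return num.getInt() == 3;
  return false;
}
```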

Depends on D97476

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D96145
2021-03-10 12:57:27 +03:00
Hanhan Wang
d5d4fb635e [mlir][linalg] Add support for using scalar attributes in TC ops.
Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97876
2021-03-10 01:51:12 -08:00
Mehdi Amini
75f3f77805 Fix MLIR test post 890afad954 2021-03-09 23:30:51 +00:00
Mehdi Amini
890afad954 Fix Flang build after MLIR API changes around generatedTypeParser 2021-03-09 23:19:30 +00:00
River Riddle
a776ecb6c2 [mlir][IR] Add an Operation::eraseOperands that supports batch erasure
This method allows for removing multiple disjoint operands at once, reducing the need to erase operands individually (which results in shifting the operand list).
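
For illustration, a hedged C++ sketch of the intended usage (assuming the batch overload takes an `llvm::BitVector` marking the operand indices to erase):

```
// Illustrative sketch only: erase two disjoint operands with a single call,
// so the operand list is shifted once instead of once per erasure.
#include "mlir/IR/Operation.h"
#include "llvm/ADT/BitVector.h"

static void dropFirstAndThirdOperand(mlir::Operation *op) {
  // Assumes the op has at least three operands.
  llvm::BitVector toErase(op->getNumOperands());
  toErase.set(0);
  toErase.set(2);
  op->eraseOperands(toErase);
}
```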

Differential Revision: https://reviews.llvm.org/D98290
2021-03-09 15:07:53 -08:00
River Riddle
4a7aed4ee7 [mlir][IR] Add a new SymbolUserMap class
This class provides efficient implementations of symbol queries related to uses, such as collecting the users of a symbol, replacing all uses, etc. This provides similar benefits for use-related queries as SymbolTableCollection did for lookup queries.
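
For illustration, a hedged C++ sketch of how such a map might be used (the exact constructor and method names are assumptions based on this description):

```
// Illustrative sketch only: build a user map rooted at a symbol table op and
// query all users of a given symbol without rescanning the IR per query.
#include "mlir/IR/SymbolTable.h"

using namespace mlir;

static void reportSymbolUsers(Operation *symbolTableOp, Operation *symbolOp) {
  SymbolTableCollection symbolTables;
  SymbolUserMap userMap(symbolTables, symbolTableOp);
  for (Operation *user : userMap.getUsers(symbolOp))
    user->emitRemark("uses the symbol");
}
```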

Differential Revision: https://reviews.llvm.org/D98071
2021-03-09 15:07:52 -08:00
Mehdi Amini
cd9a69289c Fix LLVM Dialect LoopOptionsAttr round-tripping: the keywords were missing in the output
This indicated some missing test coverage, which is now added to the
roundtrip test.
2021-03-09 22:00:22 +00:00
Mehdi Amini
fe81e8f3b5 Add default LoopOptionsAttrBuilder constructor and method to check if empty() (NFC)
Also move setters out-of-line to make sure the templated helper is
actually instantiated.
2021-03-09 21:12:15 +00:00
Christian Sigg
840ff84d33 [mlir] Default for gpu-binary-annotation option.
Provide a default for gpuBinaryAnnotation so that we don't need to specify it in tests.

The annotation likely only needs to be target-specific if we want to lower to e.g. both CUDA and ROCDL.

Reviewed By: herhut, bondhugula

Differential Revision: https://reviews.llvm.org/D98168
2021-03-09 21:01:50 +01:00
Mehdi Amini
79f736c150 Switch generatedTypeParser/generatedAttributeParser to return an OptionalParseResult
This allows the caller to distinguish between a parse error and an
unmatched keyword. It fixes the redundant error that the caller emitted
when the generated parser failed.
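
For illustration, a hedged, self-contained C++ sketch of the three outcomes a caller can now distinguish (the helper below is a stand-in, not the actual generated parser):

```
// Illustrative sketch only: an OptionalParseResult conveys "matched and
// succeeded", "matched but failed", or "keyword not handled at all".
#include "mlir/IR/OpDefinition.h"
#include "mlir/Support/LogicalResult.h"
#include "llvm/ADT/StringRef.h"

using namespace mlir;

static OptionalParseResult fakeGeneratedParser(llvm::StringRef keyword) {
  if (keyword == "known_good")
    return success();   // Keyword matched and parsing succeeded.
  if (keyword == "known_bad")
    return failure();   // Keyword matched; an error was already emitted.
  return llvm::None;    // Keyword not handled; let another parser try.
}

static bool callerHandles(llvm::StringRef keyword) {
  OptionalParseResult result = fakeGeneratedParser(keyword);
  if (!result.hasValue())
    return false;                      // Unmatched keyword: no redundant error.
  return succeeded(result.getValue()); // Matched: success or reported failure.
}
```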

Differential Revision: https://reviews.llvm.org/D98162
2021-03-09 19:43:45 +00:00
Mehdi Amini
8205c1a90a Rework LLVM Dialect LoopOptions attribute
Instead of storing an array of LoopOpt attributes, which were just
wrapping std::pair<enum, int> anyway, we can have an attribute storing
a sorted ArrayRef<std::pair<enum, int>> as a single unit. This improves
the textual format and the general API. Note that we're limiting
the options to fit into an int64_t by design, but this isn't a new
constraint.

Building the LoopOptions attribute is likely worth a dedicated builder
for efficiency reasons; that will be the subject of a future patch.

Differential Revision: https://reviews.llvm.org/D98105
2021-03-09 19:43:45 +00:00
Lei Zhang
50000abe3c [mlir] Use affine.apply when distributing to processors
This makes it easy to compose the distribution computation with
other affine computations.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D98171
2021-03-09 08:37:20 -05:00
Alex Zinenko
8184247f0b [mlir] move LLVM target import header and tests
Move Target/LLVMIR.h to Target/LLVMIR/Import.h to better reflect the purpose of
this file. Also move all LLVM IR target tests under the LLVMIR directory.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98178
2021-03-09 09:22:14 +01:00
Alex Zinenko
90fec5ed65 [mlir] make MLIRPresburger depend on MLIRIR
The analysis library uses Location, which is defined in the MLIRIR
library.
2021-03-09 09:19:53 +01:00
Vladislav Vinogradov
2241b3986c [mlir][CMAKE] Fix cross-compilation build
Use `MLIR_LINALG_ODS_GEN` and `MLIR_LINALG_ODS_YAML_GEN` variables
instead of `MLIR_LINALG_ODS_GEN_EXE` and `MLIR_LINALG_ODS_YAML_GEN_EXE`.
The former are defined in PARENT SCOPE only, so the `if` condition
never evaluates to `TRUE`.

The logic should be the following (taken from tblgen part):

1. `TOOL_NAME` - CACHE variable (default equal to the target name).
   The user can override it with the path to an actual executable.
2. `TOOL_NAME_EXE` - internal variable, initialized to `${TOOL_NAME}` first.
   In case of cross-compilation (`LLVM_USE_HOST_TOOLS == TRUE`), if the user
   didn't set their own path to a native executable via the `TOOL_NAME`
   variable, CMake will create separate targets to build the native tool and
   will override `TOOL_NAME_EXE` with the executable produced by that target.
3. `TOOL_NAME_TARGET` - internal variable, which points to the tool target
   name. If the native tool is built as described above, it will point to the
   target corresponding to that native tool.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98025
2021-03-09 10:51:56 +03:00
Tobias Gysi
c1a4cd551f [mlir][linalg] refactor the result handling during vectorization.
Return the vectorization results using a vector passed by reference instead of returning them embedded in a structure.

Differential Revision: https://reviews.llvm.org/D98182
2021-03-09 07:11:57 +00:00
Stella Laurenzo
e31c77b182 [mlir][python] Reorganize MLIR python into namespace packages.
* Only leaf packages are non-namespace packages. This allows most of the top levels to be split into different directories or deployment packages. In the previous state, the presence of __init__.py files at each level meant that the entire tree could only ever exist in one physical directory on the path.
* This changes the API usage slightly: `import mlir` will no longer do a deep import of `mlir.ir`, etc. This may necessitate some client code changes.
* Dialect gen code was restructured so that the user is responsible for providing the `my_dialect.py` file, which then must import its peer `_my_dialect_ops_gen`. This gives complete control of the dialect namespace to the user instead of to tablegen code, allowing further dialect-specific python APIs.
* Correspondingly, the previous extension modules `_my_dialect.py` are now `_my_dialect_ops_ext.py`.
* Now that the `linalg` namespace is open, moved the `linalg_opdsl` tool into it.
* This may require some corresponding downstream adjustments to npcomp, circt, et al:
  * Probably some shallow imports need to be converted to deep imports (i.e. `import mlir` no longer brings in the world).
  * Each tablegen generated dialect now needs an explicit `foo.py` which does a `from ._foo_ops_gen import *`. This is similar to the way that generated code operates in the C++ world.
  * If providing dialect op extensions, those need to be moved from `_foo.py` -> `_foo_ops_ext.py`.

Differential Revision: https://reviews.llvm.org/D98096
2021-03-08 23:01:34 -08:00
Mehdi Amini
038f2a337d Move LLVM::FMFAttr definition to TableGen (NFC)
This is using the new Attribute storage generation support in
TableGen to define the LLVM FastMathFlags.

Differential Revision: https://reviews.llvm.org/D98007
2021-03-09 05:29:54 +00:00
River Riddle
0d01dfbc37 [mlir][IR][NFC] Move the remaining builtin types to ODS
This will allow for removing the duplicated type documentation from LangRef and instead link to the builtin dialect documentation.

Differential Revision: https://reviews.llvm.org/D98093
2021-03-08 14:32:40 -08:00
River Riddle
a4bb667d83 [mlir][IR][NFC] Define the Location classes in ODS instead of C++
This also removes the need for LocationDetail.h.

Differential Revision: https://reviews.llvm.org/D98092
2021-03-08 14:32:40 -08:00
Rob Suderman
cb3542e1ca [MLIR][TOSA] Added lowerings for Reduce operations to Linalg
Lowerings for min, max, prod, and sum reduction operations on int and float
values. This includes reduction tests for both cases.

Reviewed By: mravishankar

Differential Revision: https://reviews.llvm.org/D97893
2021-03-08 10:57:19 -08:00
Christian Sigg
7cdcb4a3b9 [mlir] NFC: Add #endif comment. 2021-03-08 19:25:24 +01:00
Benjamin Kramer
42c195f0ec [mlir][Shape] Allow shape.split_at to return extent tensors and lower it to std.subtensor
split_at can return an error if the split index is out of bounds. If the
user knows that the index can never be out of bounds it's safe to use
extent tensors. This has a straightforward lowering to std.subtensor.

Differential Revision: https://reviews.llvm.org/D98177
2021-03-08 16:48:05 +01:00
Frederik Gossen
3b9667a84c Clarify documentation for Elementwise, Scalarizable, Vectorizable, and
`Tensorizable` traits.

Differential Revision: https://reviews.llvm.org/D97841
2021-03-08 10:35:22 +01:00
Mehdi Amini
e94e55712c Forward the LLVM_ENABLE_LIBCXX CMake parameter to the mlir standalone test
This allows building and testing MLIR with `-DLLVM_ENABLE_LIBCXX=ON`.
2021-03-08 05:07:26 +00:00
KareemErgawy-TomTom
3fb384d50e [MLIR][SPIRV] Rename spv.selection to spv.mlir.selection.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere. For ops that
don't have a SPIR-V spec counterpart, we use spv.mlir.snake_case.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D98014
2021-03-06 16:05:31 +01:00
Lei Zhang
bb6f5c8314 [mlir][spirv] Convert tensor.extract for very small tensors
Normally tensors will be stored in buffers before converting to SPIR-V,
given that this is how large amounts of data are sent to the GPU. However,
SPIR-V supports converting from tensors directly too. This is for the
cases where the tensor contains only a small number of elements and it
makes sense to directly inline them as a small data array in the shader.
To handle this, the conversion might internally create new local
variables. SPIR-V consumers in GPU drivers may or may not optimize those
away, so this has implications for register pressure. Therefore, a
threshold is used to control when the patterns should kick in.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D98052
2021-03-06 08:03:36 -05:00
Mehdi Amini
f8fe6d9f3f Use gen-dialect-doc instead of gen-op-doc for the Builtin dialect
This is fixing the missing title and menu entry on the MLIR website.
2021-03-06 05:32:46 +00:00
Matthias Springer
acce0ea70c [mlir][AVX512] Add mask.compress to AVX512 dialect.
Adds mask.compress to the AVX512 dialect and defines a lowering to the LLVM dialect.

Differential Revision: https://reviews.llvm.org/D97611
2021-03-06 10:02:48 +09:00
Mehdi Amini
a7cac0d9a5 Fix Dialect doc generation to special case for the Builtin dialect empty name
This should fix the issue with an empty entry for the builtin dialect on
the website.

Differential Revision: https://reviews.llvm.org/D98074
2021-03-05 23:47:50 +00:00
Alex Zinenko
6410ee0d09 [mlir] Squash LLVM_ArmNeon dialect into ArmNeon
The two dialects are largely redundant. The former was introduced as a mirror
of the latter operating on LLVM dialect types. This is no longer necessary
since the LLVM dialect operates on built-in types. Combine the two dialects.

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D98060
2021-03-05 23:33:32 +01:00
Aart Bik
e5c8fc776f [mlir][vector] canonicalize unmasked gather/scatter/compress/expand directly into l/s
With the new vector.load/store operations, there is no need to go through
unmasked transfer operations (which will be canonicalized to l/s anyway).

Reviewed By: dcaballe

Differential Revision: https://reviews.llvm.org/D98056
2021-03-05 14:23:50 -08:00
Diego Caballero
2de6dbda66 [mlir] Add 'Skip' result to Operation visitor
This patch is a follow-up to D97217. It adds a new 'Skip' result to the Operation visitor
so that a callback can stop the ongoing visit of an operation/block/region and
continue visiting the next one without fully interrupting the walk. Skipping is
needed to be able to erase an operation/block in pre-order without continuing
to visit the internals of that operation/block.
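
For illustration, a hedged C++ sketch of the erase-and-skip pattern this enables (the op name matched here is hypothetical):

```
// Illustrative sketch only: during a pre-order walk, erase matching ops and
// skip their regions instead of interrupting the entire walk.
#include "mlir/IR/Operation.h"
#include "mlir/IR/Visitors.h"

using namespace mlir;

static void eraseWrappers(Operation *root) {
  root->walk<WalkOrder::PreOrder>([](Operation *op) {
    if (op->getName().getStringRef() == "test.dead_wrapper") {
      op->erase();
      return WalkResult::skip();  // Do not visit the erased op's regions.
    }
    return WalkResult::advance(); // Keep walking normally.
  });
}
```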

Related to the skipping mechanism, the patch also introduces the following changes:
 * Added new TestIRVisitors pass with basic testing for the IR visitors.
 * Fixed missing early increment ranges in visitor implementation.
 * Updated documentation of walk methods to include erasure information and walk
   order information.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D97820
2021-03-06 00:02:20 +02:00
Diego Caballero
71a86245ca [mlir] Extend Operation visitor with pre-order traversal
This patch extends the Region, Block and Operation visitors to also support pre-order walks.
We introduce a new template argument that dictates the walk order (only pre-order and
post-order are supported for now). The default order for Regions, Blocks and Operations is
post-order. Mixed orders (e.g., Region/Block pre-order + Operation post-order) could easily
be implemented, as shown in NumberOfExecutions.cpp.
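
For illustration, a hedged C++ sketch contrasting the default post-order walk with the new pre-order template argument:

```
// Illustrative sketch only: the default walk visits an op after its regions,
// while WalkOrder::PreOrder visits it before descending into them.
#include "mlir/IR/Operation.h"
#include "mlir/IR/Visitors.h"

using namespace mlir;

static void compareWalkOrders(Operation *root) {
  root->walk([](Operation *op) { op->emitRemark("post-order visit"); });
  root->walk<WalkOrder::PreOrder>(
      [](Operation *op) { op->emitRemark("pre-order visit"); });
}
```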

Reviewed By: rriddle, frgossen, bondhugula

Differential Revision: https://reviews.llvm.org/D97217
2021-03-06 00:02:20 +02:00
Diego Caballero
b635492c3f [mlir][Affine][NFC] Return BlockArgument in AffineForOp::getInductionVar
This avoids unnecessary casts when a BlockArgument is required.
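
For illustration, a hedged C++ sketch of the simplified usage (assuming the accessor now returns `BlockArgument` directly):

```
// Illustrative sketch only: no cast from Value to BlockArgument is needed to
// reach BlockArgument-specific APIs such as getArgNumber().
#include "mlir/Dialect/Affine/IR/AffineOps.h"

using namespace mlir;

static unsigned inductionVarArgNumber(AffineForOp forOp) {
  BlockArgument iv = forOp.getInductionVar();
  return iv.getArgNumber();
}
```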

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D97879
2021-03-06 00:02:19 +02:00
KareemErgawy-TomTom
d48ceb45e3 [MLIR][SPIRV] Rename spv.undef to spv.Undef.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere. For ops that
don't have a SPIR-V spec counterpart, we use spv.mlir.snake_case.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D98016
2021-03-05 15:49:44 -05:00
River Riddle
f175ba4a54 [mlir][AsmPrinter] Don't use string comparison when filtering list attributes
In .mlir modules with large amounts of attributes, e.g. a function with a large number of argument attributes, the string comparison filtering greatly affects compile time. This revision switches to using a SmallDenseSet in these situations, resulting in over a 10x speedup in some cases.
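
For illustration, a hedged, simplified C++ sketch of the idea (generic names; not the actual AsmPrinter code):

```
// Illustrative sketch only: build a set of elided attribute names once, then
// use O(1) membership tests instead of comparing against every elided name.
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/Support/raw_ostream.h"

static void printNonElidedNames(llvm::ArrayRef<llvm::StringRef> attrNames,
                                llvm::ArrayRef<llvm::StringRef> elidedAttrs,
                                llvm::raw_ostream &os) {
  llvm::SmallDenseSet<llvm::StringRef, 8> elided;
  elided.insert(elidedAttrs.begin(), elidedAttrs.end());
  for (llvm::StringRef name : attrNames)
    if (!elided.count(name))
      os << name << " ";
}
```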

Differential Revision: https://reviews.llvm.org/D97980
2021-03-05 12:47:05 -08:00
KareemErgawy-TomTom
29812a6195 [MLIR][SPIRV] Rename spv.loop to spv.mlir.loop.
To unify the naming scheme across all ops in the SPIR-V dialect,
we are moving from spv.camelCase to spv.CamelCase everywhere.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97918
2021-03-05 15:44:30 -05:00
Stella Laurenzo
0b5f1b859f [mlir][linalg] Add linalg_opdsl tool first draft.
* Mostly imported from the experimental repo as-is, with cosmetic changes.
* Temporarily left out emission code (for building ops at runtime) to keep review size down.
* Documentation and lit tests added fresh.
* Sample op library that represents current Linalg named ops included.

Differential Revision: https://reviews.llvm.org/D97995
2021-03-05 11:45:09 -08:00
Stella Laurenzo
a9ccdfbc7d NFC: Glob all python sources in the MLIR Python bindings.
* Also switches to using symlinks instead of copies, as that enables edit-and-continue Python development.
* Broken out of https://reviews.llvm.org/D97995 per request from reviewer.

Differential Revision: https://reviews.llvm.org/D98005
2021-03-05 10:21:02 -08:00
Aart Bik
adc35b689f [mlir][sparse] mask reduction update
Reduction updates should be masked, just like the loads and stores.
Note that alternatively, we could use the fact that masked values are
zero for += updates and mask invariants to get this working, but that
would not work for *= updates. Masking the update itself is cleanest.
This change also replaces the constant mask with a broadcast of "true",
since this constant folds much better for various folding patterns.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D98000
2021-03-05 08:56:10 -08:00
Nicolas Vasilache
c86d3c1a38 [mlir][Linalg] Fix order of dimensions in hoistPaddingOnTensors. 2021-03-05 15:11:35 +00:00
Christian Sigg
5fedf30748 [mlir] Make cuInit() call thread-safe.
Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D98024
2021-03-05 16:06:15 +01:00
Nicolas Vasilache
35908406dc [mlir][scf] Canonicalize scf.for last tensor iteration result.
Canonicalize the iter_args of an scf::ForOp that involve a tensor_load and
for which only the last loop iteration is actually visible outside of the
loop. The canonicalization looks for a pattern such as:
```
   %t0 = ... : tensor_type
   %0 = scf.for ... iter_args(%bb0 : %t0) -> (tensor_type) {
     ...
     // %m is either tensor_to_memref(%bb0) or defined above the loop
     %m... : memref_type
     ... // uses of %m with potential inplace updates
     %new_tensor = tensor_load %m : memref_type
     ...
     scf.yield %new_tensor : tensor_type
   }
```

`%bb0` may have either 0 or 1 use. If it has 1 use it must be exactly a
`%m = tensor_to_memref %bb0` op that feeds into the yielded `tensor_load`
op.

If no aliasing write of `%new_tensor` occurs between tensor_load and yield
then the value %0 visible outside of the loop is the last `tensor_load`
produced in the loop.

For now, we approximate the absence of aliasing by only supporting the case
when the tensor_load is the operation immediately preceding the yield.

The canonicalization rewrites the pattern as:
```
   // %m is either a tensor_to_memref or defined above
   %m... : memref_type
   scf.for ... { // no iter_args
     ... // uses of %m with potential inplace updates
   }
   %0 = tensor_load %m : memref_type
```

Differential revision: https://reviews.llvm.org/D97953
2021-03-05 09:42:19 +00:00
KareemErgawy-TomTom
c74eb466d2 [MLIR][SPIRV] Rename spv.globalVariable to spv.GlobalVariable.
To unify the naming scheme across all ops in the SPIR-V dialect, we are
moving from spv.camelCase to spv.CamelCase everywhere.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D97919
2021-03-04 16:24:59 -05:00