3415 Commits

Stella Stamenova
ed98676fa4 Support multi-configuration generators correctly in several config files
Multi-configuration generators (such as Visual Studio and Xcode) allow the specification of a build flavor at build time instead of configure time, so the lit configuration files need to support that - and they do for the most part. There are several places, however, that had one of two issues (or both!):

1) Paths had %(build_mode)s set up but then never configured, resulting in values that would not work correctly, e.g. D:/llvm-build/%(build_mode)s/bin/dsymutil.exe
2) Paths did not have %(build_mode)s set up, but instead contained $(Configuration) (the value Visual Studio uses at configuration time; Xcode would have had its equivalent), e.g. "D:/llvm-build/$(Configuration)/lib".

This seems to indicate that we still have a lot of fragility in the configurations, but also that a number of these paths are never used (at least on Windows) since the errors appear to have been there a while.

This patch fixes the configurations and it has been tested with Ninja and Visual Studio to generate the correct paths. We should consider removing some of these settings altogether.

Reviewed By: JDevlieghere, mehdi_amini

Differential Revision: https://reviews.llvm.org/D96427
2021-02-11 09:32:20 -08:00
Hanhan Wang
9325b8da17 [mlir][Linalg] Add conv ops with TF definition.
The dimension order of a filter in TensorFlow is
[filter_height, filter_width, in_channels, out_channels], which is different
from the current definition. The current definition follows the TOSA spec. Add
TF-version conv ops to .tc so we do not have to insert a transpose op around a
conv op.

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D96038
2021-02-10 22:59:38 -08:00
Aart Bik
11bec2a81c [mlir][sparse] reduce tensor dimensions in sparse test
Rationale:
BuiltinTypes.cpp observed overflow when computing the size of
tensor<100x200x300x400x500x600x700x800xf32>.

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D96475
2021-02-10 17:59:19 -08:00
Mehdi Amini
09cfec6243 Fix CMake configuration for MLIR unittests
The CMake changes in 2aa1af9b1da to make it possible to build MLIR as a
standalone project unfortunately disabled all unit-tests from the
regular in-tree build.
2021-02-11 01:17:49 +00:00
Rob Suderman
c19a412809 [MLIR][TOSA] Tosa elementwise broadcasting
Added support for broadcasting size-1 dimensions in TOSA elementwise
operations.
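
A minimal sketch of the kind of IR this enables (generic op form; shapes and value names are illustrative, not taken from the patch):

```mlir
// The size-1 leading dimension of %a is broadcast against
// the corresponding dimension of %b.
%0 = "tosa.add"(%a, %b)
    : (tensor<1x3xf32>, tensor<2x3xf32>) -> tensor<2x3xf32>
```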

Differential Revision: https://reviews.llvm.org/D96190
2021-02-10 15:28:18 -08:00
Jing Pu
544cebd619 Change type constraint of the "index" in "shape.split_at" to Shape_SizeOrIndexType
Make the type constraint consistent with other shape dialect operations.
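
For illustration, a hedged sketch in generic op form; with this change the index operand can be a plain `index` as well as a `!shape.size` (value names assumed):

```mlir
// Split %shape at %idx; %idx may now be index or !shape.size alike.
%head, %tail = "shape.split_at"(%shape, %idx)
    : (!shape.shape, index) -> (!shape.shape, !shape.shape)
```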

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D96377
2021-02-10 11:58:19 -08:00
Aart Bik
0b1764a3d7 [mlir][sparse] sparse tensor storage implementation
This revision connects the generated sparse code with an actual
sparse storage scheme, which can be initialized from a test file.
Lacking a first-class citizen SparseTensor type (with buffer),
the storage is hidden behind an opaque pointer with some "glue"
to bring the pointer back to tensor land. Rather than generating
sparse setup code for each different annotated tensor (viz. the
"pack" methods in TACO), a single "one-size-fits-all" implementation
has been added to the runtime support library.  Many details and
abstractions need to be refined in the future, but this revision
allows full end-to-end integration testing and performance
benchmarking (with, on one end, an annotated Linalg
op and, on the other end, a JIT/AOT executable).

Reviewed By: nicolasvasilache, bixia

Differential Revision: https://reviews.llvm.org/D95847
2021-02-10 11:57:24 -08:00
Nicolas Vasilache
0ac3d97bf4 [mlir][Linalg] Fix pad hoisting.
This revision fixes the indexing logic into the packed tensor that results from hoisting padding. Previously, the index was incorrectly set to the loop induction variable when in fact we need to compute the iteration count (i.e. `(iv - lb).ceilDiv(step)`).
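
As a concrete illustration (loop bounds assumed, not from the patch):

```mlir
// For `affine.for %iv = 2 to 10 step 4`, the packed tensor is
// indexed by the iteration count (%iv - 2) ceildiv 4 (giving 0, 1),
// not by %iv itself (which takes the values 2, 6).
#iter_count = affine_map<(iv) -> ((iv - 2) ceildiv 4)>
```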

Differential Revision: https://reviews.llvm.org/D96417
2021-02-10 16:49:38 +00:00
Nicolas Vasilache
bb69de3f41 [mlir][Linalg] Add a vectorization pattern for linalg::PadTensorOp
The new pattern is exercised from the TestLinalgTransforms pass.

Differential Revision: https://reviews.llvm.org/D96410
2021-02-10 14:13:49 +00:00
Tres Popp
f30f347da1 [mlir][shape] Generalize broadcast to a variadic number of shapes
Previously broadcast was a binary op; now it supports a variadic number
of inputs. The change is structured so that, for now, it is an NFC for
all broadcast operations that were previously legal.
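
A hedged sketch in generic op form (value names assumed):

```mlir
// Binary form, as before:
%ab = "shape.broadcast"(%a, %b)
    : (!shape.shape, !shape.shape) -> !shape.shape
// Variadic form, now also supported:
%abc = "shape.broadcast"(%a, %b, %c)
    : (!shape.shape, !shape.shape, !shape.shape) -> !shape.shape
```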

Differential Revision: https://reviews.llvm.org/D95777
2021-02-10 08:31:28 +01:00
Uday Bondhugula
5400f602cd [MLIR] Update affine.for unroll utility for iter_args support
Update the affine.for loop unroll utility to support iteration arguments.
Fix promoteIfSingleIteration as well.
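
A sketch of the kind of loop the utility now handles, a reduction carried through `iter_args` (names and shapes assumed):

```mlir
%sum = affine.for %i = 0 to 8 iter_args(%acc = %init) -> (f32) {
  %v = affine.load %buf[%i] : memref<8xf32>
  %next = addf %acc, %v : f32
  affine.yield %next : f32
}
```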

Fixes PR49084: https://bugs.llvm.org/show_bug.cgi?id=49084

Differential Revision: https://reviews.llvm.org/D96383
2021-02-10 10:38:47 +05:30
Andrew Pritchard
74c3615997 Add LLVMIR Dialect counterparts of @llvm.maximum and @llvm.minimum.
These are similar to maxnum and minnum, but they're defined to treat -0
as less than +0.  This behavior can't be expressed using float
comparisons and selects, since comparisons are defined to treat
different-signed zeros as equal.  So, the only way to communicate this
behavior into LLVM IR without defining target-specific intrinsics is to
add the corresponding ops.
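
A hedged sketch of the new ops in generic form, with names following the LLVM intrinsics (operands assumed):

```mlir
// maximum(-0.0, +0.0) must be +0.0; an fcmp-based select cannot
// guarantee this, since -0.0 == +0.0 compares equal.
%0 = "llvm.intr.maximum"(%a, %b) : (f32, f32) -> f32
%1 = "llvm.intr.minimum"(%a, %b) : (f32, f32) -> f32
```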

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96373
2021-02-10 00:57:43 +00:00
Andrew Pritchard
018645b81b Fix side-effect detection in LLVMIRIntrinsicGen.
Previously it reported an op had side-effects iff it declared that it
didn't have any side-effects.  This had the undesirable result that
canonicalization would always delete any intrinsic calls that did memory
stores and returned void.

Reviewed By: ftynse, mehdi_amini

Differential Revision: https://reviews.llvm.org/D96369
2021-02-10 00:48:16 +00:00
River Riddle
6e3292b0b7 [mlir][OpFormatGen] Refactor type_ref into a more general ref directive
This allows for referencing nearly every component of an operation from within a custom directive.

It also fixes a bug with the current type_ref implementation, PR48478

Differential Revision: https://reviews.llvm.org/D96189
2021-02-09 14:33:48 -08:00
River Riddle
b9c876bd7e [mlir] Add initial support for an alias analysis framework in MLIR
This revision adds a new `AliasAnalysis` class that represents the main alias analysis interface in MLIR. The purpose of this class is not to hold the aliasing logic itself, but to provide an interface into various different alias analysis implementations. As it evolves this should allow for users to plug in specialized alias analysis implementations for their own needs, and have them immediately usable by other analyses and transformations.

This revision also adds an initial simple generic alias, LocalAliasAnalysis, that provides support for performing stateless local alias queries between values. This class is similar in scope to LLVM's BasicAA.

Differential Revision: https://reviews.llvm.org/D92343
2021-02-09 14:21:27 -08:00
George
6962bd68f1 [MLIR] Add context accessor to identifier
I knew I would miss one...

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D96321
2021-02-09 13:21:30 -08:00
River Riddle
fe7c0d90b2 [mlir][IR] Remove the concept of OperationProperties
These properties were useful for a few things before traits had a better integration story, but don't really carry their weight well these days. Most of these properties are already checked via traits in most of the code. It is better to align the system around traits, and improve the performance/cost of traits in general.

Differential Revision: https://reviews.llvm.org/D96088
2021-02-09 12:00:15 -08:00
Weiwei Li
2ef24139fc [mlir][spirv] Add support for sampled image type
co-authored-by: Alan Liu <alanliu.yf@gmail.com>

Reviewed By: antiagainst

Differential Revision: https://reviews.llvm.org/D96169
2021-02-09 14:14:07 -05:00
George
5099a48a3b [MLIR] Replace dialect registration hooks with dialect handle
Replace MlirDialectRegistrationHooks with MlirDialectHandle, which under-the-hood is an opaque pointer to MlirDialectRegistrationHooks. Then we expose the functionality previously directly on MlirDialectRegistrationHooks, as functions which take the opaque MlirDialectHandle struct. This makes the actual structure of the registration hooks an implementation detail, and happens to avoid this issue: https://llvm.discourse.group/t/strange-swift-issues-with-dialect-registration-hooks/2759/3

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D96229
2021-02-09 09:02:16 -08:00
Thomas Raoux
bfa508efd5 [mlir][linalg] Fix one more missing NoSideEffect in linalg tensor op
Differential Revision: https://reviews.llvm.org/D96314
2021-02-09 07:04:50 -08:00
Denys Shabalin
fa581f9438 [mlir] Add stacksave, stackrestore to llvm dialect
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D96333
2021-02-09 15:13:16 +01:00
Lei Zhang
068bf9e802 [mlir][linalg] Define a depthwise 2-D convolution op
This commit defines linalg.depthwise_conv_2d_nhwc for depthwise
2-D convolution with NHWC input/output data format.

For now this op only supports a channel multiplier of 1, which is
the most common case.
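
A hedged sketch of a possible use; the exact assembly format and shapes are illustrative, not taken from the patch:

```mlir
// 3x3 depthwise filter over an NHWC input, channel multiplier 1.
%0 = linalg.depthwise_conv_2d_nhwc
    ins(%input, %filter : tensor<1x10x10x8xf32>, tensor<3x3x8xf32>)
    outs(%init : tensor<1x8x8x8xf32>) -> tensor<1x8x8x8xf32>
```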

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D94966
2021-02-09 08:55:20 -05:00
Lei Zhang
4c640e49c9 [mlir][linalg] Verify indexing map required attributes
Indexing maps for named ops can reference attributes so that
we can synthesize the indexing map dynamically. This supports
cases like strides for convolution ops. However, it does cause
an issue: now the indexing_maps() function call is dependent
on those attributes.

Linalg ops inherit LinalgOpInterfaceTraits, which calls
verifyStructuredOpInterface() to verify the interface.
verifyStructuredOpInterface() further calls indexing_maps().
Note that trait verification runs before the op's own verification,
which is where ODS generates the checks for those attributes.
So indexing_maps() can end up referencing non-existent or
invalid attributes before the ODS-generated verification
kicks in.

There isn't a dependency handling mechanism for traits.
This commit adds new interface methods to query whether an
op hasDynamicIndexingMaps() and then perform
verifyIndexingMapRequiredAttributes() in
verifyStructuredOpInterface() to handle the dependency issue.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96297
2021-02-09 08:48:29 -05:00
George
8f130f108f [MLIR] Add C API for navigating up the IR tree
Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96301
2021-02-08 19:54:38 -08:00
Nicolas Vasilache
d57a305fdf [mlir][Linalg] Fix padding related bugs.
This revision fixes the padding transformation, which did not have enough information to set the proper type for the padding value.
Additionally, the verifier for Yield in the presence of PadTensorOp is fixed to properly report an incorrect number of results or operands. Previously, the error would be silently ignored, which made the core issue difficult to debug.

Differential Revision: https://reviews.llvm.org/D96264
2021-02-08 18:59:24 +00:00
Alex Zinenko
2b92f21c6e [mlir] Drop deprecated syntax for LLVM dialect types
After the LLVM dialect types were ported to use built-in types, the parser kept
supporting the old syntax for LLVM dialect types to produce built-in types for
compatibility. Drop this support.
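
For illustration, a hedged sketch of a deprecated spelling versus its built-in replacement:

```mlir
// Deprecated spelling, no longer parsed:
//   %c = llvm.mlir.constant(0 : i32) : !llvm.i32
// Built-in types are now used directly:
%c = llvm.mlir.constant(0 : i32) : i32
```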

Reviewed By: mehdi_amini

Differential Revision: https://reviews.llvm.org/D96275
2021-02-08 19:26:21 +01:00
Vladislav Vinogradov
035abe30c9 [mlir][ODS] Allow to specify custom namespace for NativeOpTrait
This allows using `NativeOpTrait` and operations
declared outside of the `mlir` namespace.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D96128
2021-02-08 10:25:45 +03:00
Tung D. Le
05c6c648ec [MLIR] [affine-loop-fusion] Fix a bug about non-result ops in affine-loop-fusion
This patch fixes the following bug when calling --affine-loop-fusion

Input program:
```mlir
func @should_not_fuse_since_top_level_non_affine_non_result_users(
    %in0 : memref<32xf32>, %in1 : memref<32xf32>) {
  %c0 = constant 0 : index
  %cst_0 = constant 0.000000e+00 : f32

  affine.for %d = 0 to 32 {
    %lhs = affine.load %in0[%d] : memref<32xf32>
    %rhs = affine.load %in1[%d] : memref<32xf32>
    %add = addf %lhs, %rhs : f32
    affine.store %add, %in0[%d] : memref<32xf32>
  }
  store %cst_0, %in0[%c0] : memref<32xf32>
  affine.for %d = 0 to 32 {
    %lhs = affine.load %in0[%d] : memref<32xf32>
    %rhs = affine.load %in1[%d] : memref<32xf32>
    %add = addf %lhs, %rhs: f32
    affine.store %add, %in0[%d] : memref<32xf32>
  }
  return
}
```

Calling --affine-loop-fusion on it, we got incorrect output:

```mlir
func @should_not_fuse_since_top_level_non_affine_non_result_users(%arg0: memref<32xf32>, %arg1: memref<32xf32>) {
  %c0 = constant 0 : index
  %cst = constant 0.000000e+00 : f32
  store %cst, %arg0[%c0] : memref<32xf32>
  affine.for %arg2 = 0 to 32 {
    %0 = affine.load %arg0[%arg2] : memref<32xf32>
    %1 = affine.load %arg1[%arg2] : memref<32xf32>
    %2 = addf %0, %1 : f32
    affine.store %2, %arg0[%arg2] : memref<32xf32>
    %3 = affine.load %arg0[%arg2] : memref<32xf32>
    %4 = affine.load %arg1[%arg2] : memref<32xf32>
    %5 = addf %3, %4 : f32
    affine.store %5, %arg0[%arg2] : memref<32xf32>
  }
  return
}
```

This happened because when analyzing the source and destination nodes,
affine loop fusion ignored non-result ops sandwiched between them. In
other words, the MemRefDependencyGraph in the affine loop fusion ignored
these non-result ops.

This patch solves the issue by adding these non-result ops to the
MemRefDependencyGraph.

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D95668
2021-02-06 13:30:16 +05:30
Lei Zhang
7630520ae3 [mlir][vector] Add pattern to shuffle bitcast ops
These patterns move vector.bitcast ops to be before
insert ops or after extract ops where suitable.
With them, the bitcast happens on smaller vectors,
and there are more chances to share extract/insert
ops.
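
A rough before/after sketch of the extract case (shapes assumed), where moving the bitcast after the extract lets it operate on the smaller vector:

```mlir
// Before: bitcast the large vector, then extract from it.
%cast = vector.bitcast %src : vector<4x4xf16> to vector<4x2xf32>
%row  = vector.extract %cast[1] : vector<4x2xf32>
// After: extract first, then bitcast only the small vector.
%r    = vector.extract %src[1] : vector<4x4xf16>
%res  = vector.bitcast %r : vector<4xf16> to vector<2xf32>
```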

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D96040
2021-02-05 17:52:49 -05:00
Lei Zhang
8dae90997a [mlir][vector] Add constant folding for fp16 to fp32 bitcast
Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D96041
2021-02-05 09:12:50 -05:00
Lei Zhang
9f622b3d5d [mlir][spirv] Add more vector conversion patterns
This patch introduces a few more straightforward patterns
to convert vector ops operating on 1-4 element vectors
to their corresponding SPIR-V counterparts.

This patch also enables converting vector<1xT> to T.

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D96042
2021-02-05 09:11:16 -05:00
Lei Zhang
874ce9b80f [mlir][vector] Add patterns to cast away leading 1-dim
This patch adds patterns to use vector.shape_cast to cast
away leading 1-dimensions from a few vector operations.
It allows exposing more canonical forms of vector.transfer_read,
vector.transfer_write, vector.extract_strided_slice, and
vector.insert_strided_slice. With this, there are more
opportunities to cancel extract/insert ops or forward
write/read ops.
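
A minimal sketch of the underlying rewrite (shapes assumed):

```mlir
// Drop the leading unit dimension before a transfer/insert/extract,
// exposing the rank-reduced canonical form.
%v = vector.shape_cast %arg : vector<1x4xf32> to vector<4xf32>
```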

Reviewed By: ThomasRaoux

Differential Revision: https://reviews.llvm.org/D95873
2021-02-05 09:02:15 -05:00
Alex Zinenko
1b101038dc [mlir] Turn Linalg to LLVM into a partial conversion
Historically, Linalg To LLVM conversion subsumed numerous other conversions,
including (affine) loop lowerings to CFG and conversions from the Standard and
Vector dialects to the LLVM dialect. This was due to the insufficient support
for partial conversions in the infrastructure that essentially required
conversions that involve type change (in this case, !linalg.range to
!llvm.struct) to be performed in a single conversion sweep. This is no longer
the case so remove the subsumed conversions and run them as separate passes
when necessary.

Depends On D95317

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96008
2021-02-05 14:31:19 +01:00
Nicolas Vasilache
b40f9fb61d [mlir][Linalg] Fix spurious test change 2021-02-05 12:18:35 +00:00
Nicolas Vasilache
0fcbbde2c7 [mlir][Linalg] NFC - Refactor vectorization to be more composable
Differential Revision: https://reviews.llvm.org/D96116
2021-02-05 12:03:14 +00:00
Nicolas Vasilache
ef9e1e5a59 [mlir][Linalg] Add option to anchor on func name in TestLinalgCodegenStrategy 2021-02-05 11:39:48 +00:00
Nicolas Vasilache
7f58196ec7 [mlir][linalg] Linalg.fill on tensor should not have side-effects
Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D96094
2021-02-05 08:22:14 +00:00
River Riddle
e21adfa32d [mlir] Mark LogicalResult as LLVM_NODISCARD
This makes ignoring a result an explicit action by the user, and helps to prevent accidental errors with dropped results. Marking LogicalResult as no-discard was always the intention from the beginning, but got lost along the way.

Differential Revision: https://reviews.llvm.org/D95841
2021-02-04 15:10:10 -08:00
Lei Zhang
63dc26450b Revert "[mlir][ODS] Use StringLiteral instead of StringRef when applicable"
This reverts commit 953086ddbb593289fafcf0e7cc6e74847f1635af because
it breaks GCC 5 build:

  error: could not convert '(const char*)""' from 'const char*' to 'llvm::StringLiteral'
     static ::llvm::StringLiteral getDialectNamespace() { return ""; }
2021-02-04 13:59:37 -05:00
Vladislav Vinogradov
953086ddbb [mlir][ODS] Use StringLiteral instead of StringRef when applicable
Use `StringLiteral` for function return type if it is known to return
constant string literals only.

This makes it visible to API users that such values can be safely
stored, since they refer to constant data, which will never be deallocated.

`StringRef` in general is not safe to store for the long term,
since it might refer to temporary data allocated on the heap.

Reviewed By: mehdi_amini, bkramer

Differential Revision: https://reviews.llvm.org/D95945
2021-02-04 17:35:15 +00:00
Nicolas Vasilache
e4a503a26d [mlir][Linalg] Introduce a ContractionOpInterface
This revision takes advantage of recent extensions to vectorization to refactor contraction detection into a bona fide Linalg interface.
The mlir-linalg-ods-gen parser is extended to support adding such interfaces.
The detection that originally enabled vectorization is refactored to serve both as a test on a generic LinalgOp and as a verifier for ops that declare conformance to that interface.

This is plugged through Linalg transforms and strategies but it quickly becomes evident that the complexity and rigidity of the C++ class based templating does not pay for itself.
Therefore, this revision changes the API for vectorization patterns to get rid of templates as much as possible.
Variadic templates are relegated to the internals of LinalgTransformationFilter as much as possible and away from the user-facing APIs.

It is expected other patterns / transformations will follow the same path and drop as much C++ templating as possible from the class definition.

Differential revision: https://reviews.llvm.org/D95973
2021-02-04 16:53:24 +00:00
Nicolas Vasilache
f4ac9f0334 [mlir][Linalg] Drop SliceOp
This op is subsumed by rank-reducing SubViewOp and has become useless.

Differential revision: https://reviews.llvm.org/D95317
2021-02-04 11:22:01 +00:00
Alex Zinenko
ba87f99168 [mlir] make vector to llvm conversion truly partial
Historically, the Vector to LLVM dialect conversion subsumed the Standard to
LLVM dialect conversion patterns. This was necessary because the conversion
infrastructure did not have sufficient support for reconciling type
conversions. This support is now available. Only keep the patterns related to
the Vector dialect in the Vector to LLVM conversion and require type cast
operations to be inserted if necessary. These casts will be removed by
following conversions if possible. Update integration tests to also run the
Standard to LLVM conversion.

There is a significant amount of test churn, which is due to (a) unnecessarily
strict tests in VectorToLLVM and (b) many patterns actually targeting Standard
dialect ops instead of LLVM dialect ops leading to tests actually exercising a
Vector->Standard->LLVM conversion. This churn is a good illustration of the
reason to make the conversion partial: now the tests only check the code in the
Vector to LLVM conversion and will not be randomly broken by changes in
Standard to LLVM conversion.

Arguably, it may be possible to extract Vector to Standard patterns into a
separate pass, but given the ongoing splitting of the Standard dialect, such
pass will be short-lived and will require further refactoring.

Depends On D95626

Reviewed By: nicolasvasilache, aartbik

Differential Revision: https://reviews.llvm.org/D95685
2021-02-04 11:33:24 +01:00
Alex Zinenko
5b91060dcc [mlir] Apply source materialization in case of transitive conversion
In dialect conversion infrastructure, source materialization applies as part of
the finalization procedure to results of the newly produced operations that
replace previously existing values with values having a different type.
However, such operations may be created to replace operations created in other
patterns. At this point, it is possible that the results of the _original_
operation are still in use and have mismatching types, but the results of the
_intermediate_ operation that performed the type change are not in use leading
to the absence of source materialization. For example,

  %0 = dialect.produce : !dialect.A
  dialect.use %0 : !dialect.A

can be replaced with

  %0 = dialect.other : !dialect.A
  %1 = dialect.produce : !dialect.A  // replaced, scheduled for removal
  dialect.use %1 : !dialect.A

and then with

  %0 = dialect.final : !dialect.B
  %1 = dialect.other : !dialect.A    // replaced, scheduled for removal
  %2 = dialect.produce : !dialect.A  // replaced, scheduled for removal
  dialect.use %2 : !dialect.A

in the same rewriting, but only the %1->%0 replacement is currently considered.

Change the logic in dialect conversion to look up all values that were replaced
by the given value and to perform source materialization if any of those values
is still in use with mismatching types. This is performed by computing the
inverse value replacement mapping. This arguably expensive manipulation is
performed only if there were some type-changing replacements. An alternative
could be to consider all replaced operations and not only those that resulted
in type changes, but it would harm pattern-level composability: the pattern
that performed the non-type-changing replacement would have to be made aware of
the type converter in order to call the materialization hook.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D95626
2021-02-04 11:15:11 +01:00
Nicolas Vasilache
f245b7ad36 [mlir][Linalg] Generalize the definition of a Linalg contraction.
This revision defines a Linalg contraction in general terms:

  1. Has 2 input shapes and 1 output shape.
  2. Has at least one reduction dimension.
  3. Has only projected permutation indexing maps.
  4. Its body computes `u5(u1(c) + u2(u3(a) * u4(b)))` on some field
    (AddOpType, MulOpType), where u1, u2, u3, u4 and u5 represent scalar unary
    operations that may change the type (e.g. for mixed-precision).

As a consequence, when vectorization of such an op occurs, the only special
behavior is that the (unique) MulOpType is vectorized into a
`vector.contract`. All other ops are handled in a generic fashion.

In the future, we may wish to allow more input arguments and elementwise and
constant operations that do not involve the reduction dimension(s).

A test is added to demonstrate the proper vectorization of matmul_i8_i8_i32.
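
For instance, the body of a mixed-precision matmul_i8_i8_i32 fits this pattern with u3/u4 as sign extensions and u1/u2/u5 as identities; a hedged sketch using the then-current standard dialect ops (value names assumed):

```mlir
// u3(a) and u4(b): sign-extend the i8 operands.
%a32 = sexti %a : i8 to i32
%b32 = sexti %b : i8 to i32
// u2(u3(a) * u4(b)) with u2 = identity.
%mul = muli %a32, %b32 : i32
// u5(u1(c) + ...) with u1 = u5 = identity.
%acc = addi %c, %mul : i32
linalg.yield %acc : i32
```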

Differential revision: https://reviews.llvm.org/D95939
2021-02-04 07:50:44 +00:00
Mehdi Amini
a1d5bdf819 Make the folder more robust against op fold() methods that generate a type mismatch
We could extend this with an interface to allow a dialect to perform a type
conversion, but that would mean the folder creates operations, which isn't
the case at the moment and isn't necessarily always desirable.

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D95991
2021-02-04 01:58:56 +00:00
George
9db6114296 Add API for adding arguments to blocks
This just exposes a missing API

Differential Revision: https://reviews.llvm.org/D95968
2021-02-03 13:14:48 -08:00
Christian Sigg
8d73bee4ed [mlir] Add gpu async integration test.
Reviewed By: herhut

Differential Revision: https://reviews.llvm.org/D94421
2021-02-03 21:45:23 +01:00
Lei Zhang
5b7619c90b [mlir] Fix scf.for single iteration canonicalization check
We should check whether lb + step >= ub to determine
whether this is a single iteration. Previously we were
checking lb + lb >= ub.
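
A sketch of a loop the corrected check classifies as single-iteration (constants assumed):

```mlir
// lb + step >= ub here (0 + 8 >= 4), so the body runs exactly once
// and the loop can be promoted.
%c0 = constant 0 : index
%c4 = constant 4 : index
%c8 = constant 8 : index
scf.for %i = %c0 to %c4 step %c8 {
  // body executes a single time, with %i = %c0
}
```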

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D95440
2021-02-02 18:30:50 -05:00
Diego Caballero
cf5c517c05 [mlir][Vector] Add lowering to LLVM for vector.bitcast
Add a conversion pattern for vector.bitcast to lower it to
the LLVM dialect.
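
A hedged sketch of an op this pattern handles (types assumed); it maps essentially one-to-one onto an LLVM bitcast of the converted operand:

```mlir
// Reinterpret 2 x f32 as 4 x f16; the total bit width must match.
%0 = vector.bitcast %v : vector<2xf32> to vector<4xf16>
```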

Reviewed By: ThomasRaoux, aartbik

Differential Revision: https://reviews.llvm.org/D95579
2021-02-03 01:19:20 +02:00