Multi-configuration generators (such as Visual Studio and Xcode) allow the specification of a build flavor at build time instead of config time, so the lit configuration files need to support that - and they do for the most part. There are several places that had one of two issues (or both!):
1) Paths had %(build_mode)s set up, but it was then never substituted, resulting in values that would not work correctly, e.g. D:/llvm-build/%(build_mode)s/bin/dsymutil.exe
2) Paths did not have %(build_mode)s set up, but instead contained $(Configuration) (the placeholder Visual Studio uses at configuration time; Xcode would have had its equivalent), e.g. "D:/llvm-build/$(Configuration)/lib".
This seems to indicate that we still have a lot of fragility in the configurations, but also that a number of these paths are never used (at least on Windows) since the errors appear to have been there a while.
This patch fixes the configurations; it has been tested with Ninja and Visual Studio to confirm that the correct paths are generated. We should consider removing some of these settings altogether.
Reviewed By: JDevlieghere, mehdi_amini
Differential Revision: https://reviews.llvm.org/D96427
The dimension order of a filter in TensorFlow is
[filter_height, filter_width, in_channels, out_channels], which differs
from the current definition. The current definition follows the TOSA spec. Add TF-version
conv ops to the .tc file, so we do not have to insert a transpose op around a
conv op.
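To illustrate (a sketch, not code from this patch; the [out_channels, filter_height, filter_width, in_channels] layout on the TOSA side is our assumption), the transpose these ops avoid is just a permutation of the filter dimensions:
  #include <array>
  #include <cstdint>

  // Permute a TF filter shape [H, W, I, O] into an assumed TOSA-style
  // [O, H, W, I] layout; this is the transpose the TF-version conv ops
  // let us avoid materializing around every conv.
  std::array<int64_t, 4> tfFilterToTosa(const std::array<int64_t, 4> &hwio) {
    return {hwio[3], hwio[0], hwio[1], hwio[2]};
  }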
Reviewed By: antiagainst
Differential Revision: https://reviews.llvm.org/D96038
The CMake changes in 2aa1af9b1da to make it possible to build MLIR as a
standalone project unfortunately disabled all unit-tests from the
regular in-tree build.
This revision connects the generated sparse code with an actual
sparse storage scheme, which can be initialized from a test file.
Lacking a first-class SparseTensor type (with buffer),
the storage is hidden behind an opaque pointer with some "glue"
to bring the pointer back to tensor land. Rather than generating
sparse setup code for each different annotated tensor (viz. the
"pack" methods in TACO), a single "one-size-fits-all" implementation
has been added to the runtime support library. Many details and
abstractions need to be refined in the future, but this revision
allows full end-to-end integration testing and performance
benchmarking (with, on one end, an annotated Linalg
op and, on the other end, a JIT/AOT executable).
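As a rough mental model (an illustrative sketch only; names and layout are assumptions, not the actual runtime support library), the storage and its C-callable glue look like:
  #include <cstdint>
  #include <vector>

  // A "one-size-fits-all" sparse storage scheme, here in COO form,
  // hidden behind an opaque pointer so tensor land never sees the layout.
  struct SparseTensorStorage {
    std::vector<uint64_t> sizes;                // dimension sizes
    std::vector<std::vector<uint64_t>> indices; // per-nonzero coordinates
    std::vector<double> values;                 // nonzero values
  };

  extern "C" void *newSparseTensor() { // initialized from a test file in practice
    return new SparseTensorStorage();
  }
  extern "C" void deleteSparseTensor(void *t) {
    delete static_cast<SparseTensorStorage *>(t);
  }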
Reviewed By: nicolasvasilache, bixia
Differential Revision: https://reviews.llvm.org/D95847
This revision fixes the indexing logic into the packed tensor that results from hoisting padding. Previously, the index was incorrectly set to the loop induction variable when in fact we need to compute the iteration count (i.e. `(iv - lb).ceilDiv(step)`).
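For a concrete feel (a standalone sketch, not the patch's code): with lb = 4 and step = 3, the induction variable takes the values 4, 7, 10, while the packed tensor must be indexed 0, 1, 2:
  #include <cassert>
  #include <cstdint>

  // Iteration count of iv in a loop over [lb, ub) with the given step:
  // ceilDiv(iv - lb, step). Using iv directly is wrong whenever lb != 0
  // or step != 1.
  int64_t packedIndex(int64_t iv, int64_t lb, int64_t step) {
    return (iv - lb + step - 1) / step; // ceilDiv for non-negative operands
  }

  int main() {
    assert(packedIndex(4, 4, 3) == 0);
    assert(packedIndex(7, 4, 3) == 1);
    assert(packedIndex(10, 4, 3) == 2);
  }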
Differential Revision: https://reviews.llvm.org/D96417
Previously broadcast was a binary op. Now it is variadic and can support more inputs.
This has been changed in such a way that, for now, it is an NFC for
all broadcast operations that were previously legal.
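The variadic semantics are the natural fold of the binary broadcast rule over all operands; a minimal sketch (assuming NumPy-style right-aligned broadcasting, which is our reading rather than text from this patch):
  #include <algorithm>
  #include <cassert>
  #include <cstdint>
  #include <vector>

  // Fold the binary broadcast rule over N shapes.
  std::vector<int64_t>
  broadcastShapes(const std::vector<std::vector<int64_t>> &shapes) {
    size_t rank = 0;
    for (const auto &s : shapes)
      rank = std::max(rank, s.size());
    std::vector<int64_t> result(rank, 1);
    for (const auto &s : shapes) {
      size_t offset = rank - s.size(); // right-align dimensions
      for (size_t i = 0; i < s.size(); ++i) {
        int64_t &r = result[offset + i];
        if (r == 1)
          r = s[i];
        else
          assert(s[i] == 1 || s[i] == r); // otherwise incompatible
      }
    }
    return result;
  }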
Differential Revision: https://reviews.llvm.org/D95777
These are similar to maxnum and minnum, but they're defined to treat -0
as less than +0. This behavior can't be expressed using float
comparisons and selects, since comparisons are defined to treat
different-signed zeros as equal. So, the only way to communicate this
behavior into LLVM IR without defining target-specific intrinsics is to
add the corresponding ops.
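A small standalone demonstration of why comparisons cannot see the difference (plain C++, not code from this patch):
  #include <cmath>
  #include <cstdio>

  int main() {
    double a = -0.0, b = +0.0;
    std::printf("a < b: %d, a == b: %d\n", a < b, a == b); // 0, 1: they compare equal
    // A cmp+select "max" returns whichever zero the operand order happens
    // to favor; it cannot be made to always prefer +0.0.
    double m = (a > b) ? a : b;
    std::printf("signbit: %d\n", std::signbit(m));
  }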
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96373
Previously, it reported that an op had side effects iff it declared that it
didn't have any side effects. This had the undesirable result that
canonicalization would always delete any intrinsic calls that performed memory
stores and returned void.
Reviewed By: ftynse, mehdi_amini
Differential Revision: https://reviews.llvm.org/D96369
This allows for referencing nearly every component of an operation from within a custom directive.
It also fixes a bug in the current type_ref implementation (PR48478).
Differential Revision: https://reviews.llvm.org/D96189
This revision adds a new `AliasAnalysis` class that represents the main alias analysis interface in MLIR. The purpose of this class is not to hold the aliasing logic itself, but to provide an interface into various different alias analysis implementations. As it evolves this should allow for users to plug in specialized alias analysis implementations for their own needs, and have them immediately usable by other analyses and transformations.
This revision also adds an initial simple generic alias, LocalAliasAnalysis, that provides support for performing stateless local alias queries between values. This class is similar in scope to LLVM's BasicAA.
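A sketch of the intended usage (the construction details are our assumption; the query shape follows this revision's description):
  #include "mlir/Analysis/AliasAnalysis.h"

  using namespace mlir;

  AliasResult checkAlias(Operation *scope, Value a, Value b) {
    AliasAnalysis aliasAnalysis(scope); // LocalAliasAnalysis as default impl
    return aliasAnalysis.alias(a, b);   // e.g. NoAlias / MayAlias / MustAlias
  }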
Differential Revision: https://reviews.llvm.org/D92343
These properties were useful for a few things before traits had a better integration story, but don't really carry their weight well these days. Most of these properties are already checked via traits in most of the code. It is better to align the system around traits, and improve the performance/cost of traits in general.
Differential Revision: https://reviews.llvm.org/D96088
Replace MlirDialectRegistrationHooks with MlirDialectHandle, which under the hood is an opaque pointer to MlirDialectRegistrationHooks. We then expose the functionality previously directly on MlirDialectRegistrationHooks as functions that take the opaque MlirDialectHandle struct. This makes the actual structure of the registration hooks an implementation detail, and happens to avoid this issue: https://llvm.discourse.group/t/strange-swift-issues-with-dialect-registration-hooks/2759/3
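Usage then reads roughly as follows (a hedged sketch; the std dialect getter and header path are assumed examples of the per-dialect handle accessors):
  #include "mlir-c/IR.h"
  #include "mlir-c/Dialect/Standard.h" // assumed header for the std handle

  void registerAndLoad(MlirContext ctx) {
    MlirDialectHandle handle = mlirGetDialectHandle__std__();
    mlirDialectHandleRegisterDialect(handle, ctx);
    mlirDialectHandleLoadDialect(handle, ctx);
  }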
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D96229
This commit defines linalg.depthwise_conv_2d_nhwc for depthwise
2-D convolution with NHWC input/output data format.
This op right now only supports a channel multiplier of 1, which is
the most common case.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D94966
Indexing maps for named ops can reference attributes so that
we can synthesize the indexing map dynamically. This supports
cases like strides for convolution ops. However, it does cause
an issue: now the indexing_maps() function call is dependent
on those attributes.
Linalg ops inherit LinalgOpInterfaceTraits, which calls
verifyStructuredOpInterface() to verify the interface.
verifyStructuredOpInterface() further calls indexing_maps().
Note that trait verification runs before the op's own verifier,
which is where ODS generates the verification for those attributes.
So indexing_maps() can end up referencing non-existing or
invalid attributes before the ODS-generated verification
kicks in.
There isn't a dependency handling mechanism for traits.
This commit adds new interface methods to query whether an
op hasDynamicIndexingMaps() and then perform
verifyIndexingMapRequiredAttributes() in
verifyStructuredOpInterface() to handle the dependency issue.
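In sketch form (simplified; the method names are those introduced here, the surrounding code is illustrative):
  #include "mlir/Dialect/Linalg/IR/LinalgOps.h"

  using namespace mlir;
  using namespace mlir::linalg;

  LogicalResult verifyStructuredOpSketch(LinalgOp op) {
    // Verify the attributes the indexing maps depend on *before* querying
    // them, since ODS-generated attribute verification has not run yet.
    if (op.hasDynamicIndexingMaps())
      if (failed(op.verifyIndexingMapRequiredAttributes()))
        return failure();
    auto maps = op.indexing_maps(); // now safe to call
    (void)maps;
    // ... remaining structured-op interface checks ...
    return success();
  }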
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96297
This revision fixes the padding transformation, which did not have enough information to set the proper type for the padding value.
Additionally, the verifier for Yield in the presence of PadTensorOp is fixed to properly report incorrect number of results or operands. Previously, the error would be silently ignored which made the core issue difficult to debug.
Differential Revision: https://reviews.llvm.org/D96264
After the LLVM dialect types were ported to use built-in types, the parser kept
supporting the old syntax for LLVM dialect types to produce built-in types for
compatibility. Drop this support.
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D96275
This allows using `NativeOpTrait` and operations
declared outside of the `mlir` namespace.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D96128
These patterns move vector.bitcast ops to be before
insert ops or after extract ops where suitable.
With them, bitcasts happen on smaller vectors
and there are more opportunities to share extract/insert
ops.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D96040
This patch introduces a few more straightforward patterns
to convert vector ops operating on 1-4 element vectors
to their corresponding SPIR-V counterparts.
This patch also enables converting vector<1xT> to T.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D96042
This patch adds patterns to use vector.shape_cast to cast
away leading 1-dimensions from a few vector operations.
It allows exposing more canonical forms of vector.transfer_read,
vector.transfer_write, vector.extract_strided_slice, and
vector.insert_strided_slice. With this, we have more
opportunities to cancel extract/insert ops or forward
write/read ops.
Reviewed By: ThomasRaoux
Differential Revision: https://reviews.llvm.org/D95873
Historically, Linalg To LLVM conversion subsumed numerous other conversions,
including (affine) loop lowerings to CFG and conversions from the Standard and
Vector dialects to the LLVM dialect. This was due to the insufficient support
for partial conversions in the infrastructure that essentially required
conversions that involve type change (in this case, !linalg.range to
!llvm.struct) to be performed in a single conversion sweep. This is no longer
the case so remove the subsumed conversions and run them as separate passes
when necessary.
Depends On D95317
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D96008
This makes ignoring a result explicit for the user, and helps to prevent accidental errors with dropped results. Marking LogicalResult as nodiscard was always the intention from the beginning, but got lost along the way.
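The effect at call sites, roughly (illustrative; `doSomething` is a stand-in for any LogicalResult-returning API):
  #include "mlir/Support/LogicalResult.h"

  using namespace mlir;

  LogicalResult doSomething() { return success(); } // stand-in

  void caller() {
    // doSomething();          // now warns: discarding a nodiscard result
    (void)doSomething();       // ignoring must be spelled out explicitly
    if (failed(doSomething())) // or, better, actually handle the failure
      return;
  }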
Differential Revision: https://reviews.llvm.org/D95841
This reverts commit 953086ddbb593289fafcf0e7cc6e74847f1635af because
it breaks the GCC 5 build:
error: could not convert '(const char*)""' from 'const char*' to 'llvm::StringLiteral'
static ::llvm::StringLiteral getDialectNamespace() { return ""; }
Use `StringLiteral` as a function's return type if it is known to return
constant string literals only.
This makes it visible to API users that such values can be safely
stored, since they refer to constant data, which will never be deallocated.
A `StringRef`, in general, is not safe to store long-term,
since it might refer to temporary data allocated on the heap.
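For example (a sketch; the dialect name is made up):
  #include "llvm/ADT/StringRef.h"
  #include <string>

  // Safe to store indefinitely: StringLiteral promises constant data.
  static llvm::StringLiteral getDialectNamespace() { return "my_dialect"; }

  // No such promise: the StringRef may point into a heap-allocated string
  // that goes away, so storing it long-term is unsafe.
  static llvm::StringRef getName(const std::string &s) { return s; }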
Reviewed By: mehdi_amini, bkramer
Differential Revision: https://reviews.llvm.org/D95945
This revision takes advantage of recent extensions to vectorization to refactor contraction detection into a bona fide Linalg interface.
The mlir-linalg-ods-gen parser is extended to support adding such interfaces.
The detection that originally enabled vectorization is refactored to serve both as a test on a generic LinalgOp and to verify ops that declare conformance to that interface.
This is plugged through Linalg transforms and strategies, but it quickly becomes evident that the complexity and rigidity of the C++ class-based templating does not pay for itself.
Therefore, this revision changes the API for vectorization patterns to get rid of templates as much as possible.
Variadic templates are relegated to the internals of LinalgTransformationFilter as much as possible and away from the user-facing APIs.
It is expected other patterns / transformations will follow the same path and drop as much C++ templating as possible from the class definition.
Differential revision: https://reviews.llvm.org/D95973
Historically, the Vector to LLVM dialect conversion subsumed the Standard to
LLVM dialect conversion patterns. This was necessary because the conversion
infrastructure did not have sufficient support for reconciling type
conversions. This support is now available. Only keep the patterns related to
the Vector dialect in the Vector to LLVM conversion and require type cast
operations to be inserted if necessary. These casts will be removed by
following conversions if possible. Update integration tests to also run the
Standard to LLVM conversion.
There is a significant amount of test churn, which is due to (a) unnecessarily
strict tests in VectorToLLVM and (b) many patterns actually targeting Standard
dialect ops instead of LLVM dialect ops leading to tests actually exercising a
Vector->Standard->LLVM conversion. This churn is a good illustration of the
reason to make the conversion partial: now the tests only check the code in the
Vector to LLVM conversion and will not be randomly broken by changes in
Standard to LLVM conversion.
Arguably, it may be possible to extract Vector to Standard patterns into a
separate pass, but given the ongoing splitting of the Standard dialect, such
a pass would be short-lived and would require further refactoring.
Depends On D95626
Reviewed By: nicolasvasilache, aartbik
Differential Revision: https://reviews.llvm.org/D95685
In the dialect conversion infrastructure, source materialization applies as part of
the finalization procedure to results of the newly produced operations that
replace previously existing values with values having a different type.
However, such operations may be created to replace operations created in other
patterns. At this point, it is possible that the results of the _original_
operation are still in use and have mismatching types, but the results of the
_intermediate_ operation that performed the type change are not in use, leading
to the absence of source materialization. For example,
%0 = dialect.produce : !dialect.A
dialect.use %0 : !dialect.A
can be replaced with
%0 = dialect.other : !dialect.A
%1 = dialect.produce : !dialect.A // replaced, scheduled for removal
dialect.use %1 : !dialect.A
and then with
%0 = dialect.final : !dialect.B
%1 = dialect.other : !dialect.A // replaced, scheduled for removal
%2 = dialect.produce : !dialect.A // replaced, scheduled for removal
dialect.use %2 : !dialect.A
in the same rewriting, but only the %1->%0 replacement is currently considered.
Change the logic in dialect conversion to look up all values that were replaced
by the given value and perform source materialization if any of those values
is still in use with mismatching types. This is performed by computing the
inverse value replacement mapping. This arguably expensive manipulation is
performed only if there were some type-changing replacements. An alternative
could be to consider all replaced operations and not only those that resulted
in type changes, but it would harm pattern-level composability: the pattern
that performed the non-type-changing replacement would have to be made aware of
the type converter in order to call the materialization hook.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95626
This revision defines a Linalg contraction in general terms:
1. Has 2 input shapes and 1 output shape.
2. Has at least one reduction dimension.
3. Has only projected permutation indexing maps.
4. Its body computes `u5(u1(c) + u2(u3(a) * u4(b)))` on some field
(AddOpType, MulOpType), where u1, u2, u3, u4 and u5 represent scalar unary
operations that may change the type (e.g. for mixed-precision).
As a consequence, when vectorization of such an op occurs, the only special
behavior is that the (unique) MulOpType is vectorized into a
`vector.contract`. All other ops are handled in a generic fashion.
In the future, we may wish to allow more input arguments and elementwise and
constant operations that do not involve the reduction dimension(s).
A test is added to demonstrate the proper vectorization of matmul_i8_i8_i32.
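In scalar form for that test (our instantiation of the formula above), u3 and u4 are the i8 -> i32 extensions while u1, u2, and u5 are identities:
  #include <cstdint>

  // One step of the matmul_i8_i8_i32 contraction body:
  // c = u5(u1(c) + u2(u3(a) * u4(b)))
  int32_t contractStep(int32_t c, int8_t a, int8_t b) {
    return c + int32_t(a) * int32_t(b);
  }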
Differential revision: https://reviews.llvm.org/D95939
We could extend this with an interface to allow a dialect to perform a type
conversion, but that would make the folder create operations, which isn't
the case at the moment and isn't necessarily always desirable.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95991
We should check whether lb + step >= ub to determine
whether this is a single iteration. Previously we were
checking lb + lb >= ub.
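In plain terms (a standalone restatement, not the patch's code):
  #include <cstdint>

  // A loop over [lb, ub) with the given step has a second iteration only
  // if lb + step < ub.
  bool isSingleIteration(int64_t lb, int64_t ub, int64_t step) {
    return lb + step >= ub; // previously (buggy): lb + lb >= ub
  }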
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D95440
Add the conversion pattern for vector.bitcast to lower it to
the LLVM Dialect.
Reviewed By: ThomasRaoux, aartbik
Differential Revision: https://reviews.llvm.org/D95579