If the SRL for Hi constant folds, but we don't remove those bits from
the Lo, we can end up with strange constant folding through DAGCombine later.
I've only seen this with constants being lowered to constant pools
during lowering on RISC-V.
On the first split we create two i32 trunc stores and an srl to shift
the high part down. The srl gets constant folded to produce a new i32
constant, but the truncstore for the low store still uses the original
constant.
This original constant then gets converted to a constant pool
before we revisit the stores to further split them. The constant
pool prevents further constant folding of the additional srls.
After legalization is done, we run DAGCombiner and get some constant
folding of srl via computeKnownBits which can peek through the constant
pool load. This can create new constants that also need a constant pool.
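To illustrate the arithmetic behind the split, here is a minimal Python
sketch (the constant is hypothetical and this is not the SelectionDAG
code itself) showing why the low-part constant should be masked once the
high-part srl folds:

```python
# Hypothetical i64 constant being stored via two i32 trunc stores.
C = 0x123456789ABCDEF0

# High part: the srl by 32 constant-folds to a fresh i32 constant.
hi = (C >> 32) & 0xFFFFFFFF

# Low part: the trunc store only keeps the low 32 bits, so the constant
# should be masked down as well; otherwise the original 64-bit value
# survives and may later be lowered to a constant pool.
lo_unmasked = C             # what happened before this change
lo_masked = C & 0xFFFFFFFF  # the bits the low store actually uses

assert hi == 0x12345678
assert lo_masked == 0x9ABCDEF0
```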
When running multiple shards in parallel, one shard might write to the
cache while another one is reading it. Instead of updating the file in
place, write to a temporary file and swap the cache file using
os.replace(). This is an atomic operation, so shards will see either
the old state or the new one.
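A minimal sketch of the scheme (function name, path handling, and the
JSON format are illustrative, not the actual script):

```python
import json
import os
import tempfile

def update_cache(cache_path, new_entries):
    # Write the updated cache to a temporary file in the same directory,
    # so os.replace() stays on one filesystem and remains atomic.
    dir_name = os.path.dirname(cache_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(new_entries, f)
        # Atomically swap the new file into place: readers see either
        # the old cache or the new one, never a partial write.
        os.replace(tmp_path, cache_path)
    except BaseException:
        os.unlink(tmp_path)
        raise
```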
This is a minor step towards deprecating bufferization.alloc_tensor().
It replaces the examples with tensor.empty() and adjusts the underlying
rewriting logic to prepare for this upcoming change.
The intrinsic uses ImmArg, so using TargetConstant is consistent
with how other intrinsics are handled.
This hides the constants from type legalization so we can remove
the promotion support.
isel patterns are updated accordingly.
Fold references to the (relatively new) intrinsic function NORM2 at
compilation time when the argument(s) are all constants. (Getting this
done right involved some changes to the API of the accumulator function
objects used by the DoReduction<> template, which rippled through some
other reduction function folding code.)
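For reference, NORM2 computes the Euclidean norm of its argument. A
naive constant-folding evaluation looks like the sketch below; this only
shows the mathematical definition, while the real folding code
accumulates through the function objects fed to DoReduction<> and may
scale intermediates to avoid overflow:

```python
import math

def norm2(xs):
    # NORM2(X) = sqrt(sum(X**2)); fsum keeps the accumulation accurate.
    return math.sqrt(math.fsum(x * x for x in xs))

assert norm2([3.0, 4.0]) == 5.0
```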
Use the recently introduced llvm.mlir.zero operation for values with
LLVM target extension type. Replaces the previous workaround that uses a
single zero-valued integer attribute constant operation.
Signed-off-by: Lukas Sommer <lukas.sommer@codeplay.com>
This is a fix for a subset of legalization problems around 64 bit indices on
rv32 targets. For RV32+V, we were using the wrong mask type for the manual
truncation lowering for fixed length vectors. Instead, just use the generic
TRUNCATE node, and let it be lowered as needed.
Note that legalization is still broken for rv32+zve32. That appears to be
a different issue.
This reverts commit 47324cfd7d8ca1a2a5cbb9f948ecff66a28ee6bc.
This exposed incorrect debuginfo in rustc. Revert the verification
until this has been fixed.
The POINTER= and TARGET= arguments to the intrinsic function
ASSOCIATED() can be the results of references to functions that return
object pointers or procedure pointers. NULL() was working well but not
program-defined pointer-valued functions. Correct the validation of
ASSOCIATED() and extend the infrastructure used to detect and
characterize procedures and pointers.
Some VP intrinsic definitions were missing the
VP_PROPERTY_FUNCTIONAL_INTRINSIC property. This patch fills them in, and
adds a static_assert that all VP intrinsics have an equivalent opcode or
intrinsic defined so we don't forget them in future.
Some VP intrinsics don't have an equivalent, namely merge and strided
load/store. For those, a new property was added to mark that they don't
have a non-VP equivalent.
This adds a helper method to get the ID of the functionally equivalent
intrinsic, similar to the existing getFunctionalOpcodeForVP and
getConstrainedIntrinsicIDForVP methods.
Summary:
The GPU build has a lot of magic around how we package the output.
Generally, the GPU needs to exist as a secondary fatbinary image for
offloading languages. This is because offloading languages pretend that
offloading to an accelerator happens within a single file. The device
image therefore needs to be put into a single file to mesh with the
existing build infrastructure. To work with this, the `libc` project
makes an installed version
of the library that simply embeds the GPU code into an empty stub file.
This wasn't being updated correctly, which led to the installed `libc`
static library not being updated when the underlying file was changed.
The previous behaviour only updated when the entrypoint itself was
modified, but not when any of its headers changed. By adding a dependency on the
actual *object* file we should now capture the regular CMake semantics.
This commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate
any memrefs as this should be entirely handled by the
ownership-based-buffer-deallocation pass going forward. This means the
`allow-return-allocs` pass option now defaults to true and
`create-deallocs` to false; both options, as well as the escape
attribute indicating whether a memref escapes the current region, will
be removed. A new `allow-return-allocs-from-loops` option is added as a
temporary workaround for some bufferization limitations.
We'll need to generalize this fold to check for any zero upper bits to
address some of the D155472 regressions, but this exposes a number of
issues. For now, just use the general MaskedValueIsZero test instead of
the assertzext.
Thanks to Giuseppe D'Angelo for pointing this out on the cpplang Slack!
The example implementation in https://eel.is/c++draft/string.view.comparison#example-1
was necessary when it was written, in C++17, but in C++20 we don't need that
complexity anymore, because of the reversed candidates that are
synthesized by the compiler.
Update LiveIntervals after rewriting:
%reg = INSERT_SUBREG undef %reg, %subreg, subidx
to:
undef %reg:subidx = COPY %subreg
D113044 implemented this for the non-undef case.
This gets rid of the separate parameter enable_modules_lsv in favor of
adding a named option to the enable_modules parameter. The patch also
removes the getModuleFlag helper, which was just a really complicated
way of hardcoding "none".
This revision adds support for empty tensor elimination to
"bufferization.materialize_in_destination" by implementing the
`SubsetInsertionOpInterface`.
Furthermore, the One-Shot Bufferize conflict detection is improved for
"bufferization.materialize_in_destination".
This patch simplifies the pattern `icmp X and/or C1, X and/or C2` when
one constant mask is the subset of the other.
If `C1 & C2 == C1`, `A = X and/or C1`, `B = X and/or C2`, we can do the
following folds:
`icmp ule A, B -> true`
`icmp ugt A, B -> false`
We can apply similar folds for signed predicates when `C1` and `C2` are
the same sign:
`icmp sle A, B -> true`
`icmp sgt A, B -> false`
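These folds can be sanity-checked exhaustively at a small bit width.
The brute force below (a Python analogue of the Alive2 proof, not the
InstCombine implementation) verifies both the unsigned and the
same-sign signed cases over 4-bit values:

```python
BITS = 4
MASK = (1 << BITS) - 1

def signed(v):
    # Reinterpret an unsigned BITS-wide value as two's complement.
    return v - (1 << BITS) if v & (1 << (BITS - 1)) else v

for op in (lambda x, c: x & c, lambda x, c: x | c):  # "and/or"
    for c1 in range(MASK + 1):
        for c2 in range(MASK + 1):
            if c1 & c2 != c1:  # require C1 to be a subset of C2
                continue
            same_sign = (c1 >> (BITS - 1)) == (c2 >> (BITS - 1))
            for x in range(MASK + 1):
                a, b = op(x, c1), op(x, c2)
                assert a <= b  # icmp ule A, B -> true
                if same_sign:
                    assert signed(a) <= signed(b)  # icmp sle A, B -> true
```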
Alive2: https://alive2.llvm.org/ce/z/Q4ekP5
Fixes #65833.
This patch moves the group of OpenMP MLIR passes used after lowering of
Fortran to MLIR into a pipeline to be shared by `flang-new` and `bbc`.
Currently, the `bbc` tool does not produce the expected FIR for offloading-
enabled OpenMP codes due to not running these passes.
Unit tests exercising these passes are updated to check `bbc` output as well.
…(trunci) expansion
This revision adds a rewrite for sequences of vector `bitcast(trunci)`
to use a more efficient sequence of vector operations comprising
`shuffle` and `bitwise` ops.
Such patterns appear naturally when writing quantization /
dequantization functionality with the vector dialect.
The rewrite performs a simple enumeration of each of the bits in the
result vector and determines its provenance in the pre-trunci vector.
The enumeration is used to generate the proper sequence of `shuffle`,
`andi`, `ori` followed by an optional final `trunci`/`extui`.
The rewrite currently only applies to 1-D non-scalable vectors and bails
out if the final vector element bitwidth is not a multiple of 8. This is a
failsafe heuristic determined empirically: if the resulting type is not
an even number of bytes, further complexities arise that are not
improved by this pattern: the heavy lifting still needs to be done by
LLVM.
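The core enumeration can be modeled in a few lines of Python (a sketch
of the idea, assuming little-endian bit packing, not the MLIR rewrite
itself): each bit of the post-trunci result maps back to a bit of the
wider source vector.

```python
def bit_provenance(num_elems, src_bits, dst_bits):
    # After trunci, element i keeps only its low dst_bits bits, so bit b
    # of the flattened result vector originates from bit (b % dst_bits)
    # of source element (b // dst_bits), i.e. absolute source bit
    # (b // dst_bits) * src_bits + (b % dst_bits).
    return [(b // dst_bits) * src_bits + (b % dst_bits)
            for b in range(num_elems * dst_bits)]

# e.g. trunci from 4 x i8 to 4 x i4: result bits 0..7 come from
# source bits [0, 1, 2, 3, 8, 9, 10, 11].
assert bit_provenance(4, 8, 4)[:8] == [0, 1, 2, 3, 8, 9, 10, 11]
```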
Summary:
This patch copies a config file for the GPU similar to the
baremetal/embedded implementation. This will configure the
implementations of functions like `sprintf` and `snprintf` to be
compiled into simpler versions that can be run on the GPU. These
functions cannot be enabled yet, as vararg support hasn't landed, but
they will be once it does.
It is possible for a derived type extending a type with private
components to define components with the same name as the private
components.
This was not properly handled by lowering: several fir.record type
component names could end up being the same, so only the first
component was ever accessed via fir.field_index, leading to bad
generated code.
This patch handles the situation by adding the derived type's mangled
name to the private component names.
The stack-move optimization, which merges the src and dest allocas of
a full-size copy, replaces all uses of the dest alloca with the src
alloca. For safety, we need to check that all uses of the dest alloca
to be replaced are dominated by the src alloca. This PR adds that
check.
Fixes #65225
If the R_AARCH64_CALL26 relocation is against a symbol that has a lower
address, then encodeValueAArch64 will return a wrong value.
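For context, a CALL26 target is encoded as a signed 26-bit word offset
in the instruction's low bits, so a symbol below the call site yields a
negative offset that must be masked when re-encoding. A minimal Python
sketch of a correct encoding (illustrative only; not BOLT's actual
encodeValueAArch64):

```python
def encode_imm26(source_addr, target_addr):
    # BL/B encode (target - source) >> 2 as a signed 26-bit immediate.
    offset = (target_addr - source_addr) >> 2
    assert -(1 << 25) <= offset < (1 << 25), "branch out of range"
    # Without the mask, a negative offset has sign bits set above
    # bit 25 and would corrupt the opcode bits of the instruction.
    return offset & 0x03FFFFFF

# Offset of -16 bytes (-4 words) encodes as 0x03FFFFFC.
assert encode_imm26(0x1000, 0x0FF0) == 0x03FFFFFC
```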
Reviewed By: Kepontry, yota9
Differential Revision: https://reviews.llvm.org/D159513
Following on from D139466 which added support for dead vreg defs, this
patch adds support for "undef" defs of subregs.
Use this to regenerate checks for amx-greedy-ra-spill-shape.ll which
previously required manual tweaks to the autogenerated checks to fix an
EXPENSIVE_CHECKS failure; see commit
8b7c1fbd9647a5a6ef246a6b5b2543ea0f5a2337
Use the new ownership based deallocation pass pipeline in the regression
and integration tests. Some one-shot bufferization tests tested one-shot
bufferize and deallocation at the same time. I removed the deallocation
pass there because the deallocation pass is already thoroughly tested by
itself.
Fixed version of #66471
BEFORE this patch, when clang handled constraints like `C1 || C2` where
C1 evaluates to false and C2 evaluates to true, it emitted irrelevant
diagnostics about the falsity of C1.
This patch removes the irrelevant diagnostic information generated
during the evaluation of C1 if C2 evaluates to true.
Fixes https://github.com/llvm/llvm-project/issues/54678
Reviewed By: erichkeane
Differential Revision: https://reviews.llvm.org/D157526
So that the values that are accessed via such accessors can be analyzed,
as a limited form of context-sensitive analysis. We can potentially
do this only when some option is set, but doing additional modeling like
this won't be expensive and intrusive, so we do it by default for now.