Replace the dotest command line options and the various cmake variables
used for passing the locations of llvm tools to the API tests with a
single variable that points to the directory these tools are placed in.
Besides reducing repetition, this also makes things more similar to how
"normal" llvm tests are configured.
Differential Revision: https://reviews.llvm.org/D95261
These tests demonstrate that LSR does not insert the IV increment
into the latch block (as it is supposed to) when it can use an
existing Phi as the IV rather than creating a new LSR IV.
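For illustration, a loop of roughly this shape (a hypothetical sketch, not one of the added tests) already carries a Phi that LSR can adopt:
```
// The pointer 'p' is an existing Phi that LSR can reuse as the
// induction variable instead of materializing a new LSR IV whose
// increment would be placed in the latch block.
int sum(const int *p, const int *end) {
  int s = 0;
  while (p != end) {
    s += *p;
    ++p;  // the increment of the reused IV
  }
  return s;
}
```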
Switch to the new logging API added in [[ https://developer.apple.com/documentation/os/os_log_error | macOS 10.12 ]], which is more memory safe and lets us label the log messages in the future. Falls back to the old API if run on older OS versions.
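A minimal sketch of the availability-guarded pattern, assuming a hypothetical helper named LogToSyslog (this is not the actual sanitizer code):
```
#include <asl.h>     // older syslog API, used as the fallback
#include <os/log.h>  // unified logging, available since macOS 10.12

// Hypothetical helper showing the guarded switch-over.
void LogToSyslog(const char *msg) {
  if (__builtin_available(macOS 10.12, *)) {
    // New API: memory safe and supports labeled log messages.
    os_log_error(OS_LOG_DEFAULT, "%{public}s", msg);
  } else {
    // Fallback for OS versions older than 10.12.
    asl_log(nullptr, nullptr, ASL_LEVEL_ERR, "%s", msg);
  }
}
```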
Committed by Dan Liew on behalf of Emily Shi.
rdar://25181524
Reviewed By: delcypher, yln
Differential Revision: https://reviews.llvm.org/D95977
The vrgather.vv instruction uses a vector of indices with the same
SEW as operand 0. The vrgather.vx instruction uses a scalar index
operand of XLen bits.
By splitting this into 2 intrinsics we are able to use LLVMMatchType
in the definition to avoid specifying the type for the index operand
when creating the IR for the intrinsic. For .vv it will match the
type of operand 0; for .vx it will match the type of the vl operand,
for which we already needed to specify a type.
I'm considering splitting more intrinsics. This was a somewhat odd
one because the .vx form doesn't use the element type; it always
uses XLen.
Reviewed By: HsiangKai
Differential Revision: https://reviews.llvm.org/D95979
loadRegFromStackSlot()/storeRegToStackSlot() can generate aligned access
instructions for stack slots even if the stack is unaligned, based on the
assumption that the stack can be realigned.
However, this doesn't work for fixed slots, which are e.g. used for
spilling XMM registers in a non-leaf function with
`__attribute__((preserve_all))`.
When compiling such code with `-mstack-alignment=8`, this causes general
protection faults.
Fix it by only considering stack realignment for non-fixed slots.
Note that this changes the output of three existing tests which spill AVX
registers, since AVX requires higher alignment than the ABI provides on
stack frame entry.
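A hedged reconstruction of the kind of code that hits this (names are illustrative, not the original reproducer):
```
// Built with: clang -O2 -mstack-alignment=8 ...
// XMM spill slots for a preserve_all function are fixed stack slots;
// with an 8-byte-aligned stack they may be under-aligned, so emitting
// 16-byte-aligned spill instructions for them faults at run time.
extern void callee();

__attribute__((preserve_all)) void handler() {
  callee();  // non-leaf: XMM registers are spilled around the call
}
```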
Reviewed By: rnk, jyknight
Differential Revision: https://reviews.llvm.org/D73126
When the -matomics feature is not enabled, disable POSIXThreads
mode and set the thread model to Single, so that we don't predefine
macros like `__STDCPP_THREADS__`.
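For reference, a small sketch of code that observes the difference; with the thread model set to Single the macro stays undefined:
```
// __STDCPP_THREADS__ is predefined (to 1) only when a program can
// have more than one thread of execution.
bool threads_possible() {
#ifdef __STDCPP_THREADS__
  return true;   // multi-threaded model
#else
  return false;  // Single thread model, e.g. wasm without -matomics
#endif
}
```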
Differential Revision: https://reviews.llvm.org/D96091
As mentioned in the TODO comment, casting double to float causes NaNs to change bits.
To avoid that change, this patch adds support for single-precision floating-point immediate values in MachineCode.
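A self-contained sketch of the effect the TODO describes (a demonstration, not the patched code): round-tripping a NaN through double can change its bits, since the conversion typically quiets a signaling NaN.
```
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  uint32_t snan_bits = 0x7FA00000u;  // a signaling-NaN bit pattern
  float f;
  std::memcpy(&f, &snan_bits, sizeof f);

  // Widen to double and narrow back; on most hardware this quiets
  // the NaN, so the resulting bits differ from the original.
  float g = static_cast<float>(static_cast<double>(f));
  uint32_t out_bits;
  std::memcpy(&out_bits, &g, sizeof out_bits);

  std::printf("before: 0x%08X  after: 0x%08X\n", snan_bits, out_bits);
}
```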
Patch by Yuta Saito.
Differential Revision: https://reviews.llvm.org/D77384
`-flto -gsplit-dwarf -g -O[123]` may create .debug_gnu_pubnames with 0 DIE
offset entries. Both llvm-dwarfdump -debug-gnu-pubnames and ld.lld
--gdb-index report errors for such entries. For example:
```
.section .debug_gnu_pubnames,"",@progbits
.long .LpubNames_end2-.LpubNames_begin2 # Length of Public Names Info
.LpubNames_begin2:
.short 2 # DWARF Version
.long .Lcu_begin2 # Offset of Compilation Unit Info
.long 57 # Compilation Unit Length
.long 0 # DIE offset
.byte 16 # Attributes: TYPE, EXTERNAL
.asciz "absl" # External Name
.long 0 # DIE offset
.byte 16 # Attributes: TYPE, EXTERNAL
.asciz "absl::base_internal" # External Name
.long 0 # End Mark
```
If we know that the scalar epilogue is required to run, modify the CFG to end the middle block with an unconditional branch to the scalar preheader, instead of a conditional branch to either the preheader or the exit block.
The motivation for doing this is to support multiple exit blocks. Specifically, the current structure forces us to identify immediate dominators and *which* exit block to branch from in the middle terminator. For the multiple exit case, where we know the scalar epilogue is required, those questions are ill formed.
This is the last change needed to support multiple exit loops, but since the diffs are already large enough, I'm going to land this and then enable the support separately. You can think of this as NFCI-ish prep work, but the changes are a bit too involved for me to feel comfortable tagging the change that way.
Differential Revision: https://reviews.llvm.org/D94892
Currently, if there are no kernel arguments, device synchronization will
be skipped. This can lead to two issues:
1. If there is any device error, it will not be captured;
2. The target region might end before the kernel is done, which is not spec
conformant.
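A minimal sketch of the situation (a hypothetical reproducer): a target region that captures nothing produces a kernel with no arguments, the case in which synchronization used to be skipped.
```
#include <cstdio>

int main() {
  // No mapped or captured variables, so the generated kernel takes
  // no arguments; errors here previously went unnoticed.
  #pragma omp target
  {
    printf("running on the device\n");
  }
  return 0;
}
```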
The test added in this patch only runs on the NVPTX platform, although it
will not be executed by Phabricator at all. It also requires `not`, which
is not available on most systems.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D96067
This improves our coverage of these operations and shows that we
use really large constants for division by constant on i8/i16,
especially on RV64. The issue is that BuildSDIV/BuildUDIV are
limited to legal types so we have to promote to i64 before it
kicks in. At that point we've lost the range information for the
original type.
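Illustrative source for the kind of operation covered (an assumed shape, not a test from the patch):
```
// Dividing a narrow type by a constant: BuildSDIV/BuildUDIV only fire
// on legal types, so on RV64 the i8/i16 divide is promoted to i64
// first and the magic constants are computed at 64 bits.
unsigned char udiv7(unsigned char x) { return x / 7; }
short sdiv9(short x) { return x / 9; }
```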
When we skip a call stack starting with an external address, we should also skip the bottom LBR entry; otherwise it will cause a truncated context issue.
Reviewed By: hoy, wenlei
Differential Revision: https://reviews.llvm.org/D95480
We want a way to set a path to llvm-symbolizer that isn't relative
to the current working directory; this change adds a variable that
expands to the path relative to the current binary.
This approach came from comments in https://reviews.llvm.org/D93070
Differential Revision: https://reviews.llvm.org/D94563
This makes ignoring a result an explicit act by the user, and helps prevent accidental errors from dropped results. Marking LogicalResult as nodiscard was the intention from the beginning, but got lost along the way.
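A generic sketch of the pattern (not MLIR's actual declaration):
```
#include <cassert>

// A nodiscard result type: the compiler warns when a call result is
// silently dropped, so discarding must be spelled out with (void).
struct [[nodiscard]] LogicalResult {
  bool isSuccess;
};

LogicalResult doWork() { return {true}; }

int main() {
  LogicalResult r = doWork();  // result used: fine
  assert(r.isSuccess);
  (void)doWork();              // explicitly discarded: fine
  // doWork();                 // would warn: result silently dropped
}
```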
Differential Revision: https://reviews.llvm.org/D95841
Defer constant checking of dependent initializers to template instantiation,
since it cannot be done for dependent values.
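A hedged illustration of the general shape of the problem (names are invented):
```
// While 'T::value' is dependent, whether the initializer is a constant
// cannot be determined; the check has to wait for instantiation.
template <typename T>
struct Wrapper {
  static const int cached = T::value;  // dependent initializer
};

struct Provider { static const int value = 42; };

int use = Wrapper<Provider>::cached;  // constant checking happens here
```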
Reviewed by: Artem Belevich
Differential Revision: https://reviews.llvm.org/D95840
These attributes were all incorrect or inappropriate for LLVM to infer:
- inaccessiblememonly is generally wrong; a user replacement operator new
  can access memory that's visible to the caller (see the sketch after
  this list), as can a new_handler function.
- willreturn is generally wrong; a custom new_handler is not guaranteed
to terminate.
- noalias is inappropriate: Clang has a flag to determine whether this
attribute should be present and adds it itself when appropriate.
- noundef and nonnull on the return value should be specified by the
frontend on all 'operator new' functions if we want them, not here.
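A sketch for the first point (a hypothetical replacement, showing why inaccessiblememonly would be unsound):
```
#include <cstdlib>
#include <new>

int allocation_count = 0;  // ordinary global, visible to every caller

// A user replacement operator new that writes caller-visible memory.
void *operator new(std::size_t size) {
  ++allocation_count;  // side effect outside "inaccessible" memory
  if (void *p = std::malloc(size)) return p;
  throw std::bad_alloc();
}
```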
In any case, inferring attributes on functions declared 'nobuiltin' (as
these are when Clang emits them) seems questionable.
Several of the new attributes here were incorrect, and even the
generally correct ones were being added to nobuiltin calls.
This reverts commit bb3f169b59.
- attribute-dict production is redundant with dictionary-attribute
- definitions of attribute aliases were part of the same production as
uses of attribute aliases
- `std.dim` now accepts the dimension number as an operand, so the
example is out of date. Use the predicate of std.cmpi as a better
example.
Differential Revision: https://reviews.llvm.org/D96076
explicitly qualified as members of the current instantiation.
Despite the nested name specifier being fully-dependent in this case,
the elaborated type might only be instantiation-dependent, because the
type is a member of the current instantiation.
Follow-up on D95925: adds better detection for function arguments and also
checks for conflicts in multi-variable init statements in ForStmt.
Reviewed By: hokein
Differential Revision: https://reviews.llvm.org/D96009
Note: contrary to what I said previously, I didn't change .clang-format or the utils/generate_feature_test_macro_components.py script.
Reviewed By: ldionne, #libc
Differential Revision: https://reviews.llvm.org/D92229
MemorySSA currently treats lifetime.end intrinsics as not aliasing
anything. This breaks MemorySSA-based MemCpyOpt, because we'll happily
move a read of a pointer below a lifetime.end intrinsic, as no clobber
is reported.
I think the MemorySSA modelling here isn't correct: lifetime.end(p)
has approximately the same effect as doing a memcpy(p, undef), and
should be treated as a clobber.
This patch removes the special handling of lifetime.end, leaving
alias analysis to handle it appropriately.
Differential Revision: https://reviews.llvm.org/D95763