The existing lowering has lower precision for certain use cases, e.g.
tanh. The improved version should demonstrate an overall higher level of
precision.
Reviewed By: cota, jpienaar
Differential Revision: https://reviews.llvm.org/D153592
Ensure the thread that refills the freelist gets the Batch without
contending for the lock in SizeClassAllocator64.
Reviewed By: cferris
Differential Revision: https://reviews.llvm.org/D152419
The patch in D152592 changed the logic for this. We could never check if
we were on the GPU, as the check ran before the variable was defined, so I
moved it later. Secondly, we cannot use `LLVM_BINARY_DIR` here, and I do
not know if that works in general. The problem is that it will install the
headers under a normal path outside of the `LLVM_ENABLE_RUNTIMES` build. I
don't know if that's correct for the other targets, but for the GPU I need
to set it back to `CMAKE_BINARY_DIR` so it works.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D153637
With `-fsanitize=kcfi` (Kernel Control-Flow Integrity), Clang emits
"kcfi" operand bundles to indirect call instructions. Similarly to
the target-specific lowering added in D119296, implement KCFI operand
bundle lowering for RISC-V.
This patch disables the generic KCFI pass for RISC-V in Clang, and
adds the KCFI machine function pass in `RISCVPassConfig::addPreSched`
to emit target-specific `KCFI_CHECK` pseudo instructions before calls
that have KCFI operand bundles. The machine function pass also bundles
the instructions to ensure we emit the checks immediately before the
calls, which is not possible with the generic pass.
`KCFI_CHECK` instructions are lowered in `RISCVAsmPrinter` to a
contiguous code sequence that traps if the expected hash in the
operand bundle doesn't match the hash before the target function
address. This patch emits an `ebreak` instruction for error handling
to match the Linux kernel's `BUG()` implementation. Just like for X86,
we also emit trap locations to a `.kcfi_traps` section to support
error handling, as we cannot embed additional information into the trap
instruction itself.
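For readers unfamiliar with the scheme, here is a rough C++ model of what the emitted `KCFI_CHECK` sequence does before an indirect call. This is illustrative only; the real check is a short machine-code sequence emitted by `RISCVAsmPrinter`, and the helper name below is invented:
```
#include <cstdint>
#include <cstring>

using Callee = void (*)();

// Hypothetical model of the check: load the 32-bit type hash stored
// immediately before the callee's entry point, compare it against the hash
// from the operand bundle, and trap on mismatch.
inline void kcfi_checked_call(Callee Target, uint32_t ExpectedHash) {
  const char *Bytes =
      reinterpret_cast<const char *>(reinterpret_cast<uintptr_t>(Target));
  uint32_t ActualHash;
  std::memcpy(&ActualHash, Bytes - 4, sizeof(ActualHash));
  if (ActualHash != ExpectedHash)
    __builtin_trap(); // the RISC-V lowering emits `ebreak` here
  Target();
}
```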
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D148385
When specifying the C-string format for dumping memory, we treat
unprintable characters as signed. Whether a character is signed or not
is implementation defined, but all printable characters are positive
either way, so it's fair to treat the unprintable ones as unsigned.
Before this patch, "\xcf\xfa\xed\xfe\f" would be printed as
"\xffffffcf\xfffffffa\xffffffed\xfffffffe\f". Now we correctly print the
original string.
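For illustration, the underlying sign-extension pitfall can be reproduced in plain C++ (this is not LLDB code):
```
#include <cstdio>

int main() {
  char C = '\xcf';
  // With a signed char, integer promotion sign-extends the byte, so the hex
  // conversion prints \xffffffcf instead of \xcf.
  std::printf("\\x%x\n", C);
  // Casting to unsigned char first yields the intended \xcf.
  std::printf("\\x%x\n", static_cast<unsigned char>(C));
}
```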
rdar://111126134
Differential revision: https://reviews.llvm.org/D153644
Fat LTO objects contain both LTO compatible IR, as well as generated
object code. This allows users to defer the choice of whether to use LTO
or not to link time. This is a feature that has been available in GCC for
some time, and this patch makes the existing -ffat-lto-objects flag
functional in the same way as GCC's.
Within LLVM, we add a new EmbedBitcodePass that serializes the module to
the object file, and expose a new pass pipeline for compiling fat
objects. The new pipeline initially clones the module and runs the
selected (Thin)LTOPrelink pipeline, after which it will serialize the
module into a `.llvm.lto` section of an ELF file. When compiling for
(Thin)LTO, this is normally the point at which the compiler would emit an
object file containing the bitcode and metadata.
After that point we compile the original module using the
PerModuleDefaultPipeline used for non-LTO compilation. We generate
standard object files at the end of this pipeline, which contain machine
code and the new `.llvm.lto` section containing bitcode.
Since the two pipelines operate on different copies of the module, we
can be sure that the bitcode in the `.llvm.lto` section and object code
in `.text` are congruent with the existing output produced by the
default and LTO pipelines.
Original RFC: https://discourse.llvm.org/t/rfc-ffat-lto-objects-support/63977
Reviewed By: tejohnson, MaskRay, nikic
Differential Revision: https://reviews.llvm.org/D146776
The use of ConstString in StructuredDataPlugin is unnecessary as fast
comparisons are not needed for StructuredDataPlugins.
Differential Revision: https://reviews.llvm.org/D153482
The acc.reduction operation is used as the data entry operation for the
reduction operands.
Reviewed By: jeanPerier
Differential Revision: https://reviews.llvm.org/D153367
llvm-profgen internally uses the llvm libraries and the MCDesc interface to do disassembling and symbolization and it never checks against target-specific instruction operators. This makes it quite transparent to targets, and a first attempt for an aarch64 binary just works. Therefore I'm removing the unnecessary triple check to unblock support for new targets.
Reviewed By: wenlei
Differential Revision: https://reviews.llvm.org/D153449
GNU addr2line exits immediately if -e (which defaults to a.out) specifies a
file that cannot be opened or is a directory. llvm-addr2line used to wait for
input if the input file cannot be opened and no addresses are specified on the
command line.
Replace the D147652 checkFileExists with getOrCreateModuleInfo to avoid
a separate `sys::fs::status` operation.
Reviewed By: sepavloff
Differential Revision: https://reviews.llvm.org/D153595
This change allows sinking defs with PHI-uses from the loop preheader into the loop body. Loop sink can now see through a PHI-use and select the incoming blocks of the value being used as candidate sink destinations.
It makes loop sink more effective, so more LICM can be undone if proven unprofitable with profile info. It addresses the motivating case in D87551, without resorting to profile-guided LICM, which breaks canonicalization.
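A hypothetical C-level sketch of the kind of pattern this enables (names and details invented for illustration):
```
// In the IR, LICM hoists the loop-invariant `A[0] * 37` into the preheader;
// inside the loop the hoisted value is then only reachable through a PHI
// that merges the two branches below. Loop sink can now look through that
// PHI-use and sink the computation into the PHI's incoming block when the
// profile says the Rare path is cold.
int sum(const int *A, int N, bool Rare) {
  int Acc = 0;
  for (int I = 0; I < N; ++I) {
    int V;
    if (Rare)
      V = A[0] * 37; // loop-invariant: hoisted by LICM, sinkable back here
    else
      V = A[I];
    Acc += V;        // V is a PHI in the IR
  }
  return Acc;
}
```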
This is the 2nd attempt after D152772.
Add support for RecordType in getTypeAsString
Depends on D153461
Reviewed By: razvanlupusoru, jeanPerier
Differential Revision: https://reviews.llvm.org/D153467
`__xray_customevent` and `__xray_typedevent` are built-in functions in Clang.
With -fxray-instrument, they are lowered to intrinsics llvm.xray.customevent and
llvm.xray.typedevent, respectively. These intrinsics are then lowered to
TargetOpcode::{PATCHABLE_EVENT_CALL,PATCHABLE_TYPED_EVENT_CALL}. The target is
responsible for generating a code sequence that calls either
`__xray_CustomEvent` (with 2 arguments) or `__xray_TypedEvent` (with 3
arguments).
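As a source-level illustration (hypothetical usage, not part of this patch), a call to one of the builtins looks like this when building with Clang and -fxray-instrument:
```
#include <cstddef>

// Lowered to the llvm.xray.customevent intrinsic and then to
// PATCHABLE_EVENT_CALL; once patched at run time, it ends up calling
// __xray_CustomEvent(Msg, Len).
void record(const char *Msg, std::size_t Len) {
  __xray_customevent(Msg, Len);
}
```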
Before patching, the code sequence is prefixed by a branch instruction that
skips the rest of the code sequence. After patching
(compiler-rt/lib/xray/xray_AArch64.cpp), the branch instruction becomes a NOP
and the function call takes effect.
This patch implements the lowering process for
{PATCHABLE_EVENT_CALL,PATCHABLE_TYPED_EVENT_CALL} and implements the runtime.
```
// Lowering of PATCHABLE_EVENT_CALL
.Lxray_sled_N:
b #24
stp x0, x1, [sp, #-16]!
x0 = reg of op0
x1 = reg of op1
bl __xray_CustomEvent
ldp x0, x1, [sp], #16
```
As a result, two updated tests in compiler-rt/test/xray/TestCases/Posix/ now
pass on AArch64.
Reviewed By: peter.smith
Differential Revision: https://reviews.llvm.org/D153320
Temporarily disabling the execute-only tests. We recently added codegen for
armv6-m, which is still in heavy development (D152795).
Disabling the tests while we figure out what's going on is probably the
least disruptive option, as a dependent patch has also already landed.
If we have an excessive number of stores in a single chain, the candidate WideVT may exceed the maximum width of an EVT integer type (and will assert). Since mergeTruncStores doesn't support anything wider than an i64 store, we should just early-out if we've collected more stores than that.
Fixes #63306
The template was created in D151396 but was not aware of the change in
D153067. This commit adds the operand and keeps similar templates
aligned.
Reviewed By: reames, craig.topper
Differential Revision: https://reviews.llvm.org/D153506
Previously, when using ColumnLimit: 0 on extended inline asm with the
BreakBeforeInlineASMColon: OnlyMultiline option (the default style), the
formatter would act as if in Always mode, adding a line break before every
colon in an extended inline assembly block.
This patch respects the already existing line breaks and doesn't add any
new ones when in ColumnLimit: 0 mode.
Behaviour with Always stays as expected, with a break before every colon
regardless of any existing line breaks.
Behaviour with Never was broken before and remains broken with this patch;
it is simply never respected in ColumnLimit: 0 mode.
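As a hypothetical example of input affected by this change, a single-line extended asm statement like the following is now left alone under ColumnLimit: 0 with OnlyMultiline, rather than getting a break inserted before each colon:
```
int add_one(int x) {
  int r;
  // Single-line extended asm: existing (absent) line breaks are respected.
  asm("addl $1, %0" : "=r"(r) : "0"(x));
  return r;
}
```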
Fixes https://github.com/llvm/llvm-project/issues/62754
Reviewed By: HazardyKnusperkeks, owenpan
Differential Revision: https://reviews.llvm.org/D150848
This call to reassociateReduction is used by both fminnum/fmaxnum and
fminimum/fmaximum. While adding support for fminimum/fmaximum, we appear to be
fixing the use of an incorrect reduction type that should only have applied
to minnum/maxnum.
I also believe that it doesn't need nsz and reassoc to perform the
reassociation. For float min/max it should always be valid.
Differential Revision: https://reviews.llvm.org/D153247
The inserted instructions can usually be simplified. Make sure this
happens in the same InstCombine iteration by adding them to the
worklist.
We happen to get some better optimization in two cases, but this is
just a lucky accident. https://github.com/llvm/llvm-project/issues/63472
tracks implementing a fold for that case.
This doesn't track all inserted instructions yet, for that we would
also have to include those created by ObjectSizeOffsetEvaluator.
My MC layer support patches missed adding these to RISCVUsage. Also
update the link to the most recent spec PDF (including the recently
committed encoding fix for vfwmaccbf16).
For the same reasons as D151284, this requires custom lowering of the
truncate libcall on hard float ABIs (the normal libcall code path is
used on soft ABIs).
The extend operation is implemented by a shift just as in the standard
legalisation, but needs to be custom lowered because i32 isn't a legal
type on RV64.
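As a conceptual illustration of the extend operation (not the actual legalisation code): bfloat16 holds the top 16 bits of an IEEE-754 binary32 value, so the extension is just a 16-bit left shift of the raw bits.
```
#include <cstdint>
#include <cstring>

// Sketch of bf16 -> float extension via a shift of the raw bits.
float extend_bf16(uint16_t Bits) {
  uint32_t Word = static_cast<uint32_t>(Bits) << 16;
  float Result;
  std::memcpy(&Result, &Word, sizeof(Result));
  return Result;
}
```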
This patch aims to make the minimal changes that result in correct
codegen for the bfloat.ll tests.
Differential Revision: https://reviews.llvm.org/D151663
We previously directly codegened to v_log_f32, which is broken for
denormals. The lowering isn't complicated: you simply need to scale
denormal inputs and adjust the result. Note that log and log10 are still
not accurate enough, and will be fixed separately.
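A rough C++ sketch of the scaling idea described above (illustrative only, not the actual AMDGPU expansion; the scale constant is an assumption):
```
#include <cmath>

// Scale denormal inputs into the normal range, take the hardware log
// (modeled here by std::log2), then compensate in the result:
// log2(x) = log2(x * 2^32) - 32.
float safe_log2(float X) {
  bool IsDenorm = X > 0.0f && X < 0x1.0p-126f; // below the smallest normal
  float Scaled = IsDenorm ? X * 0x1.0p+32f : X;
  float R = std::log2(Scaled);
  return IsDenorm ? R - 32.0f : R;
}
```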
This adds operations for binary additive operators to EmitC. The input
arguments to these ops can be EmitC pointers and thus the operations can
be used for pointer arithmetic.
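For illustration, this is the kind of C++ such an addition is meant to correspond to after emission (a hedged sketch; the mapping to the new ops is inferred from the description above):
```
#include <cstddef>

// An additive op with a pointer operand and an integer operand corresponds
// to ordinary pointer arithmetic in the emitted code.
int *advance(int *Base, std::ptrdiff_t Offset) {
  return Base + Offset;
}
```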
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D149963