I think it follows from the HSA spec that only a write to the first byte is
deemed significant to the GPU, in which case writing to the second short
and reading it back later would be safe. However, the examples for this all
involve an atomic write to the first 32 bits, and it seems a credible risk
that the occasional CI errors about invalid packets have as their root cause
the firmware noticing the early write to packet->setup and treating it as a
sign that the packet is ready to go.
That concern may be overly paranoid; however, in passing I noticed that the
code in libc is genuinely invalid. The memset writes a zero to the header
byte, changing it from type_invalid (1) to type_vendor (0), at which point
the GPU is free to read the 64-byte packet and interpret it as a vendor
packet, which is probably why libc CI periodically errors about invalid
packets.
Also a drive-by change to do the atomic store on a uint32_t consistently.
I'm not sure offhand what __atomic_store_n on a uint16_t* and an int
resolves to; it seems better to be unambiguous there.
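A minimal sketch of the publishing pattern this implies, assuming a
little-endian target where the 16-bit header occupies the low half of the
packet's first 32-bit word and the 16-bit setup field the high half (an
illustration, not the libc code):
```c
#include <stdint.h>

/* Fill in the 64-byte packet body first, then publish it with a single
 * 32-bit atomic release store covering both header and setup, so the GPU
 * never observes a partially initialized packet. */
static void publish_packet(uint32_t *header_word, uint16_t header,
                           uint16_t setup) {
  uint32_t word = (uint32_t)header | ((uint32_t)setup << 16);
  __atomic_store_n(header_word, word, __ATOMIC_RELEASE);
}
```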
This patch seeks to add initial lowering of OpenMP array sections within
target region map clauses from MLIR to LLVM IR.
It initially supports fixed-size contiguous arrays (from my reading, OpenMP
does not support anything other than contiguous sections, but I could be
wrong), before looking toward assumed-size and assumed-shape arrays. The
patch also does not currently handle stride; that is left for future work.
That said, assumed-size arrays do work in some fashion (as dummy arguments)
with some minor alterations to the OMPEarlyOutliner, so it is possible that
the changes made in the IsolatedFromAbove series will allow this to work
with no further patches required.
It utilises the generated omp.bounds to calculate the size of the mapped
OpenMP array (both for sectioned and un-sectioned arrays) as well as the
offset to be passed to the kernel argument structure.
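As a rough illustration of that calculation for a contiguous section (the
helper and names are illustrative, not the generated IR, and stride is
ignored since this patch does not yet handle it):
```c
#include <stddef.h>

/* For a section lb:ub (inclusive bounds) of an array with a given element
 * size, compute the number of bytes to map and the byte offset from the
 * base of the original array. */
static void section_extent(size_t lb, size_t ub, size_t elem_size,
                           size_t *map_size, size_t *map_offset) {
  *map_size = (ub - lb + 1) * elem_size;
  *map_offset = lb * elem_size;
}
```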
Alongside these changes, some refactoring of how map data is handled is
attempted, using a new MapData structure to keep track of the information
utilised when lowering mapped values.
A more complex createDeviceArgumentAccessor is also added; it utilises
capture kinds similarly to (and loosely based on) Clang to generate
different kernel argument accesses.
A similar function, createAlteredByCaptureMap, alters how the kernel
argument is placed in the kernel argument structure on the host, allowing
the pointer/basePointer to be modified based on their capture kind (and
bounds information).
Of note, ByRef is the default for explicit mappings and ByCopy will be the
default for implicit captures, so the former is tested in this patch and
the latter is not for the moment.
The commit was discussed on Phabricator (https://reviews.llvm.org/D157186).
Record replay currently fails on AMD as it conflicts with the heap memory
allocator introduced in #69806. The workaround is setting
`LIBOMPTARGET_HEAP_SIZE=0` during both the record and the replay run.
By associating the kernel environment with the generic kernel we can
access middle-end information easily, including the launch bounds ranges
that are acceptable. By constraining the number of threads accordingly,
we now obey the user-provided bounds that were passed via attributes.
This patch adds basic tests for the map clause on the target construct for
common blocks. More tests will be added in future patches. Currently
failing tests are added in a separate folder with XFAIL; they should be
moved as they are fixed.
The runtime needs to know about the acceptable launch bounds, especially
if the compiler (middle- or backend) assumed those bounds. While this
patch does not yet inform the runtime, it stores the bounds in a place
that can/will be accessed and is associated with the kernel.
The workaround code ensured we always call __kmpc_kernel_parallel, but it
did so in a racy manner, as the initialization might not have been
completed yet. To avoid introducing a sync, we move the workaround into
the deinit function for now.
Summary:
For the other tests we pass `-nogpulib` to ensure that we set up the
needed libraries correctly. However, this caused problems for the
non-LTO build and test on Nvidia systems. In general this is because we
would do a separate compile of the libomptarget device runtime and then
link in that cubin. This exercised the runtime in ways it is not used to,
since doing things this way was hardly expected or tested. This patch
disables `-nogpulib` only for the Nvidia non-LTO build so that we still
get the effect of `--libomptarget-nvptx-bc-path` rather than ignoring it.
Summary:
While this is technically a no-op for AMDGPU hardware, in cases where
the user would see fit to add an explicit wavefront sync on Nvidia
hardware, we should also inform the LLVM optimizer that this control
flow is convergent so we do not reorder blocks.
Summary:
This version is required to support the 'activemask' feature, which is
used by functionality such as reductions. This ties the implementation of
the DeviceRTL roughly to the features provided by the CUDA 9.0 release,
which should be sufficiently old as to not cause problems, since this is a
minor version jump that corresponds to the release of `sm_53`.
Summary:
This function needs `void` as its argument list to be ABI compatible with
what is actually defined. This is enforced when doing CUDA separable
linking of the runtime.
Summary:
The `llvm_omp_target_dynamic_shared_alloc` prototype in `omp.h`
accidentally left the argument list unspecified rather than `void`. This
created unintended code when called from the C language, causing some
`nvlink` failures in certain scenarios.
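For illustration, this is standard C behaviour rather than anything
runtime-specific: an empty parameter list declares a function taking
unspecified arguments, which is a different prototype from one that takes
no arguments.
```c
/* Declares a function with unspecified arguments; calls are not checked
 * against the real definition and may lower to unintended code. */
void *llvm_omp_target_dynamic_shared_alloc();

/* Declares a function taking no arguments, matching the actual definition. */
void *llvm_omp_target_dynamic_shared_alloc(void);
```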
TSan can show only line numbers on some platforms, e.g., SystemZ. Skip
checking the column numbers; line numbers should be enough to verify
that race detection is working.
The struct DEP defined in multiple testcases must correspond to the
runtime's struct kmp_depend_info. The former defines flags as int, and the
latter as kmp_uint8_t. This discrepancy goes unnoticed on little-endian
systems, but breaks big-endian ones.
Make flags in struct DEP unsigned char.
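A sketch of the resulting test-side struct (the surrounding field types are
illustrative; the point is that `flags` is now a single byte so the layout
matches the runtime on big-endian targets too):
```c
#include <stddef.h>

/* Test-side mirror of the runtime's dependence record. */
struct DEP {
  size_t addr;
  size_t len;
  unsigned char flags; /* previously `int`, which broke big-endian layouts */
};
```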
The kmp_depend_info.flags field and the kmp_tasking_flags struct contain
bitfields which overlay integer flag values. The current bitfield
definitions target little-endian machines. On big-endian machines bitfields
are laid out in the opposite order, so the current definitions do not work
there.
There are two ways to fix this: either provide big-endian bitfield
definitions, or bit-swap the integer flag values. Go with the former, since
it's localized to one place and therefore more maintainable.
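A minimal, self-contained illustration of the principle (not the runtime's
actual flag layout; it relies on the usual GCC/Clang bitfield allocation
order): when a bitfield overlays an integer value, the field order must be
reversed on big-endian targets for the same integer to set the same logical
flag.
```c
#include <stdio.h>

typedef union {
  unsigned char value; /* integer flag value written elsewhere */
  struct {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    unsigned char unused : 6, mtx : 1, in : 1; /* reversed for big-endian */
#else
    unsigned char in : 1, mtx : 1, unused : 6; /* little-endian order */
#endif
  } bits;
} dep_flags_t;

int main(void) {
  dep_flags_t f = {.value = 1};
  /* Prints in=1 mtx=0 on both little- and big-endian machines. */
  printf("in=%u mtx=%u\n", f.bits.in, f.bits.mtx);
  return 0;
}
```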
hsa_amd_memory_async_copy can handle device-to-device copies if passed the
corresponding parameters.
No functional change intended: currently the D2D copy goes through a
fallback in libomptarget that stages through a host malloc; after this
patch it goes directly through HSA.
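A hedged sketch of the call for the device-to-device case (the wrapper and
argument values are illustrative; the entry point itself comes from the HSA
AMD extension):
```c
#include <hsa/hsa_ext_amd.h>

/* Device-to-device copy: both source and destination agents are GPU
 * agents, and completion is reported through the given signal. */
static hsa_status_t copy_d2d(void *dst, hsa_agent_t dst_agent,
                             const void *src, hsa_agent_t src_agent,
                             size_t size, hsa_signal_t completion) {
  return hsa_amd_memory_async_copy(dst, dst_agent, src, src_agent, size,
                                   /*num_dep_signals=*/0,
                                   /*dep_signals=*/NULL, completion);
}
```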
This works in exactly the situations in which HSA works. Verified locally
on a performance benchmark. Hoping to attract further testing from internal
developers after it lands.
The patch contains a basic BumpAllocator for (AMD)GPUs to allow us to
run more tests. The allocator implements `malloc`, both internally and
externally, while we continue to default to the NVIDIA `malloc` when we
target NVIDIA GPUs. Once we have smarter or customizable allocators we
should revisit this choice; for now, this allocator is better than none.
It traps if it runs out of memory, making it easy to debug. The heap size
is configured via `LIBOMPTARGET_HEAP_SIZE` and defaults to 512MB.
Allocation statistics can be tracked via
`LIBOMPTARGET_DEVICE_RTL_DEBUG=8` (together with
`-fopenmp-target-debug=8`). Two tests were added, and one was enabled.
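A minimal sketch of the bump-allocation idea (an illustration only, not the
DeviceRTL code; the heap base, size, and alignment are stand-ins for
whatever the runtime configures):
```c
#include <stdint.h>

/* A single atomic cursor advances through a fixed heap; memory is never
 * freed, and exhausting the heap traps so the failure is easy to spot. */
static unsigned char *HeapBase; /* set up from the configured heap */
static uint64_t HeapSize;       /* e.g. the LIBOMPTARGET_HEAP_SIZE value */
static uint64_t Cursor;         /* bytes handed out so far */

void *bump_malloc(uint64_t Size) {
  uint64_t Aligned = (Size + 15) & ~UINT64_C(15); /* keep 16-byte alignment */
  uint64_t Old = __atomic_fetch_add(&Cursor, Aligned, __ATOMIC_RELAXED);
  if (Old + Aligned > HeapSize)
    __builtin_trap(); /* out of memory: trap rather than return garbage */
  return HeapBase + Old;
}
```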
This is the next step towards fixing
https://github.com/llvm/llvm-project/issues/66708
Summary:
We should not rely on a VLA in C++ for handling this string. The size is a
true runtime value, so we cannot rely on constexpr handling. We simply use
a small vector whose inline size is most likely large enough to keep the
output on the stack, but which remains safe in cases where it is not.
Implement a slow-path version of omp_target_memset*()
There is a TODO to implement a fast path that uses an on-device
kernel instead of the host-based memory fill operation. This may
require some additional plumbing to have kernels in libomptarget.so
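A small usage sketch of the entry point being implemented (the device
number, fill value, and size are illustrative):
```c
#include <omp.h>

int main(void) {
  int dev = omp_get_default_device();
  void *ptr = omp_target_alloc(1024, dev); /* 1 KiB of device memory */
  omp_target_memset(ptr, 0xFF, 1024, dev); /* fill it with 0xFF bytes */
  omp_target_free(ptr, dev);
  return 0;
}
```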
Summary:
The `libcgpu.a` file provides its own implementation of `__assert_fail`.
This adds a test to make sure it's usable in OpenMP offloading as
expected. Currently, however, this requires linking `libcgpu.a` before the
OpenMP device RTL. We also disable the test on the CPU, as the format of
the string will be different.
Summary:
We use `malloc` internally in the DeviceRTL to handle data
globalization. If this is undefined it will map to the Nvidia
implementation of `malloc` for NVPTX and return `nullptr` for AMDGPU.
This is somewhat problematic, because when using this as a shared
library it causes us to always extract the GPU libc implementation,
which uses RPC and thus requires an RPC server. Making this `weak`
allows us to implement this internally without worrying about binding to
the GPU `libc` implementation.
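As a sketch of the linkage involved (illustrative, not the exact DeviceRTL
source; the internal allocator here is a hypothetical stand-in):
```c
#include <stddef.h>

/* Hypothetical stand-in for the DeviceRTL's internal globalization
 * allocator. */
static void *internal_allocate(size_t size) { (void)size; return 0; }

/* Weak definition: a strong `malloc` (e.g. the GPU libc's) linked into the
 * application overrides this at link time, so providing the fallback does
 * not force extraction of the libc implementation. */
__attribute__((weak)) void *malloc(size_t size) {
  return internal_allocate(size);
}
```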
Summary:
The `libcgpu.a` library was added to support certain libc functions. A
recent patch made us pass its location directly on the command line;
however, it used `find_library`. This doesn't work because CMake's ordering
might run `find_library` before the library we're trying to find has been
built. This patch changes this to just use the destination we know it will
end up in and checks it manually.
Summary:
The recent patch added `-nogpulib` to make these tests only pick up what
was intentionally put into them. This had the effect of removing the
dependency on the ROCm device libs, which are needed for math. This patch
disables the complex math test, which is the only one that needed them,
for the time being. In the future we will implement these functions,
provide them via a GPU `libm`, and pass it in the same way as the GPU
`libc`.
Summary:
We have tests that depend on two static libraries
`libomptarget.devicertl.a` and `libcgpu.a`. These are currently
implicitly picked up and searched through the standard path. This patch
changes that to pass `-nogpulib` to disable implicit runtime path
searches. We then explicitly pass the built libraries to the
compilations so that we know exactly which libraries are being used.
Depends on: https://github.com/llvm/llvm-project/pull/68220
Fixes https://github.com/llvm/llvm-project/issues/68141
This patch applies weak linkage to the config globals named
`__omp_rtl...`. This is because when passing `-nogpulib` we will not
link in or create these globals. This allows the OpenMP device RTL to be
self-contained without requiring the additional definitions from the
`clang` compiler. In the standard case this should not affect the current
behavior, because the strong definition coming from the compiler should
always override the weak definition we default to here. In the case that
these are not defined by the compiler, they will remain weak. This will
impact optimizations somewhat, but the previous behavior was that it would
not link at all, so this is an improvement.
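A small illustration of the mechanism (the global name matches the ones
clang emits, though the exact type and value here are illustrative):
```c
/* In the DeviceRTL: a weak zero default so the library still links when
 * clang does not emit the global, e.g. under -nogpulib. */
__attribute__((weak)) int __omp_rtl_debug_kind = 0;
```
In a normal compilation clang emits a strong definition of the same global,
which overrides this weak default at link time.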
Depends on: https://github.com/llvm/llvm-project/pull/68215
Summary:
Weak symbols are supposed to have the semantics that they can be
overridden by a strong (i.e. global) definition. This wasn't being
respected by the LTO pass because we simply used the first definition
that was available. This patch fixes that logic by doing a first pass
over the symbols to check for strong resolutions that could override a
weak one.
A lot of fake linker logic is ending up in the linker wrapper. If there
were an option to handle this in `lld` it would be a lot cleaner, but
unfortunately supporting NVPTX is a big restriction as their binaries
require the `nvlink` tool.
At the moment, for the device a reference pointer is generated in place of
the original declare target global value; this reference pointer is the
pointer that actually receives the data. In Clang the original global value
isn't generated for the device, just the reference pointer. Unfortunately
for Flang/MLIR this is currently not the case: the declare target attribute
is processed after the creation of the global, so after rewriting its uses
to the new device reference pointer we effectively end up with a dead
global on the device.
It appears I was a little overzealous with the deletion of the declare
target globals for the device. The current method breaks in cases where the
same declare target global is used across two target regions (a runtime
reproducer is added in the patch), as it will effectively delete the global
before the second target region gets a chance to be written to LLVM IR and
have its uses rewritten.
I'd like to remove this deletion, as the dead global isn't breaking any
code and will likely be removed by later dead code elimination passes; the
original approach was perhaps a little too heavy-handed.
Summary:
Previously this test hung indefinitely on NVPTX. This was due to a
now-fixed issue where we would wait indefinitely inside the CUDA runtime
for the kernel to complete if it was blocked on the RPC server. This patch
enables this test again now that it can run without deadlocking, at least
on CUDA 12.2.
Summary:
The RPC server is responsible for providing host services from the GPU.
Generally, the client running on the GPU will spin in place until the
host checks the server. Inside the runtime, we elected to have the user
thread do this checking while it would be otherwise waiting for the
kernel to finish. However, for Nvidia this caused problems when
offloading to a target region that requires a copy back.
This is caused by the implementation of `dataRetrieve` on Nvidia. We
initialize an asynchronous copy-back on the same stream that the kernel
is running on. This creates an implicit sync on the kernel to finish
before we issue the D2H copy, which we then wait on. This implicit sync
happens inside of the CUDA runtime. This is problematic when running the
RPC server because someone needs to be checking it; if no one checks the
RPC server, the kernel will never finish, meaning that the memcpy will
never be issued and the program hangs. This patch adds an explicit check
for unfinished work on the stream and waits for it to complete.
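A hedged sketch of the shape of that check (the RPC-servicing helper is
hypothetical; `cuStreamQuery` is the CUDA driver API call that reports
whether a stream still has pending work):
```c
#include <cuda.h>

/* Hypothetical helper that runs one pass of the host-side RPC server. */
extern void handle_rpc_server(void);

/* Before issuing the blocking device-to-host copy, keep servicing the RPC
 * server until the kernel's stream has no unfinished work, so a kernel
 * blocked on an RPC request cannot deadlock the copy. */
static void drain_stream(CUstream Stream) {
  while (cuStreamQuery(Stream) == CUDA_ERROR_NOT_READY)
    handle_rpc_server();
}
```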