Summary:
These tests are currently failing; disable them so we can keep the bots
green until we find a better solution. The x64 tests are not the core
target, so this is low priority.
Summary:
This option used to be provided by the `-fPIC` flag that was always
enabled by the LLVM flags. Since we now set the flags manually, we want
to specify that these objects are supposed to be built as fPIC code.
Summary:
We used to inherit these flags from the LLVM options in a runtimes
build. This patch adds them back in manually as they are helpful for
diagnostics and optimizing the created binary.
There is no point in keeping GenericDeviceTy objects alive longer than
the associated GenericPluginTy. Instead of the old API we now tear them
down with the plugin, avoiding ordering issues.
The order in which we deinit things, especially when shared libraries
are involved, is complicated. To simplify our lives the nextgen plugin
deinitializes the GenericPluginTy and subclasses automatically. The old
__tgt_rtl_deinit_plugin is not needed anymore.
Most of the tests were failing with the following in their logs:
| /usr/bin/ld: /home/brad/llvm-build/runtimes/runtimes-bins/openmp/runtime/src/libomp.so:
warning: Warning: reference to the libc supplied alloca(3); this most likely will not
work. Please use the compiler provided version of alloca(3), by supplying the appropriate
compiler flags (e.g. -std=gnu99).
Making use of __builtin_alloca instead fixes this:
before:
Total Discovered Tests: 353
Unsupported: 59 (16.71%)
Passed : 51 (14.45%)
Failed : 243 (68.84%)
after:
Total Discovered Tests: 353
Unsupported: 59 (16.71%)
Passed : 290 (82.15%)
Failed : 4 (1.13%)
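A minimal sketch of the change at a hypothetical call site (not the actual libomp code), assuming the size is a runtime value:
```
#include <cstring>

void copyToStackBuffer(const char *Src, unsigned long N) {
  // Before: char *Buf = (char *)alloca(N);  // resolves to the libc alloca(3) stub
  char *Buf = (char *)__builtin_alloca(N);   // compiler-provided version, no libc reference
  std::memcpy(Buf, Src, N);
}
```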
This patch removes the val field from the `MapInfoOp`.
Previously, when lowering `TargetOp`, the bounds information for the
`BoxValues` was also being mapped. Instead, these ops are now cloned
inside the target region to prevent mapping of non-reference-typed
values.
From "3.1 Reducing the number of edges" of this [[ https://hal.science/hal-04136674v1/ | paper ]] - Optimization (b)
Task (dependency) nodes have a `successors` list built from the passed dependencies.
Given the following code, B will be added to A's successors list, building the graph `A` -> `B`:
```
// A
# pragma omp task depend(out: x)
{}
// B
# pragma omp task depend(in: x)
{}
```
In the following code, B is currently added twice to A's successor list
```
// A
# pragma omp task depend(out: x, y)
{}
// B
# pragma omp task depend(in: x, y)
{}
```
This patch removes such duplicates by checking the most recently inserted task in A's successor list.
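A rough sketch of the check (data-structure and function names are illustrative, not the actual libomp code): because the dependencies of one depend clause are processed in order, a duplicate successor produced by a multi-variable clause is always the most recently appended entry, so a single comparison suffices.
```
#include <vector>

struct TaskNode;

void addSuccessor(std::vector<TaskNode *> &Successors, TaskNode *Succ) {
  // Skip the push if the same task was just appended for another depend variable.
  if (!Successors.empty() && Successors.back() == Succ)
    return;
  Successors.push_back(Succ);
}
```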
Authored by: Romain Pereira (rpereira-dev)
Differential Revision: https://reviews.llvm.org/D158544
When we record and replay kernels we should not error out early if there
is a chance the program might still run fine. This patch will:
1) Fallback to the allocation heuristic if the VAMap doesn't work.
2) Adjust the memory start to match the required address if possible.
3) Adjust the (guessed) pointer arguments if the memory start adjustment
is impossible. This will allow kernels without indirect accesses to
work while indirect accesses will most likely fail.
Summary:
One of the changes in AMD code-object version five is that kernels
that use an unknown amount of private stack memory no longer default
to 16 KiB. Instead, a flag is emitted indicating that the runtime must
provide a value. This patch checks if we must provide such a stack, and
uses the existing handling of the stack environment variable to
configure it.
Currently there's an edge cases where constant indexing in target
regions can lead to incorrect results as we do not correctly replace
uses of mapped variables in generated target functions with the target
arguments (and accessor instructions) that replace them. This patch
seeks to fix that by extending the current logic in the OMPIRBuilder.
Things like GEPs can come in the form of Constants/ConstantExprs.
Constants and ConstantExprs do not know what they are contained in, so
we must dig a little to find an instruction that tells us whether they
are used inside of the function we are outlining, so we can be sure they
are replaceable and we are not accidentally replacing a usage somewhere
else in the module that is still necessary.
This patch handles these by replacing the original constant expression
with an equivalent new instruction. An instruction allows easy
modification in the following loop, as we now know the constant
(instruction) is owned by our target function (it holds this knowledge)
and replaceUsesOfWith can be invoked on it (this does not seem possible
with constants). A brand new instruction also allows us to be cautious,
as it is perhaps possible the old expression was used inside the
function but also exists and is used externally (unlikely by the nature
of a Constant, but still a positive side effect).
…A engines (#71801)
This enables the AMDGPU plugin to use a new ROCm 5.7 interface to
dispatch asynchronous data transfers across SDMA engines.
The default functionality stays unchanged, meaning that all data
transfers are enqueued into an H2D queue or a D2H queue, depending on
transfer direction, via the HSA interface used previously.
The new interface can be enabled via the environment variable
`LIBOMPTARGET_AMDGPU_USE_MULTIPLE_SDMA_ENGINES=true` when libomptarget
is built against a recent ROCm version (5.7 and later). As of now,
requests are distributed in a round-robin fashion across available SDMA
engines.
This causes the tests to fail because the bots were not updated in time.
Revert until we update the bots to a valid version.
This reverts commit e876250b63.
This enables the AMDGPU plugin to use a new ROCm 5.7 interface to
dispatch asynchronous data transfers across SDMA engines.
The default functionality stays unchanged, meaning that all data
transfers are enqueued into an H2D queue or a D2H queue, depending on
transfer direction, via the HSA interface used previously.
The new interface can be enabled via the environment variable
`LIBOMPTARGET_AMDGPU_USE_MULTIPLE_SDMA_ENGINES=true` when libomptarget
is built against a recent ROCm version (5.7 and later).
As of now, requests are distributed in a round-robin fashion across
available SDMA engines.
If we have more than one reduction variable we need to be consistent
wrt. indexing. In 3de645efe3 we broke this
as the buffer type was reduced to a singleton but the index computation
was not adjusted to account for that offset. This fixes it by
interleaving the reduction variables properly in an array-of-structs
style. We can revert back to struct-of-arrays in a follow-up if this turns
out to be a problem. I doubt it, since half the accesses should benefit
from the locality this layout offers and only the other half were
consecutive before.
Summary:
This patch reworks how we handle global constructors in OpenMP.
Previously, we emitted individual kernels that were all registered and
called individually. In order to provide more generic support, this
patch moves all handling of this to the target backend and the runtime
plugin. This has the benefit of supporting the GNU extensions for
constructors and destructors, removing a class of failures related to
shared library destruction order, and allows targets other than OpenMP
to use the same support without needing to change the frontend.
This is primarily done by calling kernels that the backend emits to
iterate a list of ctor / dtor functions. For x64, this is automatic and
we get it for free with the standard `dlopen` handling. For AMDGPU, we
emit `amdgcn.device.init` and `amdgcn.device.fini` functions which
handle everything automatically and simply need to be called. For NVPTX,
a patch https://github.com/llvm/llvm-project/pull/71549 provides the
kernels to call, but the runtime needs to set up the array manually by
pulling out all the known constructor / destructor functions.
One concession that this patch requires is the change that for GPU
targets in OpenMP offloading we will use `llvm.global_dtors` instead of
using `atexit`. This is because `atexit` is a separate runtime function
that does not mesh well with the handling we're trying to do here. This
should be equivalent in all cases except for cases where we would need
to destruct manually such as:
```
struct S { ~S() { foo(); } };
void foo() {
  static S s;
}
```
However this is broken in many other ways on the GPU, so it is not
regressing any support, simply increasing the scope of what we can
handle.
This changes the handling of ctors / dtors. This patch now outputs an
informational message regarding the deprecation if the old format is used.
This will be completely removed in a later release.
Depends on: https://github.com/llvm/llvm-project/pull/71549
OpenMP runtime fails to build on SystemZ with clang with the following
error message:
LLVM ERROR: Unsupported stack frame traversal count
__kmpc_omp_task_begin_if0() uses OMPT_GET_FRAME_ADDRESS(1), which
delegates to __builtin_frame_address(), which in turn works with nonzero
values on SystemZ only if backchain is in use. If backchain is not in
use, the above error is emitted.
Compile __kmpc_omp_task_begin_if0() with backchain. Note that this only
resolves the build error. If at runtime its caller is compiled without
backchain, __builtin_frame_address() will produce an incorrect value,
but will not cause a crash. Since the value is relevant only for OMPT,
this is acceptable.
Line 75 of `z_Linux_util.cpp` checks `#ifdef KMP_OS_SOLARIS`, which is
always true regardless of the build platform, because the macro
`KMP_OS_SOLARIS` is always defined in line 23 of `kmp_platform.h`:
`#define KMP_OS_SOLARIS 0`.
Fixes the DeviceRTL compilation to ensure it is ABI agnostic. Uses the
already available global variable "oclc_ABI_version" instead of
"llvm.amdgcn.abi.version".
It also adds some minor fields in the ImplicitArg structure.
This commit adds skewed distribution of iterations in
nonmonotonic:dynamic schedule (static steal) for hybrid systems when
thread affinity is assigned. Currently, it distributes the iterations at
a 60:40 ratio. Consider a loop with dynamic schedule type over
for (int i = 0; i < 100; ++i). In a hybrid system with 20 hardware
threads (16 CORE and 4 ATOM cores), 88 iterations will be assigned to
performance cores and 12 iterations will be assigned to efficient cores.
Each thread on a CORE core will process 5 iterations plus extras, and
each thread on an ATOM core will process 3 iterations.
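For reference, the loop from the description written out (illustrative only):
```
void work(int i);

void run() {
#pragma omp parallel for schedule(nonmonotonic : dynamic)
  for (int i = 0; i < 100; ++i)
    work(i); // with affinity set, ~88 iterations go to CORE threads, ~12 to ATOM threads
}
```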
Differential Revision: https://reviews.llvm.org/D152955
Based on https://github.com/llvm/llvm-project/pull/70766 I think it
would be good to have a few more offloading reduction tests, so we do
not accidentally break minimum and maximum reductions another time.
We already have all the information to automatically map function
pointers that have been declared as `indirect` declare target by the
user. This is just enabling and testing the functionality by looking
through the one level of indirection.
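A hedged usage sketch of the feature (the OpenMP 5.1 `indirect` clause on declare target); function and variable names are illustrative:
```
void work(void);
#pragma omp declare target indirect to(work)

void launch() {
  // Host address of an indirect declare-target function.
  void (*Fn)(void) = work;
  // Fn is mapped to the target region; the runtime translates the host
  // address to the corresponding device function before the call.
#pragma omp target
  Fn();
}
```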
Before, we tracked the size of the teams reduction buffer in order to
allocate it at runtime per kernel launch. This patch splits the number
into two parts, the size of the reduction data (=all reduction
variables) and the (maximal) length of the buffer. This will allow us to
allocate less if we need less, e.g., if we have less teams than the
maximal length. It also allows us to move code from clang's codegen into
the runtime as we now know how large the reduction data is.
target_map_common_block2.f90
- Fix the extra space in the print message.
- #67164 fixes this, so move the test out of the failing set and also remove the XFAIL marker.
basic-target-region-3D-array.f90
- Corrected the check to account for the new lines printed.
Depends on #67319
This reverts commit e9a48f9e05 because it breaks
3 sollve 5.0 tests:
test_loop_reduction_and_device.c
test_loop_reduction_bitand_device.c
test_loop_reduction_multiply_device.c
* openmp/README.rst
- Add s390x to those platforms supported
* openmp/libomptarget/plugins-nextgen/CMakeLists.txt
- Add s390x subdirectory
* openmp/libomptarget/plugins-nextgen/s390x/CMakeLists.txt
- Add s390x definitions
* openmp/runtime/CMakeLists.txt
- Add s390x to those platforms supported
* openmp/runtime/cmake/LibompGetArchitecture.cmake
- Define s390x ARCHITECTURE
* openmp/runtime/cmake/LibompMicroTests.cmake
- Add dependencies for System z (aka s390x)
* openmp/runtime/cmake/LibompUtils.cmake
- Add S390X to the mix
* openmp/runtime/cmake/config-ix.cmake
- Add s390x as a supported LIBOMP_ARCH
* openmp/runtime/src/kmp_affinity.h
- Define __NR_sched_[get|set]affinity for s390x
* openmp/runtime/src/kmp_config.h.cmake
- Define CACHE_LINE for s390x
* openmp/runtime/src/kmp_os.h
- Add KMP_ARCH_S390X to support checks
* openmp/runtime/src/kmp_platform.h
- Define KMP_ARCH_S390X
* openmp/runtime/src/kmp_runtime.cpp
- Generate code when KMP_ARCH_S390X is defined
* openmp/runtime/src/kmp_tasking.cpp
- Generate code when KMP_ARCH_S390X is defined
* openmp/runtime/src/thirdparty/ittnotify/ittnotify_config.h
- Define ITT_ARCH_S390X
* openmp/runtime/src/z_Linux_asm.S
- Instantiate __kmp_invoke_microtask for s390x
* openmp/runtime/src/z_Linux_util.cpp
- Generate code when KMP_ARCH_S390X is defined
* openmp/runtime/test/ompt/callback.h
- Define print_possible_return_addresses for s390x
* openmp/runtime/tools/lib/Platform.pm
- Return s390x as platform and host architecture
* openmp/runtime/tools/lib/Uname.pm
- Set hardware platform value for s390x
A lot of the code was from a time when we had multiple parallel levels.
The new runtime is much simpler, so the code can be simplified a lot,
which should speed up reductions too.
We default to < 1024 teams if the user did not specify otherwise. As
such we can avoid the extra logic in the teams reduction that handles
more than num_of_records (default 1024) teams. This is a stopgap but
still shaves off 33% of the runtime in some simple reduction examples.
If you build with dynamic_hsa, the symbol is known and compilation
succeeds. If you then run with a slightly older libhsa, this argument is
not recognised and an error returned. I'd rather the program runs with a
misleading omp wtime than refuse to run at all.
We used to perform team reduction on global memory allocated in the
runtime and by clang. This was racy as multiple instances of a kernel,
or different kernels with team reductions, would use the same locations.
Since we now have the kernel launch environment, we can allocate dynamic
memory per-launch, allowing us to move all the state into a non-racy
place.
Fixes: https://github.com/llvm/llvm-project/issues/70249
The KernelEnvironment is for compile time information about a kernel. It
allows the compiler to feed information to the runtime. The
KernelLaunchEnvironment is for dynamic information *per* kernel launch.
It allows the runtime to feed information to the kernel that is not
shared with other invocations of the kernel. The first use case is to
replace the globals that synchronize teams reductions with per-launch
versions. This allows concurrent teams reductions. More use cases will
follow, e.g., per launch memory pools.
Fixes: https://github.com/llvm/llvm-project/issues/70249
This was a regression introduced by myself in:
6a62707c04
where I too hastily removed the basic handling of implicit captures
we currently have. This will be superseded by a soon-to-land series of
patches that adds all implicit captures to target operations' map_info
entries; however, that is not yet the case, so we must continue to do
some basic handling of these captures for the time being. This patch
re-adds that behaviour to avoid regressions.
Unfortunately this means some test changes as well, since
getUsedValuesDefinedAbove grabs constants used outside of the target
region, which aren't handled particularly well currently.
I think it follows from the HSA spec that a write to the first byte is
deemed significant to the GPU in which case writing to the second short
and reading back from it later would be safe. However, the examples for
this all involve an atomic write to the first 32 bits and it seems a
credible risk that the occasional CI errors about invalid packets have
as their root cause that the firmware notices the early write to
packet->setup and treats that as a sign that the packet is ready to go.
That was overly paranoid; however, in passing I noticed the code in libc
is genuinely invalid. The memset writes a zero to the header byte, changing
it from type_invalid (1) to type_vendor (0), at which point the GPU is
free to read the 64 byte packet and interpret it as a vendor packet,
which is probably why libc CI periodically errors about invalid packets.
Also a drive-by change to do the atomic store on a uint32_t
consistently. I'm not sure offhand what __atomic_store_n on a uint16_t*
and an int resolves to; it seems better to be unambiguous there.
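A hedged sketch of the unambiguous form (the usual HSA packet-submission pattern; field and variable names are illustrative):
```
#include <cstdint>

// Publish header and setup together as one 32-bit release store so the
// packet only becomes visible to the GPU once both fields are written.
void publishPacket(uint32_t *HeaderAndSetup, uint16_t Header, uint16_t Setup) {
  uint32_t Word = uint32_t(Header) | (uint32_t(Setup) << 16);
  __atomic_store_n(HeaderAndSetup, Word, __ATOMIC_RELEASE);
}
```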
This patch seeks to add initial lowering of OpenMP array sections within
target region map clauses from MLIR to LLVM IR.
This patch seeks to initially support fixed-size contiguous arrays (from
my reading, OpenMP does not support anything other than contiguous
sections, but I could be wrong), before looking toward assumed-size and
assumed-shape arrays. The patch also does not currently handle strides;
that is left for future work.
Assumed-size arrays do work in some fashion (dummy arguments) with some
minor alterations to the OMPEarlyOutliner, so it is possible that changes
made in the IsolatedFromAbove series may allow this to work with no
further patches required.
It utilises the generated omp.bounds to calculate the size of the mapped
OpenMP array (both for sectioned and un-sectioned arrays) as well as the
offset to be passed to the kernel argument structure.
Alongside these changes some refactoring of how map data is handled is
attempted, using a new MapData structure to keep track of information
utilised in the lowering of mapped values.
The initial addition of a more complex createDeviceArgumentAccessor that
utilises capture kinds similarly to (and loosely based on) Clang to
generate different kernel argument accesses is also added.
A similar function for altering how the kernel argument is passed to the
kernel argument structure on the host is also utilised
(createAlteredByCaptureMap), which allows modification of the
pointer/basePointer based on their capture (and bounds information).
It is of note that ByRef is the default for explicit mappings and ByCopy
will be the default for implicit captures, so the former is currently
tested in this patch and the latter is not for the moment.
The commit was discussed in phabricator
(https://reviews.llvm.org/D157186).
Record replay currently fails on AMD as it conflicts with the heap
memory allocator introduced in #69806. The workaround is setting
`LIBOMPTARGET_HEAP_SIZE=0` during both record and replay run.
By associating the kernel environment with the generic kernel we can
access middle-end information easily, including the launch bounds ranges
that are acceptable. By constraining the number of threads accordingly,
we now obey the user-provided bounds that were passed via attributes.
This patch adds basic tests for map clause on target construct for
commonblocks. There will be more tests to add, which will be added in
future patches. Currently failing tests are added in a separate folder
with XFAIL. They should be moved as they are fixed.
The runtime needs to know about the acceptable launch bounds, especially
if the compiler (middle- or backend) assumed those bounds. While this
patch does not yet inform the runtime, it stores the bounds in a place
that can/will be accessed and is associated with the kernel.
The workaround code ensures we always call __kmpc_kernel_parallel, but it
did so in a racy manner as the initialization might not have been
completed yet. To avoid introducing a sync, we move the workaround into
the deinit function for now.
Summary:
For the other tests we pass `-nogpulib` to ensure that we set up the
needed libraries correctly. However, this caused problems for the
non-LTO build and test of Nvidia systems. In general this is because we
would do a separate compile of the libomptarget device runtime and then
link in that cubin. This exercised the runtime in a lot of ways it's not
used to, since doing things this way was hardly expected or tested. This
patch disables it only for the Nvidia non-LTO build so that we still get
the effect of `--libomptarget-nvptx-bc-path` rather than ignoring it.
Summary:
While this is technically a no-op for AMDGPU hardware, in cases where
the user would see fit to add an explicit wavefront sync on Nvidia
hardware, we should also inform the LLVM optimizer that this control
flow is convergent so we do not reorder blocks.
Summary:
This version is required to support the 'activemask' feature which is
used for certain features, such as reductions. This ties the
implementation of the DeviceRTL roughly to the features provided by the
CUDA 9.0 release, which should be sufficiently old as to not cause
problems since this is a minor version jump that corresponds to the
release of `sm_53`.
Summary:
This function needs `void` as the arguments to be ABI compatible with
what is actually defined. This is enforced when doing CUDA separable
linking of the runtime.
Summary:
The `llvm_omp_target_dynamic_shared_alloc` prototype in `omp.h`
accidentally left the void argument unspecified. This created unintended
code when called from the C language, causing some `nvlink` failures in
certain scenarios.
TSan can show only line numbers on some platforms, e.g., SystemZ. Skip
checking the column numbers; line numbers should be enough to verify
that race detection is working.
struct DEP defined in multiple testcases must correspond to runtime's
struct kmp_depend_info. The former defines flags as int, and the latter
as kmp_uint8_t. This discrepancy goes unnoticed on little-endian
systems, but breaks big-endian ones.
Make flags in struct DEP unsigned char.
structs kmp_depend_info.flags and kmp_tasking_flags contain bitfields,
which overlay integer flag values. The current bitfield definitions
target little-endian machines. On big-endian machines bitfields are laid
out in the opposite order, so the current definitions do not work there.
There are two ways to fix this: either provide big-endian bitfield
definitions, or bit-swap integer flag values. Go with the former, since
it's localized to one place and therefore is more maintainable.
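A hedged illustration of the chosen approach (field names are simplified, not the full kmp_tasking_flags_t definition): mirror the bitfield order under big-endian so the bits line up with the integer flag values.
```
typedef struct example_tasking_flags {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
  // Big-endian lays bitfields out from the most significant bit down,
  // so list the fields in reverse to match the little-endian positions.
  unsigned reserved : 28;
  unsigned destructors_thunk : 1;
  unsigned merged_if0 : 1;
  unsigned final : 1;
  unsigned tiedness : 1;
#else
  unsigned tiedness : 1;
  unsigned final : 1;
  unsigned merged_if0 : 1;
  unsigned destructors_thunk : 1;
  unsigned reserved : 28;
#endif
} example_tasking_flags_t;
```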
hsa_amd_memory_async_copy can handle device to device copies if passed
the corresponding parameters.
No functional change - currently D2D copy goes through a fallback in
libomptarget that stages through a host malloc, after this it goes
directly through HSA.
Works under exactly the situations that HSA works. Verified locally on a
performance benchmark. Hoping to attract further testing from internal
developers after it lands.
The patch contains a basic BumpAllocator for (AMD)GPUs to allow us to
run more tests. The allocator implements `malloc`, both internally and
externally, while we continue to default to the NVIDIA `malloc` when we
target NVIDIA GPUs. Once we have smarter or customizable allocators we
should reconsider this choice; for now, this allocator is better than
none. It traps if it is out of memory, making it easy to debug. Heap
size is configured via `LIBOMPTARGET_HEAP_SIZE` and defaults to 512MB.
It allows tracking allocation statistics via
`LIBOMPTARGET_DEVICE_RTL_DEBUG=8` (together with
`-fopenmp-target-debug=8`). Two tests were added, and one was enabled.
This is the next step towards fixing
https://github.com/llvm/llvm-project/issues/66708
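A hedged host-side sketch of the idea behind a bump allocator (names and details are illustrative, not the DeviceRTL code): a single atomic offset into a fixed-size heap, trapping when the heap is exhausted.
```
#include <atomic>
#include <cstdint>
#include <cstdlib>

// Illustrative backing store; in the real runtime the size comes from
// LIBOMPTARGET_HEAP_SIZE (default 512MB) and lives in device memory.
static char *HeapBase = nullptr;
static uint64_t HeapSize = 0;
static std::atomic<uint64_t> Offset{0};

void heapInit(uint64_t Bytes) {
  HeapBase = static_cast<char *>(std::malloc(Bytes));
  HeapSize = Bytes;
}

void *bumpAlloc(uint64_t Size) {
  Size = (Size + 15) & ~uint64_t(15); // keep 16-byte alignment
  uint64_t Old = Offset.fetch_add(Size, std::memory_order_relaxed);
  if (Old + Size > HeapSize)
    __builtin_trap(); // out of memory: trap so the failure is easy to debug
  return HeapBase + Old;
}
```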
Summary:
We should not rely on a VLA in C++ for the handling of this string. The
size is a true runtime value so we cannot rely on constexpr handling. We
simply use a small vector, whose default size is most likely large
enough to handle whatever size gets output within the stack, but is safe
in cases where it is not.
Implement a slow-path version of omp_target_memset*()
There is a TODO to implement a fast path that uses an on-device
kernel instead of the host-based memory fill operation. This may
require some additional plumbing to have kernels in libomptarget.so
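A hedged sketch of what a host-staged slow path can look like (not necessarily the exact libomptarget implementation); omp_target_memcpy and omp_get_initial_device are the standard OpenMP device memory routines:
```
#include <omp.h>
#include <cstdlib>
#include <cstring>

void *slowTargetMemset(void *DstPtr, int Value, size_t Count, int DeviceNum) {
  // Fill a host staging buffer, then copy it to the device allocation.
  void *Staging = std::malloc(Count);
  std::memset(Staging, Value, Count);
  omp_target_memcpy(DstPtr, Staging, Count, /*dst_offset=*/0, /*src_offset=*/0,
                    DeviceNum, omp_get_initial_device());
  std::free(Staging);
  return DstPtr;
}
```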
Summary:
The `libcgpu.a` file provides its own implementation of `__assert_fail`.
This adds a test to make sure it's usable in OpenMP offloading as
expected. Currently this requires linking `libcgpu.a` before the OpenMP
device RTL however. We also disable the test on the CPU as the format of
the string will be different.
Summary:
We use `malloc` internally in the DeviceRTL to handle data
globalization. If this is undefined it will map to the Nvidia
implementation of `malloc` for NVPTX and return `nullptr` for AMDGPU.
This is somewhat problematic, because when using this as a shared
library it causes us to always extract the GPU libc implementation,
which uses RPC and thus requires an RPC server. Making this `weak`
allows us to implement this internally without worrying about binding to
the GPU `libc` implementation.
Summary:
The `libcgpu.a` library was added to support certain libc functions. A
recent patch made us pass its location directly on the command line;
however, it used `find_library`. This doesn't work because the ordering
of CMake might run `find_library` before it builds the library we're
trying to find. This patch changes this to just use the destination we
know it will end up in and checks it manually.
Summary:
The recent patch added `-nogpulib` to make these tests only pick up what
was intentionally put into them. This had the effect of removing the
dependency on the ROCm device libs which are needed for math. This test
disables the complex math test, which is the only one that needed it,
for the time being. In the future we will implement these and provide it
via the GPU `libm` and pass it in the same way as the GPU `libc`.
Summary:
We have tests that depend on two static libraries
`libomptarget.devicertl.a` and `libcgpu.a`. These are currently
implicitly picked up and searched through the standard path. This patch
changes that to pass `-nogpulib` to disable implicit runtime path
searches. We then explicitly passed the built libraries to the
compilations so that we know exactly which libraries are being used.
Depends on: https://github.com/llvm/llvm-project/pull/68220
Fixes https://github.com/llvm/llvm-project/issues/68141
This patch applies weak linkage to the config globals by the name
`__omp_rtl...`. This is because when passing `-nogpulib` we will not
link in or create these globals. This allows the OpenMP device RTL to be
self-contained without requiring the additional definitions from the
`clang` compiler. In the standard case, this should not affect the
current behavior, because the strong definition coming from the
compiler should always override the weak definition we default to here.
In the case that these are not defined by the compiler, these will
remain weak. This will impact optimizations somewhat, but the previous
behavior was that it would not link so that is an improvement.
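A hedged illustration of the linkage change; the exact names and types of the `__omp_rtl_...` globals are assumptions here:
```
// Weak defaults inside the device runtime; a strong definition emitted by
// clang (when not compiling with -nogpulib) overrides them at link time.
extern "C" {
__attribute__((weak)) int __omp_rtl_debug_kind = 0;
__attribute__((weak)) int __omp_rtl_assume_no_thread_state = 0;
__attribute__((weak)) int __omp_rtl_assume_no_nested_parallelism = 0;
}
```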
Depends on: https://github.com/llvm/llvm-project/pull/68215
Summary:
Weak symbols are supposed to have the semantics that they can be
overridden by a strong (i.e. global) definition. This wasn't being
respected by the LTO pass because we simply used the first definition
that was available. This patch fixes that logic by doing a first pass
over the symbols to check for strong resolutions that could override a
weak one.
A lot of fake linker logic is ending up in the linker wrapper. If there
were an option to handle this in `lld` it would be a lot cleaner, but
unfortunately supporting NVPTX is a big restriction as their binaries
require the `nvlink` tool.
At the moment, for device, a reference pointer is generated in place of
the original declare target global value; this reference pointer is the
pointer that actually receives the data. In Clang the original global
value isn't generated for device, just the reference pointer.
Unfortunately for Flang/MLIR this is currently not the case, as the
declare target attribute is processed after the creation of the global
so we end up with a dead global on device effectively after rewriting
its uses to the new device reference pointer.
It appears I was a little overzealous with the deletion of the declare
target globals for device. The current method breaks in cases where the
same declare target global is used across two target regions (a runtime
reproducer is added in the patch), as it will effectively delete the
global before the second target region gets a chance to be written to
LLVM IR and have its uses rewritten.
I'd like to remove this deletion, as the dead global isn't breaking any
code and will likely be removed by later dead code elimination passes;
the original approach was perhaps a little too heavy-handed.
Summary:
Previously this test hung indefinitely on NVPTX. This was due to an
issue fixed previously where we would wait indefinitely inside the CUDA
runtime waiting for the kernel to complete if it was blocked on the RPC
server. This patch enables this test again now that it can run without
deadlocking, at least on CUDA 12.2.
Summary:
The RPC server is responsible for providing host services from the GPU.
Generally, the client running on the GPU will spin in place until the
host checks the server. Inside the runtime, we elected to have the user
thread do this checking while it would otherwise be waiting for the
kernel to finish. However, for Nvidia this caused problems when
offloading to a target region that requires a copy back.
This is caused by the implementation of `dataRetrieve` on Nvidia. We
initialize an asynchronous copy-back on the same stream that the kernel
is running on. This creates an implicit sync on the kernel to finish
before we issue the D2H copy, which we then wait on. This implicit sync
happens inside of the CUDA runtime. This is problematic when running the
RPC server because we need someone to check the RPC server. If no one
checks the RPC server then the kernel will never finish, meaning that
the memcpy will never be issued and the program hangs. This patch adds
an explicit check for unfinished work on the stream and waits for it to
complete.
Summary:
Normally, the implementation of `puts` simply writes a second newline
character after printing the first string. However, because the GPU does
everything in batches of the SIMT group size, this will end up with very
poor output where you get the strings printed and then 1-64 newline
characters all in a row. Optimizations like to turn `printf` calls into
`puts` so it's a good idea to make this produce the expected output.
The least invasive way I could do this was to add a new opcode. It's a
little bloated, but it avoids an unnecessary and slow send operation to
configure this.
This patch adds initial lowering for DeclareTargetAttr on GlobalOps,
utilising registerTargetGlobalVariable and getAddrOfDeclareTargetVar
from the OMPIRBuilder.
It also adds initial processing of declare target map operands,
populating the combinedInfo that the OMPIRBuilder requires to generate
kernels and its kernel argument structure.
The combination of these additions allows simple mapping of declare
target globals to target regions, and a simple runtime test showcasing
and verifying this has been added.
The patch currently does not factor in filtering based on device_type
clauses (e.g. no emission of globals for device if host is specified);
this will come in a future iteration. For the moment it has only been
tested with 1-D arrays and basic Fortran data types; more complex types
(such as user-defined derived types from Fortran, allocatables, or
Fortran pointers) may need further work.
reviewers: kiranchandramohan, skatrak
Differential Revision: https://reviews.llvm.org/D149368
VE supports up to 64 threads per VE process, so we limit the number
of threads via KMP_MAX_NTH. We also modify the __kmp_sys_max_nth
initialization to use KMP_MAX_NTH as a limit.
This will make it easy for callers to see issues with and fix up calls
to createTargetMachine after a future change to the params of
TargetMachine.
This matches other nearby enums.
For downstream users, this should be a fairly straightforward
replacement,
e.g. s/CodeGenOpt::Aggressive/CodeGenOptLevel::Aggressive
or s/CGFT_/CodeGenFileType::
The /tmp fallback for /dev/shm did not write to a fixed filename, so
multiple instances of the runtime would not be able to detect each other.
Now, we create the /tmp file in much the same way as the /dev/shm file
was created, since the mkstemp approach would not work to create a file
that other instances of the runtime would detect. Also, add the
environment variable method as a third fallback to /dev/shm and /tmp for
library registration, as some systems do not have either. Also, add the
ability to fall back to a subsequent method should a failure occur during
any part of the registration process. When unregistering, it is assumed
that the method chosen during registration should work, so errors at that
point are ignored. This also avoids a problem with multiple threads
trying to unregister the library.
This commit removes an optimization that skips the initialization of the
reduction struct if the number of threads in a team is 1. This
optimization caused a bug with Hidden Helper Threads: when the task group
is initially initialized by the master thread but a Hidden Helper Thread
executes a target nowait region, it requires the reduction struct
initialization to properly accumulate the data.
This commit also adds a LIT test for issue #57522 to ensure that the
issue is properly addressed and that the optimization removal does not
introduce any regressions.
Fixes: #57522
Fixes: https://github.com/llvm/llvm-project/issues/65104
When a user assigns devices to target regions it may happen that
different identifiers will map onto the same id within different
plugins. This will lead to situations where callbacks will become much
harder to read, as ambiguous identifiers are reported.
We fix this by collecting the index offset upon general RTL
initialization, which in turn allows us to calculate the unique,
user-observable device id.
In omptarget.cpp, there is a targetDataMapper which does mapper-related
operations.
In interface.cpp, the targetDataMapper function template is simply needed
for handling TargetAsyncInfoTy. Thus it is better to give it a simpler
name, targetData, similar to targetKernel.
Change to use VE_LD_LIBRARY_PATH for VE instead of LD_LIBRARY_PATH. The
VE is connected to the host, and compiled test programs for VE are
invoked on the host and transferred to the VE. If programs are compiled
for the host, we use LD_LIBRARY_PATH. Otherwise, we use
VE_LD_LIBRARY_PATH.
Support the OpenMP runtime library on VE. This patch makes OpenMP
compilable for the VE architecture. Almost all tests run correctly on VE.
Reviewed By: tianshilei1992
Differential Revision: https://reviews.llvm.org/D159401
Based on https://en.cppreference.com/w/c/memory/aligned_alloc, the `size`
is supposed to be a multiple of `alignment`, and it is
implementation-defined behavior if it is not.
We have a non-conformant use in `kmp_barrier.h` when allocating the
distributed barrier. The size of the barrier is 576 and the alignment is
`4*CACHE_LINE`, which is 256 on most systems. Apparently it works
perfectly fine on Linux and Intel-based Mac, but not on Apple Silicon
based Mac.
Fixes #63194.
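A hedged sketch of the conformant pattern (assuming the alignment is a power of two): round the size up to a multiple of the alignment before calling aligned_alloc.
```
#include <cstdlib>

void *allocAligned(std::size_t Alignment, std::size_t Size) {
  // C requires Size to be a multiple of Alignment; round up to satisfy that.
  std::size_t Rounded = (Size + Alignment - 1) & ~(Alignment - 1);
  return std::aligned_alloc(Alignment, Rounded);
}
```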
Summary:
The `omp_get_num_procs()` function should return the amount of
parallelism available. On the GPU, this was not defined. We have elected
to define this function as the maximum number of wavefronts / warps that
can be simultaneously resident on the device. For AMDGPU this is the
number of CUs multiplied by the number of waves per CU. For NVPTX this is the
maximum threads per SM divided by the warp size and multiplied by the
number of SMs.
The function assumes that `__kmp_gtid_get_specific` always returns a valid gtid.
That is not always true, because when creating the key for thread-specific data,
a destructor is assigned. The dtor will be called at thread exit. However, before
the dtor is called, the thread-specific data will be reset to NULL first
(https://pubs.opengroup.org/onlinepubs/009695399/functions/pthread_key_create.html):
> At thread exit, if a key value has a non-NULL destructor pointer, and the thread
> has a non-NULL value associated with that key, the value of the key is set to NULL.
This leads to `__kmp_gtid_get_specific` returning `KMP_GTID_DNE`.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D159369
The outlined function is typically invoked by using
`__kmp_invoke_microtask`, which is written in asm. D138495 introduces a
new interface function for the parallel region in OpenMPIRBuilder, where
the outlined function is called via a function pointer. For some reason,
it works perfectly well on x86 and x86-64 systems, but doesn't work on
Apple Silicon. The 3rd argument in the callee is always `nullptr`, even
if it is not null in the caller. It appears `x2` always contains `0x0`.
This patch adopts the typical method to invoke the function pointer. It
works on my M2 Ultra Mac.
Fixes #63194.
This patch ensures that the locally built version of flang is used when building in-tree. `find_program` sometimes used the wrong executable if a different copy of flang was installed.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D159161
Previously, the test ran a section with
#pragma omp target thread_limit(4)
and expected it to execute exactly 4 times, even though it would
in practice execute min(cores, 4) times.
Increment a counter and check that it executed 1-4 times.
Differential Revision: https://reviews.llvm.org/D159311
This patch adds a test that uses a target region to set a scalar value. It also
adds rules in lit.cfg to handle fortran testing.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D159216
In `__kmp_get_global_thread_id`, if the gtid mode is 1, after getting the gtid
from TLS, it will store the gtid value to the thread stack maintained in the thread
descriptor. However, `__kmp_get_global_thread_id` can be called when the library
is destructed, after the corresponding thread info has been released. This
will cause a segmentation fault. This can happen on an Intel-based Mac.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D159324
This patch fixes the flang detection in the openmp fortran offloading test.
Reviewed By: jsjodin
Differential Revision: https://reviews.llvm.org/D158546
Update DeviceRTL and the AMDGPU plugin to support code
object version 5. Default is code object version 4.
CodeGen for __builtin_amdgcn_workgroup_size generates code
for cov4 as well as cov5 if -mcode-object-version=none
is specified. DeviceRTL compilation passes this argument
via Xclang option to generate abi-agnostic code.
Generated code for the above builtin uses a clang
control constant "llvm.amdgcn.abi.version" to branch on
the abi version, which is available during linking of
user's OpenMP code. Load of this constant gets eliminated
during linking.
AMDGPU plugin queries the ELF for code object version
and then prepares various implicitargs accordingly.
Differential Revision: https://reviews.llvm.org/D139730
Reviewed By: jhuber6, yaxunl
As discussed in the weekly OpenMP meeting on the second of August 2023, the default version
in the OpenMP documentation should be changed from OpenMP 5.0 to 5.1.
Differential Revision: https://reviews.llvm.org/D156901
At the moment Archer segfaults due to a null-pointer access, if an application
uses taskwait with depend clause as used in the two new tests.
This patch cleans up the task_schedule function, moves semantic blocks into
functions and replaces the if blocks by a single switch statement. The switch
statement will warn, when new enum values are added in OMPT and makes clear
what code is executed for the different cases.
With free-agent tasks coming up in OpenMP 6.0, we should expect more
null-pointer task_data, so additional null-pointer checks were added.
We also cannot rely on having an implicit task on the stack, so the
BarrierIndex is stored during task creation.
Differential Revision: https://reviews.llvm.org/D158072
Since td_allow_completion_event is a member of the taskdata struct, not all
firstprivate/shared variables are stored at the end of the task memory
allocation. Simply report the whole allocation instead.
Furthermore, the function should always return 0, since in no case is there
another block to report.
Differential Revision: https://reviews.llvm.org/D158080
offloading
- This patch adds support for the thread_limit clause on the target directive according to OpenMP 5.1 [2.14.5].
- The idea is to create an outer task for the target region when there is a thread_limit clause, and to manipulate the thread_limit of that task instead. This way, thread_limit will be applied to all the relevant constructs enclosed by the target region (see the sketch below).
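A small illustration of the clause's intended effect; the limit propagates to constructs enclosed by the target region:
```
void limited() {
#pragma omp target thread_limit(4)
#pragma omp parallel
  {
    // At most 4 threads execute here, regardless of the device default.
  }
}
```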
Differential Revision: https://reviews.llvm.org/D152054
D152014 introduced an optimization that favors more smaller blocks over
fewer larger blocks, even if user sets `thread_limit` explicitly. This patch changes
the behavior to honor user value.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D158802
The changes in D157738 allowed for us to emit stub globals on the device
in the offloading entry section. These globals contain the addresses of
device functions and allow us to map host functions to their
corresponding device equivalent. This patch provides the initial support
required to build a table on the device to lookup the associated value.
This is done by finding these entries and creating a global table on the
device that can be searched with a simple binary search.
This requires an allocation, which supposedly should be automatically
freed at plugin shutdown. This includes a basic test which looks up device
pointers via a host pointer using the added function. This will need to be built
upon to provide full support for these calls in the runtime.
To support reverse offloading it would also be useful to provide a reverse table
that allows us to get host functions from device stubs.
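A hedged sketch of the device-side lookup described above (struct and function names are illustrative): the table is sorted by host pointer, so a plain binary search recovers the matching device pointer.
```
#include <cstddef>
#include <cstdint>

struct OffloadEntryPair {
  void *HostPtr;
  void *DevicePtr;
};

void *lookupDevicePtr(const OffloadEntryPair *Table, size_t Count, void *HostPtr) {
  size_t Lo = 0, Hi = Count;
  while (Lo < Hi) {
    size_t Mid = Lo + (Hi - Lo) / 2;
    if ((uintptr_t)Table[Mid].HostPtr < (uintptr_t)HostPtr)
      Lo = Mid + 1;
    else
      Hi = Mid;
  }
  if (Lo < Count && Table[Lo].HostPtr == HostPtr)
    return Table[Lo].DevicePtr;
  return nullptr; // no device counterpart registered for this host pointer
}
```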
Depends on D157738
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D157918
Before, we checked the parallel region only once, and ignored updates in
the KernelInfo for the parallel region that happened later. This caused
us to think nested parallel sections are not present even if they are,
among other things.
Make the generic-plugin report a corresponding CU kind -- instead of 'unknown'.
Reviewed By: tianshilei1992
Differential Revision: https://reviews.llvm.org/D158542
The old code did not account for new queries during an update, which
caused us to leave stack RQIs in the map. We are now explicit about
temporary vs non-temporary RQIs.
Fixes: https://github.com/llvm/llvm-project/issues/64959
Using LIBOMPTARGET_PROFILER, duplicates are created from timing both Kernel functions and Data update functions.
I commented out the duplicate timescope and left them in the targetkernel and the targetdataupdate functions. This
way the timescope calls will be closer to the launching of the kernel and the data moving.
Reviewed By: jdoerfert, tianshilei1992
Differential Revision: https://reviews.llvm.org/D157725
When we used to treat the kernel end as an aligned barrier, assertions
at the end made sense. Now, they actually cause problems as the "writes"
are not ordered with regards to reads within the kernel. We can simply
get rid of them.
We are having a hard time optimizing some vectorized loads/stores later
on which causes this optimization to degrade performance.
Differential Revision: https://reviews.llvm.org/D158656
We used to have two separate implementations to derive the number of
threads used in a target region. This led us to sometimes miss out on
user provided thread bounds (num_threads, or thread_limit) when we
looked for "constant default values". If we might miss out on the
presence of those bounds, we cannot set the thread_limit statically
since the runtime will try to honor user input rather than cap it at the
"preferred default". This patch replaces the secondary implementation
with the primary in a mode that will not emit code but just look for the
presence, and potentially upper bounds, of thread limiting clauses.
The runtime test would not pass without this rewrite as we missed some
clauses, set the static limit on the device to the preferred value, but
then violated that value at runtime.
Fixes: https://github.com/llvm/llvm-project/issues/64845
Differential Revision: https://reviews.llvm.org/D158381
A few places in the loop collapse support code make small dynamic allocations
that introduce a noticeable performance overhead when made on the heap.
This change moves allocations up to 32 bytes to the stack instead of the heap.
Differential Revision: https://reviews.llvm.org/D158220
This change has the primary thread create each thread's initial mask
and topology information so it is available immediately after
forking. The setting of mask/topology information is decoupled from the
actual binding. Also add this setting of topology information inside the
__kmp_partition_places mechanism for OMP_PLACES+OMP_PROC_BIND.
Without this, there could be a timing window after the primary
thread signals the workers to fork where worker threads have not yet
established their affinity mask or topology information.
Each worker thread will then bind to the location the primary thread
sets.
Differential Revision: https://reviews.llvm.org/D156727
This patch fixes: https://github.com/llvm/llvm-project/issues/64738
We observed multiple issues, primarily that the `DeviceId` was reported as -1
in certain scenarios. The reason for this is simply that the device is not
initialized at that point. Hence, we need to move the RAII object creation just
after the `checkDeviceAndCtors`, closer to the actual call we want to observe.
This also solves an ordering issue where one `target enter data` callback would
be executed before the `Init` callback.
Additionally, this change will also fix that the callbacks corresponding to
`enter / exit data` and `update` in conjunction with `nowait` would not result
in the emission of an OMPT callback.
Added a testcase to cover initialized device number and `omp target` constructs.
Reviewed By: dhruvachak
Differential Revision: https://reviews.llvm.org/D157605
AAIndirectCallInfo will collect information and specialize indirect call
sites. It is similar to our IndirectCallPromotion but runs as part of
the Attributor (so with assumed callee information). It also expands
more calls and lets the rest of the pipeline figure out what is UB, for
now. We use existing call promotion logic to improve the result,
otherwise we rely on the (implicit) function pointer cast.
This effectively "fixes" #60327 as it will undo the type punning early
enough for the inliner to work with the (now specialized, thus direct)
call.
Fixes: https://github.com/llvm/llvm-project/issues/60327
This change adds the option of using different units for blocktimes specified via the KMP_BLOCKTIME environment variable. The parsing of the environment now recognizes units suffixes: ms and us. If a units suffix is not specified, the default unit is ms. Thus default behavior is still the same, and any previous usage still works the same. Internally, blocktime is now converted to microseconds everywhere, so settings that exceed INT_MAX in microseconds are considered "infinite".
kmp_set/get_blocktime are updated to use the units the user specified with KMP_BLOCKTIME, and if not specified, ms are used.
Added better range checking and inform messages for the two time units. Large values of blocktime for default (ms) case (beyond INT_MAX/1000) are no longer allowed, but will autocorrect with an INFORM message.
The delay for determining ticks per usec was lowered. It is now 1 million ticks which was calculated as ~450us based on 2.2GHz clock which is pretty typical base clock frequency on X86:
(1e6 Ticks) / (2.2e9 Ticks/sec) * (1e6 usec/sec) = 454 usec
Really short benchmarks can be affected by longer delay.
Update KMP_BLOCKTIME docs.
Portions of this commit were authored by Johnny Peyton.
Differential Revision: https://reviews.llvm.org/D157646
This patch fixes: https://github.com/llvm/llvm-project/issues/64671
DataOp EMI callbacks would not report the correct target pointer.
This is now alleviated by passing a `void**` into the function which
emits the actual callback, then evaluating that pointer.
Note: Since this is only done after the pointer has been properly
updated, only `endpoint=2` callbacks will show a non-null value.
Reviewed By: dhruvachak, jdoerfert
Differential Revision: https://reviews.llvm.org/D157996
This patch allows us to configure the port count to what the specific
card would desire for parallelism. For AMDGPU we need to use the maximum
amount of hardware parallelism to avoid deadlocks. For NVPTX we don't
have this problem due to the friendlier scheduler, so we use the number
of warps active on an SM times the number of SMs as a good guess.
Note that the max ports currently is going to be smaller than these
numbers. That will be improved in the future.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D155903
If the Envar is set to true (default), busy HSA queues will be
actively avoided when assigning a queue to a Stream.
Otherwise, we will initialize a new HSA queue for each requested
Stream, then default to round robin once the set maximum has been
reached.
Reviewed By: jdoerfert, kevinsala
Differential Revision: https://reviews.llvm.org/D156996
This patch modifies the plugins so that the initialization of KernelTy objects
is done in the init method. Part of the initialization was done in the
constructKernelEntry method. Now this method is called constructKernel
and only allocates and constructs a KernelTy object.
This patch prepares the kernel class for the new implementation of device
reductions.
Differential Revision: https://reviews.llvm.org/D156917
Enable record-replay for OpenMP offload kernels. When recording, initialization
is performed at device initialization time by reading env variables. (This is
similar to the way rr used to operate.) The primary change takes place in the replay phase
with the replay tool explicitly initializing the record-replay functionality.
Differential Revision: https://reviews.llvm.org/D156174
Fix
There is no `threadDim` in CUDA; instead, it is `blockDim`. Likewise, the
current `blockDim` is `gridDim` in CUDA.
Reviewed By: jhuber6
Differential Revision: https://reviews.llvm.org/D157051
This feature was supposed to allow you to trace execution inside of
Libomptarget. However, this never really worked properly. The printing
was always disorganized, only worked for single threads, and pretty much
only told you a handful of things about a runtime library that's an
implementation detail to all users. Despite this, it contributed about
40% of the total filesize of the deviceRTL. This patch simply removes
this functionality, which I think was past due.
Reviewed By: tianshilei1992
Differential Revision: https://reviews.llvm.org/D157001
The virtual functions getDefaultNumBlocks and getDefaultNumThreads from the kernels are
only forwarding the call to the generic device's ones. This patch removes those two
functions from the kernels (and their derived ones). Now calls are made to the device's
functions directly.
Differential Revision: https://reviews.llvm.org/D156905
This patch lazily initializes queues/streams/events since their initialization
might come at a cost even if we do not use them.
To further benefit from this, AMDGPU/HSA queue management is moved into the
AMDGPUStreamManager of an AMDGPUDevice. Streams may now use different HSA queues
during their lifetime and identify busy queues.
When a Stream is requested from the resource manager, it will search for and
try to assign an idle queue. During the search for an idle queue the manager
may initialize more queues, up to the set maximum (default: 4).
When no idle queue could be found: resort to round robin selection.
With contributions from Johannes Doerfert <johannes@jdoerfert.de>
Depends on D156245
Reviewed By: kevinsala
Differential Revision: https://reviews.llvm.org/D154523
The tests run fine on my AMD GPU machine; we should verify them on others
too and put them into our regular testing. Not testing O1/2/3 is really
bad, and not testing all architectures is similarly problematic.
Differential Revision: https://reviews.llvm.org/D148576
The new ompx.h header will give us a place to put extensions. The first
additions are 3D getters for the common CUDA values:
`{threadId,threadDim,blockId,blockDim}.{x,y,z}`
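A hedged usage sketch; the accessor spellings below are assumptions based on the naming above and may not match the shipped header exactly:
```
#include <ompx.h>

// Hypothetical accessors following the {threadId,threadDim,blockId}.{x,y,z}
// naming above: compute a global thread id in the x dimension.
int globalThreadIdX() {
  return ompx_block_id_x() * ompx_thread_dim_x() + ompx_thread_id_x();
}
```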
Differential Revision: https://reviews.llvm.org/D156501