Mirror of https://github.com/capstone-engine/llvm-capstone.git (synced 2024-12-15 12:09:51 +00:00)

Commit 2f086f265b
X. Sun et al. (https://dl.acm.org/doi/10.5555/3454287.3454728) published a paper showing that an FP format with 4 bits of exponent, 3 bits of significand, and an exponent bias of 11 would work quite well for ML applications.

Google hardware supports a variant of this format where 0x80 is used to represent NaN, as in the Float8E4M3FNUZ format. Just like the Float8E4M3FNUZ format, this format does not support -0, and values that would map to it become +0.

This format is proposed for inclusion in OpenXLA's StableHLO dialect: https://github.com/openxla/stablehlo/pull/1308

As part of inclusion in that dialect, APFloat needs to know how to handle this format.

Differential Revision: https://reviews.llvm.org/D146441
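To make the layout concrete, here is a minimal decoder sketch derived from the format as described above (1 sign bit, 4 exponent bits, 3 significand bits, bias 11, 0x80 reserved for NaN, no -0 and no infinities). The function name `decodeFloat8E4M3B11FNUZ` is hypothetical and this is not the APFloat implementation, just an illustration of the bit-level semantics.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Illustrative decoder for the Float8E4M3B11FNUZ layout described in the
// commit message. Assumptions: 1 sign bit, 4 exponent bits, 3 significand
// bits, exponent bias 11; 0x80 (the would-be -0 encoding) is NaN; there are
// no infinities, so the all-ones exponent encodes ordinary finite values.
double decodeFloat8E4M3B11FNUZ(uint8_t bits) {
  if (bits == 0x80) // The -0 bit pattern is repurposed as the only NaN.
    return std::nan("");
  double sign = (bits & 0x80) ? -1.0 : 1.0;
  unsigned exponent = (bits >> 3) & 0xF; // 4-bit exponent field
  unsigned mantissa = bits & 0x7;        // 3-bit significand field
  if (exponent == 0) // Subnormal: no implicit leading 1, exponent is 1-bias.
    return sign * std::ldexp(mantissa / 8.0, 1 - 11);
  return sign * std::ldexp(1.0 + mantissa / 8.0, (int)exponent - 11);
}

int main() {
  // With no infinity encoding, 0x7F is the largest finite value: 2^4 * 1.875.
  std::printf("max finite   = %g\n", decodeFloat8E4M3B11FNUZ(0x7F)); // 30
  std::printf("min subnorm  = %g\n", decodeFloat8E4M3B11FNUZ(0x01)); // 2^-13
  std::printf("0x80 is NaN? %d\n",
              std::isnan(decodeFloat8E4M3B11FNUZ(0x80)) ? 1 : 0);
  return 0;
}
```

Under these assumptions the format spans roughly 2^-13 (smallest subnormal) to 30 (largest finite value), which is the practical consequence of trading exponent range for the bias of 11 the paper proposes.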