commit 2f086f265b
Author: David Majnemer
Date:   2023-03-24 20:06:40 +00:00

[APFloat] Add E4M3B11FNUZ

X. Sun et al. (https://dl.acm.org/doi/10.5555/3454287.3454728) published
a paper showing that an FP format with 4 bits of exponent, 3 bits of
significand, and an exponent bias of 11 works well for ML applications.
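
For intuition, here is a minimal by-hand decoder (not part of this
patch; APFloat is the real implementation) under the layout the format
implies: 1 sign bit, 4 exponent bits, 3 significand bits, bias 11:

  #include <cmath>
  #include <cstdint>
  #include <cstdio>

  // Illustrative sketch only: decode one 8-bit E4M3B11FNUZ pattern.
  double decodeE4M3B11FNUZ(uint8_t bits) {
    if (bits == 0x80)              // the single NaN encoding (see below)
      return NAN;
    bool neg = bits & 0x80;
    int exp = (bits >> 3) & 0xF;   // 4 exponent bits
    int mant = bits & 0x7;         // 3 significand bits
    double val = exp == 0
        ? std::ldexp(mant / 8.0, 1 - 11)        // subnormal: no hidden 1
        : std::ldexp(1.0 + mant / 8.0, exp - 11); // normal: bias 11
    return neg ? -val : val;
  }

  int main() {
    std::printf("%g\n", decodeE4M3B11FNUZ(0x7F)); // max finite: 2^4 * 1.875 = 30
    std::printf("%g\n", decodeE4M3B11FNUZ(0x01)); // min subnormal: 2^-13
  }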

Google hardware supports a variant of this format in which, as in
Float8E4M3FNUZ, 0x80 represents NaN. Also like Float8E4M3FNUZ, the
format does not support -0; values that would map to -0 become +0
instead.
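
A sketch of how that behavior surfaces through APFloat (hedged: it
assumes the Float8E4M3B11FNUZ() semantics getter this patch introduces
and APFloat's usual bit-pattern constructor):

  #include "llvm/ADT/APFloat.h"
  #include "llvm/ADT/APInt.h"
  #include <cassert>
  using namespace llvm;

  void checkFNUZBehavior() {
    const fltSemantics &Sem = APFloat::Float8E4M3B11FNUZ();
    // The 0x80 bit pattern decodes to NaN rather than to -0.
    APFloat NaN(Sem, APInt(8, 0x80));
    assert(NaN.isNaN());
    // Requesting a negative zero yields +0: the format has no -0.
    APFloat Zero = APFloat::getZero(Sem, /*Negative=*/true);
    assert(Zero.isPosZero());
  }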

This format is proposed for inclusion in OpenXLA's StableHLO dialect: https://github.com/openxla/stablehlo/pull/1308

As part of its inclusion in that dialect, APFloat needs to know how to
handle this format.
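
For instance, rounding a double into the new semantics is an ordinary
APFloat conversion (a hedged sketch under the same naming assumption as
above):

  #include "llvm/ADT/APFloat.h"
  using namespace llvm;

  void roundIntoE4M3B11FNUZ() {
    APFloat V(0.3);
    bool LosesInfo = false;
    V.convert(APFloat::Float8E4M3B11FNUZ(),
              APFloat::rmNearestTiesToEven, &LosesInfo);
    // 0.3 is not exactly representable in 3 significand bits, so
    // LosesInfo becomes true and V holds the nearest format value.
  }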

Differential Revision: https://reviews.llvm.org/D146441