Commit Graph

1405 Commits

Author SHA1 Message Date
Ryan Houdek
17dc03d414
AVX128: Implement support for vpack{s,u}{wb,dw} 2024-06-21 08:11:21 -07:00
Ryan Houdek
baf699c6e1
AVX128: Implements support for vandnps and vpandn
This can't use the previous binary operator handler since the register
sources need to be swapped.
2024-06-21 08:11:21 -07:00
Ryan Houdek
1431af1ff5
AVX128: Implements support for vcvt{t,}s{s,d}2si 2024-06-21 08:11:21 -07:00
Ryan Houdek
775a41b903
AVX128: Implement support for vcvtsi2s{s,d} 2024-06-21 08:11:21 -07:00
Ryan Houdek
283c2861c9
AVX128: Implement support for vlddqu 2024-06-21 00:56:36 -07:00
Ryan Houdek
757dc95116
AVX128: Implement support for the punpckh instructions 2024-06-21 00:56:32 -07:00
Ryan Houdek
6192250b8a
AVX128: Implement support for the punpckl instructions 2024-06-21 00:56:28 -07:00
Ryan Houdek
f489135b1d
Merge pull request #3734 from Sonicadvance1/avx_8
AVX128: Move moves!
2024-06-21 00:53:41 -07:00
Ryan Houdek
3f232e631e
Merge pull request #3730 from Sonicadvance1/avx_4
Vector: Helper refactorings
2024-06-21 00:31:14 -07:00
Ryan Houdek
6e3643c3ef
Merge pull request #3714 from pmatos/FSTstiTagSet
Set tag properly in X87 FST(reg)
2024-06-21 00:27:24 -07:00
Ryan Houdek
c28824f94d
AVX128: Implements support for vbroadcast* 2024-06-20 09:43:10 -07:00
Ryan Houdek
664d766b45
AVX128: Implement support for vmovshdup 2024-06-20 09:43:10 -07:00
Ryan Houdek
fce694ed92
AVX128: Implement support for vmovsldup 2024-06-20 09:43:10 -07:00
Ryan Houdek
96aafb4f07
AVX128: Implement support for vmovddup
This instruction is a little weird.
When accessing memory, the 128-bit operating size of the instruction
only loads 64-bits.
Meanwhile the 256-bit operating size of the instruction fetches a full
256-bits.

Theoretically the hardware could get away with two 64-bit loads or a
wacky 24-byte load, but it looks like, to simplify the hardware, they
just spec'd that the 256-bit version always loads the full range.
2024-06-20 09:43:10 -07:00
Ryan Houdek
dbaf95a8f3
AVX128: Implement support for vmovhps/d 2024-06-20 06:53:21 -07:00
Ryan Houdek
e67df96ad9
AVX128: Implement support for movlps/d 2024-06-20 06:53:17 -07:00
Ryan Houdek
56de94578d
AVX128: Implement support for vmovq 2024-06-20 06:53:13 -07:00
Ryan Houdek
06fc2f5ef0
AVX128: Implement support for non-temporal moves. 2024-06-20 06:53:09 -07:00
Ryan Houdek
b3ba315cbd
AVX128: Implements unary/binary lambda helper 2024-06-20 06:53:05 -07:00
Ryan Houdek
e5a531e683
Vector: Refactor MPSADBWOpImpl so AVX128 can use it. 2024-06-20 06:43:57 -07:00
Ryan Houdek
e2de57bd04
Vector: Refactor PSADBWOpImpl so AVX128 can use it. 2024-06-20 06:43:57 -07:00
Ryan Houdek
4eebca93e3
Vector: Refactor PSHUFBOpImpl. This will be reused for AVX128 2024-06-20 06:33:27 -07:00
Ryan Houdek
3919ec9692
Vector: Expose VBLENDOpImpl in the OpcodeDispatcher. It will be reused by AVX128 2024-06-20 06:33:21 -07:00
Ryan Houdek
02aeb0ac1a
Vector: Restructure PMADDWDOpImpl. It's going to get reused for AVX128 2024-06-20 06:33:15 -07:00
Ryan Houdek
206544ad09
Vector: Reconfigure PMADDUBSWOpImpl, it's going to get reused for AVX128 2024-06-20 06:33:08 -07:00
Ryan Houdek
3854cd2b2f
Vector: Restructure SHUFOpImpl. AVX128 is going to reuse it. 2024-06-20 06:32:58 -07:00
Ryan Houdek
acbd920c9a OpcodeDispatcher: Adds initial groundwork for decomposed AVX operations
Only installs the tables if SVE256 isn't supported but AVX is explicitly
enabled with HostFeatures, to guard against accidental early enablement.

- Only implements 85 instructions starting out
- Basic vector moves
- Basic vector unary operations
- Basic vector binary operations
- VZeroUpper/VZeroAll

The bulk of the implementation is currently the handling for loading and
storing the halves of the registers from the context or from memory.

This means the load/store helpers must always return a pair unless only
the bottom half of the register is requested, which happens with 128-bit
AVX operations. The store side then needs to consume the named zero
register when it shows up, since those cases zero the upper bits.

This implementation approach has a few benefits.
- I can pound this out extremely quickly
- SSE implementations are unaffected and don't need to deal with the
  insert behaviour of SVE256.
- We still keep the SVE256 implementation for the inevitable future when
  hardware vendors actually do implement it (Give it 8 years or
  something).
- We can actually unit test this path in CI once it is complete.
- We can partially optimize some paths with SVE128 (Gathers) and support
  a full ASIMD path if necessary.

One downside is that I can't enable this in CI yet because it can't pass
all unittests, but that's a non-issue since it is going to be in heavy
flux as I'm hammering out the implementation. It'll get switched on at
the end once it's passing all 1265 AVX unittests. Currently at 1001 on
this.
2024-06-20 08:44:14 -04:00
Alyssa Rosenzweig
db0bdd48e5
Merge pull request #3729 from alyssarosenzweig/refactor/address-modes
OpcodeDispatcher: Refactor address modes
2024-06-20 08:18:33 -04:00
Ryan Houdek
da21ee3cda
Merge pull request #3692 from pmatos/AFP_RPRES_fix
Fixes AFP.NEP handling on scalar insertions
2024-06-19 19:23:49 -07:00
Ryan Houdek
d2baef2b36
Merge pull request #3727 from Sonicadvance1/vaes
VAES support
2024-06-19 19:22:56 -07:00
Ryan Houdek
df96bc83cc
Merge pull request #3726 from Sonicadvance1/oryon_errata
HostFeatures: Work around Qualcomm Oryon RNG errata
2024-06-19 19:21:14 -07:00
Alyssa Rosenzweig
ec03831a21 OpcodeDispatcher: plumb A.NonTSO deeper
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-06-19 08:52:07 -04:00
Alyssa Rosenzweig
9ca821316a OpcodeDispatcher: factor out DecodeAddress
this is the common guts of the load/store routines.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-06-19 08:52:07 -04:00
Alyssa Rosenzweig
025a060337 OpcodeDispatcher: extract IsNonTSOReg
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-06-19 08:52:07 -04:00
Alyssa Rosenzweig
371d6f0730 OpcodeDispatcher: extract IsOperandMem
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-06-19 08:52:07 -04:00
Ryan Houdek
643bc10d52
CPUID: Expose VAES if supported 2024-06-19 05:51:47 -07:00
Ryan Houdek
542ed8b6ad
Implement support for querying AES256 support
This is a different feature flag from regular AES, as the default
AES+AVX combination only operates on 128-bit wide vectors.

With the newer `VAES` extension this is expanded to 256-bit.
2024-06-19 05:51:47 -07:00
Paulo Matos
2483329ef6 Fixes AFP.NEP handling on scalar insertions
Fixes #3690

When doing scalar insertions, upper bits come from different arguments
depending on the operation. These are listed in the ARM spec under the
NEP bit documentation.
2024-06-19 10:02:54 +02:00
Paulo Matos
359221b379 Set tag properly in X87 FST(reg) 2024-06-19 10:02:05 +02:00
Paulo Matos
f9b38a1de7 FXCH should set C1 to zero 2024-06-19 08:57:48 +02:00
Ryan Houdek
67e1ac0442
Merge pull request #3725 from alyssarosenzweig/ir/vbic
IR: rename _VBic -> _VAndn
2024-06-18 16:34:26 -07:00
Ryan Houdek
c57e9e008f
Merge pull request #3723 from alyssarosenzweig/fexcore/zero-helper
OpcodeDispatcher: refactor zero vector loads
2024-06-18 16:34:15 -07:00
Ryan Houdek
b34c23fe3d
HostFeatures: Work around Qualcomm Oryon RNG errata
The Oryon is the first CPU we know of that implements support for the
RNG extension. It also has an erratum where reading the RNDRRS register
never returns success, whereas x86's RDSEED guarantees forward progress
with enough retries.

When an x86 processor messed this up at one point, some Linux systems
would infinite loop (presumably when something in boot was filling an
entropy pool). This required a microcode change to fix that processor.

The rdseed unittest infinite loops on this platform if the RNG extension is exposed.
2024-06-18 16:29:53 -07:00
Alyssa Rosenzweig
01da5972fc IR: rename _VBic -> _VAndn
to be consistent with the scalar _Andn opcode, which is specifically named _Andn
and not _Bic.

noticed while reviewing AVX patches

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-06-18 14:00:01 -04:00
Ryan Houdek
bf812aae8f CoreState: Adds avx_high structure for tracking decoupled AVX halves.
Needed something inbetween the `InlineJITBlockHeader` and `avx_high` in
order to match alignment requirements of 16-byte for avx_high. Chose the
`DeferredSignalRefCount` because we hit it quite frequently and it is
basically the only 64-bit variable that we end up touching
significantly.

In the future the CPUState object is going to need to change its view of
the object depending on if the device supports SVE256 or not, but we
don't need to frontload the work right now. It'll become significantly
easier to support that path once the RCLSE pass gets deleted.
2024-06-18 12:00:45 -04:00
Ryan Houdek
9a71443005 CoreState: Adds a gregs offset check
The gregs offset is required to be within the maximum immediate range
for LDP and STP in the Arm64 Dispatcher, otherwise it breaks. It's
necessary to ensure this when reorganizing the CoreState.
2024-06-18 12:00:45 -04:00
Ryan Houdek
ee165249bc Dispatcher: Fix ARM64EC
We don't have CI for this, so it was missed.
2024-06-18 12:00:45 -04:00
Alyssa Rosenzweig
af8cfb79e5 OpcodeDispatcher: refactor zero vector loads
AVX128 is going to slam this, so make it more ergonomic.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-06-18 11:44:46 -04:00
Ryan Houdek
13ebfb1a49
Merge pull request #3711 from Sonicadvance1/avx128_2
FEXCore: Disentangle the SVE256 feature from AVX
2024-06-17 17:35:15 -07:00
Ryan Houdek
f863b30951
Merge pull request #3716 from alyssarosenzweig/ir-dump/unrecoverable
json_ir_generator: don't print unrecoverable temps
2024-06-17 17:25:27 -07:00