1633 Commits

Author SHA1 Message Date
Alyssa Rosenzweig
72d41d70b6 OpcodeDispatcher: introduce GPR-only reg cache
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-10 11:36:18 -04:00
Alyssa Rosenzweig
42b5b1f64c Core: partially flush register cache per instruction
This will mitigate problems later.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-10 11:36:18 -04:00
Alyssa Rosenzweig
2949bc211d OpcodeDispatcher: thunk through FlushRegisterCache
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-10 11:36:18 -04:00
Ryan Houdek
72d6c8ebd6
Merge pull request #3820 from alyssarosenzweig/ir/drop-deferred
Drop deferred flag infrastructure
2024-07-09 17:06:25 -07:00
Ryan Houdek
991c6941c1
Merge pull request #3849 from alyssarosenzweig/ir/drop-parser-2
Scripts: drop remnant of IR parser
2024-07-09 16:48:36 -07:00
Alyssa Rosenzweig
f974696e34 Scripts: drop remnant of IR parser
unused.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-09 16:08:38 -04:00
Mai
af6a0be832
Merge pull request #3842 from Sonicadvance1/fix_f64_to_i32
VCVT{T,}PD2DQ fixes and optimization
2024-07-09 03:49:31 -04:00
Ryan Houdek
b9c214e6e8
OpcodeDispatcher: Use new IR op for vcvt{t,}pd2dq
Also fixes a bug where the AVX128 implementation was failing to zero the
upper bits of the destination register, which the updated unit tests now
check against.

Fixes a minor precision issue that was reported in #2995. We still don't
return correct values for overflow: x86 always returns maximum negative
int32_t on overflow, while ARM returns maximum negative or positive
depending on the sign of the double.
2024-07-09 00:38:47 -07:00
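
For context, a minimal scalar sketch of the divergence described above;
illustrative C++ only, not FEX source, and the function names are made up:

```cpp
#include <cmath>
#include <cstdint>
#include <limits>

// x86 CVTT{P,S}D2DQ: any unrepresentable input (overflow or NaN)
// produces the "integer indefinite" value, INT32_MIN.
int32_t X86TruncF64ToI32(double Value) {
  if (std::isnan(Value) || Value >= 2147483648.0 || Value < -2147483648.0) {
    return std::numeric_limits<int32_t>::min();
  }
  return static_cast<int32_t>(Value);
}

// ARM fcvtzs: saturates toward the sign of the input; NaN converts to 0.
int32_t ARMTruncF64ToI32(double Value) {
  if (std::isnan(Value)) return 0;
  if (Value >= 2147483648.0) return std::numeric_limits<int32_t>::max();
  if (Value <= -2147483649.0) return std::numeric_limits<int32_t>::min();
  return static_cast<int32_t>(Value);
}
```
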
Ryan Houdek
d3d76aa8ce
IR: Adds new F64 -> I32 operation that changes behaviour depending on SVE
SVE added the ability to do F64 -> I32 conversions directly without an
fcvtn in between, so make sure to support them.
2024-07-09 00:38:47 -07:00
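
A hedged sketch of the two lowerings this op can pick between; the exact
instruction selection is an assumption, not lifted from the FEX emitter:

```cpp
// ASIMD host: no direct F64 -> I32 vector conversion, so narrow first.
//   fcvtn  v0.2s, v1.2d       // F64 -> F32: the fcvtn step in between
//   fcvtzs v0.2s, v0.2s       // F32 -> I32
//
// SVE host: convert directly, skipping the intermediate rounding step.
//   fcvtzs z0.s, p0/m, z1.d   // F64 -> I32 inside each 64-bit container
```
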
Ryan Houdek
3bea08da5f
Merge pull request #3843 from Sonicadvance1/remove_half_moves_fma3
Arm64: Remove one move if possible in FMA operations
2024-07-09 00:25:07 -07:00
Ryan Houdek
b3a7a973a1
AVX128: Extends 32-bit indexes path for 128-bit operations
The codepath from #3826 was only targeting 256-bit sized operations.
This missed the vpgatherdq/vgatherdpd 128-bit operations. By extending
the codepath to understand 128-bit operations, we now hit these
instruction variants.

With this PR, we now have SVE128 codepaths that handle ALL variants of
x86 gather instructions! There are zero ASIMD fallbacks used in this
case!

Of course, depending on the instruction, the performance still leaves a
lot to be desired, and there is no way to emulate x86 TSO behaviour
without an ASIMD fallback, which we will likely need to add at some
point.

Based on #3836 until that is merged.
2024-07-08 18:44:07 -07:00
Ryan Houdek
4afbfcae17
AVX128: Optimize the vpgatherdd/vgatherdps cases that would fall back to ASIMD
With the introduction of the wide gathers in #3828, new avenues have
opened for optimizing the cases that would typically fall back to ASIMD.
In the cases where 32-bit SVE scaling doesn't fit, we can instead sign
extend the elements into double-width address registers.

This then feeds naturally into the SVE path even though we end up
needing to allocate 512-bits worth of address registers. It still ends
up significantly better than the ASIMD path.

Relies on #3828 to be merged first
Fixes #3829
2024-07-08 18:12:28 -07:00
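
A scalar sketch of the widening idea, with illustrative names; eight
dword indices become 512 bits of 64-bit address elements:

```cpp
#include <cstdint>

// When SVE's 32-bit scaled-offset form can't express the scale, sign
// extend each 32-bit index into a full 64-bit address element. For a
// 256-bit vpgatherdd that is 8 x 64 = 512 bits of address registers,
// but the load stays on the SVE gather path instead of the ASIMD
// fallback.
void WidenIndices(const int32_t Index[8], int64_t Addr[8]) {
  for (int i = 0; i < 8; ++i) {
    Addr[i] = static_cast<int64_t>(Index[i]);
  }
}
```
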
Ryan Houdek
ec7c8fd922
AVX128: Optimize QPS/QD variant of gather loads!
SVE has a special version of their gather instruction that gets similar
behaviour to x86's VGATHERQPS/VPGATHERQD instructions.

The quirk of these instructions, which the previous SVE implementation
didn't handle and so required the ASIMD fallback, is that most gather
instructions require the data element size and address element size to
match. This x86 instruction uses a 64-bit address size while loading
32-bit elements. That matches this specific variant of the SVE
instruction, but the data is zero-extended once loaded, requiring us to
shuffle the data afterwards.

This isn't the worst, but the implementation is different enough that
stuffing it into the other gather load helper would cause headaches.

Basically gets 32 instruction variants to use the SVE version!

Fixes #3827
2024-07-08 17:19:18 -07:00
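
A scalar reference for the quirk described above; illustrative C++, not
the FEX implementation, and it assumes a little-endian host:

```cpp
#include <cstdint>
#include <cstring>

// VGATHERQPS-style semantics: four 64-bit indices load four 32-bit
// elements. The matching SVE gather zero-extends each loaded element
// into a 64-bit container, so a post-load shuffle is needed to compact
// the results into contiguous 32-bit lanes like x86 expects.
void GatherQPS(float Dst[4], const uint8_t* Base, const int64_t Index[4],
               uint32_t Scale) {
  uint64_t Container[4]; // what the SVE load leaves in the register
  for (int i = 0; i < 4; ++i) {
    uint32_t Elem;
    std::memcpy(&Elem, Base + Index[i] * Scale, sizeof(Elem));
    Container[i] = Elem; // zero-extended into the 64-bit container
  }
  // The shuffle: pack the low 32 bits of each container together
  // (low 4 bytes on a little-endian host).
  for (int i = 0; i < 4; ++i) {
    std::memcpy(&Dst[i], &Container[i], sizeof(float));
  }
}
```
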
Ryan Houdek
c5a0ae7b34
IR: Adds new QPS gather load variant!
2024-07-08 17:19:18 -07:00
Ryan Houdek
4bd207ebf3
Arm64: Moves 128Bit gather ASIMD emulation to its own helper
It is going to get reused.
2024-07-08 17:19:18 -07:00
Mai
aad7656b38
Merge pull request #3826 from Sonicadvance1/scale_32bit_gather
AVX128: Extend 32-bit address indices when possible
2024-07-08 15:29:44 -04:00
Ryan Houdek
62cec7b6b2
Arm64: Remove one move if possible in FMA operations
If the destination isn't any of the incoming sources then we can avoid
one of the moves at the end. This half works around the problem described
in #3794, but doesn't solve the entire problem.

Solving the other half of the move problem requires solving the SRA
allocation problem for the addsub/subadd temporary register, so that it
gets allocated for both the FMA operation and the XOR operation.
2024-07-08 04:44:40 -07:00
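
A sketch of the emitter idea in illustrative assembly; register numbers
are made up, not the actual emitted code:

```cpp
// ARM's FMLA accumulates in place (Vd += Vn * Vm), so the addend has to
// start in whichever register holds the result.
//
// Destination aliases one of the sources: build in a temp, then move.
//   mov  v31.16b, v0.16b       // addend into a temporary
//   fmla v31.4s, v1.4s, v2.4s  // v31 += v1 * v2
//   mov  v1.16b, v31.16b       // the trailing move
//
// Destination is none of the sources: seed it directly, saving a move.
//   mov  v3.16b, v0.16b        // addend straight into the destination
//   fmla v3.4s, v1.4s, v2.4s   // v3 += v1 * v2, done
```
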
Ryan Houdek
0653b346e0
CPUID: Adds a few missing CPU names for new CPU cores
These should be making their way to the market sooner rather than later,
so make sure we have the descriptor text for them.
2024-07-07 02:40:19 -07:00
Ryan Houdek
df40515087
AVX128: Extend 32-bit address indices when possible
When loading 256-bits of data with only 128-bits of address indices, we
can sign extend the source indices to 64-bit, falling down the ideal
path for SVE where each 128-bit lane loads data from addresses in a 1:1
element ratio.

This means we hit the SVE path more often.

Based on top of #3825 because the prescaling behaviour was introduced
there. This implements its own prescaling when the sign extension occurs
because ARM's SSHLL{,2} instruction gives us that for free.

This additionally fixes a bug where we were accidentally loading the top
128-bit half of the addresses for gathers when it was unnecessary, and
on the AVX256 side it was duplicating and doing some additional work
when it shouldn't have.

It'll be good to walk the commits when looking at this one, as there are
a couple of incremental changes that are easier to follow that way.

Fixes #3806
2024-07-06 18:32:35 -07:00
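
A scalar sketch of why the prescale comes for free here; the function
name is illustrative:

```cpp
#include <cstdint>

// SSHLL{,2} fuses the 32- to 64-bit sign extension with a left shift,
// so widening an index and multiplying it by a power-of-two scale
// collapse into one instruction (SSHLL Vd.2D, Vn.2S, #Log2Scale does
// this for a whole vector at once).
int64_t WidenAndScale(int32_t Index, uint32_t Log2Scale) {
  return static_cast<int64_t>(Index) << Log2Scale;
}
```
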
Ryan Houdek
0f9abe68b9
AVX128: Fixes accidentally loading high addr register when unnecessary
We were missing a clamp on the high half when encountering a 128-bit
gather instruction, causing us to unconditionally load the top half when
it was unnecessary.
2024-07-06 18:32:35 -07:00
Ryan Houdek
c168ee6940
Arm64: Implements VSSHLL{,2} IR ops
2024-07-06 18:32:35 -07:00
Ryan Houdek
0d4414fdd0
AVX128: Removes templated AddrElementSize and add as argument
NFC
2024-07-06 18:32:35 -07:00
Billy Laws
e45e631199 AllocatorHooks: Allocate from the top down on windows
FEX allocations can get in the way of allocations that are 4GB-limited
even in 64-bit mode (i.e. those from LuaJIT), so allocate starting from
the top of the address space to prevent conflicts.
2024-07-06 20:35:38 +00:00
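
A minimal sketch of a top-down reservation on Windows, assuming the
stock VirtualAlloc flag; whether FEX's allocator hooks use MEM_TOP_DOWN
or probe addresses manually isn't shown in this log:

```cpp
#include <windows.h>

// MEM_TOP_DOWN asks the kernel to prefer the highest available address,
// which keeps the low 4GB free for allocations that must live there.
void* ReserveTopDown(SIZE_T Size) {
  return VirtualAlloc(nullptr, Size, MEM_RESERVE | MEM_TOP_DOWN,
                      PAGE_NOACCESS);
}
```
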
Ryan Houdek
9bad09c45f
Merge pull request #3823 from alyssarosenzweig/bug/shl-var-small
Fix CF with small shifts
2024-07-06 01:33:57 -07:00
Ryan Houdek
47d077ff22
Merge pull request #3825 from Sonicadvance1/scale_64bit_gather
AVX128: Prescale addresses in gathers if possible
2024-07-05 19:10:43 -07:00
Ryan Houdek
11a494d7b3
AVX128: Prescale addresses in gathers if possible
If the host supports SVE128, the address element size and data size are
64-bit, and the scale is not one of the two supported by SVE, then
prescale the addresses.

64-bit address overflow masks the top bits, so it is well defined that
we can scale the vector elements and still execute the SVE code path in
that case. This removes the ASIMD code paths from a lot of gathers.

Fixes #3805
2024-07-05 16:47:11 -07:00
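
A scalar sketch of why the prescale is well defined; illustrative C++:

```cpp
#include <cstdint>

// x86 gather address math wraps modulo 2^64, and unsigned
// multiplication in the vector elements wraps the same way, so scaling
// the indices ahead of the gather yields the identical effective
// address even on overflow.
uint64_t EffectiveAddress(uint64_t Base, uint64_t Index, uint64_t Scale) {
  uint64_t Prescaled = Index * Scale; // wraps exactly like the hardware
  return Base + Prescaled;            // per-element base add in the gather
}
```
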
Alyssa Rosenzweig
5a3c0eb83c OpcodeDispatcher: fix shl with 8/16-bit variable
the special case here lines up with the special case of using a larger shift for
a smaller result, so we can just grab CF from the larger result.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 18:38:12 -04:00
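
A scalar sketch of the observation, with made-up names:

```cpp
#include <cstdint>

// For an 8-bit SHL, CF is the last bit shifted out: exactly bit 8 of
// the same shift performed in a wider register, which is the value we
// already compute for the larger result.
uint8_t Shl8(uint8_t Src, uint8_t Count, bool* CF) {
  uint64_t Wide = static_cast<uint64_t>(Src) << (Count & 31); // Count != 0
  *CF = (Wide >> 8) & 1; // grab CF from the larger result
  return static_cast<uint8_t>(Wide);
}
```
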
Alyssa Rosenzweig
05e4678e65 OpcodeDispatcher: fix missing masking on smaller RCR
I probably broke this when working on eliminating crossblock liveness.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 18:34:18 -04:00
Alyssa Rosenzweig
0f0e402db4 OpcodeDispatcher: fix CF with 8/16-bit immediate
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 18:24:34 -04:00
Alyssa Rosenzweig
adc709db2f OpcodeDispatcher: drop remnants of deferred flags
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 17:22:41 -04:00
Alyssa Rosenzweig
395573720d OpcodeDispatcher: drop pointless flag defers for shifts
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 16:24:54 -04:00
Alyssa Rosenzweig
0e62759d24 OpcodeDispatcher: stop deferring logical
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 16:24:54 -04:00
Alyssa Rosenzweig
926b6c3117 OpcodeDispatcher: don't defer mul flags
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 16:24:54 -04:00
Alyssa Rosenzweig
c9f9304ba5 OpcodeDispatcher: stop deferring obscure bitwise
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 16:24:54 -04:00
Alyssa Rosenzweig
fabd6be5af OpcodeDispatcher: drop SUB defer
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 16:24:54 -04:00
Alyssa Rosenzweig
1bf31d20b6 OpcodeDispatcher: switch to CalculateFlags_SUB
most of these are deferred only to be calculated immediately anyway.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 16:24:53 -04:00
Ryan Houdek
653bf04db0
Merge pull request #3819 from alyssarosenzweig/bug/rcr-smol
Fix 8/16-bit RCR
2024-07-05 12:49:23 -07:00
Ryan Houdek
b77a25b21a
Merge pull request #3818 from alyssarosenzweig/jit/shiftbymaskstozero
JIT: fix ShiftFlags masking
2024-07-05 12:49:16 -07:00
Alyssa Rosenzweig
94bd79b2bf OpcodeDispatcher: fix 8/16-bit RCR
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 10:49:02 -04:00
Alyssa Rosenzweig
1b552a6f62 JIT: fix ShiftFlags masking
we don't update flags for a nonzero shift that masks to zero.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-05 09:57:42 -04:00
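
A sketch of the rule, with an illustrative helper name:

```cpp
#include <cstdint>

// Returns whether a shift may update flags: x86 masks the count first,
// and a nonzero count that masks to zero (e.g. SHL EAX, 32) must leave
// every flag untouched.
bool ShiftUpdatesFlags(uint8_t Count, bool Is64Bit) {
  uint8_t Masked = Count & (Is64Bit ? 63 : 31);
  return Masked != 0;
}
```
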
Alyssa Rosenzweig
97329ccc7a
Merge pull request #3812 from Sonicadvance1/fix_rotates_with_zero
OpcodeDispatcher: Fixes rotates with zero not zero extending 32-bit result
2024-07-05 09:48:01 -04:00
Mai
f2d1f2de56
Merge pull request #3817 from Sonicadvance1/fix_x87_integer_indefinite
Softfloat: Fixes Integer indefinite return for 16-bit signed values
2024-07-04 23:11:44 -04:00
Ryan Houdek
692c2fae96
Merge pull request #3813 from alyssarosenzweig/bug/fix-sbb
Fix 16-bit SBB
2024-07-04 19:52:37 -07:00
Ryan Houdek
8955f83ef6
Softfloat: Fixes Integer indefinite return for 16-bit signed values
Regardless of whether the value is positive or negative, if the
converted integer doesn't fit into an int16_t then it returns INT16_MIN.
2024-07-04 17:43:28 -07:00
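
A scalar sketch of the fixed behaviour, with double standing in for the
80-bit float:

```cpp
#include <cmath>
#include <cstdint>
#include <limits>

// FIST{,P} to a 16-bit destination: anything unrepresentable returns
// the integer indefinite INT16_MIN. Positive overflow does not return
// INT16_MAX.
int16_t F80ToI16(double Value) {
  if (std::isnan(Value) || Value >= 32768.0 || Value < -32768.0) {
    return std::numeric_limits<int16_t>::min();
  }
  return static_cast<int16_t>(Value);
}
```
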
Ryan Houdek
38a823cc54
Arm64: Fixes long signed divide
The two halves are provided as two uint64_t values that shouldn't be
sign extended between them. Treat them as uint64_t until combined into a
single int128_t. Fixes long signed divide.
2024-07-04 16:42:23 -07:00
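
A C++ sketch of the combining step (uses the GCC/Clang __int128
extension; the names are illustrative):

```cpp
#include <cstdint>

// The dividend's halves are raw 64-bit chunks; sign only has meaning
// once they are glued into a single 128-bit value. Widening through
// unsigned types avoids any stray sign extension between the halves.
int64_t LongSignedDiv(uint64_t High, uint64_t Low, int64_t Divisor) {
  unsigned __int128 Bits =
      (static_cast<unsigned __int128>(High) << 64) | Low;
  __int128 Dividend = static_cast<__int128>(Bits);
  return static_cast<int64_t>(Dividend / Divisor);
}
```
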
Ryan Houdek
f6ec99bede
OpcodeDispatcher: Fixes rotates with zero not zero extending 32-bit result
For all the 32-bit rotates (except for RORX) we were failing to zero
extend the 32-bit result to the destination register when the rotate was
masked to zero.

Ensure we do this.
2024-07-04 14:35:42 -07:00
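
A scalar sketch of the fix for the 32-bit case; names are illustrative:

```cpp
#include <cstdint>

// A 32-bit destination write always zeroes the upper half of the 64-bit
// register, even when the masked rotate count is zero and the low 32
// bits are unchanged (e.g. ROR EAX, 32).
uint64_t Ror32ZeroExtend(uint64_t Reg, uint8_t Count) {
  uint32_t Src = static_cast<uint32_t>(Reg);
  uint8_t Masked = Count & 31;
  uint32_t Result =
      Masked ? (Src >> Masked) | (Src << (32 - Masked)) : Src;
  return Result; // implicit zero extension, also when Masked == 0
}
```
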
Ryan Houdek
90a6647fa4
Merge pull request #3811 from alyssarosenzweig/ra/fix-lsp
RA: fix interaction between SRA & shuffles
2024-07-04 14:20:46 -07:00
Alyssa Rosenzweig
a38205069b OpcodeDispatcher: fix SBB carry flag
do it the naive way, just applying the x86 definitions of SBB.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-04 16:58:45 -04:00
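
The naive definition in scalar form; a sketch, not the emitted IR:

```cpp
#include <cstdint>

// x86 SBB: Dest = Dest - (Src + CF), with the new CF being the borrow
// out of the 16-bit subtraction.
uint16_t Sbb16(uint16_t Dest, uint16_t Src, bool CarryIn, bool* CarryOut) {
  uint32_t Sub = static_cast<uint32_t>(Src) + CarryIn;
  uint32_t Result = static_cast<uint32_t>(Dest) - Sub;
  *CarryOut = static_cast<uint32_t>(Dest) < Sub; // borrow out
  return static_cast<uint16_t>(Result);
}
```
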
Alyssa Rosenzweig
504511fe7e RA: fix interaction between SRA & shuffles
missed a Map. tricky case hit by the unit test added in the next commit.

Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
2024-07-04 13:37:13 -04:00
Alyssa Rosenzweig
1a0d135201
Merge pull request #3809 from alyssarosenzweig/rm/old-md
FEXCore: remove very out-of-date optimizer docs
2024-07-03 15:46:27 -04:00