These instructions aren't great to implement because they have
both a source mask and a destination broadcast mask.
Set up a path where we can generate more optimal code in /most/ cases.
A few cases still fall down a "bad" path for the result
broadcast, but most are optimal. It remains to be seen which
broadcast masks games typically use.
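For reference, the imm8 of DPPS splits into exactly the two masks described above. A minimal scalar sketch of the 128-bit semantics (illustrative only, not FEX's implementation; `dpps128` is a hypothetical name):

```cpp
#include <array>
#include <cstdint>

std::array<float, 4> dpps128(const std::array<float, 4>& a,
                             const std::array<float, 4>& b, uint8_t imm8) {
  float sum = 0.0f;
  for (int i = 0; i < 4; ++i) {
    if (imm8 & (1u << (4 + i))) {  // source mask: which products contribute
      sum += a[i] * b[i];
    }
  }
  std::array<float, 4> result{};
  for (int i = 0; i < 4; ++i) {
    result[i] = (imm8 & (1u << i)) ? sum : 0.0f;  // destination broadcast mask
  }
  return result;
}
```

The 256-bit AVX form repeats the same operation independently in each 128-bit lane.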
AVX, in its infinite wisdom, expanded DPPS to 256-bit while leaving
DPPD at 128-bit only. This leaves the original implementation
alone for 256-bit DPPS, since I don't want to break it.
This is another instruction that gets a free optimization when
128-bit SVE is supported!
If no registers alias, then we can move the first source directly into the
destination and then perform the FCADD operation as opposed to using a
temporary.
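A minimal sketch of the aliasing check, assuming hypothetical Mov/FCAdd emitter helpers and a scratch register (FEX's actual emitter interface differs):

```cpp
using Reg = int;
void Mov(Reg Dst, Reg Src);             // hypothetical register move
void FCAdd(Reg Dst, Reg Op1, Reg Op2);  // hypothetical destructive FCADD
                                        // (rotation argument omitted)

void EmitFCAdd(Reg Dst, Reg Src1, Reg Src2, Reg Tmp) {
  if (Dst != Src1 && Dst != Src2) {
    // No alias: build the result directly in the destination.
    Mov(Dst, Src1);
    FCAdd(Dst, Dst, Src2);
  } else {
    // Aliasing: fall back to a temporary, then move the result over.
    Mov(Tmp, Src1);
    FCAdd(Tmp, Tmp, Src2);
    Mov(Dst, Tmp);
  }
}
```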
We can perform fewer moves by checking for scenarios where aliasing
occurs. Since addition is commutative (in the general case, anyway),
operand order doesn't strictly matter here.
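A sketch of the idea, again with hypothetical helper names: when the second source aliases the destination, swapping the operands avoids the extra move entirely.

```cpp
#include <utility>

using Reg = int;
void Mov(Reg Dst, Reg Src);            // hypothetical register move
void FAdd(Reg Dst, Reg Op1, Reg Op2);  // hypothetical destructive add

void EmitFAdd(Reg Dst, Reg Src1, Reg Src2) {
  if (Dst == Src2) {
    std::swap(Src1, Src2);  // addition commutes, so operand order is free
  }
  if (Dst != Src1) {
    Mov(Dst, Src1);         // after the swap, only Src1 can alias Dst
  }
  FAdd(Dst, Dst, Src2);
}
```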
In the event no source vector aliases the destination,
we can just move the first source vector into it and
then perform the divide without needing a move afterward.
When a syscall from the *at series is provided an FD but the path is
absolute, then dirfd should be ignored. We weren't correctly doing this.
Now if the path is absolute, we set the dirfd argument to the special
AT_FDCWD value.
Fixes #3204
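The fix boils down to the following shape (a sketch, not FEX's exact code; `NormalizeDirFD` is a hypothetical name):

```cpp
#include <fcntl.h>

// For *at syscalls, an absolute path means the kernel never consults
// dirfd, so whatever was passed can safely be replaced with AT_FDCWD.
int NormalizeDirFD(int dirfd, const char* pathname) {
  if (pathname && pathname[0] == '/') {
    return AT_FDCWD;
  }
  return dirfd;
}
```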
We can avoid needing to use movprfx here by moving
directly into the destination when possible and doing
the UMAX in place.
Also expands the unsigned max tests to cover values with
the sign bit set, to ensure all behavior is caught.
Since SMAX performs a comparison and returns the max value regardless
of operand order, we can check whether the second input aliases the
destination and simply swap the operands when it does.
Removes the truncating move that we performed inside the StoreResult
function and instead delegates that responsibility to the instruction
implementations themselves.
This removes a lot of redundant moves that occur on 128-bit variants
of AVX instructions.
Also fixes a weird case where we were handling 128-bit SVE
in VBroadcastFromMem when we already have AdvSIMD instructions
that will perform the zero-extension behavior for us.
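Roughly the shape of the change, with hypothetical IR helper names (FEX's real interfaces differ):

```cpp
struct IRValue;
IRValue* TruncateTo(IRValue* Value, unsigned Size);  // hypothetical helpers
void StoreToContext(IRValue* Value);

// Before: StoreResult unconditionally narrowed the value, which is
// redundant when a 128-bit AVX implementation already produced a
// correctly-sized, zero-extended result.
void StoreResult_Before(IRValue* Value, unsigned DstSize) {
  StoreToContext(TruncateTo(Value, DstSize));
}

// After: StoreResult stores what it is given; implementations that
// still need narrowing emit it themselves before calling this.
void StoreResult_After(IRValue* Value) {
  StoreToContext(Value);
}
```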
Allows for easier expansion without needing to expand the function definitions.
Also makes a few usages significantly less verbose and makes specifying
options a little more declarative, rather than having to memorize what
each positional argument is specifying.
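A sketch of the more declarative call style this enables, using hypothetical names (designated initializers keep each option labeled at the call site):

```cpp
// Hypothetical options struct; the actual fields and names differ.
struct LoadOptions {
  bool ZeroExtend = false;
  bool AllowUpperGarbage = false;
};

void LoadSource(int Reg, LoadOptions Opts = {});

void Example() {
  // New options can be added to the struct without touching every
  // existing call site or the function definition itself.
  LoadSource(0);
  LoadSource(1, {.ZeroExtend = true});
  LoadSource(2, {.ZeroExtend = true, .AllowUpperGarbage = true});
}
```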
Notably, this bugfix version also introduces support for formatting
std::atomic types and std::atomic_flag.
Also, of course, keeps our tracked external up to date.
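For illustration, formatting an atomic with the updated {fmt} should look like this (via `<fmt/std.h>`):

```cpp
#include <atomic>
#include <fmt/std.h>

int main() {
  std::atomic<int> Counter{42};
  std::atomic_flag Flag = ATOMIC_FLAG_INIT;
  fmt::print("{} {}\n", Counter, Flag);  // expected to print "42 false"
}
```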
A couple of games were hitting these. Not sure how they were missed in
PR #3159, but this adds the missing one.
Small rearrangement to make this easier as well. Hopefully thunk stuff
lands sooner rather than later to automate this for Vulkan.
Maybe `-isystem` needs to be used instead of `-I`, unlike what #2076 did;
it might depend on what is installed on the host system.
Some simple tests to showcase instructions that we can optimize.
- Back-to-back pushes could be optimized
- Back-to-back scalar vector operations can be optimized
- Show with AFP that back-to-back scalar is already optimal
- Also ensures we don't break this stage.
We don't need the upper bits zeroed for a push.
Makes a couple of variants optimal.
Adds missing tests to the 32-bit file, since only 32-bit mode can push a
32-bit register.
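The reasoning, as a sketch: a 32-bit push only ever stores the low 32 bits, so zeroing the upper bits of the source register beforehand is dead work. An illustrative model (`Push32` is a hypothetical name, not FEX's code):

```cpp
#include <cstdint>
#include <cstring>

// Any garbage above bit 31 of the source value is truncated away and
// never reaches memory.
void Push32(uint8_t* Stack, uint64_t& Rsp, uint64_t Value) {
  Rsp -= 4;
  const uint32_t Low = static_cast<uint32_t>(Value);
  std::memcpy(Stack + Rsp, &Low, sizeof(Low));
}
```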