Adds support for binfmt_misc through systemd configuration paths. Its
configuration files are basically the raw kernel interface description
in a .conf file, quite a bit simpler than the legacy Debian path.
Enable this path by default since systemd is the expected arrangement
these days.
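As a sketch of what such a drop-in looks like (the path, name, interpreter
location, and flags here are illustrative, not necessarily what this change
ships), a binfmt.d file is just the kernel's raw register string; the magic
and mask shown are the standard 20-byte x86-64 ELF header match:

```
# /usr/lib/binfmt.d/FEX-x86_64.conf (illustrative)
# One line in the raw binfmt_misc register format:
#   :name:type:offset:magic:mask:interpreter:flags
:FEX-x86_64:M:0:\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00:\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/FEXInterpreter:POCF
```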
Fixes #2417
rotate right by less than 8:
ror(________________7654321076543210, ...)
rotate left by less than 8:
rol(76543210________________76543210, ...)
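The layouts above can be sketched in plain C++ (hypothetical helper names; a
sketch of the idea rather than FEX's emitter code). Duplicating the 8-bit
source lets a single wide shift or rotate produce the result, with the second
copy supplying the wrapped-around bits; valid for rotate amounts 1..7:

```cpp
#include <cassert>
#include <cstdint>

// ror: ________________7654321076543210, then a plain right shift.
// The low copy shifts out; the high copy shifts in the wrapped bits.
uint8_t Ror8(uint8_t Value, unsigned Amount) {
  uint32_t Doubled = (uint32_t(Value) << 8) | Value;
  return uint8_t(Doubled >> Amount);
}

// rol: 76543210________________76543210, then a 32-bit rotate left.
// The top copy's high bits wrap around into the low byte.
uint8_t Rol8(uint8_t Value, unsigned Amount) {
  uint32_t Doubled = (uint32_t(Value) << 24) | Value;
  return uint8_t((Doubled << Amount) | (Doubled >> (32 - Amount)));
}
```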
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
avoid masking. apparently even modern compilers will do cute tricks with 8-bit
math in hot loops ...
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
not used. we'll probably rip the whole thing out at some point but for now, no
reason to pollute user systems with this.
Signed-off-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
Instead of clearing a hardcoded 16 bytes, adjust for the actual number
of instructions modified. The implementation will still only clear a
single cacheline so it doesn't change behaviour.
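A minimal sketch of the sizing change (helper name is hypothetical); the
computed byte count is what gets handed to the cache-maintenance routine
instead of a hardcoded 16:

```cpp
#include <cassert>
#include <cstddef>

// AArch64 instructions are a fixed 4 bytes wide.
constexpr std::size_t kInstructionSize = 4;

// Size the invalidation to what was actually rewritten.
std::size_t PatchedByteCount(std::size_t NumInstructions) {
  return NumInstructions * kInstructionSize;
}
```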
This used to exist in the FEXCore header since the unaligned handler was
done in the frontend. Once it got moved into FEXCore, it stayed there.
Move it over now.
In the case of a visibility tear, when one thread is backpatching while
another is executing, the executing thread can /potentially/ see the
writing of instructions depending on coherency rules or the filling of
cachelines.
Backpatching the DMB instructions over the NOP instructions first
ensures correct atomic visibility even on a tear.
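A minimal sketch of the ordering, assuming a two-slot patch site (the slot
layout and the placeholder operand are illustrative; the NOP and DMB ISH
encodings are the standard AArch64 ones):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

constexpr uint32_t NOP     = 0xD503201F;
constexpr uint32_t DMB_ISH = 0xD5033BBF;

// Hypothetical patch site: Code[0] holds the atomic memory op,
// Code[1] holds a NOP reserved for the barrier. Writing the DMB over
// the NOP *first* means any thread that tears in and observes the
// relaxed memory op also observes the barrier behind it.
void Backpatch(std::atomic<uint32_t>* Code, uint32_t NonAtomicOp) {
  // 1. Barrier first: replace the trailing NOP with DMB ISH.
  Code[1].store(DMB_ISH, std::memory_order_release);
  // 2. Only then replace the atomic op with its non-atomic variant.
  Code[0].store(NonAtomicOp, std::memory_order_release);
}
```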
When code buffers are shared between threads, FEX needs to be careful
around backpatching its code buffers, since one thread might have
backpatched the code that another thread was also planning on
backpatching.
To handle this case, when the handler fails to find a backpatchable
instruction, check if it was already backpatched. This can be determined
by atomically reading the instructions back and seeing if they have
turned into the non-atomic variants.
In most cases we can just return saying that it has been handled; in the
case of a store we need to back the PC up 4 bytes to ensure the DMB is
executed before the non-atomic store.
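The resume decision can be sketched as a small pure function (the enum and
name are hypothetical; in the real handler the classification comes from
atomically reading the instruction words back):

```cpp
#include <cassert>
#include <cstdint>

// What the re-read instruction turned out to be after losing the race.
enum class PatchedOp { NotPatched, Load, Store };

// Returns the PC to resume at, or 0 when the fault was not a lost
// backpatch race. For a store, step back one 4-byte instruction so the
// DMB placed in front of the now non-atomic store executes first.
uint64_t ResumeAfterRace(PatchedOp Op, uint64_t FaultPC) {
  switch (Op) {
    case PatchedOp::Load:  return FaultPC;
    case PatchedOp::Store: return FaultPC - 4;
    default:               return 0;
  }
}
```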
These handlers don't do any code backpatching so locking the spinlock
futex isn't necessary. Move them before the lock to make them a bit more
efficient once code buffers get shared.
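The shape of the change, as a hedged sketch (the fault classification and
lock are illustrative, not FEX's actual types):

```cpp
#include <atomic>
#include <cassert>

// Hypothetical classification of the faulting access.
enum class FaultKind { Unrelated, NeedsPatch };

// Spinlock serializing code-buffer backpatching.
std::atomic<bool> PatchLock{false};

bool HandleFault(FaultKind Kind) {
  // Filter out handlers that never touch the code buffer *before*
  // taking the lock, so they generate no lock traffic at all.
  if (Kind == FaultKind::Unrelated) {
    return false;
  }
  while (PatchLock.exchange(true, std::memory_order_acquire)) {} // spin
  // ... backpatch under the lock ...
  PatchLock.store(false, std::memory_order_release);
  return true;
}
```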
If we query the CPU flags ourselves then vixl is no longer a
compile-time or runtime dependency unless the vixl disassembler or
simulator is built.
A bit spicy from all the feature bits we need to load up.
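Querying the flags ourselves can be sketched like this on Linux (struct and
function names are illustrative; the bit positions are the AArch64 HWCAP
values from the kernel uapi headers, spelled out by hand so the sketch is
self-contained):

```cpp
#include <cassert>
#include <cstdint>
#include <sys/auxv.h>

// AArch64 Linux HWCAP bits (subset, values from the kernel uapi headers).
constexpr uint64_t kHWCapAES     = 1UL << 3;
constexpr uint64_t kHWCapAtomics = 1UL << 8; // LSE atomics

struct HostFeatures {
  bool AES;
  bool Atomics;
};

// Pure decode step, kept separate so it is testable without real hardware.
HostFeatures DecodeHWCap(uint64_t HWCap) {
  return {
    (HWCap & kHWCapAES) != 0,
    (HWCap & kHWCapAtomics) != 0,
  };
}

HostFeatures QueryHostFeatures() {
  // getauxval needs no extra library; the kernel fills AT_HWCAP at exec time.
  return DecodeHWCap(getauxval(AT_HWCAP));
}
```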