Rename to parts$N_sint_to_float.
Reimplement int{32,64}_to_float128 with FloatParts128.
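With FloatParts128 each of these collapses to unpack/convert/round-and-repack; a sketch of the int64 case (helper names and parameter lists follow the softfloat parts conventions, treat them as illustrative):

    float128 int64_to_float128(int64_t a, float_status *status)
    {
        FloatParts128 p;

        /* convert the integer into canonical parts form (scale of 0) ... */
        parts_sint_to_float(&p, a, 0, status);
        /* ... then round and repack as a float128 */
        return float128_round_pack_canonical(&p, status);
    }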
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Rename to parts$N_float_to_uint. Reimplement
float128_to_uint{32,64}{_round_to_zero} with FloatParts128.
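The float-to-unsigned direction mirrors that shape; again only a sketch, with the scale and saturation arguments following the parts_float_to_uint convention:

    uint64_t float128_to_uint64(float128 a, float_status *status)
    {
        FloatParts128 p;

        /* unpack to parts, then convert with saturation at UINT64_MAX */
        float128_unpack_canonical(&p, a, status);
        return parts_float_to_uint(&p, status->float_rounding_mode,
                                   0, UINT64_MAX, status);
    }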
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20210603' into staging
target-arm queue:
* Some not-yet-enabled preliminaries for M-profile MVE support
* Consistently use "Cortex-Axx", not "Cortex Axx" in docs, comments
* docs: Fix installation of man pages with Sphinx 4.x
* Mark LDS{MIN,MAX} as signed operations
* Fix missing syndrome value for DAIF and PAC check exceptions
* Implement BFloat16 extensions
* Refactoring of hvf accelerator code in preparation for aarch64 support
* Fix some coverity nits in test code
# gpg: Signature made Thu 03 Jun 2021 16:58:02 BST
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20210603: (45 commits)
tests/unit/test-vmstate: Assert that dup() and mkstemp() succeed
tests/qtest/tpm-tests: Remove unnecessary NULL checks
tests/qtest/pflash-cfi02-test: Avoid potential integer overflow
tests/qtest/hd-geo-test: Fix checks on mkstemp() return value
tests/qtest/e1000e-test: Check qemu_recv() succeeded
tests/qtest/bios-tables-test: Check for dup2() failure
hvf: Simplify post reset/init/loadvm hooks
hvf: Introduce hvf vcpu struct
hvf: Remove hvf-accel-ops.h
hvf: Make synchronize functions static
hvf: Use cpu_synchronize_state()
hvf: Split out common code on vcpu init and destroy
hvf: Remove use of hv_uvaddr_t and hv_gpaddr_t
hvf: Make hvf_set_phys_mem() static
hvf: Move hvf internal definitions into common header
hvf: Move cpu functions into common directory
hvf: Move vcpu thread functions into common directory
hvf: Move assert_hvf_ok() into common directory
target/arm: Enable BFloat16 extensions
linux-user/aarch64: Enable hwcap bits for bfloat16
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Coverity complains that we don't check for failures from dup()
and mkstemp(); add asserts that these syscalls succeeded.
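The pattern is simply (variable names here are hypothetical, not the exact ones in the test):

    int temp_fd = mkstemp(temp_path);
    g_assert(temp_fd >= 0);            /* mkstemp() returns -1 on failure */

    int saved_fd = dup(temp_fd);
    g_assert(saved_fd >= 0);           /* dup() likewise returns -1 on failure */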
Fixes: Coverity CID 1432516, 1432574
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20210525134458.6675-7-peter.maydell@linaro.org
Coverity points out that in tpm_test_swtpm_migration_test() we
assume that src_tpm_addr and dst_tpm_addr are non-NULL (we
pass them to tpm_util_migration_start_qemu() which will
unconditionally dereference them) but then later explicitly
check them for NULL. Remove the pointless checks.
Fixes: Coverity CID 1432367, 1432359
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-6-peter.maydell@linaro.org
Coverity points out that we calculate a 64-bit value using 32-bit
arithmetic; add a cast so that the multiply is done in 64-bit arithmetic.
(The overflow will never happen with the current test data.)
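In other words, something like (names are hypothetical; the point is the cast on one operand):

    /* without the cast the multiply is evaluated in 32 bits and can wrap
     * before being widened to 64 bits */
    uint64_t byte_offset = (uint64_t)sector_num * sector_size;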
Fixes: Coverity CID 1432320
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-5-peter.maydell@linaro.org
Coverity notices that the checks against mkstemp() failing in
create_qcow2_with_mbr() are wrong: mkstemp returns -1 on failure but
the check is just "g_assert(fd)". Fix to use "g_assert(fd >= 0)",
matching the correct check in create_test_img().
Fixes: Coverity CID 1432274
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-4-peter.maydell@linaro.org
The e1000e_send_verify() test calls qemu_recv() but doesn't check that
the call succeeded, which annoys Coverity. Add an explicit check on the
length of the data received.
(This is a test check, not a "we assume this syscall always
succeeds", so we use g_assert_cmpint() rather than g_assert().)
Fixes: Coverity CID 1432324
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-3-peter.maydell@linaro.org
Coverity notes that we don't check for dup2() failing. Add some
assertions so that if it does ever happen we get some indication.
(This is similar to how we handle other "don't expect this syscall to
fail" checks in this test code.)
Fixes: Coverity CID 1432346
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Message-id: 20210525134458.6675-2-peter.maydell@linaro.org
The hooks we have that get called after reset, init and loadvm really all
just want to say "the QEMU vcpu struct now holds the authoritative copy of
all register state, please push it".
We already have a mechanism for exactly that, cpu->vcpu_dirty, so we can
just reuse it for all of the above, syncing state properly the next time
we actually execute a vCPU.
This fixes PSCI resets on ARM, as they modify CPU state even after the
post init call has completed, but before we execute the vCPU again.
To also make the scheme work for x86, we have to make sure we don't
move stale eflags into our env when the vcpu state is dirty.
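A sketch of what the post-reset/init/loadvm hooks then reduce to (close to, though not necessarily identical to, the final code):

    static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
                                                 run_on_cpu_data arg)
    {
        /* registers get pushed from the QEMU vcpu struct before the next run */
        cpu->vcpu_dirty = true;
    }

    static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
    {
        run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
    }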
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-13-agraf@csgraf.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We will need more than a single field for hvf going forward. To keep
the global vcpu struct uncluttered, let's allocate a special hvf vcpu
struct, similar to how hax does it.
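Roughly (a sketch only; the real struct grows more members, especially once aarch64 support lands):

    struct hvf_vcpu_state {
        uint64_t fd;        /* the Hypervisor.framework vcpu handle */
        /* further hvf-only per-vCPU state goes here */
    };

    /* CPUState keeps just a pointer, allocated at vcpu init */
    cpu->hvf = g_malloc0(sizeof(*cpu->hvf));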
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>
Tested-by: Roman Bolshakov <r.bolshakov@yadro.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-12-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We can move the declaration of hvf_vcpu_exec() into our internal
hvf header, obsoleting the need for hvf-accel-ops.h.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-11-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The hvf accel synchronize functions are only used as input for local
callback functions, so we can make them static.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-10-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is no reason to call the hvf-specific hvf_cpu_synchronize_state()
when we can just use the generic cpu_synchronize_state() instead. This
reduces our dependency on internal function definitions and lets us make
hvf_cpu_synchronize_state() static.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-9-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch splits the vcpu init and destroy functions into a generic and
an architecture specific portion. This also allows us to move the generic
functions into the generic hvf code, removing exported functions.
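The split looks roughly like this (a sketch; the hvf_arch_* names follow the convention this patch introduces, and the fd plumbing is simplified):

    /* per-architecture hooks, implemented by target/i386/hvf today
     * (and by target/arm/hvf later) */
    int hvf_arch_init_vcpu(CPUState *cpu);
    void hvf_arch_vcpu_destroy(CPUState *cpu);

    /* generic side in accel/hvf: create the Hypervisor.framework vcpu,
     * then hand off to the per-architecture hook */
    int hvf_init_vcpu(CPUState *cpu)
    {
        hv_vcpuid_t fd;

        assert_hvf_ok(hv_vcpu_create(&fd, HV_VCPU_DEFAULT));
        cpu->hvf_fd = fd;
        return hvf_arch_init_vcpu(cpu);
    }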
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-8-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ARM version of Hypervisor.framework no longer defines these two
types, so let's just revert to standard ones.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-7-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The hvf_set_phys_mem() function is only called within the same file.
Make it static.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-6-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves a few internal struct and constant defines over.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-5-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves CPU and memory operations over. While at it, make sure
the code is consumable on non-i386 systems.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-4-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves the vCPU thread loop over.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-3-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Until now, Hypervisor.framework has only been available on x86_64 systems.
With Apple Silicon shipping now, it extends its reach to aarch64. To
prepare for support for multiple architectures, let's start moving common
code out into its own accel directory.
This patch moves assert_hvf_ok() and introduces generic build infrastructure.
Signed-off-by: Alexander Graf <agraf@csgraf.de>
Reviewed-by: Sergio Lopez <slp@redhat.com>
Message-id: 20210519202253.76782-2-agraf@csgraf.de
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Disable BF16 again for !have_neon and !have_vfp during realize.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
and VFMA{B,T}.BF16 for AArch32 NEON.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is BFMLAL{B,T} for both AArch64 AdvSIMD and SVE,
and VFMA{B,T}.BF16 for AArch32 NEON.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is BFMMLA for both AArch64 AdvSIMD and SVE,
and VMMLA.BF16 for AArch32 NEON.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is BFDOT for both AArch64 AdvSIMD and SVE,
and VDOT.BF16 for AArch32 NEON.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is BFDOT for both AArch64 AdvSIMD and SVE,
and VDOT.BF16 for AArch32 NEON.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For Arm BFDOT and BFMMLA, we need a version of round-to-odd
that overflows to infinity, instead of the max normal number.
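In the shared rounding code this is just one more FloatRoundMode that behaves like round-to-odd except in the overflow decision; a sketch of the idea (the overflow_norm flag follows the existing rounding code, treat the details as illustrative):

    case float_round_to_odd:
        overflow_norm = true;       /* existing: overflow to the largest normal */
        break;
    case float_round_to_odd_inf:
        overflow_norm = false;      /* new: overflow to infinity instead */
        break;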
Cc: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is BFCVT{N,T} for both AArch64 AdvSIMD and SVE,
and VCVT.BF16.F32 for AArch32 NEON.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is the 64-bit BFCVT and the 32-bit VCVT{B,T}.BF16.F32.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that the SVE BFLOAT16 support does not require SVE2,
it is an independent extension.
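The feature tests are plain ID-register probes; a sketch (field positions per the Arm ARM, exact macro spellings may differ):

    static inline bool isar_feature_aa64_bf16(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64isar1, ID_AA64ISAR1, BF16) != 0;
    }

    /* keyed off ID_AA64ZFR0, deliberately not an SVE2 check */
    static inline bool isar_feature_aa64_sve_bf16(const ARMISARegisters *id)
    {
        return FIELD_EX64(id->id_aa64zfr0, ID_AA64ZFR0, BFLOAT16) != 0;
    }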
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210525225817.400336-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The sequence cpu_restore_state() + raise_exception() is equivalent to
raise_exception_ra(), so use that instead. (In this case we never
cared about the syndrome value, because M-profile doesn't use the
syndrome; the old code was just written unnecessarily awkwardly.)
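That is, the two-call sequence collapses into one (excp/syndrome/target_el stand in for whatever the call site passes):

    /* before */
    cpu_restore_state(cs, ra, true);
    raise_exception(env, excp, syndrome, target_el);

    /* after */
    raise_exception_ra(env, excp, syndrome, target_el, ra);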
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: Retain edited version of comment; rewrite commit message]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that raise_exception_ra restores the state before raising the
exception, we can use raise_exception_ra to perform the state restore and
exception raising without clobbering the syndrome.
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: Keep the one line of the comment that is still relevant]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Now that there are no other users of do_raise_exception, fold it into
raise_exception.
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The DAIF and PAC checks used raise_exception_ra to raise an exception and
unwind CPU state, but raise_exception_ra is currently designed for handling
data aborts: the syndrome is partially precomputed and encoded in the TB,
then merged in merge_syn_data_abort when the data abort is handled. Using
raise_exception_ra for the DAIF and PAC checks therefore results in an
empty syndrome being retrieved from data[2] in restore_state_to_opc and ESR
being set to 0. This manifested as:
  kvm [571]: Unknown exception class: esr: 0x000000 -- Unknown/Uncategorized
when launching a KVM guest, if the host QEMU used a CPU supporting EL2 plus
pointer authentication and pointer authentication was enabled in the guest.
Rework raise_exception_ra so that the state is restored before the
exception is raised, ensuring the syndrome is not clobbered by
restore_state_to_opc.
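After the rework the helper looks roughly like this (a sketch of the ordering, not necessarily the exact final code):

    void raise_exception_ra(CPUARMState *env, uint32_t excp, uint32_t syndrome,
                            uint32_t target_el, uintptr_t ra)
    {
        CPUState *cs = env_cpu(env);

        /*
         * Unwind first: restore_state_to_opc() would otherwise overwrite
         * the syndrome with the partial value encoded in the TB, so only
         * raise the exception once the state has been restored.
         */
        cpu_restore_state(cs, ra, true);
        raise_exception(env, excp, syndrome, target_el);
    }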
Fixes: 0d43e1a2d2 ("target/arm: Add PAuth helpers")
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
[PMM: added comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The operands to tcg_gen_atomic_fetch_s{min,max}_i64 must
be signed, so that the inputs are properly extended.
Zero extend the result afterward, as needed.
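Sketched against the a64 translator (variable names approximate; the essential points are the MO_SIGN on the memop and the zero-extension afterwards):

    /* let the signed min/max comparison see a sign-extended memory operand */
    mop |= MO_SIGN;
    tcg_gen_atomic_fetch_smin_i64(tcg_rt, clean_addr, tcg_rs,
                                  get_mem_index(s), mop);

    /* the old value written back to Rt must be zero-extended again,
     * e.g. for the 32-bit form (8/16-bit forms use ext8u/ext16u) */
    if (size == MO_32) {
        tcg_gen_ext32u_i64(tcg_rt, tcg_rt);
    }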
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/364
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210602020720.47679-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 4.x branch of Sphinx introduces a breaking change, as generated man
pages are now written to subdirectories corresponding to the manual
section they belong to. This results in `make install` erroring out when
attempting to install the man pages, because they are not where it
expects to find them.
This patch restores the behavior of Sphinx 3.x regarding man pages.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/256
Signed-off-by: Damien Goutte-Gattat <dgouttegattat@incenp.org>
Message-id: 20210503161422.15028-1-dgouttegattat@incenp.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The official punctuation for Arm CPU names uses a hyphen, like
"Cortex-A9". We mostly follow this, but in a few places usage
without the hyphen has crept in. Fix those so we consistently
use the same way of writing the CPU name.
This commit was created with:
git grep -z -l 'Cortex ' | xargs -0 sed -i 's/Cortex /Cortex-/'
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20210527095152.10968-1-peter.maydell@linaro.org
Currently we allow board models to specify the initial value of the
Secure VTOR register, using an init-svtor property on the TYPE_ARMV7M
object which is plumbed through to the CPU. Allow board models to
also specify the initial value of the Non-secure VTOR via a similar
init-nsvtor property.
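A sketch of the plumbing, mirroring the existing init-svtor (the exact split between the armv7m object and the CPU may differ slightly):

    /* board-facing property, alongside the existing "init-svtor" */
    DEFINE_PROP_UINT32("init-nsvtor", ARMv7MState, init_nsvtor, 0),

    /* at CPU reset, seed the Non-secure vector table base from it */
    env->v7m.vecbase[M_REG_NS] = cpu->init_nsvtor & 0xffffff80;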
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-10-peter.maydell@linaro.org
The M-profile FPSCR has an LTPSIZE field, but if MVE is not
implemented it is read-only and always reads as 4; this is how QEMU
currently handles it.
Make the field writable when MVE is implemented.
We can safely add the field to the MVE migration struct because
currently no CPUs enable MVE and so the migration struct is never
used.
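In the FPSCR write path this amounts to something like the following (bit positions per the v8.1-M spec; the field and feature-test names are assumptions):

    if (cpu_isar_feature(aa32_mve, cpu)) {
        /* FPSCR.LTPSIZE is bits [18:16]; only writable when MVE exists */
        env->v7m.ltpsize = extract32(val, 16, 3);
    }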
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-8-peter.maydell@linaro.org
If MVE is implemented for an M-profile CPU then it has a VPR
register, which tracks predication information.
Implement the read and write handling of this register, and
the migration of its state.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-7-peter.maydell@linaro.org
The fp_sysreg_checks() function is supposed to be returning an
FPSysRegCheckResult, which is an enum with three possible values.
However, three places in the function "return false" (a hangover from
a previous iteration of the design where the function just returned a
bool). Make these return FPSysRegCheckFailed instead (for no
functional change, since both false and FPSysRegCheckFailed are
zero).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-6-peter.maydell@linaro.org
Split out the handling of VMOV_reg_sp and VMOV_reg_dp so that we can
permit the insns if either FP or MVE are present.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-5-peter.maydell@linaro.org
The do_vfp_2op_sp() and do_vfp_2op_dp() functions currently check
whether floating point is supported via the aa32_fpdp_v2 and
aa32_fpsp_v2 isar checks. For v8.1M MVE support, the VMOV_reg trans
functions (but not any of the others) need to update this to also
allow the insn if MVE is implemented. Move the check out of the do_
function and into its callsites (which are all implemented via the
DO_VFP_2OP macro), so we have a place to change the check for the
VMOV insns.
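After the refactor the macro takes the feature check as a parameter, so the VMOV_reg instantiations can later pass an "FP or MVE" check while everything else keeps the plain FP one; roughly:

    #define DO_VFP_2OP(INSN, PREC, FN, CHECK)                           \
        static bool trans_##INSN##_##PREC(DisasContext *s,              \
                                          arg_##INSN##_##PREC *a)       \
        {                                                               \
            if (!dc_isar_feature(CHECK, s)) {                           \
                return false;                                           \
            }                                                           \
            return do_vfp_2op_##PREC(s, FN, a->vd, a->vm);              \
        }

    DO_VFP_2OP(VMOV_reg, sp, tcg_gen_mov_i32, aa32_fpsp_v2)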
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-4-peter.maydell@linaro.org
Some v8M instructions are present if either the floating point
extension or MVE is implemented. Update our implementation of them
to check for MVE as well as for FP.
This is all the insns which use CheckDecodeFaults(ExtType_MveOrFp) or
CheckDecodeFaults(ExtType_MveOrDpFp) in their pseudocode, which are
essentially the loads and stores, moves and sysreg accesses, except
for VMOV_reg_sp and VMOV_reg_dp, which we handle in subsequent
patches because they need a refactor to provide a place to put the
new MVE check.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-3-peter.maydell@linaro.org
Add the isar feature check functions we will need for v8.1M MVE:
* a check for MVE present: this corresponds to the pseudocode's
CheckDecodeFaults(ExtType_Mve)
* a check for the optional floating-point part of MVE: this
corresponds to CheckDecodeFaults(ExtType_MveFp)
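A sketch of the two checks, keyed off MVFR1.MVE (treat the exact register/field spelling as an assumption):

    static inline bool isar_feature_aa32_mve(const ARMISARegisters *id)
    {
        /* CheckDecodeFaults(ExtType_Mve): integer MVE implemented */
        return FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 1;
    }

    static inline bool isar_feature_aa32_mve_fp(const ARMISARegisters *id)
    {
        /* CheckDecodeFaults(ExtType_MveFp): MVE with the FP extension */
        return FIELD_EX32(id->mvfr1, MVFR1, MVE) >= 2;
    }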
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20210520152840.24453-2-peter.maydell@linaro.org
Merge remote-tracking branch 'remotes/dg-gitlab/tags/ppc-for-6.1-20210603' into staging
ppc patch queue 2021-06-03
Next batch of ppc target patches. Highlights are:
* A fix for a regression with single-step mode
* Start of moving ppc to use decodetree
* Implementation of some POWER10 64-bit prefixed instructions
* Several cleanups to softmmu code
* Continued progress towards allowing --disable-tcg
* Fix for the POWER PEF implementation
* Fix for LPCR handling of hotplugged CPUs
* Assorted other bugfixes and cleanups
This patchset does contain a couple of changes to code outside my
normal scope of maintainership, related to the removal of cpu_dump and
cpu_statistics hooks. ppc was the last target arch implementing these
at all, and they didn't really do anything there either. The patches
should have relevant acks.
# gpg: Signature made Thu 03 Jun 2021 09:20:59 BST
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dg-gitlab/tags/ppc-for-6.1-20210603: (42 commits)
target/ppc: fix single-step exception regression
target/ppc: Move cmp/cmpi/cmpl/cmpli to decodetree
target/ppc: Move addpcis to decodetree
target/ppc: Implement vcfuged instruction
target/ppc: Implement cfuged instruction
target/ppc: Implement setbc/setbcr/setnbc/setnbcr instructions
target/ppc: Implement prefixed integer store instructions
target/ppc: Move D/DS/X-form integer stores to decodetree
target/ppc: Implement prefixed integer load instructions
target/ppc: Move D/DS/X-form integer loads to decodetree
target/ppc: Implement PNOP
target/ppc: Move ADDI, ADDIS to decodetree, implement PADDI
target/ppc: Add infrastructure for prefixed insns
target/ppc: Move page crossing check to ppc_tr_translate_insn
target/ppc: Introduce macros to check isa extensions
target/ppc: powerpc_excp: Consolidate TLB miss code
target/ppc: powerpc_excp: Remove dump_syscall_vectored
target/ppc: powerpc_excp: Move lpes code to where it is used
target/ppc: overhauled and moved logic of storing fpscr
target/ppc: removed all mentions of PPC_DUMP_CPU
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit 6086c75 (target/ppc: Replace POWERPC_EXCP_BRANCH with
DISAS_NORETURN) broke the generation of exceptions when
CPU_SINGLE_STEP or CPU_BRANCH_STEP were set, due to nip always being
reset to the address of the current instruction.
This fix leaves nip untouched when generating the exception.
Signed-off-by: Luis Pires <luis.pires@eldorado.org.br>
Reported-by: Matheus Ferst <matheus.ferst@eldorado.org.br>
Message-Id: <20210602125103.332793-1-luis.pires@eldorado.org.br>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>