We can eliminate an extra TB in this case, which merely
loads a "return address" into rn.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
For priv levels 1 & 2, we were doing so from do_ibranch_priv.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add the kvm_arm_get_max_vm_ipa_size() helper that returns the
number of bits in the IPA address space supported by KVM.
This capability needs to be known to create the VM with a
specific IPA max size (the kvm_type passed along with the KVM_CREATE_VM
ioctl).
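For illustration, a minimal sketch of what such a helper can look like,
assuming the KVM_CAP_ARM_VM_IPA_SIZE extension and falling back to the
legacy 40 bits when the capability is absent:

    /* Sketch: query the IPA size limit supported by the host KVM.
     * KVM_CAP_ARM_VM_IPA_SIZE returns the max number of IPA bits, or 0
     * on older kernels, in which case assume the legacy 40-bit limit. */
    int kvm_arm_get_max_vm_ipa_size(MachineState *ms)
    {
        KVMState *s = KVM_STATE(ms->accelerator);
        int ret;

        ret = kvm_check_extension(s, KVM_CAP_ARM_VM_IPA_SIZE);
        return ret > 0 ? ret : 40;
    }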
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Message-id: 20190304101339.25970-6-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This will allow sharing code that adjusts rmode beyond
the existing users.
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This decoding more closely matches the ARMv8.4 Table C4-6,
Encoding table for Data Processing - Register Group.
In particular, op2 == 0 is now more than just Add/sub (with carry).
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We do not need an out-of-line helper for manipulating bits in pstate.
While changing things, share the implementation of gen_ss_advance.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The EL0+UMA check is unique to DAIF. While SPSel had avoided the
check by nature of already checking EL >= 1, the other post v8.0
extensions to MSR (imm) allow EL0 and do not require UMA. Avoid
the unconditional write to pc and use raise_exception_ra to unwind.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Minimize the number of places that will need updating when
the virtual host extensions are added.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190301200501.16533-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Found by inspection: Rn is the base register against which the
load began; I is the register within the mask being processed.
The exception return should of course be processed from the loaded PC.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190301202921.21209-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The floating-point extension facility implemented certain changes to
BFP, HFP and DFP instructions.
As we don't implement HFP/DFP, we can ignore those completely. Related
to BFP, the changes include
- SET BFP ROUNDING MODE (SRNMB) instruction
- BFP-rounding-mode field in the FPC register is changed to 3 bits
- CONVERT FROM LOGICAL instructions
- CONVERT TO LOGICAL instructions
- Changes (rounding mode + XxC) added to
-- CONVERT TO FIXED
-- CONVERT FROM FIXED
-- LOAD FP INTEGER
-- LOAD ROUNDED
-- DIVIDE TO INTEGER
For TCG, we don't implement DIVIDE TO INTEGER, and it is harder to
implement, so skip that. Also, as we don't implement PFPO, we can skip
changes to that as well. The other parts are now implemented, we can
indicate the facility.
z14 PoP mentions that "The floating-point extension facility is installed
in the z/Architecture architectural mode. When bit 37 is one, bit 42 is
also one.", meaning that the DFP (decimal-floating-point) facility also
has to be indicated. We can ignore that for now.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-16-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
"round to nearest with ties away from 0" maps to float_round_ties_away.
"round to prepare for shorter precision" maps to float_round_to_odd.
As all instructions properly check for valid rounding modes in translate.c
we can add an assert. Fix one missing empty line.
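For reference, a hedged sketch of the mapping this describes (the M3
encodings come from the PoP; the helper name is illustrative):

    /* Illustrative: map an s390x M3 rounding-mode modifier to a QEMU
     * softfloat rounding mode. 0 means "use the current FPC mode". */
    static int m3_to_softfloat_rnd(int m3)
    {
        switch (m3) {
        case 1: return float_round_ties_away;    /* nearest, ties away from 0 */
        case 3: return float_round_to_odd;       /* prepare for shorter precision */
        case 4: return float_round_nearest_even;
        case 5: return float_round_to_zero;
        case 6: return float_round_up;           /* toward +inf */
        case 7: return float_round_down;         /* toward -inf */
        default: return -1;                      /* 0 = keep FPC mode; 2 unassigned */
        }
    }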
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With the floating-point extension facility, LOAD ROUNDED has
a rounding mode specification and the inexact-exception control (XxC).
Handle them just like e.g. LOAD FP INTEGER.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
With the floating-point extension facility
- CONVERT FROM LOGICAL
- CONVERT TO LOGICAL
- CONVERT TO FIXED
- CONVERT FROM FIXED
- LOAD FP INTEGER
have both a rounding mode specification and the inexact-exception control
(XxC). Other instructions will be handled separately.
Check for valid rounding modes and also forward the XxC (via m4). To avoid
a lot of boilerplate code and changes to the helpers, combine the m3 and
m4 fields into a single 32-bit TCG variable. Perform the checks at a
central place, taking into account whether the m3 or m4 field was ignored
before the floating-point extension facility was introduced.
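A hedged sketch of the packing idea; the exact bit layout chosen here is
an assumption for illustration, while deposit32()/extract32() are real
QEMU bitops helpers:

    /* Illustrative: pack m3 (rounding mode) and m4 (XxC etc.) into one
     * 32-bit immediate handed to the helper; bit positions assumed. */
    uint32_t m34 = deposit32(m3, 4, 4, m4);   /* m3 in bits 0-3, m4 in bits 4-7 */

    /* ... and unpack on the helper side: */
    int  rnd = extract32(m34, 0, 4);
    bool xxc = extract32(m34, 6, 1);          /* bit 1 of M4 in IBM numbering */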
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-13-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Some instructions allow suppressing IEEE inexact exceptions.
z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions":
IEEE-inexact-exception control (XxC): Bit 1 of
the M4 field is the XxC bit. If XxC is zero, recognition
of IEEE-inexact exception is not suppressed; if XxC is
one, recognition of IEEE-inexact exception is suppressed.
In particular, handling of overflow/underflow remains as is; inexact is
reported along with them:
z14 PoP, 9-23, "Suppression of Certain IEEE Exceptions":
For example, the IEEE-inexact-exception control (XxC)
has no effect on the DXC; that is, the DXC for
IEEE-overflow or IEEE-underflow exceptions along with the
detail for exact, inexact and truncated, or inexact and
incremented, is reported according to the actual condition.
Follow-up patches will wire it up correctly for the applicable
instructions.
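When wired up, the check boils down to something like this (sketch only;
the flag constant is hypothetical):

    /* Sketch: XxC suppresses only the inexact exception itself; overflow
     * and underflow, including their inexact detail in the DXC, stay. */
    if (xxc) {
        s390_exc &= ~S390_IEEE_INEXACT;   /* hypothetical flag constant */
    }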
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-12-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We want to reuse this in the context of vector instructions. So use
better matching names and introduce s390_restore_bfp_rounding_mode().
While at it, add proper newlines.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-11-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's split handling of BFP/DFP rounding mode configuration. Also,
let's not reuse the sfpc handler but use a separate handler, so we can
properly check for specification exceptions for SRNMB.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We already forward the 3 bits correctly in the translation functions. We
also have to handle them properly and check for specification
exceptions.
Setting an invalid rounding mode (BFP only; all DFP rounding modes are
valid) results in a specification exception. Setting unassigned bits in
the fpc results in a specification exception.
This fixes LOAD FPC (AND SIGNAL) and SET FPC (AND SIGNAL). Also, for
SET BFP ROUNDING MODE, the 3-bit rounding mode is now explicitly checked.
Note: TCG_CALL_NO_WG is required for the sfpc handler, as we now inject
exceptions.
We won't be modeling the absence of the "floating-point extension facility"
for now; this is not necessary, as most software takes the facility for
granted without checking.
z14 PoP, 9-23, "LOAD FPC"
When the floating-point extension facility is
installed, bits 29-31 of the second operand must
specify a valid BFP rounding mode and bits 6-7,
14-15, 24, and 28 must be zero; otherwise, a
specification exception is recognized.
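A sketch of the resulting check; the 0x03030088 mask encodes bits 6-7,
14-15, 24 and 28 of the quoted text in MSB-0 numbering, and the
exception-raising call is named hypothetically:

    /* Sketch of the specification check described above. */
    static void check_fpc_validity(CPUS390XState *env, uint32_t fpc, uintptr_t ra)
    {
        /* Bits 6-7, 14-15, 24 and 28 (MSB-0 numbering) must be zero. */
        if (fpc & 0x03030088u) {
            raise_specification_exception(env, ra);   /* hypothetical helper */
        }
        /* BFP rounding mode (bits 29-31): only 0-3 and 7 are assigned. */
        switch (fpc & 0x7) {
        case 0: case 1: case 2: case 3: case 7:
            break;
        default:
            raise_specification_exception(env, ra);   /* hypothetical helper */
        }
    }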
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-9-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The trap is triggered based on priority of the enabled signaling flags.
Only overflow and underflow allow a concurrent inexact exception.
z14 PoP, 9-33, Figure 9-21
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can directly work on the uint64_t value, no need for a temporary
uint32_t value.
Also clean up and shorten the comments.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
IEEE underflows are not reported when the mask bit is off and we don't
also have an inexact exception.
z14 PoP, 9-20, "IEEE Underflow":
An IEEE-underflow exception is recognized for an
IEEE target when the tininess condition exists and
either: (1) the IEEE-underflow mask bit in the FPC
register is zero and the result value is inexact, or (2)
the IEEE-underflow mask bit in the FPC register is
one.
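In code, that condition can be expressed roughly as follows (sketch; the
softfloat flag names are QEMU's, the fpc mask test is simplified):

    /* Sketch: drop a masked-off underflow unless the result was also
     * inexact. */
    if ((flags & float_flag_underflow) &&
        !(flags & float_flag_inexact) &&
        !underflow_trap_enabled(env)) {      /* hypothetical fpc mask check */
        flags &= ~float_flag_underflow;
    }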
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Many things are wrong and some parts cannot be fixed yet. Fix what we
can fix easily and add two FIXMEs:
The fpc flags are not updated in case an exception is actually injected.
Inexact exceptions have to be handled separately, as they are the only
exceptions that can coexist with underflows and overflows.
I reread the horribly complicated chapters in the PoP at least 5 times
and hope I got it right.
For references:
- z14 PoP, 9-18, "IEEE Exceptions"
- z14 PoP, 19-9, Figure 19-8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We want to reuse that function in the vector instruction context. While at
it, clean up the code, using defines for magic values and avoiding the
handcrafted bit conversion.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's use the proper conversion functions now that we have them.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Let's detect normal and denormal ("subnormal") numbers reliably. Also
test for quiet NaNs. As only one class is possible, test common cases
first.
While at it, use a better check to test for the mask bits in the data
class mask. The data class mask has 12 bits, where bit 0 is the
leftmost bit and bit 11 the rightmost bit. In the PoP an easy-to-read
table with the numbers is provided for the VECTOR FP TEST DATA CLASS
IMMEDIATE instruction; the table for TEST DATA CLASS is more confusing
as it is based on 64-bit values.
Factor the checks out into separate functions, as they will also be
needed for floating-point vector instructions. We can use a macro to
generate the functions.
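A sketch of what such a generated check can look like for float64,
assuming the 12-bit class/sign layout from the PoP table (the DCMASK_*
values and the function name are illustrative; the softfloat
classification calls are real QEMU API):

    /* Illustrative layout of the 12-bit data class mask, bit 0 leftmost. */
    #define DCMASK_ZERO           0x0c00   /* bits 0-1:   +/- zero       */
    #define DCMASK_NORMAL         0x0300   /* bits 2-3:   +/- normal     */
    #define DCMASK_SUBNORMAL      0x00c0   /* bits 4-5:   +/- subnormal  */
    #define DCMASK_INFINITY       0x0030   /* bits 6-7:   +/- infinity   */
    #define DCMASK_QUIET_NAN      0x000c   /* bits 8-9:   +/- QNaN       */
    #define DCMASK_SIGNALING_NAN  0x0003   /* bits 10-11: +/- SNaN       */
    #define DCMASK_NEGATIVE       0x0555   /* the "minus" bit of each pair */

    static uint16_t float64_dcmask(CPUS390XState *env, float64 f)
    {
        uint16_t mask;

        /* Exactly one class applies; determine it, then pick the sign bit. */
        if (float64_is_any_nan(f)) {
            mask = float64_is_signaling_nan(f, &env->fpu_status)
                   ? DCMASK_SIGNALING_NAN : DCMASK_QUIET_NAN;
        } else if (float64_is_infinity(f)) {
            mask = DCMASK_INFINITY;
        } else if (float64_is_zero(f)) {
            mask = DCMASK_ZERO;
        } else if (float64_is_zero_or_denormal(f)) {
            mask = DCMASK_SUBNORMAL;
        } else {
            mask = DCMASK_NORMAL;
        }
        return float64_is_neg(f) ? (mask & DCMASK_NEGATIVE)
                                 : (mask & ~DCMASK_NEGATIVE);
    }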
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190218122710.23639-2-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Use a new CC helper to calculate the CC lazily if needed. While the
PoP mentions that "A 32-bit unsigned binary integer" is placed into the
first operand, nothing says that the other 32 bits (high part) are
left untouched. Maybe the other 32 bits are unpredictable.
So store 64 bits for now.
Bit magic courtesy of Richard.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-8-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Nice trick to load a 32-bit value into vector element 0 (32-bit element
size) from memory, zeroing out element 1. The short HFP to long HFP
conversion really is only a shift.
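That shift is simply the following (sketch; short and long HFP share the
sign and characteristic, the long form just has 32 more fraction bits):

    /* Sketch: widening a short (32-bit) HFP value to long (64-bit) HFP
     * moves sign/characteristic/fraction to the top half and leaves the
     * new low-order fraction bits zero. */
    uint64_t hfp_short_to_long(uint32_t s)
    {
        return (uint64_t)s << 32;
    }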
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Also properly wrap in 24-bit mode. While at it, convert the comment (and
drop the comment about fundamental TCG optimizations).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We'll use that a lot along with gvec helpers, to calculate the start
address of a vector.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We will use the s390x term "Element Size" (es) for MO_8 == 0, MO_16 == 1,
etc. This is a simple rename of variables.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Will be needed, so add it to the format description.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190225200318.16102-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If we have vector registers and the designation is not zero, we have
to try to write the vector registers. If the designation is zero or
if storing fails, we must not indicate validity. s390_build_validity_mcic()
automatically already sets validity if the vector instruction facility
is installed.
As long as we don't support the guarded-storage facility, the alignment
and size of the area is always 1024 bytes.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-4-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Convert this to QEMU style.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-3-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we will support vector instructions soon, and vector registers are
stored in 64-bit host chunks, let's use cpu_to_be64. The same applies to
the guarded storage control block.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190222081153.14206-2-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190228-1' into staging
target-arm queue:
* add MHU and dual-core support to Musca boards
* refactor some VFP insns to be gated by ID registers
* Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
* Implement ARMv8.2-FHM extension
* Advertise JSCVT via HWCAP for linux-user
# gpg: Signature made Thu 28 Feb 2019 11:06:55 GMT
# gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg: issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20190228-1:
linux-user: Enable HWCAP_ASIMDFHM, HWCAP_JSCVT
target/arm: Enable ARMv8.2-FHM for -cpu max
target/arm: Implement VFMAL and VFMSL for aarch32
target/arm: Implement FMLAL and FMLSL for aarch64
target/arm: Add helpers for FMLAL
Revert "arm: Allow system registers for KVM guests to be changed by QEMU code"
target/arm: Gate "miscellaneous FP" insns by ID register field
target/arm: Use MVFR1 feature bits to gate A32/T32 FP16 instructions
hw/arm/armsse: Unify init-svtor and cpuwait handling
hw/arm/iotkit-sysctl: Implement CPUWAIT and INITSVTOR*
hw/arm/iotkit-sysctl: Add SSE-200 registers
hw/misc/iotkit-sysctl: Correct typo in INITSVTOR0 register name
target/arm/arm-powerctl: Add new arm_set_cpu_on_and_reset()
target/arm/cpu: Allow init-svtor property to be set after realize
hw/arm/armsse: Wire up the MHUs
hw/misc/armsse-mhu.c: Model the SSE-200 Message Handling Unit
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-27-2019' into staging
MIPS queue for February 27th, 2019
# gpg: Signature made Wed 27 Feb 2019 13:27:36 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-feb-27-2019:
target/mips: Preparing for adding MMI instructions
tests/tcg: target/mips: Add tests for MSA integer max/min instructions
tests/tcg: target/mips: Add wrappers for MSA integer max/min instructions
qemu-doc: Add section on MIPS' Boston board
qemu-doc: Add section on MIPS' Fulong 2E board
qemu-doc: Move section on MIPS' mipssim pseudo board
disas: nanoMIPS: Fix a function misnomer
tests/tcg: target/mips: Add tests for MSA integer compare instructions
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Load/store opcodes may raise MMU exceptions. Normally exceptions should
be checked in priority order before any actual operations, but since MMU
exceptions are tightly coupled with actual memory access, there's
currently no way to do it.
Approximate this behavior by executing all load opcodes, then all store
opcodes, and then all other opcodes in the FLIX bundles. Use the opcode
dependency mechanism to express ordering. Mark load/store opcodes with
XTENSA_OP_{LOAD,STORE} flags. Newer libisa has classifier functions that
can tell whether an opcode is a load or a store, but this information is
not available in the existing overlays.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Currently topological opcode sorting stops at the first detected
dependency loop. Introduce struct opcode_arg_copy that describes a
temporary register copy. Scan remaining opcodes searching for
dependencies that can be broken, break them by introducing temporary
register copies and record them in an array. In case of success
create local temporaries and initialize them with current register
values. Share a single temporary copy between all register users. Delete
temporaries after translation.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
libisa represents boolean registers b0..b15 as a BR register file and as
BR4 and BR8 register groups. Add these register files and use
OpcodeArg::{in,out} parameters to access boolean registers in
translators.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
libisa represents MAC16 registers m0..m3 as an MR register file. Add
this register file and reference its registers directly from
translate_mac16. Drop the translator parameter that indicates whether the
opcode argument is in ar or in mr.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
To support circular register dependencies in FLIX bundles opcode inputs
and outputs must be separate and adjustable. Circular dependencies can
be broken by making temporary copies of opcode inputs and substituting
them into the arguments array instead of the original registers.
E.g. the circular register dependency in the following bundle:
{ mov a2, a3 ; mov a3, a2 }
can be resolved by making copy a2' = a2 and substituting it as input
argument of the second opcode:
{ mov a2, a3 ; mov a3, a2' }
Change opcode translator prototype to accept OpcodeArg array as
argument. For each register argument initialize OpcodeArg::{in,out} with
TCGv_* of the respective register. Don't explicitly use cpu_R in the
opcode translators, use OpcodeArg::{in,out} instead.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Move return address calculation and WINDOW_START adjustment out of the
retw helper to simplify logic a bit and avoid using registers directly.
Pass a0 as a parameter to the helper.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Opcodes that modify WINDOW_BASE SR don't have dependency on opcodes that
use windowed registers. If such opcodes are combined in a single
instruction they may not be correctly ordered. Instead of adding said
dependency use temporary register to store changed WINDOW_BASE value and
do actual register window rotation as a postprocessing step.
Not all opcodes that change WINDOW_BASE need this: retw, rfwo and rfwu
are also jump opcodes, so they are guaranteed to be translated last and
thus will not affect other opcodes in the same instruction.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Some opcodes may need additional actions at every exit from the
translated instruction or may need to amend TB exit slots available to
jumps generated for the instruction. Add a gen_postprocess function and
call it from gen_jump_slot and from disas_xtensa_insn.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Opcodes in different slots may read and write the same resources
(registers, states). In the absence of resource dependency loops it must
be possible to sort opcodes to avoid interference.
Record resources used by each opcode in the bundle. Build an opcode
dependency graph and use topological sort to order its nodes. In case of
success translate opcodes in sort order. In case of failure report it and
raise an invalid opcode exception.
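Not QEMU code, but a self-contained sketch of the idea: build the
dependency graph and emit opcodes in topological order, reporting failure
when a cycle remains (Kahn's algorithm; all names are illustrative):

    #include <stdbool.h>

    #define MAX_SLOTS 16

    /* deps[i][j] true means opcode i must be translated before opcode j. */
    static bool topo_order(int n, const bool deps[MAX_SLOTS][MAX_SLOTS],
                           int order[MAX_SLOTS])
    {
        int in_degree[MAX_SLOTS] = { 0 };
        bool emitted[MAX_SLOTS] = { false };
        int count = 0;

        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (deps[i][j]) {
                    in_degree[j]++;
                }
            }
        }
        while (count < n) {
            int pick = -1;
            for (int i = 0; i < n; i++) {
                if (!emitted[i] && in_degree[i] == 0) {
                    pick = i;
                    break;
                }
            }
            if (pick < 0) {
                return false;              /* dependency loop: invalid opcode */
            }
            emitted[pick] = true;
            order[count++] = pick;
            for (int j = 0; j < n; j++) {
                if (deps[pick][j]) {
                    in_degree[j]--;
                }
            }
        }
        return true;
    }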
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Note that float16_to_float32 rightly squashes SNaN to QNaN.
But of course pickNaNMulAdd, for ARM, selects SNaNs first.
So we have to preserve SNaN long enough for the correct NaN
to be selected. Thus float16_to_float32_by_bits.
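A rough, self-contained sketch of such a bit-level widening (not the QEMU
implementation; flush-to-zero handling omitted). The key point is that
the NaN payload, including the signaling bit, is shifted up unchanged:

    #include <stdint.h>

    /* Widen an IEEE binary16 bit pattern to binary32 without silencing SNaNs. */
    static uint32_t f16_to_f32_bits(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h >> 15) << 31;
        uint32_t exp  = (h >> 10) & 0x1f;
        uint32_t frac = h & 0x3ff;

        if (exp == 0x1f) {                       /* Inf/NaN: payload kept as-is */
            return sign | 0x7f800000u | (frac << 13);
        }
        if (exp == 0) {
            if (frac == 0) {                     /* +/- zero */
                return sign;
            }
            exp = 127 - 15 + 1;                  /* normalize the subnormal */
            while (!(frac & 0x400)) {
                frac <<= 1;
                exp--;
            }
            frac &= 0x3ff;
            return sign | (exp << 23) | (frac << 13);
        }
        return sign | ((exp + (127 - 15)) << 23) | (frac << 13);
    }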
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219222952.22183-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 823e1b3818,
which introduces a regression running EDK2 guest firmware
under KVM:
error: kvm run failed Function not implemented
PC=000000013f5a6208 X00=00000000404003c4 X01=000000000000003a
X02=0000000000000000 X03=00000000404003c4 X04=0000000000000000
X05=0000000096000046 X06=000000013d2ef270 X07=000000013e3d1710
X08=09010755ffaf8ba8 X09=ffaf8b9cfeeb5468 X10=feeb546409010756
X11=09010757ffaf8b90 X12=feeb50680903068b X13=090306a1ffaf8bc0
X14=0000000000000000 X15=0000000000000000 X16=000000013f872da0
X17=00000000ffffa6ab X18=0000000000000000 X19=000000013f5a92d0
X20=000000013f5a7a78 X21=000000000000003a X22=000000013f5a7ab2
X23=000000013f5a92e8 X24=000000013f631090 X25=0000000000000010
X26=0000000000000100 X27=000000013f89501b X28=000000013e3d14e0
X29=000000013e3d12a0 X30=000000013f5a2518 SP=000000013b7be0b0
PSTATE=404003c4 -Z-- EL1t
with
[ 3507.926571] kvm [35042]: load/store instruction decoding not implemented
in the host dmesg.
Revert the change for the moment until we can investigate the
cause of the regression.
Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is a set of VFP instructions which we implement in
disas_vfp_v8_insn() and gate on the ARM_FEATURE_V8 bit.
These were all first introduced in v8 for A-profile, but in
M-profile they appeared in v7M. Gate them on the MVFR2
FPMisc field instead, and rename the function appropriately.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-3-peter.maydell@linaro.org
Instead of gating the A32/T32 FP16 conversion instructions on
the ARM_FEATURE_VFP_FP16 flag, switch to our new approach of
looking at ID register bits. In this case MVFR1 fields FPHP
and SIMDHP indicate the presence of these insns.
This change doesn't alter behaviour for any of our CPUs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190222170936.13268-2-peter.maydell@linaro.org
Currently the Arm arm-powerctl.h APIs allow:
* arm_set_cpu_on(), which powers on a CPU and sets its
initial PC and other startup state
* arm_reset_cpu(), which resets a CPU which is already on
(and fails if the CPU is powered off)
but there is no way to say "power on a CPU as if it had
just come out of reset and don't do anything else to it".
Add a new function arm_set_cpu_on_and_reset(), which does this.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219125808.25174-5-peter.maydell@linaro.org
Make the M-profile "init-svtor" property be settable after realize.
This matches the hardware, where this is a config signal which
is sampled on CPU reset and can thus be changed between one
reset and another. To do this we have to change the API we
use to add the property.
(We will need this capability for the SSE-200.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190219125808.25174-4-peter.maydell@linaro.org
Set up MMI code to be compiled only for TARGET_MIPS64. This is
needed so that GPRs are 64 bits wide, and combined with MMI registers,
they will form full 128-bit registers.
Signed-off-by: Mateja Marjanovic <mateja.marjanovic@rt-rk.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1551183797-13570-2-git-send-email-mateja.marjanovic@rt-rk.com>
No guest support yet
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-13-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(Might need more patch splitting)
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-12-clg@kaod.org>
[dwg: Hack to fix compile with some earlier include tweaks of mine]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
That "b" means "base address" and thus shouldn't be in the name
of actual entries and related constants.
This patch keeps the synthetic patb_entry field of the spapr
virtual hypervisor unchanged until I figure out if that has
an impact on the migration stream.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-11-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Our TCG TLB only tags whether it's a HV vs a guest access, so it must
be flushed when the LPIDR is changed.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-10-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Let's use the generic helper tlb_flush_all_cpus_synced() instead
of iterating the CPUs ourselves.
We do lose the optimization of clearing the "other" CPUs "need flush"
flags but this shouldn't be a problem in practice.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-9-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
POWER9 (arch v3) slightly changes the HPTE format. The B bits move
from the first to the second half of the HPTE, and the AVPN/ARPN
are slightly shorter.
However, under SPAPR, the hypercalls still take the old format
(and probably will for the foreseeable future).
The simplest way to support this is thus to convert the HPTEs from
new to old format when reading them if the MMU model is v3 and there
is no virtual hypervisor, leaving the rest of the code unchanged.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-8-clg@kaod.org>
[dwg: Moved function to .c since there was no real need for it in the .h]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
With mttcg, we can have MMU lookups happening at the same time
as the guest modifying the page tables.
Since the HPTEs of the hash table MMU contain two words (or
doublewords on 64-bit), we need to make sure we read them
in the right order, with the correct memory barrier.
Additionally, when using emulated SPAPR mode, the hypercalls
writing to the hash table must also perform the updates in
the right order.
Note: This part is still not entirely correct
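For illustration, the reading side roughly needs this shape (sketch only;
variable names are illustrative). The guest publishes dword 1 before
setting the VALID bit in dword 0, so the reader orders its loads the
other way round:

    /* Sketch: read the two HPTE doublewords with the required ordering. */
    pte0 = ldq_phys(cs->as, hpte_addr);        /* dword 0: contains VALID bit */
    smp_rmb();                                 /* order the second load */
    pte1 = ldq_phys(cs->as, hpte_addr + 8);    /* dword 1 */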
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Historically the 64-bit server MMU supports two ways of configuring the
guest "real mode" mapping:
- The "RMA" which is a single chunk of physically contiguous
memory remapped as guest real, and controlled by the RMLS
field in the LPCR register and the RMOR register.
- The "VRMA" which uses special PTEs inserted in the partition
hash table by the hypervisor.
POWER9 deprecates the former, which is reflected by the filtering
done in ppc_store_lpcr() which effectively prevents setting of
the RMLS field.
However, when using fully emulated SPAPR machines, our qemu code
currently only knows how to define the guest real mode memory using
RMLS.
Thus you cannot run a SPAPR machine anymore with a POWER9 CPU
model today.
This works around it with a quirk in ppc_store_lpcr() to continue
allowing the RMLS field to be set when using a virtual hypervisor.
Ultimately we will want to implement configuring a VRMA instead
which will also be necessary if we want to migrate a SPAPR guest
between TCG and KVM but this is a lot more work.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that LPCR:HR is set properly for SPAPR, use it for deciding
the translation type, which also works for bare metal.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The HW relies on LPCR:HR along with the PATE to determine whether
to use Radix or Hash mode. In fact it uses LPCR:HR more commonly
than the PATE.
For us, it's also more efficient to do so, especially since unlike
the HW we do not maintain a cache of the current PATE and HV PATE
in a generic place.
Prepare the grounds for that by ensuring that LPCR:HR is set
properly on SPAPR machines.
Another option would have been to use a callback to get the PATE
but this gets messy when implementing bare metal support, it's
much simpler (and faster) to use LPCR.
Since existing migration streams may not have it, fix it up in
spapr_post_load() as well based on the pseudo-PATE entry that
we keep.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215170029.15641-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This controls whether the External Interrupt (0x500) can be
delivered to the hypervisor or not.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-11-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Adds support for the Hypervisor directed interrupts in addition to the
OS ones.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg: - modified the icp_realize() and xive_tctx_realize() to take
into account explicitly the POWER9 interrupt model
- introduced a specific power9_set_irq for POWER9 ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215161648.9600-10-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds support for delivering that exception
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-9-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
It's very easy for the CPU-specific has_work() implementation
and the logic in ppc_hw_interrupt() to be subtly out of sync.
This can occasionally allow a CPU to wake up from a PM state
and resume executing past the PM instruction when it should
resume at the 0x100 vector.
This detects if it happens and aborts, making it a lot easier
to catch such bugs when testing rather than chasing obscure
guest misbehaviour.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-8-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
And use it to get the correct HILE bit in HID0
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
To better reflect what this does, as it's specific to some of the
P7/P8/P9 PM states, not generic.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-6-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This moves the code to handle waking up from the 0x100 vector
from powerpc_excp() to a separate function, as the former is
already way too big as it is.
No functional change.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-5-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
STOP must act differently based on PSSCR:EC on POWER9. When set, it
acts like the P7/P8 power management instructions and wakes up at 0x100
based on the wakeup conditions in LPCR.
When PSSCR:EC is clear however, it will wake up at the next instruction
after STOP (if EE is clear) or take the corresponding interrupts (if
EE is set).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-4-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When issuing a power management instruction, we set MSR:EE
to force ppc_hw_interrupt() into calling powerpc_excp()
to deal with the fact that on P7 and P8, the system reset
caused by the wakeup needs to be generated regardless of
the MSR:EE value (using LPCR only).
This however means that the OS will see a bogus SRR1:EE
value which is a problem. It also prevents properly
implementing P9 STOP "light".
So fix this by instead putting some logic in ppc_hw_interrupt()
to decide whether to deliver or not by taking into account the
fact that we are waking up from sleep.
The LPCR isn't checked as this is done in the has_work() test.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215161648.9600-3-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Those instructions currently raise an exception from within
the helper. This tends to result in a bogus nip value in
the env context (typically the beginning of the TB). Such
a helper needs a gen_update_nip() first.
This fixes it with a different approach, which is to throw the
exception from translate.c instead of from the helper, using
gen_exception_nip(), which does the right thing. Exception
EXCP_HLT is also used instead of POWERPC_EXCP_STOP to effectively
exit from the CPU execution loop.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[clg : modified the commit log to comment the use of EXCP_HLT instead
of POWERPC_EXCP_STOP]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190215161648.9600-2-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Merge remote-tracking branch 'remotes/amarkovic/tags/mips-queue-feb-21-2019-v2' into staging
MIPS queue for February 21st, 2019, v2
# gpg: Signature made Thu 21 Feb 2019 18:37:04 GMT
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic/tags/mips-queue-feb-21-2019-v2:
target/mips: fulong2e: Dynamically generate SPD EEPROM data
target/mips: fulong2e: Fix bios flash size
hw/pci-host/bonito.c: Add PCI mem region mapped at the correct address
target/mips: implement QMP query-cpu-definitions command
tests/tcg: target/mips: Add wrappers for MSA integer compare instructions
tests/tcg: target/mips: Change directory name 'bit-counting' to 'bit-count'
tests/tcg: target/mips: Correct path to headers in some test source files
hw/misc: mips_itu: Fix 32/64 bit issue in a line involving shift operator
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch enables QMP-based querying of the available CPU types for
MIPS and MIPS64 platforms.
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: fixed a couple of comment typos]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There are lots of special cases within these insns. Split the
major argument decode/loading/saving into no_output (compares),
rd_is_dp, and rm_is_dp.
We still need to special case argument load for compare (rd as
input, rm as zero) and vcvt fixed (rd as input+output), but lots
of special cases do disappear.
Now that we have a full switch at the beginning, hoist the ISA
checks from the code generation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move all of the fp helpers out of helper.c into a new file.
This is code movement only. Since helper.c has no copyright
header, take the one from cpu.h for the new file.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For opcodes 0-5, move some if conditions into the structure
of a switch statement. For opcodes 6 & 7, decode everything
at once with a second switch.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190215192302.27855-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was introduced by commit bf8d09694c ("target/arm: Don't clear
supported PMU events when initializing PMCEID1") and identified by
Coverity (CID 1398645).
Signed-off-by: Aaron Lindsay <aaron@os.amperecomputing.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190219144621.450-1-aaron@os.amperecomputing.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The "background region" for a v8M MPU is a default which will be used
(if enabled, and if the access is privileged) if the access does
not match any specific MPU region. We were incorrectly using it
always (by putting the condition at the wrong nesting level). This
meant that we would always return the default background permissions
rather than the correct permissions for a specific region, and also
that we would not return the right information in response to a
TT instruction.
Move the check for the background region to the same place in the
logic as the equivalent v8M MPUCheck() pseudocode puts it.
This in turn means we must adjust the condition we use to detect
matches in multiple regions to avoid false-positives.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190214113408.10214-1-peter.maydell@linaro.org
FLIX adds branch and loop instruction variants with 15- and 18-bit wide
target offset. Implement them as additional names for the ordinary
branch/loop opcodes.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
There are opcodes that differ only in encoding or possible range of
immediate arguments. Allow multiple names for a single opcode translation
table entry to reduce code duplication in that case.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The requirement for alphabetical opcode sorting in opcode tables is awkward
and does not allow sharing an implementation between multiple opcodes.
Use hash tables to find opcodes by name. Move the implementation from
translate.c to helper.c, its only user, and remove the declaration from
cpu.h.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Don't run xtensa_finalize_config at the time of core registration,
instead run it at the CPU class initialization.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
The xtensa-modules part of the test_mmuhifi_c3 core is missing the fixes
that return XTENSA_UNDEFINED for undefined opcodes and mark all data
structures static. Run the sed script from target/xtensa/import_core.sh on
it. This fixes test_sr tests for missing special registers.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-4.0-20190219' into staging
ppc patch queue 2019-02-19
Here's the next batch of ppc and spapr patches. Highlights are:
* A bunch of improvements to TCG handling of vector instructions from
Richard Henderson and Marc Cave-Ayland
* Cleanup to the XICS interrupt controller from Greg Kurz, removing
the special KVM subclasses which were a bad idea
* Some refinements to the XIVE interrupt controller from Cédric Le
Goater
* Fix from Fabiano Rosas for a really dumb buffer overflow in the
device tree code for memory hotplug
* Code for allowing access to SPRs from the gdb stub from Fabiano
Rosas
* Assorted minor fixes and cleanups
# gpg: Signature made Mon 18 Feb 2019 13:47:54 GMT
# gpg: using RSA key 75F46586AE61A66CC44E87DC6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>" [full]
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>" [full]
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>" [full]
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>" [unknown]
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-4.0-20190219: (43 commits)
target/ppc: convert vmin* and vmax* to vector operations
target/ppc: convert vadd*s and vsub*s to vector operations
target/ppc: Split out VSCR_SAT to a vector field
target/ppc: Add set_vscr_sat
target/ppc: Use mtvscr/mfvscr for vmstate
target/ppc: Add helper_mfvscr
target/ppc: Remove vscr_nj and vscr_sat
target/ppc: Use helper_mtvscr for reset and gdb
target/ppc: Pass integer to helper_mtvscr
target/ppc: convert xxsel to vector operations
target/ppc: convert xxspltw to vector operations
target/ppc: convert xxspltib to vector operations
target/ppc: convert VSX logical operations to vector operations
target/ppc: convert vsplt[bhw] to use vector operations
target/ppc: convert vspltis[bhw] to use vector operations
target/ppc: convert vaddu[b,h,w,d] and vsubu[b,h,w,d] over to use vector operations
target/ppc: convert VMX logical instructions to use vector operations
xics: Drop the KVM ICS class
spapr/irq: Use the "simple" ICS class for KVM
xics: Handle KVM interrupt presentation from "simple" ICS code
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2019-02-18' into staging
QAPI patches for 2019-02-18
# gpg: Signature made Mon 18 Feb 2019 13:44:30 GMT
# gpg: using RSA key 3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2019-02-18:
qapi: move RTC_CHANGE to the target schema
qmp: Deprecate query-events in favor of query-qmp-schema
Revert "qapi-events: add 'if' condition to implicit event enum"
qapi: remove qmp_unregister_command()
qapi: make query-cpu-definitions depend on specific targets
qapi: make query-cpu-model-expansion depend on s390 or x86
qapi: make query-gic-capabilities depend on TARGET_ARM
target.json: add a note about query-cpu* not being s390x-specific
qapi: make s390 commands depend on TARGET_S390X
qapi: make rtc-reset-reinjection and SEV depend on TARGET_I386
qapi: New module target.json
build: Deal with all of QAPI's .o in qapi/Makefile.objs
build-sys: move qmp-introspect per target
qapi: Generate QAPIEvent stuff into separate files
qapi: Prepare for system modules other than 'builtin'
qapi: Clean up modular built-in code generation a bit
qapi: Fix up documentation for recent commit a95291007b
qapi: Belatedly document modular code generation
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Move rtc-reset-reinjection and SEV in target.json and make them
conditional on TARGET_I386.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190214152251.2073-10-armbru@redhat.com>
Introduce the z14 GA2 cpu model for QEMU. There are no new features
introduced with this model, and it will inherit the same feature set
as z14 GA1.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190212011657.18324-3-walling@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Latest systems and host kernels support mepoch, which is a
feature that was meant to be supported for z14 GA1 from the
get-go. Let's copy it to the z14 GA1 default CPU model.
Machines s390-ccw-virtio-3.1 and older will retain the old CPU
models and will not provide this bit nor the extended PTFF
functions in the default model.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Message-Id: <20190212011657.18324-2-walling@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The extended PTFF features (qsie, qtoue, stoe, stoue) are dependent
on the multiple-epoch facility (mepoch). Let's print a warning if these
features are enabled without mepoch.
While we're at it, let's move the FEAT_GROUP_INIT for mepochptff down
the s390_feature_groups list so it can be properly indexed with its
generated S390FeatGroup enum.
Signed-off-by: Collin Walling <walling@linux.ibm.com>
Message-Id: <20190212011657.18324-1-walling@linux.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
As we now always have PCI support, let's add it to the "qemu" CPU model,
taking care of backwards compatibility.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190212112323.15904-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is a non-privileged instruction that was only implemented
for system mode. However, the stck instruction is used by glibc,
so this was causing SIGILL for programs run under debian stretch.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190212053044.29015-3-richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We will need these from CONFIG_USER_ONLY as well,
which cannot access include/hw/.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190212053044.29015-2-richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We tried to make pci support optional on s390x in the past;
unfortunately, we still require the s390 phb to be created
unconditionally due to backwards compatibility issues.
Instead of sinking more effort into this (including compat
handling for older machines etc.) for non-obvious gains, let's
just make CONFIG_PCI something that is always set on s390x.
Note that you can still fence off pci for the _guest_ if you
provide a cpu model without the zpci feature.
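For example, a PCI-less guest can still be requested with something like
the following command line (assuming the chosen CPU model allows the zpci
feature to be toggled):
    qemu-system-s390x -cpu z14,zpci=off ...
This only removes the feature bit from the guest-visible CPU model; the
s390 phb itself remains present in the machine.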
Message-Id: <20190211113255.3837-1-cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The license information in these files is rather confusing. The text
declares LGPL first, but then says that contributions after 2012 are
licensed under the GPL instead. How should the average user who just
downloaded the release tarball know which part is now GPL and which
is LGPL?
Looking at the text of the LGPL (see COPYING.LIB in the top directory),
the license clearly states how this should be done instead:
"3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License."
Thus let's clean up the confusing statements and use the proper GPL
text only.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1549456893-16589-1-git-send-email-thuth@redhat.com>
Acked-by: Laurent Vivier <laurent@vivier.eu>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-18-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-17-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Change the representation of VSCR_SAT such that it is easy
to set from vector code.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-16-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is required before changing the representation of the register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-15-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is required before changing the representation of the register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-14-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is required before changing the representation of the register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-13-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
These macros are no longer used.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-12-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Not setting flush_to_zero from gdb_set_avr_reg was a bug.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-11-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We can re-use this helper elsewhere if we're not passing
in an entire vector register.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-10-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-9-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190215100058.20015-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20190215100058.20015-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The ISA 2.06/2.07 Power Management instructions (doze, nap & rvwinkle)
don't exist on POWER9, so don't enable them.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190128094625.4428-13-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PPC BRANCH exception could bubble up, but this is a QEMU-internal exception
and QEMU then crashed. Instead it should trigger a TRACE exception, according to
the PPC 2.07 book. It could happen only when using branch stepping, which is not
commonly used.
Change gen_prep_dbgex to trigger TRACE. The excp argument is now removed,
since the type of exception can be inferred from the singlestep_enabled flags.
Also remove the guards around gen_exception, since they are unnecessary.
Fixes: 0e3bf48909 ("ppc: add DBCR based debugging").
Signed-off-by: Roman Kapl <rka@sysgo.com>
Message-Id: <20190212121255.2279-1-rka@sysgo.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Remove some debug code that we do not need to keep there.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20190128094625.4428-7-clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
According to the BookE docs, invalid bits (while undefined behaviour) should
not raise an exception but be ignored. This seems to be implementation
dependent though, and QEMU currently does what e500 CPUs do and raises an
exception for invalid bits. Unfortunately some versions of libstdc++
(and so all programs compiled with it) have lwsync on PPC440, which is
invalid, but on real hardware it is simply executed as msync, ignoring the
invalid bits (maybe that's why it went undetected), whereas such programs
fail on QEMU. This patch changes the invalid-bits mask of msync to allow these
programs to run but keeps generating an exception on e500 cores to follow
what the hardware does.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This allows reading and writing of SPRs via GDB:
(gdb) p/x $srr1
$1 = 0x8000000002803033
(gdb) p/x $pvr
$2 = 0x4b0201
(gdb) set $pvr=0x4b0000
(gdb) p/x $pvr
$3 = 0x4b0000
The `info` command can also be used:
(gdb) info registers spr
For this purpose, GDB needs to be provided with an XML description of
the registers (see the gdb-xml directory for examples) and a set of
callbacks for reading and writing the registers must be defined.
The XML file in this case is created dynamically, based on the SPRs
already defined in the machine. This way we avoid the need for several
XML files to suit each possible ppc machine.
The gdb_{get,set}_spr_reg callbacks take an index based on the order
the registers appear in the XML file. This index does not match the
actual location of the registers in the env->spr array so the
gdb_find_spr_idx function does that conversion.
Note: GDB currently needs to know the guest endianness in order to
properly print the register values. This is done automatically by GDB
when provided with the ELF file or explicitly with the `set endian
<big|little>` command.
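For illustration, the generated description is an ordinary GDB
target-description fragment; a made-up excerpt might look roughly like this
(the feature name and bit sizes are assumptions, not what QEMU actually
emits):
    <feature name="org.qemu.power.spr">
      <reg name="srr1" bitsize="64" group="spr"/>
      <reg name="pvr" bitsize="32" group="spr"/>
    </feature>
The group attribute is what lets `info registers spr` list these registers
together.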
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Fortunately, the functions affected are so far only called from SVE,
so there is no tail to be cleared. But as we convert more of AdvSIMD
to gvec, this will matter.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-13-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For same-sign saturation, we have tcg vector operations. We can
compute the QC bit by comparing the saturated value against the
unsaturated value.
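As a rough scalar illustration of the idea (the real patch does this with TCG
vector operations on whole vectors; the function below is made up):
    #include <stdbool.h>
    #include <stdint.h>

    /* Signed saturating 8-bit add: the sticky QC flag is set whenever the
       saturated result differs from the unsaturated one. */
    static int8_t sqadd8(int8_t a, int8_t b, bool *qc)
    {
        int16_t full = (int16_t)a + (int16_t)b;   /* unsaturated result */
        int8_t sat = (int8_t)full;
        if (full > INT8_MAX) {
            sat = INT8_MAX;
        } else if (full < INT8_MIN) {
            sat = INT8_MIN;
        }
        *qc |= (sat != full);                     /* saturation occurred */
        return sat;
    }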
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-12-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Change the representation of this field such that it is easy
to set from vector code.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-11-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Given that we mask bits properly on set, there is no reason
to mask them again on get. We failed to clear the exception
status bits, 0x9f, which means that the wrong value would be
returned on get, except in the (probably normal) case in which
the set clears all of the bits.
Simplify the code in set to also clear the RES0 bits.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-10-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Minimize the code within a macro by splitting out a helper function.
Use deposit32 instead of manual bit manipulation.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-9-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The components of this register are stored in several
different locations.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
These are now unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 32-bit PMIN/PMAX have been decomposed to scalars,
and so can be trivially expanded inline.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since we're now handling a == b generically, we no longer need
to do it by hand within target/arm/.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190209033847.9014-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At the moment the Arm implementations of kvm_arch_{get,put}_registers()
don't support having QEMU change the values of system registers
(aka coprocessor registers for AArch32). This is because although
kvm_arch_get_registers() calls write_list_to_cpustate() to
update the CPU state struct fields (so QEMU code can read the
values in the usual way), kvm_arch_put_registers() does not
call write_cpustate_to_list(), meaning that any changes to
the CPU state struct fields will not be passed back to KVM.
The rationale for this design is documented in a comment in the
AArch32 kvm_arch_put_registers() -- writing the values in the
cpregs list into the CPU state struct is "lossy" because the
write of a register might not succeed, and so if we blindly
copy the CPU state values back again we will incorrectly
change register values for the guest. The assumption was that
no QEMU code would need to write to the registers.
However, when we implemented debug support for KVM guests, we
broke that assumption: the code to handle "set the guest up
to take a breakpoint exception" does so by updating various
guest registers including ESR_EL1.
Support this by making kvm_arch_put_registers() synchronize
CPU state back into the list. We sync only those registers
where the initial write succeeds, which should be sufficient.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Dongjiu Geng <gengdongjiu@huawei.com>
There are a whole bunch more registers in the CPUID space which are
currently not used but are exposed as RAZ. To avoid too much
duplication we expand ARMCPRegUserSpaceInfo to understand glob
patterns so we only need one entry to tweak whole ranges of registers.
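A minimal sketch of the kind of matching this enables (the matcher and the
example pattern below are assumptions for illustration, not the actual QEMU
implementation):
    #include <fnmatch.h>
    #include <stdbool.h>

    /* One glob entry such as "ID_AA64AFR*" can now stand in for a whole
       range of RAZ registers instead of one table entry per register. */
    static bool regname_matches(const char *pattern, const char *regname)
    {
        return fnmatch(pattern, regname, 0) == 0;
    }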
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-5-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As this is a single register we could expose it with a simple ifdef
but we use the existing modify_arm_cp_regs mechanism for consistency.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-4-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
A number of CPUID registers are exposed to userspace by modern Linux
kernels thanks to the "ARM64 CPU Feature Registers" ABI. For QEMU's
user-mode emulation we don't need to emulate the kernel's trap but just
return the value the trap would have returned. To avoid too much #ifdef
hackery we process ARMCPRegInfo with a new helper (modify_arm_cp_regs)
before defining the registers. The modify routine is driven by a
simple data structure which describes which bits are exported and
which are fixed.
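A sketch of the shape of that data structure (the field names here are
illustrative assumptions; the actual type is ARMCPRegUserSpaceInfo):
    #include <stdint.h>

    /* Per ID register: which bits are passed through from the CPU value
       and which read back as constants in user-mode emulation. */
    typedef struct {
        const char *name;        /* register name to match */
        uint64_t exported_bits;  /* bits exported as-is */
        uint64_t fixed_bits;     /* bits forced to a fixed value */
    } UserSpaceRegInfoSketch;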
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-3-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Although technically not visible to userspace, the kernel does make
them visible via a trap-and-emulate ABI. We provide a new permission
mask (PL0U_R) which maps to PL0_R for CONFIG_USER builds and adjust
the minimum permission check accordingly.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190205190224.2198-2-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The lo,hi order is different from the comments, and commit
1ec182c333 ("target/arm: Convert to HAVE_CMPXCHG128") changed
the original code logic. Restore the logic from before that
commit:
do_paired_cmpxchg64_be():
cmpv = int128_make128(env->exclusive_high, env->exclusive_val);
newv = int128_make128(new_hi, new_lo);
This fixes a bug that would only be visible for big-endian
AArch64 guest code.
Fixes: 1ec182c333 ("target/arm: Convert to HAVE_CMPXCHG128")
Signed-off-by: Catherine Ho <catherine.hecx@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1548985244-24523-1-git-send-email-catherine.hecx@gmail.com
[PMM: added note that bug only affects BE guests]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
HACR_EL2 is a register with IMPDEF behaviour, which allows
implementation specific trapping to EL2. Implement it as RAZ/WI,
since QEMU's implementation has no extra traps. This also
matches what h/w implementations like Cortex-A53 and A57 do.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190205181218.8995-1-peter.maydell@linaro.org
Introduce MTTCG-enabled QEMU builds for mips32, mipsn32, and mips64.
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Hold BQL whenever mips_vpe_wake() is invoked.
Without this patch, MIPS MT with MTTCG enabled triggers an abort in
tcg_handle_interrupt() due to an unlocked access to cpu_interrupt().
This patch makes sure that the BQL is held in this case.
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Make sure BQL is held for all interrupt requests.
For MTTCG-enabled configurations, handling soft and hard interrupts
between vCPUs must be properly locked. By acquiring BQL, make sure
all paths triggering an IRQ are synchronized.
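The pattern used is roughly the following sketch (illustrative; the real call
sites differ, but qemu_mutex_iothread_locked()/qemu_mutex_lock_iothread() are
the standard QEMU BQL helpers):
    /* Take the BQL around the interrupt injection unless the caller already
       holds it, so tcg_handle_interrupt() runs with the BQL held. */
    bool locked = qemu_mutex_iothread_locked();
    if (!locked) {
        qemu_mutex_lock_iothread();
    }
    cpu_interrupt(cs, CPU_INTERRUPT_HARD);
    if (!locked) {
        qemu_mutex_unlock_iothread();
    }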
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Completely rewrite conditional stores handling. Use cmpxchg.
This eliminates the need for separate implementations of SC instruction
emulation for user and system emulation.
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Compare only virtual addresses in LL/SC sequence emulation.
Until this patch, physical addresses had been compared in the SC part of
the LL/SC sequence, even though such comparisons could be avoided. Getting
rid of them allows throwing away SC helpers and having common SC
implementations in user and system mode, avoiding the need for two
separate implementations selected by #ifdef CONFIG_USER_ONLY.
Correct guest software should not rely on LL/SC if it accesses the
same physical address via different virtual addresses or if the page
mapping gets changed between LL and SC due to manipulating TLB entries.
The MIPS Instruction Set Manual clearly says that an RMW sequence must
use the same address in the LL and SC (virtual address, physical
address, cacheability and coherency attributes must be identical).
Otherwise, the result of the SC is not predictable. This patch takes
advantage of this fact and removes the virtual->physical address
translation from the SC helper.
lladdr served as the Coprocessor 0 LLAddr register, which captures the
physical address of the most recent LL instruction, and lladdr was also
used for comparison with the following SC physical address. This patch
changes the meaning of lladdr - now it only keeps the virtual address of
the most recent LL. Additionally, a CP0_LLAddr field is introduced which
is the actual Coprocessor 0 LLAddr register that the guest can access.
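As a standalone C sketch of the resulting SC emulation (illustrative only,
not the actual QEMU helpers; the compare-and-swap is what lets user and
system mode share one implementation):
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* SC succeeds only if it targets the virtual address recorded by LL and
       the word still holds the value LL observed; the cmpxchg performs the
       value check and the store atomically. */
    static bool sc_emulate(_Atomic uint32_t *word, uint64_t addr,
                           uint64_t lladdr, uint32_t llval, uint32_t newval)
    {
        uint32_t expected = llval;
        if (addr != lladdr) {
            return false;        /* different virtual address: SC fails */
        }
        return atomic_compare_exchange_strong(word, &expected, newval);
    }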
Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Merge remote-tracking branch 'remotes/palmer/tags/riscv-for-master-4.0-sf1' into staging
RISC-V Patches for the 4.0 Soft Freeze, Part 1
This patch set contains a handful of patches I've collected over the
last few weeks. There's nothing really fundamental, but I thought it
would be good to send these out now as there are some other patch sets
on the mailing list that are getting ready to go.
As far as the actual patches, there's:
* A set that cleans up our FS dirty-mode handling.
* Support for writing MISA.
* The removal of Michael as a maintainer.
* A fix to {m,s}counteren handling.
* A fix to make sure the kernel's start address is computed correctly on
32-bit targets.
This makes my "RISC-V Patches for 3.2, Part 3" pull request defunct, as
it contains the same patches but based on a newer master. As usual,
I've tested this using a Fedora boot on the latest Linux. This patch
set does not include Bastian's decodetree patches because there were
some merge conflicts and while I've cleaned them up I want to get a
round of review first.
# gpg: Signature made Wed 13 Feb 2019 15:37:50 GMT
# gpg: using RSA key 00CE76D1834960DFCE886DF8EF4CA1502CCBAB41
# gpg: issuer "palmer@dabbelt.com"
# gpg: Good signature from "Palmer Dabbelt <palmer@dabbelt.com>" [unknown]
# gpg: aka "Palmer Dabbelt <palmer@sifive.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 00CE 76D1 8349 60DF CE88 6DF8 EF4C A150 2CCB AB41
* remotes/palmer/tags/riscv-for-master-4.0-sf1:
riscv: Ensure the kernel start address is correctly cast
target/riscv: fix counter-enable checks in ctr()
MAINTAINERS: Remove Michael Clark as a RISC-V Maintainer
RISC-V: Add misa runtime write support
RISC-V: Add misa.MAFD checks to translate
RISC-V: Add misa to DisasContext
RISC-V: Add priv_ver to DisasContext
RISC-V: Use riscv prefix consistently on cpu helpers
RISC-V: Implement mstatus.TSR/TW/TVM
RISC-V: Mark mstatus.fs dirty
RISC-V: Split out mstatus_fs from tb_flags
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It looks like the operands were exchanged. HP bootrom tests the
following sequence:
0x00000000f0004064: ldil L%-66666800,r7
0x00000000f0004068: addi 19f,r7,r7
0x00000000f000406c: addi -1,r0,rp
0x00000000f0004070: addi f,r0,r4
0x00000000f0004074: addi 1,r4,r5
0x00000000f0004078: dcor rp,r6
0x00000000f000407c: cmpb,<>,n r6,r7,0xf000411
This returned 0x66666661 instead of the expected 0x9999999f in QEMU.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211181907.2219-6-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
These conditions include the signed overflow bit. See page 5-3
of the PA-RISC 1.1 Architecture Reference Manual for details.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
[rth: More changes for c == 3, to compute (N^V)|Z properly.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We will be fixing do_cond vs signed overflow, which requires
that do_log_cond not rely on do_cond.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
When QEMU is compiled with -O0, these functions are inlined,
which will cause a wrong restart address to be generated for the TB.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190211181907.2219-2-svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Now that the implementation is entirely within the generated
decode function, eliminate the wrapper.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With decodetree.py, the specializations would conflict so we
must have a single entry point for all variants of OR.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Convert the BREAK instruction to start.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Instead of returning DisasJumpType, immediately store it.
Return true in preparation for conversion to the decodetree script.
Tested-by: Helge Deller <deller@gmx.de>
Tested-by: Sven Schnelle <svens@stackframe.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Access to a counter in U-mode is permitted only if the corresponding
bit is set in both mcounteren and scounteren. The current code
ignores mcounteren and checks scounteren only for U-mode access.
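A minimal sketch of the intended check (the helper name is made up; the real
code lives in the ctr() predicate):
    #include <stdbool.h>
    #include <stdint.h>

    /* A counter is readable from U-mode only when the corresponding bit is
       set in both mcounteren and scounteren. */
    static bool ctr_ok_for_u_mode(uint32_t mcounteren, uint32_t scounteren,
                                  int ctr)
    {
        uint32_t bit = 1u << ctr;
        return (mcounteren & bit) && (scounteren & bit);
    }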
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This patch adds support for writing misa. misa is validated based
on rules in the ISA specification. 'E' is mutually exclusive with
all other extensions. 'D' depends on 'F', so the 'D' bit is dropped
if 'F' is not present. A conservative approach to consistency is
taken by flushing the translation cache on misa writes. misa_mask
is added to the CPU struct to store the original set of extensions.
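One way to express those validation rules (an illustrative sketch; the helper
and macro names are not the actual QEMU ones):
    #include <stdint.h>

    #define MISA_EXT(c)  (1u << ((c) - 'A'))   /* extension letter -> misa bit */

    static uint32_t validate_misa_write(uint32_t val, uint32_t misa_mask)
    {
        val &= misa_mask;            /* only extensions the CPU was built with */
        if (val & MISA_EXT('E')) {
            val = MISA_EXT('E');     /* 'E' excludes all other extensions */
        }
        if (!(val & MISA_EXT('F'))) {
            val &= ~MISA_EXT('D');   /* 'D' requires 'F' */
        }
        return val;                  /* caller also flushes the translation cache */
    }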
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Add misa checks for the M, A, F and D extensions and generate illegal
instruction exceptions if they are not present. This improves
emulation accuracy for harts with a limited set of extensions.
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The gen methods should access state from DisasContext. Add a misa
field to the DisasContext struct and remove the CPURISCVState
argument from all gen methods.
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
The gen methods should access state from DisasContext. Add priv_ver
field to the DisasContext struct.
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
* Add riscv prefix to raise_exception function
* Add riscv prefix to CSR read/write functions
* Add riscv prefix to signal handler function
* Add riscv prefix to get fflags function
* Remove redundant declaration of riscv_cpu_init
and rename cpu_riscv_init to riscv_cpu_init
* Rename riscv_set_mode to riscv_cpu_set_mode
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
This adds the necessary minimum to support S-mode
virtualization for priv ISA >= v1.10
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Co-authored-by: Matthew Suozzo <msuozzo@google.com>
Co-authored-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Modified from Richard Henderson's patch [1] to integrate
with the new control and status register implementation.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg07034.html
Note: the f* CSRs already mark mstatus.FS dirty using
env->mstatus |= mstatus.FS so the bug in the first
spin of this patch has been fixed in a prior commit.
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Co-authored-by: Richard Henderson <richard.henderson@linaro.org>
Co-authored-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Signed-off-by: Michael Clark <mjc@sifive.com>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
Merge gen_callwi and gen_callw into their only users, translate_callw
and translate_callxw. Extract jump slot adjustment logic into a separate
function and use it in gen_jumpi and translate_callw.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Use libisa to extract whether an opcode uses windowed registers and
construct the mask based on that. This only leaves a special case for the
'entry' opcode, as it needs to probe a register dynamically.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
xtensa-modules.c produced by recent Tensilica tools has
Opcode_*_encode_fns arrays defined as static. Don't add an extra 'static'
in front of them when importing.
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
It's either "GNU *Library* General Public License version 2" or "GNU
Lesser General Public License version *2.1*", but there was no "version
2.0" of the "Lesser" license. So assume that version 2.1 is meant here.
Also the files mentioned the GPL instead of the LGPL after declaring
that the files are licensed under the LGPL, so change these spots to
use LGPL, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1549266858-5043-1-git-send-email-thuth@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
PA-RISC specification says: "Setting the PSW Q-bit, PSW{28}, to 1
with this instruction, if it was not already 1, is an undefined
operation." However, at least HP-UX 10.20 sets the Q bit from 0 to 1
with the SSM instruction. Tested this both on HP9000/712 and
HP9000/785/C3750, both machines set the Q bit from 0 to 1 without
exception. This makes HP-UX 10.20 progress a little bit further.
Signed-off-by: Sven Schnelle <svens@stackframe.org>
Message-Id: <20190129191402.29539-1-svens@stackframe.org>
[rth: Add a comment to the code as well.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>