Commit Graph

2494 Commits

Author SHA1 Message Date
Eduardo Habkost
7210a02c58 i386: Disable TOPOEXT by default on "-cpu host"
Enabling TOPOEXT is always allowed, but it can't be enabled
blindly by "-cpu host" because it may make guests crash if the
rest of the cache topology information isn't provided or isn't
consistent.
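
For illustration only (these are standard QEMU options, not part of this commit): on an AMD host the feature can still be enabled explicitly together with a consistent topology, e.g.

    qemu-system-x86_64 -accel kvm \
        -cpu host,+topoext \
        -smp 8,sockets=1,cores=4,threads=2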

This addresses the bug reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1613277

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180809221852.15285-1-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Wanpeng Li
7f710c32bb target-i386: adds PV_SEND_IPI CPUID feature bit
Adds PV_SEND_IPI CPUID feature bit.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1530526971-1812-1-git-send-email-wanpengli@tencent.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Robert Hoo
8a11c62da9 i386: Add new CPU model Icelake-{Server,Client}
New CPU models mostly inherit features from their ancestor Skylake, while adding new
features: UMIP, new instructions (PCONFIG (server only), WBNOINVD,
AVX512_VBMI2, GFNI, AVX512_VNNI, VPCLMULQDQ, VAES, AVX512_BITALG),
Intel PT and 5-level paging (server only), as well as
IA32_PRED_CMD and SSBD support for speculative execution
side channel mitigations.

Note:
For 5-level paging, the guest physical address width can be configured with the
"phys-bits" parameter. Unless explicitly specified, we still use its default
value, even for the Icelake-Server CPU model.
For now, hold off on exposing IA32_ARCH_CAPABILITIES to the guest, because 1) this
MSR actually presents more than one 'feature', and maintainers are considering expanding
the current feature presentation from CPUID bits only to MSR bits; 2) a reasonable
default value for MSR_IA32_ARCH_CAPABILITIES needs to be settled first. These two points
go beyond the Icelake CPU model itself but are fundamental, so split that work apart
and do it later.
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00774.html
https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg00796.html
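
As a hedged illustration of the "phys-bits" note above (the value is an example only, not taken from this commit):

    qemu-system-x86_64 -accel kvm -cpu Icelake-Server,phys-bits=48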

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-6-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Robert Hoo
59a80a19ca i386: Add CPUID bit for WBNOINVD
WBNOINVD: Write back and do not invalidate cache, enumerated by
CPUID.(EAX=80000008H, ECX=0):EBX[bit 9].

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-5-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Robert Hoo
5131dc433d i386: Add CPUID bit for PCONFIG
PCONFIG: Platform configuration, enumerated by CPUID.(EAX=07H, ECX=0):
EDX[bit18].

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-4-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Robert Hoo
3fc7c73139 i386: Add CPUID bit and feature words for IA32_ARCH_CAPABILITIES MSR
Support for the IA32_PRED_CMD MSR is already enumerated by the same CPUID bit as
SPEC_CTRL.

For now, mark CPUID_7_0_EDX_ARCH_CAPABILITIES as unmigratable, per Paolo's
comment.

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-3-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Robert Hoo
8c80c99fcc i386: Add new MSR indices for IA32_PRED_CMD and IA32_ARCH_CAPABILITIES
The IA32_PRED_CMD MSR gives software a way to issue commands that affect the state
of indirect branch predictors; it is enumerated by CPUID.(EAX=07H,ECX=0):EDX[26].
The IA32_ARCH_CAPABILITIES MSR enumerates architectural features such as RDCL_NO and
IBRS_ALL; it is enumerated by CPUID.(EAX=07H, ECX=0):EDX[29].

https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf
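
For reference, a sketch of the kind of definitions such a patch adds; the MSR_* macro names follow QEMU's usual convention and are assumptions here, but the index values are the architectural ones:

    /* Write-only command MSR (e.g. IBPB), enumerated with SPEC_CTRL. */
    #define MSR_IA32_PRED_CMD          0x49
    /* Read-only enumeration of RDCL_NO, IBRS_ALL, etc. */
    #define MSR_IA32_ARCH_CAPABILITIES 0x10a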

Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
Message-Id: <1530781798-183214-2-git-send-email-robert.hu@linux.intel.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-08-16 13:43:01 -03:00
Richard Henderson
b8a4a96db3 target/arm: Fix aa64 FCADD and FCMLA decode
These insns require u=1; failed to include that in the switch
cases.  This probably happened during one of the rebases just
before final commit.

Fixes: d17b7cdcf4
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:29:58 +01:00
Richard Henderson
e4ab5124a5 target/arm: Use FZ not FZ16 for SVE FCVT single-half and double-half
We were using the wrong flush-to-zero bit for the non-half input.

Fixes: 46d33d1e3c
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:29:58 +01:00
Richard Henderson
52a339b11d target/arm: Use fp_status_fp16 for do_fmpa_zpzzz_h
This makes float16_muladd correctly use FZ16 not FZ.

Fixes: 6ceabaad11
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:29:58 +01:00
Richard Henderson
19062c169e target/arm: Ignore float_flag_input_denormal from fp_status_f16
When FZ is set, input_denormal exceptions are recognized, but this does
not happen with FZ16.  The softfloat code has no way to distinguish
these bits and will raise such exceptions into fp_status_f16.flags,
so ignore them when computing the accumulated flags.
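
A minimal sketch of the idea, assuming QEMU's softfloat flag accessors rather than quoting the patch itself:

    /* Accumulate FP16 exception flags, but drop input_denormal:
     * it must not be reported to the guest in this configuration. */
    int h = get_float_exception_flags(&env->vfp.fp_status_f16);
    h &= ~float_flag_input_denormal;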

Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:29:58 +01:00
Richard Henderson
0b62159be3 target/arm: Adjust FPCR_MASK for FZ16
When support for FZ16 was added, we failed to include the bit
within FPCR_MASK, which means that it could never be set.
Continue to zero FZ16 when ARMv8.2-FP16 is not enabled.

Fixes: d81ce0ef2c
Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180810193129.1556-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:29:58 +01:00
Stefan Hajnoczi
191776b96a target/arm: add "cortex-m0" CPU model
Define a "cortex-m0" ARMv6-M CPU model.

Most of the register reset values set by other CPU models are not
relevant for the cut-down ARMv6-M architecture.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180814162739.11814-3-stefanha@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:28 +01:00
Richard Henderson
adf92eab90 target/arm: Add sve-max-vq cpu property to -cpu max
This allows the default (and maximum) vector length to be set
from the command line, which is extraordinarily helpful when
debugging problems that depend on vector length, without having to
bake knowledge of PR_SET_SVE_VL into every guest binary.
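
For example (an illustrative linux-user invocation; the test binary name is hypothetical), a 512-bit vector length corresponds to sve-max-vq=4:

    qemu-aarch64 -cpu max,sve-max-vq=4 ./my-sve-test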

Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:28 +01:00
Richard Henderson
2bf5f3f91b target/arm: Dump SVE state if enabled
Also fold the FPCR/FPSR state onto the same line as PSTATE,
and mention but do not dump disabled FPU state.

Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:28 +01:00
Richard Henderson
3cb506a399 target/arm: Reformat integer register dump
With PC, there are 33 registers.  Three per line lines up nicely
without overflowing 80 columns.

Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:28 +01:00
Richard Henderson
50ef1cbf31 target/arm: Fix offset scaling for LD_zprr and ST_zprr
The scaling should be solely on the memory operation size; the number
of registers being loaded does not come into the initial computation.

Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:27 +01:00
Richard Henderson
d0e372b029 target/arm: Fix offset for LD1R instructions
The immediate should be scaled by the size of the memory reference,
not the size of the elements into which it is loaded.

Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:27 +01:00
Richard Henderson
19f2acc915 target/arm: Fix sign-extension in sve do_ldr/do_str
The expression (int) imm + (uint32_t) len_align turns into uint32_t
and thus with negative imm produces a memory operation at the wrong
offset.  None of the numbers involved are particularly large, so
change everything to use int.
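
The promotion can be seen in a small standalone C example (not QEMU code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int imm = -8;
        uint32_t len_align = 4;
        /* Usual arithmetic conversions turn imm into uint32_t, so the
         * "negative" offset becomes a huge unsigned value. */
        printf("%llu\n", (unsigned long long)(imm + len_align)); /* 4294967292 */
        printf("%d\n", imm + (int)len_align);                    /* -4 */
        return 0;
    }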

Cc: qemu-stable@nongnu.org (3.0.1)
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:27 +01:00
Richard Henderson
573ec0fe40 target/arm: Fix typo in helper_sve_ld1hss_r
Cc: qemu-stable@nongnu.org (3.0.1)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-16 14:05:27 +01:00
Richard Henderson
054e7adf4e target/arm: Fix typo in helper_sve_movz_d
Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-14 17:17:22 +01:00
Richard Henderson
bbd0968c45 target/arm: Reorganize SVE WHILE
The pseudocode for this operation is an increment + compare loop,
so comparing <= the maximum integer produces an all-true predicate.

Rather than bound in both the inline code and the helper, pass the
helper the number of predicate bits to set instead of the number
of predicate elements to set.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-14 17:17:22 +01:00
Richard Henderson
7a31e0c6c6 target/arm: Fix typo in do_sat_addsub_64
Used the wrong temporary in the computation of subtractive overflow.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-14 17:17:22 +01:00
Richard Henderson
df4e001093 target/arm: Fix sign of sve_cmpeq_ppzw/sve_cmpne_ppzw
The normal vector element is sign-extended before
comparing with the wide vector element.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180801123111.3595-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-14 17:17:22 +01:00
Peter Maydell
5f62d3b9e6 target/arm: Implement tailchaining for M profile cores
Tailchaining is an optimization in handling of exception return
for M-profile cores: if we are about to pop the exception stack
for an exception return, but there is a pending exception which
is higher priority than the priority we are returning to, then
instead of unstacking and then immediately taking the exception
and stacking registers again, we can chain to the pending
exception without unstacking and stacking.

For v6M and v7M it is IMPDEF whether tailchaining happens for pending
exceptions; for v8M this is architecturally required.  Implement it
in QEMU for all M-profile cores, since in practice v6M and v7M
hardware implementations generally do have it.

(We were already doing tailchaining for derived exceptions which
happened during exception return, like the validity checks and
stack access failures; these have always been required to be
tailchained for all versions of the architecture.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-5-peter.maydell@linaro.org
2018-08-14 17:17:22 +01:00
Peter Maydell
89b1fec193 target/arm: Restore M-profile CONTROL.SPSEL before any tailchaining
On exception return for M-profile, we must restore the CONTROL.SPSEL
bit from the EXCRET value before we do any kind of tailchaining,
including for the derived exceptions on integrity check failures.
Otherwise we will give the guest an incorrect EXCRET.SPSEL value on
exception entry for the tailchained exception.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180720145647.8810-4-peter.maydell@linaro.org
2018-08-14 17:17:22 +01:00
Peter Maydell
b8109608bc target/arm: Initialize exc_secure correctly in do_v7m_exception_exit()
In do_v7m_exception_exit(), we use the exc_secure variable to track
whether the exception we're returning from is secure or non-secure.
Unfortunately the statement initializing this was accidentally
inside an "if (env->v7m.exception != ARMV7M_EXCP_NMI)" conditional,
which meant that we were using the wrong value for NMI handlers.
Move the initialization out to the right place.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-3-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
a9074977ef target/arm: Improve exception-taken logging
Improve the exception-taken logging by logging in
v7m_exception_taken() the exception we're going to take
and whether it is secure/nonsecure.

This requires us to move logging at many callsites from after the
call to before it, so that the logging appears in a sensible order.

(This will make tail-chaining produce more useful logs; for the
current callers of v7m_exception_taken() we know which exception
we're going to take, so custom log messages at the callsite sufficed;
for tail-chaining only v7m_exception_taken() knows the exception
number that we're going to tail-chain to.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180720145647.8810-2-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
3d0e3080d8 target/arm: Treat SCTLR_EL1.M as if it were zero when HCR_EL2.TGE is set
One of the required effects of setting HCR_EL2.TGE is that when
SCR_EL3.NS is 1 then SCTLR_EL1.M must behave as if it is zero for
all purposes except direct reads. That is, it effectively disables
the MMU for the NS EL0/EL1 translation regime.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-6-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
ac656b166b target/arm: Provide accessor functions for HCR_EL2.{IMO, FMO, AMO}
The IMO, FMO and AMO bits in HCR_EL2 are defined to "behave as
1 for all purposes other than direct reads" if HCR_EL2.TGE
is set and HCR_EL2.E2H is 0, and to "behave as 0 for all
purposes other than direct reads" if HCR_EL2.TGE is set
and HCR_EL2.E2H is 1.

To avoid having to check E2H and TGE everywhere where we test IMO and
FMO, provide accessors arm_hcr_el2_imo(), arm_hcr_el2_fmo() and
arm_hcr_el2_amo().  We don't implement ARMv8.1-VHE yet, so the E2H
case will never be true, but we include the logic to save effort when
we eventually do get to that.

(Note that in several of these callsites the change doesn't
actually make a difference as either the callsite is handling
TGE specially anyway, or the CPU can't get into that situation
with TGE set; we change everywhere for consistency.)
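
A minimal sketch of how one such accessor could encode that rule (field and macro names follow target/arm conventions and are assumptions here, not the committed code):

    static inline bool arm_hcr_el2_imo(CPUARMState *env)
    {
        switch (env->cp15.hcr_el2 & (HCR_TGE | HCR_E2H)) {
        case HCR_TGE:                /* TGE=1, E2H=0: behaves as 1 */
            return true;
        case HCR_TGE | HCR_E2H:      /* TGE=1, E2H=1: behaves as 0 */
            return false;
        default:
            return env->cp15.hcr_el2 & HCR_IMO;
        }
    }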

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-5-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
7556edfb4d target/arm: Honour HCR_EL2.TGE when raising synchronous exceptions
When we raise a synchronous exception, if HCR_EL2.TGE is set then
exceptions targeting NS EL1 must be redirected to EL2.  Implement
this in raise_exception() -- all synchronous exceptions go through
this function.

(Asynchronous exceptions go via arm_cpu_exec_interrupt(), which
already honours HCR_EL2.TGE when it determines the target EL
in arm_phys_excp_target_el().)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-4-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
30ac6339dc target/arm: Honour HCR_EL2.TGE and MDCR_EL2.TDE in debug register access checks
Some debug registers can be trapped via MDCR_EL2 bits TDRA, TDOSA,
and TDA, which we implement in the functions access_tdra(),
access_tdosa() and access_tda(). If MDCR_EL2.TDE or HCR_EL2.TGE
are 1, the TDRA, TDOSA and TDA bits should behave as if they were 1.
Implement this by having the access functions check MDCR_EL2.TDE
and HCR_EL2.TGE.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-3-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
2ccf0fef63 target/arm: Mask virtual interrupts if HCR_EL2.TGE is set
If the "trap general exceptions" bit HCR_EL2.TGE is set, we
must mask all virtual interrupts (as per DDI0487C.a D1.14.3).
Implement this in arm_excp_unmasked().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180724115950.17316-2-peter.maydell@linaro.org
2018-08-14 17:17:21 +01:00
Peter Maydell
d4b6275df3 target/arm: Allow execution from small regions
Now that we have full support for small regions, including execution,
we can remove the workarounds where we marked all small regions as
non-executable for the M-profile MPU and SAU.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180710160013.26559-7-peter.maydell@linaro.org
2018-08-14 17:17:19 +01:00
Julia Suvorova
22ab346001 arm: Add ARMv6-M programmer's model support
Forbid stack alignment change. (CCR)
Reserve FAULTMASK, BASEPRI registers.
Report any fault as a HardFault. Disable MemManage, BusFault and
UsageFault, so they are always escalated to HardFault. (SHCSR)

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180718095628.26442-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-14 17:17:19 +01:00
Julia Suvorova
def183446c target/arm: Forbid unprivileged mode for M Baseline
MSR handling is the only place where CONTROL.nPRIV is modified.

Signed-off-by: Julia Suvorova <jusual@mail.ru>
Message-id: 20180705222622.17139-1-jusual@mail.ru
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-14 17:17:18 +01:00
Thomas Huth
09d98b6980 target/xtensa/cpu: Set owner of memory region in xtensa_cpu_initfn
The instance_init function of the xtensa CPUs creates a memory region,
but does not set an owner, so the memory region is not destroyed
correctly when the CPU object is removed. This can happen when
introspecting the CPU devices, so introspecting the CPU device will
leave a dangling memory region object in the QOM tree. Make sure to
set the right owner here to fix this issue.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 1532005320-17794-1-git-send-email-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-08-06 19:07:21 +01:00
Peter Maydell
7b69454a12 target/arm: Add dummy needed functions to M profile vmstate subsections
Currently the migration code incorrectly treats a subsection with
no .needed function pointer as if it was the subsection list
terminator -- it is ignored and so is everything after it.
Work around this by giving various M profile vmstate structs
a 'needed' function that always returns true.
We reuse m_needed() for this, since it's always true here.
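
A hedged sketch of the shape of the workaround (the subsection name and hook name below are illustrative, not the committed code):

    /* Always-true .needed hook: its only purpose is to keep the
     * subsection from being mistaken for the list terminator. */
    static bool subsection_needed(void *opaque)
    {
        return true;
    }

    static const VMStateDescription vmstate_m_example = {
        .name = "cpu/m/example",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = subsection_needed,
        .fields = (VMStateField[]) {
            VMSTATE_END_OF_LIST()
        }
    };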

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180806123445.1459-4-peter.maydell@linaro.org
2018-08-06 16:19:33 +01:00
Peter Maydell
45a505d0a4 Bug fixes.
-----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAlte/ecUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroP5Egf+I5rWCZ5p3YGjP1HnSoWEoVNbeZvZ
 aXWtvYYBnL8B/nVJ11CynyZgwqHSDiCODHH29Q3Qiopu/yeEYEBc5NtsheND7cMi
 2cXXT2uxHCenMx2oRBJ7H0580n7V1xC4HRbLsNhuk5G8VL0je/pdBylNWNlyEjlj
 8K+J18IKYqSsrIpcHtnN5Y/hyvMoGTqi3dwUeTL6u8rFzulVVUo2q9G0+k9WybJD
 lK7f1gXJImTFxqAfyMIuJSRKI9PjZMwWT9pCJ4ie52l7EwL+hj0068EMziw7/AMd
 asUGd56PG9K/37h2WC/qcMdZnRHO/8EBvEMEHHmXKMEDFF2XSQXIoO4Qvw==
 =35+O
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

Bug fixes.

# gpg: Signature made Mon 30 Jul 2018 13:00:39 BST
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream:
  backends/cryptodev: remove dead code
  timer: remove replay clock probe in deadline calculation
  i386: implement MSR_SMI_COUNT for TCG
  i386: do not migrate MSR_SMI_COUNT on machine types <2.12

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-31 11:14:53 +01:00
Philippe Mathieu-Daudé
0261fb805c target/arm: Remove duplicate 'host' entry in '-cpu ?' output
Since 86f0a186d6 the TYPE_ARM_HOST_CPU is only compiled when CONFIG_KVM
is enabled.

Remove the now redundant special-case introduced in a96c0514ab, to avoid:

  $ qemu-system-aarch64 -machine virt -cpu \? | fgrep host
  host
  host (only available in KVM mode)

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180727132311.2777-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-30 15:07:08 +01:00
Paolo Bonzini
1d3db6bdbb i386: implement MSR_SMI_COUNT for TCG
This is trivial, so just do it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-30 14:00:11 +02:00
Paolo Bonzini
990e0be260 i386: do not migrate MSR_SMI_COUNT on machine types <2.12
MSR_SMI_COUNT started being migrated in QEMU 2.12.  Do not migrate it
on older machine types, or the subsection causes a load failure for
guests that use SMM.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-30 14:00:11 +02:00
Peter Maydell
768cef2974 Fix for -rc2
* Fix build failure on mips host
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABCAAGBQJbVjAvAAoJECgHk2+YTcWmJAcP/02GNO/kx4IKR1IAzEW6YfF4
 6jbdlFv6bDZzIkYei/4xjOmDPKaGhJrstXb1x38stFWFME5l8IQKUGpurMcyk2+k
 BYSMgkafJNicrk63K7NNwsplJ+/jUdmpKSiEnyd6V70Tmc4uVzqyUkV11woyECKj
 G90K3oXCh1BSifTTl+j/0VeZ1x3CU5jpi7eEV8MZ6JKVY7GCBYCjhb2tAmZgQctJ
 gWRar6z4xGZ2KzCegCEqSQ2PPZqHWO9z/ILVX0+CRR/r46vZTbfmKc0r3JLU1g4A
 aVlQeg8gulcHitSiBsgnhe08TaOOuOT/bdrYTLZjfUHkNdhIPSEZ1DPmz0yPYwAK
 IBmznUrs0MfAVigRorcEzJXKW4fyUrurO+OrmuMJ2WQqCOrq1XtplozX3rIY2sGk
 hD/MN7MQurRUiG7efaiYeD5dwQZVOCjMlM7Z9mWJNhPDrOix6ZFIk19Q6fOGa8Ng
 kHCNJiPaOLaA4PvOFJNsvT7Fp3btC72s82m+zeRJXHGoTXEJGGkDoMieuodmvDHK
 o5ejz3dChdm3abUBndJqVFX5l934D/DGDS4gMb47wqC+xFR6JGQ3uOLUjkZWJkhU
 YJTQKRqMd/EKqoGbZeBryiNqCGm9QmhCWN90qgumoRw82MspIvAtsP+K6OJa0ONq
 by7Nh21+Kdv+43zqG9L7
 =3ckJ
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/ehabkost/tags/x86-next-pull-request' into staging

Fix for -rc2

* Fix build failure on mips host

# gpg: Signature made Mon 23 Jul 2018 20:44:47 BST
# gpg:                using RSA key 2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost/tags/x86-next-pull-request:
  i386: Rename enum CacheType members

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-24 10:37:52 +01:00
Eduardo Habkost
5f00335aec i386: Rename enum CacheType members
Rename DCACHE to DATA_CACHE and ICACHE to INSTRUCTION_CACHE.
This avoids conflict with Linux asm/cachectl.h macros and fixes
build failure on mips hosts.

Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180717194010.30096-1-ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2018-07-23 12:56:19 -03:00
Peter Maydell
9d2b5a58f8 target/arm: Correctly handle overlapping small MPU regions
To correctly handle small (less than TARGET_PAGE_SIZE) MPU regions,
we must correctly handle the case where the address being looked
up hits in an MPU region that is not small but the address is
in the same page as a small region. For instance if MPU region
1 covers an entire page from 0x2000 to 0x2400 and MPU region
2 is small and covers only 0x2200 to 0x2280, then for an access
to 0x2000 we must not return a result covering the full page
even though we hit the page-sized region 1. Otherwise we will
then cache that result in the TLB and accesses that should
hit region 2 will incorrectly find the region 1 information.

Check for the case where we miss an MPU region but it is still
within the same page, and in that case narrow the size we will
pass to tlb_set_page_with_attrs() for whatever the final
outcome is of the MPU lookup.

Reported-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180716133302.25989-1-peter.maydell@linaro.org
2018-07-23 15:21:26 +01:00
David Hildenbrand
677ff32db1 s390x/cpumodel: fix segmentation fault when baselining models
Usually, when baselining two CPU models where one of them has base
CPU features disabled (e.g. z14-base,msa=off), we fall back to an older
model that did not have these features in its base model. We always try to
create a "sane" CPU model (as far as possible), and part of that is that
removing base features is not acceptable and is to be avoided.

Now, if we disable base features that were part of a z900, we're out of
luck. We won't find a CPU model and QEMU will segfault. This is a
scenario that should never happen in real life, but it can be used to
crash QEMU.

So let's properly report an error if we baseline e.g.:

{ "execute": "query-cpu-model-baseline",
  "arguments" : { "modela": { "name": "z14-base", "props": {"esan3" : false}},
                  "modelb": { "name": "z14"}} }

Instead of segfaulting.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180718092330.19465-1-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-18 14:20:02 +02:00
Peter Maydell
59b5552f02 Bug fixes.
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.22 (GNU/Linux)
 
 iQEcBAABAgAGBQJbTgXfAAoJEL/70l94x66DTbYH/3NutBAkNZKX7EImj/d0I1O8
 nERMVH1R70KBcugdsjhaBfTRoATDXdrBng4MBqloIK9dEMT3g6D4TFZJLU+WAjOc
 8sItx0BrUR7Sl8SnAvWNFoqVtvVancFiLnu11DsFGM0l8mJHRlZSkQZ0Fd0FL2W/
 OPnW7t6F7B2bc1VlPfSs093FVCoD3S+lJmbj64dwNrn8+fOX918V6gSaYQe92aIY
 pSbJjkRDx2iULmzMY8QH4OQiHgnd/Pijj+D628DMrUc0iW1Rsw5V2Yq7SMY6zoa8
 MoI/YDwX6eRMU2mq74BrKlULZrpmQn+6ZCdZTvXzLwc2zpKD4puO4FuMBOA7yx4=
 =GcxI
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

Bug fixes.

# gpg: Signature made Tue 17 Jul 2018 16:06:07 BST
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream:
  Document command line options with single dash
  opts: remove redundant check for NULL parameter
  i386: only parse the initrd_filename once for multiboot modules
  i386: fix regression parsing multiboot initrd modules
  virtio-scsi: fix hotplug ->reset() vs event race
  qdev: add HotplugHandler->post_plug() callback
  hw/char/serial: retry write if EAGAIN
  PC Chipset: Improve serial divisor calculation
  vhost-user-test: added proper TestServer *dest initialization in test_migrate()
  hyperv: ensure VP index equal to QEMU cpu_index
  hyperv: rename vcpu_id to vp_index
  accel: Fix typo and grammar in comment
  dump: add kernel_gs_base to QEMU CPU state

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-17 17:06:32 +01:00
Richard Henderson
628fc75f3a target/arm: Fix LD1W and LDFF1W (scalar plus vector)
'I' was being double-incremented; correctly within the inner loop
and incorrectly within the outer loop.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180711103957.3040-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-16 17:18:41 +01:00
Roman Kagan
e9688fabc3 hyperv: ensure VP index equal to QEMU cpu_index
Hyper-V identifies vCPUs by Virtual Processor (VP) index which can be
queried by the guest via HV_X64_MSR_VP_INDEX msr.  It is defined by the
spec as a sequential number which can't exceed the maximum number of
vCPUs per VM.

It has to be owned by QEMU in order to preserve it across migration.

However, the initial implementation in KVM didn't allow setting this
MSR, and KVM used its own notion of VP index.  Fortunately, the way
vCPUs are created in QEMU/KVM makes it likely that the KVM value is
equal to QEMU cpu_index.

So choose cpu_index as the value for vp_index, and push that to KVM on
kernels that support setting the msr.  On older ones that don't, query
the kernel value and assert that it's in sync with QEMU.

Besides, since handling errors from vCPU init at hotplug time is
impossible, disable vCPU hotplug.

This patch also introduces accessor functions to encapsulate the mapping
between a vCPU and its vp_index.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180702134156.13404-3-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-16 16:58:16 +02:00
Roman Kagan
1b2013ea5d hyperv: rename vcpu_id to vp_index
In Hyper-V-related code, vCPUs are identified by their VP (virtual
processor) index.  Since it's customary for "vcpu_id" in QEMU to mean
APIC id, rename the respective variables to "vp_index" to make the
distinction clear.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20180702134156.13404-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-16 16:58:16 +02:00
Viktor Prutyanov
46fac17dca dump: add kernel_gs_base to QEMU CPU state
This patch adds field with content of KERNEL_GS_BASE MSR to QEMU note in
ELF dump.

On Windows, if all vCPUs are running usermode tasks at the time the dump is
created, this can be helpful in the discovery of guest system structures
during conversion of the ELF dump to a MEMORY.DMP dump.

Signed-off-by: Viktor Prutyanov <viktor.prutyanov@virtuozzo.com>
Message-Id: <20180714123000.11326-1-viktor.prutyanov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-16 16:13:34 +02:00
Peter Maydell
2b83714d4e target/arm: Use correct mmu_idx for exception-return unstacking
For M-profile exception returns, the mmu index to use for exception
return unstacking is supposed to be that of wherever we are returning to:
 * if returning to handler mode, privileged
 * if returning to thread mode, privileged or unprivileged depending on
   CONTROL.nPRIV for the destination security state

We were passing the wrong thing as the 'priv' argument to
arm_v7m_mmu_idx_for_secstate_and_priv(). The effect was that guests
which programmed the MPU to behave differently for privileged and
unprivileged code could get spurious MemManage Unstack exceptions.

Reported-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180709124535.1116-1-peter.maydell@linaro.org
2018-07-10 10:54:40 +01:00
Richard Henderson
be0e3d7a1e target/sh4: Fix translator.c assertion failure for gUSA
The translator loop does not allow the tb_start hook to set
dc->base.is_jmp; the only hook allowed to do that is translate_insn.

Split the work between init_disas_context where we validate
the gUSA parameters, and translate_insn where we emit code.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2018-07-09 10:34:04 -07:00
Richard Henderson
973558a3f8 target/arm: Fix do_predset for large VL
Use MAKE_64BIT_MASK instead of open-coding.  Remove an odd
vector size check that is unlikely to be more profitable
than 3 64-bit integer stores.  Correct the iteration for WORD
to avoid writing too much data.

Fixes RISU tests of PTRUE for VL 256.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180705191929.30773-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-09 14:51:34 +01:00
Richard Henderson
2f95a3b09a target/arm: Suppress Coverity warning for PRF
These instructions must perform the sve_access_check, but
since they are implemented as NOPs there is no generated
code to elide when the access check fails.

Fixes: Coverity issues 1393780 & 1393779.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-09 14:51:34 +01:00
Laurent Vivier
4fff72185b target/ppc: fix build on ppc64 host
When I try to build a ppc64 target on a ppc64 host (gcc 8.1.1), I get:

.../target/ppc/int_helper.c: In function 'helper_vinsertb':
.../target/ppc/int_helper.c:1954:32: error: array subscript 18446744073709551608 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
         memmove(&r->u8[index], &b->u8[8 - sizeof(r->element)],              \
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.../target/ppc/int_helper.c:1965:1: note: in expansion of macro 'VINSERT'

If we compare with the macro for ppc64le, we can see
sizeof(r->element[0]) should be used instead of sizeof(r->element).

And VINSERT uses only u8, u16, u32 and u64, so the maximum value
of sizeof(r->element[0]) is 8.
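
The distinction, in a standalone C example (not the QEMU macro itself):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t element[8];
        printf("%zu\n", sizeof(element));    /* 16: the whole array  */
        printf("%zu\n", sizeof(element[0])); /*  2: a single element */
        return 0;
    }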

Suggested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-07 12:12:27 +10:00
Greg Kurz
02693cc4f4 i386: fix '-cpu ?' output for host cpu type
Since commit d6dcc5583e, '-cpu ?' shows the description of the
X86_CPU_TYPE_NAME("max") for the host CPU model:

Enables all features supported by the accelerator in the current host

instead of the expected:

KVM processor with all supported host features

or

HVF processor with all supported host features

This is caused by the early use of kvm_enabled() and hvf_enabled() in
a class_init function. Since the accelerator isn't configured yet, both
helpers return false unconditionally.

A QEMU binary will only be compiled with one of these accelerators, not
both. The appropriate description can thus be decided at build time.

Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <153055056654.212317.4697363278304826913.stgit@bahia.lan>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-06 18:39:19 +02:00
Peter Maydell
f988c7e191 OpenRISC cleanups and Fixes for QEMU 3.0
Mostly patches from Richard Henderson fixing multiple things:
  * Fix singlestepping in GDB.
  * Use more TB linking.
  * Fixes to exit TB after updating SPRs to enable registering of state
    changes.
  * Significant optimizations and refactors to the TLB
  * Split out disassembly from translation.
  * Add qemu-or1k to qemu-binfmt-conf.sh.
  * Implement signal handling for linux-user.
 
 Then there are a few fixups from me:
  * Fix delay slot detections to match hardware, this was masking a bug
    in the Linux kernel.
  * Fix stores to the PIC mask register
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJbO32qAAoJEMOzHC1eZifkdHwP/2zV/dE3A0XvEynghJU4XeVe
 KlNRupCjp2civk9d9E+BJwIOVDMPfBQbBKfGC2fjzBGOuop8ZjUvvUuazNTEQoov
 9RfeXPMkP8xJUzGp02Gl87ZcMY9ZXJrqlPb2BaJ//8f/E0CF+91ODnkeLK62UXnb
 EbBCf5IlJy/B6Fp9icfdE09/nYx6SmQHPJZo9nC8xiWNZ8LewXn+DWGH81EHd8w1
 j99FV5ijImwB/LNOP6aVelyyKV9ZpInI6ZqC1LztWWaZftJ42TuvUq4vboP1P2s9
 vC9RV5oWl3/DL9HQxEphrynqKNvrxcceoQhxXirEzbLeYG83Tx1ed2a7J3x83gtY
 reChNmnwRuuchCot3cK4xDn2e0dY87dT24wtBM9HNmLsGgEzudZuGPwdlHrBoRFP
 o3exRItbfFr6SFdgUZaMjeC0vRSVU/FPqPRswJESWelEEMCi1R/CeQFKaw8BmaOG
 rw9Ed1rtX23R1Ce/ggEQgkxh2cWGyV1Tc0q5M09UDDmHKq0pd27d7CLlVMYsqZqk
 8guPjPzsYP12vNFTjx6tC8inOwJalK7kDVJv1a+c/bpyqaulcT22o7ck8rqlCZbU
 wpQbhAAGbbVrwnjQevO11MZsXk+6FANw6KIXxEDdDVbfqQ+WLtHOfXsUkG/xVUuR
 K1hiEeYKl77HCrDFkhqB
 =mbu2
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/shorne/tags/pull-or-20180703' into staging

OpenRISC cleanups and Fixes for QEMU 3.0

Mostly patches from Richard Henderson fixing multiple things:
 * Fix singlestepping in GDB.
 * Use more TB linking.
 * Fixes to exit TB after updating SPRs to enable registering of state
   changes.
 * Significant optimizations and refactors to the TLB
 * Split out disassembly from translation.
 * Add qemu-or1k to qemu-binfmt-conf.sh.
 * Implement signal handling for linux-user.

Then there are a few fixups from me:
 * Fix delay slot detections to match hardware, this was masking a bug
   in the Linux kernel.
 * Fix stores to the PIC mask register

# gpg: Signature made Tue 03 Jul 2018 14:44:10 BST
# gpg:                using RSA key C3B31C2D5E6627E4
# gpg: Good signature from "Stafford Horne <shorne@gmail.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D9C4 7354 AEF8 6C10 3A25  EFF1 C3B3 1C2D 5E66 27E4

* remotes/shorne/tags/pull-or-20180703: (25 commits)
  target/openrisc: Fix writes to interrupt mask register
  target/openrisc: Fix delay slot exception flag to match spec
  linux-user: Fix struct sigaltstack for openrisc
  linux-user: Implement signals for openrisc
  target/openrisc: Add support in scripts/qemu-binfmt-conf.sh
  target/openrisc: Reorg tlb lookup
  target/openrisc: Increase the TLB size
  target/openrisc: Stub out handle_mmu_fault for softmmu
  target/openrisc: Use identical sizes for ITLB and DTLB
  target/openrisc: Fix cpu_mmu_index
  target/openrisc: Fix tlb flushing in mtspr
  target/openrisc: Reduce tlb to a single dimension
  target/openrisc: Merge mmu_helper.c into mmu.c
  target/openrisc: Remove indirect function calls for mmu
  target/openrisc: Merge tlb allocation into CPUOpenRISCState
  target/openrisc: Form the spr index from tcg
  target/openrisc: Exit the TB after l.mtspr
  target/openrisc: Split out is_user
  target/openrisc: Link more translation blocks
  target/openrisc: Fix singlestep_enabled
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-03 16:04:41 +01:00
Peter Maydell
b07cd3e748 ppc patch queue 2018-07-03
 Here's a last minute pull request before today's soft freeze.  Ideally
 I would have sent this earlier, but I was waiting for a couple of
 extra fixes I knew were close.  And the freeze crept up on me, like
 always.
 
 Most of the changes here are bugfixes in any case.  There are some
 cleanups as well, which have been in my staging tree for a little
 while.  There are a couple of truly new features (some extensions to
 the sam460ex platform), but these are low risk, since they only affect
 a new and not really stabilized machine type anyway.
 
 Highlights are:
   * Mac platform improvements from Mark Cave-Ayland
   * Sam460ex improvements from BALATON Zoltan et al.
   * XICS interrupt handler cleanups from Cédric Le Goater
   * TCG improvements for atomic loads and stores from Richard
     Henderson
   * Assorted other bugfixes
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAls7D8oACgkQbDjKyiDZ
 s5Lxmg//YzPfC/nKqTTKkyJPzh/NnSC+kRTMAT3mbxdRIc7yfgMqJtWGGbS1iKgK
 EeJ9hl5Qm0HfscfDuzf0xasU62ZEv3kNdLnWJEIgkqiXrxoO5KCnC0y4D8NN1W03
 mvINNCa8+QDg2OsirGmNUTkriiG3wLIrHTpLZ4+JuC2Bd9H3nTHZgJ0MXON/1VWY
 oRgr6kMZ5+IAzPhvYLFR6l3nPI883fgJOFyRo7YqYrkVBKFrFkfK0Xjw6vpsNxcx
 2dE/YCHhNIriLuBG5noewL7GuqZRtLnl6rjjee5VAKIe1EmFeR+jsXwNjzGOVOJg
 dhjOtsJsQQ3WdEw5uImJzE64kV228WCgmkeXzZd1010JBLr7sUkrd2EuoZ23vvat
 uvZAHVSBrJg5WvzMo1VMEoPU3VeeZQ5HL+MI80iKiU6oUgRK11gVJcebtA0sEKt+
 zhJC4JiUlHtZLTGIpMBmU8DJZ3Tyk1cBEm+Ky+SaPE+dsz16UHI0fazFQXJnXphE
 MLHEGAyQgzWYp7kIcAjUFev0Geq/Uovy4JKIGI6ISop1wRPEQDxkthfkfRyQxQkE
 zuse4EBcEH/Undw9KrmEQa0hCe+8BRkxklVbPesFPPdqH3PKNxtHYuWpSShQF0PW
 XMjw43O2Rbsl8kBUHCpy4pYSugD1hpfgaw/mVUOU1u/M1O6toTw=
 =AHrx
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-3.0-20180703' into staging

ppc patch queue 2018-07-03

Here's a last minute pull request before today's soft freeze.  Ideally
I would have sent this earlier, but I was waiting for a couple of
extra fixes I knew were close.  And the freeze crept up on me, like
always.

Most of the changes here are bugfixes in any case.  There are some
cleanups as well, which have been in my staging tree for a little
while.  There are a couple of truly new features (some extensions to
the sam460ex platform), but these are low risk, since they only affect
a new and not really stabilized machine type anyway.

Highlights are:
  * Mac platform improvements from Mark Cave-Ayland
  * Sam460ex improvements from BALATON Zoltan et al.
  * XICS interrupt handler cleanups from Cédric Le Goater
  * TCG improvements for atomic loads and stores from Richard
    Henderson
  * Assorted other bugfixes

# gpg: Signature made Tue 03 Jul 2018 06:55:22 BST
# gpg:                using RSA key 6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg:                 aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg:                 aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg:                 aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-3.0-20180703: (35 commits)
  ppc: Include vga cirrus card into the compiling process
  target/ppc: Relax reserved bitmask of indexed store instructions
  target/ppc: set is_jmp on ppc_tr_breakpoint_check
  spapr: compute default value of "hpt-max-page-size" later
  target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
  target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
  ppc440_uc: Basic emulation of PPC440 DMA controller
  sam460ex: Add RTC device
  hw/timer: Add basic M41T80 emulation
  ppc4xx_i2c: Rewrite to model hardware more closely
  hw/ppc: Give sam46ex its own config option
  fpu_helper.c: fix setting FPSCR[FI] bit
  target/ppc: Implement the rest of gen_st_atomic
  target/ppc: Implement the rest of gen_ld_atomic
  target/ppc: Use atomic min/max helpers
  target/ppc: Use MO_ALIGN for EXIWX and ECOWX
  target/ppc: Split out gen_st_atomic
  target/ppc: Split out gen_ld_atomic
  target/ppc: Split out gen_load_locked
  target/ppc: Tidy gen_conditional_store
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	hw/ppc/spapr.c
2018-07-03 14:59:27 +01:00
Stafford Horne
dfc84745bb target/openrisc: Fix writes to interrupt mask register
The interrupt controller mask register (PICMR) allows writing any value
to any of the 32 interrupt mask bits.  Writing a 0 masks the interrupt;
writing a 1 unmasks (enables) the interrupt.

For some reason the old code was or'ing the written values into the PICMR,
meaning it was not possible to ever mask an interrupt once it was
enabled.
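
In other words, the write handler needs a plain assignment rather than an OR; a hedged before/after sketch (the exact field name in the CPU state is an assumption here):

    /* before: bits can only ever be set, never cleared */
    env->picmr |= value;

    /* after: the guest-written value fully replaces the mask */
    env->picmr = value;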

I have tested this by running Linux 4.18 and my regular checks; I don't
see any issues.

Reported-by: Davidson Francis <davidsondfgl@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 22:40:33 +09:00
Stafford Horne
9f6e8afad7 target/openrisc: Fix delay slot exception flag to match spec
The delay slot exception flag is only set on the SR register during an
exception.  Previously it was being set on both the ESR and SR, which
caused QEMU to differ from the spec.  This was apparent as the Linux
kernel had a bug where it could boot on QEMU but not on real hardware.

The fixed logic now matches hardware.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 22:40:33 +09:00
Richard Henderson
e8f29049b1 linux-user: Implement signals for openrisc
All of the existing code was boilerplate from elsewhere,
and would crash the guest upon the first signal.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>

---
v2:
  Add a comment to the new definition of target_pt_regs.
  Install the signal mask into the ucontext.
v3:
  Incorporate feedback from Laurent.
2018-07-03 22:40:33 +09:00
Richard Henderson
f0655423ca target/openrisc: Reorg tlb lookup
While openrisc has a split i/d tlb, qemu does not.  Perform a
lookup on both i & d tlbs in parallel and put the composite
rights into qemu's tlb.  This avoids ping-ponging the qemu tlb
between EXEC and READ.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 22:40:33 +09:00
BALATON Zoltan
0123d3cbb0 target/ppc: Relax reserved bitmask of indexed store instructions
The PPC440 User Manual says that if bit 31 is set, the contents of
CR[CR0] are undefined for indexed store instructions but this form is
not invalid. Other PPC variants conforming to recent ISA versions where this
bit may be reserved should ignore reserved bits and not raise an invalid
instruction exception. In particular, MorphOS has an stwx instruction
with bit 31 set and fails to boot currently because of this. With this
patch it gets further.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 11:13:08 +10:00
Emilio G. Cota
2a8ceefca2 target/ppc: set is_jmp on ppc_tr_breakpoint_check
The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert
to TranslatorOps", 2018-02-16).

Fix it by setting is_jmp, so that we break from the translation loop
as originally intended.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 11:00:02 +10:00
Greg Kurz
ab25696009 target/ppc/kvm: don't pass cpu to kvm_get_smmu_info()
In a future patch the machine code will need to retrieve the MMU
information from KVM during machine initialization before the CPUs
are created.

Actually, KVM_PPC_GET_SMMU_INFO is a VM class ioctl, and thus we
don't need to have a CPU object around. We just need KVM to be
initialized so that we can use the kvm_state global. This patch just does
that.
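
A minimal sketch of what such a CPU-less query looks like with the VM-level ioctl wrapper (error handling elided; assumes the usual kvm_state global and the KVM_PPC_GET_SMMU_INFO ioctl):

    struct kvm_ppc_smmu_info info;

    /* VM-scoped ioctl: needs only kvm_state, not a vCPU */
    int ret = kvm_vm_ioctl(kvm_state, KVM_PPC_GET_SMMU_INFO, &info);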

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Greg Kurz
71d0f1eac4 target/ppc/kvm: get rid of kvm_get_fallback_smmu_info()
Now that we're checking our MMU configuration is supported by KVM,
rather than adjusting it to KVM, it doesn't really make sense to
have a fallback for kvm_get_smmu_info(). If KVM is too old or buggy
to provide the details, we should rather treat this as an error.

This patch thus adds error reporting to kvm_get_smmu_info() and gets
rid of the fallback code. QEMU will now terminate if KVM fails to
provide MMU details. This may break some very old setups, but the
simplification is worth the sacrifice.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
John Arbuckle
9e430ca3da fpu_helper.c: fix setting FPSCR[FI] bit
The FPSCR[FI] bit indicates if the last floating point instruction had a result that was rounded. Each consecutive floating point instruction is supposed to set this bit to the correct value. What currently happens is that this bit is not set as often as it should be. I have verified that this is the behavior of a real PowerPC 950. This patch fixes that problem by setting this bit after each floating point instruction.

https://www.pdfdrive.net/powerpc-microprocessor-family-the-programming-environments-for-32-e3087633.html
Page 63 in table 2-4 is where the description of this bit can be found.

Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
7fbc2b20d2 target/ppc: Implement the rest of gen_st_atomic
The store twin case was stubbed out.  For now, implement it only within
a serial context, forcing parallel execution to synchronize.  It would
be possible to implement with a cmpxchg loop, if we care, but the loose
alignment requirements (simply no crossing 32-byte boundary) might send
us back to the serial context anyway.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
20923c1d02 target/ppc: Implement the rest of gen_ld_atomic
These cases were stubbed out.  For now, implement them only within
a serial context, forcing parallel execution to synchronize.  It
would be possible to implement these with cmpxchg loops, if we care.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
b8ce0f8678 target/ppc: Use atomic min/max helpers
These operations were previously unimplemented for ppc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
c674a9831e target/ppc: Use MO_ALIGN for EXIWX and ECOWX
This avoids the need for gen_check_align entirely.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
9deb041cbd target/ppc: Split out gen_st_atomic
Move the guts of ST_ATOMIC to a function.  Use foo_tl for the operations
instead of foo_i32 or foo_i64 specifically.  Use MO_ALIGN instead of an
explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
20ba8504a6 target/ppc: Split out gen_ld_atomic
Move the guts of LD_ATOMIC to a function.  Use foo_tl for the operations
instead of foo_i32 or foo_i64 specifically.  Use MO_ALIGN instead of an
explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
2a4e6c1bff target/ppc: Split out gen_load_locked
Leave only the minimal amount of code within the LDAR macro,
moving the rest of the code into gen_load_locked.  Use MO_ALIGN
and remove the explicit call to gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
d8b8689827 target/ppc: Tidy gen_conditional_store
Leave only the minimal amount of code within the STCX macro,
moving the rest of the code into gen_conditional_store.
Remove the explicit call to gen_check_align; the matching LDAX will
have already checked alignment, and we verify the same address.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
14db18997e target/ppc: Remove POWERPC_EXCP_STCX
Always use the gen_conditional_store implementation that uses
atomic_cmpxchg.  Make sure to clear reserve_addr across most
interrupts crossing the cpu_loop.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
4a9b3c5dd3 target/ppc: Use atomic cmpxchg for STQCX
When running in a parallel context, we must use a helper in order
to perform the 128-bit atomic operation.  When running in a serial
context, do the compare before the store.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
f89ced5f55 target/ppc: Use atomic store for STQ
Section 1.4 of the Power ISA v3.0B states that this insn is
single-copy atomic.  As we cannot (yet) issue 128-bit stores
within TCG, use the generic helpers provided.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
94bf265867 target/ppc: Use atomic load for LQ and LQARX
Section 1.4 of the Power ISA v3.0B states that both of these
instructions are single-copy atomic.  As we cannot (yet) issue
128-bit loads within TCG, use the generic helpers provided.

Since TCG cannot (yet) return a 128-bit value, add a slot within
CPUPPCState for returning the high half of a 128-bit return value.
This solution is preferred to the helper assigning to architectural
registers directly, as it avoids clobbering all TCG live values.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Richard Henderson
0f3110fa67 target/ppc: Add do_unaligned_access hook
This allows faults from MO_ALIGN to have the same effect
as from gen_check_align.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2018-07-03 09:56:52 +10:00
Peter Maydell
e8c858944e * IEC units series (Philippe)

Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging

* IEC units series (Philippe)
* Hyper-V PV TLB flush (Vitaly)
* git archive detection (Daniel)
* host serial passthrough fix (David)
* NPT support for SVM emulation (Jan)
* x86 "info mem" and "info tlb" fix (Doug)

# gpg: Signature made Mon 02 Jul 2018 16:18:21 BST
# gpg:                using RSA key BFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (50 commits)
  tcg: simplify !CONFIG_TCG handling of tb_invalidate_*
  i386/monitor.c: make addresses canonical for "info mem" and "info tlb"
  target-i386: Add NPT support
  serial: Open non-block
  bsd-user: Use the IEC binary prefix definitions
  linux-user: Use the IEC binary prefix definitions
  tests/crypto: Use the IEC binary prefix definitions
  vl: Use the IEC binary prefix definitions
  monitor: Use the IEC binary prefix definitions
  cutils: Do not include "qemu/units.h" directly
  hw/rdma: Use the IEC binary prefix definitions
  hw/virtio: Use the IEC binary prefix definitions
  hw/vfio: Use the IEC binary prefix definitions
  hw/sd: Use the IEC binary prefix definitions
  hw/usb: Use the IEC binary prefix definitions
  hw/net: Use the IEC binary prefix definitions
  hw/i386: Use the IEC binary prefix definitions
  hw/ppc: Use the IEC binary prefix definitions
  hw/mips: Use the IEC binary prefix definitions
  hw/mips/r4k: Constify params_size
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-07-02 19:07:19 +01:00
Richard Henderson
1cc9e5d896 target/openrisc: Increase the TLB size
The architecture supports 128 TLB entries.  There is no reason
not to provide all of them.  In the process we need to fix a
bug where the configuration register that tells the operating
system the number of entries was not parameterized.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>

---
v2:
  - Change VMState version.
2018-07-03 00:05:28 +09:00
Richard Henderson
5ce5dad352 target/openrisc: Stub out handle_mmu_fault for softmmu
This hook is only used by CONFIG_USER_ONLY.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
56c3a14156 target/openrisc: Use identical sizes for ITLB and DTLB
The sizes are already the same; however, we can improve things
if they are identical by design.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
b9bed1b9ab target/openrisc: Fix cpu_mmu_index
The code in cpu_mmu_index does not properly honor SR_DME.
This bug is worked around elsewhere by flushing the tlb more
often than necessary, on state changes that should instead be
reflected in a change of mmu_index.

Fixing this means that we can respect the mmu_index that
is given to tlb_flush.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
fffde6695f target/openrisc: Fix tlb flushing in mtspr
The previous code was confused: it avoided flushing the old entry
if the new entry was invalid.  We need to flush the old page if the
old entry is valid and the new page if the new entry is valid.

This bug was masked by over-flushing elsewhere.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
2acaa2331b target/openrisc: Reduce tlb to a single dimension
While we had defines for *_WAYS, we didn't define more than 1.
Reduce the complexity by eliminating this unused dimension.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
fd992ee7e3 target/openrisc: Merge mmu_helper.c into mmu.c
With tlb_fill in mmu.c, we can simplify things further.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
23d45ebdb1 target/openrisc: Remove indirect function calls for mmu
There is no reason to use an indirect branch instead
of simply testing the SR bits that control mmu state.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
455d45d22c target/openrisc: Merge tlb allocation into CPUOpenRISCState
There is no reason to allocate this separately.  This was probably
copied from target/mips which makes the same mistake.

While doing so, move tlb into the clear-on-reset range.  While not
all of the TLB bits are guaranteed zero on reset, all of the valid
bits are cleared, and the rest of the bits are unspecified.
Therefore clearing the whole of the TLB is correct.

Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
c28fa81f91 target/openrisc: Form the spr index from tcg
Rather than pass base+offset to the helper, pass the full index.
In most cases the base is r0 and optimization yields a constant.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
01ec3ec930 target/openrisc: Exit the TB after l.mtspr
A store to SR changes interrupt state, so we should return
to the main loop to recognize the new state.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
2ba6541792 target/openrisc: Split out is_user
This allows us to limit the amount of ifdefs and isolate
the test for usermode.

Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
8000ba56cc target/openrisc: Link more translation blocks
Track direct jumps via dc->jmp_pc_imm.  Use that in
preference to jmp_pc when possible.  Emit goto_tb in
that case, and lookup_and_goto_tb otherwise.

Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
e0a369cf88 target/openrisc: Fix singlestep_enabled
We failed to store to cpu_pc before raising the exception,
which caused us to re-execute the same insn that we stepped.

Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
64e46c9581 target/openrisc: Use exit_tb instead of CPU_INTERRUPT_EXITTB
No need to use the interrupt mechanisms when we can
simply exit the tb directly.

Reviewed-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
c86395c850 target/openrisc: Remove DISAS_JUMP & DISAS_TB_JUMP
These values are unused.

Reviewed-by: Stafford Horne <shorne@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
378cd36f3c target/openrisc: Log interrupts
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Richard Henderson
d5cabcce62 target/openrisc: Add print_insn_or1k
Rather than emit disassembly while translating, reuse the
generated decoder to build a separate disassembler.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-03 00:05:28 +09:00
Doug Gale
3afc969a6e i386/monitor.c: make addresses canonical for "info mem" and "info tlb"
Fix the output of the "info mem" and "info tlb" monitor commands so
that it shows canonical addresses.

In 48-bit addressing mode, the upper 16 bits of linear addresses are
equal to bit 47. In 57-bit addressing mode (LA57), the upper 7 bits of
linear addresses are equal to bit 56.
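
For illustration, canonicalization amounts to sign-extending from the
top implemented virtual-address bit. A minimal standalone sketch (our
own names, not the actual monitor code):

  #include <stdint.h>

  /* Sign-extend a linear address to canonical form: va_bits is 48
   * for 4-level paging and 57 for LA57. */
  static uint64_t make_canonical(uint64_t la, int va_bits)
  {
      int shift = 64 - va_bits;
      return (uint64_t)((int64_t)(la << shift) >> shift);
  }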

Signed-off-by: Doug Gale <doug16k@gmail.com>
Message-Id: <20180617084025.29198-1-doug16k@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02 15:41:18 +02:00
Jan Kiszka
fe441054bb target-i386: Add NPT support
This implements NPT support for SVM by hooking into
x86_cpu_handle_mmu_fault where it reads the stage-1 page table. Whether
we need to perform this 2nd stage translation, and how, is decided
during vmrun and stored in hflags2, along with nested_cr3 and
nested_pg_mode.

As get_hphys performs a direct cpu_vmexit in case of NPT faults, we need
retaddr in that function. To avoid changing the signature of
cpu_handle_mmu_fault, this passes the value from tlb_fill to get_hphys
via the CPU state.

This was tested successfully via the Jailhouse hypervisor.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <567473a0-6005-5843-4c73-951f476085ca@web.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02 15:41:18 +02:00
Philippe Mathieu-Daudé
ab3dd74924 hw/ppc: Use the IEC binary prefix definitions
It eases code review: the unit is explicit.

Patch generated using:

  $ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/

and modified manually.
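
For reference, "qemu/units.h" provides definitions roughly equivalent
to the following, so a size can be written as e.g. 512 * MiB instead
of 512 * 1024 * 1024 (values reproduced here for illustration only):

  #define KiB (1ULL << 10)
  #define MiB (1ULL << 20)
  #define GiB (1ULL << 30)
  #define TiB (1ULL << 40)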

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-Id: <20180625124238.25339-33-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02 15:41:16 +02:00
Philippe Mathieu-Daudé
b941329dc4 hw/xtensa: Use the IEC binary prefix definitions
It eases code review: the unit is explicit.

Patch generated using:

  $ git grep -E '(1024|2048|4096|8192|(<<|>>).?(10|20|30))' hw/ include/hw/
  $ git grep -n '[<>][<>]= ?[1-5]0'

and modified manually.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Message-Id: <20180625124238.25339-22-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02 15:41:14 +02:00
Richard Henderson
c3513c836e target/openrisc: Fix mtspr shadow gprs
A break was missing when this feature was added in 89e71e873d
("target/openrisc: implement shadow registers").  This was causing
strange issues, as we got writes into the translation block jump cache
and other bits of state.

Fixes: 89e71e873d ("target/openrisc: implement shadow registers")
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Stafford Horne <shorne@gmail.com>
2018-07-02 22:31:59 +09:00
Philippe Mathieu-Daudé
6a4e0614c3 x86/cpu: Use definitions from "qemu/units.h"
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180625124238.25339-4-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02 14:45:23 +02:00
Vitaly Kuznetsov
475120099d i386/kvm: add support for Hyper-V TLB flush
Add support for Hyper-V TLB flush which recently got added to KVM.

Just like regular Hyper-V we announce HV_EX_PROCESSOR_MASKS_RECOMMENDED
regardless of how many vCPUs we have. Windows is 'smart' and uses the
less expensive non-EX hypercall whenever possible (when it wants to
flush the TLB for all vCPUs, or when the maximum vCPU index in the set
that requires flushing is less than 64).

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Message-Id: <20180610184927.19309-1-vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-07-02 14:45:23 +02:00
David Hildenbrand
30c8db0e21 s390x/tcg: fix locking problem with tcg_s390_tod_updated
tcg_s390_tod_updated() is always called with the iothread being locked
(e.g. from S390TODClass->set() e.g. via HELPER(sck) or on incoming
migration). The helper we call takes the lock itself - bad.

Let's change that by factoring out updating the ckc timer. This now looks
much nicer than having to call a helper from another function.

While touching it, we also make sure that env->ckc is updated even if the
new value is -1ULL; previously it would not have been modified in that case.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180629170520.13671-1-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
d44444b074 s390x/kvm: indicate alignment in legacy_s390_alloc()
Let's do this for completeness, although we don't support e.g.
PCDIMM/NVDIMM, which would use the alignment for placing the memory
region in guest physical memory. But maybe someday we will want to
support something like this - then we won't forget about it if we
allow multiple allocations in legacy_s390_alloc().

Use the same alignment as we would set in qemu_anon_ram_alloc(). Our
fixed address satisfies this alignment (1MB). This implicitly sets the
alignment of the underlying memory region.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180628113817.30814-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
8151942151 s390x/kvm: legacy_s390_alloc() only supports one allocation
We always allocate at a fixed address, so a second allocation can
never work; we would simply overwrite mappings.

This can e.g. happen in s390_memory_init(), when trying to allocate more
than 8TB. Let's just bail out, as there is no need for supporting it
(legacy handling for z/VM).

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180628113817.30814-2-david@redhat.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
d66b43c896 s390x/tcg: fix CPU hotplug with single-threaded TCG
run_on_cpu() doesn't seem to work reliably until the CPU has been fully
created if the single-threaded TCG main loop is already running.

Therefore, hotplugging a CPU under single-threaded TCG does currently
not work. We should use the direct call instead of going via
run_on_cpu().

So let's use run_on_cpu() for KVM only - KVM requires it due to the initial
CPU reset ioctl. As a nice side effect, we get rid of the ifdef.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-10-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
7c12f710ba s390x/tcg: rearm the CKC timer during migration
If the CPU data is migrated after the TOD clock, the CKC timer of a CPU
is not rearmed. Let's rearm it when loading the CPU state.

Introduce tcg-stub.c just like kvm-stub.c for tcg specific stubs.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-9-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
9dc6753718 s390x/tcg: implement SET CLOCK
This allows a guest to change its TOD. We already take care of updating
all CKC timers from within S390TODClass.

Use MO_ALIGN to load the operand manually - this will properly trigger a
SPECIFICATION exception.

Acked-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-8-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
345f1ab96e s390x/tcg: SET CLOCK COMPARATOR can clear CKC interrupts
Let's stop the timer and delete any pending CKC IRQ before doing
anything else.

While at it, add a comment why the check for ckc == -1ULL is needed.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-7-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
7de3b1cdc6 s390x/tcg: properly implement the TOD
Right now, each CPU has its own TOD. In particular, the TOD will differ
based on the creation time of a CPU - e.g. when hotplugging a CPU the times
will differ quite a lot, resulting in stall warnings in the guest.

Let's use a single TOD by implementing our new TOD device. Prepare it
for TOD-clock epoch extension.

Most importantly, whenever we set the TOD, we have to update the CKC
timer.

Introduce "tcg_s390x.h" just like "kvm_s390x.h" for tcg specific
function declarations that should not go into cpu.h.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
f777b20544 s390x/tcg: drop tod_basetime
Never set to anything but 0.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-5-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
8046f374a6 s390x/tod: factor out TOD into separate device
Let's treat this like a separate device. TCG will have to store the
actual state/time later on.

Include cpu-qom.h in kvm_s390x.h (due to S390CPU) to compile tod-kvm.c.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
4ab6a1feac s390x/kvm: pass values instead of pointers to kvm_s390_set_clock_*()
We are going to factor out the TOD into a separate device and use const
pointers for device class functions where possible. Right now we are
passing ordinary pointers that should never be touched when setting the TOD.
Let's just pass the values directly.

Note that s390_set_clock() will be removed in a follow-on patch and
therefore its calling convention is not changed.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-3-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
David Hildenbrand
14055ce53c s390x/tcg: avoid overflows in time2tod/tod2time
Big values for the TOD/ns clock can result in some overflows that can be
avoided. Not all overflows can be handled, however, as the conversion either
multiplies by 4.096 or divides by 4.096.

Apply the trick used in the Linux kernel in arch/s390/include/asm/timex.h
for tod_to_ns() and use the same trick also for the conversion in the
other direction.
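
One TOD-clock unit is 1/4096 of a microsecond, i.e. 125/512 ns, so the
naive "tod * 125 / 512" overflows for large TOD values. A standalone
sketch of the kernel trick referenced above (illustrative, not
necessarily the exact QEMU helper):

  #include <stdint.h>

  /* Split the value so the multiplication by 125 cannot overflow:
   * tod * 125 / 512 == (tod / 512) * 125 + (tod % 512) * 125 / 512 */
  static uint64_t tod_to_ns_sketch(uint64_t tod)
  {
      return ((tod >> 9) * 125) + (((tod & 0x1ff) * 125) >> 9);
  }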

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20180627134410.4901-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
Christian Borntraeger
8727315111 s390x/cpumodel: default enable bpb and ppa15 for z196 and later
Most systems and host kernels provide the necessary building blocks for
bpb and ppa15. We can reverse the logic and enable those features by
default, while still allowing them to be disabled via the CPU model.

So let us add bpb and ppa15 to the default CPU model of z196 and later
for the qemu 3.0 machine type (e.g. -cpu z13).  Older machine types (e.g.
s390-ccw-virtio-2.12) will retain the old value and not provide those
bits in the default model.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20180626123830.18282-1-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
Thomas Huth
0f0f8b611e loader: Check access size when calling rom_ptr() to avoid crashes
The rom_ptr() function allows direct access to the ROM blobs that we
load during startup. However, there are currently no checks for the
size of the accesses, so it's currently possible to crash QEMU for
example with:

$ echo "Insane in the mainframe" > /tmp/test.txt
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -append xyz
Segmentation fault (core dumped)
$ s390x-softmmu/qemu-system-s390x -kernel /tmp/test.txt -initrd /tmp/test.txt
Segmentation fault (core dumped)
$ echo -n HdrS > /tmp/hdr.txt
$ sparc64-softmmu/qemu-system-sparc64 -kernel /tmp/hdr.txt -initrd /tmp/hdr.txt
Segmentation fault (core dumped)

We need a way to check the size of the ROM area that we want to
access, so let's add a size parameter to the rom_ptr() function
to avoid these problems.
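
A minimal sketch of the kind of bounds check the new size parameter
enables (illustrative only, not the actual loader code):

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* A request for "size" bytes at "addr" may only be served from a
   * blob loaded at [rom_addr, rom_addr + rom_size) if the whole
   * requested range lies inside it. */
  static bool rom_range_covered(uint64_t rom_addr, size_t rom_size,
                                uint64_t addr, size_t size)
  {
      return addr >= rom_addr &&
             size <= rom_size &&
             addr - rom_addr <= rom_size - size;
  }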

Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1530005740-25254-1-git-send-email-thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-02 10:37:38 +02:00
Peter Maydell
0f02251a30 xtensa: Avoid calling get_page_addr_code() from helper function
The xtensa frontend calls get_page_addr_code() from its
itlb_hit_test helper function. This function is really part
of the TCG core's internals, and calling it from a target
helper makes it awkward to make changes to that core code.
It also means that we don't pass the correct retaddr to
tlb_fill(), so we won't correctly handle the case where
an exception is generated.

The helper is used for the instructions IHI, IHU and IPFL.

Change it to call cpu_ldb_code_ra() instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-30 12:00:17 -07:00
Richard Henderson
9c509ff94e target/xtensa: Convert to TranslatorOps
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30 12:00:17 -07:00
Richard Henderson
1d38a7011f target/xtensa: Change gen_intermediate_code dc to pointer
This will reduce the size of the next patch,
where the context will have to be a pointer.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30 12:00:17 -07:00
Richard Henderson
3cc18eec0a target/xtensa: Convert to DisasContextBase
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30 12:00:14 -07:00
Richard Henderson
f3531da588 target/xtensa: Replace DISAS_UPDATE with DISAS_NORETURN
The usage of DISAS_UPDATE is after noreturn helpers.
It is thus indistinguishable from DISAS_NORETURN.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30 11:58:03 -07:00
Max Filippov
f40385c959 target/xtensa: check zero overhead loop alignment
The ISA book documents that the first instruction of a zero overhead
loop must fit completely into a naturally aligned region of the
instruction fetch unit size. Check that condition and log a message
if it is violated.
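
A standalone sketch of the condition being checked (names are ours;
fetch_unit is the instruction fetch unit size in bytes, a power of two):

  #include <stdbool.h>
  #include <stdint.h>

  /* The instruction at "pc", "len" bytes long, must not cross a
   * naturally aligned fetch-unit boundary. */
  static bool loop_start_fits(uint32_t pc, unsigned len,
                              uint32_t fetch_unit)
  {
      return (pc & ~(fetch_unit - 1)) ==
             ((pc + len - 1) & ~(fetch_unit - 1));
  }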

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2018-06-30 11:58:02 -07:00
Richard Henderson
802abf4024 target/arm: Add ID_ISAR6
This register was added to aa32 state by ARMv8.2.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:30:54 +01:00
Richard Henderson
0b33968e7f target/arm: Prune a15 features from max
There is no need to re-set these 3 features already
implied by the call to aarch64_a15_initfn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:30:54 +01:00
Richard Henderson
156a706536 target/arm: Prune a57 features from max
There is no need to re-set these 9 features already
implied by the call to aarch64_a57_initfn.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:30:54 +01:00
Richard Henderson
11d7870b1b target/arm: Fix SVE system register access checks
Leave ARM_CP_SVE, removing ARM_CP_FPU; the sve_access_check
produced by the flag already includes fp_access_check.  If
we also check ARM_CP_FPU, the double fp_access_check asserts.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20180629001538.11415-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:30:53 +01:00
Richard Henderson
7e8fafbfd0 target/arm: Fix SVE signed division vs x86 overflow exception
We already check for the same condition within the normal integer
sdiv and sdiv64 helpers.  Use a slightly different formulation that
does not require deducing the expression type.
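
A standalone sketch of the guarded-division idea (an illustration, not
the exact QEMU macro): handle divide-by-zero and INT64_MIN / -1 before
the host division, so x86 never raises its overflow exception, and
return the architecturally defined results (0 and the wrapped value):

  #include <stdint.h>

  static int64_t guarded_sdiv64(int64_t n, int64_t m)
  {
      if (m == 0) {
          return 0;               /* division by zero yields 0 */
      }
      if (n == INT64_MIN && m == -1) {
          return INT64_MIN;       /* the result wraps on overflow */
      }
      return n / m;
  }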

Fixes: f97cfd596e
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180629001538.11415-2-richard.henderson@linaro.org
[PMM: reworded a comment]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:28:24 +01:00
Aaron Lindsay
b7d793ad3d target/arm: Mark PMINTENSET accesses as possibly doing IO
This makes it match its AArch64 equivalent, PMINTENSET_EL1.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1529699547-17044-13-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:18 +01:00
Aaron Lindsay
a6070648aa target/arm: Remove redundant DIV detection for KVM
KVM implies V7VE, which implies ARM_DIV and THUMB_DIV. The conditional
detection here is therefore unnecessary. Because V7VE is already
unconditionally specified for all KVM hosts, ARM_DIV and THUMB_DIV are
already indirectly specified and do not need to be included here at all.

Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1529699547-17044-6-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:18 +01:00
Aaron Lindsay
5110e6836b target/arm: Add ARM_FEATURE_V7VE for v7 Virtualization Extensions
Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org>
Message-id: 1529699547-17044-5-git-send-email-alindsay@codeaurora.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:17 +01:00
Alex Bennée
26c4a83bd4 target/arm: support reading of CNT[VCT|FRQ]_EL0 from user-space
Since kernel commit a86bd139f2 (arm64: arch_timer: Enable CNTVCT_EL0
trap..), released in kernel version v4.12, user-space has been able
to read these system registers. As we can't use QEMUTimers in
linux-user mode, we just call cpu_get_clock() directly.
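
As a rough illustration, with the nominal 62.5 MHz frequency QEMU
reports via CNTFRQ_EL0, one counter tick corresponds to 16 ns, so a
clock value in nanoseconds maps onto the counter as below (a sketch
under that assumption, not necessarily the exact readfn):

  #include <stdint.h>

  /* 62.5 MHz counter: one tick every 16 ns. */
  static uint64_t ns_to_counter_ticks(int64_t ns)
  {
      return (uint64_t)(ns / 16);
  }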

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180625160009.17437-2-alex.bennee@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:16 +01:00
Richard Henderson
26c470a7bb target/arm: Implement ARMv8.2-DotProd
We've already added the helpers with an SVE patch; all that remains
is to wire up the aa64 and aa32 translators.  Enable the feature
within -cpu max for CONFIG_USER_ONLY.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-36-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:15 +01:00
Richard Henderson
802ac0e1e9 target/arm: Enable SVE for aarch64-linux-user
Enable ARM_FEATURE_SVE for the generic "max" cpu.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-35-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:15 +01:00
Richard Henderson
16fcfdc732 target/arm: Implement SVE dot product (indexed)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-34-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:15 +01:00
Richard Henderson
d730ecaae7 target/arm: Implement SVE dot product (vectors)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-33-richard.henderson@linaro.org
[PMM: moved 'ra=%reg_movprfx' here from following patch]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:13 +01:00
Richard Henderson
18fc240578 target/arm: Implement SVE fp complex multiply add (indexed)
Enhance the existing helpers to support SVE, which takes the
index from each 128-bit segment.  The change has no effect
for AdvSIMD, since there is only one such segment.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-32-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:12 +01:00
Richard Henderson
2cc99919a8 target/arm: Pass index to AdvSIMD FCMLA (indexed)
For aa64 advsimd, we had been passing the pre-indexed vector.
However, sve applies the index to each 128-bit segment, so we
need to pass in the index separately.

For aa32 advsimd, the fp32 operation always has index 0, but
we failed to interpret the fp16 index correctly.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180627043328.11531-31-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:12 +01:00
Richard Henderson
05f48bab30 target/arm: Implement SVE fp complex multiply add
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-30-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:12 +01:00
Richard Henderson
76a9d9cdc4 target/arm: Implement SVE floating-point complex add
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-29-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:11 +01:00
Richard Henderson
a21035822e target/arm: Implement SVE MOVPRFX
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-28-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:11 +01:00
Richard Henderson
ec5b375bb5 target/arm: Implement SVE floating-point unary operations
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-27-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:11 +01:00
Richard Henderson
cda3c75322 target/arm: Implement SVE floating-point round to integral value
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-26-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:10 +01:00
Richard Henderson
df4de1affc target/arm: Implement SVE floating-point convert to integer
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-25-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:10 +01:00
Richard Henderson
46d33d1e3c target/arm: Implement SVE floating-point convert precision
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-24-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:10 +01:00
Richard Henderson
67fcd9ad35 target/arm: Implement SVE floating-point trig multiply-add coefficient
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180627043328.11531-23-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-06-29 15:11:09 +01:00