Commit Graph

1093 Commits

Pavel Dovgalyuk
9e9b10c649 icount: remove unnecessary gen_io_end calls
The prior patch resets the can_do_io flag at TB entry, so there is no
need to reset it again at the end of the block.
This patch removes the redundant gen_io_end calls.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <156404429499.18669.13404064982854123855.stgit@pasha-Precision-3630-Tower>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
2019-08-20 17:26:22 +02:00
Peter Maydell
afd7605393 target-arm queue:

Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20190816' into staging

target-arm queue:
 * target/arm: generate a custom MIDR for -cpu max
 * hw/misc/zynq_slcr: refactor to use standard register definition
 * Set ENET_BD_BDU in I.MX FEC controller
 * target/arm: Fix routing of singlestep exceptions
 * refactor a32/t32 decoder handling of PC
 * minor optimisations/cleanups of some a32/t32 codegen
 * target/arm/cpu64: Ensure kvm really supports aarch64=off
 * target/arm/cpu: Ensure we can use the pmu with kvm
 * target/arm: Minor cleanups preparatory to KVM SVE support

# gpg: Signature made Fri 16 Aug 2019 14:15:55 BST
# gpg:                using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE
# gpg:                issuer "peter.maydell@linaro.org"
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@gmail.com>" [ultimate]
# gpg:                 aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [ultimate]
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83  15CF 3C25 25ED 1436 0CDE

* remotes/pmaydell/tags/pull-target-arm-20190816: (29 commits)
  target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word
  target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
  target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
  target/arm: Use ror32 instead of open-coding the operation
  target/arm: Remove redundant shift tests
  target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
  target/arm: Use tcg_gen_extract_i32 for shifter_out_im
  target/arm/kvm64: Move the get/put of fpsimd registers out
  target/arm/kvm64: Fix error returns
  target/arm/cpu: Use div-round-up to determine predicate register array size
  target/arm/helper: zcr: Add build bug next to value range assumption
  target/arm/cpu: Ensure we can use the pmu with kvm
  target/arm/cpu64: Ensure kvm really supports aarch64=off
  target/arm: Remove helper_double_saturate
  target/arm: Use unallocated_encoding for aarch32
  target/arm: Remove offset argument to gen_exception_bkpt_insn
  target/arm: Replace offset with pc in gen_exception_internal_insn
  target/arm: Replace offset with pc in gen_exception_insn
  target/arm: Replace s->pc with s->base.pc_next
  target/arm: Remove redundant s->pc & ~1
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 17:21:40 +01:00
Richard Henderson
664b7e3b97 target/arm: Use tcg_gen_extrh_i64_i32 to extract the high word
A separate shift followed by an extract of the low word results in one
extra insn on hosts like RISC-V, MIPS, and Sparc.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-8-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:53 +01:00
Richard Henderson
5f8cd06ebc target/arm: Simplify SMMLA, SMMLAR, SMMLS, SMMLSR
All of the inputs to these instructions are 32 bits.  Rather than
extend each input to 64-bits and then extract the high 32-bits of
the output, use tcg_gen_muls2_i32 and other 32-bit generator functions.
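
A plain-C model of the idea, assuming nothing beyond standard C (the
TCG call itself, tcg_gen_muls2_i32, produces both product halves in a
single op; the function names below are illustrative):

  #include <stdint.h>

  /* muls2: a 32x32 -> 64 signed multiply returning both halves at once. */
  static void muls2_i32(int32_t a, int32_t b, uint32_t *lo, uint32_t *hi)
  {
      int64_t p = (int64_t)a * b;
      *lo = (uint32_t)p;
      *hi = (uint32_t)((uint64_t)p >> 32);
  }

  /* SMMLA-style accumulate: only the high half of the product is needed,
   * so the inputs never have to be widened to 64 bits. */
  static int32_t smmla_model(int32_t acc, int32_t a, int32_t b)
  {
      uint32_t lo, hi;
      muls2_i32(a, b, &lo, &hi);
      return (int32_t)((uint32_t)acc + hi);   /* wrapping add, as on the CPU */
  }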

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-7-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:53 +01:00
Richard Henderson
adefba76e8 target/arm: Use tcg_gen_rotri_i32 for gen_swap_half
Rotate is the more compact and obvious way to swap 16-bit
elements of a 32-bit word.
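
A plain-C illustration of the point (not the TCG generator code): one
rotate by 16 swaps the halves that would otherwise need shifts, masks
and an OR:

  #include <stdint.h>

  /* Rotating a 32-bit word by 16 exchanges its two 16-bit halves. */
  static uint32_t swap_half(uint32_t x)
  {
      return (x << 16) | (x >> 16);   /* e.g. 0x1234abcd -> 0xabcd1234 */
  }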

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-6-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:53 +01:00
Richard Henderson
dd861b3f29 target/arm: Use ror32 instead of open-coding the operation
The helper function is more documentary, and also already
handles the case of rotate by zero.
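
For reference, ror32() in include/qemu/bitops.h is essentially the
following; the "-shift & 31" trick is what keeps a rotate count of zero
well defined, unlike the open-coded "(x >> n) | (x << (32 - n))":

  #include <stdint.h>

  static inline uint32_t ror32(uint32_t word, unsigned int shift)
  {
      /* "-shift & 31" maps shift 0 to 0, avoiding the undefined "<< 32". */
      return (word >> (shift & 31)) | (word << (-shift & 31));
  }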

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-5-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:53 +01:00
Richard Henderson
464eaa9571 target/arm: Remove redundant shift tests
The immediate shift generator functions already test for,
and eliminate, the case of a shift by zero.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-4-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:52 +01:00
Richard Henderson
d1f8755fc9 target/arm: Use tcg_gen_deposit_i32 for PKHBT, PKHTB
Use deposit as the composite operation to merge the
bits from the two inputs.
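
A sketch of the deposit semantics, mirroring deposit32() in
include/qemu/bitops.h; the PKHBT mapping at the end is an illustration,
not the generated TCG code:

  #include <stdint.h>

  /* Replace 'length' bits of 'value' starting at bit 'start' with 'fieldval'. */
  static uint32_t deposit32(uint32_t value, int start, int length,
                            uint32_t fieldval)
  {
      uint32_t mask = (~0U >> (32 - length)) << start;
      return (value & ~mask) | ((fieldval << start) & mask);
  }

  /* PKHBT packs the bottom half of Rn with the top half of the shifted Rm: */
  static uint32_t pkhbt_model(uint32_t rn, uint32_t shifted_rm)
  {
      return deposit32(rn, 16, 16, shifted_rm >> 16);
  }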

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:52 +01:00
Richard Henderson
191f4bfe8d target/arm: Use tcg_gen_extract_i32 for shifter_out_im
Extract is a compact combination of shift + and.
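
A sketch of the semantics, mirroring extract32() in
include/qemu/bitops.h (treat the exact definition as an approximation):

  #include <stdint.h>

  /* Return 'length' bits of 'value' starting at bit 'start':
   * one operation standing in for a shift followed by an AND. */
  static uint32_t extract32(uint32_t value, int start, int length)
  {
      return (value >> start) & (~0U >> (32 - length));
  }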

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190808202616.13782-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Andrew Jones
30e3537fa5 target/arm/kvm64: Move the get/put of fpsimd registers out
Move the getting/putting of the fpsimd registers out of
kvm_arch_get/put_registers() into their own helper functions
to prepare for alternatively getting/putting SVE registers.

No functional change.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Andrew Jones
4ed9d9f894 target/arm/kvm64: Fix error returns
A couple of return -EINVAL statements were missing their '-'.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Andrew Jones
46417784d2 target/arm/cpu: Use div-round-up to determine predicate register array size
Unless we're guaranteed to always increase ARM_MAX_VQ by a multiple of
four, we should use DIV_ROUND_UP to ensure we get an appropriate array
size.
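
For reference, DIV_ROUND_UP in include/qemu/osdep.h is the usual
ceiling-division macro; a hypothetical ARM_MAX_VQ of 6 shows why plain
division is not enough:

  #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

  /* hypothetical ARM_MAX_VQ of 6, one 64-bit predicate word per 4 VQ:
   *   6 / 4              == 1   (one slot short)
   *   DIV_ROUND_UP(6, 4) == 2   (rounds up, enough slots)
   */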

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Andrew Jones
7b351d9870 target/arm/helper: zcr: Add build bug next to value range assumption
The current implementation of ZCR_ELx matches the architecture, only
implementing the lower four bits, with the rest RAZ/WI. This puts
a strict limit on ARM_MAX_VQ of 16. Make sure we don't let ARM_MAX_VQ
grow without a corresponding update here.
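
A sketch of the kind of compile-time guard this adds
(QEMU_BUILD_BUG_ON is QEMU's static-assert wrapper; the exact
expression and placement are per the commit itself):

  /* ZCR_ELx.LEN is only four bits, capping the vector length at 16
   * quadwords; fail the build if ARM_MAX_VQ ever outgrows that. */
  QEMU_BUILD_BUG_ON(ARM_MAX_VQ > 16);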

Suggested-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Andrew Jones
ae502508f8 target/arm/cpu: Ensure we can use the pmu with kvm
We first convert the pmu property from a static property to one with
its own accessors. Then we use the set accessor to check if the PMU is
supported when using KVM. Indeed a 32-bit KVM host does not support
the PMU, so this check will catch an attempt to use it at property-set
time.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Andrew Jones
b9e758f0b5 target/arm/cpu64: Ensure kvm really supports aarch64=off
If -cpu <cpu>,aarch64=off is used then KVM must also be used, and both
it and the host must support running the vcpu in 32-bit mode. If
-cpu <cpu>,aarch64=on is used, then it doesn't matter whether KVM is
enabled or not.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:51 +01:00
Richard Henderson
640581a06d target/arm: Remove helper_double_saturate
Replace x = double_saturate(y) with x = add_saturate(y, y).
There is no need for a separate more specialized helper.
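
A plain-C model of the argument; the names and signature here are
illustrative, not QEMU's helper interface:

  #include <stdint.h>
  #include <stdbool.h>

  static int32_t add_saturate(int32_t a, int32_t b, bool *sat)
  {
      int64_t r = (int64_t)a + (int64_t)b;
      if (r > INT32_MAX) {
          *sat = true;
          return INT32_MAX;
      }
      if (r < INT32_MIN) {
          *sat = true;
          return INT32_MIN;
      }
      return (int32_t)r;
  }

  /* double_saturate(y) is simply add_saturate(y, y, &sat). */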

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:50 +01:00
Richard Henderson
3cb3663715 target/arm: Use unallocated_encoding for aarch32
Promote this function from aarch64 to fully general use.
Use it to unify the code sequences for generating illegal
opcode exceptions.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-11-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:50 +01:00
Richard Henderson
06bcbda3f6 target/arm: Remove offset argument to gen_exception_bkpt_insn
Unlike the other more generic gen_exception{,_internal}_insn
interfaces, breakpoints always refer to the current instruction.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-10-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:50 +01:00
Richard Henderson
aee828e754 target/arm: Replace offset with pc in gen_exception_internal_insn
The offset is variable depending on the instruction set.
Passing in the actual value is clearer in intent.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-9-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:50 +01:00
Richard Henderson
a767fac802 target/arm: Replace offset with pc in gen_exception_insn
The offset is variable depending on the instruction set, whereas
we have stored values for the current pc and the next pc.  Passing
in the actual value is clearer in intent.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:50 +01:00
Richard Henderson
a04159166b target/arm: Replace s->pc with s->base.pc_next
We must update s->base.pc_next when we return from the translate_insn
hook to the main translator loop.  By incrementing s->base.pc_next
immediately after reading the insn word, "pc_next" contains the address
of the next instruction throughout translation.

All remaining uses of s->pc are referencing the address of the next insn,
so this is now a simple global replacement.  Remove the "s->pc" field.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
4818c3743b target/arm: Remove redundant s->pc & ~1
The thumb bit has already been removed from s->pc, so it is always even.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
16e0d8234e target/arm: Introduce add_reg_for_lit
Provide a common routine for the places that require ALIGN(PC, 4)
as the base address as opposed to plain PC.  The two are always
the same for A32, but the difference is meaningful for thumb mode.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
fdbcf6329d target/arm: Introduce read_pc
We currently have 3 different ways of computing the architectural
value of "PC" as seen in the ARM ARM.

The value of s->pc has been incremented past the current insn,
but that is all.  Thus for a32, PC = s->pc + 4; for t32, PC = s->pc;
for t16, PC = s->pc + 2.  These differing computations make it
impossible at present to unify the various code paths.

With the newly introduced s->pc_curr, we can compute the correct
value for all cases, using the formula given in the ARM ARM.

This changes the behaviour for load_reg() and load_reg_var()
when called with reg==15 from a 32-bit Thumb instruction:
previously they would have returned the incorrect value
of pc_curr + 6, and now they will return the architecturally
correct value of PC, which is pc_curr + 4. This will not
affect well-behaved guest software, because all of the places
we call these functions from T32 code are instructions where
using r15 is UNPREDICTABLE. Using the architectural PC value
here is more consistent with the T16 and A32 behaviour.
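
The resulting helper is presumably along these lines (a sketch against
the DisasContext fields discussed above; the +8/+4 offsets are the
ARM ARM's definition of the PC value):

  /* Architectural value of PC: the current instruction address plus 8
   * for A32, plus 4 for Thumb (both T16 and T32). Assumes the
   * translator's DisasContext with pc_curr and thumb fields. */
  static inline uint32_t read_pc(DisasContext *s)
  {
      return s->pc_curr + (s->thumb ? 4 : 8);
  }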

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-4-richard.henderson@linaro.org
[PMM: added commit message note about UNPREDICTABLE T32 cases]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
43722a6d4f target/arm: Introduce pc_curr
Add a new field to retain the address of the instruction currently
being translated.  The 32-bit uses are all within subroutines used
by a32 and t32.  This will become less obvious when t16 support is
merged with a32+t32, and having a clear definition will help.

Convert aarch64 as well for consistency.  Note that there is one
instance of a pre-assert fprintf that used the wrong value for the
address of the current instruction.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Richard Henderson
331b1ca616 target/arm: Pass in pc to thumb_insn_is_16bit
This function is used in two different contexts, and it will be
clearer if the function is given the address to which it applies.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190807045335.1361-2-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:49 +01:00
Peter Maydell
8bd587c106 target/arm: Fix routing of singlestep exceptions
When generating an architectural single-step exception we were
routing it to the "default exception level", which is to say
the same exception level we execute at except that EL0 exceptions
go to EL1. This is incorrect because the debug exception level
can be configured by the guest for situations such as single
stepping of EL0 and EL1 code by EL2.

We have to track the target debug exception level in the TB
flags, because it is dependent on CPU state like HCR_EL2.TGE
and MDCR_EL2.TDE. (That we were previously calling the
arm_debug_target_el() function to determine dc->ss_same_el
is itself a bug, though one that would only have manifested
as incorrect syndrome information.) Since we are out of TB
flag bits unless we want to expand into the cs_base field,
we share some bits with the M-profile only HANDLER and
STACKCHECK bits, since only A-profile has this singlestep.

Fixes: https://bugs.launchpad.net/qemu/+bug/1838913
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-3-peter.maydell@linaro.org
2019-08-16 14:02:49 +01:00
Peter Maydell
c1d5f50f09 target/arm: Factor out 'generate singlestep exception' function
Factor out code to 'generate a singlestep exception', which is
currently repeated in four places.

To do this we need to also pull the identical copies of the
gen_exception() function out of translate-a64.c and translate.c
into translate.h.

(There is a bug in the code: we're taking the exception to the wrong
target EL.  This will be simpler to fix if there's only one place to
do it.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20190805130952.4415-2-peter.maydell@linaro.org
2019-08-16 14:02:48 +01:00
Alex Bennée
2bd5f41c00 target/arm: generate a custom MIDR for -cpu max
While most features are now detected by probing the ID_* registers
kernels can (and do) use MIDR_EL1 for working out of they have to
apply errata. This can trip up warnings in the kernel as it tries to
work out if it should apply workarounds to features that don't
actually exist in the reported CPU type.

Avoid this problem by synthesising our own MIDR value.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190726113950.7499-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-16 14:02:48 +01:00
Markus Armbruster
54d31236b9 sysemu: Split sysemu/runstate.h off sysemu/sysemu.h
sysemu/sysemu.h is a rather unfocused dumping ground for stuff related
to the system emulator.  Evidence:

* It's included widely: in my "build everything" tree, changing
  sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600
  objects (not counting tests and objects that don't depend on
  qemu/osdep.h, down from 5400 due to the previous two commits).

* It pulls in more than a dozen additional headers.

Split stuff related to run state management into its own header
sysemu/runstate.h.

Touching sysemu/sysemu.h now recompiles some 850 objects.  qemu/uuid.h
also drops from 1100 to 850, and qapi/qapi-types-run-state.h from 4400
to 4200.  Touching new sysemu/runstate.h recompiles some 500 objects.

Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also
add qemu/main-loop.h.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-30-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[Unbreak OS-X build]
2019-08-16 13:37:36 +02:00
Markus Armbruster
d5938f29fe Clean up inclusion of sysemu/sysemu.h
In my "build everything" tree, changing sysemu/sysemu.h triggers a
recompile of some 5400 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h).

Almost a third of its inclusions are actually superfluous.  Delete
them.  Downgrade two more to qapi/qapi-types-run-state.h, and move one
from char/serial.h to char/serial.c.

hw/semihosting/config.c, monitor/monitor.c, qdev-monitor.c, and
stubs/semihost.c define variables declared in sysemu/sysemu.h without
including it.  The compiler is cool with that, but include it anyway.

This doesn't reduce actual use much, as it's still included into
widely included headers.  The next commit will tackle that.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-27-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2019-08-16 13:31:53 +02:00
Markus Armbruster
12e9493df9 Include hw/boards.h a bit less
hw/boards.h pulls in almost 60 headers.  The less we include it into
headers, the better.  As a first step, drop superfluous inclusions,
and downgrade some more to what's actually needed.  Gets rid of just
one inclusion into a header.

Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-23-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
2019-08-16 13:31:53 +02:00
Markus Armbruster
db72581598 Include qemu/main-loop.h less
In my "build everything" tree, changing qemu/main-loop.h triggers a
recompile of some 5600 out of 6600 objects (not counting tests and
objects that don't depend on qemu/osdep.h).  It includes block/aio.h,
which in turn includes qemu/event_notifier.h, qemu/notify.h,
qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
qemu/thread.h, qemu/timer.h, and a few more.

Include qemu/main-loop.h only where it's needed.  Touching it now
recompiles only some 1700 objects.  For block/aio.h and
qemu/event_notifier.h, these numbers drop from 5600 to 2800.  For the
others, they shrink only slightly.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190812052359.30071-21-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16 13:31:52 +02:00
Markus Armbruster
650d103d3e Include hw/hw.h exactly where needed
In my "build everything" tree, changing hw/hw.h triggers a recompile
of some 2600 out of 6600 objects (not counting tests and objects that
don't depend on qemu/osdep.h).

The previous commits have left only the declaration of hw_error() in
hw/hw.h.  This permits dropping most of its inclusions.  Touching it
now recompiles less than 200 objects.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-19-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16 13:31:52 +02:00
Markus Armbruster
8a9358cc6e migration: Move the VMStateDescription typedef to typedefs.h
We declare incomplete struct VMStateDescription in a couple of places
so we don't have to include migration/vmstate.h for the typedef.
That's fine with me.  However, the next commit will drop
migration/vmstate.h from a massive number of compiles.  Move the
typedef to qemu/typedefs.h now, so I don't have to insert struct in
front of VMStateDescription all over the place then.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-15-armbru@redhat.com>
2019-08-16 13:31:52 +02:00
Markus Armbruster
64552b6be4 Include hw/irq.h a lot less
In my "build everything" tree, changing hw/irq.h triggers a recompile
of some 5400 out of 6600 objects (not counting tests and objects that
don't depend on qemu/osdep.h).

hw/hw.h supposedly includes it for convenience.  Several other headers
include it just to get qemu_irq and/or qemu_irq_handler.

Move the qemu_irq and qemu_irq_handler typedefs from hw/irq.h to
qemu/typedefs.h, and then include hw/irq.h only where it's still
needed.  Touching it now recompiles only some 500 objects.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-13-armbru@redhat.com>
2019-08-16 13:31:52 +02:00
Peter Maydell
02ac2f7f61 target/arm: Avoid bogus NSACR traps on M-profile without Security Extension
In Arm v8.0 M-profile CPUs without the Security Extension and also in
v7M CPUs, there is no NSACR register. However, the code we have to handle
the FPU does not always check whether the ARM_FEATURE_M_SECURITY bit
is set before testing whether env->v7m.nsacr permits access to the
FPU. This means that for a CPU with an FPU but without the Security
Extension we would always take a bogus fault when trying to stack
the FPU registers on an exception entry.

We could fix this by adding extra feature bit checks for all uses,
but it is simpler to just make the internal value of nsacr 0xcff
("all non-secure accesses allowed"), since this is not guest
visible when the Security Extension is not present. This allows
us to continue to follow the Arm ARM pseudocode which takes a
similar approach. (In particular, in the v8.1 Arm ARM the register
is documented as reading as 0xcff in this configuration.)

Fixes: https://bugs.launchpad.net/qemu/+bug/1838475
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Damien Hedde <damien.hedde@greensocs.com>
Message-id: 20190801105742.20036-1-peter.maydell@linaro.org
2019-08-02 17:18:16 +01:00
Peter Maydell
987a232242 target/arm: Deliver BKPT/BRK exceptions to correct exception level
Most Arm architectural debug exceptions (eg watchpoints) are ignored
if the configured "debug exception level" is below the current
exception level (so for example EL1 can't arrange to get debug exceptions
for EL2 execution). Exceptions generated by the BRK or BKPT instructions
are a special case -- they must always cause an exception, so if
we're executing above the debug exception level then we
must take them to the current exception level.

This fixes a bug where executing BRK at EL2 could result in an
exception being taken at EL1 (which is strictly forbidden by the
architecture).

Fixes: https://bugs.launchpad.net/qemu/+bug/1838277
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190730132522.27086-1-peter.maydell@linaro.org
2019-07-30 17:05:22 +01:00
Peter Maydell
8f4821d77e target/arm: Limit ID register assertions to TCG
In arm_cpu_realizefn() we make several assertions about the values of
guest ID registers:
 * if the CPU provides AArch32 v7VE or better it must advertise the
   ARM_DIV feature
 * if the CPU provides AArch32 A-profile v6 or better it must
   advertise the Jazelle feature

These are essentially consistency checks that our ID register
specifications in cpu.c didn't accidentally miss out a feature,
because increasingly the TCG emulation gates features on the values
in ID registers rather than using old-style checks of ARM_FEATURE_FOO
bits.

Unfortunately, these asserts can cause problems if we're running KVM,
because in that case we don't control the values of the ID registers
-- we read them from the host kernel.  In particular, if the host
kernel is older than 4.15 then it doesn't expose the ID registers via
the KVM_GET_ONE_REG ioctl, and we set up dummy values for some
registers and leave the rest at zero.  (See the comment in
target/arm/kvm64.c kvm_arm_get_host_cpu_features().) This set of
dummy values is not sufficient to pass our assertions, and so on
those kernels running an AArch32 guest on AArch64 will assert.

We could provide a more sophisticated set of dummy ID registers in
this case, but that still leaves the possibility of a host CPU which
reports bogus ID register values that would cause us to assert.  It's
more robust to only do these ID register checks if we're using TCG,
as that is the only case where this is truly a QEMU code bug.

Reported-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190718125928.20147-1-peter.maydell@linaro.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1830864
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-22 14:07:39 +01:00
Philippe Mathieu-Daudé
9bbb4ef991 target/arm: Add missing break statement for Hypervisor Trap Exception
Reported by GCC9 when building with -Wimplicit-fallthrough=2:

  target/arm/helper.c: In function ‘arm_cpu_do_interrupt_aarch32_hyp’:
  target/arm/helper.c:7958:14: error: this statement may fall through [-Werror=implicit-fallthrough=]
   7958 |         addr = 0x14;
        |         ~~~~~^~~~~~
  target/arm/helper.c:7959:5: note: here
   7959 |     default:
        |     ^~~~~~~
  cc1: all warnings being treated as errors
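
The fix is the standard one for this diagnostic: terminate the case
before the default label. A sketch of the shape (the case label and
surrounding code are assumptions; see the commit for the real context
in arm_cpu_do_interrupt_aarch32_hyp()):

  case EXCP_HYP_TRAP:
      addr = 0x14;
      break;          /* the missing break: no longer falls into 'default' */
  default:
      /* ... unhandled-exception path ... */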

Fixes: b9bc21ff9f
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reported-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190719111451.12406-1-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-22 14:07:38 +01:00
Peter Maydell
51c9122e92 target/arm: NS BusFault on vector table fetch escalates to NS HardFault
In the M-profile architecture, when we do a vector table fetch and it
fails, we need to report a HardFault.  Whether this is a Secure HF or
a NonSecure HF depends on several things.  If AIRCR.BFHFNMINS is 0
then HF is always Secure, because there is no NonSecure HardFault.
Otherwise, the answer depends on whether the 'underlying exception'
(MemManage, BusFault, SecureFault) targets Secure or NonSecure.  (In
the pseudocode, this is handled in the Vector() function: the final
exc.isSecure is calculated by looking at the exc.isSecure from the
exception returned from the memory access, not the isSecure input
argument.)

We weren't doing this correctly, because we were looking at
the target security domain of the exception we were trying to
load the vector table entry for. This produces errors of two kinds:
 * a load from the NS vector table which hits the "NS access
   to S memory" SecureFault should end up as a Secure HardFault,
   but we were raising an NS HardFault
 * a load from the S vector table which causes a BusFault
   should raise an NS HardFault if BFHFNMINS == 1 (because
   in that case all BusFaults are NonSecure), but we were raising
   a Secure HardFault

Correct the logic.

We also fix a comment error where we claimed that we might
be escalating MemManage to HardFault, and forgot about SecureFault.
(Vector loads can never hit MPU access faults, because they're
always aligned and always use the default address map.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190705094823.28905-1-peter.maydell@linaro.org
2019-07-15 14:17:04 +01:00
Peter Maydell
cb7cef8b32 target/arm: Set VFP-related MVFR0 fields for arm926 and arm1026
The ARMv5 architecture didn't specify detailed per-feature ID
registers. Now that we're using the MVFR0 register fields to
gate the existence of VFP instructions, we need to set up
the correct values in the cpu->isar structure so that we still
provide an FPU to the guest.

This fixes a regression in the arm926 and arm1026 CPUs, which
are the only ones that both have VFP and are ARMv5 or earlier.
This regression was introduced by the VFP refactoring, and more
specifically by commits 1120827fa1 and 266bd25c48,
which accidentally disabled VFP short-vector support and
double-precision support on these CPUs.

Fixes: 1120827fa1
Fixes: 266bd25c48
Fixes: https://bugs.launchpad.net/qemu/+bug/1836192
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Christophe Lyon <christophe.lyon@linaro.org>
Message-id: 20190711131241.22231-1-peter.maydell@linaro.org
2019-07-15 14:17:04 +01:00
Alex Bennée
45b1a243b8 target/arm: report ARMv8-A FP support for AArch32 -cpu max
When we converted to using feature bits in 602f6e42cf we missed the
fact that (dp && arm_dc_feature(s, ARM_FEATURE_V8)) was supported for
-cpu max configurations. This caused a regression in the GCC test
suite. Fix this by setting the appropriate bits in mvfr1.FPHP to
report ARMv8-A with FP support (but not ARMv8.2-FP16).

Fixes: https://bugs.launchpad.net/qemu/+bug/1836078
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190711103737.10017-1-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-15 14:17:02 +01:00
Richard Henderson
08b97f7ff2 tcg: Introduce set/clear_helper_retaddr
At present we have a potential error in that helper_retaddr contains
data for handle_cpu_signal, but we have not ensured that those stores
will be scheduled properly before the operation that may fault.

It might be that these races are not in practice observable, due to
our use of -fno-strict-aliasing, but better safe than sorry.

Adjust all of the setters of helper_retaddr.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-07-14 12:19:00 +02:00
Philippe Mathieu-Daudé
85795187f4 target/arm/vfp_helper: Call set_fpscr_to_host before updating to FPSCR
In commit e9d652824b we extracted the vfp_set_fpscr_to_host()
function but failed to call it in the correct place: we call
it after xregs[ARM_VFP_FPSCR] is modified.

Fix by calling this function before we update FPSCR.

Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Tested-by: Laurent Desnogues <laurent.desnogues@gmail.com>
Message-id: 20190705124318.1075-1-philmd@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-08 14:11:31 +01:00
Richard Henderson
6a02a73211 target/arm: Fix sve_zcr_len_for_el
Off-by-one error in the EL2 and EL3 tests.  Remove the test
against EL3 entirely, since it must always be true.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190702104732.31154-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-07-08 14:11:30 +01:00
Like Xu
cc7d44c2e0 hw/arm: Replace global smp variables with machine smp properties
The global smp variables in arm are replaced with smp machine properties.
The init_cpus() and *_create_rpu() are refactored to pass MachineState.

Where a global is used widely in a function, a local variable of the
same name is introduced in the declaration phase; where it is used
only once, it is replaced on the spot. No semantic changes.

Signed-off-by: Like Xu <like.xu@linux.intel.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190518205428.90532-9-like.xu@linux.intel.com>
[ehabkost: Fix hw/arm/sbsa-ref.c and hw/arm/aspeed.c]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2019-07-05 17:08:03 -03:00
Peter Maydell
89a11ff756 target/arm: Correct VMOV_imm_dp handling of short vectors
Coverity points out (CID 1402195) that the loop in trans_VMOV_imm_dp()
that iterates over the destination registers in a short-vector VMOV
accidentally throws away the returned updated register number
from vfp_advance_dreg(). Add the missing assignment. (We got this
correct in trans_VMOV_imm_sp().)
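
The shape of the bug and of the fix, as described above (illustrative,
not the literal diff):

  /* before: the advanced register number is computed and then dropped */
  vfp_advance_dreg(vd, delta_d);

  /* after: keep the returned value, as trans_VMOV_imm_sp() already does */
  vd = vfp_advance_dreg(vd, delta_d);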

Fixes: 18cf951af9
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190702105115.9465-1-peter.maydell@linaro.org
2019-07-04 17:25:30 +01:00
Peter Maydell
5529de1e55 target/arm: Execute Thumb instructions when their condbits are 0xf
Thumb instructions in an IT block are set up to be conditionally
executed depending on a set of condition bits encoded into the IT
bits of the CPSR/XPSR.  The architecture specifies that if the
condition bits are 0b1111 this means "always execute" (like 0b1110),
not "never execute"; we were treating it as "never execute".  (See
the ConditionHolds() pseudocode in both the A-profile and M-profile
Arm ARM.)

This is a bit of an obscure corner case, because the only legal
way to get to an 0b1111 set of condbits is to do an exception
return which sets the XPSR/CPSR up that way. An IT instruction
which encodes a condition sequence that would include an 0b1111 is
UNPREDICTABLE, and for v8A the CONSTRAINED UNPREDICTABLE choices
for such an IT insn are to NOP, UNDEF, or treat 0b1111 like 0b1110.
Add a comment noting that we take the latter option.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190617175317.27557-7-peter.maydell@linaro.org
2019-07-04 17:25:30 +01:00
Peter Maydell
2884fbb604 target/arm: Use _ra versions of cpu_stl_data() in v7M helpers
In the various helper functions for v7M/v8M instructions, use
the _ra versions of cpu_stl_data() and friends. Otherwise we
may get wrong behaviour or an assert() due to not being able
to locate the TB if there is an exception on the memory access
or if it performs an IO operation when in icount mode.
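
The pattern being applied, sketched with generic names (the _ra
variants and GETPC() are the QEMU APIs named above; the call site
itself is illustrative):

  /* before: a fault here cannot be traced back to the generating TB */
  cpu_stl_data(env, addr, value);

  /* after: pass the host return address so the fault path can locate the TB */
  cpu_stl_data_ra(env, addr, value, GETPC());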

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190617175317.27557-5-peter.maydell@linaro.org
2019-07-04 17:25:30 +01:00