Commit Graph

106700 Commits

Author SHA1 Message Date
Arseny Solokha
1945606031 kvm/ppc/mpic: drop unused IRQ_testbit
Drop an unused static procedure which doesn't have callers within its
translation unit. It had already been removed independently in QEMU[1]
from the OpenPIC implementation borrowed from the kernel.

[1] https://lists.gnu.org/archive/html/qemu-devel/2014-06/msg01812.html

Signed-off-by: Arseny Solokha <asolokha@kb.kras.ru>
Cc: Alexander Graf <agraf@suse.de>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1424768706-23150-3-git-send-email-asolokha@kb.kras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:58 +02:00
Eugene Korenevsky
92d71bc695 KVM: nVMX: remove unnecessary double caching of MAXPHYADDR
After the speed-up of cpuid_maxphyaddr() it can be called frequently:
instead of a heavyweight enumeration of CPUID entries it returns a cached,
pre-computed value. It is also inlined now. So caching its result separately
has become unnecessary and can be removed.

Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
Message-Id: <20150329205644.GA1258@gnote>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:58 +02:00
Eugene Korenevsky
9090422f1c KVM: nVMX: checks for address bits beyond MAXPHYADDR on VM-entry
On each VM-entry the CPU should check the following VMCS fields for zero
bits beyond the physical address width:
-  APIC-access address
-  virtual-APIC address
-  posted-interrupt descriptor address
This patch adds these checks, as required by the Intel SDM.

Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
Message-Id: <20150329205627.GA1244@gnote>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:57 +02:00
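
The checks described above amount to "page aligned and no bits set at or
above MAXPHYADDR". A minimal stand-alone sketch of such a predicate (the
helper name is hypothetical, not the function added by the patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* A VMCS physical-address field for these structures is valid only if it
     * is 4 KiB aligned and has no bits set at or above the CPU's physical
     * address width (MAXPHYADDR). */
    static bool vmcs_addr_valid(uint64_t addr, int maxphyaddr)
    {
        if (addr & 0xfffULL)                               /* must be page aligned */
            return false;
        if (maxphyaddr < 64 && (addr >> maxphyaddr) != 0)  /* bits >= MAXPHYADDR set */
            return false;
        return true;
    }
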
Eugene Korenevsky
5a4f55cde8 KVM: x86: cache maxphyaddr CPUID leaf in struct kvm_vcpu
cpuid_maxphyaddr(), which performs a lot of memory accesses, is called
extensively across KVM, especially in nVMX code.

This patch adds a cached value of maxphyaddr to vcpu.arch to reduce the
pressure on the CPU cache and simplify the code of cpuid_maxphyaddr()
callers. The cached value is initialized in kvm_arch_vcpu_init() and
reloaded every time CPUID is updated by userspace; these reloads occur
infrequently.

Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
Message-Id: <20150329205612.GA1223@gnote>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:56 +02:00
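
The caching pattern the patch describes can be sketched as follows (field
and helper names are illustrative; the width comes from CPUID leaf
0x80000008, EAX bits 7:0, when that leaf exists):

    #include <stdint.h>

    struct vcpu_arch {
        uint8_t maxphyaddr;                /* cached physical address width */
    };

    /* Recomputed only when userspace updates the CPUID tables. */
    static void update_cached_maxphyaddr(struct vcpu_arch *arch,
                                         uint32_t leaf_80000008_eax)
    {
        arch->maxphyaddr = leaf_80000008_eax & 0xff;
    }

    /* The hot path becomes a trivial, inlinable field read instead of a walk
     * over the vcpu's CPUID entries. */
    static inline int cpuid_maxphyaddr(const struct vcpu_arch *arch)
    {
        return arch->maxphyaddr;
    }
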
Radim Krčmář
80f0e95d1b KVM: vmx: pass error code with internal error #2
Exposing the on-stack error code with internal error is cheap and
potentially useful.

Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Message-Id: <1428001865-32280-1-git-send-email-rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:56 +02:00
Radim Krčmář
80f7fdb1c7 x86: vdso: fix pvclock races with task migration
If we were migrated right after __getcpu, but before reading the
migration_count, we wouldn't notice that we read the TSC of a different
VCPU, nor that KVM's bug made pvti invalid, as only the migration_count
on the source VCPU is increased.

Change the vdso instead of updating migration_count on the destination.

Cc: stable@vger.kernel.org
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Fixes: 0a4e6be9ca ("x86: kvm: Revert "remove sched notifier for cross-cpu migrations"")
Message-Id: <1428000263-11892-1-git-send-email-rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:55 +02:00
Paolo Bonzini
9c8fd1ba22 KVM: x86: optimize delivery of TSC deadline timer interrupt
The newly-added tracepoint shows the following results on
the tscdeadline_latency test:

        qemu-kvm-8387  [002]  6425.558974: kvm_vcpu_wakeup:      poll time 10407 ns
        qemu-kvm-8387  [002]  6425.558984: kvm_vcpu_wakeup:      poll time 0 ns
        qemu-kvm-8387  [002]  6425.561242: kvm_vcpu_wakeup:      poll time 10477 ns
        qemu-kvm-8387  [002]  6425.561251: kvm_vcpu_wakeup:      poll time 0 ns

and so on.  This is because we need to go through kvm_vcpu_block again
after the timer IRQ is injected.  Avoid it by polling once before
entering kvm_vcpu_block.

On my machine (Xeon E5 Sandy Bridge) this removes about 500 cycles (7%)
from the latency of the TSC deadline timer.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:54 +02:00
Paolo Bonzini
362c698f82 KVM: x86: extract blocking logic from __vcpu_run
Rename the old __vcpu_run to vcpu_run, and extract part of it to a new
function vcpu_block.

The next patch will add a new condition in vcpu_block; doing the extraction
now avoids extra indentation.

Reviewed-by: David Matlack <dmatlack@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:53 +02:00
Wanpeng Li
35fd68a38d kvm: x86: fix x86 eflags fixed bit
A guest can't be booted with ept=0; the following message is dumped:

If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an invalid
state for Intel VT. For example, the guest maybe running in big real mode
which is not supported on less recent Intel processors.

EAX=00000011 EBX=f000d2f6 ECX=00006cac EDX=000f8956
ESI=bffbdf62 EDI=00000000 EBP=00006c68 ESP=00006c68
EIP=0000d187 EFL=00000004 [-----P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =e000 000e0000 ffffffff 00809300 DPL=0 DS16 [-WA]
CS =f000 000f0000 ffffffff 00809b00 DPL=0 CS16 [-RA]
SS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
DS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
FS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
GS =0000 00000000 ffffffff 00809300 DPL=0 DS16 [-WA]
LDT=0000 00000000 0000ffff 00008200 DPL=0 LDT
TR =0000 00000000 0000ffff 00008b00 DPL=0 TSS32-busy
GDT=     000f6a80 00000037
IDT=     000f6abe 00000000
CR0=00000011 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=01 1e b8 6a 2e 0f 01 16 74 6a 0f 20 c0 66 83 c8 01 0f 22 c0 <66> ea 8f d1 0f 00 08 00 b8 10 00 00 00 8e d8 8e c0 8e d0 8e e0 8e e8 89 c8 ff e2 89 c1 b8X

x86 EFLAGS bit 1 is a fixed bit that must always be set, which means the
constant to use is 1 << 1 (0x2) rather than 1; this patch fixes that.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Message-Id: <1428473294-6633-1-git-send-email-wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-04-08 10:46:52 +02:00
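
The distinction boils down to a one-liner: the always-one EFLAGS bit is bit
1, so its mask is 1 << 1 (0x2), not the value 1. A tiny self-contained
illustration (the macro is redefined here only for the example; the
kernel's processor-flags.h provides X86_EFLAGS_FIXED):

    #include <assert.h>
    #include <stdint.h>

    #define X86_EFLAGS_FIXED (1u << 1)   /* bit 1, value 0x2 -- not the value 1 */

    int main(void)
    {
        uint32_t eflags = X86_EFLAGS_FIXED;  /* minimal architecturally valid EFLAGS */
        assert(eflags == 0x2);               /* bit 1 set */
        assert((eflags & 1u) == 0);          /* bit 0 (CF) is a different bit */
        return 0;
    }
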
Paolo Bonzini
7f22b45d66 Features and fixes for 4.1 (kvm/next)
1. Assorted changes
 1.1 allow more feature bits for the guest
 1.2 Store breaking event address on program interrupts
 
 2. Interrupt handling rework
 2.1 Fix copy_to_user while holding a spinlock (cc stable)
 2.2 Rework floating interrupts to follow the priorities
 2.3 Allow to inject all local interrupts via new ioctl
 2.4 allow to get/set the full local irq state, e.g. for migration
     and introspection
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.14 (GNU/Linux)
 
 iQIcBAABAgAGBQJVGvEUAAoJEBF7vIC1phx82tEP/3KrwsDRs+buiBqyv9k+qCFV
 v+R94gReBB5ggfbGfUYgBJMR2/4XQ+0jcZ55jfBCC4osOq6Juw/8HIj2nSgbQHmz
 F9Go0n8IqJ3DnqPTc0KYdFZ7kqDvMV5ME3XJrFiAHv1TUL9H/KpZArkcVIwD2NOo
 w01AVrCDY4bTajYqKShzGFymQl1K5vTGGvgxhh4kAHct4Nt5N5HFmyROm0RrsFZx
 Sycx4t177O7zhCN2tv5Zy8iWaEvzHAESoXkhZ2cJ6t+FXii2Eov5IgyyfYRXBfbm
 YACyvlFD087UdFGTt85ggPVS/S/5hn9xXmVHuIimHeyZU7CXCN5vYPcn+ZyksYr5
 uA8+/2OPAgcaeDa2f7nCjl8jmcLR3hkQ0n/urA+pPYAZANJoFDfiGOr/kVk6aKff
 JTGSFUjNK891/IGEsdrSk2p64U5xMd8LFa3Il++kZT91gc2nrZOHNz5FGlXlkLdJ
 sADeNFWhoprEt/2P4aX6W2j26L8G874XkldDSjrS41U8L55+IiEm09r8oAWgfc5A
 pryeDaN4nSjFC+HOtlPkcVkAcsswiI6nHIm3+/XFetCq+v4pnVKFMHWsTeEjiQgQ
 H5aV9mfEKTJaCPrAJMsj8ZsKq0usG+BeRNqpIvxPAQB8fyl3jw9iu+RHeY1xWsTg
 BRHB/+CGYIxDu4XdRexv
 =Rrx5
 -----END PGP SIGNATURE-----

Merge tag 'kvm-s390-next-20150331' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

Features and fixes for 4.1 (kvm/next)

1. Assorted changes
1.1 allow more feature bits for the guest
1.2 Store breaking event address on program interrupts

2. Interrupt handling rework
2.1 Fix copy_to_user while holding a spinlock (cc stable)
2.2 Rework floating interrupts to follow the priorities
2.3 Allow to inject all local interrupts via new ioctl
2.4 allow to get/set the full local irq state, e.g. for migration
    and introspection
2015-04-07 18:10:03 +02:00
Paolo Bonzini
bf0fb67cf9 KVM/ARM changes for v4.1:
- fixes for live migration
 - irqfd support
 - kvm-io-bus & vgic rework to enable ioeventfd
 - page ageing for stage-2 translation
 - various cleanups
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJVHQ0kAAoJECPQ0LrRPXpDHKQQALjw6STaZd7n20OFopNgHd4P
 qVeWYEKBxnsiSvL4p3IOSlZlEul+7x08aZqtyxWQRQcDT4ggTI+3FKKfc+8yeRpH
 WV6YJP0bGqz7039PyMLuIgs48xkSZtntePw69hPJfHZh4C1RBlP5T2SfE8mU8VZX
 fWToiU3W12QfKnmN7JFgxZopnGhrYCrG0EexdTDziAZu0GEMlDrO4wnyTR60WCvT
 4TEF73R0kpAz4yplKuhcDHuxIG7VFhQ4z7b09M1JtR0gQ3wUvfbD3Wqqi49SwHkv
 NQOStcyLsIlDosSRcLXNCwb3IxjObXTBcAxnzgm2Aoc1xMMZX1ZPQNNs6zFZzycb
 2c6QMiQ35zm7ellbvrG+bT+BP86JYWcAtHjWcaUFgqSJjb8MtqcMtsCea/DURhqx
 /kictqbPYBBwKW6SKbkNkisz59hPkuQnv35fuf992MRCbT9LAXLPRLbcirucCzkE
 p1MOotsWoO3ldJMZaVn0KYk3sQf6mCIfbYPEdOcw3fhJlvyy3NdjVkLOFbA5UUg1
 rQ7Ru2rTemBc0ExVrymngNTMpMB4XcEeJzXfhcgMl3DWbDj60Ku/O26sDtZ6bsFv
 JuDYn8FVDHz9gpEQHgiUi1YMsBKXLhnILa1ppaa6AflykU3BRfYjAk1SXmX84nQK
 mJUJEdFuxi6pHN0UKxUI
 =avA4
 -----END PGP SIGNATURE-----

Merge tag 'kvm-arm-for-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into 'kvm-next'

KVM/ARM changes for v4.1:

- fixes for live migration
- irqfd support
- kvm-io-bus & vgic rework to enable ioeventfd
- page ageing for stage-2 translation
- various cleanups
2015-04-07 18:09:20 +02:00
Paolo Bonzini
8999602d08 Fixes for KVM/ARM for 4.0-rc5.
Fixes page refcounting issues in our Stage-2 page table management code,
 fixes a missing unlock in a gicv3 error path, and fixes a race that can
 cause lost interrupts if signals are pending just prior to entering the
 guest.
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJVBs95AAoJEEtpOizt6ddyAfoH/Rwj2T38ZDikImPpgfeFrmJs
 ZlWC+Z3akwjVHPv308/MsKdyashtA7OjiMp3DOheMFMYJay/ecgY/92vFCc6uh5S
 LDoXCbp+Pneth6C6bbU2Gw+aoCD07ZYCn9PeFq40MfpQUhCEGWhx41OFzHppqOZx
 e+jodHRE+sBVTFUtbz+HubAfcM46f/8bP7682CEKsVZPeTSiHyeojdZEglfB37MG
 ar/iC1/cyO/097vWaBqv1t1WZoHbWmMrDlzo5X+AtayVXFNdv4Ztw0Rz2kRhnLB8
 8GXYawoSQoTN8FX1oyTyr5YWcWD7wDTzhcHsHS1xZHhvrdLCEcFrHeEWkuUlYjU=
 =YS6j
 -----END PGP SIGNATURE-----

Merge tag 'kvm-arm-fixes-4.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into 'kvm-next'

Fixes for KVM/ARM for 4.0-rc5.

Fixes page refcounting issues in our Stage-2 page table management code,
fixes a missing unlock in a gicv3 error path, and fixes a race that can
cause lost interrupts if signals are pending just prior to entering the
guest.
2015-04-07 18:06:01 +02:00
Jens Freimann
816c7667ea KVM: s390: migrate vcpu interrupt state
This patch adds support to migrate vcpu interrupts. Two new vcpu ioctls
are added which get/set the complete status of pending interrupts in one
go. The ioctls are marked as available with the new capability
KVM_CAP_S390_IRQ_STATE.

We cannot use a ONE_REG, as the number of pending local interrupts is not
constant and depends on the number of CPUs.

To retrieve the interrupt state we add an ioctl KVM_S390_GET_IRQ_STATE.
Its input parameter is a pointer to a struct kvm_s390_irq_state which
has a buffer and length.  For all currently pending interrupts, we copy
a struct kvm_s390_irq into the buffer and pass it to userspace.

To set the interrupt state from a buffer provided by userspace, we add an
ioctl KVM_S390_SET_IRQ_STATE. It passes a struct kvm_s390_irq_state into
the kernel, which injects all interrupts contained in the buffer.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 21:07:31 +02:00
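
A hedged userspace sketch of the retrieval side of this new API, assuming
vcpu_fd is an existing vcpu file descriptor on a kernel that advertises
KVM_CAP_S390_IRQ_STATE (error handling kept minimal):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Fetch all pending vcpu-local interrupts into a caller-provided array
     * of struct kvm_s390_irq entries. */
    static int save_vcpu_irq_state(int vcpu_fd, struct kvm_s390_irq *buf,
                                   unsigned int nr)
    {
        struct kvm_s390_irq_state irq_state = {
            .buf = (__u64)(unsigned long)buf,
            .len = nr * sizeof(*buf),
        };
        int r = ioctl(vcpu_fd, KVM_S390_GET_IRQ_STATE, &irq_state);

        if (r < 0)
            perror("KVM_S390_GET_IRQ_STATE");
        return r;   /* non-negative on success; see the KVM API documentation */
    }

The restore direction is symmetrical: fill the same structure and issue
KVM_S390_SET_IRQ_STATE on the target vcpu.
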
Jens Freimann
79e87a103d KVM: s390: refactor vcpu injection function
Let's provide a version of kvm_s390_inject_vcpu() that
does not acquire the local-interrupt lock and skips
waking up the vcpu.
It will be used in a later patch for vcpu-local interrupt migration,
where we are already holding the lock.

Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 21:07:30 +02:00
Jens Freimann
47b43c52ee KVM: s390: add ioctl to inject local interrupts
We introduced struct kvm_s390_irq a while ago; it allows injecting
all kinds of interrupts as defined in the Principles of
Operation.
Add an ioctl to inject interrupts using the extended struct kvm_s390_irq.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 21:07:30 +02:00
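
For illustration, a hedged sketch of injecting one such interrupt (an
emergency signal) from userspace via the new vcpu ioctl; vcpu_fd is assumed
to be a valid vcpu descriptor, and the field names follow the uapi
struct kvm_s390_irq:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int inject_emergency_signal(int vcpu_fd, unsigned short src_cpu_addr)
    {
        struct kvm_s390_irq irq;

        memset(&irq, 0, sizeof(irq));
        irq.type = KVM_S390_INT_EMERGENCY;
        irq.u.emerg.code = src_cpu_addr;    /* address of the signalling CPU */

        if (ioctl(vcpu_fd, KVM_S390_IRQ, &irq) < 0) {
            perror("KVM_S390_IRQ");
            return -1;
        }
        return 0;
    }
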
David Hildenbrand
b4aec92567 KVM: s390: cpu timer irq priority
We now have a mechanism for delivering interrupts according to their priority.

Let's inject them using our new infrastructure (instead of letting only hardware
handle them), so we can be sure that the irq priorities are satisfied.

On s390, the cpu timer and the clock comparator have to be checked in the
common-code kvm_cpu_has_pending_timer(), although the cpu timer is only
stepped while the guest is executing.

Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 21:07:29 +02:00
Jens Freimann
6d3da24141 KVM: s390: deliver floating interrupts in order of priority
This patch makes interrupt handling compliant with the z/Architecture
Principles of Operation with regard to interrupt priorities.

Add a bitmap for pending floating interrupts. Each bit relates to an
interrupt type and its list. A set bit indicates that the corresponding list
contains items (interrupts) which need to be delivered.  When delivering
interrupts on a cpu we can merge the existing bitmap for cpu-local
interrupts with the one for floating interrupts and have a single mechanism
for delivery.
Currently we have one list for all kinds of floating interrupts and a
corresponding spin lock. This patch adds a separate list per
interrupt type. Exceptions to this are service signal and machine check
interrupts, as only one of each can be pending at a time.

Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 21:07:27 +02:00
Jens Freimann
94aa033efc KVM: s390: fix get_all_floating_irqs
This fixes a bug introduced with commit c05c4186bb ("KVM: s390:
add floating irq controller").

get_all_floating_irqs() does copy_to_user() while holding
a spin lock. Let's fix this by filling a temporary buffer
first and copying it to userspace after giving up the lock.

Cc: <stable@vger.kernel.org> # 3.18+: 69a8d45626 KVM: s390: no need to hold...

Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 21:05:51 +02:00
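
The underlying pattern is generic and worth spelling out: copy_to_user()
may fault and sleep, so it must never run under a spinlock. A sketch of the
shape of the fix (not the actual s390 code):

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/string.h>
    #include <linux/uaccess.h>

    static long read_state_locked(spinlock_t *lock, const void *state,
                                  size_t len, void __user *uptr)
    {
        void *tmp = kzalloc(len, GFP_KERNEL);
        long ret = 0;

        if (!tmp)
            return -ENOMEM;

        spin_lock(lock);
        memcpy(tmp, state, len);           /* snapshot while the lock is held */
        spin_unlock(lock);

        if (copy_to_user(uptr, tmp, len))  /* safe: the lock has been dropped */
            ret = -EFAULT;

        kfree(tmp);
        return ret;
    }
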
Joe Perches
1d804d079a x86: Use bool function return values of true/false not 1/0
Use the normal return values for bool functions

Signed-off-by: Joe Perches <joe@perches.com>
Message-Id: <9f593eb2f43b456851cd73f7ed09654ca58fb570.1427759009.git.joe@perches.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-31 18:05:09 +02:00
Christian Borntraeger
a3ed8dae6e KVM: s390: enable more features that need no hypervisor changes
After some review of what these facilities do, the following
facilities work under KVM and can therefore be reported
to the guest if the cpu model and the host cpu provide the corresponding bit.

There are plans underway to make the whole facility-bit handling more
readable, but that work is not yet finished. So here are some last bit
changes, and we enhance the KVM mask with:

9 The sense-running-status facility is installed in the
  z/Architecture architectural mode.
  ---> handled by SIE or KVM

10 The conditional-SSKE facility is installed in the
   z/Architecture architectural mode.
  ---> handled by SIE. KVM will retry SIE

13 The IPTE-range facility is installed in the
   z/Architecture architectural mode.
  ---> handled by SIE. KVM will retry SIE

36 The enhanced-monitor facility is installed in the
   z/Architecture architectural mode.
  ---> handled by SIE

47 The CMPSC-enhancement facility is installed in the
   z/Architecture architectural mode.
  ---> handled by SIE

48 The decimal-floating-point zoned-conversion facility
   is installed in the z/Architecture architectural mode.
  ---> handled by SIE

49 The execution-hint, load-and-trap, miscellaneous-
   instruction-extensions and processor-assist facilities
   are installed in the z/Architecture architectural mode.
  ---> handled by SIE

51 The local-TLB-clearing facility is installed in the
   z/Architecture architectural mode.
  ---> handled by SIE

52 The interlocked-access facility 2 is installed.
  ---> handled by SIE

53 The load/store-on-condition facility 2 and load-and-
   zero-rightmost-byte facility are installed in the
   z/Architecture architectural mode.
  ---> handled by SIE

57 The message-security-assist-extension-5 facility is
  installed in the z/Architecture architectural mode.
  ---> handled by SIE

66 The reset-reference-bits-multiple facility is installed
  in the z/Architecture architectural mode.
  ---> handled by SIE. KVM will retry SIE

80 The decimal-floating-point packed-conversion
   facility is installed in the z/Architecture architectural
   mode.
  ---> handled by SIE

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 13:49:08 +02:00
David Hildenbrand
2ba4596852 KVM: s390: store the breaking-event address on pgm interrupts
If the PER-3 facility is installed, the breaking-event address is to be
stored in the low core.

There is no facility bit for PER-3 in stfl(e), and Linux always uses the
value at address 272 regardless of whether PER-3 is available or not.
We can't hide its existence from the guest. All program interrupts
injected via the SIE automatically store this information if the PER-3
facility is available in the hypervisor. The itdb also contains the
address automatically.

As there is no switch to turn this mechanism off, let's simply make it
consistent and also store the breaking event address in case of manual
program interrupt injection.

Reviewed-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
2015-03-31 13:49:08 +02:00
Nikolay Nikolaev
d44758c0df KVM: arm/arm64: enable KVM_CAP_IOEVENTFD
As the infrastructure for eventfd has now been merged, report the
ioeventfd capability as being supported.

Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
[maz: grouped the case entry with the others, fixed commit log]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2015-03-30 17:07:24 +01:00
Andre Przywara
950324ab81 KVM: arm/arm64: rework MMIO abort handling to use KVM MMIO bus
Currently we have struct kvm_exit_mmio for encapsulating MMIO abort
data to be passed on from syndrome decoding all the way down to the
VGIC register handlers. Now that we switch the MMIO handling to be
routed through the KVM MMIO bus, it no longer makes sense to
use that structure right from the beginning. So we keep the data in
local variables until we put it into the kvm_io_bus framework.
Then we fill kvm_exit_mmio in the VGIC only, making it a VGIC-private
structure. Along the way we replace the data buffer in that structure
with a pointer to a single location in a local variable, so
we get rid of some copying.
With all of the virtual GIC emulation code now being registered with
the kvm_io_bus, we can remove all of the old MMIO handling code and
its dispatching functionality.

I didn't bother to rename kvm_exit_mmio (to vgic_mmio or something),
because that touches a lot of code lines without any good reason.

This is based on an original patch by Nikolay.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Cc: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2015-03-30 17:07:19 +01:00
Eugene Korenevsky
2f729b10bb KVM: remove useless check of "ret" variable prior to returning the same value
A trivial code cleanup. This `if` is redundant.

Signed-off-by: Eugene Korenevsky <ekorenevsky@gmail.com>
Message-Id: <20150328222717.GA6508@gnote>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:57:15 +02:00
Nadav Amit
b32a991800 KVM: x86: Remove redundant definitions
Some constants are redefined in emulate.c. Avoid that.

s/SELECTOR_RPL_MASK/SEGMENT_RPL_MASK
s/SELECTOR_TI_MASK/SEGMENT_TI_MASK

No functional change.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Message-Id: <1427635984-8113-3-git-send-email-namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:46:42 +02:00
Nadav Amit
0efb04406d KVM: x86: removing redundant eflags bits definitions
The eflags bits are redefined (using other defines) in emulate.c.
Use the definitions from processor-flags.h, as some confusion has already started.
No functional change.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Message-Id: <1427635984-8113-2-git-send-email-namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:46:37 +02:00
Nadav Amit
900efe200e KVM: x86: BSF and BSR emulation change register unnecessarily
If the source of BSF and BSR is zero, the destination register should not
change. That is how real hardware behaves.  If we set the destination even with
the same value that we had before, we may clear bits [63:32] unnecessarily.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Message-Id: <1427719163-5429-4-git-send-email-namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:46:11 +02:00
Nadav Amit
6fd8e12757 KVM: x86: POPA emulation may not clear bits [63:32]
POPA should assign the values to the registers the way registers are usually
assigned. In other words, 32-bit register assignments should clear bits [63:32]
of the register.

Split out the register-assignment code so that it can be used by future changes
as well.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Message-Id: <1427719163-5429-3-git-send-email-namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:46:03 +02:00
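
The split-out assignment helper can be sketched as follows: writes honour
the operand size, and a 4-byte write clears bits 63:32 just like a real
register assignment (names illustrative, with a small self-check):

    #include <assert.h>
    #include <stdint.h>

    static void assign_register(uint64_t *reg, uint64_t val, int bytes)
    {
        switch (bytes) {
        case 1: *reg = (*reg & ~0xffull) | (val & 0xff); break;
        case 2: *reg = (*reg & ~0xffffull) | (val & 0xffff); break;
        case 4: *reg = (uint32_t)val; break;        /* clears bits 63:32 */
        default: *reg = val; break;
        }
    }

    int main(void)
    {
        uint64_t rdi = 0xdeadbeefcafebabeull;

        assign_register(&rdi, 0x11223344, 4);       /* POPA-style 32-bit pop */
        assert(rdi == 0x11223344ull);               /* upper half is gone */
        return 0;
    }
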
Nadav Amit
b91aa14d95 KVM: x86: CMOV emulation on legacy mode is wrong
In legacy mode, CMOV emulation should still clear bits [63:32] even if the
assignment is not done. The previous fix 140bad89fd ("KVM: x86: emulation of
dword cmov on long-mode should clear [63:32]") was incomplete.

Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Message-Id: <1427719163-5429-2-git-send-email-namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:45:50 +02:00
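
Behaviourally, a dword CMOV writes its destination with 32-bit size even
when the condition is false, so bits 63:32 of the emulator's 64-bit register
image are cleared either way; one way for an emulator to get this right is
to keep the write-back for 4-byte operands instead of skipping it. A small
model of the intended semantics:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Model of a 32-bit CMOV acting on a 64-bit register image. */
    static uint64_t cmov32(uint64_t dst, uint32_t src, bool cond)
    {
        uint32_t low = cond ? src : (uint32_t)dst;  /* value actually written */
        return (uint64_t)low;                       /* 32-bit write zero-extends */
    }

    int main(void)
    {
        assert(cmov32(0xffffffff00000001ull, 7, true)  == 7);
        assert(cmov32(0xffffffff00000001ull, 7, false) == 1);  /* 63:32 cleared */
        return 0;
    }
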
Petr Matousek
2dccb4cdbf kvm: x86: i8259: return initialized data on invalid-size read
If data is read from PIC with invalid access size, the return data stays
uninitialized even though success is returned.

Fix this by always initializing the data.

Signed-off-by: Petr Matousek <pmatouse@redhat.com>
Reported-by: Nadav Amit <nadav.amit@gmail.com>
Message-Id: <20150311111609.GG8544@dhcp-25-225.brq.redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-03-30 16:40:07 +02:00
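
The fix pattern, sketched (names illustrative): if the access size is not
the single byte the PIC supports, zero the output buffer before returning
success, so the caller never sees uninitialized data.

    #include <stdint.h>
    #include <string.h>

    static int pic_read_sketch(void *val, int len, uint8_t reg_value)
    {
        if (len != 1) {
            memset(val, 0, len);     /* previously left uninitialized */
            return 0;
        }
        *(uint8_t *)val = reg_value;
        return 0;
    }
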
James Hogan
d952bd070f MIPS: KVM: Wire up MSA capability
Now that the code is in place for KVM to support the MIPS SIMD Architecture
(MSA) in MIPS guests, wire up the new KVM_CAP_MIPS_MSA capability.

For backwards compatibility, the capability must be explicitly enabled
in order to detect or make use of MSA from the guest.

The capability is not supported if the hardware supports MSA vector
partitioning, since the extra support cannot be tested yet and it
extends the state that the userland program would have to save.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: linux-doc@vger.kernel.org
2015-03-27 21:25:22 +00:00
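
A hedged userspace sketch of opting in to the capability: check for it,
then enable it per vcpu with KVM_ENABLE_CAP (kvm_fd is assumed to be the
/dev/kvm descriptor and vcpu_fd an already-created vcpu):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int enable_guest_msa(int kvm_fd, int vcpu_fd)
    {
        struct kvm_enable_cap cap = { .cap = KVM_CAP_MIPS_MSA };

        if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MIPS_MSA) <= 0) {
            fprintf(stderr, "KVM_CAP_MIPS_MSA not available\n");
            return -1;
        }
        return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);   /* 0 on success */
    }
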
James Hogan
ab86bd6004 MIPS: KVM: Expose MSA registers
Add KVM register numbers for the MIPS SIMD Architecture (MSA) registers,
and implement access to them with the KVM_GET_ONE_REG / KVM_SET_ONE_REG
ioctls when the MSA capability is enabled (exposed in a later patch) and
present in the guest according to its Config3.MSAP bit.

The MSA vector registers use the same register numbers as the FPU
registers, except with a different size (128 bits). Since MSA depends on
Status.FR=1, these registers are inaccessible when Status.FR=0. These
registers are returned as a single native-endian 128-bit value, rather
than least-significant half first with each 64-bit half native endian, as
the kernel uses internally.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: linux-doc@vger.kernel.org
2015-03-27 21:25:21 +00:00
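
Access goes through the generic ONE_REG interface; a hedged sketch of
reading one 128-bit vector register follows. The concrete register id is
left as a placeholder (it must carry KVM_REG_SIZE_U128 and the MSA register
encoding), and vcpu_fd is assumed to be a valid vcpu descriptor.

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int get_msa_vector_reg(int vcpu_fd,
                                  uint64_t msa_reg_id /* placeholder id */,
                                  uint64_t out[2])
    {
        struct kvm_one_reg reg = {
            .id   = msa_reg_id,
            .addr = (uintptr_t)out,      /* 16-byte buffer for the value */
        };

        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0) {
            perror("KVM_GET_ONE_REG");
            return -1;
        }
        return 0;
    }
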
James Hogan
c2537ed9fb MIPS: KVM: Add MSA exception handling
Add guest exception handling for MIPS SIMD Architecture (MSA) floating
point exceptions and MSA disabled exceptions.

MSA floating point exceptions from the guest need passing to the guest
kernel, so for these a guest MSAFPE is emulated.

MSA Disabled exceptions are normally handled by passing a Reserved
Instruction exception to the guest (because no guest MSA was supported).
The hypervisor can now handle them: if the guest has MSA, by passing
an MSA Disabled exception to the guest, or, if the guest has MSA enabled,
by transparently restoring the guest MSA context and enabling MSA and
the FPU.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:20 +00:00
James Hogan
2b6009d646 MIPS: KVM: Emulate MSA bits in COP0 interface
Emulate MSA related parts of COP0 interface so that the guest will be
able to enable/disable MSA (Config5.MSAEn) once the MSA capability has
been wired up.

As with the FPU (Status.CU1), setting Config5.MSAEn has no immediate
effect if the MSA state isn't live, as MSA state is restored lazily on
first use. Changes after the MSA state has been restored take immediate
effect, so that the guest can start getting MSA disabled exceptions
right away for guest MSA operations. The MSA state is saved lazily too,
as MSA may get re-enabled in the near future anyway.

A special case is also added for when Status.CU1 is set while FR=0 and
the MSA state is live. In this case we are at risk of getting reserved
instruction exceptions if we try to save the MSA state, so we lose the
MSA state sooner while MSA is still usable.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:19 +00:00
James Hogan
539cb89fbd MIPS: KVM: Add base guest MSA support
Add base code for supporting the MIPS SIMD Architecture (MSA) in MIPS
KVM guests. MSA cannot yet be enabled in the guest; we're just laying
the groundwork.

As with the FPU, whether the guest's MSA context is loaded is stored in
another bit in the fpu_inuse vcpu member. This allows MSA to be disabled
when the guest disables it, while keeping the MSA context loaded so it
doesn't have to be reloaded if the guest re-enables it.

New assembly code is added for saving and restoring the MSA context,
restoring only the upper half of the MSA context (for if the FPU context
is already loaded) and for saving/clearing and restoring MSACSR (which
can itself cause an MSA FP exception depending on the value). The MSACSR
is restored before returning to the guest if MSA is already enabled, and
the existing FP exception die notifier is extended to catch the possible
MSA FP exception and step over the ctcmsa instruction.

The helper function kvm_own_msa() is added to enable MSA and restore
the MSA context if it isn't already loaded, which will be used in a
later patch when the guest attempts to use MSA for the first time and
triggers an MSA disabled exception.

The existing FPU helpers are extended to handle MSA. kvm_lose_fpu()
saves the full MSA context if it is loaded (which includes the FPU
context) and both kvm_lose_fpu() and kvm_drop_fpu() disable MSA.

kvm_own_fpu() also needs to lose any MSA context if FR=0, since there
would be a risk of getting reserved instruction exceptions if CU1 is
enabled and we later try to save the MSA context. We shouldn't usually
hit this case since it will be handled when emulating CU1 changes;
however, there's nothing to stop the guest from modifying the Status register
directly via the comm page, which will cause this case to get hit.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:19 +00:00
James Hogan
5fafd8748b MIPS: KVM: Wire up FPU capability
Now that the code is in place for KVM to support FPU in MIPS KVM guests,
wire up the new KVM_CAP_MIPS_FPU capability.

For backwards compatibility, the capability must be explicitly enabled
in order to detect or make use of the FPU from the guest.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: linux-doc@vger.kernel.org
2015-03-27 21:25:18 +00:00
James Hogan
379245cdf1 MIPS: KVM: Expose FPU registers
Add KVM register numbers for the MIPS FPU registers, and implement
access to them with the KVM_GET_ONE_REG / KVM_SET_ONE_REG ioctls when
the FPU capability is enabled (exposed in a later patch) and present in
the guest according to its Config1.FP bit.

The registers are accessible in the current mode of the guest, with each
sized access showing what the guest would see with an equivalent access,
and like the architecture they may become UNPREDICTABLE if the FR mode
is changed. When FR=0, odd doubles are inaccessible as they do not exist
in that mode.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: linux-doc@vger.kernel.org
2015-03-27 21:25:17 +00:00
James Hogan
1c0cd66adb MIPS: KVM: Add FP exception handling
Add guest exception handling for floating point exceptions and
coprocessor 1 unusable exceptions.

Floating point exceptions from the guest need passing to the guest
kernel, so for these a guest FPE is emulated.

Also, coprocessor 1 unusable exceptions are normally passed straight
through to the guest (because no guest FPU was supported), but the
hypervisor can now handle them if the guest has its FPU enabled by
restoring the guest FPU context and enabling the FPU.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:16 +00:00
James Hogan
6cdc65e31d MIPS: KVM: Emulate FPU bits in COP0 interface
Emulate FPU related parts of COP0 interface so that the guest will be
able to enable/disable the following once the FPU capability has been
wired up:
- The FPU (Status.CU1)
- 64-bit FP register mode (Status.FR)
- Hybrid FP register mode (Config5.FRE)

Changing Status.CU1 has no immediate effect if the FPU state isn't live,
as the FPU state is restored lazily on first use. After that, changes
take place immediately in the host Status.CU1, so that the guest can
start getting coprocessor unusable exceptions right away for guest FPU
operations if it is disabled. The FPU state is saved lazily too, as the
FPU may get re-enabled in the near future anyway.

Any change to Status.FR causes the FPU state to be discarded and FPU
disabled, as the register state is architecturally UNPREDICTABLE after
such a change. This should also ensure that the FPU state is fully
initialised (with stale state, but that's fine) when it is next used in
the new FP mode.

Any change to the Config5.FRE bit is immediately updated in the host
state so that the guest can get the relevant exceptions right away for
single-precision FPU operations.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:15 +00:00
James Hogan
98e91b8457 MIPS: KVM: Add base guest FPU support
Add base code for supporting FPU in MIPS KVM guests. The FPU cannot yet
be enabled in the guest; we're just laying the groundwork.

Whether the guest's FPU context is loaded is stored in a bit in the
fpu_inuse vcpu member. This allows the FPU to be disabled when the guest
disables it, while keeping the FPU context loaded so it doesn't have to be
reloaded if the guest re-enables it.

An fpu_enabled vcpu member stores whether userland has enabled the FPU
capability (which will be wired up in a later patch).

New assembly code is added for saving and restoring the FPU context, and
for saving/clearing and restoring FCSR (which can itself cause an FP
exception depending on the value). The FCSR is restored before returning
to the guest if the FPU is already enabled, and a die notifier is
registered to catch the possible FP exception and step over the ctc1
instruction.

The helper function kvm_lose_fpu() is added to save FPU context and
disable the FPU, which is used when saving hardware state before a
context switch or KVM exit (the vcpu_get_regs() callback).

The helper function kvm_own_fpu() is added to enable the FPU and restore
the FPU context if it isn't already loaded, which will be used in a
later patch when the guest attempts to use the FPU for the first time
and triggers a co-processor unusable exception.

The helper function kvm_drop_fpu() is added to discard the FPU context
and disable the FPU, which will be used in a later patch when the FPU
state will become architecturally UNPREDICTABLE (change of FR mode) to
force a reload of [stale] context in the new FR mode.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:14 +00:00
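
The ownership rules described above can be summarised in a toy model
(plain C; the names mirror the helpers in the message, but this is a
sketch, not the MIPS implementation):

    #include <stdbool.h>

    struct vcpu_fpu_state {
        bool ctx_loaded;    /* corresponds to the fpu_inuse bit */
        bool hw_enabled;    /* whether the FPU is enabled in hardware */
    };

    /* Guest touched the FPU: enable it, restoring context only if needed. */
    static void kvm_own_fpu_sketch(struct vcpu_fpu_state *f)
    {
        if (!f->ctx_loaded) {
            /* restore_guest_fpu_context() would run here */
            f->ctx_loaded = true;
        }
        f->hw_enabled = true;
    }

    /* Host needs the FPU (context switch / KVM exit): save and disable. */
    static void kvm_lose_fpu_sketch(struct vcpu_fpu_state *f)
    {
        if (f->ctx_loaded) {
            /* save_guest_fpu_context() would run here */
            f->ctx_loaded = false;
        }
        f->hw_enabled = false;
    }

    /* State became UNPREDICTABLE (FR mode change): discard without saving. */
    static void kvm_drop_fpu_sketch(struct vcpu_fpu_state *f)
    {
        f->ctx_loaded = false;
        f->hw_enabled = false;
    }
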
James Hogan
b86ecb3766 MIPS: KVM: Add vcpu_get_regs/vcpu_set_regs callback
Add a vcpu_get_regs() and vcpu_set_regs() callbacks for loading and
restoring context which may be in hardware registers. This may include
floating point and MIPS SIMD Architecture (MSA) state which may be
accessed directly by the guest (but restored lazily by the hypervisor),
and also dedicated guest registers as provided by the VZ ASE.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:13 +00:00
James Hogan
c771607af9 MIPS: KVM: Add Config4/5 and writing of Config registers
Add Config4 and Config5 co-processor 0 registers, and add capability to
write the Config1, Config3, Config4, and Config5 registers using the KVM
API.

Only supported bits can be written, to minimise the chances of the guest
being given a configuration from e.g. QEMU that is inconsistent with
the one being emulated; as such, the handling is in trap_emul.c as it
may need to be different for VZ. Currently the only modification
permitted is to make Config4 and Config5 exist via the M bits, but other
bits will be added for FPU and MSA support in future patches.

Care should be taken by userland not to change bits without fully
handling the possible extra state that may then exist and which the
guest may begin to use and depend on.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:12 +00:00
James Hogan
2211ee810a MIPS: KVM: Simplify default guest Config registers
Various semi-used definitions exist in kvm_host.h for the default guest
config registers. Remove them and use the appropriate values directly
when initialising the Config registers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:11 +00:00
James Hogan
7bd4acec42 MIPS: KVM: Clean up register definitions a little
Clean up KVM_GET_ONE_REG / KVM_SET_ONE_REG register definitions for
MIPS, to prepare for adding a new group for FPU & MSA vector registers.

Definitions are added for common bits in each group of registers, e.g.
KVM_REG_MIPS_CP0 = KVM_REG_MIPS | 0x10000, for the coprocessor 0
registers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:10 +00:00
James Hogan
58a115bcec MIPS: KVM: Drop pr_info messages on init/exit
The information messages when the KVM module is loaded and unloaded are
a bit pointless and out of line with other architectures, so let's drop
them.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:09 +00:00
James Hogan
e93d4c159c MIPS: KVM: Sort kvm_mips_get_reg() registers
Sort the registers in the kvm_mips_get_reg() switch by register number,
which puts ERROREPC after the CONFIG registers.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:08 +00:00
James Hogan
1068eaaf2f MIPS: KVM: Implement PRid CP0 register access
Implement access to the guest Processor Identification CP0 register
using the KVM_GET_ONE_REG and KVM_SET_ONE_REG ioctls. This allows the
owning process to modify and read back the value that is exposed to the
guest in this register.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:08 +00:00
James Hogan
0a5604272d MIPS: KVM: Handle TRAP exceptions from guest kernel
Trap instructions are used by Linux to implement BUG_ON(); however, KVM
doesn't pass trap exceptions on to the guest if they occur in guest
kernel mode, instead triggering an internal error "Exception Code: 13,
not yet handled". The guest kernel then doesn't get a chance to print
the usual BUG message and stack trace.

Implement handling of the trap exception so that it gets passed to the
guest and the user is left with a more useful log message.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: kvm@vger.kernel.org
Cc: linux-mips@linux-mips.org
2015-03-27 21:25:07 +00:00
James Hogan
64bedffe49 MIPS: Clear [MSA]FPE CSR.Cause after notify_die()
When handling floating point exceptions (FPEs) and MSA FPEs the Cause
bits of the appropriate control and status register (FCSR for FPEs and
MSACSR for MSA FPEs) are read and cleared before enabling interrupts,
presumably so that it doesn't have to go through the pain of restoring
those bits if the process is pre-empted, since writing those bits would
cause another immediate exception while still in the kernel.

The bits aren't normally ever restored again, since userland never
expects to see them set.

However for virtualisation it is necessary for the kernel to be able to
restore these Cause bits, as the guest may have been interrupted in an
FP exception handler but before it could read the Cause bits. This can
be done by registering a die notifier, to get notified of the exception
when such a value is restored, and if the PC was at the instruction
which is used to restore the guest state, the handler can step over it
and continue execution. The Cause bits can then remain set without
causing further exceptions.

For this to work safely a few changes are made:
- __build_clear_fpe and __build_clear_msa_fpe no longer clear the Cause
  bits, and now return from exception level with interrupts disabled
  instead of enabled.
- do_fpe() now clears the Cause bits and enables interrupts after
  notify_die() is called, so that the notifier can choose to return from
  the exception without this happening.
- do_msa_fpe() acts similarly, but now actually makes use of the second
  argument (msacsr) and calls notify_die() with the new DIE_MSAFP,
  allowing die notifiers to be informed of MSA FPEs too.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
2015-03-27 21:25:06 +00:00
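
A hedged kernel-side sketch of the die-notifier mechanism described above:
recognise the FP exception raised while restoring guest FCSR/MSACSR and
skip the faulting restore instruction (the "is this our restore site?" test
is left as a placeholder; DIE_MSAFP is the value this patch introduces):

    #include <linux/kdebug.h>
    #include <linux/notifier.h>
    #include <asm/ptrace.h>

    static int fpe_restore_notify(struct notifier_block *nb,
                                  unsigned long action, void *data)
    {
        struct die_args *args = data;

        if (action != DIE_FP && action != DIE_MSAFP)
            return NOTIFY_DONE;

        /* if (!pc_is_guest_csr_restore(args->regs)) return NOTIFY_DONE; */

        args->regs->cp0_epc += 4;   /* step over the ctc1/ctcmsa instruction */
        return NOTIFY_STOP;
    }

    static struct notifier_block fpe_restore_nb = {
        .notifier_call = fpe_restore_notify,
    };

    static int fpe_restore_register(void)
    {
        return register_die_notifier(&fpe_restore_nb);
    }
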
James Hogan
98119ad533 MIPS: KVM: Handle MSA Disabled exceptions from guest
Guest user mode can generate a guest MSA Disabled exception on an MSA
capable core by simply trying to execute an MSA instruction. Since this
exception is unknown to KVM it will be passed on to the guest kernel.
However, guest Linux kernels prior to v3.15 do not set up an exception
handler for the MSA Disabled exception, as they don't support any
MSA-capable cores. This results in a guest OS panic.

Since an older processor ID may be being emulated, and MSA support is
not advertised to the guest, the correct behaviour is to generate a
Reserved Instruction exception in the guest kernel so it can send the
guest process an illegal instruction signal (SIGILL), as would happen
with a non-MSA-capable core.

Fix this as minimally as reasonably possible by preventing
kvm_mips_check_privilege() from relaying MSA Disabled exceptions from
guest user mode to the guest kernel, and handling the MSA Disabled
exception by emulating a Reserved Instruction exception in the guest,
via a new handle_msa_disabled() KVM callback.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: <stable@vger.kernel.org> # v3.15+
2015-03-27 21:25:05 +00:00