In preparation for Format NVM support, use runtime helpers instead of
the constant device parameters when getting LBA size information, etc.
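A minimal sketch of what such helpers look like conceptually, using
simplified types (field and helper names here are illustrative, not the
exact ones in the tree):

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified: sizes are derived from the currently selected LBA
     * format instead of from fixed device parameters. */
    typedef struct LBAFormat { uint8_t lbads; uint16_t ms; } LBAFormat;
    typedef struct Namespace { LBAFormat lbaf[16]; uint8_t flbas; } Namespace;

    static inline size_t ns_lsize(const Namespace *ns)
    {
        return (size_t)1 << ns->lbaf[ns->flbas & 0xf].lbads; /* data bytes per LBA */
    }

    static inline size_t ns_msize(const Namespace *ns)
    {
        return ns->lbaf[ns->flbas & 0xf].ms; /* metadata bytes per LBA */
    }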
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
This patch introduces support for multiple LBA formats with the typical
logical block sizes of 512 bytes and 4096 bytes as well as metadata
sizes of 0, 8, 16 and 64 bytes. The format will be chosen based on the
lbads and ms parameters of the nvme-ns device.
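Conceptually, the selection just searches the supported formats for one
whose LBADS and MS values match the configured parameters; a simplified
sketch (toy types, illustrative names):

    #include <stdint.h>

    typedef struct LBAFormat { uint8_t lbads; uint16_t ms; } LBAFormat;

    /* Return the index of the LBA format matching the configured lbads/ms
     * parameters, or -1 if the combination is not supported. */
    static int find_lbaf(const LBAFormat *lbaf, int nfmts,
                         uint8_t lbads, uint16_t ms)
    {
        for (int i = 0; i < nfmts; i++) {
            if (lbaf[i].lbads == lbads && lbaf[i].ms == ms) {
                return i; /* becomes FLBAS bits 3:0 */
            }
        }
        return -1;
    }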
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
[k.jensen: resurrected and rebased]
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Verify is not subject to MDTS, so a single Verify command may result in
excessive amounts of allocated memory. Impose a limit on the data size
by adding support for TP 4040 ("Non-MDTS Command Size Limits").
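The resulting check mirrors the usual MDTS check, only against the new
non-MDTS limit; a hedged sketch (the limit is expressed here simply as a
byte count):

    #include <stdbool.h>
    #include <stdint.h>

    /* Reject a Verify command whose LBA range maps to more bytes than
     * the controller's advertised verify size limit allows. */
    static bool verify_within_limit(uint64_t nlb, uint32_t lba_size,
                                    uint64_t verify_limit_bytes)
    {
        return nlb * (uint64_t)lba_size <= verify_limit_bytes;
    }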
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Add support for namespaces formatted with protection information. The
type of end-to-end data protection (i.e. Type 1, Type 2 or Type 3) is
selected with the `pi` nvme-ns device parameter. If the number of
metadata bytes is larger than 8, the `pil` nvme-ns device parameter may
be used to control the location of the 8-byte DIF tuple. The default
`pil` value of '0' causes the DIF tuple to be transferred as the last
8 bytes of the metadata; set it to '1' to store it in the first 8 bytes
instead.
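For illustration, the placement of the 8-byte DIF tuple within each
LBA's metadata reduces to a small offset computation (simplified sketch,
not the exact code):

    #include <stddef.h>
    #include <stdint.h>

    /* Offset of the 8-byte DIF tuple within one LBA's metadata:
     * pil == 0 -> last 8 bytes, pil == 1 -> first 8 bytes. */
    static size_t dif_tuple_offset(uint16_t ms, uint8_t pil)
    {
        return pil ? 0 : (size_t)ms - 8;
    }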
Co-authored-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Add support for metadata in the form of extended logical blocks as well
as a separate buffer of data. The new `ms` nvme-ns device parameter
specifies the size of metadata per logical block in bytes. The `mset`
nvme-ns device parameter controls whether metadata is transferred as part
of an extended LBA (set to '1') or in a separate buffer (set to '0',
the default).
Regardless of the scheme chosen with `mset`, metadata is stored at the
end of the namespace backing block device. This requires the
user-provided PRP/SGLs to be walked and "split" into data and metadata
scatter/gather lists if the extended logical block scheme is used, but
has the advantage of not breaking the deallocated blocks support.
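A simplified view of the resulting layout arithmetic (illustrative
only): all data blocks come first, followed by the packed per-LBA
metadata, so the byte offsets into the backing image look roughly like:

    #include <stdint.h>

    /* Data for LBA 'lba' starts right after the preceding data blocks. */
    static uint64_t data_offset(uint64_t lba, uint32_t lsize)
    {
        return lba * (uint64_t)lsize;
    }

    /* Metadata for LBA 'lba' lives after *all* data blocks, packed at
     * the end of the backing block device. */
    static uint64_t meta_offset(uint64_t lba, uint64_t nlbas,
                                uint32_t lsize, uint32_t msize)
    {
        return nlbas * lsize + lba * (uint64_t)msize;
    }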
Co-authored-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Signed-off-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
nvme_zone_mgmt_recv uses nvme_ns_nlbas() to get the number of LBAs in
the namespace and then calculates the number of zones to report by
incrementing slba with ZSZE until exceeding the number of LBAs as
returned by nvme_ns_nlbas().
This is bad because the namespace might be of such a size that some
LBAs are valid but are not part of any zone, causing zone management
receive to report one additional (but non-existent) zone.
Fix this with a conventional loop on i < ns->num_zones instead.
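A self-contained toy that shows the off-by-one (numbers are made up;
the namespace here has ten zones plus a tail of LBAs that belong to no
zone):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t zsze = 0x1000;       /* LBAs per zone */
        const uint64_t num_zones = 10;
        const uint64_t nlbas = 0xa800;      /* 10.5 zones worth of LBAs */

        unsigned by_slba = 0;
        for (uint64_t slba = 0; slba < nlbas; slba += zsze) {
            by_slba++;                      /* buggy: reports an 11th zone */
        }

        unsigned by_index = 0;
        for (uint64_t i = 0; i < num_zones; i++) {
            by_index++;                     /* fixed: bounded by zone count */
        }

        printf("slba loop: %u zones, index loop: %u zones\n",
               by_slba, by_index);
        return 0;
    }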
Fixes: a479335bfa ("hw/block/nvme: Support Zoned Namespace Command Set")
Cc: Dmitry Fomichev <dmitry.fomichev@wdc.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Coverity complains about a possible memory corruption in the
nvme_ns_attach and _detach functions. While we should not (famous last
words) be able to reach these functions without nsid having previously
been validated, this is still an open door for future misuse.
Make Coverity and maintainers happy by asserting that the index into the
array is valid. Also, while not detected by Coverity (yet), add an
assert in nvme_subsys_ns and nvme_subsys_register_ns as well since a
similar issue exists there.
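The assertion itself is the whole fix; in spirit (toy sketch, array
bound and indexing are illustrative):

    #include <assert.h>
    #include <stddef.h>

    #define MAX_NAMESPACES 256              /* illustrative bound */

    static void register_ns(void *namespaces[], unsigned int nsid, void *ns)
    {
        /* nsid is 1-based; assert it indexes within the array even though
         * callers are expected to have validated it already. */
        assert(nsid > 0 && nsid <= MAX_NAMESPACES);
        namespaces[nsid - 1] = ns;          /* NULL here on detach */
    }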
Fixes: 037953b5b2 ("hw/block/nvme: support namespace detach")
Fixes: CID 1450757
Fixes: CID 1450758
Cc: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
page_size is a uint32_t, and zasl is a uint8_t, so the expression
`page_size << zasl` is done using 32-bit arithmetic and might overflow.
Since we then compare this against a 64-bit data_size value, Coverity
complains that we might overflow unintentionally. An MDTS/ZASL value in
excess of 4GiB is probably impractical, but it is not entirely
unrealistic, so add a cast such that we handle that case properly.
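The fix amounts to widening the shift before the comparison, e.g.:

    #include <stdbool.h>
    #include <stdint.h>

    /* Without the cast, page_size << zasl is evaluated in 32 bits and
     * wraps for limits at or above 4 GiB. */
    static bool exceeds_zasl(uint64_t data_size,
                             uint32_t page_size, uint8_t zasl)
    {
        return data_size > ((uint64_t)page_size << zasl);
    }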
Fixes: 578d914b26 ("hw/block/nvme: align zoned.zasl with mdts")
Fixes: CID 1450756
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Merge remote-tracking branch 'remotes/cohuck-gitlab/tags/s390x-20210316' into staging
s390x updates:
- get rid of legacy_s390_alloc() and phys_mem_set_alloc()
- tcg: implement the MVPG condition-code-option bit
- fix g_autofree variable handing in the pci vfio code
- use official z15 names in the cpu model definitions
# gpg: Signature made Tue 16 Mar 2021 10:04:21 GMT
# gpg: using RSA key C3D0D66DC3624FF6A8C018CEDECF6B93C6F02FAF
# gpg: issuer "cohuck@redhat.com"
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>" [unknown]
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>" [full]
# gpg: aka "Cornelia Huck <cohuck@kernel.org>" [unknown]
# gpg: aka "Cornelia Huck <cohuck@redhat.com>" [unknown]
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck-gitlab/tags/s390x-20210316:
s390x/pci: Add missing initialization for g_autofree variables
target/s390x: Store r1/r2 for page-translation exceptions during MVPG
target/s390x: Implement the MVPG condition-code-option bit
s390x/cpu_model: use official name for 8562
exec: Get rid of phys_mem_set_alloc()
s390x/kvm: Get rid of legacy_s390_alloc()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The original implementation of the Macintosh VIA devices in commit 6dca62a000
"hw/m68k: add VIA support" used timer optimisations to reduce high CPU usage on
the host when booting Linux. These optimisations worked by waiting until VIA1
port B was accessed before re-arming the timers.
The MacOS toolbox ROM constantly writes to VIA1 port B, which calls
via1_one_second_update() and via1_sixty_hz_update() to calculate the new expiry
time, causing the timers to constantly reset and never fire. The effect of this
is that the Ticks (0x16a) global variable holding the number of 60Hz timer ticks
since reset is never incremented by the interrupt, causing time to stand still.
Whilst the code was introduced as a performance optimisation, it is likely that
the high CPU usage was actually caused by the incorrect 60Hz timer interval
fixed in the previous patch. Remove the optimisation to keep everything simple
and enable the MacOS toolbox ROM to start keeping time.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20210311100505.22596-8-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The 60Hz timer is initialised using timer_new_ns(), meaning that the timer
interval should be measured in ns, and therefore its period is a thousand
times too short.
Use a define for the 60Hz timer period taking the more precise value as
documented in the Guide To The Macintosh Family Hardware.
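A sketch of the unit fix (the exact constant in the patch comes from the
hardware documentation; the plain 1/60 s value below is shown only to
illustrate that the period is now expressed in nanoseconds):

    /* The timer is created with timer_new_ns(), so its period must be
     * given in ns; the previous value was a factor of 1000 too short. */
    #define VIA_60HZ_TIMER_PERIOD_NS  (1000000000LL / 60)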
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20210311100505.22596-7-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
According to the "Guide To The Macintosh Family Hardware", the 60Hz VIA1 timer
on newer Macs such as the Quadra only exists for compatibility with old software
and is no longer synced to the VBL interval.
Rename the VBL timer to 60Hz timer to emphasise this and to prevent confusion
when the real VBL interrupt (now handled as a NuBus slot interrupt) is added in
future.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>
Message-Id: <20210311100505.22596-6-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The current workaround for the Linux ADB state machine in kernels < 5.6 switching
the VIA back to IDLE state between send and receive modes is to re-inject the
first byte of the response in the IDLE state, and then force the state machine
into generating an autopoll reply.
In fact what is happening is much simpler: analysis of traces from a real Quadra
suggests that the existing data is returned as the first autopoll response rather
than generating an immediate response starting whilst still in IDLE state.
Update the ADB receive code to work in the same way, which allows the re-injection
code to be completely removed from adb_via_receive() and for adb_via_poll() to
be simplified accordingly.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Message-Id: <20210311100505.22596-5-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The MacOS SCSI driver uses a long access to read the VIA registers rather than
just a single byte during the message out phase.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20210311100505.22596-4-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
The use of the post-increment operator on adb_data_in_index meant that the
trace-event was accidentally displaying the next byte in the incoming ADB
data buffer rather than the current byte.
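A self-contained toy showing the effect (not the driver code itself):

    #include <stdio.h>

    int main(void)
    {
        unsigned char buf[] = { 0xaa, 0xbb, 0xcc };
        int idx = 0;

        unsigned char byte = buf[idx++];    /* consume the current byte */

        /* Buggy: reading buf[idx] in the trace after the post-increment
         * shows the *next* byte.  Fixed: trace the byte just consumed. */
        printf("buggy trace: 0x%02x\n", buf[idx]);  /* 0xbb */
        printf("fixed trace: 0x%02x\n", byte);      /* 0xaa */
        return 0;
    }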
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20210311100505.22596-3-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Since all the documentation uses the hex offsets, this makes it much easier
to see what is going on.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20210311100505.22596-2-mark.cave-ayland@ilande.co.uk>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Since 5f8e93c3e2 ("util/qemu-timer: Make timer_free() imply timer_del()", 2021-01-08)
it is no longer possible to pass a NULL pointer to timer_free(). Previously
it would do nothing as it would simply pass NULL down to g_free().
Rectify this, which also fixes "-chardev braille" when there is no device.
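The shape of the fix is to make timer_free() NULL-safe again, mirroring
g_free(); roughly (a sketch of the helper, not a verbatim copy):

    /* Accept NULL so callers that never created the timer (e.g.
     * "-chardev braille" with no device) can free unconditionally. */
    static inline void timer_free(QEMUTimer *ts)
    {
        if (ts) {
            timer_del(ts);
            g_free(ts);
        }
    }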
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The QEMU timer of channel 0 in the i8254 is used to raise an irq
at the specified moment in time. This irq can be disabled
with the irq_disabled flag. But when the vmstate of the PIT is
loaded, the timer may be rearmed despite the disabled interrupts.
This patch adds an irq_disabled flag check to fix that.
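Conceptually, the post-load path only rearms the channel 0 timer when
its irq output is enabled; a hedged fragment (names are illustrative,
not the exact ones in hw/timer):

    /* Skip rearming the channel 0 timer on vmstate load if the irq
     * output is disabled. */
    if (!s->irq_disabled) {
        pit_rearm_channel0_timer(s);    /* hypothetical helper name */
    }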
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
Message-Id: <161537170060.6654.9430112746749476215.stgit@pasha-ThinkPad-X280>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SENSE_CODE(LUN_COMM_FAILURE) has an ABORTED COMMAND sense key,
so it results in a retry in Linux. To ensure that EREMOTEIO
is forwarded to the guest, use a HARDWARE ERROR sense key
instead. Note that the code before commit d7a84021d was incorrect
because it used HARDWARE_ERROR as a SCSI status, not as a sense
key.
Reported-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commit 3920552846.
Thomas Huth reported a failure with CentOS 6 guests:
../../devel/qemu/accel/kvm/kvm-all.c:690: kvm_log_clear_one_slot: Assertion `QEMU_IS_ALIGNED(start | size, psize)' failed.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, compilation of util/dbus is implicit and depends
on the presence of libgio on the build host.
This patch adds options to manage the libgio dependency explicitly.
Signed-off-by: Denis Plotnikov <den-plotnikov@yandex-team.ru>
Message-Id: <20210312151440.405776-1-den-plotnikov@yandex-team.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For the sparse-mem device, we want the fuzzer to populate entire DMA
reads from sparse-mem, rather than hooking into the individual MMIO
memory_region_dispatch_read operations. Otherwise, the fuzzer will treat
each sequential read separately (and populate it with a separate
pattern). Work around this by rearranging some DMA hooks. Since the
fuzzer has its own logic to skip accidentally writing to MMIO regions,
we can call the DMA callback outside the flatview_translate loop.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The generic-fuzzer often provides randomized DMA addresses to
virtual devices. For a 64-bit address space, the chance of these
randomized addresses coinciding with RAM regions is fairly small. Even
though the fuzzer's instrumentation eventually finds valid addresses,
this can take some time and slows down fuzzing progress (especially
when multiple DMA buffers are involved). To work around this, create
"fake" sparse memory that spans all of the 64-bit address space. Adjust
the DMA callback to populate this sparse memory correspondingly.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For testing, it can be useful to simulate an enormous amount of memory
(e.g. 2^64 RAM). This adds an MMIO device that acts as sparse memory.
When something writes a nonzero value to a sparse-mem address, we
allocate a block of memory. For now, since the only user of this device
is the fuzzer, we do not track and free zeroed blocks. The device has a
very low priority (so it can be mapped beneath actual RAM and virtual
device MMIO regions).
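A condensed sketch of the allocate-on-first-nonzero-write behaviour
(simplified; the real device is a MemoryRegion with read/write ops, this
only shows the backing-store idea):

    #include <glib.h>
    #include <stdint.h>

    #define SPARSE_BLOCK_SIZE 4096

    static GHashTable *blocks;   /* block index -> 4 KiB buffer */

    static void sparse_init(void)
    {
        blocks = g_hash_table_new(g_int64_hash, g_int64_equal);
    }

    static void sparse_write(uint64_t addr, uint8_t val)
    {
        uint64_t key = addr / SPARSE_BLOCK_SIZE;
        uint8_t *block = g_hash_table_lookup(blocks, &key);

        if (!block) {
            if (val == 0) {
                return;          /* zero write to unbacked memory: no-op */
            }
            block = g_malloc0(SPARSE_BLOCK_SIZE);
            uint64_t *kp = g_new(uint64_t, 1);
            *kp = key;
            g_hash_table_insert(blocks, kp, block);
        }
        block[addr % SPARSE_BLOCK_SIZE] = val;
    }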
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We have several scripts that help build reproducers, but no
documentation for how they should be used. Add some documentation.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, bash and C crash reproducers must be built manually. This is a
problem, as we want to integrate reproducers into the tree, for
regression testing. This patch adds a script that converts a sequence of
QTest commands into a pasteable Bash reproducer, or a libqtest-based C
program. This will try to wrap pasteable reproducers to 72 chars, but
the generated C code will not have nice formatting. Therefore, the C
output of this script should be piped through an auto-formatter, such as
clang-format.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
I noticed that with a sufficiently small timeout, the fuzzer fork-server
sometimes locks up. On closer inspection, the issue appeared to be
caused by entering our SIGALRM handler while libfuzzer is in its crash
handlers. Because libfuzzer relies on pipe communication with an
external child process to print out stack-traces, we shouldn't exit
early, and leave an orphan child. Check for children in the SIGALRM
handler to avoid this issue.
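Roughly, the handler now backs off while a child is still running; a
hedged sketch (not the exact fuzzer code):

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void sigalrm_handler(int sig)
    {
        (void)sig;
        /* waitpid() returns 0 when children exist but none have exited:
         * libfuzzer may still be feeding its stack-trace child over a
         * pipe, so don't exit underneath it and orphan the child. */
        if (waitpid(-1, NULL, WNOHANG) == 0) {
            return;
        }
        _exit(1);
    }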
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Acked-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The device-type names for the pro100 network cards are "i8255..". We were
matching "eepro", which catches the PCI PIO/MMIO regions for those
devices but misses the actual PCI device, which we use to map the
BARs before fuzzing. Fix that.
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When we started to commit the fuzzer QTest reproducers to
fuzz-test.c in commit d8dd109501 ("qtest: add fuzz test case"),
we forgot to add the corresponding MAINTAINERS entry. Do it now.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This test fails when QEMU is built without the virtio-scsi device,
so restrict it to builds where that device is available.
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This test fails when QEMU is built without the megasas device,
so restrict it to builds where that device is available.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
For now the switch of vfio dirty page tracking is integrated into
@vfio_save_handler. The reason is that some PCI vendor drivers may
start to track dirty pages based on the _SAVING state of the device,
so if dirty tracking is started before setting the device state, vfio
will report full-dirty to QEMU.
However, the dirty bitmap of all RAMBlocks is fully set when RAM
saving is set up, so it does not matter whether the device is in the
_SAVING state when vfio dirty tracking is started.
Moreover, this logic causes some problems [1]. The object of dirty
tracking is guest memory, but the object of @vfio_save_handler is
device state, which produces unnecessary coupling and conflicts:
1. Coupling: Their saving granules differ (per-VM vs. per-device).
vfio will enable dirty_page_tracking for each device, although
once is enough.
2. Conflicts: ram_save_setup() traverses all memory_listeners
to execute their log_start() and log_sync() hooks to get the
first-round dirty bitmap, which is used by the bulk stage of
ram saving. However, as vfio dirty tracking is not yet started,
it can't get the dirty bitmap from vfio. Then we give up the chance
to handle vfio dirty pages at the bulk stage.
Moving the switch of vfio dirty_page_tracking into vfio_memory_listener
solves the above problems. Besides, do not require devices to be in the
_SAVING state for vfio_sync_dirty_bitmap().
[1] https://www.spinics.net/lists/kvm/msg229967.html
Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20210309031913.11508-1-zhukeqian1@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
The cpu_physical_memory_set_dirty_lebitmap() function can quickly deal
with the dirty pages of memory by bitmap traversal, regardless of
whether the bitmap is aligned correctly or not.
cpu_physical_memory_set_dirty_lebitmap() supports pages in a bitmap of
host page size. So it is better to set bitmap_pgsize to the host page
size to support more translation granule sizes.
[aw: The Fixes commit below introduced code to restrict migration
support to configurations where the target page size intersects the
host dirty page support. For example, a 4K guest on a 4K host.
Due to the above flexibility in bitmap handling, this restriction
unnecessarily prevents mixed target/host page sizes that could
otherwise be supported. Use host page size for dirty bitmap.]
Fixes: 87ea529c50 ("vfio: Get migration capability flags for container")
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Message-Id: <20210304133446.1521-1-jiangkunkun@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
In VFIO migration resume phase and some guest startups, there are
already unmasked vectors in the vector table when calling
vfio_msix_enable(). So in order to avoid inefficiently disabling
and enabling vectors repeatedly, let's allocate all needed vectors
first and then enable these unmasked vectors one by one without
disabling.
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Message-Id: <20210310030233.1133-4-lushenming@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
In the VFIO VM state change handler when stopping the VM, the _RUNNING
bit in device_state is cleared, which makes the VFIO device stop, including
no longer generating interrupts. Then we can save the pending states of
all interrupts in the GIC VM state change handler (on ARM).
So we have to set the priority of the VFIO VM state change handler
explicitly (like virtio devices) to ensure it is called before the
GIC's when saving.
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20210310030233.1133-3-lushenming@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
On ARM64 the VFIO SET_IRQS ioctl is dependent on the VM interrupt
setup; if the VFIO PCI device config space is restored before the
VGIC, an error might occur in the kernel.
So we move the saving of the config space to the non-iterable
process, so that it will be called after the VGIC according to
their priorities.
As for the possible dependence of the device-specific migration
data on its config space, we can let the vendor driver
include any config info it needs in its own data stream.
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
Message-Id: <20210310030233.1133-2-lushenming@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Previous work on dev-iotlb messages broke spapr_iommu/vhost integration
as it did for SMMU and virtio-iommu. The spapr_iommu currently
only sends IOMMU_NOTIFIER_UNMAP notifications. Since commit
958ec334bc ("vhost: Unbreak SMMU and virtio-iommu on dev-iotlb support"),
VHOST first tries to register IOMMU_NOTIFIER_DEVIOTLB_UNMAP notifier
and if it fails, falls back to legacy IOMMU_NOTIFIER_UNMAP. So
spapr_iommu must fail on the IOMMU_NOTIFIER_DEVIOTLB_UNMAP
registration.
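In other words, the flag-change hook has to reject dev-iotlb
registrations so that vhost falls back to the legacy notifier; a hedged
fragment (simplified from the actual hook):

    /* spapr_iommu only generates IOMMU_NOTIFIER_UNMAP events, so refuse
     * device-IOTLB registration and let the caller fall back. */
    if (new & IOMMU_NOTIFIER_DEVIOTLB_UNMAP) {
        error_setg(errp,
                   "spapr_iommu does not support dev-iotlb notifications");
        return -EINVAL;
    }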
Reported-by: Peter Xu <peterx@redhat.com>
Fixes: b68ba1ca57 ("memory: Add IOMMU_NOTIFIER_DEVIOTLB_UNMAP IOMMUTLBNotificationType")
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Message-Id: <20210209213233.40985-3-eric.auger@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>