Here virtio-scsi-ccw is modified for the new API. The virtio-scsi-ccw device
extends virtio-ccw-device as before. It creates and connects a virtio-scsi
device during init. The properties are not modified.
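For illustration, a minimal sketch of the create-and-connect pattern used by
this and the s390/PCI transport patches below, assuming the QOM/qdev helpers
of this series (object_initialize(), object_property_add_child(),
qdev_set_parent_bus(), qdev_init()); the proxy type and field names are
placeholders, not the exact code from the patch:

    /* Illustrative only: a transport proxy embedding its virtio-scsi backend. */
    typedef struct VirtIOSCSIProxy {
        VirtioTransportDevice parent_obj;   /* placeholder transport base type */
        VirtIOSCSI vdev;                    /* embedded virtio-scsi backend */
    } VirtIOSCSIProxy;

    static void virtio_scsi_proxy_instance_init(Object *obj)
    {
        VirtIOSCSIProxy *dev = VIRTIO_SCSI_PROXY(obj);

        /* Create the backend as a QOM child of the transport proxy... */
        object_initialize(&dev->vdev, TYPE_VIRTIO_SCSI);
        object_property_add_child(obj, "virtio-backend", OBJECT(&dev->vdev), NULL);
    }

    static int virtio_scsi_proxy_init(VirtioTransportDevice *proxy)
    {
        VirtIOSCSIProxy *dev = VIRTIO_SCSI_PROXY(proxy);
        DeviceState *vdev = DEVICE(&dev->vdev);

        /* ...and plug it into the transport's virtio bus during init. */
        qdev_set_parent_bus(vdev, BUS(&proxy->bus));
        return qdev_init(vdev) < 0 ? -1 : 0;
    }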
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-8-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Here virtio-scsi-s390 is modified for the new API. The virtio-scsi-s390 device
extends virtio-s390-device as before. It creates and connects a virtio-scsi
device during init. The properties are not modified.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-7-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Here virtio-scsi-pci is modified for the new API. The virtio-scsi-pci device
extends virtio-pci. It creates and connects a virtio-scsi device during init.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-6-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Create a virtio-scsi device that extends virtio-device, so it can be connected
to a virtio-bus.
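As a sketch of what that registration looks like in QOM terms (class-init
callback and other details elided; only the parent/child relationship is the
point here):

    static const TypeInfo virtio_scsi_info = {
        .name          = TYPE_VIRTIO_SCSI,
        .parent        = TYPE_VIRTIO_DEVICE,   /* extends virtio-device */
        .instance_size = sizeof(VirtIOSCSI),
        .class_init    = virtio_scsi_class_init,
    };

    static void virtio_scsi_register_types(void)
    {
        type_register_static(&virtio_scsi_info);
    }

    type_init(virtio_scsi_register_types)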
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-5-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The host_features field is part of the transport device, so move all
host_features-related properties into the transport device.
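For illustration, a sketch of what such transport-side properties look like
(shown for a hypothetical PCI proxy; the property names and feature bits here
are the usual virtio ring features, not necessarily the exact set touched by
this patch):

    static Property virtio_scsi_pci_properties[] = {
        /* Feature bits are declared on the transport and operate on its
         * host_features field. */
        DEFINE_PROP_BIT("indirect_desc", VirtIOPCIProxy, host_features,
                        VIRTIO_RING_F_INDIRECT_DESC, true),
        DEFINE_PROP_BIT("event_idx", VirtIOPCIProxy, host_features,
                        VIRTIO_RING_F_EVENT_IDX, true),
        DEFINE_PROP_END_OF_LIST(),
    };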
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-4-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Allocate/free the cmd_vqs array separately, so that the device structure has a
fixed size.
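A minimal sketch of the resulting allocate/free pair (function names are
illustrative):

    static void virtio_scsi_alloc_cmd_vqs(VirtIOSCSI *s)
    {
        /* Keep the VirtQueue pointers in a side allocation so that
         * sizeof(VirtIOSCSI) stays fixed regardless of num_queues. */
        s->cmd_vqs = g_malloc0(s->conf.num_queues * sizeof(VirtQueue *));
    }

    static void virtio_scsi_free_cmd_vqs(VirtIOSCSI *s)
    {
        g_free(s->cmd_vqs);
        s->cmd_vqs = NULL;
    }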
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-3-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
The configuration field must not be a pointer, as it will be used for
virtio-scsi properties, so *conf is replaced by conf.
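Sketched on the device structure (surrounding fields omitted):

    struct VirtIOSCSI {
        VirtIODevice vdev;
        /* was: VirtIOSCSIConf *conf;  (pointer to externally owned config) */
        VirtIOSCSIConf conf;    /* embedded, so qdev properties for virtio-scsi
                                   can reference fields at fixed offsets */
    };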
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-id: 1363875320-7985-2-git-send-email-fred.konrad@greensocs.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
# By Corey Bryant (2) and others
# Via Luiz Capitulino
* luiz/queue/qmp:
New QMP command query-cpu-max and HMP command cpu_max
qmp: fix handling of boolean values in qmp-shell
QMP: TPM QMP and man page documentation updates
QMP: Remove duplicate TPM type from query-tpm
This removes an unneeded copy of guest memory pages.
For the page header and device state we still copy the data to the static
buffer; the other option is to allocate the memory on demand, which is more
expensive.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This allows us to add a buffer to the iovec to be sent without copying it into
the static buffer; the buffer will be sent later, when qemu_fflush is called.
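A self-contained sketch of the idea, with hypothetical names (the real code
keeps such an iovec inside QEMUFile next to its static buffer); the caller must
keep the buffer alive until the next flush:

    #include <stddef.h>
    #include <sys/uio.h>

    #define MAX_IOV 64

    struct deferred_out {
        struct iovec iov[MAX_IOV];
        int iovcnt;
    };

    /* Queue a caller-owned buffer for a later writev() instead of
     * memcpy()ing it into a static staging buffer. */
    static int queue_buffer(struct deferred_out *o, const void *buf, size_t len)
    {
        if (o->iovcnt == MAX_IOV) {
            return -1;                      /* caller should flush first */
        }
        o->iov[o->iovcnt].iov_base = (void *)buf;
        o->iov[o->iovcnt].iov_len = len;
        o->iovcnt++;
        return 0;
    }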
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Update qemu_fflush and stdio_close to use writev ops if they are available,
using the buffers stored in the iovec.
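Roughly, the flush path becomes the following (hedged sketch; have_writev
stands in for checking whether the backend provides a writev op):

    #include <sys/uio.h>
    #include <unistd.h>

    static ssize_t flush_iov(int fd, const struct iovec *iov, int iovcnt,
                             int have_writev)
    {
        if (have_writev) {
            /* One vectored write for all queued buffers. */
            return writev(fd, iov, iovcnt);
        }
        /* Fallback: write the buffers one by one. */
        ssize_t total = 0;
        for (int i = 0; i < iovcnt; i++) {
            ssize_t n = write(fd, iov[i].iov_base, iov[i].iov_len);
            if (n < 0) {
                return n;
            }
            total += n;
        }
        return total;
    }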
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
All data is still copied into the static buffer.
Adjacent iovecs are coalesced so we send one big buffer
instead of many small buffers.
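A sketch of the coalescing rule, with hypothetical names; the check is simply
whether the new chunk starts exactly where the previous entry ends, which is
the common case when data keeps being appended to the same static buffer:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/uio.h>

    static int add_to_iovec(struct iovec *iov, int *iovcnt, int max,
                            const void *buf, size_t len)
    {
        if (*iovcnt > 0 &&
            (const uint8_t *)iov[*iovcnt - 1].iov_base + iov[*iovcnt - 1].iov_len
                == (const uint8_t *)buf) {
            iov[*iovcnt - 1].iov_len += len;   /* adjacent: extend previous entry */
            return 0;
        }
        if (*iovcnt == max) {
            return -1;                          /* caller must flush first */
        }
        iov[*iovcnt].iov_base = (void *)buf;
        iov[*iovcnt].iov_len = len;
        (*iovcnt)++;
        return 0;
    }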
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This will allow us to write an iovec.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
At the beginning of migration all pages are marked dirty, and in the first
round a bulk migration of all pages is performed. Currently all of these pages
are copied to the page cache regardless of whether they are frequently updated
or not. This doesn't make sense, since most of these pages are never
transferred again.
This patch changes the XBZRLE transfer to only be used after the bulk stage
has been completed. That means a page is added to the page cache the second
time it is transferred, and XBZRLE can benefit from the third transfer onwards.
Since the page cache is likely smaller than the number of pages, it is also
likely that in the second round a page is missing from the cache due to
collisions in the bulk phase. On the other hand, a lot of unnecessary mallocs,
memdups and frees are saved.
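In outline, the decision introduced here looks like this (hedged sketch; the
helper name and flags are illustrative, not the code as committed):

    #include <stdbool.h>

    typedef enum { SEND_NORMAL, SEND_XBZRLE } send_how;

    /* The XBZRLE page cache is only consulted once the bulk stage is over,
     * so a page enters the cache on its second transfer at the earliest and
     * XBZRLE itself can pay off from the third transfer on. */
    static send_how choose_encoding(bool bulk_stage, bool xbzrle_enabled)
    {
        if (bulk_stage || !xbzrle_enabled) {
            return SEND_NORMAL;     /* first full pass: no cache insert/lookup */
        }
        return SEND_XBZRLE;         /* later passes: cache the page, try XBZRLE */
    }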
The following results were taken earlier while executing the test program
from docs/xbzrle.txt, (+) with the patch and (-) without.
(Thanks to Eric Blake for reformatting and comments.)
+ total time: 22185 milliseconds
- total time: 22410 milliseconds
Shaved 0.3 seconds, better than 1%!
+ downtime: 29 milliseconds
- downtime: 21 milliseconds
Not sure why downtime seemed worse, but probably not the end of the world.
+ transferred ram: 706034 kbytes
- transferred ram: 721318 kbytes
Fewer bytes sent - good.
+ remaining ram: 0 kbytes
- remaining ram: 0 kbytes
+ total ram: 1057216 kbytes
- total ram: 1057216 kbytes
+ duplicate: 108556 pages
- duplicate: 105553 pages
+ normal: 175146 pages
- normal: 179589 pages
+ normal bytes: 700584 kbytes
- normal bytes: 718356 kbytes
Fewer normal bytes...
+ cache size: 67108864 bytes
- cache size: 67108864 bytes
+ xbzrle transferred: 3127 kbytes
- xbzrle transferred: 630 kbytes
...and more compressed pages sent - good.
+ xbzrle pages: 117811 pages
- xbzrle pages: 21527 pages
+ xbzrle cache miss: 18750
- xbzrle cache miss: 179589
And very good improvement on the cache miss rate.
+ xbzrle overflow : 0
- xbzrle overflow : 0
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Avoid searching for dirty pages; just increment the page offset.
During the bulk stage all pages are dirty anyway.
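A self-contained sketch of the shortcut (hypothetical names; QEMU's real code
uses find_next_bit() on the migration bitmap for the non-bulk case):

    #include <stdbool.h>
    #include <stddef.h>

    static size_t next_dirty_page(bool bulk_stage,
                                  const unsigned char *dirty_bitmap,
                                  size_t current, size_t nb_pages)
    {
        if (bulk_stage) {
            return current + 1;             /* every page is dirty anyway */
        }
        for (size_t i = current + 1; i < nb_pages; i++) {
            if (dirty_bitmap[i / 8] & (1u << (i % 8))) {
                return i;                   /* normal dirty-bitmap search */
            }
        }
        return nb_pages;                    /* no more dirty pages */
    }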
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
During the bulk stage of RAM migration, if a page is a zero page, do not send
it at all. The memory at the destination reads as zero anyway.
Even if there is an madvise with QEMU_MADV_DONTNEED at the target upon receipt
of a zero page, I have observed that the target starts swapping if the memory
is overcommitted; it seems that the pages are dropped asynchronously.
This patch also updates QMP to return the number of skipped pages in
MigrationStats.
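For illustration, a stand-alone sketch of the check and the accounting (names
are hypothetical; in QEMU the new counter lives in MigrationStats):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* A page that reads as all zeroes is counted as skipped instead of being
     * sent; the destination's memory reads as zero anyway. */
    static bool page_is_zero(const uint8_t *p, size_t len)
    {
        return p[0] == 0 && memcmp(p, p + 1, len - 1) == 0;
    }

    int main(void)
    {
        static uint8_t ram[4 * PAGE_SIZE];  /* stand-in for guest RAM, zeroed */
        size_t sent = 0, skipped = 0;

        ram[2 * PAGE_SIZE + 7] = 0xab;      /* dirty exactly one page */

        for (size_t off = 0; off < sizeof(ram); off += PAGE_SIZE) {
            if (page_is_zero(&ram[off], PAGE_SIZE)) {
                skipped++;                  /* bulk stage: do not send at all */
            } else {
                sent++;                     /* would go out as a normal page */
            }
        }
        printf("sent %zu page(s), skipped %zu zero page(s)\n", sent, skipped);
        return 0;
    }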
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The first round of RAM transfer is special, since all pages are dirty and thus
all memory pages are transferred to the target. This patch adds a boolean
variable to track this stage.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Virtually all duplicate pages are zero pages. Remove the special is_dup_page()
function and use the optimized buffer_find_nonzero_offset() function instead.
Here buffer_find_nonzero_offset() is used directly to avoid the unnecessary
additional checks in buffer_is_zero().
The raw performance gain when checking 1 GByte of zeroed memory over
is_dup_page() is approx. 10-12% with SSE2 and 8-10% with unsigned long
arithmetic.
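For reference, a plain unsigned-long sketch of the pattern (the SSE2 variant is
sketched under the buffer_find_nonzero_offset() patch below); it assumes an
aligned buffer whose length is a multiple of sizeof(long):

    #include <stddef.h>

    static size_t find_nonzero_offset_ulong(const void *buf, size_t len)
    {
        const unsigned long *p = buf;
        size_t i;

        for (i = 0; i < len / sizeof(unsigned long); i++) {
            if (p[i]) {
                break;                      /* first word with non-zero content */
            }
        }
        return i * sizeof(unsigned long);
    }

    /* The zero-page check then reduces to "no non-zero content found". */
    static int is_zero_page(const void *page, size_t size)
    {
        return find_nonzero_offset_ulong(page, size) == size;
    }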
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This patch adopts the loop unrolling idea of bitmap_is_zero() to speed up the
skipping of large areas of zeros in find_next_bit().
This routine is used extensively to find dirty pages in live migration.
Testing only the find_next_bit() performance on a zeroed bitfield, the loop
unrolling decreased execution time by approx. 50% on x86_64.
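A self-contained sketch of the unrolling idea: when scanning whole words for
the next set bit, test four words at a time so long zero runs cost one OR plus
one branch instead of four branches (names here are illustrative):

    #include <stddef.h>

    static size_t skip_zero_words(const unsigned long *bitmap, size_t nwords)
    {
        size_t i = 0;

        while (i + 4 <= nwords &&
               (bitmap[i] | bitmap[i + 1] | bitmap[i + 2] | bitmap[i + 3]) == 0) {
            i += 4;                         /* skip 4 zero words at once */
        }
        while (i < nwords && bitmap[i] == 0) {
            i++;                            /* finish word by word */
        }
        return i;                           /* index of first non-zero word */
    }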
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The performance gain with SSE2 is approx. 20-25%. Altivec is not tested.
Performance for unsigned long arithmetic is unchanged.
Signed-off-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This adds buffer_find_nonzero_offset(), an SSE2/Altivec optimized function
that searches for non-zero content in a buffer.
The function starts full unrolling only after the first few chunks have been
checked one by one: analyzing real memory page data has revealed that non-zero
pages are non-zero within the first 256-512 bits in most cases. As this
function is also heavily used to check for zero memory pages, this tweak has
been made to avoid the high setup costs of the fully unrolled check for
non-zero pages.
Due to the optimizations used in the function there are restrictions on buffer
address and search length; the function can_use_buffer_find_nonzero_content()
can be used to check whether the function can be used safely.
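A hedged, self-contained SSE2 sketch of the shape of such a function (not the
committed implementation; chunk counts and the unroll factor are illustrative).
Like the real helper, it assumes a 16-byte aligned buffer and a length that is
a multiple of 64 bytes, which is what the companion can-use check guards:

    #include <emmintrin.h>      /* SSE2 intrinsics */
    #include <stddef.h>

    static size_t find_nonzero_offset_sse2(const void *buf, size_t len)
    {
        const __m128i *p = buf;
        const __m128i zero = _mm_setzero_si128();
        size_t nchunks = len / sizeof(__m128i);
        size_t i;

        /* Head: check the first few 16-byte chunks one by one, since most
         * non-zero pages are non-zero within the first few hundred bits. */
        for (i = 0; i < nchunks && i < 4; i++) {
            if (_mm_movemask_epi8(_mm_cmpeq_epi8(p[i], zero)) != 0xffff) {
                return i * sizeof(__m128i);
            }
        }
        /* Body: only now switch to the unrolled scan, 4 chunks at a time. */
        for (; i + 4 <= nchunks; i += 4) {
            __m128i acc = _mm_or_si128(_mm_or_si128(p[i], p[i + 1]),
                                       _mm_or_si128(p[i + 2], p[i + 3]));
            if (_mm_movemask_epi8(_mm_cmpeq_epi8(acc, zero)) != 0xffff) {
                break;                      /* non-zero data in this group */
            }
        }
        /* Tail: pin down the exact chunk within the remaining data. */
        for (; i < nchunks; i++) {
            if (_mm_movemask_epi8(_mm_cmpeq_epi8(p[i], zero)) != 0xffff) {
                break;
            }
        }
        return i * sizeof(__m128i);
    }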
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Vector optimizations will now be used in various places, not just in
is_dup_page() in arch_init.c.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The VMSTATE_BUFFER_MULTIPLY macro is misnamed - it actually specifies
a variably sized buffer with VMS_VBUFFER, so it should be named
VMSTATE_VBUFFER_MULTIPLY. This patch fixes that (the macro had no current
users under either name).
In addition, unlike the other VMSTATE_VBUFFER variants, this macro did not
specify VMS_POINTER. This patch fixes this bug as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Currently the savevm code contains a VMSTATE_STRUCT_VARRAY_POINTER_INT32
helper (a variably sized array with the number of elements in an int32_t),
but not VMSTATE_STRUCT_VARRAY_POINTER_UINT32 (... with the number of
elements in a uint32_t). This patch (trivially) fixes the deficiency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The current savevm code includes VMSTATE helpers for a number of commonly
used data types, but not for the float64 type used by the internal floating
point emulation code. This patch fixes the deficiency.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
This adds an _EQUAL VMSTATE helper for target_ulongs, defined in terms of
VMSTATE_UINT32_EQUAL or VMSTATE_UINT64_EQUAL as appropriate.
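A sketch of the dispatch, written with an assumed macro name following the
existing UINTTL convention (the actual name in the patch may differ):

    /* target_ulong is 32 or 64 bits depending on the target, so the _EQUAL
     * helper just forwards to the matching fixed-width variant. */
    #if TARGET_LONG_BITS == 64
    #define VMSTATE_UINTTL_EQUAL(_f, _s) VMSTATE_UINT64_EQUAL(_f, _s)
    #else
    #define VMSTATE_UINTTL_EQUAL(_f, _s) VMSTATE_UINT32_EQUAL(_f, _s)
    #endif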
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The savevm code already includes a number of *_EQUAL helpers which act as
sanity checks, verifying that the configuration of the saved state matches
that of the machine we're loading into. Variants already exist for 8-bit,
16-bit and 32-bit integers, but not for 64-bit integers. This patch fills
that hole, adding a UINT64 version.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Juan Quintela <quintela@redhat.com>
These commands return the maximum number of CPUs supported by the
currently running emulator instance, as defined in its QEMUMachine
struct.
Signed-off-by: Michal Novotny <minovotn@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
qmp-shell converts only integer arguments; the rest are assumed to be strings,
which are faithfully sent as quoted strings in JSON. But QEMU refuses to
accept a QMP command with a boolean argument whose value is escaped as a
string.
Fix it by special-casing the true/false keywords and storing the value as the
corresponding boolean.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
# By Dmitry Fleytman (5) and others
# Via Stefan Hajnoczi
* stefanha/net:
net: increase buffer size to accommodate Jumbo frame pkts
VMXNET3 device implementation
Packet abstraction for VMWARE network devices
Common definitions for VMWARE devices
net: iovec checksum calculator
Checksum-related utility functions
net: use socket_set_nodelay() for -netdev socket
# By Liu Yuan (1) and Stefan Weil (1)
# Via Stefan Hajnoczi
* stefanha/block:
block: Add options QDict to bdrv_file_open() prototypes (fix MinGW build)
rbd: fix compile error
This solves, e.g., sticky ALT when selecting a GTK menu, switching to a
different window or selecting a different virtual console.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 514F417A.6010908@web.de
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Socket buffer sizes were hard-coded to 4K for the VDE and socket netdevs. Bump
this up to 68K (as the tap netdev does) to handle the maximum GSO packet size
(64K) plus plenty of room for the Ethernet and virtio_net headers.
Originally, I ran into this limitation when using -netdev UDP sockets to
connect VM-to-VM, where the VM interface is configured with MTU=9000 (using
the virtio_net NIC model). The test is simple: ping -M do -s 8500 <target>.
This test attempts to ping with an unfragmented packet of the given size.
Without the patch, the size is limited to < 4K (minus protocol headers). With
the patch, the ping test works with packet sizes up to 9000 (again, minus
protocol headers).
v2: per Stefan, increase buf size to (4096+65536) as done in tap and apply
to vde and socket netdevs.
v1: increase buf size to 12K just for -netdev UDP sockets
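In other words (constant name hypothetical):

    /* 4096 bytes of headroom for Ethernet/virtio-net headers plus the 64 KiB
     * maximum GSO payload, i.e. 69632 bytes (~68K), matching the tap netdev. */
    #define NET_SOCKET_BUFSIZE (4096 + 65536)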
Signed-off-by: Scott Feldman <sfeldma@cumulusnetworks.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
net_checksum_add_cont(): checksum calculation for scattered data with odd
chunk sizes.
net_raw_checksum(): checksum calculation for a buffer.
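A self-contained sketch of the odd-offset handling such helpers need (names
and signatures here are illustrative, not QEMU's exact API): the running
16-bit one's-complement sum must know whether a chunk starts at an even or odd
byte offset of the overall stream, so each byte lands in the right half of its
16-bit word.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Continue a checksum over one chunk; seq is the chunk's starting offset
     * within the overall data stream. */
    static uint32_t checksum_add_cont(const uint8_t *buf, size_t len, size_t seq)
    {
        uint32_t sum = 0;

        for (size_t i = 0; i < len; i++) {
            if ((seq + i) & 1) {
                sum += buf[i];                    /* odd offset: low byte */
            } else {
                sum += (uint32_t)buf[i] << 8;     /* even offset: high byte */
            }
        }
        return sum;
    }

    static uint16_t checksum_finish(uint32_t sum)
    {
        while (sum >> 16) {
            sum = (sum & 0xffff) + (sum >> 16);   /* fold the carries */
        }
        return (uint16_t)(~sum & 0xffff);
    }

    int main(void)
    {
        const uint8_t part1[] = { 0x45, 0x00, 0x00 };   /* odd-sized chunk */
        const uint8_t part2[] = { 0x1c, 0x12, 0x34 };
        uint32_t sum;

        sum = checksum_add_cont(part1, sizeof(part1), 0);
        sum += checksum_add_cont(part2, sizeof(part2), sizeof(part1));
        printf("checksum: 0x%04x\n", (unsigned)checksum_finish(sum));
        return 0;
    }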
Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Yan Vugenfirer <yan@daynix.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reduce -netdev socket latency by disabling the Nagle algorithm on SOCK_STREAM
sockets in net/socket.c. Since we are tunnelling Ethernet over TCP we
shouldn't artificially delay outgoing packets; let the guest decide packet
scheduling.
I already get sub-millisecond -netdev socket ping times on localhost, so there
was no measurable difference in my testing. This won't hurt, though, and may
improve remote socket performance.
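The mechanism is the standard TCP_NODELAY socket option; a minimal sketch
(error handling reduced to returning the setsockopt result):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle on a connected SOCK_STREAM socket so small tunnelled
     * Ethernet frames go out immediately instead of being batched. */
    static int set_nodelay(int fd)
    {
        int v = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &v, sizeof(v));
    }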
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
The new parameter is not used yet.
This part was missing in commit 787e4a8500.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Commit 787e4a85 [block: Add options QDict to bdrv_file_open() prototypes] didn't
update rbd.c accordingly.
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>