running tests/virtio-9p-test on SPARC hosts.
-----BEGIN PGP SIGNATURE-----
iEYEABECAAYFAljaIlUACgkQAvw66wEB28KzKQCfZRTq74rKjFUv20D0ur+8qHb5
iFwAn12UyalKt14ztoKRGyfGyYZjWe13
=XeLy
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/gkurz/tags/for-upstream' into staging
This series fixes potential memory/fd leaks in 9pfs and a crash when
running tests/virtio-9p-test on SPARC hosts.
# gpg: Signature made Tue 28 Mar 2017 09:44:05 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
tests/virtio-9p-test: Don't call le*_to_cpus on fields of packed struct
9pfs: fix file descriptor leak
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For a packed struct like 'P9Hdr' the fields within it may not be
aligned as much as the natural alignment for their types. This means
it is not valid to pass the address of such a field to a function
like le32_to_cpus(), which operates on a uint32_t * and assumes alignment.
Doing this results in a SIGBUS on hosts like SPARC which have strict
alignment requirements.
Use ldl_le_p() instead, which is specified to correctly handle
unaligned pointers.
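A minimal sketch of the difference (the struct layout is illustrative, not
necessarily the exact test header):

    typedef struct {
        uint32_t size;
        uint8_t  id;
        uint16_t tag;
    } QEMU_PACKED P9Hdr;

    /* given P9Hdr *hdr pointing into a received message: */

    /* Wrong: &hdr->size may be unaligned, and le32_to_cpus() dereferences it
     * as a uint32_t *, which SIGBUSes on strict-alignment hosts like SPARC. */
    le32_to_cpus(&hdr->size);

    /* Right: ldl_le_p() is defined to cope with unaligned pointers. */
    uint32_t size = ldl_le_p(&hdr->size);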
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
The v9fs_create() and v9fs_lcreate() functions are used to create a file
on the backend and to associate it with a fid. The fid shouldn't already be
in use, otherwise both functions may silently leak a file descriptor or
allocated memory. The current code doesn't check for that.
This patch ensures that the fid isn't already associated to anything
before using it.
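A hedged sketch of the shape of the check (variable, field and label names
follow common 9pfs conventions but are assumptions, not the exact patch):

    fidp = get_fid(pdu, fid);
    if (fidp == NULL) {
        err = -ENOENT;
        goto out_nofid;
    }
    /* A fid that already carries an open file or allocation must be refused,
     * otherwise the old file descriptor/memory would be silently leaked. */
    if (fidp->fid_type != P9_FID_NONE) {
        err = -EINVAL;
        goto out;
    }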
Signed-off-by: Li Qiang <liqiang6-s@360.cn>
(reworded the changelog, Greg Kurz)
Signed-off-by: Greg Kurz <groug@kaod.org>
On OpenBSD none of the ioctls probe_logical_blocksize() tries
exist, so the variable sector_size is unused. Refactor the
code to avoid this (and reduce the duplicated code).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1490279788-12995-1-git-send-email-peter.maydell@linaro.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
When opt_xfer_len is zero, Linux ignores max_xfer_len erroneously.
While that obviously should be fixed, we do older guests a favor by always
filling in a value.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170327142625.1249-1-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Success for bdrv_flush() means that all previously written data is safe
on disk. For fdatasync(), the best semantics we can hope for on Linux
(without O_DIRECT) is that all data that was written since the last call
was successfully written back. Therefore, and because we can't redo all
writes after a flush failure, we have to give up after a single
fdatasync() failure. After this failure, we would never be able to make
the promise that a successful bdrv_flush() makes.
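Schematically, the flush callback has to latch the first failure (names below
are illustrative, not the actual file-posix code):

    static int coroutine_fn raw_flush_sketch(BlockDriverState *bs)
    {
        BDRVRawState *s = bs->opaque;

        /* Illustrative flag: once fdatasync() has failed, data written before
         * that point may already be lost, so never report success again. */
        if (s->flush_failed) {
            return -EIO;
        }
        if (fdatasync(s->fd) < 0) {
            s->flush_failed = true;
            return -errno;
        }
        return 0;
    }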
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20170322210005.16533-1-kwolf@redhat.com
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
After the switch to reading replies in a coroutine, nothing is
reentering pending receive coroutines if the connection hangs.
Move nbd_recv_coroutines_enter_all to the reply read coroutine,
which is the place where hangups are detected. nbd_teardown_connection
can simply wait for the reply read coroutine to detect the hangup
and clean up after itself.
This wouldn't be enough, though, because nbd_receive_reply returns 0
(rather than -EPIPE or similar) when reading from a hung connection.
Fix the return value check in nbd_read_reply_entry.
This fixes qemu-iotests 083.
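The shape of the fix in nbd_read_reply_entry is roughly as follows (loop body
elided; treating 0 as a hangup is the key change):

    for (;;) {
        ret = nbd_receive_reply(s->ioc, &reply);
        if (ret <= 0) {
            /* 0 means the server closed the connection: bail out instead of
             * looping as if a reply had arrived. */
            break;
        }
        /* ... dispatch the reply to the waiting request coroutine ... */
    }
    /* Wake every pending receive coroutine so it can see the hangup. */
    nbd_recv_coroutines_enter_all(s);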
Reported-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20170314111157.14464-1-pbonzini@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Printing the full help output obscures the error message for an invalid
command-line option or missing argument.
Before this patch:
$ ./qemu-img --foo
...pages of output...
After this patch:
$ ./qemu-img --foo
qemu-img: unrecognized option '--foo'
Try 'qemu-img --help' for more information
This patch adds the getopt ':' character so that it can distinguish
between missing arguments and unrecognized options. This helps provide
more detailed error messages.
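The mechanism is standard getopt: a leading ':' in the option string makes a
missing argument come back as ':' instead of '?', so the two cases can print
different messages. A generic illustration (not the literal qemu-img code):

    c = getopt_long(argc, argv, ":hf:", long_options, NULL);
    switch (c) {
    case ':':
        error_exit("missing argument for option '%s'", argv[optind - 1]);
        break;
    case '?':
        error_exit("unrecognized option '%s'", argv[optind - 1]);
        break;
    /* ... regular options ... */
    }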
Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170317104541.28979-4-stefanha@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
QEMU coding style indents 'case' to the same level as the 'switch'
statement:
switch (foo) {
case 1:
Fix this coding style violation so checkpatch.pl doesn't complain about
the next patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170317104541.28979-3-stefanha@redhat.com
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The qemu-img sub-command executes regardless of invalid global options:
$ qemu-img --foo info test.img
qemu-img: unrecognized option '--foo'
image: test.img
...
The unrecognized option warning may be missed by the user. This can
hide incorrect command-lines in scripts and confuse users.
This patch prints the help information and terminates instead of
executing the sub-command.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170317104541.28979-2-stefanha@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This reverts commit 07bfa35477.
The global variable is only read as part of a
apic_reset_irq_delivered();
qemu_irq_raise(s->irq);
if (!apic_get_irq_delivered()) {
sequence, so the value never matters at migration time.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170327123223.1199-1-stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The multithreaded TCG implementation exposed deadlocks in the win32
condition variables: as implemented, qemu_cond_broadcast waited on
receivers, whereas the pthreads API it was intended to emulate does
not. This was causing a deadlock: broadcast was called while holding the
IO lock, and all potential waiters were blocked on that same lock.
This patch replaces all the custom synchronisation code for mutexes
and condition variables with native Windows primitives (SRWlocks and
condition variables) with the same semantics as their POSIX
equivalents. This requires a Windows Vista or newer host OS.
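For reference, the native Vista+ primitives compose the same way as their
pthreads counterparts; a minimal, self-contained illustration (not the
qemu-thread-win32 code itself):

    #include <windows.h>
    #include <stdbool.h>

    static SRWLOCK lock = SRWLOCK_INIT;
    static CONDITION_VARIABLE cond = CONDITION_VARIABLE_INIT;
    static bool work_available;

    static void wait_for_work(void)
    {
        AcquireSRWLockExclusive(&lock);
        while (!work_available) {
            /* Atomically drops the lock while sleeping, re-acquires on wakeup. */
            SleepConditionVariableSRW(&cond, &lock, INFINITE, 0);
        }
        work_available = false;
        ReleaseSRWLockExclusive(&lock);
    }

    static void post_work(void)
    {
        AcquireSRWLockExclusive(&lock);
        work_available = true;
        ReleaseSRWLockExclusive(&lock);
        /* Unlike the old custom implementation, this never blocks on waiters. */
        WakeAllConditionVariable(&cond);
    }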
Signed-off-by: Andrey Shedel <ashedel@microsoft.com>
[AB: edited commit message]
Signed-off-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Message-Id: <20170324220141.10104-1-Andrew.Baumann@microsoft.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The VNC server in reverse mode (qemu -vnc localhost:$nr,reverse) interprets
$nr as a display number (i.e. with a 5900 offset) in recent qemu versions.
The historical and documented behavior is to interpret $nr as a port number
though, so we should bring code and documentation back in line.
Given that the default listening port for viewers is 5500, the 5900 offset is
pretty inconvenient, because it makes it simply impossible to connect to port
5500. So let's fix the code, not the docs.
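With the fix, the documented usage works again, e.g. with a viewer already
listening on its default reverse port 5500:

    qemu-system-x86_64 ... -vnc localhost:5500,reverse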
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1489480018-11443-1-git-send-email-kraxel@redhat.com
Unfortunately switching to getPlatformDisplayEXT isn't as easy as
implemented by 0ea1523fb6. See the
longish comment for the complete story.
Cc: Frediano Ziglio <fziglio@redhat.com>
Suggested-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 1489997042-1824-1-git-send-email-kraxel@redhat.com
Should be "c" not "col". The macro is used with "col" as third parameter
everywhere, so this tyops doesn't break something.
Fixes: 026aeffcb4
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1490168303-24588-1-git-send-email-kraxel@redhat.com
virtio_input_send buffers input events until it sees a SYNC. Then it
either sends or drops the entire batch, depending on whether eventq
has enough space available. The case to avoid here is partial sends
where only part of the batch would get to the guest.
Using virtqueue_get_avail_bytes to check the state of eventq was not
correct. The queue may have a smaller number of larger buffers available,
so the byte count may be sufficient while the batch still cannot be sent,
leading to the "Huh? No vq elem available" error.
Instead of checking available bytes, this patch optimistically pops
buffers from the queue and puts them back in case it runs out of
space and the batch needs to be dropped.
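One possible shape of that pop-then-put-back pattern (field names such as evt
and qindex follow virtio-input, but the bound and the helper used to return
elements, virtqueue_unpop here, are assumptions):

    VirtQueueElement *elems[VIRTIO_INPUT_BATCH]; /* illustrative bound */
    int i;

    for (i = 0; i < vinput->qindex; i++) {
        elems[i] = virtqueue_pop(vinput->evt, sizeof(VirtQueueElement));
        if (!elems[i]) {
            /* Not enough buffers for the whole batch: give back what we
             * already took and drop the batch instead of sending part of it. */
            while (i--) {
                virtqueue_unpop(vinput->evt, elems[i], 0);
            }
            vinput->qindex = 0;
            return;
        }
    }
    /* The whole batch fits: copy the events, push the elements, notify. */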
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Message-id: 1490365490-4854-3-git-send-email-lprosek@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
VirtIOInput.queue was never freed. This commit adds an explicit
g_free to virtio_input_finalize and switches the allocation
function from realloc to g_realloc in virtio_input_send.
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Message-id: 1490365490-4854-2-git-send-email-lprosek@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
A QEMU with an empty s390 guest will exit very quickly. This races
against the testsuite reading from the console pipe leading to
intermittent test suite failures. Using -no-shutdown will keep
the guest running.
Fixes: 864111f422 (vl: exit qemu on guest panic if -no-shutdown is not set)
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1490361570-288658-1-git-send-email-borntraeger@de.ibm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This was spotted by Coverity, in the case where sysconf(_SC_NPROCESSORS_ONLN)
fails and returns -1. This results in memset_num_threads getting set to -1,
which we then pass to g_new0().
The patch replaces the MAX_MEM_PREALLOC_THREAD_COUNT macro with a function,
get_memset_num_threads(), to handle sysconf() failure gracefully. In case
sysconf() fails, we fall back to a single thread.
(Spotted by Coverity, CID 1372465.)
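Roughly what the helper looks like (a sketch; the exact bounds may differ):

    static int get_memset_num_threads(int smp_cpus)
    {
        long host_procs = sysconf(_SC_NPROCESSORS_ONLN);
        int ret = 1;    /* single-threaded fallback if sysconf() fails */

        if (host_procs > 0) {
            ret = MIN(MIN(host_procs, MAX_MEM_PREALLOC_THREAD_COUNT), smp_cpus);
        }
        return ret;
    }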
Signed-off-by: Jitendra Kolhe <jitendra.kolhe@hpe.com>
Message-Id: <1490079006-32495-1-git-send-email-jitendra.kolhe@hpe.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This fixes the bug: 'user-to-root privesc inside VM via bad translation
caching' reported by Jann Horn here:
https://bugs.chromium.org/p/project-zero/issues/detail?id=1122
Reviewed-by: Richard Henderson <rth@twiddle.net>
CC: Peter Maydell <peter.maydell@linaro.org>
CC: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20170323175851.14342-1-bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After the AioContext lock push down, there is a race between
virtio_scsi_dataplane_start and the "assert(s->ctx &&
s->dataplane_started)" checks, because the latter aren't wrapped in
aio_context_acquire.
Reproducer is simply booting a Fedora guest with an empty
virtio-scsi-dataplane controller:
qemu-system-x86_64 \
-drive if=none,id=root,format=raw,file=Fedora-Cloud-Base-25-1.3.x86_64.raw \
-device virtio-scsi \
-device scsi-disk,drive=root,bootindex=1 \
-object iothread,id=io \
-device virtio-scsi-pci,iothread=io \
-net user,hostfwd=tcp::10022-:22 -net nic,model=virtio -m 2048 \
--enable-kvm
Fix this by moving acquire/release pairs from virtio_scsi_handle_*_vq to
their callers, and wrap the previously unprotected assertions inside them.
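Illustratively, each caller now brackets the _vq helper itself (a sketch, not
the literal patch):

    /* e.g. in the .handle_output callback for the command queue */
    virtio_scsi_acquire(s);            /* aio_context_acquire(s->ctx) underneath */
    virtio_scsi_handle_cmd_vq(s, vq);  /* assertions on s->ctx are now protected */
    virtio_scsi_release(s);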
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170317061447.16243-3-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
They will be used in virtio-scsi-dataplane.c as well, so move them to
header.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170317061447.16243-2-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Clear the pending status before calling memory commit.
Otherwise when memory_region_finalize is called,
memory_region_transaction_depth is 0 and
memory_region_update_pending is true.
That's wrong.
Signed-off-by: Anthony Xu <anthony.xu@intel.com>
Message-Id: <4712D8F4B26E034E80552F30A67BE0B1A2E3D5@ORSMSX112.amr.corp.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The REG_PC define in disas/microblaze.c clashes with a define in
the Linux SPARC system headers:
/home/pm215/qemu/disas/microblaze.c:162:0: error: "REG_PC" redefined [-Werror]
#define REG_PC 32 /* PC */
In file included from /usr/include/signal.h:326:0,
from /home/pm215/qemu/include/qemu/osdep.h:86,
from /home/pm215/qemu/disas/microblaze.c:36:
/usr/include/sparc64-linux-gnu/sys/ucontext.h:96:0: note: this is the location of the previous definition
#define REG_PC (1)
Since the code doesn't actually use the REG_PC define
anywhere, the simplest fix is just to remove it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1490272961-1128-1-git-send-email-peter.maydell@linaro.org
hw/i386/trace-events has an amdvi_mmio_read trace that is used for
both normal reads (listing the register name, address, size, and
offset) and for an error case (abusing the register name to show
an error message, the address to show the maximum value supported,
then shoehorning address and size into the size and offset
parameters). The change from a wide address to a narrower size
parameter could truncate a (rather-large) bogus read attempt, so
it's better to create a separate dedicated trace with correct types,
rather than abusing the trace mechanism. Broken since its
introduction in commit d29a09c.
[Change trace event argument type from hwaddr to uint64_t since
user-defined types should not be used for trace events. This fixes a
build failure with LTTng UST.
--Stefan]
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
hw/scsi/trace-events lists cmd as the first parameter for both
megasas_iovec_overflow and megasas_iovec_underflow, but the caller
was mistakenly passing cmd->iov_size twice instead of the command
index. Also, trace_megasas_abort_invalid is called with parameters
in the wrong order. Broken since its introduction in commit
e8f943c3.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
block/trace-events lists the parameters for mirror_yield
consistently with other mirror events (cnt just after s, like in
mirror_before_sleep; in_flight last, like in mirror_yield_in_flight).
But the callers were passing parameters in the wrong order, leading
to poor trace messages, including type truncation when there are
more than 4G dirty sectors involved. Broken since its introduction
in commit bd48bde.
While touching this, ensure that all callers use the same type
(uint64_t) for cnt, as a later patch will enable the compiler to do
stricter type-checking.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Commit 9a6d1ac assumed that 'qom-type' could be removed from QemuOpts
with no ill effects. However, this command line proves otherwise:
$ ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -nographic -qmp stdio \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-pci,rng=rng0
qemu-system-x86_64: -object rng-random,filename=/dev/urandom,id=rng0: Parameter 'qom-type' is missing
Fix the regression by restoring qom-type in opts after its temporary
removal that was needed for the duration of user_creatable_add_opts().
Reported-by: Richard W. M. Jones <rjones@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Message-id: 20170323160315.19696-1-eblake@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix some cut-and-paste errors in the OS deprecation warning
pointed out by Thomas Huth.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1490119729-26206-1-git-send-email-peter.maydell@linaro.org
Just a single bugfix in this batch. It's not strictly in ppc code,
though it's for the pseries machine's benefit. Eduardo suggested it
go through my tree however.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
iQIcBAABCAAGBQJY057NAAoJEGw4ysog2bOSllYP/1OCzuR3f8vFuAB18i2a+kzC
Bw4zD9j/3BlbJ36G6NQg53LXaFsK0w9qjIXSi3Xni+UCvat5ktXYrhgKb4nGOwaq
bYmB+GDm1573MxxeSBPE5nfuM3Zg4gG9osWryZCEJr3eDMxezdIWFaaZEWDEkywz
N5F1e1KX7NTObGuugoH/XRoUatWVYAzUqnlIVDhSta2hUKnYQJFRtU1YZqBKME/W
USRxTq57zEl3TcV0gi+eWqfnTTlcCR4+Xp2FYDg/pOReDQaO8dhPZxueiZCi4wlL
aqH8nmUuaiPOP5JAS2I7ds978PTe6HwsIn7cIpsEnRsafYZoFHzL1wlGZWMlGf/1
ReNe25opOD1FC/hfDIYFkeCcW6g2Jm75BJGqBX8VDAlkyR7V/8Iqnu1/v24X8J1l
nNNrBeQrRXx5tPORARazS8mA9LYZpY5MOh2zQ9GuXxM9aqg//KrkM+i0GFLhIIsv
/P5lcpt4m+bA2sve9PU4uFdkST7dYyEdPqFoHEVx2Y5V4+XUjPSyvkCjrM8ljhtI
ELpRxynW4s9B3SX1HeFbY1LM66emSmBtk+3gAce1wBAGxIE9TZCPfXpcfOxGIrAx
/xnwbARx+7BRPgVHSz3YAYSsvejISoBeFutnv2OhwyUJBbGoWkdaSgGbiUKX1K+Z
/orW1eJ11ASuwfe+atza
=jFKm
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.9-20170323' into staging
ppc patch queue for 2017-03-23
Just a single bugfix in this batch. It's not strictly in ppc code,
though it's for the pseries machine's benefit. Eduardo suggested it
go through my tree however.
# gpg: Signature made Thu 23 Mar 2017 10:09:17 GMT
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.9-20170323:
numa,spapr: align default numa node memory size to 256MB
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.9 (GNU/Linux)
iQEcBAABAgAGBQJY05PkAAoJEC7X/ekGPIZNa2EH/RGFDe7bqdB7ZhA9EIe2rwuE
gnNFm0rZZxooL7Bqmoy3+jrIHWz44eajTCesYQphbSTOKiUUGdL4R8hUxVNRJkgE
yXvXLjZVGmzBd02klJizXJHkCsaUo/079x7A8ne44jSsFjFSl90iGDUzMZZJcmmi
7ZWOk5fb2mEUMPVOAt+tB9tdqkv94IMxSPBmsZ+QjNoMh/DWmcC0RJ5y9kLAVWef
YcQtrT2Da8ZK69v9C/2Eh9CsgI7PaoBP3ZjgJCLOW4mDw5Wy32NQl1H24+5s7FKU
B5NFCf4kqCsYA0SU251qJBHJZ6r60f0Shc4aMpm/8hqYcy4JI5QxSGUZXkWmEoM=
=5HM7
-----END PGP SIGNATURE-----
Merge remote-tracking branch 'remotes/gonglei/tags/cryptodev-next-20170323' into staging
cryptodev fixes
# gpg: Signature made Thu 23 Mar 2017 09:22:44 GMT
# gpg: using RSA key 0x2ED7FDE9063C864D
# gpg: Good signature from "Gonglei <arei.gonglei@huawei.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 3EF1 8E53 3459 E6D1 963A 3C05 2ED7 FDE9 063C 864D
* remotes/gonglei/tags/cryptodev-next-20170323:
cryptodev: fix asserting single queue
cryptodev: setiv only when really need
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Returning NULL from get_max_cpu_model results in a SIGSEGV runtime error.
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170130131517.8092-1-sw@weilnetz.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We already check for queues == 1 in cryptodev_builtin_init and raise an
error when that is not the case. But before that error is reported, the
assertion in cryptodev_builtin_cleanup kicks in (because the object is being
finalized and freed).
Let's remove assert(queues == 1) from cryptodev_builtin_cleanup as it
does only harm and no good.
Reported-by: Boris Fiuczynski <fiuczy@linux.vnet.ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
An ECB mode cipher doesn't need an IV; if we setiv for it, the qemu
crypto API reports "Expected IV size 0 not **". So we should call
setiv only when the cipher mode really needs it.
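One plausible shape of the guard (field names are illustrative;
qcrypto_cipher_setiv is the real API that rejects an IV for ECB):

    /* ECB sessions carry no IV, so skip setiv entirely for them. */
    if (op_info->iv_len > 0) {
        if (qcrypto_cipher_setiv(sess->cipher, op_info->iv,
                                 op_info->iv_len, &err) < 0) {
            error_report_err(err);
            return -1;
        }
    }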
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
An off-by-one in commit 15c2f669e meant that we were failing to
check for unparsed input in all QemuOpts visitors. Recent testsuite
additions show that fixing the obvious bug with bogus fields will
also fix the case of an incomplete list visit; update the tests to
match the new behavior.
Simple testcase:
./x86_64-softmmu/qemu-system-x86_64 -nodefaults -nographic -qmp stdio -numa node,size=1g
failed to diagnose that 'size' is not a valid argument to -numa, and
now once again reports:
qemu-system-x86_64: -numa node,size=1g: Invalid parameter 'size'
See also https://bugzilla.redhat.com/show_bug.cgi?id=1434666
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170322144525.18964-4-eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
A regression in commit 15c2f669e caused us to silently ignore
excess input to the QemuOpts visitor. Later, commit ea4641
accidentally abused that situation, by removing "qom-type" and
"id" from the corresponding QDict but leaving them defined in
the QemuOpts, when using the pair of containers to create a
user-defined object. Note that since we are already traversing
two separate items (a QDict and a QemuOpts), we are already
able to flag bogus arguments, as in:
$ ./x86_64-softmmu/qemu-system-x86_64 -nodefaults -nographic -qmp stdio -object memory-backend-ram,id=mem1,size=4k,bogus=huh
qemu-system-x86_64: -object memory-backend-ram,id=mem1,size=4k,bogus=huh: Property '.bogus' not found
So the only real concern is that when we re-enable strict checking
in the QemuOpts visitor, we do not want to start flagging the two
leftover keys as unvisited. Rearrange the code to clean out the
QemuOpts listing in advance, rather than removing items from the
QDict. Since "qom-type" is usually an automatic implicit default,
we don't have to restore it (this does mean that once instantiated,
QemuOpts is not necessarily an accurate representation of the
original command line - but this is not the first place to do that);
however "id" has to be put back (requiring us to cast away a const).
[As a side note, hmp_object_add() turns a QDict into a QemuOpts,
then calls user_creatable_add_opts() which converts QemuOpts into
a new QDict. There are probably a lot of wasteful conversions like
this, but cleaning them up is a much bigger task than the immediate
regression fix.]
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170322144525.18964-3-eblake@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
This lets us hook into drained_begin and drained_end requests from the
backend level, which is particularly useful for making sure that all
jobs associated with a particular node (whether the source or the target)
receive a drain request.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20170316212351.13797-4-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Allow block backends to forward drain requests to their devices/users.
The initial intended purpose for this patch is to allow BBs to forward
requests along to BlockJobs, which will want to pause if their associated
BB has entered a drained region.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20170316212351.13797-3-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
The purpose of this shim is to allow us to pause pre-started jobs.
The purpose of *that* is to allow us to buffer a pause request that
will be able to take effect before the job ever does any work, allowing
us to create jobs during a quiescent state (under which they will be
automatically paused), then resuming the jobs after the critical section
in any order, either:
(1) -block_job_start
-block_job_resume (via e.g. drained_end)
(2) -block_job_resume (via e.g. drained_end)
-block_job_start
The problem that requires a startup wrapper is that a job must
start in the busy=true state only on its first entry; all subsequent entries
require busy to be false, and the toggling of this state is otherwise
handled at existing pause and yield points.
The wrapper simply allows us to mandate that a job can "start," set busy
to true, then immediately pause only if necessary. We could avoid
requiring a wrapper, but all jobs would need to do it, so it's been
factored out here.
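A hedged sketch of the wrapper and its coroutine entry point (helpers such as
block_job_pause_point exist in blockjob.c, but the exact body here is
illustrative):

    void block_job_start(BlockJob *job)
    {
        assert(job->co == NULL);                 /* not started yet */
        job->co = qemu_coroutine_create(block_job_co_entry, job);
        job->busy = true;                        /* first entry requires busy */
        job->paused = false;
        qemu_coroutine_enter(job->co);
    }

    static void coroutine_fn block_job_co_entry(void *opaque)
    {
        BlockJob *job = opaque;

        /* Honour any pause request buffered before the job did real work. */
        block_job_pause_point(job);
        job->driver->start(job);
    }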
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20170316212351.13797-2-jsnow@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
Streaming or any other block job hangs when performed on a block device
that has a non-default iothread. This happens because the AioContext
is acquired twice by block_job_defer_to_main_loop_bh and then released
only once by BDRV_POLL_WHILE. (Insert rants on recursive mutexes, which
unfortunately are a temporary but necessary evil for iothreads at the
moment).
Luckily, the reason for the double acquisition is simple; the function
acquires the AioContext for both the job iothread and the BDS iothread,
in case the BDS iothread was changed while the job was running. It
is therefore enough to skip the second acquisition when the two
AioContexts are one and the same.
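Roughly, block_job_defer_to_main_loop_bh now only takes the second lock when
the two contexts differ (a sketch of the idea, not the verbatim patch):

    static void block_job_defer_to_main_loop_bh(void *opaque)
    {
        BlockJobDeferToMainLoopData *data = opaque;
        AioContext *aio_context;

        /* Lock the job's own context first. */
        aio_context_acquire(data->aio_context);

        /* The BDS may have moved to another iothread while the job ran; only
         * acquire its context if it really is a different one. */
        aio_context = blk_get_aio_context(data->job->blk);
        if (aio_context != data->aio_context) {
            aio_context_acquire(aio_context);
        }

        data->fn(data->job, data->opaque);

        if (aio_context != data->aio_context) {
            aio_context_release(aio_context);
        }
        aio_context_release(data->aio_context);
        g_free(data);
    }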
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 1490118490-5597-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>