This patch tries to optimize zero write requests
by automatically using bdrv_write_zeroes if it is
supported by the format.
This significantly speeds up file system initialization and
should speed up the zero-write tests used to benchmark backend
storage performance.
I ran the following 2 tests on my internal SSD with a
50G QCOW2 container and on an attached iSCSI storage.
a) mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/vdX
   QCOW2         [off]      [on]      [unmap]
   ------------------------------------------
   runtime:      14secs     1.1secs   1.1secs
   filesize:     937M       18M       18M

   iSCSI         [off]      [on]      [unmap]
   ------------------------------------------
   runtime:      9.3s       0.9s      0.9s
b) dd if=/dev/zero of=/dev/vdX bs=1M oflag=direct
   QCOW2         [off]      [on]      [unmap]
   ------------------------------------------
   runtime:      246secs    18secs    18secs
   filesize:     51G        192K      192K
   throughput:   203M/s     2.3G/s    2.3G/s

   iSCSI*        [off]      [on]      [unmap]
   ------------------------------------------
   runtime:      8mins      45secs    33secs
   throughput:   106M/s     1.2G/s    1.6G/s
   allocated:    100%       100%      0%
* The storage was connected via a 1Gbit interface.
It seems to handle writing zeroes internally
via WRITESAME16 very quickly.
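As a rough, self-contained illustration of the underlying idea (not the
actual patch; QEMU's real helper for this is buffer_is_zero(), and the
forwarding happens in the block layer), detecting an all-zero write
buffer is just a linear scan:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Report whether a write buffer is entirely zero, so the caller could
     * forward the request as a write-zeroes (or unmap) operation instead
     * of writing literal zeroes. */
    static bool write_request_is_zero(const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if (buf[i] != 0) {
                return false;
            }
        }
        return true;
    }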
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Like qcow2 since commit 6d33e8e7, error out on invalid lengths instead
of silently truncating them to 1023.
Also don't rely on bdrv_pread() catching integer overflows that make len
negative, but use unsigned variables in the first place.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
A huge image size could cause s->l1_size to overflow. Make sure that
images never require a L1 table larger than what fits in s->l1_size.
This can not only cause unbounded allocations, but also the allocation of
a too-small L1 table, resulting in out-of-bounds array accesses (both
reads and writes).
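A minimal sketch of the kind of overflow-safe bound described here; the
cap and the helper name are illustrative, not the literal patch:

    #include <errno.h>
    #include <stdint.h>

    #define MAX_L1_BYTES (32 * 1024 * 1024)   /* illustrative cap, not QEMU's value */

    /* Assumes cluster_bits and l2_bits were already range-checked. */
    static int check_l1_size(uint64_t disk_size, unsigned cluster_bits,
                             unsigned l2_bits, uint64_t *l1_entries_out)
    {
        uint64_t shift = cluster_bits + l2_bits;   /* bytes covered per L1 entry */
        uint64_t entries = (disk_size >> shift)
                         + ((disk_size & ((1ULL << shift) - 1)) != 0);

        if (entries > MAX_L1_BYTES / sizeof(uint64_t)) {
            return -EFBIG;    /* refuse before allocating or indexing the table */
        }
        *l1_entries_out = entries;
        return 0;
    }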
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Too large L2 table sizes cause unbounded allocations. Images actually
created by qemu-img only have 512 byte or 4k L2 tables.
To keep things consistent with cluster sizes, allow sizes between 512
bytes and 64k (in fact, anything down to 1 entry = 8 bytes would
technically work, but L2 table sizes smaller than a cluster don't make
a lot of sense).
This also means that the number of bytes on the virtual disk that are
described by the same L2 table is limited to at most 8k * 64k or 2^29,
preventively avoiding any integer overflows.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Huge values for header.cluster_bits cause unbounded allocations (e.g.
for s->cluster_cache) and crash qemu this way. Less huge values may
survive those allocations, but can cause integer overflows later on.
The only cluster sizes that qemu can create are 4k (for standalone
images) and 512 (for images with backing files), so we can limit it
to 64k.
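The check itself is essentially a range test on the header field; a
hedged sketch (the bounds are the ones named above, the helper name is
made up):

    #include <errno.h>
    #include <stdint.h>

    static int check_cluster_bits(uint32_t cluster_bits)
    {
        /* 512-byte clusters (images with backing files) up to 64k clusters */
        if (cluster_bits < 9 || cluster_bits > 16) {
            return -EINVAL;   /* reject before any size-derived allocation */
        }
        return 0;
    }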
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
The test test_stream_pause in this class uses vm.pause_drive, which
requires a blkdebug driver on top of the image; otherwise it's a no-op
and the test run is nondeterministic.
So add one.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The shell script attempts to suppress core dumps like this:
old_ulimit=$(ulimit -c)
ulimit -c 0
$QEMU_IO arg...
ulimit -c "$old_ulimit"
This breaks the test hard unless the limit was zero to begin with!
ulimit sets both hard and soft limit by default, and (re-)raising the
hard limit requires privileges. Broken since it was added in commit
dc68afe.
Could be fixed by adding -S to set only the soft limit, but I'm not
sure how portable that is in practice. Simply do it in a subshell
instead, like this:
(ulimit -c 0; exec $QEMU_IO arg...)
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add a test for the JSON protocol driver.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add some test cases for qdict_join().
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This adds a test for VHDX images created by Microsoft's tool, Disk2VHD.
VHDX images created by this tool have 2 identical header sections, with
identical sequence numbers. This makes sure we detect VHDX images with
identical headers, and do not flag them as corrupt.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The purpose of this change is to help create a JSON file containing
common definitions; each bit of generated C code must be emitted
only once.
A second history, global to all QAPISchema instances, has been added
to detect when a file is included more than once and skip those
includes.
It does not act as a stack, and the changes made to it by the
__init__ function are propagated back to the caller, so it is really
global state.
Signed-off-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
We commonly use the error API like this:
err = NULL;
foo(..., &err);
if (err) {
goto out;
}
bar(..., &err);
Every error source is checked separately. The second function is only
called when the first one succeeds. Both functions are free to pass
their argument to error_set(). Because error_set() asserts no error
has been set, this effectively means they must not be called with an
error set.
The qapi-generated code uses the error API differently:
// *errp was initialized to NULL somewhere up the call chain
frob(..., errp);
gnat(..., errp);
Errors accumulate in *errp: first error wins, subsequent errors get
dropped. To make this work, the second function does nothing when
called with an error set. Requires non-null errp, or else the second
function can't see the first one fail.
This usage has also bled into visitor tests, and two device model
object property getters rtc_get_date() and balloon_stats_get_all().
With the "accumulate" technique, you need fewer error checks in
callers, and buy that with an error check in every callee. Can be
nice.
However, mixing the two techniques is confusing. You can't use the
"accumulate" technique with functions designed for the "check
separately" technique. You can use the "check separately" technique
with functions designed for the "accumulate" technique, but then
error_set() can't catch you setting an error more than once.
Standardize on the "check separately" technique for now, because it's
overwhelmingly prevalent.
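For illustration only, the "check separately" style with a local error
and explicit propagation looks like this (foo() and bar() are
hypothetical callees that may set an error):

    #include "qapi/error.h"   /* Error, error_propagate() */

    void do_two_things(Error **errp)
    {
        Error *err = NULL;

        foo(&err);                        /* hypothetical callee */
        if (err) {
            error_propagate(errp, err);
            return;
        }

        bar(&err);                        /* only reached if foo() succeeded */
        if (err) {
            error_propagate(errp, err);
            return;
        }
    }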
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
When visit_start_struct() fails, visit_end_struct() must not be
called. Three out of four visit_type_TestStruct() implementations call
it anyway. As far as I can tell, visit_start_struct() doesn't actually
fail there. Fix them anyway.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Merge remote-tracking branch 'remotes/juanquintela/tags/migration/20140515' into staging
migration/next for 20140515
# gpg: Signature made Thu 15 May 2014 02:32:25 BST using RSA key ID 5872D723
# gpg: Can't check signature: public key not found
* remotes/juanquintela/tags/migration/20140515:
usb: fix up post load checks
migration: show average throughput when migration finishes
savevm: Remove all the unneeded version_minimum_id_old (rest)
savevm: Remove all the unneeded version_minimum_id_old (usb)
Split ram_save_block
arch_init: Simplify code for load_xbzrle()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit f915db07ef.
This commit is broken because it does not account for the
build tree and the source tree being different, and can cause
build failures for out-of-tree builds. Revert it until we can
identify a better solution to the problem.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1400153676-30180-1-git-send-email-peter.maydell@linaro.org
Acked-by: Kevin Wolf <kwolf@redhat.com>
After the previous patch from Peter, they are redundant. This way we
don't assign them except when needed. While at it, there were lots of
cases where the ".fields" indentation was wrong, with several
whitespace variants of:
.fields = (VMStateField []) {
Change all the combinations to:
.fields = (VMStateField[]){
The biggest problem (apart from aesthetics) was that checkpatch complained
when we copied & pasted the code from one place to another.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
Block pull request
# gpg: Signature made Fri 09 May 2014 19:57:53 BST using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
* remotes/stefanha/tags/block-pull-request:
glib: fix g_poll early timeout on windows
block: qemu-iotests - test for live migration
block: qemu-iotests - update 085 to use common.qemu
block: qemu-iotests - add common.qemu, for bash-controlled qemu tests
block/raw-posix: Try both FIEMAP and SEEK_HOLE
gluster: Correctly propagate errors when volume isn't accessible
vl.c: remove init_clocks call from main
block: Fix open flags with BDRV_O_SNAPSHOT
qemu-iotests: Test converting to streamOptimized from small cluster size
vmdk: Implement .bdrv_get_info()
vmdk: Implement .bdrv_write_compressed
qemu-img: Convert by cluster size if target is compressed
block/iscsi: bump year in copyright notice
block/nfs: Check for NULL server part
qemu-img: sort block formats in help message
iotests: Use configured python
qcow2: Fix alloc_clusters_noref() overflow detection
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This is an initial, simple live migration test from one
running VM to another, using monitor commands.
This is also an example of using the new common.qemu functions
for controlling multiple running qemu instances, for tests that
need a live qemu vm.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The new functionality of common.qemu implements the QEMU control
and communication functionality that was originally in test 085.
This removes that now-duplicate functionality, and uses the
common.qemu functions.
The QEMU commandline changes slightly due to this; in addition to
monitor and qmp i/o options, the new QEMU commandline from inside
common.qemu now introduces -machine accel=qtest.
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This creates some common functions for bash language qemu-iotests
to control, and communicate with, a running QEMU process.
4 functions are introduced:
1. _launch_qemu()
This launches the QEMU process(es), and sets up the file
descriptors and fifos for communication. You can choose to
launch each QEMU process listening for either QMP or HMP
monitor. You can call this function multiple times, and
save the handle returned from each. The returned handle is
in $QEMU_HANDLE. You must copy this value.
Commands 2 and 3 use the handle received from _launch_qemu(), to talk
to the appropriate process.
2. _send_qemu_cmd()
Sends a command string, specified by $2, to QEMU. If $3 is
non-NULL, _send_qemu_cmd() will wait to receive $3 as a
required result string from QEMU. Failure to receive $3 will
cause the test to fail. The command can optionally be retried
$qemu_cmd_repeat number of times. Set $qemu_error_no_exit
so that the test is not forced to fail and exit; in this case,
$QEMU_STATUS[$1] will be set to -1 on failure.
3. _timed_wait_for()
Waits for a response, for up to a default of 10 seconds. If
$2 is not seen in that time (anywhere in the response), then
the test fails. Primarily used by _send_qemu_cmd, but could
be useful standalone, as well. To prevent automatic exit
(and therefore test failure), set $qemu_error_no_exit to a
non-NULL value. If $silent is a non-NULL value, then output
to stdout will be suppressed.
4. _cleanup_qemu()
Kills the running QEMU processes, and removes the fifos.
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The immediately visible effect of this patch is that it fixes committing
a temporary snapshot to its backing file. Previously, it would fail with
a "permission denied" error because bdrv_inherited_flags() forced the
backing file to be read-only, ignoring the r/w reopen of bdrv_commit().
The bigger problem this revealed is that the original open flags must
actually only be applied to the temporary snapshot, and the original
image file must be treated as a backing file of the temporary snapshot
and get the right flags for that.
Reported-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
visit_type_TestStruct() does nothing when called with an error set.
Callers shouldn't do that, and no caller does. Drop the superfluous
test.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
This will return cluster_size and needs_compressed_writes to the caller if all
the extents have the same value (or there's only one extent). Otherwise return
-ENOTSUP.
cluster_size is only reported for sparse formats.
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Currently, QEMU's iotests rely on /usr/bin/env to start the correct
Python (that is, at least Python 2.4, but not 3). On systems where
Python 3 is the default, the user has no clean way of making the iotests
use the correct binary.
This commit makes the iotests use the Python selected by configure.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The primitive uses JSON syntax, and include paths are relative to the file using the directive:
{ 'include': 'path/to/file.json' }
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Use an explicit input file on the command-line instead of reading from standard
input.
It also outputs the proper file name when there's an error.
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
The test_path binary is (unlike the other test binaries in tests/tcg)
actually intended to be compiled with the same compiler used to build
the main QEMU executables. It actually #includes a number of the
QEMU source files in an attempt to unit-test the util/path.c functions,
and so if it is not compiled with the same compiler used by configure
to set CONFIG_ settings then it is liable to fail to build.
Fix the makefile to build it with the default C compiler rules, not
CC_I386, and fix the test itself not to include a lot of unnecessary
trace related source files which cause the build to fail if the trace
backend is anything other than 'simple'.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Test both the ich6 and the ich9 version (cf. q35 config) and all the
codecs.
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
POSIX specifies that address_len shall on output specify the length of
the stored address; it does not however specify whether it may get
updated on failure as well to, e.g., zero.
In case EINTR occurs, re-initialize the variable to the desired value.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
We're not using the GLib infrastructure here, to allow cleaning up the
sockets. Still, knowing why a certain test run failed can be valuable.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
In practice this seems very unlikely, so cleanup is neglected, as done
for bind().
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block patches
# gpg: Signature made Wed 30 Apr 2014 19:19:32 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
* remotes/kevin/tags/for-upstream: (31 commits)
curl: Fix hang reading from slow connections
curl: Ensure all informationals are checked for completion
curl: Eliminate unnecessary use of curl_multi_socket_all
curl: Remove unnecessary explicit calls to internal event handler
curl: Remove erroneous sleep waiting for curl completion
curl: Fix return from curl_read_cb with invalid state
curl: Remove unnecessary use of goto
curl: Fix long line
block/vdi: Error out immediately in vdi_create()
block/bochs: Fix error handling for seek_to_sector()
qcow2: Check min_size in qcow2_grow_l1_table()
qcow2: Catch bdrv_getlength() error
block: Use correct width in format strings
qcow2: Avoid overflow in alloc_clusters_noref()
block: Use error_abort in bdrv_image_info_specific_dump()
block: Fix open_flags in bdrv_reopen()
Revert "block: another bdrv_append fix"
block: Unlink temporary files in raw-posix/win32
block: Remove BDRV_O_COPY_ON_READ for bs->file
block: Create bdrv_backing_flags()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a test which discards a compressed cluster on qcow2. This should
work without any problems.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Both tests 019 and 086 need proper quoting to work with pathnames
that contain spaces.
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The _rm_test_img() function in common.rc did not quote the image
file, which left droppings in the scratch directory (and performed
a potentially unsafe rm -f).
This adds the necessary quotes.
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
error_is_set(&var) is the same as var != NULL, but it takes
whole-program analysis to figure that out. Unnecessarily hard for
optimizers, static checkers, and human readers. Commit 84d18f0 dumbed
it down to obvious, but a few more have crept in since, and
documentation was overlooked. Dumb these down, too.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
A name that is taken by an ID can't be taken by a node-name at the same
time. Check that conflicts are correctly detected.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Since commit f298d071, block devices added with blockdev-add don't have
a QemuOpts around in dinfo->opts. Consequently, we can't rely any more
on QemuOpts catching duplicate IDs for block devices.
This patch adds a new check for duplicate IDs to bdrv_new(), and moves
the existing check that the ID isn't already taken for a node-name there
as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Insanely large requests could cause an integer overflow in
bdrv_rw_co() while converting sectors to bytes. This patch catches the
problem and returns an error (if we hadn't overflowed the integer here,
bdrv_check_byte_request() would have rejected the request, so we're not
breaking anything that was supposed to work before).
We actually do have a test case that triggers behaviour where we
accidentally let such a request pass, so that it would return success,
but read 0 bytes instead of the requested 4 GB. It fails now like it
should.
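The guard amounts to rejecting sector counts whose byte size would not
fit in an int; a self-contained sketch (constant and helper name
illustrative):

    #include <errno.h>
    #include <limits.h>

    #define SECTOR_SIZE 512

    static int check_sector_request(int nb_sectors)
    {
        if (nb_sectors < 0 || nb_sectors > INT_MAX / SECTOR_SIZE) {
            return -EINVAL;   /* nb_sectors * SECTOR_SIZE would overflow int */
        }
        return 0;
    }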
If the vdi block driver wants to be able to deal with huge images, it
can't read the whole block bitmap at once into memory like it does
today, but needs to use a metadata cache like qcow2 does.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
commit 58b035c7354afc0c5351ea62264c01d74196ec26
acpi: fix incorrect encoding for 0x{F-1}FFFF
changes the SSDT, update expected files accordingly.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The old check was off by a factor of 512 and didn't consider cases where
we don't get an exact division. This could lead to an out-of-bounds
array access in seek_to_sector().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
commit f2ccc311df
dsdt: tweak ACPI ID for hotplug resource device
changes the DSDT, update test expected files to match
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reported-by: Igor Mammedov <imammedo@redhat.com>
Only i386, x86_64, sparc and sparc64 qtests were cleaned up.
Make this more generic to not miss any newly tested targets.
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Since commit 9fd3171a, BDRV_O_SNAPSHOT uses an option QDict to specify
the originally requested image as the backing file of the newly created
temporary snapshot. This means that the filename is stored in
"file.filename", which is an option that is not parsed for protocol
names. Therefore things like -drive file=nbd:localhost:10809 were
broken because it looked for a local file with the literal name
'nbd:localhost:10809'.
This patch changes the way BDRV_O_SNAPSHOT works once again. We now open
the originally requested image as normal, and then do a similar
operation as for live snapshots to put the temporary snapshot on top.
This way, both driver specific options and parsed filenames work.
As a nice side effect, this results in code movement to factor
bdrv_append_temp_snapshot() out. This is a good preparation for moving
its call to drive_init() and friends eventually.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
qemu doesn't print these CRs any more. The test still didn't fail
because the output comparison ignores line endings, but the difference turns
up every time you want to update the reference output.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
When using the QDict option 'filename', it is supposed to be interpreted
literally. The code did correctly avoid guessing the protocol from any
string before the first colon, but it still called bdrv_parse_filename()
which would, for example, incorrectly remove a 'file:' prefix in the
raw-posix driver.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
If lazy refcounts are enabled for a backing file, committing to this
backing file may leave it in a dirty state even if the commit succeeds.
The reason is that the bdrv_flush() call in bdrv_commit() doesn't flush
refcount updates with lazy refcounts enabled, and qcow2_reopen_prepare()
doesn't take care to flush metadata.
In order to fix this, this patch also fixes qcow2_mark_clean(), which
contains another ineffective bdrv_flush() call because lazy refcounts are
disabled only afterwards. All existing callers of qcow2_mark_clean()
either don't modify refcounts or already flush manually, so that this
fixes only a latent, but not yet actually triggerable bug.
Another instance of the same problem is live snapshots. Again, a real
corruption is prevented by an explicit flush for non-read-only images in
external_snapshot_prepare(), but images using lazy refcounts stay dirty.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
This avoids a possible division by zero.
Convert s->tracks to unsigned as well because it feels better than
surviving just because the results of calculations with s->tracks are
converted to unsigned anyway.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The first test case would cause a huge memory allocation, leading to a
qemu abort; the second one to a too small malloc() for the catalog
(smaller than s->catalog_size), which causes a read-only out-of-bounds
array access and, on big endian hosts, an endianness conversion for an
undefined memory area.
The sample image used here is not an original Parallels image. It was
created using a hex editor on the basis of the struct that qemu uses.
Good enough for trying to crash the driver, but not for ensuring
compatibility.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This avoids an unbounded allocation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
For the L1 table to be loaded for an internal snapshot, the code allocated
only enough memory to hold the currently active L1 table. If the
snapshot's L1 table is actually larger than the current one, this leads
to a buffer overflow.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The qcow2 code assumes that s->snapshots is non-NULL if s->nb_snapshots
!= 0. By having the initialisation of both fields separated in
qcow2_open(), any error occurring in between would cause the error path
to dereference NULL in qcow2_free_snapshots() if the image had any
snapshots.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
bs->total_sectors is not the highest possible sector number that could
be involved in a copy-on-write operation: VM state is stored after the end of
the virtual disk. This resulted in wrong values for the number of
sectors to be copied (n).
The code that checks for the end of the image isn't required any more
because the code hasn't been calling the block layer's bdrv_read() for a
long time; instead, it directly calls qcow2_readv(), which doesn't error
out on VM state sector numbers.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This test checks for proper bounds checking of some VDI input
headers. The following is checked:
1. Max image size (1024TB) with the appropriate Blocks In Image
value (0x3fffffff) is detected as valid.
2. Image size exceeding max (1024TB) is seen as invalid
3. Valid image size but with Blocks In Image value that is too
small fails
4. Blocks In Image size exceeding max (0x3fffffff) is seen as invalid
5. 64MB image, with 64 Blocks In Image, and 1MB Block Size is seen
as valid
6. Block Size < 1MB not supported
7. Block Size > 1MB not supported
[Max Reitz <mreitz@redhat.com> pointed out that "1MB + 1" in the test
case is wrong. Change to "1MB + 64KB" to match the 0x110000 value.
--Stefan]
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
free_cluster_index is only correct if update_refcount() was called from
an allocation function, and even there it's brittle because it's used to
protect unfinished allocations which still have a refcount of 0 - if it
moves in the wrong place, the unfinished allocation can be corrupted.
So not using it any more seems to be a good idea. Instead, use the
first requested cluster to do the calculations. Return -EAGAIN if
unfinished allocations could become invalid and let the caller restart
its search for some free clusters.
The context of creating a snapshot is one situation where
update_refcount() is called outside of a cluster allocation. For this
case, the change fixes a buffer overflow if a cluster is referenced in
an L2 table that cannot be represented by an existing refcount block.
(new_table[refcount_table_index] was out of bounds)
[Bump the qemu-iotests 026 refblock_alloc.write leak count from 10 to
11.
--Stefan]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
len could become negative and would pass the check then. Nothing bad
happened because bdrv_pread() happens to return an error for negative
length values, but make variables for sizes unsigned anyway.
This patch also changes the behaviour to error out on invalid lengths
instead of silently truncating them to 1023.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This avoids an unbounded allocation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This avoids unbounded memory allocation and fixes a potential buffer
overflow on 32 bit hosts.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The end of the refcount table must not exceed INT64_MAX so that integer
overflows are avoided.
Also check for misaligned refcount table. Such images are invalid and
probably the result of data corruption. Error out to avoid further
corruption.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Limit the in-memory reference count table size to 8 MB; it's enough in
practice. This fixes an unbounded allocation as well as a buffer
overflow in qcow2_refcount_init().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Header, header extension and the backing file name must all be stored in
the first cluster. Setting the backing file offset to a much higher value
allowed header extensions to become much bigger than we want them to be
(unbounded allocation).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This fixes an unbounded allocation for s->unknown_header_fields.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This fixes some cases of division by zero crashes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This fixes two possible division-by-zero crashes: in bochs_open() and in
seek_to_sector().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
It should neither become negative nor allow unbounded memory
allocations. This fixes aborts in g_malloc() and an s->catalog_bitmap
buffer overflow on big endian hosts.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This gets rid of integer overflows resulting in negative sizes which
aren't correctly checked.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
cloop stores the number of compressed blocks in the n_blocks header
field. The file actually contains n_blocks + 1 offsets, where the extra
offset is the end-of-file offset.
The following line in cloop_read_block() results in an out-of-bounds
offsets[] access:
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
This patch allocates and loads the extra offset so that
cloop_read_block() works correctly when the last block is accessed.
Notice that we must free s->offsets[] unconditionally now since there is
always an end-of-file offset.
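In outline, the change amounts to sizing, reading and byteswapping one
extra entry (a sketch in the style of the snippet quoted further below,
not the literal patch; offsets_offset stands in for the file offset
where the table starts):

    /* n_blocks + 1 offsets: the last entry is the end-of-file offset */
    offsets_size = (s->n_blocks + 1) * sizeof(uint64_t);
    s->offsets = g_malloc(offsets_size);

    ret = bdrv_pread(bs->file, offsets_offset, s->offsets, offsets_size);
    if (ret < 0) {
        goto fail;
    }

    for (i = 0; i < s->n_blocks + 1; i++) {
        s->offsets[i] = be64_to_cpu(s->offsets[i]);
    }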
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The offsets[] array allows efficient seeking and tells us the maximum
compressed data size. If the offsets are bogus the maximum compressed
data size will be unrealistic.
This could cause g_malloc() to abort and bogus offsets mean the image is
broken anyway. Therefore we should refuse such images.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Limit offsets_size to 512 MB so that:
1. g_malloc() does not abort due to an unreasonable size argument.
2. offsets_size does not overflow the bdrv_pread() int size argument.
This limit imposes a maximum image size of 16 TB at 256 KB block size.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The following integer overflow in offsets_size can lead to out-of-bounds
memory stores when n_blocks has a huge value:
uint32_t n_blocks, offsets_size;
[...]
ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
[...]
s->n_blocks = be32_to_cpu(s->n_blocks);
/* read offsets */
offsets_size = s->n_blocks * sizeof(uint64_t);
s->offsets = g_malloc(offsets_size);
[...]
for(i=0;i<s->n_blocks;i++) {
s->offsets[i] = be64_to_cpu(s->offsets[i]);
offsets_size can be smaller than n_blocks due to integer overflow.
Therefore s->offsets[] is too small when the for loop byteswaps offsets.
This patch refuses to open files if offsets_size would overflow.
Note that changing the type of offsets_size is not a fix since 32-bit
hosts still only have 32-bit size_t.
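A self-contained version of that refusal (helper name illustrative; the
exact bound used in the patch may differ):

    #include <errno.h>
    #include <stdint.h>

    static int check_n_blocks(uint32_t n_blocks)
    {
        /* (n_blocks + 1) offsets of 8 bytes each must fit in 32 bits */
        uint64_t needed = ((uint64_t)n_blocks + 1) * sizeof(uint64_t);

        if (needed > UINT32_MAX) {
            return -EINVAL;
        }
        return 0;
    }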
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Avoid unbounded s->uncompressed_block memory allocation by checking that
the block_size header field has a reasonable value. Also enforce the
assumption that the value is a non-zero multiple of 512.
These constraints conform to cloop 2.639's code so we accept existing
image files.
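Expressed as a self-contained check (the maximum here is illustrative;
the patch's actual bound may differ):

    #include <errno.h>
    #include <stdint.h>

    #define MAX_BLOCK_SIZE (64 * 1024 * 1024)   /* illustrative cap */

    static int check_block_size(uint32_t block_size)
    {
        /* must be a non-zero multiple of 512 and not unreasonably large */
        if (block_size == 0 || block_size % 512 != 0 ||
            block_size > MAX_BLOCK_SIZE) {
            return -EINVAL;
        }
        return 0;
    }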
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add a cloop format-specific test case. Later patches add tests for
input validation to the script.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Add the cloop block driver to qemu-iotests.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This adds a regression test for commit
efdf6a56a7 (tmp105: Read temperature in
milli-celsius).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This will make it easier to reach the device under test via QOM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
The next patches will add more reads and writes. Add a simple testing
API for this.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
qtest test cases only work on POSIX hosts. The following line only
defines dependencies for qtest binaries on POSIX hosts:
check-qtest-$(CONFIG_POSIX)=$(foreach TARGET,$(TARGETS),$(check-qtest-$(TARGET)-y))
But the QTEST_TARGETS definition earlier in the Makefile fails to check
CONFIG_POSIX. This causes make targets to be generated for qtest test
cases even though we don't know how to build the binaries.
The following error message is printed when trying to run gtester on a
binary that was never built:
GLib-WARNING **: Failed to execute test binary: tests/endianness-test.exe: Failed to execute child process "tests/endianness-test.exe" (No such file or directory)
This patch makes QTEST_TARGETS empty on non-POSIX hosts. This prevents
the targets from being generated.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
test-rfifolock and test-vmstate only build on POSIX hosts. Exclude them
if building for Windows.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Andreas Färber <afaerber@suse.de>
test_timer_schedule and test_source_timer_schedule don't compile for MinGW
because some functions are not implemented for MinGW (qemu_pipe,
aio_set_fd_handler).
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Keep track of active qtest instances so we can kill them when the test
aborts. This ensures no QEMU processes are left running after test
failure.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel.a@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
It turns out there are test cases that use multiple libqtest instances.
We cannot use a global qtest instance in the SIGABRT handler.
This reverts commit cb201b4872.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel.a@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Add U suffix when doing "1 << 31" to avoid undefined behaviour.
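In other words (illustrative):

    uint32_t bad  = 1 << 31;    /* undefined behaviour: a signed 1 shifted into the sign bit */
    uint32_t good = 1U << 31;   /* well defined: yields 0x80000000 */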
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This reverts commit d07e0e9cdd.
Since
commit b4f4d54812
acpi: make SSDT 1.0 spec compliant when possible
we are back to the old encoding.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
More small fixes all over the place.
Notably fixes for big-endian hosts by Marcel.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
acpi,pc,test bug fixes
More small fixes all over the place.
Notably fixes for big-endian hosts by Marcel.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Mon 24 Mar 2014 10:41:07 GMT using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
tests/acpi-test: do not fail if iasl is broken
vl.c: Use MAX_CPUMASK_BITS macro instead of hardcoded constant
sysemu.h: Document what MAX_CPUMASK_BITS really limits
acpi: fix endian-ness for table ids
acpi-test: signature endian-ness fixes
i386/acpi-build: support hotplug of VCPU with APIC ID 0xFF
acpi-test: rebuild SSDT
i386/acpi-build: allow more than 255 elements in CPON
pc: Refuse max_cpus if it results in too large APIC ID
acpi: Don't use MAX_CPUMASK_BITS for APIC ID bitmap
acpi: Assert sts array limit on AcpiCpuHotplug_add()
pc: Refuse CPU hotplug if the resulting APIC ID is too large
acpi: Add ACPI_CPU_HOTPLUG_ID_LIMIT macro
acpi-test: update expected SSDT files
acpi-build: fix misaligned access
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
There is an issue with iasl on big endian machines: It
cannot disassemble acpi tables taken from little endian
machines, so we cannot check the expected tables.
The acpi test will check if the expected aml files
can be disassembled, and will issue a warning instead of
failing the test on those machines until this
problem is solved by the acpica community.
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
QEMU supports firmware names for all devices in the QEMU tree, but
some architectures expect some parts of firmware path names in a
different format.
This introduces a firmware-pathname-change interface definition.
If a machine needs to redefine the firmware path format, it has
to add the TYPE_FW_PATH_PROVIDER interface to an object that is above
the device on the QOM tree (typically /machine).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Andreas Färber <afaerber@suse.de>
commit 9bcc80cd71
i386/acpi-build: allow more than 255 elements in CPON
Replaces 0x1 with a smaller One constant.
rebuild expected SSDT.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Merge remote-tracking branch 'remotes/mjt/tags/trivial-patches-2014-03-15' into staging
trivial patches for 2014-03-15
# gpg: Signature made Sat 15 Mar 2014 09:54:30 GMT using RSA key ID 74F0C838
# gpg: Good signature from "Michael Tokarev <mjt@tls.msk.ru>"
# gpg: aka "Michael Tokarev <mjt@corpit.ru>"
# gpg: aka "Michael Tokarev <mjt@debian.org>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6EE1 95D1 886E 8FFB 810D 4324 457C E0A0 8044 65C5
# Subkey fingerprint: E190 8639 3B10 B51B AC2C 8B73 5253 C5AD 74F0 C838
* remotes/mjt/tags/trivial-patches-2014-03-15:
FSL eTSEC: Fix typo in rx ring
scripts/make-release: Don't distribute .git directories
configure: Don't use __int128_t for clang versions before 3.2
audio: Add 'static' attributes to several variables
tests: Fix 'make test' for i686 hosts (build regression)
misc: Fix typos in comments
Add qga/qapi-generated to .gitignore
hw/timer/grlib_gptimer: Avoid integer overflows
.travis.yml: add IRC notifications for build failures
.travis.yml: trivial whitespace fixup
.travis.yml: re-enable lttng user space trace test
.travis.yml: add a new build target with non-core devlibs
sasl: Avoid 'Could not find keytab file' in syslog
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
'make test' is broken at least since commit
baacf04799. Several source files were moved
to util/, and some of them were split there, so add the missing prefix and new
files to fix the compiler and linker errors.
There remain more issues, but these changes allow running the test on a
Linux i686 host.
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The 'quick' group in qemu-iotests is not allowed to run QEMU since we
don't know which targets are available. In other words, they may only
use qemu-img, qemu-io, and qemu-nbd.
Drop 085 and 087 from the 'quick' group since they run QEMU. This
makes "make check-block" pass again.
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This new test case uses nbd-fault-injector.py to simulate broken TCP
connections at each stage in the NBD protocol. This way we can exercise
block/nbd-client.c's socket error handling code paths.
In particular, this serves as a regression test to make sure
nbd-client.c doesn't cause an infinite loop by leaving its
nbd_receive_reply() fd handler registered after the connection has been
closed. This bug was fixed in an earlier patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The nbd-fault-injector.py script is a special kind of NBD server. It
throws away all writes and produces zeroes for reads. Given a list of
fault injection rules, it can simulate NBD protocol errors and is useful
for testing NBD client error handling code paths.
See the patch for documentation. This script is modelled after Kevin
Wolf <kwolf@redhat.com>'s blkdebug block driver.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Opening an encrypted image takes an additional step: setting the key.
Between the open and setting the key, the image must not be used.
We have some protection against accidental use in place: you can't
unpause a guest while we're missing keys. You can, however, hot-plug
block devices lacking keys into a running guest just fine, or insert
media lacking keys. In the latter case, notifying the guest of the
insert is delayed until the key is set, which may suffice to protect
at least some guests in common usage.
This patch makes the protection apply in more cases, in a rather
heavy-handed way: it doesn't let you open encrypted images unless
we're in a paused state.
It doesn't extend the protection to users other than the guest (block
jobs?). Use of runstate_check() from block.c is disgusting. Best I
can do right now.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
If an assertion fails during qtest_init() the SIGABRT handler is
invoked. This is the correct behavior since we need to kill the QEMU
process to avoid leaking it when the test dies.
The global_qtest pointer used by the SIGABRT handler is currently only
assigned after qtest_init() returns. This results in a segfault if an
assertion failure occurs during qtest_init().
Move global_qtest assignment inside qtest_init(). Not pretty but let's
face it - the signal handler depends on global state.
Reported-by: Marcel Apfelbaum <marcel.a@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Marcel Apfelbaum <marcel.a@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
A test is only as good as its coverage - testing virtserialport in
addition to virtconsole showed that commit
0399a3819b (virtio-console: QOM cast
cleanup for VirtConsole) broke virtserialport.
Acked-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
It can be useful to run an AioContext from a thread which normally does
not "own" the AioContext. For example, request draining can be
implemented by acquiring the AioContext and looping aio_poll() until all
requests have been completed.
The following pattern should work:
/* Event loop thread */
while (running) {
aio_context_acquire(ctx);
aio_poll(ctx, true);
aio_context_release(ctx);
}
/* Another thread */
aio_context_acquire(ctx);
bdrv_read(bs, 0x1000, buf, 1);
aio_context_release(ctx);
This patch implements aio_context_acquire() and aio_context_release().
Note that existing aio_poll() callers do not need to worry about
acquiring and releasing - it is only needed when multiple threads will
call aio_poll() on the same AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
QemuMutex does not guarantee fairness and cannot be acquired
recursively:
Fairness means each locker gets a turn and the scheduler cannot cause
starvation.
Recursive locking is useful for composition: it allows a sequence of
locking operations to be invoked atomically by acquiring the lock around
them.
This patch adds RFifoLock, a recursive lock that guarantees FIFO order.
Its first user is added in the next patch.
RFifoLock has one additional feature: it can be initialized with an
optional contention callback. The callback is invoked whenever a thread
must wait for the lock. For example, it can be used to poke the current
owner so that they release the lock soon.
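A hedged usage sketch (the function names are the ones this patch
introduces; signatures are approximate, and the callback body is
hypothetical):

    /* Invoked whenever a thread has to wait for the lock; it could, for
     * example, call aio_notify() to kick the owner out of a blocking poll. */
    static void nudge_owner(void *opaque)
    {
        /* poke the current owner here */
    }

    static void example(void *opaque)
    {
        RFifoLock lock;

        rfifolock_init(&lock, nudge_owner, opaque);  /* callback is optional */

        rfifolock_lock(&lock);      /* waiters are served in FIFO order */
        rfifolock_lock(&lock);      /* recursive acquisition by the owner is fine */
        rfifolock_unlock(&lock);
        rfifolock_unlock(&lock);

        rfifolock_destroy(&lock);
    }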
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Extend test file 060 with a test case for corruption occurring concurrently
to a COW request. QEMU should not crash but rather return an appropriate
error message.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Older versions of gcc (eg 4.6) can't handle varargs functions declared
inline for anything other than completely trivial uses, and complain:
tests/qom-test.c: In function 'qmp': tests/libqtest.h:359:60: sorry,
unimplemented: function 'qmp' can never be inlined because it uses
variable argument lists
Avoid this problem by putting the functions into libqtest.c instead
of using inline definitions in libqtest.h.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Andreas Färber <afaerber@suse.de>
'socket_accept' waits for QEMU to init its unix socket.
If QEMU encounters an error during command line parsing,
it can exit before initializing the communication channel.
Using a timeout for sockets fixes the issue.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
This adds a test whether sPAPR PHB can be added via the command line.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Recursively walk all properties under /machine and try to retrieve their
value. This is a regression test for link<> properties and the
DeviceState::hotpluggable property.
Cf. be2f78b6b0 and
1a37eca107
Signed-off-by: Andreas Färber <afaerber@suse.de>
Test the error class instead. Expecting a specific message is
fragile. In fact, it broke once already, in commit 75884af. Restore
the test of error member "class" dropped there, and drop the test of
error member "desc".
There are no other tests of "desc" as far as I can tell.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
Use g_assert_cmpstr() instead of combining g_assert() and strcmp(3).
This simplifies the code since we no longer have to play games to
distinguish NULL from "" using "(null)".
gcc extension haters will also be happy that ?: was dropped.
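The change in miniature (illustrative):

    /* before: manual strcmp() with a "(null)" placeholder */
    g_assert(strcmp(value ? value : "(null)", expected) == 0);

    /* after: g_assert_cmpstr() handles NULL and prints both strings on failure */
    g_assert_cmpstr(value, ==, expected);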
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
* remotes/qmp-unstable/queue/qmp:
tests: test-qmp-commands: Fix double free
qapi script: do not add "_" for every capitalized char in enum
qapi script: do not allow string discriminator
qapi: convert BlockdevOptions to use enum discriminator
qapi script: support enum type as discriminator in union
qapi script: use same function to generate enum string
qapi script: code move for generate_enum_name()
qapi script: check correctness of union
qapi script: remember line number in schema parsing
qapi script: add check for duplicated key
qapi script: remember explicitly defined enum values
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ret variable is freed twice, but on the second time we actually want
to free ret3 instead. Don't know why this didn't explode.
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Since enum-based discriminators provide better type safety and
ensure that future qapi additions do not forget to adjust dependent
unions, forbid using a string as a discriminator from now on.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
By default, any union will automatically generate an enum type named
"[UnionName]Kind" in C code, and it is duplicated when the discriminator
is specified as a pre-defined enum type in the schema. After this patch,
the pre-defined enum type will really be used as the switch case
condition in the generated C code if the discriminator is an enum field.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Since line info is now remembered as QAPISchema.line, this patch
uses it as additional info for every expr in QAPISchema inside qapi.py,
then improves the error messages with it when checking exprs.
For a common union, the patch will check whether the base is a valid
complex type if specified. For a flat union, it will check whether the
base is present, whether the discriminator is found in the base, and
whether the key of every branch is correct when the discriminator is an
enum type.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
It is bad when the same key is specified twice, especially when a union has
two branches with the same condition. This patch prevents that.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
Later other scripts will need to check the enum values.
Signed-off-by: Wenchao Xia <wenchaoqemu@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
If the expected (offline) acpi tables loaded correctly,
it is safe to assume the iasl installation is OK and
issue an error if the actual tables failed to load.
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Updated the error message while at it.
Signed-off-by: Marcel Apfelbaum <marcel.a@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This adds tests for live snapshots, both through the single
snapshot command, and the transaction group snapshot command.
The snapshots are done through the QMP interface, using the
following commands for snapshots:
Single snapshot:
{ 'execute': 'blockdev-snapshot-sync', 'arguments':
  { 'device': 'virtio0', 'snapshot-file': '...',
    'format': 'qcow2' } }
Group snapshot:
{ 'execute': 'transaction', 'arguments':
{'actions': [
{ 'type': 'blockdev-snapshot-sync', 'data' :
{ 'device': 'virtio0', 'snapshot-file': '...' } },
{ 'type': 'blockdev-snapshot-sync', 'data' :
{ 'device': 'virtio1', 'snapshot-file': '...' } } ]
} }
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>