Now that iotest 093 proves that the throttling configuration
survives a blockdev-remove-medium/blockdev-insert-medium pair, the
original reason for declaring these commands experimental is gone
(see commit 6e0abc251d).
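For reference, the behaviour the test relies on boils down to a QMP
sequence like the following sketch (device and node names are made up
for illustration, 'vm' is a running iotests VM, and error checking is
omitted):

    # 'cd0' is a guest device whose BlockBackend already has throttling
    # limits configured.
    vm.qmp('blockdev-remove-medium', id='cd0')
    vm.qmp('blockdev-add', driver='null-co', node_name='new-medium')
    vm.qmp('blockdev-insert-medium', id='cd0', node_name='new-medium')
    # query-block must still report the limits that were set before the
    # medium was swapped.
    result = vm.qmp('query-block')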
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171110224302.14424-5-mreitz@redhat.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This test hotplugs a CD drive into a VM and checks that I/O limits can
be set only when the drive has media inserted, and that they are kept
when the media is replaced.
This also tests the removal of a device with valid I/O limits set but
no media inserted. This involves deleting and disabling the limits of a
BlockBackend without a BlockDriverState, a scenario that used to crash
until the fixes from the last couple of patches.
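The shape of the scenario, as a rough sketch (the device IDs, the
100-iops figure and the virtio-scsi-pci controller are illustrative;
'vm' is a running iotests VM and error checking is omitted):

    # Hotplug an empty CD drive; setting limits must fail while there is
    # no medium.
    vm.qmp('device_add', driver='virtio-scsi-pci', id='scsi0')
    vm.qmp('device_add', driver='scsi-cd', id='cd0')
    limits = dict(iops=100, iops_rd=0, iops_wr=0, bps=0, bps_rd=0, bps_wr=0)
    vm.qmp('block_set_io_throttle', conv_keys=False, id='cd0', **limits)
    # Insert a medium; now the limits can be set, and they must survive
    # a later medium replacement.
    vm.qmp('blockdev-add', driver='null-co', node_name='m0')
    vm.qmp('blockdev-insert-medium', id='cd0', node_name='m0')
    vm.qmp('block_set_io_throttle', conv_keys=False, id='cd0', **limits)
    # Finally remove the medium and delete the device while the limits
    # are still configured; this is the path that used to crash.
    vm.qmp('blockdev-remove-medium', id='cd0')
    vm.qmp('device_del', id='cd0')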
[Python PEP8 fixup: "Don't use spaces around the = sign when used to
indicate a keyword argument or a default parameter value"
--Stefan]
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 071eb397118ed207c5a7f01d58766e415ee18d6a.1510339534.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The 093 throttling test submits twice as many requests as the throttle
limit in order to ensure that we reach the limit. The remaining
requests are left in-flight at the end of each test iteration.
Commit 452589b6b4 ("vl.c/exit: pause cpus
before closing block devices") exposed a hang in 093. This happens
because requests are still in flight when QEMU terminates but
QEMU_CLOCK_VIRTUAL time is frozen. bdrv_drain_all() hangs forever since
throttled requests cannot complete.
Step the clock at the end of each test iteration so that in-flight
requests actually finish. This solves the hang and is cleaner than
leaving requests in-flight.
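Concretely, each iteration now ends by advancing the virtual clock over
the qtest socket, along these lines (the step size shown here is only
illustrative):

    # Give the leftover throttled requests enough virtual time to be
    # submitted and completed before the next iteration (or shutdown).
    ns_per_sec = 1000 * 1000 * 1000
    self.vm.qtest("clock_step %d" % (10 * ns_per_sec))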
Note that this could also be "fixed" by disabling throttling when drives
are closed in QEMU. That approach has two issues:
1. We must drain requests before disabling throttling, so the hang
cannot be easily avoided!
2. Any time QEMU disables throttling internally there is a chance that
malicious users can abuse the code path to bypass throttling limits.
Therefore it makes more sense to fix the test case than to modify QEMU.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170815130502.8736-1-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
iotest 093 contains a test that creates a throttling group with
several drives and performs I/O in all of them. This patch adds a new
test that creates a similar setup but performs I/O on only one of the
drives at a time.
This is useful to check that the round-robin algorithm behaves properly
in these scenarios; it was written specifically with the regression
introduced in 27ccdd5259 as its motivating example.
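In outline, the new test does something like this (drive names, the
group name and the limits are illustrative, not the exact code):

    # Every drive joins the same throttling group...
    limits = dict(group='group0', iops=100, iops_rd=0, iops_wr=0,
                  bps=0, bps_rd=0, bps_wr=0)
    for i in range(num_drives):
        limits['device'] = 'drive%d' % i
        self.vm.qmp('block_set_io_throttle', conv_keys=False, **limits)
    # ...but I/O is only ever performed on one member at a time, which is
    # exactly the pattern that exposed the regression in 27ccdd5259.
    for i in range(num_drives):
        self.vm.hmp_qemu_io('drive%d' % i, 'aio_read -q 0 512')
        # step the clock and check query-blockstats for this drive here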
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is left unspecified, the group takes the
name of the block device.
This patch adds a new test to check the naming of throttling groups.
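For example (drive and group names are illustrative; the zero arguments
are spelled out because block_set_io_throttle requires them):

    # No 'group' argument: the limits land in a group named after the
    # device itself, i.e. "drive0".
    args = dict(device='drive0', iops=100, iops_rd=0, iops_wr=0,
                bps=0, bps_rd=0, bps_wr=0)
    self.vm.qmp('block_set_io_throttle', conv_keys=False, **args)
    # Explicit name: drive1 joins (or creates) the group "foo", just like
    # "-drive ...,throttling.group=foo" on the command line.
    args.update(device='drive1', group='foo')
    self.vm.qmp('block_set_io_throttle', conv_keys=False, **args)
    # query-block reports the resulting group name under inserted/group.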
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: d87d02823a6b91609509d8bb18e2f5dbd9a6102c.1467986342.git.berto@igalia.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
This patch adds a new test that checks that the burst settings
('iops_max', 'iops_max_length', etc.) of the throttling code work as
expected.
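For instance, a sketch of one such configuration (the numbers are
illustrative):

    # Base rate of 100 iops, with bursts of up to 2000 iops allowed for
    # at most 5 seconds.
    args = dict(device='drive0',
                iops=100, iops_max=2000, iops_max_length=5,
                iops_rd=0, iops_wr=0, bps=0, bps_rd=0, bps_wr=0)
    self.vm.qmp('block_set_io_throttle', conv_keys=False, **args)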
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch improves the test by attaching a varying number of drives
to the VM and putting them all in the same throttling group. The test
verifies that the I/O is evenly distributed among all members of the
group, and that the limits are enforced.
By default the test is repeated 3 times with 1, 2 and 3 drives, but
the maximum number of simultaneous drives is configurable.
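In outline (the iotests helpers shown and the 100-iops figure are
illustrative of the setup, not the exact code):

    import iotests

    num_drives = 3   # the test repeats this with 1, 2 and 3 drives

    vm = iotests.VM()
    for i in range(num_drives):
        vm.add_drive("null-co://")
    vm.launch()

    # Give every drive the same limits within a single throttling group.
    limits = dict(group='g0', iops=100, iops_rd=0, iops_wr=0,
                  bps=0, bps_rd=0, bps_wr=0)
    for i in range(num_drives):
        limits['device'] = 'drive%d' % i
        vm.qmp('block_set_io_throttle', conv_keys=False, **limits)
    # After running the same I/O on every drive, query-blockstats should
    # show the completed requests split roughly evenly between the
    # drives, with the group total still capped at 100 iops.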
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 513df1da5c658878191b579ebcddd985adcd4122.1433779731.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This case uses the qemu-io command "aio_{read,write} -q" to verify the
effectiveness of the I/O throttling options.
It is implemented by driving the VM's clock through the qtest protocol,
so the throttling timers fire after a deterministic amount of time. We
then verify that the number of completed I/O requests is within 10% of
the bps and iops limits.
The "null" protocol is used as the disk backend so that no actual disk
I/O is performed on the host, which makes the blockstats much more
deterministic. Both "null-aio" and "null-co" are covered, which also
serves as a simple cross-validation test for the driver code.
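Very roughly, one iteration of the check looks like this (request
counts, sizes and the one-second window are illustrative; the real test
does more careful bookkeeping):

    # With an iops limit of 100, submit twice one second's budget so the
    # limit is guaranteed to be hit.
    for i in range(200):
        self.vm.hmp_qemu_io('drive0', 'aio_read -q %d 512' % (i * 512))
    # Advance QEMU_CLOCK_VIRTUAL by exactly one second, then compare the
    # completed request count from query-blockstats against the limit,
    # allowing a 10% margin.
    self.vm.qtest("clock_step %d" % (1000 * 1000 * 1000))
    stats = self.vm.qmp('query-blockstats')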
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1422586186-9925-6-git-send-email-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>