This patch replaces the bdrv_reopen() calls that set and remove the
BDRV_O_RDWR flag with the new bdrv_reopen_set_read_only() function.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Most callers of bdrv_reopen() only use it to switch a BlockDriverState
between read-only and read-write, so this patch adds a new function
that does just that.
We also want to get rid of the flags parameter in the bdrv_reopen()
API, so this function sets the "read-only" option and passes the
original flags (which will then be updated in bdrv_reopen_prepare()).
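For illustration, a sketch of what such a helper can look like (the reopen
plumbing is simplified here and not copied verbatim from the patch):

    int bdrv_reopen_set_read_only(BlockDriverState *bs, bool read_only,
                                  Error **errp)
    {
        QDict *opts = qdict_new();
        BlockReopenQueue *queue;

        /* Express the intent as an option instead of toggling BDRV_O_RDWR */
        qdict_put_bool(opts, BDRV_OPT_READ_ONLY, read_only);

        /* Queue a reopen with the original flags; bdrv_reopen_prepare()
         * then updates them from the "read-only" option. */
        queue = bdrv_reopen_queue(NULL, bs, opts, bdrv_get_flags(bs));
        return bdrv_reopen_multiple(bdrv_get_aio_context(bs), queue, errp);
    }

A caller that used to write "bdrv_reopen(bs, flags & ~BDRV_O_RDWR, errp)" can
then simply say what it means: "bdrv_reopen_set_read_only(bs, true, errp)".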
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
aio_worker() doesn't add anything interesting; it's only a useless
indirection. Call the handler function directly instead.
As we know that this handler function is only called from coroutine
context and the coroutine stays around until the worker thread finishes,
we can keep RawPosixAIOData on the stack.
This was the last user of aio_worker(), so the function goes away now.
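As a sketch of the resulting pattern (the flush request is used as an example
here; helper names follow this series):

    static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
    {
        BDRVRawState *s = bs->opaque;
        RawPosixAIOData acb = {
            .bs         = bs,
            .aio_fildes = s->fd,
            .aio_type   = QEMU_AIO_FLUSH,
        };

        /* No aio_worker() trampoline: hand the handler itself to the thread
         * pool. The coroutine yields until the worker has finished, so the
         * stack-allocated acb stays valid for the whole request. */
        return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
    }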
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There is no real reason to keep using the callback-based mechanism here when
the rest of the file-posix driver is coroutine-based. Changing it brings
ioctls more in line with how the other request types work.
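The visible effect on the driver interface is roughly the following
(prototypes are illustrative):

    /* Callback style (old): completion is signalled through a BlockAIOCB */
    static BlockAIOCB *hdev_aio_ioctl(BlockDriverState *bs,
                                      unsigned long int req, void *buf,
                                      BlockCompletionFunc *cb, void *opaque);

    /* Coroutine style (new): the function simply returns the result once
     * the ioctl has been processed in the worker thread */
    static int coroutine_fn hdev_co_ioctl(BlockDriverState *bs,
                                          unsigned long int req, void *buf);

The driver then hooks up .bdrv_co_ioctl instead of .bdrv_aio_ioctl.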
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
aio_worker() doesn't add anything interesting; it's only a useless
indirection. Call the handler function directly instead.
As we know that this handler function is only called from coroutine
context and the coroutine stays around until the worker thread finishes,
we can keep RawPosixAIOData on the stack.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
aio_worker() for reads and writes isn't boring enough yet. It still does
some postprocessing for handling short reads and turning the result into
the right return value.
However, there is no reason why handle_aiocb_rw() couldn't do the same,
and even without duplicating code between the read and write path. So
move the code there.
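A sketch of the postprocessing that moves into handle_aiocb_rw() (a fragment,
simplified; field names follow this series):

    if (nbytes == aiocb->aio_nbytes) {
        ret = 0;
    } else if (nbytes >= 0 && nbytes < aiocb->aio_nbytes) {
        if (aiocb->aio_type & QEMU_AIO_WRITE) {
            /* A short write is an error */
            ret = -EINVAL;
        } else {
            /* A short read is padded: zero out the tail of the buffer */
            iov_memset(aiocb->io.iov, aiocb->io.niov, nbytes,
                       0, aiocb->aio_nbytes - nbytes);
            ret = 0;
        }
    } else {
        ret = nbytes < 0 ? nbytes : -EINVAL;
    }

Doing this once in the handler keeps the read and write paths from
duplicating it.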
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Getting the thread pool of the AioContext of a block node and scheduling
some work in it is an operation that is already done twice, and we'll
get more instances. Factor it out into a separate function.
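The factored-out helper is essentially this (sketch):

    static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs,
                                                   ThreadPoolFunc func,
                                                   void *arg)
    {
        /* Schedule @func in the thread pool of the node's AioContext and
         * yield until it has run */
        ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
        return thread_pool_submit_co(pool, func, arg);
    }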
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
RawPosixAIOData contains a lot of fields for several separate operations
that are to be processed in a worker thread and that need different
parameters. The struct is currently rather unorganised, with unions that
cover some, but not all operations, and even one #define for field names
instead of a union.
Clean this up to have some common fields and a single union. As a side
effect, on x86_64 the struct shrinks from 72 to 48 bytes.
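The reorganised struct then has roughly this shape (sketch):

    typedef struct RawPosixAIOData {
        BlockDriverState *bs;       /* common fields ...                   */
        int aio_type;
        int aio_fildes;
        off_t aio_offset;
        uint64_t aio_nbytes;

        union {                     /* ... and a single union for the rest */
            struct {
                struct iovec *iov;
                int niov;
            } io;
            struct {
                uint64_t cmd;
                void *buf;
            } ioctl;
            struct {
                int aio_fd2;
                off_t aio_offset2;
            } copy_range;
            struct {
                PreallocMode prealloc;
                Error **errp;
            } truncate;
        };
    } RawPosixAIOData;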
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Do decompression in threads, like it is already done for compression.
This improves the performance of asynchronous compressed reads.
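Schematically (simplified, assuming the existing compression wrapper is
generalised to take the worker function as a parameter), the decompression
entry point just reuses the thread-pool path that compression already goes
through:

    static ssize_t coroutine_fn
    qcow2_co_decompress(BlockDriverState *bs, void *dest, size_t dest_size,
                        const void *src, size_t src_size)
    {
        /* Same path as the compression case, only with the decompression
         * function handed to the worker thread */
        return qcow2_co_do_compress(bs, dest, dest_size, src, src_size,
                                    qcow2_decompress);
    }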
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Allocate buffers locally and release the qcow2 lock. Then reads inside
qcow2_co_preadv_compressed may be done in parallel; however, all
decompression is still done synchronously. Let's improve that in the
following commit.
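The key part of the change, schematically (a fragment from the read path;
error handling trimmed):

    /* Read the compressed data into a local buffer with s->lock dropped,
     * so several compressed reads can be in flight at the same time */
    buf = g_try_malloc(csize);
    if (buf == NULL) {
        return -ENOMEM;
    }
    qemu_iovec_init(&local_qiov, 1);
    qemu_iovec_add(&local_qiov, buf, csize);

    qemu_co_mutex_unlock(&s->lock);
    ret = bdrv_co_preadv(bs->file, coffset, csize, &local_qiov, 0);
    qemu_co_mutex_lock(&s->lock);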
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually moving away from sector-based interfaces towards byte-based
ones. Get rid of the sector-based interface here too.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
- make it look more like a counterpart of qcow2_compress: rename the
  function and its parameters
- drop the extra out_len variable; check how much of the output buffer was
  filled via the strm structure itself
- fix code style
- add some documentation
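For illustration, a self-contained version of what the reworked helper can
look like (the success check and return convention are simplified relative
to the actual code):

    #include <errno.h>
    #include <sys/types.h>
    #include <zlib.h>

    /*
     * Decompress raw-deflate data from @src (at most @src_size bytes) into
     * @dest. Returns the number of bytes written, or -EIO on failure.
     */
    static ssize_t qcow2_decompress(void *dest, size_t dest_size,
                                    const void *src, size_t src_size)
    {
        ssize_t ret;
        z_stream strm = {
            .next_in   = (void *) src,
            .avail_in  = src_size,
            .next_out  = dest,
            .avail_out = dest_size,
        };

        if (inflateInit2(&strm, -12) != Z_OK) {
            return -EIO;
        }

        ret = inflate(&strm, Z_FINISH);
        if (ret == Z_STREAM_END ||
            (ret == Z_BUF_ERROR && strm.avail_out == 0)) {
            /* No extra out_len variable: how much data was produced is
             * read back from the stream state itself */
            ret = dest_size - strm.avail_out;
        } else {
            ret = -EIO;
        }

        inflateEnd(&strm);
        return ret;
    }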
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Compression is done in threads in qcow2.c. We want to do decompression
in the same way, so, as a first step, move it to the same file.
The only change is adding braces around the if-body in decompress_buffer,
to satisfy checkpatch.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Give an explicit size for both the source and destination buffers, to make
it similar to the decompression path, and then cleanly reuse the parameter
structure for decompression threads.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Use the appropriate macro, corresponding to the deflateInit2 spec.
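For reference, a small example of a deflateInit2() call written with the
macros defined by the zlib spec, including the Z_OK result check (the
parameter values are only illustrative, not necessarily what qcow2 passes):

    #include <zlib.h>

    static int init_raw_deflate(z_stream *strm)
    {
        /* Per the deflateInit2() spec, success is reported as Z_OK, so
         * compare against the macro rather than a bare 0 */
        if (deflateInit2(strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                         -12 /* raw deflate */, 9 /* memLevel */,
                         Z_DEFAULT_STRATEGY) != Z_OK) {
            return -1;
        }
        return 0;
    }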
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
After commit f8d59dfb40
"block/backup: fix fleecing scheme: use serialized writes", fleecing
(specifically, reading from the backup target while the backup source is in
the backing chain of the backup target) is safe, because all backup-job
writes to the target are serialized. Therefore we don't need additional
synchronization for these reads.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This change makes it easier to understand what kind of block type is being
handled by the code. Using names similar to the DMG documentation is easier
than tracking all the hex values assigned to a block type.
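As an illustration of the idea (only ULFO's value is taken from this series;
the other names and values are examples of the same pattern):

    /* Name the chunk types instead of comparing against bare hex values */
    enum {
        UDZO = 0x80000005,   /* zlib-compressed chunk  */
        UDBZ = 0x80000006,   /* bzip2-compressed chunk */
        ULFO = 0x80000007,   /* lzfse-compressed chunk */
    };

A switch over the chunk type can then read "case UDBZ:" instead of
"case 0x80000006:".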
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This commit adds support for the new dmg-lzfse module to the dmg block
driver. It adds support for block type ULFO (0x80000007).
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This commit adds support for the open-source lzfse library. With this
library, the dmg block driver can decompress images that use this type of
compression.
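For reference, using the library's decode entry points looks like this
(a standalone example of the lzfse API, not the dmg driver code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <lzfse.h>

    /* Decode an lzfse-compressed chunk into dst; returns the number of
     * bytes written to dst */
    static size_t decode_chunk(uint8_t *dst, size_t dst_size,
                               const uint8_t *src, size_t src_size)
    {
        size_t n = 0;
        void *scratch = malloc(lzfse_decode_scratch_size());

        if (scratch != NULL) {
            n = lzfse_decode_buffer(dst, dst_size, src, src_size, scratch);
            free(scratch);
        }
        return n;
    }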
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
QEMU's dmg support includes zlib and bzip2, but it does not contain lzfse
support. This commit adds the source file to extend compression support
for newer DMGs.
Signed-off-by: Julio Faracco <jcfaracco@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Merge remote-tracking branch 'remotes/stefanberger/tags/pull-tpm-2018-12-04-1' into staging
Merge tpm 2018/12/04 v1
# gpg: Signature made Tue 04 Dec 2018 15:25:52 GMT
# gpg: using RSA key 75AD65802A0B4211
# gpg: Good signature from "Stefan Berger <stefanb@linux.vnet.ibm.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: B818 B9CA DF90 89C2 D5CE C66B 75AD 6580 2A0B 4211
* remotes/stefanberger/tags/pull-tpm-2018-12-04-1:
tpm: Make sure the locality received from backend is valid
tpm: Make sure new locality passed to tpm_tis_prep_abort() is valid
tpm: Remove unused locty parameter from tpm_tis_abort()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The two things that need to be handled are the cipher and the ivgen. For the
ivgen the solution is just a mutex, as IV calculations should not take long
in comparison with encryption/decryption. For the cipher, let's keep
per-thread ciphers.
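Schematically, the decryption path then looks like this (a rough sketch;
pop_free_cipher() and push_free_cipher() are hypothetical names for the
per-thread cipher pool, not the patch's actual API):

    /* Only the (cheap) IV calculation is serialized by the mutex ... */
    qemu_mutex_lock(&block->mutex);
    ret = qcrypto_ivgen_calculate(block->ivgen, sector, iv, niv, errp);
    cipher = pop_free_cipher(block);            /* hypothetical helper */
    qemu_mutex_unlock(&block->mutex);

    /* ... while the (expensive) decryption runs without the lock, using a
     * cipher object that no other thread touches */
    if (ret == 0) {
        ret = qcrypto_cipher_setiv(cipher, iv, niv, errp);
    }
    if (ret == 0) {
        ret = qcrypto_cipher_decrypt(cipher, buf, buf, len, errp);
    }

    qemu_mutex_lock(&block->mutex);
    push_free_cipher(block, cipher);            /* hypothetical helper */
    qemu_mutex_unlock(&block->mutex);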
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Introduce QCryptoBlock-based functions and use them where possible.
This is needed to implement thread-safe encrypt/decrypt operations.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Rename qcrypto_block_*crypt_helper to qcrypto_block_cipher_*crypt_helper,
as it's not about QCryptoBlock. This is needed to introduce
qcrypto_block_*crypt_helper in the next commit, which will take a
QCryptoBlock pointer and will then be able to use its additional fields,
which in turn will be used to implement thread-safe QCryptoBlock
operations.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
qcrypto_block_encrypt_helper and qcrypto_block_decrypt_helper are
almost identical; let's reduce the code duplication and simplify further
improvements.
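The merged helper can be sketched like this (simplified; the function
pointer is the only place where encryption and decryption differ):

    typedef int (*QCryptoCipherEncDecFunc)(QCryptoCipher *cipher,
                                           const void *in, void *out,
                                           size_t len, Error **errp);

    static int do_qcrypto_block_cipher_encdec(QCryptoCipher *cipher,
                                              size_t niv, QCryptoIVGen *ivgen,
                                              int sectorsize, uint64_t offset,
                                              uint8_t *buf, size_t len,
                                              QCryptoCipherEncDecFunc func,
                                              Error **errp)
    {
        uint8_t *iv = niv ? g_new0(uint8_t, niv) : NULL;
        uint64_t startsector = offset / sectorsize;
        int ret = -1;

        while (len > 0) {
            size_t nbytes = MIN(len, (size_t) sectorsize);

            if (ivgen) {
                if (qcrypto_ivgen_calculate(ivgen, startsector,
                                            iv, niv, errp) < 0 ||
                    qcrypto_cipher_setiv(cipher, iv, niv, errp) < 0) {
                    goto cleanup;
                }
            }
            /* Encrypt or decrypt, depending on the function passed in */
            if (func(cipher, buf, buf, nbytes, errp) < 0) {
                goto cleanup;
            }
            startsector++;
            buf += nbytes;
            len -= nbytes;
        }
        ret = 0;
     cleanup:
        g_free(iv);
        return ret;
    }

The two public helpers then become thin wrappers that pass either
qcrypto_cipher_encrypt or qcrypto_cipher_decrypt as @func.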
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Free block->cipher and block->ivgen on the error path.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
The values specified in the documentation don't match the actual
defaults set in qcrypto_block_luks_create().
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
This changes two lines in simple.c that end with a comma, replacing the
commas with semicolons.
Signed-off-by: Larry Dewey <ldewey@suse.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20181127190849.10558-1-ldewey@suse.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Currently the log backend prints the process id of QEMU at the start
of each output line, but since threads share the same PID there is no
clear distinction between their outputs.
Having the thread id present in the log makes it easier to see when
output comes from different threads. E.g.:
12423@1538597569.672527:qemu_mutex_lock waiting on mutex 0x1103ee60 (/root/qemu/util/main-loop.c:236)
...
12430@1538597569.503928:qemu_mutex_unlock released mutex 0x1103ee60 (/root/qemu/cpus.c:1238)
12431@1538597569.503937:qemu_mutex_locked taken mutex 0x1103ee60 (/root/qemu/cpus.c:1257)
^here
In the above, 12423 is the main process id and 12430 & 12431 are the
two vcpu threads.
(qemu) info cpus
* CPU #0: thread_id=12430
CPU #1: thread_id=12431
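A standalone sketch of the idea (not the actual patch; it uses the Linux
gettid syscall directly for illustration):

    #include <inttypes.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Emit the "<tid>@<seconds>.<microseconds>:" prefix with the calling
     * thread's id instead of the process id, so interleaved lines can be
     * attributed to a specific thread */
    static void trace_prefix(FILE *out)
    {
        struct timeval tv;

        gettimeofday(&tv, NULL);
        fprintf(out, "%ld@%" PRId64 ".%06d:",
                (long) syscall(SYS_gettid),     /* was: (long) getpid() */
                (int64_t) tv.tv_sec, (int) tv.tv_usec);
    }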
Suggested-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Taking the address of a field in a packed struct is a bad idea, because
it might not be actually aligned enough for that pointer type (and
thus cause a crash on dereference on some host architectures). Newer
versions of clang warn about this. Avoid the bug by not using the
"modify in place" byte swapping functions.
Patch produced with scripts/coccinelle/inplace-byteswaps.cocci
(with a couple of long lines manually wrapped).
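The transformation has this shape (struct and field names here are only
illustrative):

    /* before: forms a pointer to a member of a packed struct, which may be
     * insufficiently aligned for the pointer type */
    be32_to_cpus(&req->hdr.data_len);

    /* after: plain load, swap, store; no pointer to the packed member */
    req->hdr.data_len = be32_to_cpu(req->hdr.data_len);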
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20181210120436.30522-1-peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
vfio-ap devices do not pin any pages in the host. Therefore, they
are compatible with memory ballooning.
Flag them as compatible, so both vfio-ap and a balloon can be
used simultaneously.
Cc: qemu-stable@nongnu.org
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Tony Krowiak <akrowiak@linux.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Just like on other architectures, we should stop the clock while the guest
is not running. This is already properly done for TCG. Right now, doing an
offline migration (stop, migrate, cont) can easily trigger stalls in the
guest.
Even doing a
(hmp) stop
... wait 2 minutes ...
(hmp) cont
will already trigger stalls.
So whenever the guest stops, backup the KVM TOD. When continuing to run
the guest, restore the KVM TOD.
One special case is starting a simple VM: reading the TOD from KVM to
stop it right away, until the guest is actually started, would mean that the
time of any simple VM would already differ from the host time. We can
simply leave the TOD running, and the guest won't be able to notice
it.
For migration, we actually want to keep the TOD stopped until really
starting the guest. To be able to catch most errors, we should however
try to set the TOD in addition to simply storing it. So we can still
catch basic migration problems.
If anything goes wrong while backing up/restoring the TOD, we have to
ignore it (but print a warning). This is then basically a fallback to
old behavior (TOD remains running).
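The mechanism can be sketched roughly as follows (heavily simplified; the
state layout and the kvm_s390_tod_get()/kvm_s390_tod_set() helper names are
placeholders, not the actual patch):

    static void kvm_s390_tod_vm_state_change(void *opaque, int running,
                                             RunState state)
    {
        S390TODState *td = opaque;
        Error *err = NULL;

        if (running && td->stopped) {
            /* Resuming the guest: restore the backed-up TOD first */
            kvm_s390_tod_set(td, &err);         /* placeholder helper */
            td->stopped = false;
        } else if (!running && !td->stopped) {
            /* Guest stops: remember the current TOD so it does not keep
             * advancing while the guest cannot run */
            kvm_s390_tod_get(td, &err);         /* placeholder helper */
            td->stopped = true;
        }

        if (err) {
            /* Fall back to the old behavior: warn and leave the TOD
             * running rather than failing */
            warn_report_err(err);
        }
    }

The handler is registered with qemu_add_vm_change_state_handler(), so it
runs on every stop/cont transition.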
I tested this very basically with an initrd:
1. Start a simple VM. Observed that the TOD is kept running. Old
behavior.
2. Ordinary live migration. Observed that the TOD is temporarily
stopped on the destination when setting the new value and
correctly started when finally starting the guest.
3. Offline live migration. (stop, migrate, cont). Observed that the
TOD will be stopped on the source with the "stop" command. On the
destination, the TOD is temporarily stopped when setting the new
value and correctly started when finally starting the guest via
"cont".
4. Simple stop/cont correctly stops/starts the TOD. (multiple stops
or conts in a row have no effect, so works as expected)
In the future, we might want to send the guest a special kind of time sync
interrupt under some conditions, so it can synchronize its tod to the
host tod. This is interesting for migration scenarios but also when we
get time sync interrupts ourselves. This however will most probably have
to be handled in KVM (e.g. when the tods differ too much) and is not
desired e.g. when debugging the guest (single stepping should not
result in permanent time syncs). I consider something like that an add-on
on top of this basic "don't break the guest" handling.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20181130094957.4121-1-david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Halil does more work in this area than I do right now. Let's add Halil.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20181204133802.100998-1-borntraeger@de.ibm.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>