Now that AIOPool no longer keeps a freelist, it isn't really a "pool"
anymore. Rename it to AIOCBInfo and make it const since it no longer
needs to be modified.
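The resulting type is small; roughly (a sketch, assuming the struct keeps the
cancel callback and aiocb_size members that AIOPool already had):

    typedef struct AIOCBInfo {
        void (*cancel)(BlockDriverAIOCB *acb);   /* cancel a pending request */
        size_t aiocb_size;                       /* size of the AIOCB subtype */
    } AIOCBInfo;

Callers then pass a pointer to a static const AIOCBInfo into qemu_aio_get().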
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
AIO control blocks are frequently acquired and released because each aio
request involves at least one AIOCB. Therefore, we pool them to avoid
heap allocation overhead.
The problem with the freelist approach in AIOPool is thread-safety: if
we want BlockDriverStates to be associated with AioContexts that execute
in multiple threads, a global freelist cannot be used safely.
This patch drops the freelist and instead uses g_slice_alloc(), which is
tuned for per-thread fixed-size object pools. qemu_aio_get() and
qemu_aio_release() are now thread-safe.
Note that the change from g_malloc0() to g_slice_alloc() should be safe
since the freelist reuse case doesn't zero the AIOCB either.
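With the freelist gone, the allocation paths reduce to roughly the following
(a sketch; the AIOPool and BlockDriverAIOCB field names follow the existing
definitions):

    void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
                       BlockDriverCompletionFunc *cb, void *opaque)
    {
        BlockDriverAIOCB *acb;

        /* Per-thread magazine allocation instead of a global freelist */
        acb = g_slice_alloc(pool->aiocb_size);
        acb->pool = pool;
        acb->bs = bs;
        acb->cb = cb;
        acb->opaque = opaque;
        return acb;
    }

    void qemu_aio_release(void *p)
    {
        BlockDriverAIOCB *acb = p;
        g_slice_free1(acb->pool->aiocb_size, acb);
    }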
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Using appropriate types for variables is a good thing :). All users
simply do sizeof(MyType), and since the value is passed to a memory
allocator, it should be size_t.
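The change itself is essentially a one-liner in the AIOPool definition;
roughly:

    -    int aiocb_size;
    +    size_t aiocb_size;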
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some cleanups can now be made, since the main loop no longer needs
hooks into the bottom half code.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With this change, async.c no longer relies on any service from
main-loop.c, i.e. it is completely self-contained.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This adds a GPollFD to each AioHandler. It will then be possible to
attach these GPollFDs to a GSource, and from there to the main loop.
aio_wait examines the GPollFDs and avoids calling select() if any of
them is set (similar to what it does when bottom halves are available).
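In the POSIX implementation, each handler entry then carries its poll state
directly; a sketch, assuming the other members match the existing handler
list in aio.c:

    struct AioHandler {
        GPollFD pfd;               /* fd plus requested/returned events */
        IOHandler *io_read;
        IOHandler *io_write;
        AioFlushHandler *io_flush;
        int deleted;
        void *opaque;
        QLIST_ENTRY(AioHandler) node;
    };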
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will be used when polling the GSource attached to an AioContext.
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
With this patch, I/O handlers (including event notifier handlers) can be
attached to a single AioContext.
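The registration interface names the context explicitly; a sketch of the
expected prototype (parameter order is an assumption):

    void aio_set_fd_handler(AioContext *ctx, int fd,
                            IOHandler *io_read, IOHandler *io_write,
                            AioFlushHandler *io_flush, void *opaque);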
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Start introducing AioContext, which will let us remove globals from
aio.c/async.c, and introduce multiple I/O threads.
The bottom half functions now take an additional AioContext argument.
A bottom half is created with a specific AioContext that remains the
same throughout its lifetime. qemu_bh_new is just a wrapper that
uses a global context.
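The wrapper can be as thin as the following sketch, with qemu_aio_context
standing for the global context:

    QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque);

    QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
    {
        /* Bottom halves created this way run in the global context. */
        return aio_bh_new(qemu_aio_context, cb, opaque);
    }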
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This adds to aio.c a platform-independent API based on EventNotifiers
that can be used by both POSIX and Win32.
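A sketch of the corresponding prototype; the AioFlushEventNotifierHandler
typedef name is an assumption here:

    void aio_set_event_notifier(AioContext *ctx,
                                EventNotifier *notifier,
                                EventNotifierHandler *io_read,
                                AioFlushEventNotifierHandler *io_flush);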
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The definition of when qemu_aio_flush should loop is much simpler
than it looks. It just has to call qemu_aio_wait until it makes
no progress and all flush callbacks return false. qemu_aio_wait
is the logical place to tell the caller about this.
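With qemu_aio_wait() reporting progress back to the caller, the flush loop
reduces to something like:

    void qemu_aio_flush(void)
    {
        /* Keep dispatching until no handler makes progress and all
         * io_flush callbacks report no pending requests.
         */
        while (qemu_aio_wait()) {
            /* Do nothing */
        }
    }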
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
And remove several block_int.h inclusions that should not be there.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some AIO completions will be left unhandled when their callback cannot
be invoked. qemu_aio_process_queue() is used later to run any callbacks
that remain and can be run at that point.
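A sketch of the resulting entry point; the return convention (nonzero when
at least one deferred callback ran) is an assumption:

    /* Run completion callbacks that could not be invoked when their
     * request finished.  Returns nonzero if any callback was run.
     */
    int qemu_aio_process_queue(void);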
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
qemu_aio_wait, by invoking a bottom half or one of the AIO completion
callbacks, could end up submitting new pending AIO, breaking the
invariant that qemu_aio_flush returns only when no pending AIO is
outstanding (potentially a problem for migration).
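The fix amounts to looping until a pass completes with nothing outstanding;
a sketch of the shape, where qemu_aio_num_outstanding() is a hypothetical
stand-in for the real bookkeeping:

    void qemu_aio_flush(void)
    {
        do {
            /* Completion callbacks run here may submit new AIO... */
            qemu_aio_wait();
            /* ...so re-check before declaring the queue drained. */
        } while (qemu_aio_num_outstanding() > 0);
    }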
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Kevin Wolf <kwolf@redhat.com>
This patch refactors the AIO layer to allow multiple AIO implementations. It's
only possible because of the recent signalfd() patch.
Right now, the AIO infrastructure is pretty specific to the block raw backend.
For other block devices to implement AIO, the qemu_aio_wait function must
support registration. This patch introduces a new function,
qemu_aio_set_fd_handler, which registers a file descriptor along with the
handlers to invoke when it becomes ready. qemu_aio_wait() now polls the set
of file descriptors registered with this function until one becomes
readable or writable.
This patch should allow the implementation of alternative AIO backends (via a
thread pool or linux-aio) and AIO backends in non-traditional block devices
(like NBD).
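A sketch of the new registration function; the exact parameter list is an
assumption, with io_flush reporting whether the descriptor still has
requests in flight:

    int qemu_aio_set_fd_handler(int fd,
                                IOHandler *io_read,
                                IOHandler *io_write,
                                AioFlushHandler *io_flush,
                                void *opaque);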
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5297 c046a42c-6fe2-441c-8c8c-71466251a162