Okay, I started looking into how to handle scsi-generic I/O in the
new world order.
I think the best option is to use the SG_IO ioctl instead of the read/write
interface, as that allows us to support scsi passthrough on disk/cdrom
devices, too. See Hannes' patch on the kvm list from August for an
example.
Now that we always go through ioctls we don't need any abstraction other
than bdrv_ioctl for synchronous requests for now, and for asynchronous
requests I've added an aio_ioctl abstraction to keep things simple.
Long-term we might want to move the ops to a higher-level abstraction
and let the low-level code fill out the request header, but I'm lazy
enough to leave that to the people trying to support scsi-passthrough
on a non-Linux OS.
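For reference, roughly what an SG_IO request looks like on Linux; a
standalone sketch issuing a SCSI INQUIRY, with an illustrative device path
and sizes rather than the actual qemu code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <scsi/sg.h>

    int main(void)
    {
        unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 };  /* INQUIRY, 96 bytes */
        unsigned char data[96];
        unsigned char sense[32];
        struct sg_io_hdr hdr;
        int fd;

        fd = open("/dev/sg0", O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        memset(&hdr, 0, sizeof(hdr));
        hdr.interface_id = 'S';
        hdr.cmd_len = sizeof(cdb);
        hdr.cmdp = cdb;
        hdr.dxfer_direction = SG_DXFER_FROM_DEV;
        hdr.dxferp = data;
        hdr.dxfer_len = sizeof(data);
        hdr.sbp = sense;
        hdr.mx_sb_len = sizeof(sense);
        hdr.timeout = 5000;                               /* milliseconds */

        if (ioctl(fd, SG_IO, &hdr) < 0) {
            perror("SG_IO");
        } else {
            printf("status 0x%x, residual %d bytes\n", hdr.status, hdr.resid);
        }
        close(fd);
        return 0;
    }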
Tested lightly by issuing various sg_ commands from sg3-utils in a guest
to a host CDROM device.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6895 c046a42c-6fe2-441c-8c8c-71466251a162
pthread_cond_timedwait is allowed to both consume the signal and
return with the value indicating a timeout, hence the predicate should
always be (re)checked before taking any action.
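A minimal sketch of the safe pattern (the nr_requests predicate and the
helper name are illustrative, not the actual qemu code):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int nr_requests;            /* the predicate: is work pending? */

    /* Wait for a request or the deadline; the predicate is rechecked after
     * every wakeup because pthread_cond_timedwait() may consume the signal
     * and still return ETIMEDOUT. */
    static bool wait_for_request(const struct timespec *deadline)
    {
        bool have_work;

        pthread_mutex_lock(&lock);
        while (nr_requests == 0) {
            if (pthread_cond_timedwait(&cond, &lock, deadline) == ETIMEDOUT &&
                nr_requests == 0) {
                break;                 /* genuine timeout, really no work */
            }
        }
        have_work = nr_requests > 0;
        pthread_mutex_unlock(&lock);
        return have_work;
    }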
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6634 c046a42c-6fe2-441c-8c8c-71466251a162
Avoid repeated creation/initialization/destruction of attr and calls to
getpid.
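Roughly the idea, as a standalone sketch (pthread_once is just one way to
do the one-time setup; the names are illustrative):

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    static pthread_attr_t attr;
    static pid_t cached_pid;
    static pthread_once_t init_once = PTHREAD_ONCE_INIT;

    /* One-time setup: build the thread attribute and cache the pid instead
     * of recreating/destroying attr and calling getpid() per I/O thread. */
    static void do_init(void)
    {
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        cached_pid = getpid();
    }

    static void spawn_io_thread(void *(*fn)(void *), void *arg)
    {
        pthread_t tid;

        pthread_once(&init_once, do_init);
        pthread_create(&tid, &attr, fn, arg);
    }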
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6633 c046a42c-6fe2-441c-8c8c-71466251a162
Broadcast was used so that the I/O threads would wake up, reset their
ts values, and all but one would go back to sleep; in other words, an
optimization to keep threads from exiting in the presence of continuing
I/O activity. Spurious wakeups make looping around cond_timedwait with
an ever-reinitialized ts potentially unsafe, so ts is no longer
reinitialized inside the loop; hence the switch to signal is warranted,
and the benefits of this particular optimization are lost.
(It's worth noting that the timed variants of pthread calls use the
realtime clock by default, and can therefore hang "forever" should the
host time be changed. Unfortunately, not all host systems QEMU runs on
support CLOCK_MONOTONIC and/or pthread_condattr_setclock with this
value.)
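Where the host does support it, requesting the monotonic clock would look
roughly like this (a sketch, not part of this patch):

    #include <pthread.h>
    #include <time.h>
    #include <unistd.h>

    /* Ask for CLOCK_MONOTONIC on hosts that support it, so a change of the
     * host time cannot stall pthread_cond_timedwait().  The deadline must
     * then be computed with clock_gettime(CLOCK_MONOTONIC, ...) as well. */
    static void cond_init_monotonic(pthread_cond_t *cond)
    {
        pthread_condattr_t attr;

        pthread_condattr_init(&attr);
    #if defined(CLOCK_MONOTONIC) && defined(_POSIX_MONOTONIC_CLOCK)
        pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    #endif
        pthread_cond_init(cond, &attr);
        pthread_condattr_destroy(&attr);
    }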
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6632 c046a42c-6fe2-441c-8c8c-71466251a162
When we cancel an AIO request that is already being processed by
aio_thread, qemu_paio_cancel should return QEMU_PAIO_NOTCANCELED as long
as aio_thread isn't done with this request. But as the latter currently
updates aiocb->ret after every block of the request, we may report
QEMU_PAIO_ALLDONE too early.
Furthermore, if a zero-length request were ever queued, aiocb->ret
would never be set to anything other than -EINPROGRESS, and callers like
raw_aio_cancel could get stuck in an endless loop.
Fix those issues by updating aiocb->ret _after_ the request has been
fully processed. This also simplifies the locking.
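Roughly the intended ordering, as a simplified sketch with stand-in names
(not the actual posix-aio-compat code):

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Simplified stand-in for the real request structure. */
    struct qemu_paiocb {
        int fd;
        void *buf;
        size_t count;
        off_t offset;
        ssize_t ret;                   /* -EINPROGRESS while in flight */
    };

    static pthread_mutex_t aiocb_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Publish ret exactly once, after the whole request (even a zero-length
     * one) has been handled; until then cancel keeps seeing -EINPROGRESS and
     * reports the request as not cancelable. */
    static void process_and_complete(struct qemu_paiocb *aiocb)
    {
        ssize_t ret = 0;
        size_t done = 0;

        while (done < aiocb->count) {  /* the request is handled in blocks */
            ssize_t len = pread(aiocb->fd, (char *)aiocb->buf + done,
                                aiocb->count - done, aiocb->offset + done);
            if (len <= 0) {
                ret = len;
                break;
            }
            done += len;
            ret = done;
        }

        pthread_mutex_lock(&aiocb_lock);
        aiocb->ret = ret;              /* single update, after completion */
        pthread_mutex_unlock(&aiocb_lock);
    }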
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@6278 c046a42c-6fe2-441c-8c8c-71466251a162
glibc implements posix-aio as a thread pool and imposes a number of limitations.
1) It allows only one request per file descriptor. We hack around this by
dup()'ing file descriptors, which is hideously ugly.
2) It's impossible to add new interfaces, and we need a vectored read/write
operation to properly support a zero-copy API.
What the glibc folks have suggested to me is to implement whatever new
interfaces we want, which can then eventually be proposed for
standardization. This requires that we provide our own posix-aio
implementation, though.
This patch implements posix-aio using pthreads. It immediately eliminates the
need for fd pooling.
It performs at least as well as the current posix-aio code (in some
circumstances, even better).
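For illustration, a stripped-down sketch of the thread-pool approach (all
names are made up, and completion notification back to the main loop is
left out):

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    struct aio_req {
        int fd;
        void *buf;
        size_t count;
        off_t offset;
        int is_write;
        ssize_t ret;                   /* filled in by the worker */
        int done;
        struct aio_req *next;
    };

    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
    static struct aio_req *q_head;

    /* Submit: queue the request and wake one worker thread. */
    static void aio_submit(struct aio_req *req)
    {
        pthread_mutex_lock(&q_lock);
        req->next = q_head;
        q_head = req;
        pthread_cond_signal(&q_cond);
        pthread_mutex_unlock(&q_lock);
    }

    /* Worker: pop requests and service them with plain pread()/pwrite().
     * No per-fd limit, and preadv()/pwritev() could be added the same way. */
    static void *aio_worker(void *opaque)
    {
        (void)opaque;
        for (;;) {
            struct aio_req *req;

            pthread_mutex_lock(&q_lock);
            while (q_head == NULL) {
                pthread_cond_wait(&q_cond, &q_lock);
            }
            req = q_head;
            q_head = req->next;
            pthread_mutex_unlock(&q_lock);

            req->ret = req->is_write
                ? pwrite(req->fd, req->buf, req->count, req->offset)
                : pread(req->fd, req->buf, req->count, req->offset);

            pthread_mutex_lock(&q_lock);
            req->done = 1;
            pthread_mutex_unlock(&q_lock);
        }
        return NULL;
    }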
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@5996 c046a42c-6fe2-441c-8c8c-71466251a162