mirror of https://github.com/xemu-project/xemu.git
virtio-net: fix bottom-half packet TX on asynchronous completion
When virtio-net is used with the socket netdev backend, the backend
can be busy and not able to collect new packets.
In this case, net_socket_receive() returns 0 and registers a poll function
to detect when the socket is ready again.
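As a rough illustration of that contract (not the actual net/socket.c code; register_write_poll() below is a hypothetical stand-in for QEMU's fd-handler machinery), a busy backend does something like:

#include <errno.h>
#include <stdint.h>
#include <sys/socket.h>

/* Illustration only: the real logic lives in net/socket.c and differs in detail. */
static ssize_t busy_backend_send(int fd, const uint8_t *buf, size_t size)
{
    ssize_t ret = send(fd, buf, size, 0);

    if (ret < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        register_write_poll(fd);  /* hypothetical: "call me back when writable" */
        return 0;                 /* tells the caller the packet is still pending */
    }
    return ret;
}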
In virtio_net_tx_bh(), virtio_net_flush_tx() propagates the 0: virtio
notifications are disabled and the bottom half is not re-scheduled, waiting
for the backend to become ready again.
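For context, the part of virtio_net_flush_tx() that handles this looks roughly as follows (a simplified sketch from a reading of hw/net/virtio-net.c, not a verbatim excerpt):

/* Simplified sketch of the asynchronous-send path in virtio_net_flush_tx(). */
ret = qemu_sendv_packet_async(qemu_get_subqueue(n->nic, queue_index),
                              out_sg, out_num, virtio_net_tx_complete);
if (ret == 0) {
    /* Backend is busy: park the element, disable TX notifications and stop.
     * virtio_net_tx_bh() is not re-scheduled; progress now depends entirely
     * on virtio_net_tx_complete() being called when the socket is writable. */
    virtio_queue_set_notification(q->tx_vq, 0);
    q->async_tx.elem = elem;
    return -EBUSY;
}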
When the socket netdev backend is able to send packets again, the poll
function restarts flushing the remaining packets by calling
virtio_net_tx_complete(), which re-enables notifications and calls
virtio_net_flush_tx() again.
But if virtio_net_flush_tx() stops because it has reached the tx_burst value,
the queue is not fully flushed and no new notification is sent to re-schedule
virtio_net_tx_bh(). Nothing restarts the flush and the remaining packets are
stuck in the queue.
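The tx_burst limit comes from the flush loop itself; roughly (again a simplified sketch, not verbatim):

/* Simplified sketch of the burst limit in virtio_net_flush_tx(). */
for (;;) {
    elem = virtqueue_pop(q->tx_vq, sizeof(VirtQueueElement));
    if (!elem) {
        break;                       /* queue drained */
    }
    /* ... build the out_sg scatter list and send the packet ... */
    if (++num_packets >= n->tx_burst) {
        break;                       /* stop here; the rest stays queued */
    }
}
return num_packets;                  /* callers compare this against tx_burst */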
To fix that, detect in virtio_net_tx_complete() whether virtio_net_flush_tx()
has been stopped by tx_burst and, if so, re-schedule the bottom half function
virtio_net_tx_bh() to flush the remaining packets.
This is what virtio_net_tx_bh() already does when virtio_net_flush_tx() is
synchronous, and it is completely bypassed when the operation needs to be
asynchronous.
Fixes: a697a334b3 ("virtio-net: Introduce a new bottom half packet TX")
Cc: alex.williamson@redhat.com
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
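For reference, the synchronous handling the message refers to, already present in virtio_net_tx_bh(), looks roughly like this (simplified sketch, may not match the tree exactly); the hunk below adds the equivalent re-schedule to virtio_net_tx_complete():

/* Simplified sketch of the existing tx_burst handling in virtio_net_tx_bh(). */
ret = virtio_net_flush_tx(q);
if (ret == -EBUSY || ret == -EINVAL) {
    return;                  /* async completion (or a broken device) takes over */
}
if (ret >= n->tx_burst) {
    /* A full burst went out: assume more packets are pending and
     * re-schedule ourselves instead of waiting for a guest kick. */
    qemu_bh_schedule(q->tx_bh);
    q->tx_waiting = 1;
    return;
}
/* Less than a full burst: re-enable notifications and do a final flush. */
virtio_queue_set_notification(q->tx_vq, 1);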
commit df8d070817 (parent 344744e148)
@@ -2526,6 +2526,7 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
     VirtIONet *n = qemu_get_nic_opaque(nc);
     VirtIONetQueue *q = virtio_net_get_subqueue(nc);
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
+    int ret;
 
     virtqueue_push(q->tx_vq, q->async_tx.elem, 0);
     virtio_notify(vdev, q->tx_vq);
@@ -2534,7 +2535,17 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
     q->async_tx.elem = NULL;
 
     virtio_queue_set_notification(q->tx_vq, 1);
-    virtio_net_flush_tx(q);
+    ret = virtio_net_flush_tx(q);
+    if (q->tx_bh && ret >= n->tx_burst) {
+        /*
+         * the flush has been stopped by tx_burst
+         * we will not receive notification for the
+         * remainining part, so re-schedule
+         */
+        virtio_queue_set_notification(q->tx_vq, 0);
+        qemu_bh_schedule(q->tx_bh);
+        q->tx_waiting = 1;
+    }
 }
 
 /* TX */