linux/drivers/hv
Vitaly Kuznetsov 0a1a86ac04 Drivers: hv: hv_balloon: survive ballooning request with num_pages=0
... and simplify the alloc_balloon_pages() interface by removing the redundant
alloc_error argument from it.

If we happen to enter balloon_up() with balloon_wrk.num_pages = 0 we will enter
an infinite 'while (!done)' loop, as alloc_balloon_pages() will always return 0
without setting alloc_error. We will also send a meaningless message to the
host on every iteration.
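
To make the failure mode concrete, here is a minimal userspace sketch of the
early-out described above. It is not the driver's code:
model_alloc_balloon_pages() and chunk_ok are illustrative stand-ins for
alloc_balloon_pages() and the alloc_pages() call inside it.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative stand-in for alloc_balloon_pages(): when the request is
 * smaller than the allocation unit -- in particular when num_pages == 0 --
 * it returns 0 ballooned pages and never touches *alloc_error.
 */
static unsigned int model_alloc_balloon_pages(unsigned int num_pages,
                                              unsigned int alloc_unit,
                                              bool *alloc_error)
{
        unsigned int ballooned = 0;

        if (num_pages < alloc_unit)
                return 0;               /* alloc_error is left untouched */

        while (ballooned < num_pages) {
                bool chunk_ok = true;   /* stand-in for alloc_pages() */

                if (!chunk_ok) {
                        *alloc_error = true;
                        break;
                }
                ballooned += alloc_unit;
        }
        return ballooned;
}

int main(void)
{
        unsigned int got;
        bool alloc_error = false;

        got = model_alloc_balloon_pages(0, 1, &alloc_error);

        /*
         * Prints "ballooned 0 pages, alloc_error=0": a num_pages=0 request
         * produces neither progress nor an error, so a caller that loops
         * until it sees one of the two never terminates.
         */
        printf("ballooned %u pages, alloc_error=%d\n", got, alloc_error);
        return 0;
}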

The 'alloc_unit == 1 && alloc_error' -> 'num_ballooned == 0' change and the
elimination of alloc_error require a special comment. We call
alloc_balloon_pages() with 2 different alloc_unit values, and there are 4
possible alloc_balloon_pages() outcomes, so let's check them all.

alloc_unit = 512:
1) num_ballooned = 0, alloc_error = 0: we set 'alloc_unit=1' and retry, pre-
  and post-patch.
2) num_ballooned > 0, alloc_error = 0: we check 'num_ballooned == num_pages'
  and act accordingly, pre- and post-patch.
3) num_ballooned > 0, alloc_error > 0: we report this chunk and remain within
  the loop, no changes here.
4) num_ballooned = 0, alloc_error > 0: we set 'alloc_unit=1' and retry, pre-
  and post-patch.

alloc_unit = 1:
1) num_ballooned = 0, alloc_error = 0: this can happen in two cases: when we
  passed 'num_pages=0' to alloc_balloon_pages() or when there was no space in
  bl_resp to place a single response. The second option is not possible, as
  bl_resp is PAGE_SIZE bytes and a single 'union dm_mem_page_range' response
  is only 8 bytes (see the capacity check after this list), but the first one
  is possible, at least in theory; I think the Hyper-V host never sends such
  requests. Pre-patch code loops forever; post-patch code sends a reply with
  more_pages = 0 and finishes.
2) num_ballooned > 0, alloc_error = 0: we ran out of space in bl_resp; we
  report partial success and remain within the loop. No changes pre- and
  post-patch.
3) num_ballooned > 0, alloc_error > 0: pre-patch code finishes; post-patch
  code makes one more try, and if that retry makes no progress (it ends with
  'num_ballooned = 0') we finish. So we try a bit harder with this patch.
4) num_ballooned = 0, alloc_error > 0: both pre- and post-patch code enter
  the 'more_pages = 0' branch and finish.
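
As a sanity check on the sizing argument in case 1) above, the arithmetic
below shows that a PAGE_SIZE response holds on the order of five hundred
8-byte page ranges; the 32-byte header is an assumed figure for illustration,
not the real struct dm_balloon_response layout.

#include <stdio.h>

int main(void)
{
        const unsigned int page_size  = 4096;   /* PAGE_SIZE on x86 */
        const unsigned int range_size = 8;      /* one union dm_mem_page_range */
        const unsigned int hdr_size   = 32;     /* assumed response header size */

        /* Prints 508: ample room to report even a single-page allocation. */
        printf("ranges per response: %u\n", (page_size - hdr_size) / range_size);
        return 0;
}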

So this patch has two real effects:
1) We reply with an empty response to a 'num_pages=0' request.
2) We try a bit harder on alloc_unit=1 allocations (and send an empty tail
   reply in case we fail).
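
To illustrate both effects, here is a userspace sketch of the post-patch flow
as described above; it is not a verbatim copy of balloon_up().
pages_left_in_system and alloc_chunk() stand in for the kernel's page
allocator, the response and the vmbus send are reduced to a printf, and the
termination check is my paraphrase of the 'num_ballooned == 0' change.

#include <stdbool.h>
#include <stdio.h>

/* Pretend the system can give up this many more pages before failing. */
static unsigned int pages_left_in_system;

static bool alloc_chunk(unsigned int alloc_unit)
{
        if (pages_left_in_system < alloc_unit)
                return false;
        pages_left_in_system -= alloc_unit;
        return true;
}

/* Post-patch shape: no alloc_error out-parameter, just a page count. */
static unsigned int alloc_balloon_pages(unsigned int num_pages,
                                        unsigned int alloc_unit)
{
        unsigned int ballooned = 0;

        if (num_pages < alloc_unit)
                return 0;

        while (ballooned + alloc_unit <= num_pages && alloc_chunk(alloc_unit))
                ballooned += alloc_unit;

        return ballooned;
}

static void balloon_up(unsigned int num_pages)
{
        unsigned int num_ballooned = 0;
        unsigned int alloc_unit = 512;  /* try big chunks first */
        bool done = false;

        while (!done) {
                bool more_pages = true;

                num_pages -= num_ballooned;
                num_ballooned = alloc_balloon_pages(num_pages, alloc_unit);

                /* Big chunks failed outright: retry with single pages. */
                if (alloc_unit != 1 && num_ballooned == 0) {
                        alloc_unit = 1;
                        continue;
                }

                /*
                 * Finish when the request is satisfied or when single-page
                 * allocation made no progress at all; the latter also turns
                 * a num_pages=0 request into one empty reply.
                 */
                if (num_ballooned == 0 || num_ballooned == num_pages) {
                        more_pages = false;
                        done = true;
                }

                printf("reply: %u pages, more_pages=%d\n",
                       num_ballooned, more_pages);
        }
}

int main(void)
{
        pages_left_in_system = 100000;
        balloon_up(0);          /* effect 1: one empty reply, no hang */

        pages_left_in_system = 700;
        balloon_up(1024);       /* effect 2: partial replies, a fallback to
                                   single pages, then an empty tail reply */
        return 0;
}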

An empty reply should be supported by the host, as even the pre-patch code
could send one when it was unable to allocate a single page.

Suggested-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-04-03 16:18:02 +02:00
channel_mgmt.c hv: remove the per-channel workqueue 2015-04-03 16:18:02 +02:00
channel.c Drivers: hv: vmbus: Fix a siganlling host signalling issue 2015-03-25 11:53:55 +01:00
connection.c hv: don't schedule new works in vmbus_onoffer()/vmbus_onoffer_rescind() 2015-04-03 16:18:02 +02:00
hv_balloon.c Drivers: hv: hv_balloon: survive ballooning request with num_pages=0 2015-04-03 16:18:02 +02:00
hv_fcopy.c hv: hv_fcopy: drop the obsolete message on transfer failure 2015-01-25 09:17:58 -08:00
hv_kvp.c Drivers: hv: kvp,vss: Fast propagation of userspace communication failure 2014-11-26 19:00:32 -08:00
hv_snapshot.c Drivers: hv: kvp,vss: Fast propagation of userspace communication failure 2014-11-26 19:00:32 -08:00
hv_util.c Drivers: hv: util: On device remove, close the channel after de-initializing the service 2015-03-01 19:31:02 -08:00
hv.c Drivers: hv: vmbus: Teardown clockevent devices on module unload 2015-03-01 19:30:07 -08:00
hyperv_vmbus.h hv: don't schedule new works in vmbus_onoffer()/vmbus_onoffer_rescind() 2015-04-03 16:18:02 +02:00
Kconfig
Makefile
ring_buffer.c Drivers: hv: vmbus: Enable interrupt driven flow control 2014-09-23 23:31:22 -07:00
vmbus_drv.c hv: run non-blocking message handlers in the dispatch tasklet 2015-04-03 16:18:01 +02:00