Use the program search path to find the Python 3 interpreter.
Patch created mechanically by running:
$ sed -i "s,^#\!/usr/bin/\(env\ \)\?python$,#\!/usr/bin/env python3," \
$(git grep -lF '#!/usr/bin/env python' \
| xargs grep -L 'if __name__.*__main__')
Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200130163232.10446-11-philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
The "migration completed" event may be sent (on the source, to be
specific) before the migration is actually completed, so the VM runstate
will still be "finish-migrate" instead of "postmigrate". So ask the
users of VM.wait_migration() to specify the final runstate they desire
and then poll the VM until it has reached that state. (This should be
over very quickly, so busy polling is fine.)
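As an illustration, the adjusted helper could look roughly like the
sketch below. It assumes an iotests.VM-like object that provides
event_wait() and qmp() (as QEMU's iotests wrapper does); the exact loop
structure is illustrative, not the actual iotests.py code:

  # Illustrative sketch only, not the actual iotests.py implementation.
  def wait_migration(vm, expect_runstate):
      # Wait for the MIGRATION event that reports completion.
      while True:
          event = vm.event_wait('MIGRATION')
          if event['data']['status'] == 'completed':
              break
      # The event may arrive while the source is still in 'finish-migrate',
      # so busy-poll query-status until the expected runstate is reached.
      # This resolves quickly, so the busy loop is acceptable.
      while vm.qmp('query-status')['return']['status'] != expect_runstate:
          pass

Callers would then pass e.g. 'postmigrate' on the source and 'running'
on the destination.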
Without this patch, I see intermittent failures in the new iotest 280
under high system load. I have not yet seen such failures with other
iotests that use VM.wait_migration() and query-status afterwards; perhaps
those failures simply occur even more rarely there, or perhaps it is
because those tests also wait for the destination VM to be running.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
234 implements functions that are useful for doing migration between two
VMs. Move them to iotests.py so that other test cases can use them, too.
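As a hedged usage sketch, a test reusing the shared helpers might look
roughly like this once they live in iotests.py (the launch options,
socket handling and helper signature shown here are illustrative
assumptions, not the exact code moved from 234):

  # Illustrative only: reusing the shared migration helpers from iotests.py.
  import iotests

  with iotests.FilePath('mig.sock') as sock_path, \
       iotests.VM(path_suffix='a') as vm_a, \
       iotests.VM(path_suffix='b') as vm_b:

      vm_a.launch()
      vm_b.add_args('-incoming', 'unix:' + sock_path)  # illustrative setup
      vm_b.launch()

      vm_a.qmp('migrate', uri='unix:' + sock_path)

      # Shared helper moved from 234; wait until each side has finished
      # migrating (a runstate argument is discussed in the patch above).
      vm_a.wait_migration()
      vm_b.wait_migration()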
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This test waits for a MIGRATION event with status=completed on the
source VM before querying the migration status on both source and
destination. However, just because the source says migration has
completed does not mean the destination thinks the same. Therefore, in
some cases, the destination VM may still report "active" instead of
"completed" when asked for its migration status.
Fix this by enabling migration events on both VMs and waiting until both
source and destination emit a status=completed MIGRATION event.
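A minimal sketch of the idea, assuming iotests-style VM objects with
qmp() and event_wait(); the vm_source/vm_dest names, the migration URI
and the surrounding test structure are placeholders:

  # Illustrative: enable MIGRATION events on *both* VMs, then wait for a
  # status=completed event from each side instead of only from the source.
  for vm in (vm_source, vm_dest):
      vm.qmp('migrate-set-capabilities',
             capabilities=[{'capability': 'events', 'state': True}])

  vm_source.qmp('migrate', uri=migration_uri)

  for vm in (vm_source, vm_dest):
      while True:
          event = vm.event_wait('MIGRATION')
          if event['data']['status'] == 'completed':
              break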
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Check that block node activation and inactivation work with a block
graph that is built from individually created nodes.
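For illustration, such a graph could be built node by node with QMP
blockdev-add roughly as below ('vm', 'img_path', the node names and the
qcow2/file driver choice are placeholders for this sketch):

  # Illustrative: create the protocol and format nodes individually
  # instead of defining them through a single -drive/-blockdev option.
  vm.qmp('blockdev-add', **{
      'driver': 'file', 'node-name': 'file-node', 'filename': img_path})
  vm.qmp('blockdev-add', **{
      'driver': 'qcow2', 'node-name': 'fmt-node', 'file': 'file-node'})

  # Completing an outgoing migration inactivates these nodes on the
  # source; the test checks that activation and inactivation behave
  # correctly for such an individually assembled graph.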
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>