Masked entries will not generate interrupt messages, thus do not need to
be routed by KVM. This is a cosmetic cleanup, just avoiding warnings of
the kind
qemu-system-x86_64: vtd_irte_get: detected non-present IRTE (index=0, high=0xff00, low=0x100)
if the masked entry happens to reference a non-present IRTE.
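A minimal sketch of the idea (names here are illustrative; where exactly the check lives depends on the MSI routing path being used):

    /* Illustrative sketch only, not the actual patch: skip KVM MSI routing
     * for masked MSI-X vectors, since they can never raise an interrupt and
     * their (possibly non-present) IRTE never needs to be translated. */
    static int vector_use_sketch(PCIDevice *pci_dev, unsigned vector)
    {
        if (msix_is_masked(pci_dev, vector)) {
            return 0;
        }
        return kvm_irqchip_add_msi_route(kvm_state, vector, pci_dev);
    }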
Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <a84b7e03-f9a8-b577-be27-4d93d1caa1c9@siemens.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
When very large regions (32GB-sized in our case, PCI pass-through of GPUs)
are compared, the subtraction result does not fit into a gint.
As a result crs_replace_with_free_ranges does not get sorted ranges and
incorrectly computes the PCI64 free space regions. This then makes the Linux
guest complain about an intersection between the device and the PCI64 hole,
and the device becomes unusable.
Fix that by returning exactly fitting comparison values.
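The problematic pattern is a comparison callback returning the raw subtraction of two 64-bit bases, truncated to gint. A sketch of the overflow-safe comparator (struct and field names assumed from the ACPI CRS range helpers):

    /* Return exactly -1/0/1 instead of a 64-bit difference truncated to gint. */
    static gint crs_range_compare(gconstpointer a, gconstpointer b)
    {
        CrsRangeEntry *entry_a = *(CrsRangeEntry **)a;
        CrsRangeEntry *entry_b = *(CrsRangeEntry **)b;

        if (entry_a->base < entry_b->base) {
            return -1;
        } else if (entry_a->base > entry_b->base) {
            return 1;
        } else {
            return 0;
        }
    }

The double dereference matches g_ptr_array_sort(), which hands the comparator pointers to the array slots rather than the elements themselves.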
Also fix the indentation of the whole crs_replace_with_free_ranges function
to make checkpatch happy.
Cc: qemu-stable@nongnu.org
Signed-off-by: Evgeny Yakovlev <wrfsh@yandex-team.ru>
Message-Id: <1563466463-26012-1-git-send-email-wrfsh@yandex-team.ru>
Signed-off-by: Evgeny Yakovlev <wrfsh@yandex-team.ru>
The vhost-user specification does not explain when
VHOST_USER_PROTOCOL_F_MQ must be implemented. This may lead
implementors of vhost-user masters to believe that this protocol feature
is required for any device that has multiple virtqueues. That would be
a mistake since existing vhost-user slaves offer multiple virtqueues but
do not advertise VHOST_USER_PROTOCOL_F_MQ.
For example, a vhost-net device with one rx/tx queue pair is not
multiqueue. The slave does not need to advertise
VHOST_USER_PROTOCOL_F_MQ. Therefore the master must assume these
virtqueues exist and cannot rely on asking the slave how many
virtqueues there are.
Extend the specification to explain the difference between true
multiqueue and regular devices with a fixed virtqueue layout.
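For illustration only (this sketch is not part of the spec change; the function and parameters are hypothetical), the master-side decision looks roughly like:

    #include <stdint.h>

    #define VHOST_USER_PROTOCOL_F_MQ 0   /* bit number from the vhost-user spec */

    /* Hypothetical helper: decide how many virtqueues the master sets up. */
    static unsigned master_queue_count(uint64_t protocol_features,
                                       unsigned queue_num_from_slave,
                                       unsigned fixed_queues_for_device)
    {
        if (protocol_features & (1ULL << VHOST_USER_PROTOCOL_F_MQ)) {
            /* True multiqueue: the VHOST_USER_GET_QUEUE_NUM reply rules. */
            return queue_num_from_slave;
        }
        /* No F_MQ: fall back to the device type's fixed layout,
         * e.g. 2 queues (one rx/tx pair) for vhost-net. */
        return fixed_queues_for_device;
    }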
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20190624091304.666-1-stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
I will not be able to continue with my maintainership responsibilities
going forward, so remove myself as the maintainer.
Signed-off-by: Farhan Ali <alifm@linux.ibm.com>
Message-Id: <355a4ac2923ff3dcf2171cb23d477440bd010b34.1564003698.git.alifm@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
When migrate_cancel is issued during a multifd migration, a sequence like this can occur:

       [source]                          [destination]
  multifd_send_sync_main [finish]
                                         multifd_recv_thread waits on &p->sem_sync
  shutdown to_dst_file
                                         detect error on from_src_file
  send RAM_SAVE_FLAG_EOS [fail]          [no chance to run multifd_recv_sync_main]
                                         multifd_load_cleanup
                                         join multifd receive thread forever

This leaves the destination qemu hung at the following stack:
pthread_join
qemu_thread_join
multifd_load_cleanup
process_incoming_migration_co
coroutine_trampoline
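One way to break such a deadlock, sketched purely as an illustration (it assumes the receive thread is blocked on p->sem_sync as in the sequence above, and that a per-channel termination flag exists):

    /* Illustrative sketch only: before joining a receive thread in
     * multifd_load_cleanup(), wake it up in case it is still blocked on
     * p->sem_sync, so the join cannot hang forever. */
    for (i = 0; i < migrate_multifd_channels(); i++) {
        MultiFDRecvParams *p = &multifd_recv_state->params[i];

        p->quit = true;                /* assumed termination flag */
        qemu_sem_post(&p->sem_sync);   /* unblock a waiting recv thread */
        qemu_thread_join(&p->thread);
    }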
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <1561468699-9819-4-git-send-email-ivanren@tencent.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When we 'migrate_cancel' a multifd migration, the live_migration thread may
hang forever at certain points, because multifd_send_thread has already
exited due to a socket error:
1. multifd_send_pages may hang at qemu_sem_wait(&multifd_send_state->channels_ready)
2. multifd_send_sync_main may hang at qemu_sem_wait(&multifd_send_state->sem_sync)
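Only as an illustration of the shape of the fix (not the actual patch), the send side needs a wake-up on error so the waits named above can return:

    /* Illustrative sketch: when a send channel dies, post the semaphores
     * named above so the live_migration thread can notice the error
     * instead of sleeping forever. */
    static void multifd_send_wakeup_on_error_sketch(void)
    {
        qemu_sem_post(&multifd_send_state->channels_ready);
        qemu_sem_post(&multifd_send_state->sem_sync);
    }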
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1561468699-9819-3-git-send-email-ivanren@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When we 'migrate_cancel' a multifd migration, the live_migration thread may
go into an endless loop in the multifd_send_pages function.
Reproduce steps:
(qemu) migrate_set_capability multifd on
(qemu) migrate -d url
(qemu) [wait a while]
(qemu) migrate_cancel
The live_migration thread then spins at 100% CPU usage in the following stack:
pthread_mutex_lock
qemu_mutex_lock_impl
multifd_send_pages
multifd_queue_page
ram_save_multifd_page
ram_save_target_page
ram_save_host_page
ram_find_and_save_block
ram_find_and_save_block
ram_save_iterate
qemu_savevm_state_iterate
migration_iteration_run
migration_thread
qemu_thread_start
start_thread
clone
Signed-off-by: Ivan Ren <ivanren@tencent.com>
Message-Id: <1561468699-9819-2-git-send-email-ivanren@tencent.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Writing the nested state, e.g. after a vmport access, can invalidate
important parts of the kernel-internal state, and it is not needed
either. So leave it out of KVM_PUT_RUNTIME_STATE.
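A sketch of the resulting gating in kvm_arch_put_registers() (assuming the existing KVM_PUT_* level constants and the kvm_put_nested_state() helper; not a verbatim quote of the patch):

    /* Only write the nested state for reset/full state syncs, never for
     * KVM_PUT_RUNTIME_STATE (e.g. after a vmport access). */
    if (level >= KVM_PUT_RESET_STATE) {
        ret = kvm_put_nested_state(x86_cpu);
        if (ret < 0) {
            return ret;
        }
    }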
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-Id: <bdd53f40-4e60-f3ae-7ec6-162198214953@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit a6f230c moved the blockbackend back to the main AioContext on unplug. It sets the AioContext of
the SCSIDevice to the main AioContext, but s->ctx is still the iothread AioContext (if the SCSI controller
is configured with an iothread). So if there are in-flight requests during unplug, an assertion
failure happens. The backtrace is below:
(gdb) bt
#0 0x0000ffff86aacbd0 in raise () from /lib64/libc.so.6
#1 0x0000ffff86aadf7c in abort () from /lib64/libc.so.6
#2 0x0000ffff86aa6124 in __assert_fail_base () from /lib64/libc.so.6
#3 0x0000ffff86aa61a4 in __assert_fail () from /lib64/libc.so.6
#4 0x0000000000529118 in virtio_scsi_ctx_check (d=<optimized out>, s=<optimized out>, s=<optimized out>) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:246
#5 0x0000000000529ec4 in virtio_scsi_handle_cmd_req_prepare (s=0x2779ec00, req=0xffff740397d0) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:559
#6 0x000000000052a228 in virtio_scsi_handle_cmd_vq (s=0x2779ec00, vq=0xffff7c6d7110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:603
#7 0x000000000052afa8 in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>, vq=0xffff7c6d7110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi-dataplane.c:59
#8 0x000000000054d94c in virtio_queue_host_notifier_aio_poll (opaque=<optimized out>) at /home/qemu-4.0.0/hw/virtio/virtio.c:2452
assert(blk_get_aio_context(d->conf.blk) == s->ctx) failed.
To avoid the assertion failure, move the "if" after qdev_simple_device_unplug_cb.
In addition, to avoid another qemu crash shown below, add aio_disable_external before
qdev_simple_device_unplug_cb, which disables further processing of external clients
while qdev_simple_device_unplug_cb runs.
(gdb) bt
#0 scsi_req_unref (req=0xffff6802c6f0) at hw/scsi/scsi-bus.c:1283
#1 0x00000000005294a4 in virtio_scsi_handle_cmd_req_submit (req=<optimized out>,
s=<optimized out>) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:589
#2 0x000000000052a2a8 in virtio_scsi_handle_cmd_vq (s=s@entry=0x9c90e90,
vq=vq@entry=0xffff7c05f110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi.c:625
#3 0x000000000052afd8 in virtio_scsi_data_plane_handle_cmd (vdev=<optimized out>,
vq=0xffff7c05f110) at /home/qemu-4.0.0/hw/scsi/virtio-scsi-dataplane.c:60
#4 0x000000000054d97c in virtio_queue_host_notifier_aio_poll (opaque=<optimized out>)
at /home/qemu-4.0.0/hw/virtio/virtio.c:2447
#5 0x00000000009b204c in run_poll_handlers_once (ctx=ctx@entry=0x6efea40,
timeout=timeout@entry=0xffff7d7f7308) at util/aio-posix.c:521
#6 0x00000000009b2b64 in run_poll_handlers (ctx=ctx@entry=0x6efea40,
max_ns=max_ns@entry=4000, timeout=timeout@entry=0xffff7d7f7308) at util/aio-posix.c:559
#7 0x00000000009b2ca0 in try_poll_mode (ctx=ctx@entry=0x6efea40, timeout=0xffff7d7f7308,
timeout@entry=0xffff7d7f7348) at util/aio-posix.c:594
#8 0x00000000009b31b8 in aio_poll (ctx=0x6efea40, blocking=blocking@entry=true)
at util/aio-posix.c:636
#9 0x00000000006973cc in iothread_run (opaque=0x6ebd800) at iothread.c:75
#10 0x00000000009b592c in qemu_thread_start (args=0x6efef60) at util/qemu-thread-posix.c:502
#11 0x0000ffff8057f8bc in start_thread () from /lib64/libpthread.so.0
#12 0x0000ffff804e5f8c in thread_start () from /lib64/libc.so.6
(gdb) p bus
$1 = (SCSIBus *) 0x0
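The reordering described above could look roughly like the following simplified sketch (the AioContext hand-off back to the main context is elided; the helpers used are existing QEMU API):

    /* Simplified sketch of the unplug path after the fix described above. */
    static void virtio_scsi_hotunplug_sketch(HotplugHandler *hotplug_dev,
                                             DeviceState *dev, Error **errp)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
        VirtIOSCSI *s = VIRTIO_SCSI(vdev);

        if (s->ctx) {
            /* Stop the iothread from picking up new external requests while
             * the device is going away. */
            aio_disable_external(s->ctx);
        }

        qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);

        if (s->ctx) {
            /* Only now, with no further requests arriving, move the
             * BlockBackend back to the main AioContext (elided here), then
             * re-enable external event processing. */
            aio_enable_external(s->ctx);
        }
    }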
Signed-off-by: Zhengui li <lizhengui@huawei.com>
Message-Id: <1563696502-7972-1-git-send-email-lizhengui@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1563829520-17525-1-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge remote-tracking branch 'remotes/stsquad/tags/pull-testing-230719-4' into staging
Final testing updates:
- docker sphinx updates
- windows build re-enabled in CI
- travis_retry for make check
- build fixes
- docker cache fixes
# gpg: Signature made Tue 23 Jul 2019 17:20:16 BST
# gpg: using RSA key 6685AE99E75167BCAFC8DF35FBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" [full]
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-testing-230719-4: (23 commits)
tests/docker: Refresh APT cache before installing new packages on Debian
tests/qemu-iotests: Don't use 'seq' in the iotests
tests/qemu-iotests/group: Remove some more tests from the "auto" group
tests/qemu-iotests/check: Allow tests without groups
tests/docker: invoke the DEBUG shell with --noprofile/--norc
travis: enable travis_retry for check phase
hw/i386: also turn off VMMOUSE if VMPORT is disabled
NSIS: Add missing firmware blobs
tests/docker: Let the test-mingw test generate a NSIS installer
buildsys: The NSIS Windows build requires qemu-nsis.bmp installed
buildsys: The NSIS Windows build requires the documentation installed
tests/docker: Install texinfo in the Fedora image
tests/docker: Set the correct cross-PKG_CONFIG_PATH in the MXE images
tests/docker: Install the NSIS tools in the MinGW capable images
tests/docker: Install Sphinx in the Debian images
shippable: re-enable the windows cross builds
tests/dockerfiles: update the win cross builds to stretch
tests/migration-test: don't spam the logs when we fail
tests/docker: Install Ubuntu images noninteractively
tests/docker: Install Sphinx in the Fedora image
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since docker caches the different layers, modifying the list of packages
to install does not invalidate the previous "apt-get update" layer,
and "apt-get install" is then likely to hit an outdated repository.
See commit beac6a98f6 and
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#apt-get
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190723141528.18023-1-philmd@redhat.com>
[AJB: manually applied and fixed up]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The 'seq' command is not available by default on OpenBSD, so these
iotests are currently failing there. It could be installed as 'gseq'
from the coreutils package - but since it is using a different name
there and we are running the iotests with the "bash" shell anyway,
let's simply use the built-in double parentheses for the for-loops
instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190723111201.1926-1-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Remove some more tests from the "auto" group that either have issues
in certain environments (like macOS or FreeBSD, or on certain file systems
like ZFS or tmpfs), do not work with the qcow2 format, or that are simply
taking too much time.
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190717111947.30356-3-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The regular expressions in the "check" script currently expect that there
is always a space after the test number in the group file, so you can't
have a test in there without a group unless the line still ends with a
space - which is quite error prone since some editors might remove spaces
at the end of lines automatically.
Thus let's fix the regular expressions so that it is also possible to
have lines with one test number only in the group file.
Suggested-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20190717111947.30356-2-thuth@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
It's very confusing when things work in the debug shell because its
environment differs from the one the test runs in. Fix this by
ensuring we only have the environment inherited from the run shell.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
We have some flaky tests and usually the test passes on a retry.
Enable travis_retry for the test phase and see if that helps keep
things green.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Commit 97fd1ea8c1 broke the build for --without-default-devices as
VMMOUSE depends on VMPORT.
Fixes: 97fd1ea8c1
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Various firmware blobs have been added to the pc-bios/ directory:
- CCW (since commit 0c1fecdd52)
- skiboot (since commit bcad45de6a)
- EDK2 (since commit f7fa38b74c)
Since we install qemu-system binaries able to run the architectures
targeted by these firmware blobs, include them in the NSIS installer.
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190723070218.3606-1-philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The NSIS installer generates an executable suitable to install
QEMU on Windows.
Suggested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190715174817.18981-9-philmd@redhat.com>
[AJB: also --enable-docs in configure step]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The 'makeinfo' tool is required to generate the documentation via
the 'html' Makefile rule (called by 'install-doc').
The NSIS installer uses these files.
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190715174817.18981-6-philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This silences a bunch of warnings while compiling the Slirp objects:
$ make
[...]
CC slirp/src/tftp.o
Package glib-2.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `glib-2.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'glib-2.0' found
CC slirp/src/udp6.o
Package glib-2.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `glib-2.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'glib-2.0' found
[...]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190715174817.18981-5-philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Since commit 5f71eac06e the Sphinx tool is required
to build the rST documentation.
This fixes:
$ ./configure --enable-docs
ERROR: User requested feature docs
configure was not able to find it.
Install texinfo, Perl/perl-podlators and python-sphinx
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190715174817.18981-3-philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The pkg.mxe.cc repo has been restored.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
While fixing up pkg.mxe.cc they moved the URLs around a bit and dropped
Jessie support in favour of Stretch. We also need to update the keys
used to verify the packages.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Quite often the information about which test failed is hidden by the
wall of repeated failures for each page. Stop outputting the error
after 10 bad pages and just summarise the total damage at the end.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
We correctly use the DEBIAN_FRONTEND environment variable on
the Debian images, but forgot that the Ubuntu ones are based on Debian too.
Since building docker images is not interactive, we need to
inform the APT tools of this via the DEBIAN_FRONTEND
environment variable (as we already do on our Debian images).
This fixes:
$ make docker-image-ubuntu V=1
[...]
Setting up tzdata (2019b-0ubuntu0.19.04) ...
debconf: unable to initialize frontend: Dialog
debconf: (TERM is not set, so the dialog frontend is not usable.)
debconf: falling back to frontend: Readline
Configuring tzdata
------------------
Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.
1. Africa 4. Australia 7. Atlantic 10. Pacific 13. Etc
2. America 5. Arctic 8. Europe 11. SystemV
3. Antarctica 6. Asia 9. Indian 12. US
Geographic area: 12
[HANG]
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190711124805.26476-1-philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Since commit 5f71eac06e the Sphinx tool is required
to build the rST documentation.
This fixes:
$ ./configure --enable-docs
ERROR: User requested feature docs
configure was not able to find it.
Install texinfo, Perl/perl-podlators and python-sphinx
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190711102710.2263-1-philmd@redhat.com>
[AJB: also add /usr/libexec/python3-sphinx/ to PATH]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Since commit 5f71eac06e the Sphinx tool is required
to build the rST documentation.
This fixes:
$ ./configure --enable-docs
ERROR: User requested feature docs
configure was not able to find it.
Install texinfo, Perl/perl-podlators and python-sphinx
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190711120609.12773-1-philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Add yet another test type so we can quickly exercise the miscellaneous
build products of the build system under various docker configurations.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
"git archive" fails when a submodule has a modification, because "git
stash create" doesn't handle submodules. Let's teach our
archive-source.sh to handle modifications in submodules the same way
as the qemu tree, by creating a stash.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20190708200250.12017-1-marcandre.lureau@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Merge remote-tracking branch 'remotes/amarkovic2/tags/mips-queue-jul-23-2019' into staging
MIPS queue for July 23rd, 2019
# gpg: Signature made Mon 22 Jul 2019 18:34:29 BST
# gpg: using RSA key D4972A8967F75A65
# gpg: Good signature from "Aleksandar Markovic <amarkovic@wavecomp.com>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8526 FBF1 5DA3 811F 4A01 DD75 D497 2A89 67F7 5A65
* remotes/amarkovic2/tags/mips-queue-jul-23-2019:
target/mips: Fix emulation of MSA pack instructions on big endian hosts
target/mips: Add 'fall through' comments for handling nanoMips' SHXS, SWXS
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
GCC 9 is confused by this comment when building with the CFLAG
-Wimplicit-fallthrough=2:
hw/block/pflash_cfi02.c: In function ‘pflash_write’:
hw/block/pflash_cfi02.c:574:16: error: this statement may fall through [-Werror=implicit-fallthrough=]
574 | if (boff == 0x55 && cmd == 0x98) {
| ^
hw/block/pflash_cfi02.c:581:9: note: here
581 | default:
| ^~~~~~~
cc1: all warnings being treated as errors
Rewrite the comment using 'fall through' which is recognized by
GCC and static analyzers.
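For illustration, the form GCC's -Wimplicit-fallthrough recognizes is a bare 'fall through' comment placed right before the next label, e.g.:

    /* Illustrative only: the shape of code that needs a fall-through marker. */
    static void handle_cmd(unsigned boff, unsigned cmd)
    {
        switch (cmd) {
        case 0x98:
            if (boff == 0x55 && cmd == 0x98) {
                /* enter CFI query mode */
                break;
            }
            /* fall through */
        default:
            /* anything else resets the command state machine */
            break;
        }
    }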
Reported-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20190719131425.10835-4-philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
To avoid incoherent states when the machine resets (see bug report
below), add the device reset callback.
A "system reset" sets the device state machine to READ_ARRAY mode
and, after some delay, sets the SR.7 READY bit.
Since we do not model timings, we set the SR.7 bit directly.
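A sketch of such a reset handler (struct and field names are assumed from the CFI01 model, not quoted from the patch):

    /* Sketch: return to READ_ARRAY mode and report the device as ready,
     * since command timings are not modelled. */
    static void pflash_system_reset_sketch(DeviceState *dev)
    {
        PFlashCFI01 *pfl = PFLASH_CFI01(dev);   /* type name assumed */

        pfl->wcycle = 0;      /* restart the command state machine */
        pfl->cmd = 0x00;      /* interpreted as READ_ARRAY by this model */
        pfl->status = 0x80;   /* SR.7: Write State Machine ready */
    }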
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1678713
Reported-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Laszlo Ersek <lersek@redhat.com>
[Laszlo Ersek: Regression tested EDK2 OVMF IA32X64, ArmVirtQemu Aarch64
https://lists.gnu.org/archive/html/qemu-devel/2019-07/msg04373.html]
Message-Id: <20190718104837.13905-2-philmd@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Fix emulation of MSA pack instructions on big endian hosts.
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Message-Id: <1563812573-30309-3-git-send-email-aleksandar.markovic@rt-rk.com>
This was found by GCC 8.3 static analysis.
Missed in commit fb32f8c856.
Signed-off-by: Aleksandar Markovic <amarkovic@wavecomp.com>
Reviewed-by: Aleksandar Rikalo <arikalo@wavecomp.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <1563812573-30309-2-git-send-email-aleksandar.markovic@rt-rk.com>
bdrv_set_aio_context_ignore() can only work in the main loop:
bdrv_drained_begin() only works in the main loop and the node's (old)
AioContext; and bdrv_drained_end() really only works in the main loop
and the node's (new) AioContext (contrary to its current comment, which
is just wrong).
Consequently, bdrv_set_aio_context_ignore() must be called from the
main loop. Luckily, assuming that we can make block graph changes only
from the main loop as well, all its callers do that already.
Note that changing a node's context in a sense is an operation that
changes the block graph, so it actually makes sense to require this
function to be called from the main loop.
Also, fix bdrv_drained_end()'s description. You can only use it from
the main loop or the node's AioContext, and in the latter case, the
whole subtree must be in the same context.
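The requirement can be made explicit with an assertion at the top of the function, roughly like this sketch:

    void bdrv_set_aio_context_ignore(BlockDriverState *bs,
                                     AioContext *new_context, GSList **ignore)
    {
        /* Changing a node's AioContext is a block-graph change and, per the
         * reasoning above, is only allowed from the main loop. */
        g_assert(qemu_get_current_aio_context() == qemu_get_aio_context());
        /* ... */
    }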
Fixes: e037c09c78
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190722133054.21781-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Decrementing drained_end_counter after bdrv_dec_in_flight() (which in
turn invokes bdrv_wakeup() and thus aio_wait_kick()) is not very clever.
We should decrement it beforehand, so that any waiting aio_poll() that
is woken by bdrv_dec_in_flight() sees the decremented
drained_end_counter.
Because the time window between decrementing drained_end_counter and
aio_wait_kick() is very small, I cannot supply a reliable regression
test. However, running e.g. the /bdrv-drain/blockjob/iothread/drain_all
test in test-bdrv-drain has a small chance of hanging without this
patch (about 1/200 or so; it gets to nearly 100 % if you add e.g. an
fputc(' ', stderr); after the bdrv_dec_in_flight()).
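The ordering fix boils down to something like the following sketch (assuming the counter is decremented with the atomic helpers):

    /* Decrement first, so that an aio_poll() woken up by bdrv_dec_in_flight()
     * (via bdrv_wakeup()/aio_wait_kick()) already sees the new value of
     * drained_end_counter. */
    atomic_dec(drained_end_counter);
    bdrv_dec_in_flight(bs);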
Fixes: e037c09c78
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190722133054.21781-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Completion entries are meant to be written only by the device and read only by the host.
The driver is supposed to scan the completions from the point where it last left off,
until it sees a completion whose phase bit has not been flipped yet.
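For reference, the scan described here follows the usual NVMe phase-bit convention; a schematic version (names are illustrative, not the driver's actual ones):

    /* Schematic completion-queue scan: consume entries until one's phase bit
     * has not been flipped by the device yet. */
    static void scan_completions(NvmeCqe *cq, unsigned cq_size,
                                 unsigned *head, int *phase)
    {
        for (;;) {
            NvmeCqe *cqe = &cq[*head];

            if ((le16_to_cpu(cqe->status) & 0x1) != *phase) {
                break;                    /* not yet written by the device */
            }
            /* ... handle the completed command here ... */
            if (++*head == cq_size) {
                *head = 0;
                *phase = !*phase;         /* expected phase flips on wrap-around */
            }
        }
    }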
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190716163020.13383-4-mlevitsk@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Currently the driver hardcodes the sector size to 512
and doesn't check the underlying device. Fix that.
Also fail if the underlying NVMe device is formatted with metadata,
as this needs special support.
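A sketch of deriving the block size from the Identify Namespace data instead of assuming 512 bytes (struct and field names follow the NVMe spec definitions; the function itself is hypothetical):

    /* Hypothetical helper: return the block shift of the active LBA format,
     * or an error if the format carries per-block metadata. */
    static int nvme_block_shift_sketch(NvmeIdNs *idns, Error **errp)
    {
        /* The low nibble of FLBAS selects the active LBA format. */
        NvmeLBAF *lbaf = &idns->lbaf[idns->flbas & 0xf];

        if (lbaf->ms) {
            error_setg(errp, "namespaces formatted with metadata are not supported");
            return -EINVAL;
        }
        return lbaf->ds;                  /* block size is 1 << ds bytes */
    }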
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-id: 20190716163020.13383-3-mlevitsk@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
Fix the math involving a non-standard doorbell stride.
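For reference, the NVMe spec derives the doorbell layout from CAP.DSTRD; a sketch of the offset computation (dstrd below is assumed to hold that field):

    /* Doorbell offsets per the NVMe spec: each doorbell is (4 << CAP.DSTRD)
     * bytes wide, with SQ tail and CQ head doorbells interleaved from 0x1000. */
    static inline unsigned sq_tail_doorbell_offset(unsigned idx, unsigned dstrd)
    {
        return 0x1000 + (2 * idx) * (4 << dstrd);
    }

    static inline unsigned cq_head_doorbell_offset(unsigned idx, unsigned dstrd)
    {
        return 0x1000 + (2 * idx + 1) * (4 << dstrd);
    }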
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190716163020.13383-2-mlevitsk@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
contrib/elf2dmp has a source file which uses curl/curl.h;
although we link the final executable with CURL_LIBS, we
forgot to build this source file with CURL_CFLAGS, so if
the curl header is in a place that's not already on the
system include path then it will fail to build.
Add a line specifying the cflags needed for download.o;
while we are here, bring the specification of the libs
into line with this, since using a per-object variable
setting is preferred over adding them to the final
executable link line.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20190719100955.17180-1-peter.maydell@linaro.org