Commit Graph

Richard Henderson
bb8e3ea6fa exec: Cache TARGET_PAGE_MASK for TARGET_PAGE_BITS_VARY
This eliminates a set of runtime shifts.  It turns out that we
require TARGET_PAGE_MASK more often than TARGET_PAGE_SIZE, so
redefine TARGET_PAGE_SIZE based on TARGET_PAGE_MASK instead of
the other way around.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28 10:35:20 +01:00
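
A minimal sketch of the resulting arrangement (names and details are illustrative, not the exact QEMU definitions): the mask is cached once at startup and the size is derived from it by negation, so hot paths avoid recomputing "1 << bits" at runtime.

    #include <stdint.h>

    typedef int64_t target_long;

    extern const target_long target_page_mask;   /* cached once at startup */

    #define TARGET_PAGE_MASK  (target_page_mask)
    /* the mask is -(1 << bits), so negating it recovers the size */
    #define TARGET_PAGE_SIZE  (-(int)TARGET_PAGE_MASK)
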
Richard Henderson
f048b8a7ce exec: Promote TARGET_PAGE_MASK to target_long
There are some uint64_t uses that expect TARGET_PAGE_MASK to
sign-extend for a 32-bit target, so this must continue to be a signed
type.  Define it based on TARGET_PAGE_BITS rather than
TARGET_PAGE_SIZE; this will make a following patch clearer.

This should not have a functional effect so far.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28 10:30:21 +01:00
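
A small standalone demonstration of why the mask must stay signed (hypothetical values; compile and run with any C compiler):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t addr = 0x123456789abcULL;
        int64_t smask = (int32_t)-4096;  /* signed: sign-extends above bit 11 */
        uint64_t umask = 0xfffff000u;    /* unsigned 32-bit: zero-extends,
                                            clobbering the high address bits */

        printf("signed mask:   %#llx\n", (unsigned long long)(addr & smask));
        printf("unsigned mask: %#llx\n", (unsigned long long)(addr & umask));
        /* prints 0x123456789000 vs 0x56789000 */
        return 0;
    }
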
Richard Henderson
639044b5f1 exec: Restrict TARGET_PAGE_BITS_VARY assert to CONFIG_DEBUG_TCG
This reduces the size of a release build by about 10k,
noticeably within the TLB miss helpers.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28 10:30:19 +01:00
Richard Henderson
bbc17caf81 exec: Use const alias for TARGET_PAGE_BITS_VARY
Using a variable that is declared "const" for this tells the
compiler that it may read the value once and assume that it
does not change across function calls.

For target_page_size, this means we have only one assert per
function, and one read of the variable.

This reduces the size of qemu-system-aarch64 by 8k.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28 10:29:22 +01:00
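
The underlying trick, sketched under the assumption of GCC/Clang (the alias attribute is a compiler extension, and the names here are illustrative): readers see a const object, while one translation unit keeps writable storage for one-time initialization.

    /* In the header: readers see a const and may cache the load. */
    extern const int target_page_bits;

    /* In exactly one .c file: writable storage, aliased to the const name. */
    static int target_page_bits_storage;
    extern const int target_page_bits
        __attribute__((alias("target_page_bits_storage")));

    void finalize_target_page_bits(int bits)
    {
        target_page_bits_storage = bits;  /* set once, before any reader runs */
    }
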
Wei Yang
50276a79aa cpu: use ROUND_UP() to define xxx_PAGE_ALIGN
Use ROUND_UP() for the definition, which is a little easier to read.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20191013021145.16011-2-richardw.yang@linux.intel.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28 10:26:02 +01:00
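
A sketch of the before/after shape of the macro (assuming ROUND_UP's usual power-of-two form):

    /* d must be a power of two */
    #define ROUND_UP(n, d)  (((n) + (d) - 1) & ~((d) - 1))

    /* before: open-coded */
    /* #define TARGET_PAGE_ALIGN(addr) \
           (((addr) + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK) */

    /* after: the intent is visible at a glance */
    #define TARGET_PAGE_ALIGN(addr)  ROUND_UP((addr), TARGET_PAGE_SIZE)
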
Alex Bennée
4cef72d042 cputlb: ensure _cmmu helper functions follow the naming standard
We document this in docs/devel/load-stores.rst so let's follow it. The
32-bit and 64-bit access functions have historically not included the
sign, so we leave those as they are. We also introduce some signed
helpers which are used for loading immediate values in the translator.

Fixes: 282dffc8
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20191021150910.23216-1-alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-10-28 10:26:02 +01:00
Peter Maydell
c760cb77e5 Migration pull 2019-10-11

Merge remote-tracking branch 'remotes/dgilbert/tags/pull-migration-20191011a' into staging

Migration pull 2019-10-11

Mostly cleanups and minor fixes

[Note I'm seeing a hang on the aarch64 hosted x86-64 tcg migration
test in xbzrle; but I'm seeing that on current head as well]

# gpg: Signature made Fri 11 Oct 2019 20:14:31 BST
# gpg:                using RSA key 45F5C71B4A0CB7FB977A9FA90516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>" [full]
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A  9FA9 0516 331E BC5B FDE7

* remotes/dgilbert/tags/pull-migration-20191011a: (21 commits)
  migration: Support gtree migration
  migration/multifd: pages->used would be cleared when attach to multifd_send_state
  migration/multifd: initialize packet->magic/version once at setup stage
  migration/multifd: use pages->allocated instead of the static max
  migration/multifd: fix a typo in comment of multifd_recv_unfill_packet()
  migration/postcopy: check PostcopyState before setting to POSTCOPY_INCOMING_RUNNING
  migration/postcopy: rename postcopy_ram_enable_notify to postcopy_ram_incoming_setup
  migration/postcopy: postpone setting PostcopyState to END
  migration/postcopy: mis->have_listen_thread check will never be touched
  migration: report SaveStateEntry id and name on failure
  migration: pass in_postcopy instead of check state again
  migration/postcopy: fix typo in mark_postcopy_blocktime_begin's comment
  migration/postcopy: map large zero page in postcopy_ram_incoming_setup()
  migration/postcopy: allocate tmp_page in setup stage
  migration: Don't try and recover return path in non-postcopy
  rcu: Use automatic rc_read unlock in core memory/exec code
  migration: Use automatic rcu_read unlock in rdma.c
  migration: Use automatic rcu_read unlock in ram.c
  migration: Fix missing rcu_read_unlock
  rcu: Add automatically released rcu_read_lock variants
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-10-14 16:09:52 +01:00
Dr. David Alan Gilbert
694ea274d9 rcu: Use automatic rc_read unlock in core memory/exec code
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20191007143642.301445-6-dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-10-11 14:20:02 +01:00
David Hildenbrand
1f6493be08 s390x/tcg: MVCL: Exit to main loop if requested
MVCL is interruptible and we should check for interrupts and process
them after writing back the variables to the registers. Let's check
for any exit requests and exit to the main loop. Introduce a new helper
function for that: cpu_loop_exit_requested().

When booting Fedora 30, I can see a handful of these exits and it seems
to work reliably. Also, Richard explained why this works correctly even
when MVCL is called via EXECUTE:
when MVCL is called via EXECUTE:

    (1) TB with EXECUTE runs, at address Ae
        - env->psw_addr stored with Ae.
        - helper_ex() runs, memory address Am computed
          from D2a(X2a,B2a) or from psw.addr+RI2.
        - env->ex_value stored with memory value modified by R1a

    (2) TB of executee runs,
        - env->ex_value stored with 0.
        - helper_mvcl() runs, using and updating R1b, R1b+1, R2b, R2b+1.

    (3a) helper_mvcl() completes,
         - TB of executee continues, psw.addr += ilen.
         - Next instruction is the one following EXECUTE.

    (3b) helper_mvcl() exits to main loop,
         - cpu_loop_exit_restore() unwinds psw.addr = Ae.
         - Next instruction is the EXECUTE itself...
         - goto 1.

As the PoP mentions that an interruptible instruction called via EXECUTE
should avoid modifying storage/registers that are used by EXECUTE itself,
it is fine to retrigger EXECUTE.

Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
2019-10-10 12:27:15 +02:00
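
A sketch of the pattern (stub declarations stand in for QEMU internals; only cpu_loop_exit_requested() and cpu_loop_exit_restore() are names from the commit, and the loop body is illustrative, not the actual s390x implementation):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct CPUState CPUState;
    extern bool cpu_loop_exit_requested(CPUState *cs);   /* the new helper */
    extern void cpu_loop_exit_restore(CPUState *cs, uintptr_t ra);

    /* After each partial step has written state back to the registers,
     * check for a pending exit request and leave to the main loop; the
     * instruction is simply restarted and resumes where it left off. */
    static void interruptible_copy(CPUState *cs, uintptr_t ra,
                                   bool (*step_and_writeback)(void))
    {
        while (step_and_writeback()) {
            if (cpu_loop_exit_requested(cs)) {
                cpu_loop_exit_restore(cs, ra);  /* does not return */
            }
        }
    }
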
Eric Auger
549d400587 memory: allow memory_region_register_iommu_notifier() to fail
Currently, when we attempt to register a notifier whose flags are not
supported (especially the MAP one) by the IOMMU MR, we abruptly exit
in the IOMMU code. The failure could be handled more nicely in the
caller, especially in the VFIO code.

So let's allow memory_region_register_iommu_notifier() to fail as
well as notify_flag_changed() callback.

All sites implementing the callback are updated. This patch does
not yet remove the exit(1) in the amd_iommu code.

In SMMUv3 we turn the warning message into an error message saying
that the assigned device would not work properly.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-10-04 18:49:18 +02:00
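
Sketched signatures for the reworked interfaces (approximated from the commit description; the exact parameter lists in QEMU may differ):

    typedef struct MemoryRegion MemoryRegion;
    typedef struct IOMMUMemoryRegion IOMMUMemoryRegion;
    typedef struct IOMMUNotifier IOMMUNotifier;
    typedef struct Error Error;
    typedef int IOMMUNotifierFlag;

    /* Was void and could exit(1) deep inside the IOMMU code;
     * now returns 0 on success, negative on failure. */
    int memory_region_register_iommu_notifier(MemoryRegion *mr,
                                              IOMMUNotifier *n, Error **errp);

    /* The notify_flag_changed() callback gains the same failure channel. */
    typedef int (*notify_flag_changed_fn)(IOMMUMemoryRegion *iommu,
                                          IOMMUNotifierFlag old_flags,
                                          IOMMUNotifierFlag new_flags,
                                          Error **errp);
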
Peter Maydell
deee6ff7b7 Fixes for TLB_BSWAP

Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190925' into staging

Fixes for TLB_BSWAP
Conversion of NOTDIRTY and ROM handling to cputlb
Followup cleanups to cputlb

# gpg: Signature made Wed 25 Sep 2019 19:41:17 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190925:
  cputlb: Pass retaddr to tb_check_watchpoint
  cputlb: Pass retaddr to tb_invalidate_phys_page_fast
  cputlb: Remove tb_invalidate_phys_page_range is_cpu_write_access
  cputlb: Remove cpu->mem_io_vaddr
  cputlb: Handle TLB_NOTDIRTY in probe_access
  cputlb: Merge and move memory_notdirty_write_{prepare,complete}
  cputlb: Partially inline memory_region_section_get_iotlb
  cputlb: Move NOTDIRTY handling from I/O path to TLB path
  cputlb: Move ROM handling from I/O path to TLB path
  exec: Adjust notdirty tracing
  cputlb: Introduce TLB_BSWAP
  cputlb: Split out load/store_memop
  cputlb: Use qemu_build_not_reached in load/store_helpers
  qemu/compiler.h: Add qemu_build_not_reached
  cputlb: Disable __always_inline__ without optimization
  exec: Use TARGET_PAGE_BITS_MIN for TLB flags

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-27 11:10:49 +01:00
Peter Maydell
1329132d28 vhost: fixes

Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging

vhost: fixes

Misc fixes related to memory region handling.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Wed 25 Sep 2019 15:28:23 BST
# gpg:                using RSA key 281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>" [full]
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>" [full]
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17  0970 C350 3912 AFBE 8E67
#      Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA  8A0D 281F 0DB8 D28D 5469

* remotes/mst/tags/for_upstream:
  vhost: Fix memory region section comparison
  memory: Provide an equality function for MemoryRegionSections
  memory: Align MemoryRegionSections fields

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-26 16:14:03 +01:00
Richard Henderson
707526ad86 cputlb: Merge and move memory_notdirty_write_{prepare,complete}
Since 9458a9a1df, all readers of the dirty bitmaps wait
for the rcu lock, which means that they wait until the end
of any executing TranslationBlock.

As a consequence, there is no need for the actual access
to happen in between the _prepare and _complete.  Therefore,
we can improve things by merging the two functions into
notdirty_write and dropping the NotDirtyInfo structure.

In addition, the only users of notdirty_write are in cputlb.c,
so move the merged function there.  Pass in the CPUIOTLBEntry
from which the ram_addr_t may be computed.

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25 10:44:29 -07:00
Richard Henderson
8f5db64153 cputlb: Partially inline memory_region_section_get_iotlb
There is only one caller, tlb_set_page_with_attrs.  We cannot
inline the entire function because the AddressSpaceDispatch
structure is private to exec.c, and cannot easily be moved to
include/exec/memory-internal.h.

Compute is_ram and is_romd once within tlb_set_page_with_attrs.
Fold the number of tests against these predicates.  Compute
cpu_physical_memory_is_clean outside of the tlb lock region.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25 10:44:27 -07:00
Richard Henderson
08565552f7 cputlb: Move NOTDIRTY handling from I/O path to TLB path
Pages that we want to track for NOTDIRTY are RAM.  We do not
really need to go through the I/O path to handle them.

Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25 10:43:22 -07:00
Richard Henderson
7b0d792ce1 cputlb: Move ROM handling from I/O path to TLB path
It does not require going through the whole I/O path
in order to discard a write.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25 10:42:51 -07:00
Richard Henderson
5b87b3e671 cputlb: Introduce TLB_BSWAP
Handle bswap on RAM directly in load/store_helper.  This fixes a
bug in the previous implementation, in that one cannot use the
I/O path for RAM.

Fixes: a26fc6f515
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25 10:36:43 -07:00
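
A sketch of the fast-path idea (the flag and MemOp values are illustrative, not QEMU's actual bit assignments):

    #include <stdint.h>

    enum { MO_BSWAP = 8 };        /* illustrative MemOp endianness bit */
    #define TLB_BSWAP (1 << 6)    /* illustrative TLB flag bit */

    /* In load/store_helper: a TLB entry tagged TLB_BSWAP flips the
     * endianness of the access in place, instead of bouncing RAM
     * accesses through the I/O path. */
    static inline int adjust_memop(uintptr_t tlb_addr, int op)
    {
        if (tlb_addr & TLB_BSWAP) {
            op ^= MO_BSWAP;
        }
        return op;
    }
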
Richard Henderson
1f6f2b34ad exec: Use TARGET_PAGE_BITS_MIN for TLB flags
These bits do not need to vary with the actual page size
used by the guest.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-25 10:11:15 -07:00
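
A sketch of the resulting layout (bit positions illustrative): the flags sit just below the smallest page size any configuration supports, so they never collide with address bits regardless of the guest's runtime page size.

    #define TARGET_PAGE_BITS_MIN 10   /* illustrative minimum */

    #define TLB_INVALID_MASK  (1 << (TARGET_PAGE_BITS_MIN - 1))
    #define TLB_NOTDIRTY      (1 << (TARGET_PAGE_BITS_MIN - 2))
    #define TLB_MMIO          (1 << (TARGET_PAGE_BITS_MIN - 3))
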
Wei Yang
1e7cf8c323 migration/postcopy: unsentmap is not necessary for postcopy
Commit f3f491fcd6 ('Postcopy: Maintain unsentmap') introduced
unsentmap to track not-yet-sent pages.

This is not necessary since:

    * unsentmap is a sub-set of bmap before postcopy start
    * unsentmap is the summation of bmap and unsentmap after canonicalizing

This patch just removes it.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190819061843.28642-3-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2019-09-25 15:51:19 +01:00
Dr. David Alan Gilbert
9366cf02e4 memory: Provide an equality function for MemoryRegionSections
Provide a comparison function that checks all the fields are the same.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-3-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-09-25 10:15:59 -04:00
Dr. David Alan Gilbert
44f85d3276 memory: Align MemoryRegionSections fields
MemoryRegionSection includes an Int128 'size' field; on some
platforms the compiler aligns it to a 128-bit boundary, leaving
8 bytes of dead space that can be filled with junk.

Move the size field to the top avoiding unnecessary alignment.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-2-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2019-09-25 10:15:59 -04:00
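
A sketch covering this commit and the preceding one (fields abridged and types approximated): the 16-byte 'size' leads the struct so no interior padding appears, and equality is checked field by field rather than by a memcmp over possibly-junk padding.

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } Int128;  /* stand-in for the real type */
    typedef struct MemoryRegion MemoryRegion;

    typedef struct MemoryRegionSection {
        Int128 size;                      /* widest-aligned field goes first */
        MemoryRegion *mr;
        uint64_t offset_within_region;
        uint64_t offset_within_address_space;
        bool readonly;
    } MemoryRegionSection;

    static inline bool memory_region_section_eq(const MemoryRegionSection *a,
                                                const MemoryRegionSection *b)
    {
        return a->mr == b->mr &&
               a->size.lo == b->size.lo && a->size.hi == b->size.hi &&
               a->offset_within_region == b->offset_within_region &&
               a->offset_within_address_space ==
                   b->offset_within_address_space &&
               a->readonly == b->readonly;
    }
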
Paolo Bonzini
7a3df11c2a memory: inline and optimize devend_memop
devend_memop can rely on the fact that the result is always either
0 or MO_BSWAP, corresponding respectively to host endianness and
the opposite.  Native (target) endianness in turn can be either
the host endianness, in which case MO_BSWAP is only returned for
host-opposite endianness, or the opposite, in which case 0 is only
returned for host endianness.

With this in mind, devend_memop can be compiled as a setcond+shift
for every target.  Do this and, while at it, move it to
include/exec/memory.h since !NEED_CPU_H files do not (and should not)
need it.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-09-16 13:47:37 +02:00
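
A sketch of the setcond+shift-friendly form described above (the enum values and MO_BSWAP are illustrative stand-ins for QEMU's device_endian and MemOp):

    enum device_endian {
        DEVICE_NATIVE_ENDIAN, DEVICE_BIG_ENDIAN, DEVICE_LITTLE_ENDIAN,
    };
    enum { MO_BSWAP = 8 };

    #ifdef HOST_WORDS_BIGENDIAN
    #define DEVICE_HOST_ENDIAN DEVICE_BIG_ENDIAN
    #else
    #define DEVICE_HOST_ENDIAN DEVICE_LITTLE_ENDIAN
    #endif

    static inline int devend_memop(enum device_endian end)
    {
    #if defined(HOST_WORDS_BIGENDIAN) != defined(TARGET_WORDS_BIGENDIAN)
        /* Target and host disagree: only host-endian devices avoid a swap. */
        return end == DEVICE_HOST_ENDIAN ? 0 : MO_BSWAP;
    #else
        /* Target and host agree: only host-opposite devices need a swap;
         * the XOR picks whichever of LITTLE/BIG is not the host order. */
        int non_host = DEVICE_LITTLE_ENDIAN ^ DEVICE_BIG_ENDIAN
                       ^ DEVICE_HOST_ENDIAN;
        return end == non_host ? MO_BSWAP : 0;
    #endif
    }
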
Peter Maydell
9de65783e1 Allow page table bit to swap endianness.

Merge remote-tracking branch 'remotes/rth/tags/pull-tcg-20190903' into staging

Allow page table bit to swap endianness.
Reorganize watchpoints out of i/o path.
Return host address from probe_write / probe_access.

# gpg: Signature made Tue 03 Sep 2019 16:47:50 BST
# gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
# gpg:                issuer "richard.henderson@linaro.org"
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F

* remotes/rth/tags/pull-tcg-20190903: (36 commits)
  tcg: Factor out probe_write() logic into probe_access()
  tcg: Make probe_write() return a pointer to the host page
  s390x/tcg: Pass a size to probe_write() in do_csst()
  hppa/tcg: Call probe_write() also for CONFIG_USER_ONLY
  mips/tcg: Call probe_write() for CONFIG_USER_ONLY as well
  tcg: Enforce single page access in probe_write()
  tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
  s390x/tcg: Fix length calculation in probe_write_access()
  s390x/tcg: Use guest_addr_valid() instead of h2g_valid() in probe_write_access()
  tcg: Check for watchpoints in probe_write()
  cputlb: Handle watchpoints via TLB_WATCHPOINT
  cputlb: Remove double-alignment in store_helper
  cputlb: Fix size operand for tlb_fill on unaligned store
  exec: Factor out cpu_watchpoint_address_matches
  cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
  exec: Factor out core logic of check_watchpoint()
  exec: Move user-only watchpoint stubs inline
  target/sparc: sun4u Invert Endian TTE bit
  target/sparc: Add TLB entry with attributes
  cputlb: Byte swap memory transaction attribute
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-04 16:29:18 +01:00
David Hildenbrand
c25c283df0 tcg: Factor out probe_write() logic into probe_access()
Let's also allow probing other access types.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190830100959.26615-3-david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:34:18 -07:00
David Hildenbrand
fef39ccd56 tcg: Make probe_write() return a pointer to the host page
... similar to tlb_vaddr_to_host(); however, allow access to the host
page except when TLB_NOTDIRTY or TLB_MMIO is set.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190830100959.26615-2-david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:34:18 -07:00
David Hildenbrand
59e96ac6cb tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
Factor it out into common code. Similar to the !CONFIG_USER_ONLY variant,
let's not allow crossing page boundaries.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20190826075112.25637-4-david@redhat.com>
[rth: Move cpu & cc variables inside if block.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:34:18 -07:00
Richard Henderson
50b107c5d6 cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside
of the I/O subsystem, is fundamentally broken.  We do not have
the helper return address with which we can unwind guest state.

Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT.
Move the call to cpu_check_watchpoint into the cputlb helpers
where we do have the helper return address.

This allows watchpoints on RAM to bypass the full i/o access path.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:39 -07:00
Richard Henderson
30d7e098d5 cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
We had two different mechanisms to force a recheck of the tlb.

Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit
that would immediately set TLB_INVALID_MASK, which automatically
means that a second check of the tlb entry fails.

We can use the same mechanism to handle small pages.
Conserve TLB_* bits by removing TLB_RECHECK.

Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:39 -07:00
Tony Nguyen
a26fc6f515 cputlb: Byte swap memory transaction attribute
Notice the new byte-swap attribute and force the transaction through
the memory slow path.

Required by architectures that can invert endianness of memory
transaction, e.g. SPARC64 has the Invert Endian TTE bit.

Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <2a10a1f1c00a894af1212c8f68ef09c2966023c1.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:39 -07:00
Tony Nguyen
be5c4787e9 cputlb: Replace size and endian operands for MemOp
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <755b7104410956b743e1f1e9c34ab87db113360f.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:39 -07:00
Tony Nguyen
d5d680cacc memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and
handle_bswap into the former.

Call memory_region_dispatch_{read|write} with endianness encoded into
the "MemOp op" operand.

This patch does not change any behaviour, as
memory_region_dispatch_{read|write} does not yet handle the endianness.

Once it does handle endianness, callers with byte swaps can collapse
them into adjust_endianness.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Message-Id: <8066ab3eb037c0388dfadfe53c5118429dd1de3a.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:39 -07:00
Tony Nguyen
e67c904668 memory: Access MemoryRegion with MemOp
Convert memory_region_dispatch_{read|write} operand "unsigned size"
into a "MemOp op".

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <1dd82df5801866743f838f1d046475115a1d32da.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:39 -07:00
Tony Nguyen
66b9b24375 memory: Introduce size_memop
The memory_region_dispatch_{read|write} operand "unsigned size" is
being converted into a "MemOp op".

Introduce no-op size_memop to aid preparatory conversion of
interfaces.

Once interfaces are converted, size_memop will be implemented to
return a MemOp from size in bytes.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <35b8ee74020f67cf40848fb7d5f127cf96c851d6.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:38 -07:00
Tony Nguyen
14776ab5a1 tcg: TCGMemOp is now accelerator independent MemOp
Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <81d9cd7d7f5aaadfa772d6c48ecee834e9cf7882.1566466906.git.tony.nguyen@bt.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2019-09-03 08:30:38 -07:00
Alex Bennée
1eb21c428b include/exec/cpu-defs.h: fix typo
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190828165307.18321-10-alex.bennee@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03 16:20:35 +01:00
Eric Auger
8ccb563849 memory: Remove unused memory_region_iommu_replay_all()
memory_region_iommu_replay_all is not used. Remove it.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-id: 20190822172350.12008-2-eric.auger@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-09-03 16:20:34 +01:00
Peter Maydell
f3b8f18ebf Monitor patches for 2019-08-21

Merge remote-tracking branch 'remotes/armbru/tags/pull-monitor-2019-08-21' into staging

Monitor patches for 2019-08-21

# gpg: Signature made Wed 21 Aug 2019 16:35:07 BST
# gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
# gpg:                issuer "armbru@redhat.com"
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
# gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867  4E5F 3870 B400 EB91 8653

* remotes/armbru/tags/pull-monitor-2019-08-21:
  monitor/qmp: Update comment for commit 4eaca8de26
  qdev: Collect HMP handlers command handlers in qdev-monitor.c
  qapi: Move query-target from misc.json to machine.json
  hw/core: Move cpu.c, cpu.h from qom/ to hw/core/

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2019-08-22 10:31:21 +01:00
Markus Armbruster
2e5b09fd0e hw/core: Move cpu.c, cpu.h from qom/ to hw/core/
Suggested-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20190709152053.16670-2-armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h
in comments replaced]
2019-08-21 13:24:01 +02:00
Pavel Dovgalyuk
9e9b10c649 icount: remove unnecessary gen_io_end calls
The prior patch resets the can_do_io flag at TB entry, so there is no
need to reset it at the end of the block.
This patch removes the now-redundant gen_io_end calls.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Message-Id: <156404429499.18669.13404064982854123855.stgit@pasha-Precision-3630-Tower>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
2019-08-20 17:26:22 +02:00
Pavel Dovgalyuk
ba3e792669 icount: clean up cpu_can_io at the entry to the block
Most I/O instructions can be executed only at the end of the block in
icount mode, so the translator can set the cpu_can_io flag when
translating the last instruction.
But when blocks are chained, this flag is not reset and may remain set
at the beginning of the next block.
This patch resets the flag at the entry of every translation block,
making I/O operations impossible by default.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>

--

v2 changes:
 - reset can_do_io at the start of every TB (suggested by Paolo Bonzini)
Message-Id: <156404428943.18669.15747009371169578935.stgit@pasha-Precision-3630-Tower>

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-08-20 17:26:22 +02:00
Paolo Bonzini
9458a9a1df memory: fix race between TCG and accesses to dirty bitmap
There is a race between TCG and accesses to the dirty log:

      vCPU thread                  reader thread
      -----------------------      -----------------------
      TLB check -> slow path
        notdirty_mem_write
          write to RAM
          set dirty flag
                                   clear dirty flag
      TLB check -> fast path
                                   read memory
        write to RAM

Fortunately, in order to fix it, no change is required to the
vCPU thread.  However, the reader thread must delay the read after
the vCPU thread has finished the write.  This can be approximated
conservatively by run_on_cpu, which waits for the end of the current
translation block.

A similar technique is used by KVM, which has to do a synchronous TLB
flush after doing a test-and-clear of the dirty-page flags.

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2019-08-20 17:26:20 +02:00
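
A sketch of the reader-side fix using QEMU's run_on_cpu API (the empty-callback name is illustrative): queuing a no-op on every vCPU returns only after each vCPU has left its current translation block, so any in-flight write has landed before the bitmap is read.

    static void do_nothing(CPUState *cpu, run_on_cpu_data d)
    {
    }

    static void drain_in_flight_tbs(void)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            run_on_cpu(cpu, do_nothing, RUN_ON_CPU_NULL);
        }
    }
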
tony.nguyen@bt.com
52bf9771fd configure: Define target access alignment in configure
This patch moves the definition of target access alignment from
target/foo/cpu.h to configure.

Suggested in Richard Henderson's reply to "[PATCH 1/4] tcg: TCGMemOp is now
accelerator independent MemOp"

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Message-Id: <11e818d38ebc40e986cfa62dd7d0afdc@tpw09926dag18e.domain1.systemhost.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: tony.nguyen@bt.com <tony.nguyen@bt.com>
2019-08-20 17:26:19 +02:00
Markus Armbruster
dc5e9ac716 Include qemu/queue.h slightly less
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190812052359.30071-20-armbru@redhat.com>
2019-08-16 13:31:52 +02:00
Markus Armbruster
1b53ecd909 memory: Fix type of IOMMUMemoryRegionClass member @parent_class
TYPE_IOMMU_MEMORY_REGION is a direct subtype of TYPE_MEMORY_REGION.
Its instance struct is IOMMUMemoryRegion, and its first member is a
MemoryRegion.  Correct.  Its class struct is IOMMUMemoryRegionClass,
and its first member is a DeviceClass.  Wrong.  Messed up when commit
1221a47467 introduced the QOM type.  It even included hw/qdev-core.h
just for that.

TYPE_MEMORY_REGION doesn't bother to define a class struct.  This is
fine, it simply defaults to its super-type TYPE_OBJECT's class struct
ObjectClass.  Changing IOMMUMemoryRegionClass's first member's type to
ObjectClass would be a minimal fix, if a bit brittle: if
TYPE_MEMORY_REGION ever acquired its own class struct, we'd have to
update IOMMUMemoryRegionClass to use it.

Fix it the clean and robust way instead: give TYPE_MEMORY_REGION its
own class struct MemoryRegionClass now, and use it for
IOMMUMemoryRegionClass's first member.

Revert the include of hw/qdev-core.h, and fix the few files that have
come to rely on it.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20190812052359.30071-5-armbru@redhat.com>
2019-08-16 13:31:51 +02:00
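
The corrected hierarchy, sketched (callback list elided):

    typedef struct MemoryRegionClass {
        ObjectClass parent_class;          /* new, minimal class struct */
    } MemoryRegionClass;

    typedef struct IOMMUMemoryRegionClass {
        MemoryRegionClass parent_class;    /* was DeviceClass: wrong parent */
        /* ... translate, notify_flag_changed, ... */
    } IOMMUMemoryRegionClass;
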
Markus Armbruster
ec150c7e09 include: Make headers more self-contained
Back in 2016, we discussed[1] rules for headers, and these were
generally liked:

1. Have a carefully curated header that's included everywhere first.  We
   got that already thanks to Peter: osdep.h.

2. Headers should normally include everything they need beyond osdep.h.
   If exceptions are needed for some reason, they must be documented in
   the header.  If all that's needed from a header is typedefs, put
   those into qemu/typedefs.h instead of including the header.

3. Cyclic inclusion is forbidden.

This patch gets include/ closer to obeying 2.

It's actually extracted from my "[RFC] Baby steps towards saner
headers" series[2], which demonstrates a possible path towards
checking 2 automatically.  It passes the RFC test there.

[1] Message-ID: <87h9g8j57d.fsf@blackfin.pond.sub.org>
    https://lists.nongnu.org/archive/html/qemu-devel/2016-03/msg03345.html
[2] Message-Id: <20190711122827.18970-1-armbru@redhat.com>
    https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg02715.html

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-2-armbru@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
2019-08-16 13:31:51 +02:00
Rémi Denis-Courmont
0acd4ab849 linux-user: check valid address in access_ok()
Fix a crash with LTP testsuite and aarch64:

  tst_test.c:1015: INFO: Timeout per run is 0h 05m 00s
  qemu-aarch64: .../qemu/accel/tcg/translate-all.c:2522: page_check_range: Assertion `start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)' failed.
  qemu:handle_cpu_signal received signal outside vCPU context @ pc=0x60001554

page_check_range() should never be called with an address outside the
guest address space. This patch adds a guest_addr_valid() check in
access_ok() so that page_check_range() is only called with a valid address.

Fixes: f6768aa1b4 ("target/arm: fix AArch64 virtual address space size")
Signed-off-by: Rémi Denis-Courmont <remi@remlab.net>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20190704084115.24713-1-lvivier@redhat.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2019-07-18 13:57:28 +02:00
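
A sketch of the guarded check (the signature is approximated from the commit description, not copied from the final linux-user code):

    static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
    {
        /* Reject addresses outside the guest address space up front,
         * so page_check_range() is never handed an invalid range. */
        if (!guest_addr_valid(addr) ||
            (size != 0 && !guest_addr_valid(addr + size - 1))) {
            return false;
        }
        return page_check_range((target_ulong)addr, size, type) == 0;
    }
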
Peter Xu
002cad6b16 migration: Split log_clear() into smaller chunks
Currently we are doing log_clear() right after log_sync() which mostly
keeps the old behavior when log_clear() was still part of log_sync().

This patch tries to further optimize the migration log_clear() code
path to split huge log_clear()s into smaller chunks.

We do this by splitting the whole guest memory region into memory
chunks, whose size is decided by MigrationState.clear_bitmap_shift (an
example will be given below).  With that, we don't do the dirty bitmap
clear operation on the remote node (e.g., KVM) when we fetch the dirty
bitmap; instead we explicitly clear the dirty bitmap for the memory
chunk the first time we send a page in that chunk.

Here comes an example.

Assuming the guest has 64G of memory, before this patch the KVM
ioctl KVM_CLEAR_DIRTY_LOG will be a single call covering 64G of memory.
After the patch, assuming a clear bitmap shift of 18, the memory chunk
size on x86_64 will be 1UL<<18 * 4K = 1GB.  Then instead of sending one
big 64G ioctl, we'll send 64 small ioctls, each covering 1G of guest
memory.  For each of the 64 small ioctls, we'll only send it if any
page in that small chunk is about to be sent right away.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190603065056.25211-12-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2019-07-15 15:39:03 +02:00
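
The arithmetic from the example, checked in a few lines of standalone C:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t page_size = 4096;                       /* x86_64, 4K pages */
        uint64_t shift     = 18;                         /* clear_bitmap_shift */
        uint64_t chunk     = (1ULL << shift) * page_size;
        uint64_t guest_mem = 64ULL << 30;                /* 64G guest */

        printf("chunk = %llu GiB, ioctls = %llu\n",
               (unsigned long long)(chunk >> 30),        /* 1 */
               (unsigned long long)(guest_mem / chunk)); /* 64 */
        return 0;
    }
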
Peter Xu
077874e01f memory: Introduce memory listener hook log_clear()
Introduce a new memory region listener hook log_clear() to allow the
listeners to hook onto the points where the dirty bitmap is cleared by
the bitmap users.

Previously log_sync() contains two operations:

  - dirty bitmap collection, and,
  - dirty bitmap clear on remote site.

Let's take KVM as an example - log_sync() for KVM will first copy the
kernel dirty bitmap to userspace, and at the same time we'll clear the
dirty bitmap there along with re-protecting all the guest pages again.

We add this new log_clear() interface only to split the old log_sync()
into two separate procedures:

  - use log_sync() for the dirty bitmap collection only, and,
  - use log_clear() to clear the remote dirty bitmap.

With the new interface, memory listener users will still be able
to decide how to implement the log synchronization procedure, e.g.,
they can still provide only the log_sync() method and put both
procedures within it (that's how the old KVM works before
KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is introduced).  However, with the
new interface the memory listener users start to have a chance to
postpone the log clear operation explicitly if the module supports it.
That can really benefit users like KVM, at least for host kernels that
support KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2.

There are three places that can clear dirty bits in any one of the
dirty bitmap in the ram_list.dirty_memory[3] array:

        cpu_physical_memory_snapshot_and_clear_dirty
        cpu_physical_memory_test_and_clear_dirty
        cpu_physical_memory_sync_dirty_bitmap

Currently we hook directly into each of the functions to notify about
the log_clear().

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-7-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2019-07-15 15:39:02 +02:00
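
A sketch of the hook pair on the listener (other members elided; the signatures follow the commit description):

    typedef struct MemoryListener MemoryListener;
    typedef struct MemoryRegionSection MemoryRegionSection;

    struct MemoryListener {
        /* ... other hooks elided ... */

        /* collect the dirty bitmap from the remote side */
        void (*log_sync)(MemoryListener *listener,
                         MemoryRegionSection *section);

        /* new: clear dirty bits on the remote side, decoupled from
         * collection so users may postpone it per memory chunk */
        void (*log_clear)(MemoryListener *listener,
                          MemoryRegionSection *section);
    };
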
Peter Xu
5dea4079ad memory: Pass mr into snapshot_and_clear_dirty
Also change its second parameter to be the relative offset
within the memory region. This will be used in follow-up patches.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20190603065056.25211-6-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2019-07-15 15:39:02 +02:00
Peter Xu
ae7a2bca8a memory: Don't set migration bitmap when without migration
Similar to 9460dee4b2 ("memory: do not touch code dirty bitmap unless
TCG is enabled", 2015-06-05) but for the migration bitmap - we can
skip the MIGRATION bitmap update if migration not enabled.

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <20190603065056.25211-4-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
2019-07-15 15:39:02 +02:00