Commit Graph

482 Commits

Jason Baron
ddb97f1deb memory: add -machine dump-guest-core=on|off
Add a new '[,dump-guest-core=on|off]' option to the '-machine' option. When
'dump-guest-core=off' is specified, guest memory is omitted from the core dump.
The default behavior continues to be to include guest memory when a core dump is
triggered. In my testing, this brought the core dump size down from 384MB to 6MB
on a 2GB guest.

Is anything additional required to preserve this setting for migration or
savevm? I don't believe so.

Changelog:
v3:
    Eliminate globals as per Anthony's suggestion
    set no dump from qemu_ram_remap() as well
v2:
    move the option from -m to -machine, rename option dump -> dump-guest-core

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-08-16 13:41:15 -05:00
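
A minimal standalone sketch of the mechanism behind dump-guest-core=off, using the
Linux-specific madvise(MADV_DONTDUMP) call on an anonymous mapping that stands in
for guest RAM (QEMU wires this through its own RAM allocation paths; the call site
here is illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 16 * 1024 * 1024;   /* 16 MB stand-in for guest RAM */
        void *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (ram == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }
    #ifdef MADV_DONTDUMP
        /* The region stays fully usable but is excluded from core dumps,
         * which is what shrinks the dump from hundreds of MB to a few. */
        if (madvise(ram, len, MADV_DONTDUMP) != 0) {
            perror("madvise");
        }
    #endif
        return EXIT_SUCCESS;
    }
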
Igor Mitsyanko
5fda043f9c exec.c: fix dirty bitmap reallocation
For each newly created RAM block, the dirty bitmap is reallocated with g_realloc,
which makes no promises about the initial content of the new extra data in the
returned buffer. In theory, we initialize this new data with a
cpu_physical_memory_set_dirty_range() call. The problem is that
cpu_physical_memory_set_dirty_range() has a side effect of incrementing the
ram_list.dirty_pages variable, but only for pages which are not already dirty. And
page "cleanliness" is determined using the same not-yet-initialized dirty bitmap
we've just reallocated. This results in an inconsistency between the real dirty
page count and the value in ram_list.dirty_pages, which in turn could (and will)
result in errors during VM migration.
Zero initialize new dirty bitmap bytes to fix this problem.

Signed-off-by: Igor Mitsyanko <i.mitsyanko@samsung.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-08-11 12:23:46 +00:00
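
A minimal sketch of the bug and fix described above, using a byte-per-page bitmap;
bitmap_grow, set_dirty and the dirty_pages counter are hypothetical stand-ins for
QEMU's internals:

    #include <glib.h>
    #include <string.h>

    static long dirty_pages;   /* must stay consistent with the bitmap */

    /* Grow a byte-per-page dirty bitmap from old_n to new_n entries.
     * g_realloc() makes no promise about the content of the new tail,
     * so zero it before anything reads it. */
    static guint8 *bitmap_grow(guint8 *bitmap, gsize old_n, gsize new_n)
    {
        bitmap = g_realloc(bitmap, new_n);
        memset(bitmap + old_n, 0, new_n - old_n);   /* the actual fix */
        return bitmap;
    }

    static void set_dirty(guint8 *bitmap, gsize page)
    {
        if (!bitmap[page]) {    /* reads the freshly grown tail */
            bitmap[page] = 1;
            dirty_pages++;      /* in sync only if the tail was zeroed */
        }
    }

    int main(void)
    {
        guint8 *bm = bitmap_grow(NULL, 0, 8);
        set_dirty(bm, 3);
        g_free(bm);
        return dirty_pages == 1 ? 0 : 1;
    }
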
Peter Maydell
c308efe63a exec.c: Remove out of date comment
Remove an out of date comment: this comment used to be attached to
cpu_register_physical_memory_log(), before commit 0f0cb164 accidentally
inserted a couple of other functions between the comment and its function.
It is in any case obsolete since (a) the function arguments it refers
to have been replaced with a single MemoryRegionSection* argument and
(b) the inability to handle regions whose offset_within_address_space
and offset_within_region aren't equally aligned was fixed as part of
the rewrite of this code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 14:25:22 +01:00
Tyler Hall
69b67646bc exec.c: Use subpages for large unaligned mappings
Registering a multi-page memory region that is non-page-aligned results
in a subpage from the start to the page boundary, some number of full
pages, and possibly another subpage from the last page boundary to the
end. The full pages will have a value for offset_within_region that is
not a multiple of TARGET_PAGE_SIZE. Accesses through softmmu are unable
to handle this and will segfault.

Handling full pages through subpages is not optimal, but only
non-page-aligned mappings take the penalty.

Signed-off-by: Tyler Hall <tylerwhall@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 14:25:22 +01:00
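
The head/body/tail split this relies on, as a standalone sketch; TARGET_PAGE_SIZE,
reg_subpage and reg_pages are illustrative stand-ins for the registration
functions in exec.c:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096ULL
    #define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

    static void reg_subpage(uint64_t start, uint64_t len)
    {
        printf("subpage: 0x%" PRIx64 " + 0x%" PRIx64 "\n", start, len);
    }

    static void reg_pages(uint64_t start, uint64_t len)
    {
        printf("pages:   0x%" PRIx64 " + 0x%" PRIx64 "\n", start, len);
    }

    /* Split [base, base+size) into an unaligned subpage head, a run of
     * aligned full pages, and an unaligned subpage tail. */
    static void split_region(uint64_t base, uint64_t size)
    {
        uint64_t end = base + size;
        uint64_t head_end = (base + TARGET_PAGE_SIZE - 1) & TARGET_PAGE_MASK;
        uint64_t tail_start = end & TARGET_PAGE_MASK;

        if (head_end >= end) {         /* fits inside a single page */
            reg_subpage(base, size);
            return;
        }
        if (base != head_end) {
            reg_subpage(base, head_end - base);          /* unaligned head */
        }
        if (head_end != tail_start) {
            reg_pages(head_end, tail_start - head_end);  /* aligned body */
        }
        if (tail_start != end) {
            reg_subpage(tail_start, end - tail_start);   /* unaligned tail */
        }
    }

    int main(void)
    {
        /* head 0x1100-0x1fff, pages 0x2000-0x3fff, tail 0x4000-0x40ff */
        split_region(0x1100, 0x3000);
        return 0;
    }
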
Tyler Hall
adb2a9b5d4 exec.c: Fix off-by-one error in register_subpage
subpage_register() expects "end" to be the last byte in the mapping.
Registering a non-page-aligned memory region that extends up to or
beyond a page boundary causes subpage_register() to silently fail
through the (end >= PAGE_SIZE) check.

This bug does not cause noticeable problems for mappings that do not
extend to a page boundary, though they do register an extra byte.

Signed-off-by: Tyler Hall <tylerwhall@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-08-03 14:25:22 +01:00
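
A standalone sketch of the inclusive-end convention at the heart of this
off-by-one; subpage_register here is a toy reduction of the real function:

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096

    /* "end" is the last byte of the mapping *inclusive*.  A mapping that
     * runs exactly to the page boundary has end == TARGET_PAGE_SIZE - 1;
     * computing it as start + size (off by one) yields TARGET_PAGE_SIZE
     * and is silently rejected by the range check. */
    static int subpage_register(uint32_t start, uint32_t end)
    {
        if (end >= TARGET_PAGE_SIZE) {   /* the check the buggy caller hit */
            return -1;
        }
        /* ... register [start, end] ... */
        return 0;
    }

    int main(void)
    {
        uint32_t start = 512, size = TARGET_PAGE_SIZE - 512;
        assert(subpage_register(start, start + size) == -1);     /* buggy */
        assert(subpage_register(start, start + size - 1) == 0);  /* fixed */
        return 0;
    }
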
Anthony Liguori
09f06a6c60 Merge remote-tracking branch 'qemu-kvm/uq/master' into staging
* qemu-kvm/uq/master:
  virtio: move common irqfd handling out of virtio-pci
  virtio: move common ioeventfd handling out of virtio-pci
  event_notifier: add event_notifier_set_handler
  memory: pass EventNotifier, not eventfd
  ivshmem: wrap ivshmem_del_eventfd loops with transaction
  ivshmem: use EventNotifier and memory API
  event_notifier: add event_notifier_init_fd
  event_notifier: remove event_notifier_test
  event_notifier: add event_notifier_set
  apic: Defer interrupt updates to VCPU thread
  apic: Reevaluate pending interrupts on LVT_LINT0 changes
  apic: Resolve potential endless loop around apic_update_irq
  kvm: expose tsc deadline timer feature to guest
  kvm_pv_eoi: add flag support
  kvm: Don't abort on kvm_irqchip_add_msi_route()
2012-07-18 14:44:43 -05:00
Paolo Bonzini
753d5e14c4 memory: pass EventNotifier, not eventfd
Under Win32, EventNotifiers will not have event_notifier_get_fd, so we
cannot call it in common code such as hw/virtio-pci.c.  Pass a pointer to
the notifier, and only retrieve the file descriptor in kvm-specific code.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-07-12 14:08:10 +03:00
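
The portability pattern described above, reduced to a sketch; the struct layout
and helper names are illustrative, not QEMU's exact definitions:

    #include <stdio.h>

    typedef struct EventNotifier {
    #ifdef _WIN32
        void *event;            /* a HANDLE on Windows: no fd exists here */
    #else
        int fd;                 /* an eventfd on POSIX hosts */
    #endif
    } EventNotifier;

    /* Common code compiles everywhere because it never touches the fd. */
    static void virtio_arm_notifier(EventNotifier *e)
    {
        printf("notifier armed at %p\n", (void *)e);
    }

    #ifndef _WIN32
    /* kvm-specific code is POSIX-only, so extracting the fd here is safe. */
    static int event_notifier_get_fd(const EventNotifier *e)
    {
        return e->fd;
    }
    #endif

    int main(void)
    {
        EventNotifier e = {0};
        virtio_arm_notifier(&e);
    #ifndef _WIN32
        printf("fd = %d\n", event_notifier_get_fd(&e));
    #endif
        return 0;
    }
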
Christian Borntraeger
fdec991857 s390: autodetect map private
By default qemu will use MAP_PRIVATE for guest pages. This write-protects pages
and thus breaks on s390 systems that don't support this feature. Therefore qemu
has a hack to always use MAP_SHARED for s390. But MAP_SHARED has other problems
(no dirty page tracking, a lot more swap overhead, etc.). Newer systems allow
the distinction via KVM_CAP_S390_COW. With this feature qemu can use the
standard allocation if available, otherwise it will fall back to the old s390
hack.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jens Freimann <jfrei@linux.vnet.ibm.com>
Acked-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-07-10 18:27:33 +02:00
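
A sketch of the autodetection logic, assuming a hypothetical kvm_has_s390_cow()
helper that stands in for querying KVM_CAP_S390_COW:

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical stand-in for querying KVM_CAP_S390_COW via the KVM API. */
    static bool kvm_has_s390_cow(void)
    {
        return false;   /* assume an older kernel for this sketch */
    }

    static void *alloc_guest_ram(size_t size)
    {
        int flags = MAP_ANONYMOUS | MAP_PRIVATE;
    #if defined(__s390__) || defined(__s390x__)
        if (!kvm_has_s390_cow()) {
            /* legacy hack: shared mappings avoid the write-protect
             * behaviour that old s390 systems cannot handle */
            flags = MAP_ANONYMOUS | MAP_SHARED;
        }
    #else
        (void)kvm_has_s390_cow;   /* only consulted on s390 hosts */
    #endif
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, flags, -1, 0);
        return p == MAP_FAILED ? NULL : p;
    }

    int main(void)
    {
        return alloc_guest_ram(1 << 20) ? 0 : 1;
    }
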
Juan Quintela
1720aeee72 dirty bitmap: abstract its use
Always use accessors to read/set the dirty bitmap.

Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-06-29 13:31:07 +02:00
Juan Quintela
d24981d37e Only TCG needs TLB handling
Refactor the code that is only needed for tcg to an static function.
Call that only when tcg is enabled.  We can't refactor to a dummy
function in the kvm case, as qemu can be compiled at the same time
with tcg and kvm.

Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-06-29 13:27:28 +02:00
Blue Swirl
5726c27fa9 qemu-log: move logging to qemu-log.c
Move logging functions from exec.c to qemu-log.c,
compile it only once.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-06-21 18:45:16 +00:00
Anthony Liguori
09e5ab6360 qdev: Use wrapper for qdev_get_path
This makes it easier to remove it from BusInfo.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AF: Drop now unnecessary NULL initialization in scsibus_get_dev_path()]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2012-06-18 15:14:38 +02:00
Anthony Liguori
3525c42fd3 Merge remote-tracking branch 'stefanha/trivial-patches' into staging
* stefanha/trivial-patches:
  configure: report missing libraries for virtfs
  trace/simple.c: fix deprecated glib2 interface
  Clarify comments of tb_invalidate_phys_[page_]range
2012-06-11 12:15:51 -05:00
Max Filippov
9d70c4b7b8 exec: fix TB invalidation after breakpoint insertion/deletion
tb_invalidate_phys_addr has to be called with the exact physical address of
the breakpoint we add/remove, not just the page's base address.
Otherwise we easily fail to flush the right TB.

This breakage was introduced by commit f3705d5329 "memory: make
phys_page_find() return an unadjusted section".

This appeared to work for some guest architectures because their
cpu_get_phys_page_debug implementation returns the full translated physical
address, not just the base of the TARGET_PAGE_SIZE-sized page.

Reported-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-06-09 10:49:19 +00:00
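
A sketch of the fix, with hypothetical stubs in place of the real translation
and invalidation functions; the point is that the sub-page offset of pc must be
recombined with the translated page base:

    #include <stdint.h>

    #define TARGET_PAGE_SIZE 4096ULL
    #define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

    /* Stub: returns only the page base, as the debug helpers may do. */
    static uint64_t get_phys_page_debug(uint64_t vaddr)
    {
        return vaddr & TARGET_PAGE_MASK;   /* identity map for the sketch */
    }

    /* Stub: would flush the TB covering this exact physical address. */
    static void tb_invalidate_phys_addr(uint64_t paddr)
    {
        (void)paddr;
    }

    static void breakpoint_invalidate(uint64_t pc)
    {
        /* Put the sub-page offset back onto the translated page base,
         * so the TB containing the breakpoint is reliably flushed. */
        uint64_t phys = get_phys_page_debug(pc & TARGET_PAGE_MASK)
                        | (pc & ~TARGET_PAGE_MASK);
        tb_invalidate_phys_addr(phys);
    }

    int main(void)
    {
        breakpoint_invalidate(0x40001234);
        return 0;
    }
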
Jan Kiszka
8e0fdce32d Clarify comments of tb_invalidate_phys_[page_]range
They could suggest that all TBs of the page containing the range would
be invalidated.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-06-08 09:32:26 +01:00
Wen Congyang
76f3553883 Add API to check whether a physical address is I/O address
This API will be used in the following patch.

Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
2012-06-04 13:49:33 -03:00
Alexander Graf
77a8f1a512 linux-user: Fix stale tbs after mmap
If we execute linux-user code that does the following:

  * A = mmap()
  * execute code in A
  * munmap(A)
  * B = mmap(), but mmap returns the same address as A
  * execute code in B

we end up executing a stale cached TB that contains translated code
from A, while we want new code from B.

This patch adds a TB flush for mmap'ed regions, before we return them,
avoiding the whole issue. It also adds a flush for munmap, so that we
don't execute stale TBs instead of getting a segfault.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Riku Voipio <riku.voipio@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-19 15:49:40 +00:00
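
A toy translation cache illustrating the staleness problem and the fix; all
names are illustrative, and the real patch flushes QEMU's TB structures rather
than a flat array:

    #include <stdint.h>
    #include <string.h>

    #define TB_CACHE_SIZE 1024

    /* TBs are looked up by guest address, so if munmap()+mmap() recycles
     * an address without a flush, the cache serves code translated from
     * the *old* mapping. */
    typedef struct { uint64_t guest_addr; void *translated; } TB;
    static TB tb_cache[TB_CACHE_SIZE];

    static void tb_flush_range(uint64_t start, uint64_t len)
    {
        for (size_t i = 0; i < TB_CACHE_SIZE; i++) {
            if (tb_cache[i].guest_addr - start < len) {
                memset(&tb_cache[i], 0, sizeof(tb_cache[i]));
            }
        }
    }

    /* The fix: invalidate cached translations for the affected range
     * both when a mapping is created and when one is torn down. */
    static void *target_mmap(uint64_t addr, uint64_t len)
    {
        /* ... perform the host mmap ... */
        tb_flush_range(addr, len);
        return (void *)(uintptr_t)addr;
    }

    static int target_munmap(uint64_t addr, uint64_t len)
    {
        tb_flush_range(addr, len);
        /* ... perform the host munmap ... */
        return 0;
    }

    int main(void)
    {
        void *p = target_mmap(0x10000, 0x2000);
        return target_munmap((uintptr_t)p, 0x2000);
    }
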
Blue Swirl
fd06257351 memory: move functions is_romd and section_addr to memory API
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:07 +00:00
Blue Swirl
cc5bea608d cputlb: prepare private memory API for public consumption
Fold is_ram_rom and is_ram_rom_romd() into callers.

Change is_romd() and section_addr() to take MemoryRegion
instead of MemoryRegionSection for consistency and
use memory_region_ prefix.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:05 +00:00
Blue Swirl
0cac1b66c8 cputlb: move TLB handling to a separate file
Move TLB handling and softmmu code load helpers to cputlb.c,
compile only for softmmu targets.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:04 +00:00
Blue Swirl
e554861766 exec: prepare for splitting
Make s_cputlb_empty_entry 'const'.

Rename tlb_flush_jmp_cache() to tb_flush_jmp_cache().

Refactor code to add cpu_tlb_reset_dirty_all(),
memory_region_section_get_iotlb() and
memory_region_is_unassigned().

Remove unused cpu_tlb_update_dirty().

Fix coding style in areas to be moved.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-05-01 10:45:02 +00:00
Stefan Weil
8efe0ca83e w64: Use uintptr_t in exec.c
Replace all type casts to 'long' or 'unsigned long' with 'intptr_t' or 'uintptr_t'.

For type casts which are only used to extract the lower bits of an address
or to modify those bits, signedness does not matter. There I always use 'uintptr_t'.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:17 +02:00
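
Why the casts matter on w64, which is an LLP64 platform: long stays 32 bits
while pointers are 64 bits, so casting an address through long truncates it.
A small demonstration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(long)      = %zu\n", sizeof(long));      /* 4 on w64 */
        printf("sizeof(void *)    = %zu\n", sizeof(void *));    /* 8 on w64 */
        printf("sizeof(uintptr_t) = %zu\n", sizeof(uintptr_t)); /* == pointer */

        int x;
        uintptr_t bits = (uintptr_t)&x;   /* safe on every host */
        uintptr_t low  = bits & 0xfff;    /* extracting low bits: signedness
                                             of the cast is irrelevant */
        (void)low;
        return 0;
    }
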
Stefan Weil
6840981dfb w64: Use larger alignment for section with generated code
The MinGW-w64 compiler allows __attribute__((aligned (32))).

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:16 +02:00
Stefan Weil
c6d506742f w64: Fix data types in cpu-all.h, exec.c
w64 needs uintptr_t instead of unsigned long.
For other hosts, nothing changes.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
2012-04-15 21:25:16 +02:00
Max Filippov
1e7855a558 exec: provide tb_invalidate_phys_addr function
Allow TB invalidation by its physical address, extract implementation
from the breakpoint_invalidate function.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 15:25:36 +00:00
Blue Swirl
2050396801 Use uintptr_t for various op related functions
Use uintptr_t instead of void * or unsigned long in
several op related functions, env->mem_io_pc and
GETPC() macro.

Reviewed-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-14 14:23:37 +00:00
Stefan Weil
6375e09e79 w64: Fix data type of tb_next and other variables used for host addresses
QEMU host addresses must use uintptr_t to be portable for hosts with
an unusual size of long (w64).

tb_jmp_offset is a uint16_t value, therefore the local variable offset
in function tb_set_jmp_target was changed from unsigned long to uint16_t.

The type cast in function tb_add_jump, formerly to long, now also uses uintptr_t.
For the bit operation used here, the signedness of the type cast does
not matter.

Some remaining unsigned long values are either only used for ARM assembler
code or will be fixed in a later patch for PPC.

v2:
Fix signature of tb_find_pc in exec.c, too (hint from Blue Swirl, thanks).
There remain lots of other long / unsigned long in exec.c which must be
replaced by uintptr_t. This will be done in a separate patch. Here
only one of these type casts is fixed.

v3:
Also fix signature of page_unprotect.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-04-07 11:27:45 +00:00
Richard Henderson
813da6277c tcg: Use the GDB JIT debugging interface.
This allows us to generate unwind info for the dynamically generated
code in the code_gen_buffer.  Only i386 is converted at this point.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-24 13:07:48 +00:00
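
The GDB JIT interface in question, sketched per the GDB manual: the JIT links an
in-memory object file into __jit_debug_descriptor and calls
__jit_debug_register_code(), on which GDB keeps a breakpoint. Building the
actual ELF/DWARF image (what the commit adds for the code_gen_buffer) is
omitted here:

    #include <stddef.h>
    #include <stdint.h>

    typedef enum {
        JIT_NOACTION = 0,
        JIT_REGISTER_FN,
        JIT_UNREGISTER_FN
    } jit_actions_t;

    struct jit_code_entry {
        struct jit_code_entry *next_entry;
        struct jit_code_entry *prev_entry;
        const char *symfile_addr;
        uint64_t symfile_size;
    };

    struct jit_descriptor {
        uint32_t version;
        uint32_t action_flag;
        struct jit_code_entry *relevant_entry;
        struct jit_code_entry *first_entry;
    };

    /* GDB sets a breakpoint here; it must not be inlined away. */
    void __attribute__((noinline)) __jit_debug_register_code(void)
    {
        __asm__ volatile ("");
    }

    struct jit_descriptor __jit_debug_descriptor = { 1, JIT_NOACTION, NULL, NULL };

    /* Register one in-memory object file describing JITed code. */
    static void register_symfile(const char *image, uint64_t size)
    {
        static struct jit_code_entry entry;
        entry.symfile_addr = image;
        entry.symfile_size = size;
        entry.next_entry = __jit_debug_descriptor.first_entry;
        if (entry.next_entry) {
            entry.next_entry->prev_entry = &entry;
        }
        __jit_debug_descriptor.first_entry = &entry;
        __jit_debug_descriptor.relevant_entry = &entry;
        __jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
        __jit_debug_register_code();
    }

    int main(void)
    {
        static const char fake_image[16];   /* stands in for an ELF image */
        register_symfile(fake_image, sizeof(fake_image));
        return 0;
    }
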
Anthony PERARD
0a1b357f15 exec: fix guest memory access for Xen
In cpu_physical_memory_rw, a change was introduced so that qemu_get_ram_ptr is
no longer called with the ram addr we want to access, but only with the
section address. This patch fixes that. (All other calls to qemu_get_ram_ptr
already use the right address.)

This fixes Xen guests.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 19:13:30 +02:00
Avi Kivity
32b089808f memory: check for watchpoints when getting code ram_addr
The code to get the ram_addr from a (tlb entry, vaddr) pair
checks that the resulting memory is not MMIO, but neglects to
check whether the region is hidden by a watchpoint page.

Add the missing check.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 11:15:01 +02:00
Avi Kivity
7859cc6e39 exec: fix write tlb entry misused as iotlb
A couple of code paths check the lower bits of CPUTLBEntry::addr_write
against io_mem_ram as a way of looking for a dirty RAM page.  This works
by accident since the value is zero, which matches all clear bits for
TLB_INVALID, TLB_MMIO, and TLB_NOTDIRTY (indicating dirty RAM).

Make it work by design by checking for the proper bits.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-19 11:15:00 +02:00
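
The before/after shape of the check, as a sketch; the flag values mirror the
TLB_* bits in cpu-all.h of this era but are illustrative here:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Flag bits kept in the low bits of a TLB entry's address field. */
    #define TLB_INVALID_MASK  (1 << 3)
    #define TLB_NOTDIRTY      (1 << 4)
    #define TLB_MMIO          (1 << 5)

    /* Before: addr_write was compared against io_mem_ram, which only
     * worked because that value happened to be zero.  After: test the
     * flag bits explicitly, which is correct by design. */
    static bool is_dirty_ram(uintptr_t addr_write)
    {
        return (addr_write & (TLB_INVALID_MASK | TLB_MMIO | TLB_NOTDIRTY)) == 0;
    }

    int main(void)
    {
        uintptr_t entry = 0x1000;   /* page-aligned address, no flags set */
        printf("plain ram entry: %d\n", is_dirty_ram(entry));
        printf("notdirty entry:  %d\n", is_dirty_ram(entry | TLB_NOTDIRTY));
        return 0;
    }
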
Blue Swirl
e141ab52d2 softmmu templates: optionally pass CPUState to memory access functions
Optionally, make memory access helpers take a parameter for CPUState
instead of relying on global env.

On most targets, perform simple moves to reorder registers. On i386,
switch from regparm(3) calling convention to standard stack-based
version.

Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
2012-03-18 12:21:52 +00:00
Andreas Färber
9349b4f9fd Rename CPUState -> CPUArchState
Scripted conversion:
  for file in *.[hc] hw/*.[hc] hw/kvm/*.[hc] linux-user/*.[hc] linux-user/m68k/*.[hc] bsd-user/*.[hc] darwin-user/*.[hc] tcg/*/*.[hc] target-*/cpu.h; do
    sed -i "s/CPUState/CPUArchState/g" $file
  done

All occurrences of CPUArchState are expected to be replaced by QOM CPUState,
once all targets are QOM'ified and common fields have been extracted.

Signed-off-by: Andreas Färber <afaerber@suse.de>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
2012-03-14 22:20:27 +01:00
Avi Kivity
97161e177b memory: get rid of cpu_register_io_memory()
The return value of cpu_register_io_memory() is no longer used anywhere, so
we can remove it and all associated data and code.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:16:39 +02:00
Avi Kivity
37ec01d433 memory: dispatch directly via MemoryRegion
Instead of indirecting via io_mem_region, dispatch directly
through the MemoryRegion obtained from the iotlb or phys_page_find().

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 19:06:11 +02:00
Avi Kivity
ce5d64c2d0 exec: fix code tlb entry misused as iotlb in get_page_addr_code()
get_page_addr_code() reads a code tlb entry, but interprets it as an
iotlb entry.  This works by accident since the low bits of a RAM code
tlb entry are clear, and match a RAM iotlb entry.  This accident is
about to unhappen, so fix the code to use an iotlb entry (using the
code entry with TLB_MMIO may fail if the page is a watchpoint).

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 18:54:20 +02:00
Avi Kivity
aa102231f0 memory: store section indices in iotlb instead of io indices
A step towards eliminating io indices.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 17:06:55 +02:00
Avi Kivity
f3705d5329 memory: make phys_page_find() return an unadjusted section
We'd like to store the section index in the iotlb, so we can't
adjust it before returning.  Return an unadjusted section and
instead introduce section_addr(), which does the adjustment later.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-08 16:16:34 +02:00
Avi Kivity
a2d335214a memory: fix I/O port aliases
Commit e58ac72b6a0 ("ioport: change portio_list not to use
memory_region_set_offset()") started using aliases of I/O memory
regions.  Since the IORange used for the I/O was contained in the
target region, the alias information (specifically, the offset
into the region) was lost.  This broke -vga std.

Fix by allocating an independent object to hold the IORange and
also the new offset.

Note that I/O memory regions were conceptually broken wrt aliases
in a different way: an alias can cause the same region to appear
twice in an address space, but we had just one IORange to service it.
This patch fixes that problem as well, since we can now have multiple
IORange/MemoryRegion associations.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 17:40:12 +02:00
Blue Swirl
b3e54c689c Merge branch 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa
* 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa:
  target-xtensa: add breakpoint tests
  target-xtensa: add DEBUG_SECTION to overlay tool
  target-xtensa: add DBREAK data breakpoints
  exec: let cpu_watchpoint_insert accept larger watchpoints
  exec: fix check_watchpoint exiting cpu_loop
  exec: add missing breaks to the watch_mem_write
  target-xtensa: add ICOUNT SR and debug exception
  target-xtensa: implement instruction breakpoints
  target-xtensa: add DEBUGCAUSE SR and configuration
  target-xtensa: fetch 3rd opcode byte only when needed
  target-xtensa: implement info tlb monitor command
  target-xtensa: define TLB_TEMPLATE for MMU-less cores
2012-03-03 17:53:41 +00:00
Avi Kivity
07f07b31e5 memory: allow phys_map tree paths to terminate early
When storing large contiguous ranges in phys_map, all values tend to
be the same pointers to a single MemoryRegionSection.  Collapse them
by marking nodes with level > 0 as leaves.  This reduces tree memory
usage dramatically.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
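
A sketch of the idea: in a multi-level table where a node at any level may be
flagged as a leaf, a large uniform range costs one entry instead of a fully
populated subtree. Field widths and constants are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    #define L2_BITS 10
    #define L2_SIZE (1 << L2_BITS)

    typedef struct PhysPageEntry {
        uint16_t is_leaf : 1;
        uint16_t ptr     : 15;   /* section index, or index of a child node */
    } PhysPageEntry;

    static PhysPageEntry nodes[64][L2_SIZE];

    /* Walk from the root; a leaf encountered above level 0 covers the
     * whole aligned range beneath it, so the walk terminates early. */
    static uint16_t phys_page_find(PhysPageEntry lp, uint64_t index, int level)
    {
        while (!lp.is_leaf && level >= 0) {
            lp = nodes[lp.ptr][(index >> (level * L2_BITS)) & (L2_SIZE - 1)];
            level--;
        }
        return lp.ptr;
    }

    int main(void)
    {
        /* a root marked leaf maps the entire space to section 7 in O(1) */
        PhysPageEntry root = { .is_leaf = 1, .ptr = 7 };
        printf("section = %u\n", phys_page_find(root, 0x12345, 2));
        return 0;
    }
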
Avi Kivity
c19e8800d4 memory: unify PhysPageEntry::node and ::leaf
They have the same type, unify them.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
2999097bf1 memory: change phys_page_set() to set multiple pages
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
f7bf546118 memory: switch phys_page_set() to a recursive implementation
Setting multiple pages at once requires backtracking to previous
nodes; easiest to achieve via recursion.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
a391843286 memory: replace phys_page_find_alloc() with phys_page_set()
By giving the function the value we want to set, we make it
more flexible for the next patch.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:45 +02:00
Avi Kivity
0f0cb164cc memory: simplify multipage/subpage registration
Instead of considering subpage on a per-page basis, split each section
into a subpage head, multipage body, and subpage tail, and register
each separately.  This simplifies the registration functions.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
31ab2b4a46 memory: give phys_page_find() its own tree search loop
We'll change phys_page_find_alloc() soon, but phys_page_find()
doesn't need to bear the consequences.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
06ef3525e1 memory: make phys_page_find() return a MemoryRegionSection
We no longer describe memory in terms of individual pages; use sections
throughout instead.

PhysPageDesc is no longer used; remove it.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
117712c3e4 memory: move tlb flush to MemoryListener commit callback
This way, if we have several changes in a single transaction, we flush just
once.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00
Avi Kivity
717cb7b259 memory: unify the two branches of cpu_register_physical_memory_log()
Identical except that the second branch knows it's not modifying an existing
subpage.

Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29 13:44:44 +02:00