ppc hypervisors have delivered system reset and machine check exception
interrupts to guests in some situations (e.g., see FWNMI feature of LoPAPR,
or NMI injection in QEMU).
These exceptions are architected to set the HV bit in hardware; however,
when injected into a guest, the HV bit should be cleared. The current code
masks off the HV bit before setting the new MSR, but this happens after
the interrupt delivery model has calculated the delivery mode for the exception.
This can result in the guest's MSR LE bit being lost.
Account for this in the exception handler and don't set the HV bit for guest
delivery.
Also add another sanity check to ensure similar bugs get caught.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add support to hot remove pc-dimm memory devices.
Since we're introducing a machine-level unplug_request hook, we also
add handling for CPU unplug there, to ensure CPU unplug
continues to work as it did before.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
* add hooks to CAS/cmdline enablement of hotplug ACR support
* add hook for CPU unplug
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 0a417869 ("spapr: Move memory hotplug to RTAS_LOG_V6_HP_ID_DRC_COUNT type")
dropped per-DRC/per-LMB hotplug events in favor of a bulk add via a
single LMB count value. This was to avoid overrunning the guest EPOW
event queue with hotplug events. This works fine, but relies on the
guest exhaustively scanning for pluggable LMBs to satisfy the
requested count by issuing rtas-get-sensor(DR_ENTITY_SENSE, ...) calls
until all the LMBs associated with the DIMM are identified.
With the newer support for a dedicated hotplug event source, this queue
exhaustion is no longer as much of an issue due to implementation
details on the guest side, but we still try to avoid excessive hotplug
events by now supporting both a count and a starting index to avoid
unnecessary work. This patch makes use of that approach when the
capability is available.
Cc: bharata@linux.vnet.ibm.com
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add support for DRC count indexed hotplug ID type which is primarily
needed for memory hot unplug. This type allows for specifying the
number of DRs that should be plugged/unplugged starting from a given
DRC index.
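As a rough sketch, the new identifier payload carries a count followed by a
starting DRC index; the layout below is illustrative only, not the exact QEMU
definition (the event log fields are stored big-endian):

    #include <stdint.h>

    /* Illustrative sketch of the count-indexed hotplug identifier. */
    struct rtas_hp_id_count_indexed {
        uint32_t count;   /* number of DRs to plug/unplug */
        uint32_t index;   /* DRC index to start from */
    };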
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
* updated rtas_event_log_v6_hp to reflect count/index field ordering
used in PAPR hotplug ACR
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds machine options of the form:
-machine pseries,modern-hotplug-events=true
-machine pseries,modern-hotplug-events=false
If false, QEMU will force the use of "legacy" style hotplug events,
which are surfaced through EPOW events instead of a dedicated
hotplug event source, and which lack certain features needed mainly
for memory unplug support.
If true, QEMU will enable support for the "modern" dedicated hotplug
event source. Note that we will still default to "legacy" style unless
the guest advertises support for the "modern" hotplug events via the
ibm,client-architecture-support hcall during early boot.
For pseries-2.7 and earlier we default to false, for newer machine
types we default to true.
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Hotplug events were previously delivered using an EPOW interrupt
and were queued by Linux guests into a circular buffer. For traditional
EPOW events like shutdown/resets, this isn't an issue, but for hotplug
events there are cases where this buffer can be exhausted, resulting
in the loss of hotplug events, resets, etc.
Newer-style hotplug events are delivered using a dedicated event source.
We enable this in supported guests by adding an additional event
source in the guest device tree via /event-sources, and, if
the guest advertises support for the newer-style hotplug events,
using the corresponding interrupt to signal the availability of
hotplug/unplug events.
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This updates the existing documentation to reflect recent updates to
the hotplug event structure, which are in draft form but slated
for inclusion in PAPR/LoPAPR.
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we also support the "-prom-env" parameter for the pseries
machine, we can enable this test for this machine, too. Since booting
with TCG is rather slow with the pseries machine, we also enable
the "-nodefaults" parameter for this test now, so that SLOF does not
have to check as many devices during boot and thus runs a little
bit faster.
Signed-off-by: Thomas Huth <thuth@redhat.com>
[dwg: Don't add -nodefaults to the command line, it causes extra warnings
for the sparc testcases]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In case we do not load the NVRAM contents from a file and the user
has specified the "-prom-env" parameter, use the new CHRP NVRAM helper
functions to pre-initialize the NVRAM partitions, so that the SLOF
firmware can now pick up the environment variables from the -prom-env
parameter, too.
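For illustration, this enables invocations of the following kind (the
variable names and values here are just examples, not ones required by SLOF):

    qemu-system-ppc64 -M pseries -nographic \
        -prom-env 'auto-boot?=false' \
        -prom-env 'boot-command=my-boot-script'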
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The usual use model for the libqos PCI functions is to map a specific PCI
BAR using qpci_iomap(), then pass the returned token into IO accessor
functions. This, and the fact that iomap() returns a (void *) which
actually contains a PCI space address, kind of suggests that the return
value from iomap is supposed to be an opaque token.
...except that the callers expect to be able to add offsets to it, which
also assumes the compiler will support pointer arithmetic on a (void *)
and treat it as working with byte offsets.
To clarify this situation change iomap() and the IO accessors to take
a definitely opaque BAR handle (enforced with a wrapper struct) along with
an offset within the BAR. This changes both the functions and all the
callers.
There were a number of places that checked if iomap() returned non-NULL,
and/or initialized it to NULL beforehand. Since iomap() already assert()s
if it fails to map the BAR, these tests were mostly pointless and are
removed.
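As a sketch of the resulting calling convention (the handle type and accessor
names follow the description above; treat the exact signatures as approximate):

    uint64_t bar_size;
    QPCIBar bar0 = qpci_iomap(dev, 0, &bar_size);    /* opaque BAR handle */
    uint32_t val = qpci_io_readl(dev, bar0, 0x10);   /* offset within the BAR */
    qpci_io_writel(dev, bar0, 0x10, val | 1);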
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
In a couple of places ahci-test makes assumptions about how the tokens
returned from qpci_iomap() are formatted in ways it probably shouldn't.
First, in verify_state(), it uses a non-NULL token to indicate that the AHCI
device has been enabled (part of enabling is to iomap()). This changes it
to use an explicit 'enabled' flag instead.
Second, it uses the fact that the token contains a PCI address, stored when
the BAR is mapped during initialization, to check that the BAR has the same
value after a migration. This changes it to explicitly read the BAR
register before and after the migration and compare.
Together, these changes will make the test more robust against changes to
the internals of the libqos PCI layer.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
ivshmem implements a block of shared memory in a PCI BAR. Currently our
test case accesses this using qtest_mem{read,write}. However, deducing
the correct addresses for these requires making assumptions about the
internal format returned by qpci_iomap(), along with some ugly casts.
This patch changes the test to use the new qpci_mem{read,write} interfaces,
which is neater.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently the libqos PCI layer includes accessor helpers for 8, 16 and 32
bit reads and writes. It's likely that we'll want 64-bit accesses in the
future (plenty of modern peripherals will have 64-bit registers). This
adds them.
For PIO (not MMIO) accesses on the PC backend, this is implemented as two
32-bit ins or outs. That's not ideal but AFAICT x86 doesn't have 64-bit
versions of in and out.
This patch also converts the single current user of 64-bit accesses,
virtio-pci.c, to use the new mechanism rather than a sequence of eight
byte-wide reads.
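A sketch of how the PC PIO case can be handled, assuming the libqtest outl()
helper (the function name here is illustrative):

    static void pio_writeq(uint16_t addr, uint64_t val)
    {
        /* x86 has no 64-bit in/out, so split into two 32-bit accesses */
        outl(addr, val & 0xffffffff);
        outl(addr + 4, val >> 32);
    }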
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
ide-test uses many explicit inb() / outb() operations for its IO, which
means it's not portable to non-x86 platforms. This cleans it up to use
the libqos PCI accessors instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
In the libqos PCI code we now have accessors both for registers (byte
significance preserving) and for streaming data (byte address order
preserving). These exist both in the interface for qtest drivers and in
the machine-specific backends.
However, the register-style accessors aren't actually necessary in the
backend. They can be implemented in terms of the byte address order
preserving accessors by the libqos wrappers. This works because PCI is
always little endian.
This does assume that the back end byte address order preserving accessors
will perform the equivalent of a single bus transaction for short lengths.
This is the case, and in fact they currently end up using the same
cpu_physical_memory_rw() implementation within the qtest accelerator.
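A minimal sketch of the idea, assuming a memread()-style backend hook and a
QPCIBar handle holding the base address (names approximate, PIO path omitted):

    uint16_t qpci_io_readw(QPCIDevice *dev, QPCIBar token, uint64_t off)
    {
        uint16_t v;

        /* one short backend read, i.e. a single bus transaction */
        dev->bus->memread(dev->bus, token.addr + off, &v, sizeof(v));
        return le16_to_cpu(v);   /* PCI registers are little endian */
    }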
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Currently PCI memory (aka MMIO) space is accessed via a set of readb/writeb
style accessors. This is what we want for accessing discrete registers of
a certain size. However, there are a few cases where we instead need a
"bag of bytes" style streaming interface to PCI MMIO space. This can be
either for streaming data style registers or when there's actual memory
rather than registers in PCI space, for example frame buffers or ivshmem.
This patch adds backend callbacks, and libqos wrappers, for this type of
byte address order preserving access.
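The new wrappers look roughly like this (signatures are approximate):

    void qpci_memread(QPCIDevice *dev, QPCIBar token, uint64_t off,
                      void *buf, size_t len);
    void qpci_memwrite(QPCIDevice *dev, QPCIBar token, uint64_t off,
                       const void *buf, size_t len);

    /* e.g. streaming a frame-buffer-like region out of a BAR */
    uint8_t data[256];
    qpci_memread(dev, bar, 0, data, sizeof(data));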
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Avoid tco-test making assumptions about the internal format of the address
tokens passed to PCI IO accessors, by using the new qpci_legacy_iomap()
function.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The usual model for PCI IO with libqos is to use qpci_iomap() to map a
specific BAR for a PCI device, then perform IOs within that BAR using
qpci_io_{read,write}*().
However, certain devices also have legacy PCI IO. In this case, instead of
(or as well as) being accessed via PCI BARs, the device can be accessed
via certain well-known, fixed addresses in PCI IO space.
Two existing tests use legacy PCI IO, and take different flawed approaches
to it:
* tco-test manually constructs a tco_io_base value instead of calling
qpci_iomap(), which assumes internal knowledge of the structure of
the value it shouldn't have
* ide-test uses direct in*() and out*() calls instead of using
qpci_io_*() accessors, meaning it's not portable to non-x86 machine
types.
This patch implements a new qpci_legacy_iomap() interface which returns a
handle in the same format as qpci_iomap(), but referring to a region in
the legacy PIO space. For a device which has the same registers
available both in a BAR and in legacy space (quite common), this
allows the same test code to test both options with just a different
iomap() call at the beginning.
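Usage then looks something like this sketch (the 0x1f0 base is just the
classic primary IDE example):

    QPCIBar ide_bar = qpci_legacy_iomap(dev, 0x1f0);  /* fixed legacy IO base */
    uint8_t status = qpci_io_readb(dev, ide_bar, 7);  /* same accessors as a BAR */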
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The PCI backends in libqos each supply an iomap() and iounmap() function
which is used to set up a specified PCI BAR. But PCI BAR allocation takes
place entirely within PCI space, so doesn't really need per-backend
versions. For example, Linux includes generic BAR allocation code used on
platforms where that isn't done by firmware.
This patch merges the BAR allocation from the two existing backends into a
single simplified copy. The back ends just need to set up some parameters
describing the window of PCI IO and PCI memory addresses which are
available for allocation. Like both the existing versions the new one uses
a simple bump allocator.
Note that (again like the existing versions) this doesn't really handle
64-bit memory BARs properly. It is actually used for such a BAR by the
ivshmem test, and apparently the 32-bit MMIO BAR logic is close enough to
work, as long as the BAR isn't too big. Fixing that to properly handle
64-bit BAR allocation is a problem for another time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The PCI IO space (aka PIO, aka legacy IO) and PCI memory space (aka MMIO)
are distinct address spaces by the PCI spec (although parts of one might be
aliased to parts of the other in some cases).
However, qpci_io_read*() and qpci_io_write*() can perform accesses to
either space depending on the address parameter. That's convenient for test case
drivers, since there are a fair few devices which can be controlled via
either a PIO or MMIO BAR but with an otherwise identical driver.
This is implemented by having addresses below 64kiB treated as PIO, and
those above treated as MMIO. This works because low addresses in memory
space are generally reserved for DMA rather than MMIO.
At the moment, this demultiplexing must be handled by each PCI backend
(pc and spapr, so far). There's no real reason for this - the current
encoding is likely to work for all platforms, and even if it doesn't we
can still use a more complex common encoding, since the values returned from
iomap are semi-opaque.
This patch moves the demultiplexing into the common part of the libqos PCI
code, with the backends having simpler, separate accessors for PIO and
MMIO space. This also means we have a way of explicitly accessing either
space if it's necessary for some special case.
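The common demultiplexing can then be a simple address check, roughly as
follows (QPCI_PIO_LIMIT standing for the 64kiB boundary; names approximate):

    uint8_t qpci_io_readb(QPCIDevice *dev, QPCIBar token, uint64_t off)
    {
        if (token.addr + off < QPCI_PIO_LIMIT) {
            return dev->bus->pio_readb(dev->bus, token.addr + off);
        } else {
            uint8_t v;
            dev->bus->memread(dev->bus, token.addr + off, &v, sizeof(v));
            return v;
        }
    }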
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
The 'addr' parameter to qvirtio_config_read*() doesn't have a consistent
meaning: when using the virtio-pci versions, it's a full PCI space address,
but for virtio-mmio, it's an offset from the device's base mmio address.
This means that the callers need to do different things to calculate the
addresses in the two cases, which rather defeats the purpose of function
pointer backends.
All the current users of these functions are using them to retrieve
variables from the device specific portion of the virtio config space.
So, this patch alters the semantics to always be an offset into that
device specific config area.
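For example, a caller reading a virtio-blk device's capacity now just passes
the offset of that field within the device-specific config area, whatever the
transport (a hedged sketch; the exact signature may differ):

    /* offset 0 of virtio-blk's device-specific config is the capacity field */
    uint64_t capacity = qvirtio_config_readq(bus, dev, 0);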
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
ADB devices must take a new handler into account only when they recognize it.
This lets operating systems probe for valid/invalid handlers, to discover device capabilities.
Add a FIXME in the keyboard handler, which should use a different translation
table depending on the selected handler.
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ibm,architecture-vec-5 is supposed to encode all option vector 5 bits
negotiated between platform/guest. Currently we hardcode this property
in the boot-time device tree to advertise a single negotiated
capability, "Form 1" NUMA Affinity, regardless of whether or not CAS
has been invoked or that capability has actually been negotiated.
Improve this by generating ibm,architecture-vec-5 based on the full
set of option vector 5 capabilities negotiated via CAS.
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In some cases, ibm,client-architecture-support calls can fail. This
could happen in the current code for situations where the modified
device tree segment exceeds the buffer size provided by the guest
via the call parameters. In these cases, QEMU will reset, allowing
an opportunity to regenerate the device tree from scratch via
boot-time handling. There are potentially other scenarios as well,
not currently reachable in the current code, but possible in theory,
such as cases where device-tree properties or nodes need to be removed.
We currently don't handle either of these properly for option vector
capabilities, however. Instead of carrying the negotiated capabilities
beyond the reset and creating the boot-time device tree accordingly,
we start from scratch, generating the same boot-time device tree as we
did prior to the CAS-generated reset and attempting the same device tree
updates as we did before. This could (in theory) cause us to get stuck
in a reset loop. This hasn't been observed, but depending on the
extensiveness of CAS-induced device tree updates in the future, it could
eventually become an issue.
Address this by pulling capability-related device tree
updates resulting from CAS calls into a common routine,
spapr_dt_cas_updates(), and adding an sPAPROptionVector*
parameter that allows us to test for newly-negotiated capabilities.
We invoke it as follows:
1) When ibm,client-architecture-support gets called, we
call spapr_dt_cas_updates() with the set of capabilities
added since the previous call to ibm,client-architecture-support.
For the initial boot, or a system reset generated by something
other than the CAS call itself, this set will consist of *all*
options supported by both the platform and the guest. For calls
to ibm,client-architecture-support immediately after a CAS-induced
reset, we call spapr_dt_cas_updates() with only the set
of capabilities added since the previous call, since the other
capabilities will have already been addressed by the boot-time
device-tree this time around. In the unlikely event that
capabilities are *removed* since the previous CAS, we will
generate a CAS-induced reset. In the unlikely event that we
cannot fit the device-tree updates into the buffer provided
by the guest, we'll generate a CAS-induced reset.
2) When a CAS update results in the need to reset the machine and
include the updates in the boot-time device tree, we call
spapr_dt_cas_updates() using the full set of negotiated
capabilities as part of the reset path. At initial boot, or after
a reset generated by something other than the CAS call itself,
this set will be empty, resulting in what should be the same
boot-time device-tree as we generated prior to this patch. For
CAS-induced reset, this routine will be called with the full set of
capabilities negotiated by the platform/guest in the previous
CAS call, which should result in CAS updates from the previous call
being accounted for in the initial boot-time device tree.
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
[dwg: Changed an int -> bool conversion to be more explicit]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently we access individual bytes of an option vector via
ldub_phys() to test for the presence of a particular capability
within that byte. This is only done for the "dynamic
reconfiguration memory" capability bit so far. If that bit is present,
we pass a boolean value to spapr_h_cas_compose_response()
to generate a modified device tree segment with the additional
properties required to enable this functionality.
As more capability bits are added, we would need to modify the
code to add additional option vector accesses and extend the
parameter list for spapr_h_cas_compose_response() to include similar
boolean values for each of them.
Avoid this by switching to spapr_ovec_* helpers so we can do all
the parsing in one shot and then test for these additional bits
within spapr_h_cas_compose_response() directly.
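A sketch of the intended usage inside spapr_h_cas_compose_response() (the
spapr_ovec_* names are from this series; treat the exact signatures as
approximate):

    sPAPROptionVector *ov5_guest = spapr_ovec_parse_vector(ov_table_addr, 5);

    if (spapr_ovec_test(ov5_guest, OV5_DRCONF_MEMORY)) {
        /* add the dynamic-reconfiguration memory nodes/properties */
    }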
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PAPR guests advertise their capabilities to the platform by passing
an ibm,architecture-vec structure via an
ibm,client-architecture-support hcall during early boot, as described
by LoPAPR v11, B.6.2.3.
Using this information, the platform enables the capabilities it
supports, then encodes a subset of those enabled capabilities (the
5th option vector of the ibm,architecture-vec structure passed to
ibm,client-architecture-support) into the guest device tree via
"/chosen/ibm,architecture-vec-5".
The logical format of these option vectors is a bit-vector,
where individual bits are addressed/documented based on the byte-wise
offset from the beginning of the bit-vector, followed by the bit-wise
index starting from that byte-wise offset. Thus the bits of each of
these bytes are stored in reverse order. Additionally, the first
byte of each option vector encodes the length of the option vector,
so byte offsets begin at 1, and bit offsets at 0.
This is not very intuitive for the purposes of mapping these bits to
a particular documented capability, so this patch introduces a set
of abstractions that encapsulate the work of parsing/encoding these
options vectors and testing for individual capabilities.
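Concretely, the addressing described above amounts to something like this
minimal sketch (ov_bit_is_set() is an illustrative helper, not part of the
patch):

    #include <stdbool.h>
    #include <stdint.h>

    /* vec[0] holds the vector length, so documented byte offsets start at 1;
     * within a byte, documented bit 0 is the most significant bit. */
    static bool ov_bit_is_set(const uint8_t *vec, unsigned byte_off, unsigned bit)
    {
        return vec[byte_off] & (0x80u >> bit);
    }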
Cc: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
[dwg: Tweaked double-include protection to not trigger a checkpatch
false positive]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For historical reasons construction of the guest device tree in spapr is
divided between spapr_create_fdt_skel() which is called at init time, and
spapr_build_fdt() which runs at reset time. Over time, more and more
things have needed to be moved to reset time.
Previous cleanups mean the only things left in spapr_create_fdt_skel() are
the properties of the root node itself. Finish consolidating these two
parts of device tree construction, by moving this to the start of
spapr_build_fdt(), and removing spapr_create_fdt_skel() entirely.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Construction of the /vdevice node (and its children) is divided between
spapr_create_fdt_skel() (at init time), which creates the base node, and
spapr_populate_vdevice() (at reset time) which creates the nodes for each
individual virtual device.
This consolidates both into a single function called from
spapr_build_fdt().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Currently the /hypervisor device tree node is constructed in
spapr_create_fdt_skel(). As part of consolidating device tree construction
to reset time, move it to a function called from spapr_build_fdt().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The /event-sources device tree node is built from spapr_create_fdt_skel().
As part of consolidating device tree construction to reset time, this moves
it to spapr_build_fdt().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
For historical reasons construction of the /rtas node in the device
tree (amongst others) is split into several places. In particular
it's split between spapr_create_fdt_skel(), spapr_build_fdt() and
spapr_rtas_device_tree_setup().
In fact, as well as adding the actual RTAS tokens to the device tree,
spapr_rtas_device_tree_setup() just adds the ibm,lrdr-capacity
property, which despite going in the /rtas node, doesn't have a lot to
do with RTAS.
This patch consolidates the code constructing /rtas together into a new
spapr_dt_rtas() function. spapr_rtas_device_tree_setup() is renamed to
spapr_dt_rtas_tokens() and now only adds the token properties.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
For historical reasons, building the /chosen node in the guest device tree
is split across several places and includes both parts which write the DT
sequentially and others which use random access functions.
This patch consolidates construction of the node into one place, using
random access functions throughout.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Currently the device tree node for the XICS interrupt controller is in
spapr_create_fdt_skel(). As part of consolidating device tree construction
to reset time, this moves it to a function called from spapr_build_fdt().
In addition we move the actual code into hw/intc/xics_spapr.c with the
rest of the PAPR specific interrupt controller code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
At each system reset, the pseries machine needs to load RTAS (the runtime
portion of the guest firmware) into the VM. This means copying
the actual RTAS code into guest memory, and also updating the device
tree so that the guest OS and boot firmware can locate it.
For historical reasons the copy and update to the device tree were in
different parts of the code. This cleanup brings them both together in
an spapr_load_rtas() function.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The flattened device tree passed to pseries guests contains a list of
reserved memory areas. Currently we construct this list early in
spapr_create_fdt_skel() as we sequentially write the fdt.
This will be inconvenient for upcoming cleanups, so this patch moves
the reserve map changes to the end of fdt construction. This changes
fdt_add_reservemap_entry() calls - which work when writing the fdt
sequentially - to fdt_add_mem_rsv() calls, which are used when altering
the fdt in random access mode.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Currently spapr_create_fdt_skel() takes a bunch of individual parameters
for various things it will put in the device tree. Some of these can
already be taken directly from sPAPRMachineState. This patch alters it so
that all of them can be taken from there, which will allow this code to
be moved away from its current caller in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
These values are used only within ppc_spapr_reset(), so just change them
to local variables.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
spapr_finalize_fdt() both finishes building the device tree for the guest
and loads it into guest memory. For future cleanups, it's going to be
more convenient to do these two things separately. The loading portion is
pretty trivial, so we move it inline into the caller, ppc_spapr_reset().
We also rename spapr_finalize_fdt(), because the current name is going to
become inaccurate.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>