memory_region_iommu_replay() always calls the IOMMU MR translate() callback
with flag=IOMMU_NONE. Currently, smmuv3_translate() returns an IOMMUTLBEntry
with perm set to IOMMU_NONE even if the translation succeeds, whereas it is
expected to return the actual permissions set in the table entry.
So pass the actual permissions from the table entry to the returned
IOMMUTLBEntry.
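The idea, roughly (a sketch against the generic IOMMUTLBEntry API; the
'cached_entry' variable name is an assumption, not necessarily what
smmuv3_translate() uses):
    /* On success, report what the table entry grants instead of echoing
     * the IOMMU_NONE query flag back to the caller. */
    entry.perm = cached_entry->entry.perm;    /* was: entry.perm = flag; */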
Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1650094695-121918-1-git-send-email-chenxiang66@hisilicon.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_constant_{i32,i64} as appropriate throughout.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The operation we're performing with the movcond
is either min/max depending on cond -- simplify.
Use tcg_constant_i64 while we're at it.
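For illustration, a movcond whose value operands repeat its comparison
operands is just a min or max (operand names below are made up):
    /* d = (a < b) ? a : b, i.e. the signed minimum ... */
    tcg_gen_movcond_i64(TCG_COND_LT, d, a, b, a, b);
    /* ... can be written directly as: */
    tcg_gen_smin_i64(d, a, b);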
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_constant_{i32,i64} as appropriate throughout.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_constant_{i32,i64} as appropriate throughout.
This fixes a bug in trans_VSCCLRM() where we were leaking a TCGv.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The length of the previous insn may be computed from
the difference of start and end addresses.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Use tcg_gen_umin_i32 instead of tcg_gen_movcond_i32.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead of computing
    tmp1 = shift & 0xff;
    dest = (tmp1 > 0x1f ? 0 : value) << (tmp1 & 0x1f);
use
    tmpd = value << (shift & 0x1f);
    dest = shift & 0xe0 ? 0 : tmpd;
which has a flatter dependency tree.
Use tcg_constant_i32 while we're at it.
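In TCG ops the rewritten form might look roughly like this (a sketch, not
the exact translate.c code; temporaries and operand names are illustrative):
    TCGv_i32 tmpd = tcg_temp_new_i32();
    TCGv_i32 tmp = tcg_temp_new_i32();
    TCGv_i32 zero = tcg_constant_i32(0);
    tcg_gen_andi_i32(tmp, shift, 0x1f);
    tcg_gen_shl_i32(tmpd, value, tmp);     /* tmpd = value << (shift & 0x1f) */
    tcg_gen_andi_i32(tmp, shift, 0xe0);
    /* dest = (shift & 0xe0) ? 0 : tmpd */
    tcg_gen_movcond_i32(TCG_COND_NE, dest, tmp, zero, zero, tmpd);
Both the shift and the 0xe0 test depend only on the inputs, so they can be
evaluated independently before the final select.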
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For aa32, the function has a parameter to use the new el.
For aa64, that never happens.
Use tcg_constant_i32 while we're at it.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Common code for reset_btype and set_btype.
Use tcg_constant_i32.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function is incorrect in that it does not properly consider
CPTR_EL2.FPEN. We've already got another mechanism for raising
an FPU access trap: ARM_CP_FPU, so use that instead.
Remove CP_ACCESS_TRAP_FP_EL{2,3}, which become unused.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Adjust the assignments to use true/false.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Move the member down in the struct to keep the
bool type members together and remove a hole.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently we assume all fields are 32-bit.
Prepare for fields of a single byte, using sizeof_field().
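For reference, sizeof_field() yields the size of a struct member without
needing an object, so a field descriptor can carry the field's own width
rather than assuming 4 bytes; a small illustration:
    #include <stdint.h>
    #include <stddef.h>
    /* QEMU's helper, reproduced here for illustration: */
    #define sizeof_field(type, field) sizeof(((type *)0)->field)
    struct demo { uint32_t word; uint8_t byte; };
    size_t word_sz = sizeof_field(struct demo, word);  /* 4 */
    size_t byte_sz = sizeof_field(struct demo, byte);  /* 1 */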
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: use sizeof_field() instead of raw sizeof()]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Adjust the assignments to use true/false.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Bool is a more appropriate type for this value.
Move the member down in the struct to keep the
bool type members together and remove a hole.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update SCTLR_ELx fields per ARM DDI0487 H.a.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update SCR_EL3 fields per ARM DDI0487 H.a.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Update isar fields per ARM DDI0487 H.a.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add support for the TCG GICv4 to the virt board. For the board,
the GICv4 is very similar to the GICv3, with the only difference
being the size of the redistributor frame. The changes here are thus:
* calculating virt_redist_capacity correctly for GICv4
* changing various places which were "if GICv3" to be "if not GICv2"
* the commandline option handling
Note that using GICv4 reduces the maximum possible number of CPUs on
the virt board from 512 to 317, because we can now only fit half as
many redistributors into the redistributor regions we have defined.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-42-peter.maydell@linaro.org
In several places in virt.c we calculate the number of redistributors that
fit in a region of our memory map, which is the size of the region
divided by the size of a single redistributor frame. For GICv4, the
redistributor frame is a different size from that for GICv3. Abstract
out the calculation of redistributor region capacity so that we have
one place we need to change to handle GICv4 rather than several.
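Once GICv4 is handled, the helper might look roughly like this (a sketch;
GICV4_REDIST_SIZE and the exact shape are assumptions, while the memmap
layout and GICV3_REDIST_SIZE follow the existing code):
    static uint32_t virt_redist_capacity(VirtMachineState *vms, int region)
    {
        uint64_t redist_size = (vms->gic_version == VIRT_GIC_VERSION_4)
            ? GICV4_REDIST_SIZE     /* four 64KB frames per CPU */
            : GICV3_REDIST_SIZE;    /* two 64KB frames per CPU */
        return vms->memmap[region].size / redist_size;
    }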
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-41-peter.maydell@linaro.org
Everywhere we need to check which GIC version we're using, we look at
vms->gic_version and use the VIRT_GIC_VERSION_* enum values, except
in create_gic(), which copies vms->gic_version into a local 'int'
variable and makes direct comparisons against values 2 and 3.
For consistency, change this function to check the GIC version
the same way we do elsewhere. This includes not implicitly relying
on the enumeration type values happening to match the integer
'revision' values the GIC device object wants.
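Illustratively, the function can switch on the enum and set the device
property explicitly (a sketch; the exact structure in virt.c may differ):
    uint32_t revision;
    switch (vms->gic_version) {
    case VIRT_GIC_VERSION_2:
        revision = 2;
        break;
    case VIRT_GIC_VERSION_3:
        revision = 3;
        break;
    default:
        g_assert_not_reached();
    }
    qdev_prop_set_uint32(vms->gic, "revision", revision);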
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-40-peter.maydell@linaro.org
Now that we have implemented all the GICv4 requirements, relax the
error-checking on the GIC object's 'revision' property to allow a TCG
GIC to be a GICv4, whilst still constraining the KVM GIC to GICv3.
Our 'revision' property doesn't consider the possibility of wanting
to specify the minor version of the GIC -- for instance there is a
GICv3.1 which adds support for extended SPI and PPI ranges, among
other things, and also GICv4.1. But since the QOM property is
internal to QEMU, not user-facing, we can cross that bridge when we
come to it. Within the GIC implementation itself code generally
checks against the appropriate ID register feature bits, and the
only use of s->revision is for setting those ID register bits.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-39-peter.maydell@linaro.org
Update the various GIC ID and feature registers for GICv4:
* PIDR2 [7:4] is the GIC architecture revision
* GICD_TYPER.DVIS is 1 to indicate direct vLPI injection support
* GICR_TYPER.VLPIS is 1 to indicate redistributor support for vLPIs
* GITS_TYPER.VIRTUAL is 1 to indicate vLPI support
* GITS_TYPER.VMOVP is 1 to indicate that our VMOVP implementation
handles cross-ITS synchronization for the guest
* ICH_VTR_EL2.nV4 is 0 to indicate direct vLPI injection support
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-38-peter.maydell@linaro.org
Implement the function gicv3_redist_inv_vlpi(), which was previously
left as a stub. This is the function that does the work of the INV
command for a virtual interrupt.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-37-peter.maydell@linaro.org
Implement the gicv3_redist_vinvall() function (previously left as a
stub). This function handles the work of a VINVALL command: it must
invalidate any cached information associated with a specific vCPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-36-peter.maydell@linaro.org
Implement the gicv3_redist_mov_vlpi() function (previously left as a
stub). This function handles the work of a VMOVI command: it marks
the vLPI not-pending on the source and pending on the destination.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-35-peter.maydell@linaro.org
We can use our new set_pending_table_bit() utility function
in gicv3_redist_mov_lpi() to clear the bit in the source
pending table, rather than doing the "load, clear bit, store"
ourselves.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-34-peter.maydell@linaro.org
Implement the function gicv3_redist_vlpi_pending(), which was
previously left as a stub. This is the function that is called by
the CPU interface when it changes the state of a vLPI. It's similar
to gicv3_redist_process_vlpi(), but we know that the vCPU is
definitely resident on the redistributor and the irq is in range, so
it is a bit simpler.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-33-peter.maydell@linaro.org
Implement the function gicv3_redist_process_vlpi(), which was left as
just a stub earlier. This function deals with being handed a VLPI by
the ITS. It must set the bit in the pending table. If the vCPU is
currently resident we must recalculate the highest priority pending
vLPI; otherwise we may need to ring a "doorbell" interrupt to let the
hypervisor know it might want to reschedule the vCPU.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-32-peter.maydell@linaro.org
Factor out the code which sets a single bit in an LPI pending table.
We're going to need this for handling vLPI tables, not just the
physical LPI table.
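A sketch of what such a helper does (the DMA address-space access pattern
follows the existing LPI code; the name and exact signature are assumptions):
    static void set_pending_table_bit(GICv3CPUState *cs, uint64_t ptbase,
                                      int irq, bool set)
    {
        AddressSpace *as = &cs->gic->dma_as;
        uint64_t addr = ptbase + irq / 8;
        uint8_t pend;
        address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED, &pend, 1);
        if (set) {
            pend |= 1 << (irq % 8);
        } else {
            pend &= ~(1 << (irq % 8));
        }
        address_space_write(as, addr, MEMTXATTRS_UNSPECIFIED, &pend, 1);
    }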
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-31-peter.maydell@linaro.org
The guest uses GICR_VPENDBASER to tell the redistributor when it is
scheduling or descheduling a vCPU. When it writes and changes the
VALID bit from 0 to 1, it is scheduling a vCPU, and we must update
our view of the current highest priority pending vLPI from the new
Pending and Configuration tables. When it writes and changes the
VALID bit from 1 to 0, it is descheduling, which means that there is
no longer a highest priority pending vLPI.
The specification allows the implementation to use part of the vLPI
Pending table as an IMPDEF area where it can cache information when a
vCPU is descheduled, so that it can avoid having to do a full rescan
of the tables when the vCPU is scheduled again. For now, we don't
take advantage of this, and simply do a complete rescan.
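A rough sketch of the VALID-bit handling described above (register FIELD
names, helper names and the idle-priority value are assumptions, not the
exact code):
    static void vpendbaser_write(GICv3CPUState *cs, uint64_t newval)
    {
        bool oldvalid = FIELD_EX64(cs->gicr_vpendbaser, GICR_VPENDBASER, VALID);
        bool newvalid = FIELD_EX64(newval, GICR_VPENDBASER, VALID);
        cs->gicr_vpendbaser = newval;
        if (!oldvalid && newvalid) {
            /* vCPU scheduled: rescan the vLPI Pending/Config tables */
            gicv3_redist_update_vlpi(cs);
        } else if (oldvalid && !newvalid) {
            /* vCPU descheduled: no pending vLPI (0xff = idle priority) */
            cs->hppvlpi.prio = 0xff;
            gicv3_cpuif_virt_irq_fiq_update(cs);
        }
    }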
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-30-peter.maydell@linaro.org
Factor out the common part of gicv3_redist_update_lpi_only() into
a new function update_for_all_lpis(), which does a full rescan
of an LPI Pending table and sets the specified PendingIrq struct
with the highest priority pending enabled LPI it finds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-29-peter.maydell@linaro.org
Currently the functions which update the highest priority pending LPI
information by looking at the LPI Pending and Configuration tables
are hard-coded to use the physical LPI tables addressed by
GICR_PENDBASER and GICR_PROPBASER. To support virtual LPIs we will
need to do essentially the same job, but looking at the current
virtual LPI Pending and Configuration tables and updating cs->hppvlpi
instead of cs->hpplpi.
Factor out the common part of the gicv3_redist_check_lpi_priority()
function into a new update_for_one_lpi() function, which updates
a PendingIrq struct if the specified LPI is higher priority than
what is currently recorded there.
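Roughly, the shared helper reads one LPI's Configuration-table byte and
records the LPI if it beats the current best; a sketch (names follow the
existing LPI code, but treat the details as assumptions):
    static void update_for_one_lpi(GICv3CPUState *cs, int irq,
                                   uint64_t ctbase, PendingIrq *hpp)
    {
        uint8_t cfg, prio;
        address_space_read(&cs->gic->dma_as,
                           ctbase + irq - GICV3_LPI_INTID_START,
                           MEMTXATTRS_UNSPECIFIED, &cfg, 1);
        if (!(cfg & LPI_CTE_ENABLED)) {
            return;
        }
        prio = cfg & LPI_PRIORITY_MASK;
        /* Record only if strictly higher priority (lower value); the real
         * code's tie-breaking and non-secure priority view are omitted. */
        if (prio < hpp->prio) {
            hpp->irq = irq;
            hpp->prio = prio;
            hpp->grp = GICV3_G1NS;   /* LPIs are always NS Group 1 */
        }
    }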
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-28-peter.maydell@linaro.org
The maintenance interrupt state depends only on:
* ICH_HCR_EL2
* ICH_LR<n>_EL2
* ICH_VMCR_EL2 fields VENG0 and VENG1
Now we have a separate function that updates only the vIRQ and vFIQ
lines, use that in places that only change state that affects vIRQ
and vFIQ but not the maintenance interrupt.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-27-peter.maydell@linaro.org
The CPU interface changes to support vLPIs are fairly minor:
in the parts of the code that currently look at the list registers
to determine the highest priority pending virtual interrupt, we
must also look at the highest priority pending vLPI. To do this
we change hppvi_index() to check the vLPI and return a special-case
value if that is the right virtual interrupt to take. The callsites
(which handle HPPIR and IAR registers and the "raise vIRQ and vFIQ
lines" code) then have to handle this special-case value.
This commit includes two interfaces with the as-yet-unwritten
redistributor code:
* the new GICv3CPUState::hppvlpi will be set by the redistributor
(in the same way as the existing hpplpi does for physical LPIs)
* when the CPU interface acknowledges a vLPI it needs to set it
to non-pending; the new gicv3_redist_vlpi_pending() function
(which matches the existing gicv3_redist_lpi_pending() used
for physical LPIs) is a stub that will be filled in later
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-26-peter.maydell@linaro.org
The function gicv3_cpuif_virt_update() currently sets all of vIRQ,
vFIQ and the maintenance interrupt. This implies that it has to be
used quite carefully -- as the comment notes, setting the maintenance
interrupt will typically cause the GIC code to be re-entered
recursively. For handling vLPIs, we need the redistributor to be
able to tell the cpuif to update the vIRQ and vFIQ lines when the
highest priority pending vLPI changes. Since that change can't cause
the maintenance interrupt state to change, we can pull the "update
vIRQ/vFIQ" parts of gicv3_cpuif_virt_update() out into a separate
function, which the redistributor can then call without having to
worry about the reentrancy issue.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-25-peter.maydell@linaro.org
Implement the new GICv4 redistributor registers: GICR_VPROPBASER
and GICR_VPENDBASER; for the moment we implement these as simple
reads-as-written stubs, together with the necessary migration
and reset handling.
We don't put ID-register checks on the handling of these registers,
because they are all in the only-in-v4 extra register frames, so
they're not accessible in a GICv3.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-24-peter.maydell@linaro.org
The GICv4 extends the redistributor register map -- where GICv3
had two 64KB frames per CPU, GICv4 has four frames. Add support
for the extra frame by using a new gicv3_redist_size() function
in the places in the GIC implementation which currently use
a fixed constant size for the redistributor register block.
(Until we implement the extra registers, they will be RAZ/WI.)
Any board that wants to use a GICv4 will need to also adjust
to handle the different sized redistributor register block;
that will be done separately.
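A sketch of the size helper (the GICV4_REDIST_SIZE constant and exact form
are assumptions):
    static inline int gicv3_redist_size(GICv3State *s)
    {
        /* GICv3: two 64KB frames per CPU; GICv4 adds two more for vLPIs */
        return s->revision < 4 ? GICV3_REDIST_SIZE : GICV4_REDIST_SIZE;
    }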
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-23-peter.maydell@linaro.org
The VINVALL command should cause any cached information in the
ITS or redistributor for the specified vCPU to be dropped or
otherwise made consistent with the in-memory LPI configuration
tables.
Here we implement the command and table parsing, leaving the
redistributor part as a stub for the moment, as usual.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-22-peter.maydell@linaro.org
Implement the GICv4 VMOVI command, which moves the pending state
of a virtual interrupt from one redistributor to another. As with
MOVI, we handle the "parse and validate command arguments and
table lookups" part in the ITS source file, and pass the final
results to a function in the redistributor which will do the
actual operation. As with the "make a VLPI pending" change,
for the moment we leave that redistributor function as a stub,
to be implemented in a later commit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-21-peter.maydell@linaro.org
Implement the ITS side of the handling of the INV command for
virtual interrupts; as usual this calls into a redistributor
function which we leave as a stub to fill in later.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-20-peter.maydell@linaro.org
We were previously implementing INV (like INVALL) to just blow away
cached highest-priority-pending-LPI information on all connected
redistributors. For GICv4.0, this isn't going to be sufficient,
because the LPI we are invalidating cached information for might be
either physical or virtual, and the required action is different for
those two cases. So we need to do the full process of looking up the
ITE from the devid and eventid. This also means we can do the error
checks that the spec lists for this command.
Split out INV handling into a process_inv() function like our other
command-processing functions. For the moment, stick to handling only
physical LPIs; we will add the vLPI parts later.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-19-peter.maydell@linaro.org
The VSYNC command forces the ITS to synchronize all outstanding ITS
operations for the specified vPEID, so that subsequent writes to
GITS_TRANSLATER honour them. The QEMU implementation is always in
sync, so for us this is a nop, like the existing SYNC command.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-18-peter.maydell@linaro.org
Implement the GICv4 VMOVP command, which updates an entry in the vPE
table to change its rdbase field. This command is unique in the ITS
command set because its effects must be propagated to all the other
ITSes connected to the same GIC as the ITS which executes the VMOVP
command.
The GICv4 spec allows two implementation choices for handling the
propagation to other ITSes:
* If GITS_TYPER.VMOVP is 1, the guest only needs to issue the command
on one ITS, and the implementation handles the propagation to
all ITSes
* If GITS_TYPER.VMOVP is 0, the guest must issue the command on
every ITS, and arrange for the ITSes to synchronize the updates
with each other by setting ITSList and Sequence Number fields
in the command packets
We choose the GITS_TYPER.VMOVP = 1 approach, and synchronously
execute the update on every ITS.
For GICv4.1 this command has extra fields in the command packet and
additional behaviour. We define the 4.1-only fields with the FIELD
macro, but only implement the GICv4.0 version of the command.
Note that we don't update the reported GITS_TYPER value here;
we'll do that later in a commit which updates all the reported
feature bit and ID register values for GICv4.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20220408141550.1271295-17-peter.maydell@linaro.org
[PMM: Moved gicv3_foreach_its() to arm_gicv3_its_common.h,
for consistency with gicv3_add_its()]
Before this patch, 'dump-guest-memory -w' accepted only a 64-bit dump
header provided by the guest through vmcoreinfo and thus was unable
to produce a 32-bit guest Windows dump. So add 32-bit guest Windows
dump support.
Signed-off-by: Viktor Prutyanov <viktor.prutyanov@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[ misc error handling fixes to avoid compiler warning ]
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20220406171558.199263-5-viktor.prutyanov@redhat.com>