s390x/tcg: Flush the TLB of all CPUs on SSKE and RRBE

Whenever we modify a storage key, we should flush the TLBs of all CPUs,
so the MMU fault handling code can properly consider the changed storage
key (e.g., to correctly set the reference and change bits on subsequent
accesses).
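
For context, a minimal sketch of the kind of storage-key update the TCG fault path performs. This is not the actual QEMU code: the helper name update_key_on_tlb_miss, its parameters, and the exact control flow are illustrative, and the SK_R/SK_C bit definitions are assumed from target/s390x. Once a translation is cached in a CPU's TLB, this path is no longer taken, which is why stale entries have to be flushed after the key changes:

#include "qemu/osdep.h"
#include "cpu.h"
#include "hw/s390x/storage-keys.h"

/*
 * Illustrative sketch: on a TLB miss, the fault path reads the storage
 * key of the target page and sets the reference bit (and the change bit
 * for writes).  Later accesses hit the TLB and never reach this code,
 * so a key modified by SSKE/RRBE is only observed again once the cached
 * translations have been flushed on all CPUs.
 */
static void update_key_on_tlb_miss(uint64_t abs_addr, bool is_write)
{
    S390SKeysState *ss = s390_get_skeys_device();
    S390SKeysClass *skeyclass = S390_SKEYS_GET_CLASS(ss);
    uint8_t key;

    if (skeyclass->get_skeys(ss, abs_addr / TARGET_PAGE_SIZE, 1, &key)) {
        return;
    }
    key |= SK_R;                    /* reference bit */
    if (is_write) {
        key |= SK_C;                /* change bit */
    }
    skeyclass->set_skeys(ss, abs_addr / TARGET_PAGE_SIZE, 1, &key);
}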

These functions are rarely used by modern Linux guests, so the performance
implications are negligible for now.

This is a preparation for better reference and change bit handling for
TCG, which will require more MMU changes.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190816084708.602-5-david@redhat.com>
Acked-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Author:    David Hildenbrand <david@redhat.com>
Date:      2019-08-16 10:47:06 +02:00
Committer: Cornelia Huck <cohuck@redhat.com>
parent 3096ffd368
commit 5b773a1107


@@ -1815,6 +1815,11 @@ void HELPER(sske)(CPUS390XState *env, uint64_t r1, uint64_t r2)
 
     key = (uint8_t) r1;
     skeyclass->set_skeys(ss, addr / TARGET_PAGE_SIZE, 1, &key);
+    /*
+     * As we can only flush by virtual address and not all the entries
+     * that point to a physical address we have to flush the whole TLB.
+     */
+    tlb_flush_all_cpus_synced(env_cpu(env));
 }
 
 /* reset reference bit extended */
@@ -1843,6 +1848,11 @@ uint32_t HELPER(rrbe)(CPUS390XState *env, uint64_t r2)
     if (skeyclass->set_skeys(ss, r2 / TARGET_PAGE_SIZE, 1, &key)) {
         return 0;
     }
+    /*
+     * As we can only flush by virtual address and not all the entries
+     * that point to a physical address we have to flush the whole TLB.
+     */
+    tlb_flush_all_cpus_synced(env_cpu(env));
 
     /*
      * cc
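
The comment added in both hunks is the key design point: the TCG TLB is indexed by guest virtual address, while SSKE and RRBE identify the affected page by its physical frame, and several virtual pages (on several CPUs) may map that frame. Below is a hedged sketch of the contrast, assuming the existing tlb_flush_page_all_cpus_synced() and tlb_flush_all_cpus_synced() helpers; the wrapper function and its parameters are hypothetical:

#include "qemu/osdep.h"
#include "cpu.h"
#include "exec/exec-all.h"

/*
 * Hypothetical illustration: a narrow, per-page flush would only work if
 * we knew every virtual address that currently maps the frame whose key
 * changed.  The TLB has no physical->virtual reverse lookup, so the patch
 * flushes the whole TLB on all CPUs instead.
 */
static void flush_after_key_change(CPUState *cs, bool vaddr_known,
                                   target_ulong vaddr)
{
    if (vaddr_known) {
        /* Only possible with a known virtual address. */
        tlb_flush_page_all_cpus_synced(cs, vaddr);
    } else {
        /* What SSKE/RRBE have to do: flush everything, everywhere. */
        tlb_flush_all_cpus_synced(cs);
    }
}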