Just like kvmap_dtlb_longpath we have to force the
global register level to one in order to mimic the
PSTATE_MG --> PSTATE_AG transition done on SUN4U.
Signed-off-by: David S. Miller <davem@davemloft.net>
The SUN4V convention with non-shared TSBs is that the context
bit of the TAG is clear. So we have to choose an "invalid"
bit and initialize new TSBs appropriately. Otherwise a zero
TAG looks "valid".
Make sure, for the window fixup cases, that we use the right
global registers and that we don't potentially trample on
the live global registers in etrap/rtrap handling (%g2 and
%g6) and that we put the missing virtual address properly
in %g5.
Signed-off-by: David S. Miller <davem@davemloft.net>
Yes, you heard it right, they changed the PTE layout for
SUN4V. Ho hum...
This is the simple and inefficient way to support this.
It'll get optimized, don't worry.
Signed-off-by: David S. Miller <davem@davemloft.net>
Code patching did not sign extend negative branch
offsets correctly.
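As a minimal sketch of the failure mode, assuming the SPARC BPcc disp19 field layout (the helper name is illustrative): the 19-bit displacement must be sign-extended before the branch target is recomputed, otherwise any backwards (negative-offset) branch gets re-targeted incorrectly:

#include <stdint.h>

/* Extract the signed 19-bit displacement from a branch instruction and
 * convert it to a byte offset relative to the branch itself. */
static int32_t branch_disp19(uint32_t insn)
{
	uint32_t field = insn & 0x7ffff;	/* low 19 bits of the instruction */

	if (field & 0x40000)			/* sign bit of the 19-bit field */
		field |= 0xfff80000u;		/* sign-extend into the upper bits */

	return (int32_t)(field << 2);		/* word offset -> byte offset */
}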
Kernel TLB miss path needs patching and %g4 register
preservation in order to handle SUN4V correctly.
Signed-off-by: David S. Miller <davem@davemloft.net>
Things are a little tricky because, unlike sun4u, we have
to:
1) do a hypervisor trap to do the TLB load.
2) do the TSB lookup calculations by hand (sketched below)
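As a rough sketch of that by-hand lookup, in C rather than the actual assembler and with an illustrative tag format (the real sun4v encoding and names differ): hash the faulting virtual address down to an entry, compare the stored tag, and only on a hit hand the PTE to the hypervisor TLB-load call:

#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT	13			/* 8KB base pages */

struct tsb_entry {
	uint64_t tag;
	uint64_t pte;
};

/* Return the matching entry, or NULL so the caller falls through to
 * the slow path and does a full page table walk. */
static struct tsb_entry *tsb_lookup(struct tsb_entry *tsb,
				    unsigned long nentries,
				    unsigned long vaddr)
{
	unsigned long hash = (vaddr >> PAGE_SHIFT) & (nentries - 1);
	struct tsb_entry *ent = &tsb[hash];

	if (ent->tag == (vaddr >> 22))		/* tag holds the upper VA bits */
		return ent;
	return NULL;
}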
Signed-off-by: David S. Miller <davem@davemloft.net>
If we're just switching between different alternate global
sets, nop it out on sun4v. Also, get rid of all of the
alternate global save/restore in the OBP CIF trampoline code.
Signed-off-by: David S. Miller <davem@davemloft.net>
This way we don't need to lock the TSB into the TLB.
The trick is that every TSB load/store is registered into
a special instruction patch section. The default uses
virtual addresses, and the patch instructions use physical
address load/stores.
We can't do this on all chips because only cheetah+ and later
have the physical variant of the atomic quad load.
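As a hedged sketch of that boot-time patching (the entry layout, section names, and flush sequence are illustrative, not the exact kernel ones): every TSB access records its own location plus a physical-address replacement instruction in the patch section, and on capable chips we walk the section and overwrite the virtual-address variants:

struct tsb_phys_patch_entry {
	unsigned int	addr;		/* location of the instruction to patch */
	unsigned int	insn;		/* physical-address replacement */
};

/* Section boundaries emitted by the linker script (assumed names). */
extern struct tsb_phys_patch_entry __tsb_phys_patch, __tsb_phys_patch_end;

static void tsb_phys_patch(void)
{
	struct tsb_phys_patch_entry *p = &__tsb_phys_patch;

	while (p < &__tsb_phys_patch_end) {
		unsigned int *insn_ptr =
			(unsigned int *)(unsigned long)p->addr;

		*insn_ptr = p->insn;		/* swap in the phys-ASI access */
		__asm__ __volatile__("flush %0"	/* make instruction fetch see it */
				     : : "r" (insn_ptr)
				     : "memory");
		p++;
	}
}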
Signed-off-by: David S. Miller <davem@davemloft.net>
We now use the TSB hardware assist features of the UltraSPARC
MMUs.
SMP is currently knowingly broken, we need to find another place
to store the per-cpu base pointers. We hid them away in the TSB
base register, and that obviously will not work any more :-)
Another known broken case is non-8KB base page size.
Also noticed that flush_tlb_all() is not referenced anywhere, only
the internal __flush_tlb_all() (local cpu only) is used by the
sparc64 port, so we can get rid of flush_tlb_all().
The kernel gets its own 8KB TSB (swapper_tsb) and each address space
gets its own private 8KB TSB. Later we can add code to dynamically
increase the size of the per-process TSB as the RSS grows. An 8KB TSB
(512 sixteen-byte entries, each translating one 8KB page) is good
enough for up to about a 4MB RSS, after which the TSB starts to incur
many capacity and conflict misses.
We even accumulate OBP translations into the kernel TSB.
Another area for refinement is large page size support. We could use
a secondary address space TSB to handle those.
Signed-off-by: David S. Miller <davem@davemloft.net>
The sequence to move over to the Linux trap tables from
the firmware ones needs to be more airtight. It turns
out that to be 100% safe we do need to be able to translate
OBP mappings in our TLB miss handlers early.
In order not to eat up a lot of kernel image memory with
static page tables, just use the translations array in
the OBP TLB miss handlers. That solves the bulk of the
problem.
Furthermore, to make sure the OBP TLB miss path will work
even before the fixed MMU globals are loaded, explicitly
load %g1 to TLB_SFSR at the beginning of the i-TLB and
d-TLB miss handlers.
To ease the OBP TLB miss walking of the prom_trans[] array,
we sort it then delete all of the non-OBP entries in there
(for example, there are entries for the kernel image itself
which we're not interested in at all).
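As a sketch of that sort-and-compact pass, with assumed field names and OBP range bounds (the actual structures and limits in the kernel may differ):

#include <linux/sort.h>

/* Assumed bounds of OBP's own virtual address range. */
#define LOW_OBP_ADDRESS		0xf0000000UL
#define HI_OBP_ADDRESS		0x100000000UL

struct linux_prom_translation {
	unsigned long virt;
	unsigned long size;
	unsigned long data;
};

static int cmp_ptrans(const void *a, const void *b)
{
	const struct linux_prom_translation *x = a, *y = b;

	if (x->virt > y->virt)
		return 1;
	if (x->virt < y->virt)
		return -1;
	return 0;
}

/* Sort prom_trans[] by virtual address, then keep only the entries in
 * OBP's range; returns the new entry count. */
static int filter_prom_trans(struct linux_prom_translation *tab, int nents)
{
	int i, n = 0;

	sort(tab, nents, sizeof(tab[0]), cmp_ptrans, NULL);

	for (i = 0; i < nents; i++) {
		if (tab[i].virt >= LOW_OBP_ADDRESS &&
		    tab[i].virt < HI_OBP_ADDRESS)
			tab[n++] = tab[i];
	}
	return n;
}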
We also save about 32K of kernel image size with this change.
Not a bad side effect :-)
There are still some reasons why trampoline.S can't use the
setup_trap_table() yet. The most noteworthy are:
1) OBP boots secondary processors with non-bias'd stack for
some reason. This is easily fixed by using a small bootup
stack in the kernel image explicitly for this purpose.
2) Doing a firmware call via the normal C call prom_set_trap_table()
goes through the whole OBP enter/exit sequence that saves and
restores OBP and Linux kernel state in the MMUs. This path
unfortunately does a "flush %g6" while loading up the OBP locked
TLB entries for the firmware call.
We can set up %g6 properly in the trampoline.S code, that is,
point it into the PAGE_OFFSET linear mapping, but we're not on the
kernel trap table yet so those addresses won't translate properly.
One idea is to do a by-hand firmware call like we do in the
early bootup code and elsewhere here in trampoline.S. But this
fails as well, as apparently the secondary processors are not
booted with OBP's special locked TLB entries loaded. These
are necessary for the firmware to process TLB misses correctly
up until the point where we take over the trap table.
This does need to be resolved at some point.
Signed-off-by: David S. Miller <davem@davemloft.net>
The trick is that we do the kernel linear mapping TLB miss starting
with an instruction sequence like this:
ba,pt %xcc, kvmap_load
xor %g2, %g4, %g5
succeeded by an instruction sequence which performs a full page table
walk starting at swapper_pg_dir.
We first take over the trap table from the firmware. Then, using this
constant PTE generation for the linear mapping area above, we build
the kernel page tables for the linear mapping.
After this is setup, we patch that branch above into a "nop", which
will cause TLB misses to fall through to the full page table walk.
With this, the page unmapping for CONFIG_DEBUG_PAGEALLOC is trivial.
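A minimal sketch of that patch step, with an illustrative symbol name (the actual label and flush sequence in the kernel may differ): once the linear-mapping page tables are built, the branch is overwritten with a nop so subsequent misses fall through to the full walk:

/* Location of the "ba,pt %xcc, kvmap_load" instruction (assumed label
 * exported from the assembler TLB miss handler). */
extern unsigned int kvmap_linear_patch[1];

static void kvmap_use_page_tables(void)
{
	unsigned int *insn = kvmap_linear_patch;

	*insn = 0x01000000;		/* SPARC nop: "sethi %hi(0), %g0" */
	__asm__ __volatile__("flush %0"	/* flush the patched instruction */
			     : : "r" (insn)
			     : "memory");
}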
Signed-off-by: David S. Miller <davem@davemloft.net>
This was kind of ugly, and actually buggy. The bug was that
we didn't handle a machine with memory starting > 4GB. If
the 'prompmd' was allocated in physical memory > 4GB we'd
croak because the obp_iaddr_patch and obp_daddr_patch things
only supported a 32-bit physical address.
So fix this by just loading the appropriate values from two
variables in the kernel image; since the image is locked into the
TLB, accesses to them can't cause a recursive TLB miss.
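As a hedged sketch (variable and helper names illustrative): the miss handlers now read a full 64-bit physical address from an ordinary kernel-image variable rather than from a 32-bit immediate patched into the instruction stream:

/* Physical address of the OBP page tables, set up before the trap table
 * is taken over.  Because it lives in the TLB-locked kernel image, the
 * miss handlers can load it without triggering a recursive miss. */
unsigned long prom_pmd_phys;

static unsigned long obp_trans_table_base(void)
{
	/* A plain 64-bit load works wherever the tables were allocated,
	 * including physical memory above 4GB. */
	return prom_pmd_phys;
}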
Signed-off-by: David S. Miller <davem@davemloft.net>