mirror of https://github.com/FEX-Emu/linux.git
ea3cc330ac
This is an attempt at cleaning up the way we handle execute permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only defined by CPUs that can do something with it, and the myriad of #ifdefs in the I$/D$ coherency code is reduced to 2 cases that hopefully should cover everything.

The logic on BookE is a little bit different than what it was, though not by much. Since _PAGE_EXEC will now be set by the generic code for executable pages, we need to filter it out if the pages are not yet cache-clean and recover it afterwards. However, I don't expect the code to be more bloated than it already was in that area due to that change.

I could boast that this brings proper enforcement of per-page execute permissions to all BookE and 40x, but in fact we've had that for some time as a side effect of my previous rework in that area (and I didn't even know it :-) We would only enable execute permission if the page was cache-clean, and we would only cache-clean it if we took an exec fault. Since we now enforce that the latter only works if VM_EXEC is part of the VMA flags, we de facto already enforce per-page execute permissions... unless I missed something.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
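For illustration, a minimal sketch of the fault-path filtering the message describes; the function and parameter names below are made up for this example and are not the literal kernel code:

/*
 * Hedged sketch: strip _PAGE_EXEC (now set by the generic code) unless
 * the VMA is executable and the page's I$/D$ are known to be coherent.
 */
static pte_t example_filter_exec(struct vm_area_struct *vma, pte_t pte,
				 int page_is_cache_clean)
{
	if (!(vma->vm_flags & VM_EXEC) || !page_is_cache_clean)
		pte = __pte(pte_val(pte) & ~_PAGE_EXEC);
	return pte;
}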
103 lines
3.8 KiB
C
#ifndef _ASM_POWERPC_PTE_44x_H
#define _ASM_POWERPC_PTE_44x_H
#ifdef __KERNEL__
/*
 * Definitions for PPC440
 *
 * Because of the 3 word TLB entries to support 36-bit addressing,
 * the attributes are difficult to map in such a fashion that they
 * are easily loaded during exception processing.  I decided to
 * organize the entry so the ERPN is the only portion in the
 * upper word of the PTE and the attribute bits below are packed
 * in as sensibly as they can be in the area below a 4KB page size
 * oriented RPN.  This at least makes it easy to load the RPN and
 * ERPN fields in the TLB. -Matt
 *
 * This isn't entirely true anymore, at least some bits are now
 * easier to move into the TLB from the PTE. -BenH.
 *
 * Note that these bits preclude future use of a page size
 * less than 4KB.
 *
 * PPC 440 core has the following TLB attribute fields:
 *
 * TLB1:
 * 0  1  2  3  4  ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 * RPN.................................  -  -  -  -  -  - ERPN.......
 *
 * TLB2:
 * 0  1  2  3  4  ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
 * -  -  -  -  -  - U0 U1 U2 U3 W  I  M  G  E  -  UX UW UR SX SW SR
 *
 * Newer 440 cores (440x6 as used on AMCC 460EX/460GT) have additional
 * TLB2 storage attribute fields.  Those are:
 *
 * TLB2:
 * 0...10    11   12   13   14   15   16...31
 * no change WL1  IL1I IL1D IL2I IL2D no change
 *
 * There are some constraints and options when deciding how to map the
 * software bits into the TLB entry:
 *
 * - PRESENT *must* be in the bottom three bits because swap cache
 *   entries use the top 29 bits for TLB2.
 *
 * - FILE *must* be in the bottom three bits because swap cache
 *   entries use the top 29 bits for TLB2.
 *
 * - CACHE COHERENT bit (M) has no effect on original PPC440 cores,
 *   because they don't support SMP.  However, some later 460 variants
 *   have -some- form of SMP support and so I keep the bit there for
 *   future use.
 *
 * With the PPC 44x Linux implementation, the 0-11th LSBs of the PTE are
 * used for memory protection related functions (see PTE structure in
 * include/asm-ppc/mmu.h).  The _PAGE_XXX definitions in this file map to
 * the above bits.  Note that the bit values are CPU specific, not
 * architecture specific.
 *
 * The kernel PTE entry holds an arch-dependent swp_entry structure under
 * certain situations.  In other words, in such situations some portion of
 * the PTE bits is used as a swp_entry.  In the PPC implementation, the
 * 3rd-24th LSBs are shared with the swp_entry, while the 0-2nd LSBs still
 * hold protection values.  That means the three least significant bits
 * are reserved for protection in both the PTE and the swap entry.
 *
 * There are three protection bits available for the swap entry:
 *	_PAGE_PRESENT
 *	_PAGE_FILE
 *	_PAGE_HASHPTE (if the HW has it)
 *
 * So those three bits have to be inside the 0-2nd LSBs of the PTE.
 */
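
/*
 * Illustrative sketch, not part of the original header: a swap entry
 * must pack its fields above bit 2 so the three low protection bits
 * listed above stay usable.  The type/offset shift amounts here are
 * hypothetical, purely to show the shape of the constraint.
 */
#if 0
#define EX_SWP_TYPE_SHIFT	3
#define EX_SWP_OFFSET_SHIFT	8
#define ex_swp_entry(type, offset) \
	(((unsigned long)(type) << EX_SWP_TYPE_SHIFT) | \
	 ((unsigned long)(offset) << EX_SWP_OFFSET_SHIFT))
#endif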

#define _PAGE_PRESENT	0x00000001	/* S: PTE valid */
#define _PAGE_RW	0x00000002	/* S: Write permission */
#define _PAGE_FILE	0x00000004	/* S: nonlinear file mapping */
#define _PAGE_EXEC	0x00000004	/* H: Execute permission */
#define _PAGE_ACCESSED	0x00000008	/* S: Page referenced */
#define _PAGE_DIRTY	0x00000010	/* S: Page dirty */
#define _PAGE_SPECIAL	0x00000020	/* S: Special page */
#define _PAGE_USER	0x00000040	/* S: User page */
#define _PAGE_ENDIAN	0x00000080	/* H: E bit */
#define _PAGE_GUARDED	0x00000100	/* H: G bit */
#define _PAGE_COHERENT	0x00000200	/* H: M bit */
#define _PAGE_NO_CACHE	0x00000400	/* H: I bit */
#define _PAGE_WRITETHRU	0x00000800	/* H: W bit */
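
/*
 * Illustrative sketch, not in the original file: _PAGE_FILE and
 * _PAGE_EXEC can share bit 0x00000004 because _PAGE_FILE only has
 * meaning in a non-present PTE while _PAGE_EXEC only has meaning in a
 * present one.  A present, writable user mapping would combine the
 * "S"oftware bits roughly like this (helper name is hypothetical):
 */
#if 0
static inline unsigned long long ex_user_rw_flags(void)
{
	return _PAGE_PRESENT | _PAGE_USER | _PAGE_RW |
	       _PAGE_ACCESSED | _PAGE_DIRTY;
}
#endif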

/* TODO: Add large page lowmem mapping support */
#define _PMD_PRESENT	0
#define _PMD_PRESENT_MASK	(PAGE_MASK)
#define _PMD_BAD	(~PAGE_MASK)

/* ERPN in a PTE never gets cleared, ignore it */
#define _PTE_NONE_MASK	0xffffffff00000000ULL
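
/*
 * Illustrative sketch, not part of the original header: the 32-bit
 * pgtable code can apply this mask when testing for an empty PTE, so
 * stale ERPN bits never make a cleared PTE look live -- roughly:
 */
#if 0
#define ex_pte_none(pte)	((pte_val(pte) & ~_PTE_NONE_MASK) == 0)
#endif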

#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PTE_44x_H */