Commit deceb6cd17

Final step in pushing down the common core's page_table_lock. follow_page no longer wants its caller to hold page_table_lock; it uses pte_offset_map_lock itself, so no page_table_lock is taken in get_user_pages itself.

But get_user_pages (and get_futex_key) then need follow_page to pin the page for them: take Daniel's suggestion of bitflags to follow_page. One is needed for WRITE, another for TOUCH (it was the accessed flag before: it vanished along with check_user_page_readable, but surely get_numa_maps is wrong to mark every page it finds as accessed), and another for GET.

And another, ANON, to dispose of untouched_anonymous_page: it seems silly for that to descend a second time; let follow_page observe whether there was no page table and return ZERO_PAGE if so. Fix a minor bug in that: check VM_LOCKED - make_pages_present ought to make readonly anonymous memory present.

Give get_numa_maps a cond_resched while we're there.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
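The bitflag scheme described in the message lends itself to a compact illustration. The standalone C sketch below mimics how a get_user_pages-style caller might build a single flags word and how a follow_page-style lookup could act on it; the FOLL_* names echo the WRITE/TOUCH/GET/ANON bits named above, but the numeric values, the toy `struct page`, and `follow_page_sketch` are illustrative assumptions, not the kernel's actual code.

```c
/* Hedged sketch only: flag names mirror the WRITE/TOUCH/GET/ANON bits in the
 * commit message; values and the toy follow_page_sketch() are assumptions. */
#include <stdio.h>

#define FOLL_WRITE 0x01 /* caller wants a writable mapping */
#define FOLL_TOUCH 0x02 /* mark the page accessed */
#define FOLL_GET   0x04 /* pin the page with a reference for the caller */
#define FOLL_ANON  0x08 /* no page table: hand back the zero page */

/* Toy stand-in for a page the lookup might return. */
struct page { int is_zero_page; int refcount; };

static struct page zero_page = { .is_zero_page = 1 };

/* Toy lookup: decides what to do purely from the flags word, the way the
 * real follow_page consults its bits after taking the pte lock itself.
 * (A real lookup would also verify writability when FOLL_WRITE is set.) */
static struct page *follow_page_sketch(struct page *pte_page,
                                       unsigned int flags)
{
	if (!pte_page)                       /* no page table entry found */
		return (flags & FOLL_ANON) ? &zero_page : NULL;
	if (flags & FOLL_GET)
		pte_page->refcount++;        /* pin the page for the caller */
	if (flags & FOLL_TOUCH)
		printf("marking page accessed\n");
	return pte_page;
}

int main(void)
{
	struct page p = { .refcount = 0 };
	unsigned int flags = FOLL_TOUCH | FOLL_GET; /* get_user_pages-style */

	flags |= FOLL_WRITE;                 /* caller asked for write access */

	struct page *got = follow_page_sketch(&p, flags);
	printf("pinned=%d refcount=%d\n", got == &p, p.refcount);

	/* With no pte and FOLL_ANON set, the zero page is handed back. */
	got = follow_page_sketch(NULL, flags | FOLL_ANON);
	printf("got zero page=%d\n", got && got->is_zero_page);
	return 0;
}
```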
Files in mm/:
bootmem.c
fadvise.c
filemap_xip.c
filemap.c
filemap.h
fremap.c
highmem.c
hugetlb.c
internal.h
Kconfig
madvise.c
Makefile
memory.c
mempolicy.c
mempool.c
mincore.c
mlock.c
mmap.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c
page_io.c
page-writeback.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem.c
slab.c
sparse.c
swap_state.c
swap.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
vmalloc.c
vmscan.c