2005-04-16 22:20:36 +00:00
|
|
|
/*
|
|
|
|
* mm/mprotect.c
|
|
|
|
*
|
|
|
|
* (C) Copyright 1994 Linus Torvalds
|
|
|
|
* (C) Copyright 2002 Christoph Hellwig
|
|
|
|
*
|
2009-01-05 14:06:29 +00:00
|
|
|
* Address space accounting code <alan@lxorguk.ukuu.org.uk>
|
2005-04-16 22:20:36 +00:00
|
|
|
* (C) Copyright 2002 Red Hat Inc, All Rights Reserved
|
|
|
|
*/
|
|
|
|
|
|
|
|
#include <linux/mm.h>
|
|
|
|
#include <linux/hugetlb.h>
|
|
|
|
#include <linux/shm.h>
|
|
|
|
#include <linux/mman.h>
|
|
|
|
#include <linux/fs.h>
|
|
|
|
#include <linux/highmem.h>
|
|
|
|
#include <linux/security.h>
|
|
|
|
#include <linux/mempolicy.h>
|
|
|
|
#include <linux/personality.h>
|
|
|
|
#include <linux/syscalls.h>
|
[PATCH] Swapless page migration: add R/W migration entries
Implement read/write migration ptes
We take the upper two swapfiles for the two types of migration ptes and define
a series of macros in swapops.h.
The VM is modified to handle the migration entries. Migration entries can
only be encountered when the page they are pointing to is locked. This limits
the number of places one has to fix. We also check in copy_pte_range and in
mprotect_pte_range() for migration ptes.
We check for migration ptes in do_swap_cache and call a function that will
then wait on the page lock. This allows us to effectively stop all accesses
to a page.
Migration entries are created by try_to_unmap if called for migration and
removed by local functions in migrate.c
From: Hugh Dickins <hugh@veritas.com>
Several times while testing swapless page migration (I've no NUMA, just
hacking it up to migrate recklessly while running load), I've hit the
BUG_ON(!PageLocked(p)) in migration_entry_to_page.
This comes from an orphaned migration entry, unrelated to the current
correctly locked migration, but hit by remove_anon_migration_ptes as it
checks an address in each vma of the anon_vma list.
Such an orphan may be left behind if an earlier migration raced with fork:
copy_one_pte can duplicate a migration entry from parent to child, after
remove_anon_migration_ptes has checked the child vma, but before it has
removed it from the parent vma. (If the process were later to fault on this
orphaned entry, it would hit the same BUG from migration_entry_wait.)
This could be fixed by locking anon_vma in copy_one_pte, but we'd rather
not. There's no such problem with file pages, because vma_prio_tree_add
adds child vma after parent vma, and the page table locking at each end is
enough to serialize. Follow that example with anon_vma: add new vmas to the
tail instead of the head.
(There's no corresponding problem when inserting migration entries,
because a missed pte will leave the page count and mapcount high, which is
allowed for. And there's no corresponding problem when migrating via swap,
because a leftover swap entry will be correctly faulted. But the swapless
method has no refcounting of its entries.)
From: Ingo Molnar <mingo@elte.hu>
pte_unmap_unlock() takes the pte pointer as an argument.
From: Hugh Dickins <hugh@veritas.com>
Several times while testing swapless page migration, gcc has tried to exec
a pointer instead of a string: smells like COW mappings are not being
properly write-protected on fork.
The protection in copy_one_pte looks very convincing, until at last you
realize that the second arg to make_migration_entry is a boolean "write",
and SWP_MIGRATION_READ is 30.
Anyway, it's better done like in change_pte_range, using
is_write_migration_entry and make_migration_entry_read.
From: Hugh Dickins <hugh@veritas.com>
Remove unnecessary obfuscation from sys_swapon's range check on swap type,
which blew up causing memory corruption once swapless migration made
MAX_SWAPFILES no longer 2 ^ MAX_SWAPFILES_SHIFT.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Christoph Lameter <clameter@engr.sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
From: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
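As a rough illustration of the encoding this commit describes (the top two
swap types reserved for read/write migration entries, with write entries
demoted to read in change_pte_range), here is a small stand-alone C model.
The helper names mirror swapops.h, but the bit layout and helper bodies are
assumptions made for the sketch, not the kernel's actual implementation:
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_SWAPFILES_SHIFT 5
#define SWP_MIGRATION_READ  ((1 << MAX_SWAPFILES_SHIFT) - 2)	/* 30, as in the commit text */
#define SWP_MIGRATION_WRITE ((1 << MAX_SWAPFILES_SHIFT) - 1)	/* 31 */
#define SWP_TYPE_SHIFT      58	/* made-up type/offset split for this model */

typedef struct { unsigned long long val; } swp_entry_t;

static swp_entry_t swp_entry(unsigned long long type, unsigned long long offset)
{
	swp_entry_t e = { (type << SWP_TYPE_SHIFT) | offset };
	return e;
}
static unsigned long long swp_type(swp_entry_t e)   { return e.val >> SWP_TYPE_SHIFT; }
static unsigned long long swp_offset(swp_entry_t e) { return e.val & ((1ULL << SWP_TYPE_SHIFT) - 1); }

static swp_entry_t make_migration_entry(unsigned long long pfn, bool write)
{
	return swp_entry(write ? SWP_MIGRATION_WRITE : SWP_MIGRATION_READ, pfn);
}
static bool is_write_migration_entry(swp_entry_t e)
{
	return swp_type(e) == SWP_MIGRATION_WRITE;
}
static void make_migration_entry_read(swp_entry_t *e)
{
	*e = swp_entry(SWP_MIGRATION_READ, swp_offset(*e));
}

int main(void)
{
	swp_entry_t e = make_migration_entry(0x1234, true);

	assert(is_write_migration_entry(e));
	make_migration_entry_read(&e);	/* what change_pte_range does under mprotect */
	assert(!is_write_migration_entry(e));
	assert(swp_offset(e) == 0x1234);
	printf("write migration entry demoted to read, target offset preserved\n");
	return 0;
}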
2006-06-23 09:03:35 +00:00
|
|
|
#include <linux/swap.h>
|
|
|
|
#include <linux/swapops.h>
|
mmu-notifiers: core
With KVM/GRU/XPMEM there isn't just the primary CPU MMU pointing to pages.
There are secondary MMUs (with secondary sptes and secondary tlbs) too.
sptes in the kvm case are shadow pagetables, but when I say spte in
mmu-notifier context, I mean "secondary pte". In GRU case there's no
actual secondary pte and there's only a secondary tlb because the GRU
secondary MMU has no knowledge about sptes and every secondary tlb miss
event in the MMU always generates a page fault that has to be resolved by
the CPU (this is not the case of KVM, where a secondary tlb miss will
walk sptes in hardware and it will refill the secondary tlb transparently
to software if the corresponding spte is present). The same way
zap_page_range has to invalidate the pte before freeing the page, the spte
(and secondary tlb) must also be invalidated before any page is freed and
reused.
Currently we take a page_count pin on every page mapped by sptes, but that
means the pages can't be swapped whenever they're mapped by any spte
because they're part of the guest working set. Furthermore a spte unmap
event can immediately lead to a page being freed when the pin is released
(so requiring the same complex and relatively slow tlb_gather smp safe
logic we have in zap_page_range and that can be avoided completely if the
spte unmap event doesn't require an unpin of the page previously mapped in
the secondary MMU).
The mmu notifiers allow kvm/GRU/XPMEM to attach to the tsk->mm and know
when the VM is swapping or freeing or doing anything on the primary MMU so
that the secondary MMU code can drop sptes before the pages are freed,
avoiding all page pinning and allowing 100% reliable swapping of guest
physical address space. Furthermore it avoids requiring the code that tears
down the secondary MMU mappings to implement tlb_gather-like logic, as in
zap_page_range, which would need many IPIs to flush other cpu tlbs for each
fixed number of sptes unmapped.
To give an example: if what happens on the primary MMU is a protection
downgrade (from writeable to wrprotect), the secondary MMU mappings will be
invalidated, and the next secondary-mmu page fault will call get_user_pages;
if it calls get_user_pages with write=1 that triggers a do_wp_page and
re-establishes an updated spte or secondary-tlb mapping on the copied page,
while with write=0 (a guest read) it sets up a readonly spte or readonly tlb
mapping. This is just an example.
This allows mapping any page pointed to by any pte (and in turn visible in
the primary CPU MMU) into a secondary MMU (be it a pure tlb like GRU, or a
full MMU with both sptes and a secondary tlb like the shadow-pagetable layer
in kvm), or into a software remote DMA like XPMEM (which therefore needs to
schedule in the XPMEM code to send the invalidate to the remote node, while
there is no need to schedule in kvm/gru as it's an immediate event, like
invalidating a primary-mmu pte).
At least for KVM without this patch it's impossible to swap guests
reliably. And having this feature and removing the page pin allows
several other optimizations that simplify life considerably.
Dependencies:
1) mm_take_all_locks() to register the mmu notifier when the whole VM
isn't doing anything with "mm". This allows mmu notifier users to keep
track if the VM is in the middle of the invalidate_range_begin/end
critical section with an atomic counter increased in range_begin and
decreased in range_end. No secondary MMU page fault is allowed to map
any spte or secondary tlb reference, while the VM is in the middle of
range_begin/end as any page returned by get_user_pages in that critical
section could later immediately be freed without any further
->invalidate_page notification (invalidate_range_begin/end works on
ranges and ->invalidate_page isn't called immediately before freeing
the page). To stop all page freeing and pagetable overwrites the
mmap_sem must be taken in write mode and all other anon_vma/i_mmap
locks must be taken too.
2) It'd be a waste to add branches in the VM if nobody could possibly
run KVM/GRU/XPMEM on the kernel, so mmu notifiers will only be enabled if
CONFIG_KVM=m/y. In the current kernel kvm won't yet take advantage of
mmu notifiers, but this already allows compiling a KVM external module
against a kernel with mmu notifiers enabled and from the next pull from
kvm.git we'll start using them. And GRU/XPMEM will also be able to
continue the development by enabling KVM=m in their config, until they
submit all GRU/XPMEM GPLv2 code to the mainline kernel. Then they can
also enable MMU_NOTIFIERS in the same way KVM does it (even if KVM=n).
This guarantees nobody selects MMU_NOTIFIER=y if KVM and GRU and XPMEM
are all =n.
The mmu_notifier_register call can fail because mm_take_all_locks may be
interrupted by a signal and return -EINTR. Because mmu_notifier_register
is used at driver startup, a failure can be gracefully handled. Here is
an example of the change applied to kvm to register the mmu notifiers.
Usually when a driver starts up other allocations are required anyway and
-ENOMEM failure paths exist already.
struct kvm *kvm_arch_create_vm(void)
{
struct kvm *kvm = kzalloc(sizeof(struct kvm), GFP_KERNEL);
+ int err;
if (!kvm)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
+ kvm->arch.mmu_notifier.ops = &kvm_mmu_notifier_ops;
+ err = mmu_notifier_register(&kvm->arch.mmu_notifier, current->mm);
+ if (err) {
+ kfree(kvm);
+ return ERR_PTR(err);
+ }
+
return kvm;
}
mmu_notifier_unregister returns void and it's reliable.
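For comparison, a hedged sketch of how a secondary-MMU driver might hook
these callbacks. The callback signatures follow the API as introduced by
this patch and may differ in later kernels; my_dev, my_mn_ops and the empty
callback bodies are hypothetical placeholders, not real driver code:
#include <linux/mmu_notifier.h>
#include <linux/sched.h>

/* hypothetical driver state wrapping the notifier */
struct my_dev {
	struct mmu_notifier mn;
};

static void my_invalidate_range_start(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start, unsigned long end)
{
	/* drop secondary ptes / tlb entries covering [start, end) here */
}

static void my_invalidate_range_end(struct mmu_notifier *mn,
				    struct mm_struct *mm,
				    unsigned long start, unsigned long end)
{
	/* allow secondary faults to map the range again */
}

static const struct mmu_notifier_ops my_mn_ops = {
	.invalidate_range_start	= my_invalidate_range_start,
	.invalidate_range_end	= my_invalidate_range_end,
};

static int my_dev_attach(struct my_dev *dev)
{
	dev->mn.ops = &my_mn_ops;
	/* may fail with -EINTR if mm_take_all_locks is interrupted */
	return mmu_notifier_register(&dev->mn, current->mm);
}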
The patch also adds a few needed but missing includes without which the
kernel would not compile after these changes on non-x86 archs (x86 didn't
need them by luck).
[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix mm/filemap_xip.c build]
[akpm@linux-foundation.org: fix mm/mmu_notifier.c build]
Signed-off-by: Andrea Arcangeli <andrea@qumranet.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Jack Steiner <steiner@sgi.com>
Cc: Robin Holt <holt@sgi.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Kanoj Sarcar <kanojsarcar@yahoo.com>
Cc: Roland Dreier <rdreier@cisco.com>
Cc: Steve Wise <swise@opengridcomputing.com>
Cc: Avi Kivity <avi@qumranet.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Anthony Liguori <aliguori@us.ibm.com>
Cc: Chris Wright <chrisw@redhat.com>
Cc: Marcelo Tosatti <marcelo@kvack.org>
Cc: Eric Dumazet <dada1@cosmosbay.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Cc: Izik Eidus <izike@qumranet.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-07-28 22:46:29 +00:00
|
|
|
#include <linux/mmu_notifier.h>
|
2009-01-06 22:39:16 +00:00
|
|
|
#include <linux/migrate.h>
|
perf: Do the big rename: Performance Counters -> Performance Events
Bye-bye Performance Counters, welcome Performance Events!
In the past few months the perfcounters subsystem has grown out of its
initial role of counting hardware events, and has become (and is
becoming) a much broader generic event enumeration, reporting, logging,
monitoring, analysis facility.
Naming its core object 'perf_counter' and naming the subsystem
'perfcounters' has become more and more of a misnomer. With pending
code like hw-breakpoints support the 'counter' name is less and
less appropriate.
All in all, we've decided to rename the subsystem to 'performance
events' and to propagate this rename through all fields, variables
and API names. (in an ABI compatible fashion)
The word 'event' is also a bit shorter than 'counter' - which makes
it slightly more convenient to write/handle as well.
Thanks goes to Stephane Eranian who first observed this misnomer and
suggested a rename.
User-space tooling and ABI compatibility is not affected - this patch
should be function-invariant. (Also, defconfigs were not touched to
keep the size down.)
This patch has been generated via the following script:
FILES=$(find * -type f | grep -vE 'oprofile|[^K]config')
sed -i \
-e 's/PERF_EVENT_/PERF_RECORD_/g' \
-e 's/PERF_COUNTER/PERF_EVENT/g' \
-e 's/perf_counter/perf_event/g' \
-e 's/nb_counters/nb_events/g' \
-e 's/swcounter/swevent/g' \
-e 's/tpcounter_event/tp_event/g' \
$FILES
for N in $(find . -name perf_counter.[ch]); do
M=$(echo $N | sed 's/perf_counter/perf_event/g')
mv $N $M
done
FILES=$(find . -name perf_event.*)
sed -i \
-e 's/COUNTER_MASK/REG_MASK/g' \
-e 's/COUNTER/EVENT/g' \
-e 's/\<event\>/event_id/g' \
-e 's/counter/event/g' \
-e 's/Counter/Event/g' \
$FILES
... to keep it as correct as possible. This script can also be
used by anyone who has pending perfcounters patches - it converts
a Linux kernel tree over to the new naming. We tried to time this
change to the point in time where the amount of pending patches
is the smallest: the end of the merge window.
Namespace clashes were fixed up in a preparatory patch - and some
stylistic fallout will be fixed up in a subsequent patch.
( NOTE: 'counters' are still the proper terminology when we deal
with hardware registers - and these sed scripts are a bit
over-eager in renaming them. I've undone some of that, but
in case there's something left where 'counter' would be
better than 'event' we can undo that on an individual basis
instead of touching an otherwise nicely automated patch. )
Suggested-by: Stephane Eranian <eranian@google.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: <linux-arch@vger.kernel.org>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-21 10:02:48 +00:00
|
|
|
#include <linux/perf_event.h>
|
2014-01-21 23:51:02 +00:00
|
|
|
#include <linux/ksm.h>
|
2005-04-16 22:20:36 +00:00
|
|
|
#include <asm/uaccess.h>
|
|
|
|
#include <asm/pgtable.h>
|
|
|
|
#include <asm/cacheflush.h>
|
|
|
|
#include <asm/tlbflush.h>
|
|
|
|
|
2008-05-14 23:05:51 +00:00
|
|
|
#ifndef pgprot_modify
|
|
|
|
static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
|
|
|
|
{
|
|
|
|
return newprot;
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2012-10-25 12:16:32 +00:00
|
|
|
static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
|
2006-09-26 06:30:59 +00:00
|
|
|
unsigned long addr, unsigned long end, pgprot_t newprot,
|
2013-10-07 10:29:25 +00:00
|
|
|
int dirty_accountable, int prot_numa)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2012-10-25 12:16:32 +00:00
|
|
|
struct mm_struct *mm = vma->vm_mm;
|
[PATCH] Swapless page migration: add R/W migration entries
2006-06-23 09:03:35 +00:00
|
|
|
pte_t *pte, oldpte;
|
2005-10-30 01:16:27 +00:00
|
|
|
spinlock_t *ptl;
|
2012-11-19 02:14:23 +00:00
|
|
|
unsigned long pages = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-10-30 01:16:27 +00:00
|
|
|
pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
|
2006-10-01 06:29:33 +00:00
|
|
|
arch_enter_lazy_mmu_mode();
|
2005-04-16 22:20:36 +00:00
|
|
|
do {
|
[PATCH] Swapless page migration: add R/W migration entries
2006-06-23 09:03:35 +00:00
|
|
|
oldpte = *pte;
|
|
|
|
if (pte_present(oldpte)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
pte_t ptent;
|
2012-10-25 12:16:32 +00:00
|
|
|
bool updated = false;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2012-10-25 12:16:32 +00:00
|
|
|
if (!prot_numa) {
|
2013-12-19 01:08:37 +00:00
|
|
|
ptent = ptep_modify_prot_start(mm, addr, pte);
|
2013-12-19 01:08:41 +00:00
|
|
|
if (pte_numa(ptent))
|
|
|
|
ptent = pte_mknonnuma(ptent);
|
2012-10-25 12:16:32 +00:00
|
|
|
ptent = pte_modify(ptent, newprot);
|
2014-02-12 03:43:37 +00:00
|
|
|
/*
|
|
|
|
* Avoid taking write faults for pages we
|
|
|
|
* know to be dirty.
|
|
|
|
*/
|
|
|
|
if (dirty_accountable && pte_dirty(ptent))
|
|
|
|
ptent = pte_mkwrite(ptent);
|
|
|
|
ptep_modify_prot_commit(mm, addr, pte, ptent);
|
2012-10-25 12:16:32 +00:00
|
|
|
updated = true;
|
|
|
|
} else {
|
|
|
|
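/*
 * prot_numa: mark normal, non-KSM pages that are not already
 * pte_numa so a later NUMA hinting fault can record the access.
 */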
struct page *page;
|
|
|
|
|
|
|
|
page = vm_normal_page(vma, addr, oldpte);
|
2014-01-21 23:51:02 +00:00
|
|
|
if (page && !PageKsm(page)) {
|
2013-10-07 10:29:05 +00:00
|
|
|
if (!pte_numa(oldpte)) {
|
2014-02-12 03:43:38 +00:00
|
|
|
ptep_set_numa(mm, addr, pte);
|
2012-10-25 12:16:32 +00:00
|
|
|
updated = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (updated)
|
|
|
|
pages++;
|
2012-03-21 23:33:59 +00:00
|
|
|
} else if (IS_ENABLED(CONFIG_MIGRATION) && !pte_file(oldpte)) {
|
[PATCH] Swapless page migration: add R/W migration entries
2006-06-23 09:03:35 +00:00
|
|
|
swp_entry_t entry = pte_to_swp_entry(oldpte);
|
|
|
|
|
|
|
|
if (is_write_migration_entry(entry)) {
|
2013-10-16 20:46:51 +00:00
|
|
|
pte_t newpte;
|
[PATCH] Swapless page migration: add R/W migration entries
2006-06-23 09:03:35 +00:00
|
|
|
/*
|
|
|
|
* A protection check is difficult so
|
|
|
|
* just be safe and disable write
|
|
|
|
*/
|
|
|
|
make_migration_entry_read(&entry);
|
2013-10-16 20:46:51 +00:00
|
|
|
newpte = swp_entry_to_pte(entry);
|
|
|
|
if (pte_swp_soft_dirty(oldpte))
|
|
|
|
newpte = pte_swp_mksoft_dirty(newpte);
|
|
|
|
set_pte_at(mm, addr, pte, newpte);
|
2013-10-07 10:28:48 +00:00
|
|
|
|
|
|
|
pages++;
|
[PATCH] Swapless page migration: add R/W migration entries
2006-06-23 09:03:35 +00:00
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
} while (pte++, addr += PAGE_SIZE, addr != end);
|
2006-10-01 06:29:33 +00:00
|
|
|
arch_leave_lazy_mmu_mode();
|
2005-10-30 01:16:27 +00:00
|
|
|
pte_unmap_unlock(pte - 1, ptl);
|
2012-11-19 02:14:23 +00:00
|
|
|
|
|
|
|
return pages;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2012-12-18 22:23:17 +00:00
|
|
|
static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
|
|
|
|
pud_t *pud, unsigned long addr, unsigned long end,
|
|
|
|
pgprot_t newprot, int dirty_accountable, int prot_numa)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
pmd_t *pmd;
|
|
|
|
unsigned long next;
|
2012-11-19 02:14:23 +00:00
|
|
|
unsigned long pages = 0;
|
2013-11-12 23:08:32 +00:00
|
|
|
unsigned long nr_huge_updates = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
pmd = pmd_offset(pud, addr);
|
|
|
|
do {
|
2013-10-07 10:29:14 +00:00
|
|
|
unsigned long this_pages;
|
|
|
|
|
2005-04-16 22:20:36 +00:00
|
|
|
next = pmd_addr_end(addr, end);
|
2011-01-13 23:47:04 +00:00
|
|
|
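/*
 * Huge pmd: split it when the range does not cover the whole
 * huge page, otherwise let change_huge_pmd() update it in place
 * (falling through to the pte-level walk only if it could not).
 */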
if (pmd_trans_huge(*pmd)) {
|
|
|
|
if (next - addr != HPAGE_PMD_SIZE)
|
2012-12-12 21:50:59 +00:00
|
|
|
split_huge_page_pmd(vma, addr, pmd);
|
2013-10-07 10:28:49 +00:00
|
|
|
else {
|
|
|
|
int nr_ptes = change_huge_pmd(vma, pmd, addr,
|
|
|
|
newprot, prot_numa);
|
|
|
|
|
|
|
|
if (nr_ptes) {
|
2013-11-12 23:08:32 +00:00
|
|
|
if (nr_ptes == HPAGE_PMD_NR) {
|
|
|
|
pages += HPAGE_PMD_NR;
|
|
|
|
nr_huge_updates++;
|
|
|
|
}
|
2013-10-07 10:28:49 +00:00
|
|
|
continue;
|
|
|
|
}
|
2012-11-19 02:14:23 +00:00
|
|
|
}
|
2011-01-13 23:47:04 +00:00
|
|
|
/* fall through */
|
|
|
|
}
|
2005-04-16 22:20:36 +00:00
|
|
|
if (pmd_none_or_clear_bad(pmd))
|
|
|
|
continue;
|
2013-10-07 10:29:14 +00:00
|
|
|
this_pages = change_pte_range(vma, pmd, addr, next, newprot,
|
2013-10-07 10:29:25 +00:00
|
|
|
dirty_accountable, prot_numa);
|
2013-10-07 10:29:14 +00:00
|
|
|
pages += this_pages;
|
2005-04-16 22:20:36 +00:00
|
|
|
} while (pmd++, addr = next, addr != end);
|
2012-11-19 02:14:23 +00:00
|
|
|
|
2013-11-12 23:08:32 +00:00
|
|
|
if (nr_huge_updates)
|
|
|
|
count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
|
2012-11-19 02:14:23 +00:00
|
|
|
return pages;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2012-12-18 22:23:17 +00:00
|
|
|
static inline unsigned long change_pud_range(struct vm_area_struct *vma,
|
|
|
|
pgd_t *pgd, unsigned long addr, unsigned long end,
|
|
|
|
pgprot_t newprot, int dirty_accountable, int prot_numa)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
pud_t *pud;
|
|
|
|
unsigned long next;
|
2012-11-19 02:14:23 +00:00
|
|
|
unsigned long pages = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
pud = pud_offset(pgd, addr);
|
|
|
|
do {
|
|
|
|
next = pud_addr_end(addr, end);
|
|
|
|
if (pud_none_or_clear_bad(pud))
|
|
|
|
continue;
|
2012-11-19 02:14:23 +00:00
|
|
|
pages += change_pmd_range(vma, pud, addr, next, newprot,
|
2012-10-25 12:16:32 +00:00
|
|
|
dirty_accountable, prot_numa);
|
2005-04-16 22:20:36 +00:00
|
|
|
} while (pud++, addr = next, addr != end);
|
2012-11-19 02:14:23 +00:00
|
|
|
|
|
|
|
return pages;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2012-11-19 02:14:23 +00:00
|
|
|
static unsigned long change_protection_range(struct vm_area_struct *vma,
|
2006-09-26 06:30:59 +00:00
|
|
|
unsigned long addr, unsigned long end, pgprot_t newprot,
|
2012-10-25 12:16:32 +00:00
|
|
|
int dirty_accountable, int prot_numa)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
struct mm_struct *mm = vma->vm_mm;
|
|
|
|
pgd_t *pgd;
|
|
|
|
unsigned long next;
|
|
|
|
unsigned long start = addr;
|
2012-11-19 02:14:23 +00:00
|
|
|
unsigned long pages = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
BUG_ON(addr >= end);
|
|
|
|
pgd = pgd_offset(mm, addr);
|
|
|
|
flush_cache_range(vma, addr, end);
|
mm: fix TLB flush race between migration, and change_protection_range
There are a few subtle races, between change_protection_range (used by
mprotect and change_prot_numa) on one side, and NUMA page migration and
compaction on the other side.
The basic race is that there is a time window between when the PTE gets
made non-present (PROT_NONE or NUMA), and the TLB is flushed.
During that time, a CPU may continue writing to the page.
This is fine most of the time, however compaction or the NUMA migration
code may come in, and migrate the page away.
When that happens, the CPU may continue writing, through the cached
translation, to what is no longer the current memory location of the
process.
This only affects x86, which has a somewhat optimistic pte_accessible.
All other architectures appear to be safe, and will either always flush,
or flush whenever there is a valid mapping, even with no permissions
(SPARC).
The basic race looks like this:
CPU A                     CPU B                     CPU C
                                                    load TLB entry
make entry PTE/PMD_NUMA
                          fault on entry
                                                    read/write old page
                          start migrating page
                          change PTE/PMD to new page
                                                    read/write old page [*]
flush TLB
                                                    reload TLB from new entry
                                                    read/write new page
                          lose data
[*] the old page may belong to a new user at this point!
The obvious fix is to flush remote TLB entries, by making sure that
pte_accessible is aware of the fact that PROT_NONE and PROT_NUMA memory may
still be accessible if there is a TLB flush pending for the mm.
This should fix both NUMA migration and compaction.
[mgorman@suse.de: fix build]
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Alex Thorlton <athorlton@sgi.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
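A hedged sketch of the idea on the x86 side (illustrative only, not the
actual implementation; pte_flags, _PAGE_PRESENT, _PAGE_PROTNONE and
mm_tlb_flush_pending are the era's names, but the body is an assumption):
static inline bool pte_accessible_sketch(struct mm_struct *mm, pte_t a)
{
	if (pte_flags(a) & _PAGE_PRESENT)
		return true;

	/* a PROT_NONE/NUMA pte may still be live in a remote TLB until the
	 * flush announced by set_tlb_flush_pending() has actually happened */
	if ((pte_flags(a) & _PAGE_PROTNONE) && mm_tlb_flush_pending(mm))
		return true;

	return false;
}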
2013-12-19 01:08:44 +00:00
|
|
|
set_tlb_flush_pending(mm);
|
2005-04-16 22:20:36 +00:00
|
|
|
do {
|
|
|
|
next = pgd_addr_end(addr, end);
|
|
|
|
if (pgd_none_or_clear_bad(pgd))
|
|
|
|
continue;
|
2012-11-19 02:14:23 +00:00
|
|
|
pages += change_pud_range(vma, pgd, addr, next, newprot,
|
2012-10-25 12:16:32 +00:00
|
|
|
dirty_accountable, prot_numa);
|
2005-04-16 22:20:36 +00:00
|
|
|
} while (pgd++, addr = next, addr != end);
|
2012-11-19 02:14:23 +00:00
|
|
|
|
2012-11-19 02:14:24 +00:00
|
|
|
/* Only flush the TLB if we actually modified any entries: */
|
|
|
|
if (pages)
|
|
|
|
flush_tlb_range(vma, start, end);
|
mm: fix TLB flush race between migration, and change_protection_range
2013-12-19 01:08:44 +00:00
|
|
|
clear_tlb_flush_pending(mm);
|
2012-11-19 02:14:23 +00:00
|
|
|
|
|
|
|
return pages;
|
|
|
|
}
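The three levels above (change_pud_range, change_pmd_range, change_pte_range)
all follow the same walk idiom: compute where the current entry's coverage
ends with *_addr_end(), handle or descend, then advance. A small stand-alone
C illustration of just that idiom, with toy sizes and names made up for the
sketch:
#include <stdio.h>

#define TOY_PMD_SHIFT	21			/* one toy "pmd" covers 2 MiB */
#define TOY_PMD_SIZE	(1UL << TOY_PMD_SHIFT)
#define TOY_PMD_MASK	(~(TOY_PMD_SIZE - 1))

/* same logic as the kernel's pmd_addr_end(): stop at the next pmd
 * boundary or at end, whichever comes first */
static unsigned long toy_pmd_addr_end(unsigned long addr, unsigned long end)
{
	unsigned long boundary = (addr + TOY_PMD_SIZE) & TOY_PMD_MASK;
	return boundary - 1 < end - 1 ? boundary : end;
}

int main(void)
{
	unsigned long addr = 0x1ff000, end = 0x601000, next;

	do {
		next = toy_pmd_addr_end(addr, end);
		printf("pmd entry covers [%#lx, %#lx)\n", addr, next);
		/* a real walker would descend to the pte level here */
	} while (addr = next, addr != end);
	return 0;
}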
|
|
|
|
|
|
|
|
unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
|
|
|
|
unsigned long end, pgprot_t newprot,
|
2012-10-25 12:16:32 +00:00
|
|
|
int dirty_accountable, int prot_numa)
|
2012-11-19 02:14:23 +00:00
|
|
|
{
|
|
|
|
struct mm_struct *mm = vma->vm_mm;
|
|
|
|
unsigned long pages;
|
|
|
|
|
|
|
|
mmu_notifier_invalidate_range_start(mm, start, end);
|
|
|
|
if (is_vm_hugetlb_page(vma))
|
|
|
|
pages = hugetlb_change_protection(vma, start, end, newprot);
|
|
|
|
else
|
2012-10-25 12:16:32 +00:00
|
|
|
pages = change_protection_range(vma, start, end, newprot, dirty_accountable, prot_numa);
|
2012-11-19 02:14:23 +00:00
|
|
|
mmu_notifier_invalidate_range_end(mm, start, end);
|
|
|
|
|
|
|
|
return pages;
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
|
2007-07-19 08:48:16 +00:00
|
|
|
int
|
2005-04-16 22:20:36 +00:00
|
|
|
mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
|
|
|
|
unsigned long start, unsigned long end, unsigned long newflags)
|
|
|
|
{
|
|
|
|
struct mm_struct *mm = vma->vm_mm;
|
|
|
|
unsigned long oldflags = vma->vm_flags;
|
|
|
|
long nrpages = (end - start) >> PAGE_SHIFT;
|
|
|
|
unsigned long charged = 0;
|
|
|
|
pgoff_t pgoff;
|
|
|
|
int error;
|
2006-09-26 06:30:59 +00:00
|
|
|
int dirty_accountable = 0;
|
2005-04-16 22:20:36 +00:00
|
|
|
|
|
|
|
if (newflags == oldflags) {
|
|
|
|
*pprev = vma;
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* If we make a private mapping writable we increase our commit;
|
|
|
|
* but (without finer accounting) cannot reduce our commit if we
|
2009-02-10 14:02:27 +00:00
|
|
|
* make it unwritable again. hugetlb mappings were accounted for
|
|
|
|
* even if read-only so there is no need to account for them here
|
2005-04-16 22:20:36 +00:00
|
|
|
*/
|
|
|
|
if (newflags & VM_WRITE) {
|
2009-02-10 14:02:27 +00:00
|
|
|
if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB|
|
2008-07-24 04:27:28 +00:00
|
|
|
VM_SHARED|VM_NORESERVE))) {
|
2005-04-16 22:20:36 +00:00
|
|
|
charged = nrpages;
|
2012-02-13 03:58:52 +00:00
|
|
|
if (security_vm_enough_memory_mm(mm, charged))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -ENOMEM;
|
|
|
|
newflags |= VM_ACCOUNT;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* First try to merge with previous and/or next vma.
|
|
|
|
*/
|
|
|
|
pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
|
|
|
|
*pprev = vma_merge(mm, *pprev, start, end, newflags,
|
|
|
|
vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma));
|
|
|
|
if (*pprev) {
|
|
|
|
vma = *pprev;
|
|
|
|
goto success;
|
|
|
|
}
|
|
|
|
|
|
|
|
*pprev = vma;
|
|
|
|
|
|
|
|
if (start != vma->vm_start) {
|
|
|
|
error = split_vma(mm, vma, start, 1);
|
|
|
|
if (error)
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (end != vma->vm_end) {
|
|
|
|
error = split_vma(mm, vma, end, 0);
|
|
|
|
if (error)
|
|
|
|
goto fail;
|
|
|
|
}
|
|
|
|
|
|
|
|
success:
|
|
|
|
/*
|
|
|
|
* vm_flags and vm_page_prot are protected by the mmap_sem
|
|
|
|
* held in write mode.
|
|
|
|
*/
|
|
|
|
vma->vm_flags = newflags;
|
2008-05-14 23:05:51 +00:00
|
|
|
vma->vm_page_prot = pgprot_modify(vma->vm_page_prot,
|
|
|
|
vm_get_page_prot(newflags));
|
|
|
|
|
2006-09-26 06:30:59 +00:00
|
|
|
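/*
 * Mappings that want write-notification (e.g. for dirty
 * accounting) keep their ptes write-protected so the first
 * write after mprotect() still faults.
 */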
if (vma_wants_writenotify(vma)) {
|
2007-10-23 03:45:12 +00:00
|
|
|
vma->vm_page_prot = vm_get_page_prot(newflags & ~VM_SHARED);
|
2006-09-26 06:30:59 +00:00
|
|
|
dirty_accountable = 1;
|
|
|
|
}
|
2006-09-26 06:30:57 +00:00
|
|
|
|
2012-12-18 22:23:17 +00:00
|
|
|
change_protection(vma, start, end, vma->vm_page_prot,
|
|
|
|
dirty_accountable, 0);
|
2012-11-19 02:14:23 +00:00
|
|
|
|
2005-10-30 01:15:56 +00:00
|
|
|
vm_stat_account(mm, oldflags, vma->vm_file, -nrpages);
|
|
|
|
vm_stat_account(mm, newflags, vma->vm_file, nrpages);
|
2010-11-08 19:29:07 +00:00
|
|
|
perf_event_mmap(vma);
|
2005-04-16 22:20:36 +00:00
|
|
|
return 0;
|
|
|
|
|
|
|
|
fail:
|
|
|
|
vm_unacct_memory(charged);
|
|
|
|
return error;
|
|
|
|
}
|
|
|
|
|
2009-01-14 13:14:15 +00:00
|
|
|
SYSCALL_DEFINE3(mprotect, unsigned long, start, size_t, len,
|
|
|
|
unsigned long, prot)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
unsigned long vm_flags, nstart, end, tmp, reqprot;
|
|
|
|
struct vm_area_struct *vma, *prev;
|
|
|
|
int error = -EINVAL;
|
|
|
|
const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
|
|
|
|
prot &= ~(PROT_GROWSDOWN|PROT_GROWSUP);
|
|
|
|
if (grows == (PROT_GROWSDOWN|PROT_GROWSUP)) /* can't be both */
|
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
if (start & ~PAGE_MASK)
|
|
|
|
return -EINVAL;
|
|
|
|
if (!len)
|
|
|
|
return 0;
|
|
|
|
len = PAGE_ALIGN(len);
|
|
|
|
end = start + len;
|
|
|
|
if (end <= start)
|
|
|
|
return -ENOMEM;
|
2008-07-07 14:28:51 +00:00
|
|
|
if (!arch_validate_prot(prot))
|
2005-04-16 22:20:36 +00:00
|
|
|
return -EINVAL;
|
|
|
|
|
|
|
|
reqprot = prot;
|
|
|
|
/*
|
|
|
|
* Does the application expect PROT_READ to imply PROT_EXEC:
|
|
|
|
*/
|
2006-06-23 09:03:23 +00:00
|
|
|
if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
|
2005-04-16 22:20:36 +00:00
|
|
|
prot |= PROT_EXEC;
|
|
|
|
|
|
|
|
vm_flags = calc_vm_prot_bits(prot);
|
|
|
|
|
|
|
|
down_write(¤t->mm->mmap_sem);
|
|
|
|
|
2012-03-07 02:23:36 +00:00
|
|
|
vma = find_vma(current->mm, start);
|
2005-04-16 22:20:36 +00:00
|
|
|
error = -ENOMEM;
|
|
|
|
if (!vma)
|
|
|
|
goto out;
|
2012-03-07 02:23:36 +00:00
|
|
|
prev = vma->vm_prev;
|
2005-04-16 22:20:36 +00:00
|
|
|
if (unlikely(grows & PROT_GROWSDOWN)) {
|
|
|
|
if (vma->vm_start >= end)
|
|
|
|
goto out;
|
|
|
|
start = vma->vm_start;
|
|
|
|
error = -EINVAL;
|
|
|
|
if (!(vma->vm_flags & VM_GROWSDOWN))
|
|
|
|
goto out;
|
2012-12-18 22:23:17 +00:00
|
|
|
} else {
|
2005-04-16 22:20:36 +00:00
|
|
|
if (vma->vm_start > start)
|
|
|
|
goto out;
|
|
|
|
if (unlikely(grows & PROT_GROWSUP)) {
|
|
|
|
end = vma->vm_end;
|
|
|
|
error = -EINVAL;
|
|
|
|
if (!(vma->vm_flags & VM_GROWSUP))
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (start > vma->vm_start)
|
|
|
|
prev = vma;
|
|
|
|
|
|
|
|
for (nstart = start ; ; ) {
|
|
|
|
unsigned long newflags;
|
|
|
|
|
2012-12-18 22:23:17 +00:00
|
|
|
/* Here we know that vma->vm_start <= nstart < vma->vm_end. */
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2012-12-18 22:23:17 +00:00
|
|
|
newflags = vm_flags;
|
|
|
|
newflags |= (vma->vm_flags & ~(VM_READ | VM_WRITE | VM_EXEC));
|
2005-04-16 22:20:36 +00:00
|
|
|
|
2005-09-21 16:55:39 +00:00
|
|
|
/* newflags >> 4 shifts VM_MAY* into the place of VM_* */
|
|
|
|
if ((newflags & ~(newflags >> 4)) & (VM_READ | VM_WRITE | VM_EXEC)) {
|
2005-04-16 22:20:36 +00:00
|
|
|
error = -EACCES;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
error = security_file_mprotect(vma, reqprot, prot);
|
|
|
|
if (error)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
tmp = vma->vm_end;
|
|
|
|
if (tmp > end)
|
|
|
|
tmp = end;
|
|
|
|
error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
|
|
|
|
if (error)
|
|
|
|
goto out;
|
|
|
|
nstart = tmp;
|
|
|
|
|
|
|
|
if (nstart < prev->vm_end)
|
|
|
|
nstart = prev->vm_end;
|
|
|
|
if (nstart >= end)
|
|
|
|
goto out;
|
|
|
|
|
|
|
|
vma = prev->vm_next;
|
|
|
|
if (!vma || vma->vm_start != nstart) {
|
|
|
|
error = -ENOMEM;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
out:
|
|
|
|
up_write(¤t->mm->mmap_sem);
|
|
|
|
return error;
|
|
|
|
}
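Finally, a minimal user-space exercise of the syscall implemented above
(plain POSIX C, independent of the kernel internals; shown only to tie the
VM_MAY*/-EACCES check and the write-protection path to observable behaviour):
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* private anonymous mapping: the vma gets VM_READ|VM_WRITE and all
	 * of VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC */
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	strcpy(p, "hello");

	/* downgrade to read-only: change_protection() write-protects the ptes */
	if (mprotect(p, page, PROT_READ) != 0) {
		perror("mprotect(PROT_READ)");
		return 1;
	}
	printf("still readable after downgrade: %s\n", p);

	/* upgrading again is allowed because VM_MAYWRITE is still set */
	if (mprotect(p, page, PROT_READ | PROT_WRITE) != 0) {
		perror("mprotect(PROT_READ|PROT_WRITE)");
		return 1;
	}
	p[0] = 'H';
	printf("writable again: %s\n", p);

	munmap(p, page);
	return 0;
}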
|