mirror of https://github.com/joel16/android_kernel_sony_msm8994.git
[PATCH] fix free swap cache latency
Lee Revell reported 28ms latency when a process with lots of swapped memory exits.

2.6.15 introduced a latency regression when unmapping: in accounting the zap_work latency breaker, pte_none counted 1, pte_present PAGE_SIZE, but a swap entry counted nothing at all. We think of pages present as the slow case, but Lee's trace shows that free_swap_and_cache's radix tree lookup can amount to a lot of work - and we could have been doing it many thousands of times without a latency break.

Move the zap_work update up to account swap entries like pages present. This does account non-linear pte_file entries, and unmap_mapping_range skipping over swap entries, by the same amount even though they're quick: but neither of those cases deserves complicating the code (and they're treated no worse than they were in 2.6.14).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 7670f023aa
commit 6f5e6b9e69
mm/memory.c
@@ -623,11 +623,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			(*zap_work)--;
 			continue;
 		}
+
+		(*zap_work) -= PAGE_SIZE;
+
 		if (pte_present(ptent)) {
 			struct page *page;
 
-			(*zap_work) -= PAGE_SIZE;
-
 			page = vm_normal_page(vma, addr, ptent);
 			if (unlikely(details) && page) {
 				/*