linux/mm
Daisuke Nishimura a10cebf56c memcg: check under limit at shrink_usage
The current memory cgroup (both in mainline and -mm) doesn't account swap
caches as memory (swap cache support has been dropped temporarily).

So the progress reported by try_to_free_mem_cgroup_pages doesn't reflect
pages that have been moved to the swap cache: those pages are uncharged
from the cgroup, but they don't count as reclaimed.

But this makes mem_cgroup_shrink_usage fail easily if most of the pages
are anon/shmem; shmem_getpage then returns -ENOMEM and the process gets
killed.

This patch adds a res_counter_check_under_limit check to the retry loop to
avoid these cases, as sketched below.
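
Below is a minimal userspace model of the fix, a sketch only: the
res_counter struct, the reclaim stub, and the plain -1 return value here
are simplified stand-ins for the kernel's res_counter,
try_to_free_mem_cgroup_pages, and -ENOMEM. It shows why the extra check
matters: reclaim reports zero progress, yet the pages were uncharged on
their way to the swap cache, so the under-limit test lets the loop succeed
instead of falling through to the OOM path.

    /*
     * Hypothetical userspace model of the changed retry loop, not the
     * kernel source.  Reclaim reports no progress even though the pages
     * it touched were uncharged, so the loop also checks usage < limit.
     */
    #include <stdio.h>

    #define MEM_CGROUP_RECLAIM_RETRIES 5

    struct res_counter {            /* simplified model of the kernel struct */
            unsigned long usage;
            unsigned long limit;
    };

    /* model of res_counter_check_under_limit(): 1 if usage is under limit */
    static int res_counter_check_under_limit(struct res_counter *cnt)
    {
            return cnt->usage < cnt->limit;
    }

    /*
     * Stand-in for try_to_free_mem_cgroup_pages(): it "moves" pages to
     * the swap cache, which uncharges them (usage drops), but, as in the
     * case being fixed, it reports zero reclaimed pages.
     */
    static int try_to_free_mem_cgroup_pages(struct res_counter *cnt)
    {
            if (cnt->usage >= 4)
                    cnt->usage -= 4;        /* uncharged, yet not "freed" */
            return 0;                       /* no progress reported */
    }

    /* the fixed loop: succeed on reclaim progress OR on being under limit */
    static int mem_cgroup_shrink_usage(struct res_counter *cnt)
    {
            int retry = MEM_CGROUP_RECLAIM_RETRIES;
            int progress;

            do {
                    progress = try_to_free_mem_cgroup_pages(cnt);
                    progress += res_counter_check_under_limit(cnt);
            } while (!progress && --retry);

            return retry ? 0 : -1;          /* -1 stands in for -ENOMEM */
    }

    int main(void)
    {
            struct res_counter cnt = { .usage = 10, .limit = 8 };

            if (mem_cgroup_shrink_usage(&cnt) == 0)
                    printf("usage %lu is under limit %lu, no OOM kill\n",
                           cnt.usage, cnt.limit);
            else
                    printf("shrink failed: shmem_getpage would see -ENOMEM\n");
            return 0;
    }

Dropping the res_counter_check_under_limit line reproduces the original
failure mode: every retry reports zero progress and an error is returned,
even though usage has long since fallen under the limit.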

BTW, even if swap cache support is enabled again, shrink_usage may still
fail: if a process is moved to another, freshly created cgroup between the
precharge and the shrink_usage call in shmem_getpage, there are simply no
pages to reclaim in the new cgroup.

So this change would make sense anyway.

Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-09-23 08:09:14 -07:00
allocpercpu.c
backing-dev.c
bootmem.c
bounce.c
dmapool.c
fadvise.c
filemap_xip.c
filemap.c
fremap.c
highmem.c
hugetlb.c
internal.h
Kconfig
maccess.c
madvise.c
Makefile
memcontrol.c (last commit: memcg: check under limit at shrink_usage, 2008-09-23 08:09:14 -07:00)
memory_hotplug.c
memory.c
mempolicy.c
mempool.c
migrate.c
mincore.c
mlock.c
mm_init.c
mmap.c
mmu_notifier.c
mmzone.c
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c
page_io.c
page_isolation.c
page-writeback.c
pagewalk.c
pdflush.c
prio_tree.c
quicklist.c
readahead.c
rmap.c
shmem_acl.c
shmem.c
slab.c
slob.c
slub.c
sparse-vmemmap.c
sparse.c
swap_state.c
swap.c
swapfile.c
thrash.c
tiny-shmem.c (last commit: mm: tiny-shmem fix lock ordering: mmap_sem vs i_mutex, 2008-09-23 08:09:14 -07:00)
truncate.c
util.c
vmalloc.c
vmscan.c
vmstat.c