From: Christoph Lameter

If the kernel was compiled without support for swapping then we have no
means of evicting anonymous pages and they become like mlocked pages.
Do not add new anonymous pages to the LRU and if we find one on the LRU
then take it off.

This is also going to reduce the overhead of allocating anonymous pages
since the LRU lock must no longer be taken to put pages onto the active
list.

Probably mostly of interest to embedded systems since normal kernels
support swap.

On linux-mm we also discussed taking anonymous pages off the LRU if
there is no swap defined or not enough swap.  However, there is no easy
way of putting the pages back to the LRU since we have no list of
mlocked pages.  We could set up such a list but then list manipulation
would complicate the mlocked page treatment and require taking the LRU
lock.  I'd rather leave the mlocked handling as simple as it is right
now.

Anonymous pages will be accounted as mlocked pages.

Signed-off-by: Christoph Lameter
Cc: Nick Piggin
Cc: Hugh Dickins
Signed-off-by: Andrew Morton
---

 mm/memory.c |   29 +++++++++++++++++++----------
 mm/vmscan.c |    4 +++-
 2 files changed, 22 insertions(+), 11 deletions(-)

diff -puN mm/memory.c~take-anonymous-pages-off-the-lru-if-we-have-no-swap mm/memory.c
--- a/mm/memory.c~take-anonymous-pages-off-the-lru-if-we-have-no-swap
+++ a/mm/memory.c
@@ -907,17 +907,26 @@ static void add_anon_page(struct vm_area
 		unsigned long address)
 {
 	inc_mm_counter(vma->vm_mm, anon_rss);
-	if (vma->vm_flags & VM_LOCKED) {
-		/*
-		 * Page is new and therefore not on the LRU
-		 * so we can directly mark it as mlocked
-		 */
-		SetPageMlocked(page);
-		ClearPageActive(page);
-		inc_zone_page_state(page, NR_MLOCK);
-	} else
-		lru_cache_add_active(page);
 	page_add_new_anon_rmap(page, vma, address);
+
+#ifdef CONFIG_SWAP
+	/*
+	 * It only makes sense to put anonymous pages on the
+	 * LRU if we have a way of evicting anonymous pages.
+	 */
+	if (!(vma->vm_flags & VM_LOCKED)) {
+		lru_cache_add_active(page);
+		return;
+	}
+#endif
+
+	/*
+	 * Page is new and therefore not on the LRU
+	 * so we can directly mark it as mlocked
+	 */
+	SetPageMlocked(page);
+	ClearPageActive(page);
+	inc_zone_page_state(page, NR_MLOCK);
 }
 
 /*
diff -puN mm/vmscan.c~take-anonymous-pages-off-the-lru-if-we-have-no-swap mm/vmscan.c
--- a/mm/vmscan.c~take-anonymous-pages-off-the-lru-if-we-have-no-swap
+++ a/mm/vmscan.c
@@ -488,14 +488,16 @@ static unsigned long shrink_page_list(st
 		if (referenced && page_mapping_inuse(page))
 			goto activate_locked;
 
-#ifdef CONFIG_SWAP
 		/*
 		 * Anonymous process memory has backing store?
 		 * Try to allocate it some swap space here.
 		 */
 		if (PageAnon(page) && !PageSwapCache(page))
+#ifdef CONFIG_SWAP
 			if (!add_to_swap(page, GFP_ATOMIC))
 				goto activate_locked;
+#else
+			goto mlocked;
 #endif /* CONFIG_SWAP */
 
 		mapping = page_mapping(page);
_
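
For clarity, the shrink_page_list() hunk above reduces to roughly the
following once the preprocessor has run; the mlocked: label is not added
by this patch and is assumed to come from an earlier patch in the
mlocked-pages series.

With CONFIG_SWAP enabled:

		if (PageAnon(page) && !PageSwapCache(page))
			if (!add_to_swap(page, GFP_ATOMIC))
				goto activate_locked;

With CONFIG_SWAP disabled, the anonymous page is treated like an mlocked
page instead of being given swap space:

		if (PageAnon(page) && !PageSwapCache(page))
			goto mlocked;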