From: Ashwin Chaugule

In the current implementation of swap token tuning, grab_swap_token() is
called from two places:

1) after page_cache_read() (mm/filemap.c), and
2) after the readahead logic in do_swap_page() (mm/memory.c).

IMO, the contention for the swap token should happen _before_ the
aforementioned calls, because in the event of low system memory, calls to
free up space will be made later from page_cache_read() and
read_swap_cache_async().  So we want to avoid "false LRU" pages by
grabbing the token before the VM starts searching for replacement
candidates.

Signed-off-by: Ashwin Chaugule
Cc: Rik van Riel
Cc: Peter Zijlstra
Signed-off-by: Andrew Morton
---

 mm/filemap.c |    2 +-
 mm/memory.c  |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff -puN mm/filemap.c~swap-token-try-to-grab-swap-token-before-the-vm-selects-pages-for-eviction mm/filemap.c
--- a/mm/filemap.c~swap-token-try-to-grab-swap-token-before-the-vm-selects-pages-for-eviction
+++ a/mm/filemap.c
@@ -1456,8 +1456,8 @@ no_cached_page:
	 * We're only likely to ever get here if MADV_RANDOM is in
	 * effect.
	 */
+	grab_swap_token();  /* Contend for token _before_ we read-in */
	error = page_cache_read(file, pgoff);
-	grab_swap_token();

	/*
	 * The page we want has now been added to the page cache.
diff -puN mm/memory.c~swap-token-try-to-grab-swap-token-before-the-vm-selects-pages-for-eviction mm/memory.c
--- a/mm/memory.c~swap-token-try-to-grab-swap-token-before-the-vm-selects-pages-for-eviction
+++ a/mm/memory.c
@@ -1989,6 +1989,7 @@ static int do_swap_page(struct mm_struct
	delayacct_set_flag(DELAYACCT_PF_SWAPIN);
	page = lookup_swap_cache(entry);
	if (!page) {
+		grab_swap_token();  /* Contend for token _before_ we read-in */
		swapin_readahead(entry, address, vma);
		page = read_swap_cache_async(entry, vma, address);
		if (!page) {
@@ -2006,7 +2007,6 @@ static int do_swap_page(struct mm_struct
		/* Had to read the page from swap area: Major fault */
		ret = VM_FAULT_MAJOR;
		count_vm_event(PGMAJFAULT);
-		grab_swap_token();
	}

	delayacct_clear_flag(DELAYACCT_PF_SWAPIN);
_