From: Carsten Otte

This patch puts #ifdef CONFIG_DEBUG_VM around a check in vm_normal_page
that verifies that a pfn is valid.  This patch increases performance of
the page fault microbenchmark in lmbench by 13% and overall dbench
performance by 7% on s390x.

pfn_valid() is an expensive operation on s390 that needs a high
double-digit number of CPU cycles.  Nick Piggin suggested that
pfn_valid() involves an array lookup on systems with sparsemem, and
therefore is an expensive operation there too.

The check looks like a clear debug thing to me; it should never trigger
on regular kernels.  And if a pte is created for an invalid pfn, we'll
find out once the memory gets accessed later on anyway.

Please consider inclusion of this patch into mm.

Signed-off-by: Carsten Otte
Cc: Nick Piggin
Signed-off-by: Andrew Morton
---

 mm/memory.c |    2 ++
 1 file changed, 2 insertions(+)

diff -puN mm/memory.c~ifdef-very-expensive-debug-check-in-page-fault-path mm/memory.c
--- a/mm/memory.c~ifdef-very-expensive-debug-check-in-page-fault-path
+++ a/mm/memory.c
@@ -392,6 +392,7 @@ struct page *vm_normal_page(struct vm_ar
 		return NULL;
 	}
 
+#ifdef CONFIG_DEBUG_VM
 	/*
 	 * Add some anal sanity checks for now. Eventually,
 	 * we should just do "return pfn_to_page(pfn)", but
@@ -402,6 +403,7 @@ struct page *vm_normal_page(struct vm_ar
 		print_bad_pte(vma, pte, addr);
 		return NULL;
 	}
+#endif
 
 	/*
 	 * NOTE! We still have PageReserved() pages in the page _
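
For reviewers unfamiliar with why pfn_valid() is costly on sparsemem:
the pfn has to be mapped to a memory-section index, and that index is
looked up in a section array before the presence flag can be tested.
The snippet below is only a minimal userspace sketch of that lookup
pattern, assuming invented constants and names (SECTION_SHIFT,
NR_SECTIONS, sections[], sketch_pfn_valid); it is not the kernel's
actual pfn_valid() implementation.

	#include <stdio.h>
	#include <stdbool.h>

	/* Illustrative values only -- not the kernel's real constants. */
	#define SECTION_SHIFT	15		/* pfns per section = 1 << 15 */
	#define NR_SECTIONS	1024

	/* One word per memory section; nonzero means "section present". */
	static unsigned long sections[NR_SECTIONS];

	/*
	 * Sketch of a sparsemem-style validity check: shift the pfn down
	 * to a section number, bounds-check it, then read the section's
	 * presence flag.  Even in this simplified form it is an extra
	 * array load on every page fault, which is what the changelog
	 * argues should stay under CONFIG_DEBUG_VM.
	 */
	static bool sketch_pfn_valid(unsigned long pfn)
	{
		unsigned long section_nr = pfn >> SECTION_SHIFT;

		if (section_nr >= NR_SECTIONS)
			return false;
		return sections[section_nr] != 0;
	}

	int main(void)
	{
		sections[0] = 1;	/* pretend section 0 is present */
		printf("%d\n", sketch_pfn_valid(0x42));		/* 1 */
		printf("%d\n", sketch_pfn_valid(1UL << 30));	/* 0 */
		return 0;
	}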