From: Mel Gorman

move-free-pages-between-lists-on-steal-fix-2.patch fixed an issue with a
BUG_ON() that checked for a page just outside a MAX_ORDER_NR_PAGES
boundary.  In fact, the proper place to check it was earlier.  A situation
can occur on SPARSEMEM where a section boundary is crossed, which will
cause problems on some machines.  This patch addresses the problem.

Signed-off-by: Mel Gorman
Acked-by: Yasunori Goto
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff -puN mm/page_alloc.c~move-free-pages-between-lists-on-steal-do-not-cross-section-boundary-when-moving-pages-between-mobility-lists mm/page_alloc.c
--- a/mm/page_alloc.c~move-free-pages-between-lists-on-steal-do-not-cross-section-boundary-when-moving-pages-between-mobility-lists
+++ a/mm/page_alloc.c
@@ -681,10 +681,10 @@ int move_freepages(struct zone *zone,
 	 * Remove at a later date when no bug reports exist related to
 	 * CONFIG_PAGE_GROUP_BY_MOBILITY
 	 */
-	BUG_ON(page_zone(start_page) != page_zone(end_page - 1));
+	BUG_ON(page_zone(start_page) != page_zone(end_page));
 #endif
 
-	for (page = start_page; page < end_page;) {
+	for (page = start_page; page <= end_page;) {
 #ifdef CONFIG_HOLES_IN_ZONE
 		if (!pfn_valid(page_to_pfn(page))) {
 			page++;
@@ -716,8 +716,8 @@ int move_freepages_block(struct zone *zo
 	start_pfn = page_to_pfn(page);
 	start_pfn = start_pfn & ~(MAX_ORDER_NR_PAGES-1);
 	start_page = pfn_to_page(start_pfn);
-	end_page = start_page + MAX_ORDER_NR_PAGES;
-	end_pfn = start_pfn + MAX_ORDER_NR_PAGES;
+	end_page = start_page + MAX_ORDER_NR_PAGES - 1;
+	end_pfn = start_pfn + MAX_ORDER_NR_PAGES - 1;
 
 	/* Do not cross zone boundaries */
 	if (start_pfn < zone->zone_start_pfn)
_
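
As an aside, for readers less familiar with the pfn arithmetic involved,
below is a minimal userspace sketch (not kernel code; MAX_ORDER_NR_PAGES is
assumed to be 1024 here purely for illustration, and the pfn is arbitrary)
showing why the inclusive end used above stays inside the same
MAX_ORDER_NR_PAGES block, while the old exclusive end named the first page
of the following block, i.e. the point at which a section boundary can be
crossed on SPARSEMEM.

/*
 * Userspace illustration only -- compile with any C99 compiler.
 * MAX_ORDER_NR_PAGES is an assumed value, not the kernel's.
 */
#include <stdio.h>

#define MAX_ORDER_NR_PAGES 1024UL	/* assumed for the example */

int main(void)
{
	unsigned long pfn = 263000;	/* arbitrary page frame number */

	/* Round down to the start of the MAX_ORDER_NR_PAGES block. */
	unsigned long start_pfn = pfn & ~(MAX_ORDER_NR_PAGES - 1);

	/* Old code: one past the block -- first pfn of the NEXT block. */
	unsigned long end_pfn_excl = start_pfn + MAX_ORDER_NR_PAGES;

	/* New code: last pfn of the block -- same block, same section. */
	unsigned long end_pfn_incl = start_pfn + MAX_ORDER_NR_PAGES - 1;

	printf("start_pfn           = %lu\n", start_pfn);
	printf("end_pfn (old, excl) = %lu\n", end_pfn_excl);
	printf("end_pfn (new, incl) = %lu\n", end_pfn_incl);

	/* With the inclusive end, the walk uses <= rather than <. */
	unsigned long nr = 0;
	for (unsigned long p = start_pfn; p <= end_pfn_incl; p++)
		nr++;
	printf("pages walked        = %lu\n", nr);

	return 0;
}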