From: Christoph Lameter

drain_node_pages() currently drains the complete pageset of all pages.  If
there are a large number of pages in the queues then we may hold off
interrupts for too long.

Duplicate the method used in free_hot_cold_page.  Only drain pcp->batch
pages at one time.

Signed-off-by: Christoph Lameter
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |   10 ++++++++--
 1 files changed, 8 insertions(+), 2 deletions(-)

diff -puN mm/page_alloc.c~drain_node_page-drain-pages-in-batch-units mm/page_alloc.c
--- a/mm/page_alloc.c~drain_node_page-drain-pages-in-batch-units
+++ a/mm/page_alloc.c
@@ -689,9 +689,15 @@ void drain_node_pages(int nodeid)
 			pcp = &pset->pcp[i];
 			if (pcp->count) {
+				int to_drain;
+
 				local_irq_save(flags);
-				free_pages_bulk(zone, pcp->count, &pcp->list, 0);
-				pcp->count = 0;
+				if (pcp->count >= pcp->batch)
+					to_drain = pcp->batch;
+				else
+					to_drain = pcp->count;
+				free_pages_bulk(zone, to_drain, &pcp->list, 0);
+				pcp->count -= to_drain;
 				local_irq_restore(flags);
 			}
 		}
_