follow: Do not put_page() if no flags are specified.

It seems that one of the side effects of the dirty pages patch in
2.6.17-rc4-mm3 is that follow_page() does a put_page() if flags == 0.
This breaks the new sys_move_pages(). Only put_page() if we did a
get_page() before.

Signed-off-by: Christoph Lameter

Index: linux-2.6.17-rc4-mm3/mm/migrate.c
===================================================================
--- linux-2.6.17-rc4-mm3.orig/mm/migrate.c	2006-05-23 17:27:20.617652640 -0700
+++ linux-2.6.17-rc4-mm3/mm/migrate.c	2006-05-23 17:57:02.747364602 -0700
@@ -592,8 +592,10 @@ static int unmap_and_move(new_page_t get
 	int *result = NULL;
 	struct page *newpage = get_new_page(page, private, &result);
 
-	if (!newpage)
+	if (!newpage) {
+		printk(KERN_CRIT "unmap_and_move: get_new_page(%p,%lu %p) returned NULL\n", page, private, result);
 		return -ENOMEM;
+	}
 
 	if (page_count(page) == 1)
 		/* page was freed from under us. So we are done. */
@@ -679,6 +681,7 @@ int migrate_pages(struct list_head *from
 	if (!swapwrite)
 		current->flags |= PF_SWAPWRITE;
 
+	printk(KERN_CRIT "migrate_pages(%p,%p,%lu)\n", from, get_new_page, private);
 	for(pass = 0; pass < 10 && retry; pass++) {
 		retry = 0;

Index: linux-2.6.17-rc4-mm3/mm/mempolicy.c
===================================================================
--- linux-2.6.17-rc4-mm3.orig/mm/mempolicy.c	2006-05-23 15:10:24.459969091 -0700
+++ linux-2.6.17-rc4-mm3/mm/mempolicy.c	2006-05-23 17:57:02.748341104 -0700
@@ -1846,6 +1846,9 @@ int show_numa_map(struct seq_file *m, vo
 			&node_online_map, MPOL_MF_STATS, md);
 	}
 
+	if (vma->vm_flags & VM_LOCKED)
+		seq_printf(m, " locked");
+
 	if (!md->pages)
 		goto out;