From: Christoph Lameter

This was discussed at

  http://marc.theaimsgroup.com/?l=linux-kernel&m=113166526217117&w=2

This patch changes the dequeueing to select a huge page near the node
executing instead of always beginning to check for free nodes from node 0.
This will result in a placement of the huge pages near the executing
processor, improving performance.  The existing implementation can place
the huge pages far away from the executing processor, causing significant
degradation of performance.  The search starting from zero also means that
the lower zones quickly run out of memory.  Selecting a huge page near the
process distributes the huge pages better.

Signed-off-by: Christoph Lameter
Cc: William Lee Irwin III
Cc: Adam Litke
Signed-off-by: Andrew Morton
---

 mm/hugetlb.c |   14 ++++++++------
 1 files changed, 8 insertions(+), 6 deletions(-)

diff -puN mm/hugetlb.c~dequeue-a-huge-page-near-to-this-node mm/hugetlb.c
--- devel/mm/hugetlb.c~dequeue-a-huge-page-near-to-this-node	2005-11-22 22:30:24.000000000 -0800
+++ devel-akpm/mm/hugetlb.c	2005-11-22 22:30:24.000000000 -0800
@@ -40,14 +40,16 @@ static struct page *dequeue_huge_page(vo
 {
 	int nid = numa_node_id();
 	struct page *page = NULL;
+	struct zonelist *zonelist = NODE_DATA(nid)->node_zonelists;
+	struct zone **z;
 
-	if (list_empty(&hugepage_freelists[nid])) {
-		for (nid = 0; nid < MAX_NUMNODES; ++nid)
-			if (!list_empty(&hugepage_freelists[nid]))
-				break;
+	for (z = zonelist->zones; *z; z++) {
+		nid = (*z)->zone_pgdat->node_id;
+		if (!list_empty(&hugepage_freelists[nid]))
+			break;
 	}
-	if (nid >= 0 && nid < MAX_NUMNODES &&
-	    !list_empty(&hugepage_freelists[nid])) {
+
+	if (*z) {
 		page = list_entry(hugepage_freelists[nid].next,
 				  struct page, lru);
 		list_del(&page->lru);
_