From: Andy Whitcroft

Similar to the generic initialisers, the x86_64 vmemmap initialisation
may incorrectly skip the last page of a section if the section start is
not aligned to the page.

Where we have a section spanning the end of a PMD, we check the start
of the section at A, populating it.  We then move on one PMD page to C
and find ourselves beyond the end of the section (which ends at B), so
we complete without checking the second PMD page.

	|  PMD  |  PMD  |
	    | SECTION |
	    A        B C

We should round ourselves to the end of the PMD as we iterate.

Signed-off-by: Andy Whitcroft
Cc: Christoph Lameter
Cc: Mel Gorman
Cc: Andi Kleen
Cc: KAMEZAWA Hiroyuki
Cc: "Torsten Kaiser"
Signed-off-by: Andrew Morton
---

 arch/x86_64/mm/init.c |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff -puN arch/x86_64/mm/init.c~x86_64-sparsemem_vmemmap-2m-page-size-support-ensure-end-of-section-memmap-is-initialised arch/x86_64/mm/init.c
--- a/arch/x86_64/mm/init.c~x86_64-sparsemem_vmemmap-2m-page-size-support-ensure-end-of-section-memmap-is-initialised
+++ a/arch/x86_64/mm/init.c
@@ -757,9 +757,10 @@ int __meminit vmemmap_populate_pmd(pud_t
 						unsigned long end, int node)
 {
 	pmd_t *pmd;
+	unsigned long next;
 
-	for (pmd = pmd_offset(pud, addr); addr < end;
-	     pmd++, addr += PMD_SIZE)
+	for (pmd = pmd_offset(pud, addr); addr < end; pmd++, addr = next) {
+		next = pmd_addr_end(addr, end);
 		if (pmd_none(*pmd)) {
 			pte_t entry;
 			void *p = vmemmap_alloc_block(PMD_SIZE, node);
@@ -773,8 +774,8 @@ int __meminit vmemmap_populate_pmd(pud_t
 			printk(KERN_DEBUG " [%lx-%lx] PMD ->%p on node %d\n",
 				addr, addr + PMD_SIZE - 1, p, node);
 		} else
-			vmemmap_verify((pte_t *)pmd, node,
-					pmd_addr_end(addr, end), end);
+			vmemmap_verify((pte_t *)pmd, node, next, end);
+	}
 	return 0;
 }
 #endif
_