commit b53e49019344f24501741c344114832e2d19d97f
Author: Greg Kroah-Hartman
Date:   Fri Aug 20 11:34:38 2010 -0700

    Linux 2.6.32.20

commit e4599a4a45259b9cfb0942d36f6f35f3dca1d893
Author: Linus Torvalds
Date:   Sun Aug 15 11:35:52 2010 -0700

    mm: fix up some user-visible effects of the stack guard page

    commit d7824370e26325c881b665350ce64fb0a4fde24a upstream.

    This commit makes the stack guard page somewhat less visible to user
    space. It does this by:

     - not showing the guard page in /proc/<pid>/maps

       It looks like lvm-tools will actually read /proc/self/maps to
       figure out where all its mappings are, and effectively do a
       specialized "mlockall()" in user space. By not showing the guard
       page as part of the mapping (by just adding PAGE_SIZE to the start
       for grows-up pages), lvm-tools ends up not being aware of it.

     - by also teaching the _real_ mlock() functionality not to try to
       lock the guard page.

       That would just expand the mapping down to create a new guard
       page, so there really is no point in trying to lock it in place.

    It would perhaps be nice to show the guard page specially in
    /proc/<pid>/maps (or at least mark grow-down segments some way), but
    let's not open ourselves up to more breakage by user space from
    programs that depend on the exact details of the 'maps' file.

    Special thanks to Henrique de Moraes Holschuh for diving into
    lvm-tools source code to see what was going on with the whole new
    warning.

    Reported-and-tested-by: François Valenduc
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

commit 058daedc8311ab42702dfe29d3ff16dff7e7eaf8
Author: Linus Torvalds
Date:   Sat Aug 14 11:44:56 2010 -0700

    mm: fix page table unmap for stack guard page properly

    commit 11ac552477e32835cb6970bf0a70c210807f5673 upstream.

    We do in fact need to unmap the page table _before_ doing the whole
    stack guard page logic, because if it is needed (mainly 32-bit x86
    with PAE and CONFIG_HIGHPTE, but other architectures may use it too)
    then it will do a kmap_atomic/kunmap_atomic.

    And those kmaps will create an atomic region that we cannot do
    allocations in. However, the whole stack expand code will need to do
    anon_vma_prepare() and vma_lock_anon_vma(), and they cannot do that
    in an atomic region.

    Now, a better model might actually be to do the anon_vma_prepare()
    when _creating_ a VM_GROWSDOWN segment, and not have to worry about
    any of this at page fault time. But in the meantime, this is the
    straightforward fix for the issue.

    See https://bugzilla.kernel.org/show_bug.cgi?id=16588 for details.

    Reported-by: Wylda
    Reported-by: Sedat Dilek
    Reported-by: Mike Pagano
    Reported-by: François Valenduc
    Tested-by: Ed Tomlinson
    Cc: Pekka Enberg
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman
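
For context on the lvm-tools behaviour described in the first mm commit
above, the following is a minimal user-space sketch (it is not lvm-tools
source; names and error handling are illustrative only) of the pattern the
guard-page hiding was designed to keep working: reading /proc/self/maps and
mlock()ing each listed mapping. With the fix applied, the guard page is
simply absent from the ranges such a program sees.

    /*
     * Illustrative "specialized mlockall()": lock every mapping listed
     * in /proc/self/maps, one range at a time.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
            FILE *maps = fopen("/proc/self/maps", "r");
            char line[512];

            if (!maps) {
                    perror("fopen /proc/self/maps");
                    return EXIT_FAILURE;
            }

            while (fgets(line, sizeof(line), maps)) {
                    unsigned long start, end;

                    /* Each line begins with "start-end perms ..." in hex. */
                    if (sscanf(line, "%lx-%lx", &start, &end) != 2)
                            continue;

                    /* Lock the range; some ranges (or RLIMIT_MEMLOCK) may
                     * legitimately make this fail, so just report it. */
                    if (mlock((void *)start, end - start) != 0)
                            fprintf(stderr, "mlock %lx-%lx failed\n",
                                    start, end);
            }

            fclose(maps);
            return EXIT_SUCCESS;
    }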