From: Lee Schermerhorn

We now call downgrade_write(&mm->mmap_sem) at the beginning of mlock,
which improves mlock scalability.  But if mlock races with munmap, we
can find that the vma is gone by the time we look it up again.  In that
case the kernel should return ENOMEM, because mlock after munmap returns
ENOMEM.  (Besides, EAGAIN means "please try again", but calling mlock()
again would just fail the same way.)

This is a theoretical issue: I can't reproduce the vanishing vma on my
box.  Still, fixing it is better.

Signed-off-by: KOSAKI Motohiro
Signed-off-by: Andrew Morton
---

 mm/mlock.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN mm/mlock.c~mmap-handle-mlocked-pages-during-map-remap-unmap-mlock-fix-return-value-for-munmap-mlock-vma-race mm/mlock.c
--- a/mm/mlock.c~mmap-handle-mlocked-pages-during-map-remap-unmap-mlock-fix-return-value-for-munmap-mlock-vma-race
+++ a/mm/mlock.c
@@ -283,7 +283,7 @@ long mlock_vma_pages_range(struct vm_are
 		vma = find_vma(mm, start);
 		/* non-NULL vma must contain @start, but need to check @end */
 		if (!vma || end > vma->vm_end)
-			return -EAGAIN;
+			return -ENOMEM;

 		return 0;	/* hide other errors from mmap(), et al */
 	}
@@ -405,7 +405,7 @@ success:
 		*prev = find_vma(mm, start);
 		/* non-NULL *prev must contain @start, but need to check @end */
 		if (!(*prev) || end > (*prev)->vm_end)
-			ret = -EAGAIN;
+			ret = -ENOMEM;
 	} else {
 		/*
 		 * TODO: for unlocking, pages will already be resident, so
_
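
For reference (not part of the patch series): a minimal userspace sketch
of the behavior the fix aligns with, namely that mlock() on a range that
has already been munmap()ed fails with ENOMEM.  The program below was
written for this note as an illustration; it performs the munmap and the
mlock sequentially rather than as a real race.

    /*
     * Illustration only (not from the patch series): mlock() on an
     * unmapped range returns ENOMEM, which is the error the racing
     * mlock should also report once munmap has removed the vma.
     */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 4096;
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;

            munmap(p, len);          /* stand-in for the racing munmap() */

            if (mlock(p, len) == -1) /* vma is gone */
                    printf("mlock: %s\n", strerror(errno));

            return 0;
    }

With the patch applied, the race path in mlock_vma_pages_range() and
mlock_fixup() reports the same ENOMEM the caller would have seen had the
munmap completed before the mlock began.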