From: Hugh Dickins

One anomaly remains from when Andrea rationalized the responsibilities of
mmap_sem and page_table_lock: in dup_mmap we add vmas to the child holding
its page_table_lock, but not the mmap_sem which normally guards the vma
list and rbtree.  Which could be an issue for unuse_mm: though since it
just walks down the list (today with page_table_lock, tomorrow not), it's
probably okay.  Will need a memory barrier?  Oh, keep it simple, Nick and
I agreed, no harm in taking child's mmap_sem here.

Signed-off-by: Hugh Dickins
Signed-off-by: Andrew Morton
---

 kernel/fork.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff -puN kernel/fork.c~mm-dup_mmap-down-new-mmap_sem kernel/fork.c
--- devel/kernel/fork.c~mm-dup_mmap-down-new-mmap_sem	2005-09-25 15:31:32.000000000 -0700
+++ devel-akpm/kernel/fork.c	2005-09-25 15:31:32.000000000 -0700
@@ -192,6 +192,8 @@ static inline int dup_mmap(struct mm_str
 
 	down_write(&oldmm->mmap_sem);
 	flush_cache_mm(oldmm);
+	down_write(&mm->mmap_sem);
+
 	mm->locked_vm = 0;
 	mm->mmap = NULL;
 	mm->mmap_cache = NULL;
@@ -251,10 +253,7 @@ static inline int dup_mmap(struct mm_str
 		}
 
 		/*
-		 * Link in the new vma and copy the page table entries:
-		 * link in first so that swapoff can see swap entries.
-		 * Note that, exceptionally, here the vma is inserted
-		 * without holding mm->mmap_sem.
+		 * Link in the new vma and copy the page table entries.
 		 */
 		spin_lock(&mm->page_table_lock);
 		*pprev = tmp;
@@ -275,8 +274,8 @@ static inline int dup_mmap(struct mm_str
 			goto out;
 	}
 	retval = 0;
-
 out:
+	up_write(&mm->mmap_sem);
 	flush_tlb_mm(oldmm);
 	up_write(&oldmm->mmap_sem);
 	return retval;
_
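
For reference, a condensed sketch of how dup_mmap()'s locking reads once the
patch is applied, as it would appear inside kernel/fork.c of that era.  This
is assembled from the hunks above, not the actual function: initialization of
the child mm, the per-vma copy loop and the error paths (which goto out) are
elided.

/*
 * Condensed sketch, not the full function: the lock ordering dup_mmap()
 * ends up with after this patch.  mm is the child's new mm_struct,
 * oldmm is the parent's.
 */
static inline int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
{
	int retval;

	down_write(&oldmm->mmap_sem);	/* hold the parent's vma list stable */
	flush_cache_mm(oldmm);
	down_write(&mm->mmap_sem);	/* new: guard the child's list too */

	/* ... copy each parent vma, linking it into the child under
	 *     mm->page_table_lock and then copy_page_range() ... */

	retval = 0;
out:
	up_write(&mm->mmap_sem);	/* new: drop child's before parent's */
	flush_tlb_mm(oldmm);
	up_write(&oldmm->mmap_sem);
	return retval;
}

At this point in fork the child mm is brand new and not yet visible to any
other task, so the extra down_write cannot block or deadlock against anything;
that is why the changelog can say there is no harm in taking it, while it lets
walkers of the vma list such as unuse_mm rely on mmap_sem in the usual way.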