From: Neil Brown

The locking is all backwards and broken.

We first atomic_dec_and_test.  If this fails, someone else has an active
reference and we need do no more.  If it succeeds, then the only ref is in
the hash table, but someone might be about to find and use that reference.
nsm_mutex provides exclusion against this.  If sm_count is still 0 once
the mutex has been gained, then it is safe to discard the nsm.

Signed-off-by: Neil Brown
Cc: Olaf Kirch
Signed-off-by: Andrew Morton
---

 fs/lockd/host.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN fs/lockd/host.c~knfsd-lockd-introduce-nsm_handle-fix fs/lockd/host.c
--- a/fs/lockd/host.c~knfsd-lockd-introduce-nsm_handle-fix
+++ a/fs/lockd/host.c
@@ -483,9 +483,9 @@ nsm_release(struct nsm_handle *nsm)
 {
 	if (!nsm)
 		return;
-	if (atomic_read(&nsm->sm_count) == 1) {
+	if (atomic_dec_and_test(&nsm->sm_count)) {
 		down(&nsm_sema);
-		if (atomic_dec_and_test(&nsm->sm_count)) {
+		if (atomic_read(&nsm->sm_count) == 0) {
 			list_del(&nsm->sm_link);
 			kfree(nsm);
 		}
_
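
The two-step release described above can be sketched in userspace C.  This
is a minimal illustration, not the kernel code: the names `struct handle`,
`handle_release`, `table_lock`, and `destroy_count` are hypothetical
stand-ins for `nsm_handle`, `nsm_release`, `nsm_sema`, and the hash-table
list, and C11 atomics plus a pthread mutex replace the kernel's `atomic_t`
and semaphore primitives.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdlib.h>

/* Stand-in for struct nsm_handle (hypothetical, for illustration). */
struct handle {
	atomic_int refcount;	/* plays the role of nsm->sm_count */
	int on_list;		/* stands in for hash-table membership */
};

/* Plays the role of nsm_sema: excludes lookups in the table. */
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Counts destructions so the demo can observe the free path. */
static int destroy_count;

/*
 * Drop one reference.  Only if ours was the last reference do we take
 * the table lock; under the lock we re-check the count, because another
 * thread may have found the object in the table and taken a new
 * reference in the window before we acquired the lock.
 */
void handle_release(struct handle *h)
{
	if (!h)
		return;
	/* atomic_fetch_sub returns the old value: 1 means we hit zero,
	 * i.e. the equivalent of atomic_dec_and_test() succeeding. */
	if (atomic_fetch_sub(&h->refcount, 1) == 1) {
		pthread_mutex_lock(&table_lock);
		if (atomic_load(&h->refcount) == 0) {
			h->on_list = 0;		/* list_del() analogue */
			destroy_count++;
			free(h);		/* kfree() analogue */
		}
		pthread_mutex_unlock(&table_lock);
	}
}
```

The point of the re-check under the lock is the race the changelog names:
between our decrement reaching zero and our acquiring the lock, a lookup
holding the lock could have bumped the count back up, in which case
freeing would be a use-after-free for that other holder.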