From: Eric W. Biederman

Cleanup of task_struct does not happen when its reference count drops to
zero; instead, cleanup happens when release_task is called.  Tasks can only
be looked up via rcu before release_task is called, and all rcu-protected
members of task_struct are freed by release_task.

Therefore we can move the call_rcu from put_task_struct into release_task.
And we can modify release_task to not release the reference count
immediately, but instead have it call put_task_struct from the function it
passes to call_rcu.

The end result:

- get_task_struct is safe in an rcu context where we have just looked up
  the task.

- put_task_struct simplifies back into its old pre-rcu self.

This reorganization also makes put_task_struct uncallable from modules, as
it is no longer exported.  It does not appear to be called from any
modules, so this should not be an issue, and it is trivially fixed if one
turns up.

Signed-off-by: Eric W. Biederman
Signed-off-by: Andrew Morton
---

 include/linux/sched.h |    2 +-
 kernel/exit.c         |    7 ++++++-
 sched.c               |    0 
 3 files changed, 7 insertions(+), 2 deletions(-)

diff -puN include/linux/sched.h~task-rcu-protect-task-usage include/linux/sched.h
--- devel/include/linux/sched.h~task-rcu-protect-task-usage	2006-03-16 02:09:13.000000000 -0800
+++ devel-akpm/include/linux/sched.h	2006-03-16 02:09:13.000000000 -0800
@@ -930,7 +930,7 @@ extern void __put_task_struct(struct tas
 static inline void put_task_struct(struct task_struct *t)
 {
 	if (atomic_dec_and_test(&t->usage))
-		call_rcu(&t->rcu, __put_task_struct_cb);
+		__put_task_struct(t);
 }
 
 /*
diff -puN kernel/exit.c~task-rcu-protect-task-usage kernel/exit.c
--- devel/kernel/exit.c~task-rcu-protect-task-usage	2006-03-16 02:09:13.000000000 -0800
+++ devel-akpm/kernel/exit.c	2006-03-16 02:09:13.000000000 -0800
@@ -126,6 +126,11 @@ static void __exit_signal(struct task_st
 	}
 }
 
+static void delayed_put_task_struct(struct rcu_head *rhp)
+{
+	put_task_struct(container_of(rhp, struct task_struct, rcu));
+}
+
 void release_task(struct task_struct * p)
 {
 	int zap_leader;
@@ -167,7 +172,7 @@ repeat:
 	spin_unlock(&p->proc_lock);
 	proc_pid_flush(proc_dentry);
 	release_thread(p);
-	put_task_struct(p);
+	call_rcu(&p->rcu, delayed_put_task_struct);
 
 	p = leader;
 	if (unlikely(zap_leader))
diff -puN kernel/sched.c~task-rcu-protect-task-usage kernel/sched.c
_
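
Not part of the patch: a minimal sketch of the lookup pattern the changelog
describes, assuming an rcu-protected pid lookup such as find_task_by_pid();
the helper name is hypothetical and only illustrates why get_task_struct is
now safe immediately after an rcu lookup.

/*
 * Illustration only: with put_task_struct deferred via call_rcu in
 * release_task, a task found under rcu_read_lock cannot have its
 * task_struct freed before the read-side critical section ends, so
 * taking a reference here is safe.
 */
static struct task_struct *get_task_ref_by_pid(pid_t pid)	/* hypothetical helper */
{
	struct task_struct *p;

	rcu_read_lock();
	p = find_task_by_pid(pid);	/* rcu-protected lookup (example) */
	if (p)
		get_task_struct(p);	/* safe: freeing waits for an rcu grace period */
	rcu_read_unlock();

	return p;	/* caller drops the reference with put_task_struct */
}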