From: Oleg Nesterov

Change migration_call(CPU_DEAD) to use direct spin_lock_irq() instead of
task_rq_lock(rq->idle); rq->idle can't change its task_rq().

This makes the code a bit more symmetrical with migrate_dead_tasks()'s path,
which uses spin_lock_irq/spin_unlock_irq.

Signed-off-by: Oleg Nesterov
Cc: Cliff Wickman
Cc: Gautham R Shenoy
Cc: Ingo Molnar
Cc: Srivatsa Vaddagiri
Cc: Akinobu Mita
Signed-off-by: Andrew Morton
---

 kernel/sched.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff -puN kernel/sched.c~migration_callcpu_dead-use-spin_lock_irq-instead-of-task_rq_lock kernel/sched.c
--- a/kernel/sched.c~migration_callcpu_dead-use-spin_lock_irq-instead-of-task_rq_lock
+++ a/kernel/sched.c
@@ -5443,14 +5443,14 @@ migration_call(struct notifier_block *nf
 		kthread_stop(rq->migration_thread);
 		rq->migration_thread = NULL;
 		/* Idle task back to normal (off runqueue, low prio) */
-		rq = task_rq_lock(rq->idle, &flags);
+		spin_lock_irq(&rq->lock);
 		update_rq_clock(rq);
 		deactivate_task(rq, rq->idle, 0);
 		rq->idle->static_prio = MAX_PRIO;
 		__setscheduler(rq, rq->idle, SCHED_NORMAL, 0);
 		rq->idle->sched_class = &idle_sched_class;
 		migrate_dead_tasks(cpu);
-		task_rq_unlock(rq, &flags);
+		spin_unlock_irq(&rq->lock);
 		migrate_nr_uninterruptible(rq);

 		BUG_ON(rq->nr_running != 0);
_