From: "Chen, Kenneth W"

While we're at it, let's clean up this hunk: task_hot() is evaluated
twice in the more common case of nr_balance_failed <= cache_nice_tries.
We should only test/increment the relevant stats for forced migration.

Signed-off-by: Ken Chen
Cc: Mike Galbraith
Cc: Ingo Molnar
Cc: Ken Chen
Signed-off-by: Andrew Morton
---

 kernel/sched.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff -puN kernel/sched.c~sched-improve-migration-accuracy-tidy kernel/sched.c
--- a/kernel/sched.c~sched-improve-migration-accuracy-tidy
+++ a/kernel/sched.c
@@ -2106,8 +2106,13 @@ int can_migrate_task(struct task_struct
 	 * 2) too many balance attempts have failed.
 	 */

-	if (sd->nr_balance_failed > sd->cache_nice_tries)
+	if (sd->nr_balance_failed > sd->cache_nice_tries) {
+#ifdef CONFIG_SCHEDSTATS
+		if (task_hot(p, rq->most_recent_timestamp, sd))
+			schedstat_inc(sd, lb_hot_gained[idle]);
+#endif
 		return 1;
+	}

 	if (task_hot(p, rq->most_recent_timestamp, sd))
 		return 0;
@@ -2207,11 +2212,6 @@ skip_queue:
 		goto skip_bitmap;
 	}

-#ifdef CONFIG_SCHEDSTATS
-	if (task_hot(tmp, busiest->most_recent_timestamp, sd))
-		schedstat_inc(sd, lb_hot_gained[idle]);
-#endif
-
 	pull_task(busiest, array, tmp, this_rq, dst_array, this_cpu);
 	pulled++;
 	rem_load_move -= tmp->load_weight;
_