From: Peter Williams

Problem: Due to an injudicious piece of code near the end of find_busiest_group(), smpnice load balancing is too aggressive, resulting in excessive movement of tasks from one CPU to another.

Solution: Remove the offending code. The thinking that caused it to be included became invalid when find_busiest_queue() was modified to use the average load per task (on the relevant run queue) instead of SCHED_LOAD_SCALE when evaluating small imbalance values to see whether they warrant moving a task.

Signed-off-by: Peter Williams
Cc: "Siddha, Suresh B"
Cc: Con Kolivas
Cc: Nick Piggin
Acked-by: Ingo Molnar
Cc: "Chen, Kenneth W"
Signed-off-by: Andrew Morton
---

 kernel/sched.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff -puN kernel/sched.c~sched-improve-stability-of-smpnice-load-balancing kernel/sched.c
--- devel/kernel/sched.c~sched-improve-stability-of-smpnice-load-balancing	2006-06-09 15:18:10.000000000 -0700
+++ devel-akpm/kernel/sched.c	2006-06-09 15:18:10.000000000 -0700
@@ -2235,13 +2235,10 @@ find_busiest_group(struct sched_domain *
 		pwr_move /= SCHED_LOAD_SCALE;
 
 		/* Move if we gain throughput */
-		if (pwr_move > pwr_now)
-			*imbalance = busiest_load_per_task;
-		/* or if there's a reasonable chance that *imbalance is big
-		 * enough to cause a move
-		 */
-		else if (*imbalance <= busiest_load_per_task / 2)
+		if (pwr_move <= pwr_now)
 			goto out_balanced;
+
+		*imbalance = busiest_load_per_task;
 	}
 
 	return busiest;
_
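
For illustration only (not part of the patch): a minimal standalone sketch of the "load per task" idea the description refers to, i.e. comparing a small imbalance against the average load contributed by one task on the busiest queue rather than against the fixed SCHED_LOAD_SCALE. All names here (struct queue_stats, worth_moving_one_task) are invented for the sketch and do not appear in kernel/sched.c.

/*
 * Illustration only, not kernel code.  Hypothetical per-queue stats:
 * total weighted load and the number of runnable tasks.
 */
#include <stdio.h>

struct queue_stats {
	unsigned long load;		/* total weighted load on the queue */
	unsigned long nr_running;	/* runnable tasks on the queue */
};

/*
 * Moving one whole task shifts roughly (load / nr_running) worth of
 * load, so an imbalance smaller than that is not worth acting on.
 * The idea described above is to use this per-queue average as the
 * threshold instead of the fixed SCHED_LOAD_SCALE constant.
 */
static int worth_moving_one_task(unsigned long imbalance,
				 const struct queue_stats *busiest)
{
	unsigned long load_per_task = busiest->load / busiest->nr_running;

	return imbalance >= load_per_task;
}

int main(void)
{
	struct queue_stats busiest = { .load = 4096, .nr_running = 4 };

	/* load_per_task == 1024: 512 is too small to act on, 2048 is not */
	printf("%d\n", worth_moving_one_task(512, &busiest));
	printf("%d\n", worth_moving_one_task(2048, &busiest));
	return 0;
}

With the threshold tied to the busiest queue's own average, a queue full of low-weight (niced) tasks no longer triggers moves for imbalances too small to be fixed by migrating any single task, which is the instability the patch removes.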