| author | Vincent Guittot <vincent.guittot@linaro.org> | 2014-03-11 17:26:06 +0100 |
|---|---|---|
| committer | Ingo Molnar <mingo@kernel.org> | 2014-03-12 10:49:00 +0100 |
| commit | a2cd42601b474b957e1a5fe3692bcf7f9363bd51 (patch) | |
| tree | d551c6e8329b99ce163959f8b7281488da582c3a /kernel/sched | |
| parent | 383afd0971538b3d77532a56404b24cfe967b5dd (diff) | |
sched: Remove double calculation in fix_small_imbalance()
The tmp value has already been calculated in:
scaled_busy_load_per_task =
(busiest->load_per_task * SCHED_POWER_SCALE) /
busiest->group_power;
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1394555166-22894-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
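For readers unfamiliar with the arithmetic the commit message refers to, below is a minimal userspace sketch (not kernel code; the numeric values are illustrative assumptions) of how `scaled_busy_load_per_task` is derived from `load_per_task`, `SCHED_POWER_SCALE`, and `group_power`, and why the removed `tmp` computation was redundant:

```c
#include <stdio.h>

#define SCHED_POWER_SCALE 1024UL	/* kernel's unit for one CPU of full capacity */

int main(void)
{
	/* Illustrative numbers only; the kernel derives these from runqueue statistics. */
	unsigned long load_per_task = 512;	/* busiest group's average load per task */
	unsigned long group_power   = 2048;	/* e.g. two full-capacity CPUs */
	unsigned long avg_load      = 700;	/* busiest group's average load */

	/* Already computed earlier in fix_small_imbalance(); the patch reuses it. */
	unsigned long scaled_busy_load_per_task =
		load_per_task * SCHED_POWER_SCALE / group_power;	/* 512*1024/2048 = 256 */

	/*
	 * The removed lines recomputed exactly this value into a temporary
	 * (tmp) before the comparison; after the patch the existing variable
	 * is compared directly.
	 */
	if (avg_load > scaled_busy_load_per_task)
		printf("load we'd subtract: up to %lu\n",
		       avg_load - scaled_busy_load_per_task);

	return 0;
}
```

Functionally nothing changes, since `tmp` and `scaled_busy_load_per_task` held the same value; the patch only drops the duplicate computation.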
Diffstat (limited to 'kernel/sched')
-rw-r--r-- | kernel/sched/fair.c | 6 |
1 file changed, 2 insertions, 4 deletions
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f1eedae1e83..b301918ed51 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6061,12 +6061,10 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	pwr_now /= SCHED_POWER_SCALE;
 
 	/* Amount of load we'd subtract */
-	tmp = (busiest->load_per_task * SCHED_POWER_SCALE) /
-		busiest->group_power;
-	if (busiest->avg_load > tmp) {
+	if (busiest->avg_load > scaled_busy_load_per_task) {
 		pwr_move += busiest->group_power *
 			    min(busiest->load_per_task,
-				busiest->avg_load - tmp);
+				busiest->avg_load - scaled_busy_load_per_task);
 	}
 
 	/* Amount of load we'd add */