author | Peter Zijlstra <a.p.zijlstra@chello.nl> | 2008-10-17 19:27:02 +0200
---|---|---
committer | Ingo Molnar <mingo@elte.hu> | 2008-10-20 14:05:02 +0200
commit | ffda12a17a324103e9900fa1035309811eecbfe5 (patch) |
tree | 79fe8aae79a41b467f2cdd055036b3017642a9f6 /kernel/sched_rt.c |
parent | b0aa51b999c449e5e3f9faa1ee406e052d407fe7 (diff) |
sched: optimize group load balancer
I noticed that tg_shares_up() unconditionally takes rq-locks for all cpus
in the sched_domain. This hurts.
We need the rq-locks whenever we change the weight of the per-cpu group sched
entities. To alleviate this a little, only change the weight when the new
weight is at least shares_thresh away from the old value.
This avoids the rq-lock for the top level entries, since those will never
be re-weighted, and fuzzes the lower level entries a little to gain performance
in semi-stable situations.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
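
The core idea is the threshold check before re-weighting. Below is a minimal userspace sketch of that logic, not the patch itself: it skips the (expensive) per-cpu runqueue lock when the recomputed weight is within shares_thresh of the current one. Names such as group_entity, shares_thresh and maybe_set_entity_weight() are illustrative stand-ins, not the kernel's actual identifiers.

```c
/*
 * Sketch of the shares_thresh optimization: only take the lock and
 * rewrite the weight when the change is at least shares_thresh.
 */
#include <pthread.h>
#include <stdio.h>

struct group_entity {
	pthread_mutex_t rq_lock;	/* stands in for the per-cpu rq->lock */
	unsigned long weight;		/* current weight of the group entity */
};

static unsigned long shares_thresh = 4;	/* fuzz factor; a tunable in the real code */

/* Re-weight only when the change is large enough to matter. */
static void maybe_set_entity_weight(struct group_entity *se, unsigned long new_weight)
{
	unsigned long old = se->weight;
	unsigned long diff = (new_weight > old) ? new_weight - old : old - new_weight;

	if (diff < shares_thresh)
		return;			/* small or no change: avoid the lock entirely */

	pthread_mutex_lock(&se->rq_lock);
	se->weight = new_weight;
	pthread_mutex_unlock(&se->rq_lock);
}

int main(void)
{
	struct group_entity se = { PTHREAD_MUTEX_INITIALIZER, 1024 };

	maybe_set_entity_weight(&se, 1026);	/* within threshold: no lock, weight unchanged */
	maybe_set_entity_weight(&se, 1100);	/* big change: lock taken, weight updated */
	printf("weight = %lu\n", se.weight);	/* prints 1100 */
	return 0;
}
```

This also shows why the top-level entries never pay the locking cost: their recomputed weight never moves, so the early return always fires, while lower-level entries merely get fuzzed by up to shares_thresh.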
Diffstat (limited to 'kernel/sched_rt.c')
0 files changed, 0 insertions, 0 deletions