author		Paul Turner <pjt@google.com>	2011-01-21 20:45:03 -0800
committer	Ingo Molnar <mingo@elte.hu>	2011-01-26 12:33:19 +0100
commit		f07333bf6ee66d9b49286cec4371cf375e745b7a (patch)
tree		03a1e9968eadf2465b6fc04fb8c1ab67327ea0f2 /kernel
parent		6d5ab2932a21ea54406ab95c43ecff90a3eddfda (diff)
sched: Avoid expensive initial update_cfs_load()
Since cfs_rq->{load_stamp,load_last} are zero-initialized, the initial load update
will consider the delta to be 'since the beginning of time'.
This results in a lot of pointless divisions to bring this large period back
within sysctl_sched_shares_window.
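
For illustration, a minimal user-space sketch (not the kernel code verbatim) of the
per-window decay that update_cfs_load() performs: the elapsed delta is folded into
load_period and then load_period/load_avg are repeatedly halved until load_period
fits within the shares window. The 10ms value for sysctl_sched_shares_window and the
helper names are assumptions here. A 'since boot' delta costs many halvings on the
very first update, while a delta bounded by one window costs none.

#include <stdio.h>
#include <stdint.h>

/* assumed default of sysctl_sched_shares_window: 10ms, in nanoseconds */
#define SHARES_WINDOW_NS 10000000ULL

/* count how many halvings the decay loop would do for a given delta */
static unsigned int halvings_needed(uint64_t load_period, uint64_t delta)
{
	unsigned int n = 0;

	load_period += delta;
	while (load_period > SHARES_WINDOW_NS) {
		load_period /= 2;	/* the kernel halves load_avg in step */
		n++;
	}
	return n;
}

int main(void)
{
	/* first update with load_stamp == 0: delta is the whole uptime, e.g. 100s */
	printf("delta = 100s   -> %u halvings\n",
	       halvings_needed(0, 100ULL * 1000000000ULL));
	/* a delta already bounded by one window needs none */
	printf("delta = 10ms-1 -> %u halvings\n",
	       halvings_needed(0, SHARES_WINDOW_NS - 1));
	return 0;
}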
Fix this by initializing load_stamp to 1 at cfs_rq initialization; this allows an
initial load_stamp > load_last, which then lets the standard idle truncation
proceed.
We avoid spinning (and slightly improve consistency) by fixing delta to
[period - 1] in this path, resulting in a slightly more predictable shares ramp.
(Previously the amount of idle time preserved by the overflow would range between
[period/2, period-1].)
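
As a hedged sketch of the combined effect (the struct and helper below are
illustrative stand-ins; only the field names and the truncation test follow the
patch): with load_stamp initialized to 1 and load_last still 0, the very first
update satisfies load_stamp > load_last, takes the truncation branch, and clamps
delta to period - 1 instead of 'time since boot'.

#include <stdio.h>

typedef unsigned long long u64;

#define PERIOD_NS 10000000ULL	/* assumed 10ms shares window, in nanoseconds */

/* illustrative stand-in for the relevant cfs_rq fields */
struct cfs_rq_sketch {
	u64 load_stamp;
	u64 load_last;
	u64 load_period;
	u64 load_avg;
};

/* mirrors the patched truncation path in update_cfs_load() */
static u64 first_update_delta(struct cfs_rq_sketch *cfs_rq, u64 now)
{
	u64 delta = now - cfs_rq->load_stamp;

	/* truncate load history after 4 idle periods */
	if (cfs_rq->load_stamp > cfs_rq->load_last &&
	    now - cfs_rq->load_last > 4 * PERIOD_NS) {
		cfs_rq->load_period = 0;
		cfs_rq->load_avg = 0;
		delta = PERIOD_NS - 1;	/* clamp added by this patch */
	}
	cfs_rq->load_stamp = now;
	cfs_rq->load_last = now;
	return delta;
}

int main(void)
{
	struct cfs_rq_sketch cfs_rq = { .load_stamp = 1 };	/* post-patch init */
	u64 now = 100ULL * 1000000000ULL;			/* e.g. 100s after boot */

	printf("first delta = %llu ns (instead of ~%llu ns)\n",
	       first_update_delta(&cfs_rq, now), now);
	return 0;
}

With delta pinned at period - 1, the subsequent halving loop does not run at all on
that update, which is what makes the shares ramp predictable.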
Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20110122044852.102126037@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched.c		| 2 ++
-rw-r--r--	kernel/sched_fair.c	| 1 +
2 files changed, 3 insertions, 0 deletions
diff --git a/kernel/sched.c b/kernel/sched.c
index e0fa3ff7f19..6820b5b3a96 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -7796,6 +7796,8 @@ static void init_cfs_rq(struct cfs_rq *cfs_rq, struct rq *rq)
 	INIT_LIST_HEAD(&cfs_rq->tasks);
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	cfs_rq->rq = rq;
+	/* allow initial update_cfs_load() to truncate */
+	cfs_rq->load_stamp = 1;
 #endif
 	cfs_rq->min_vruntime = (u64)(-(1LL << 20));
 }
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 0c550c841ee..4cbc9121094 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -733,6 +733,7 @@ static void update_cfs_load(struct cfs_rq *cfs_rq, int global_update)
 	    now - cfs_rq->load_last > 4 * period) {
 		cfs_rq->load_period = 0;
 		cfs_rq->load_avg = 0;
+		delta = period - 1;
 	}
 
 	cfs_rq->load_stamp = now;