path: root/kernel/sched_idletask.c
author    Thomas Gleixner <tglx@linutronix.de>    2009-04-11 10:43:41 +0200
committer Thomas Gleixner <tglx@linutronix.de>    2009-05-15 15:32:45 +0200
commit    dce48a84adf1806676319f6f480e30a6daa012f9 (patch)
tree      79151f5d31d9c3dcdc723ab8877cb943b944890e /kernel/sched_idletask.c
parent    2ff799d3cff1ecb274049378b28120ee5c1c5e5f (diff)
sched, timers: move calc_load() to scheduler
Dimitri Sivanich noticed that xtime_lock is held write locked across calc_load(), which iterates over all online CPUs. That can cause long latencies for xtime_lock readers on large SMP systems.

The load average calculation is a rough estimate anyway, so there is no real need to protect the readers against the update: it is not a problem if the avenrun array is updated while a reader copies the values.

Instead of iterating over all online CPUs, let the scheduler_tick code update the number of active tasks shortly before the avenrun update happens. The avenrun update itself is handled by the CPU which calls do_timer().

[ Impact: reduce xtime_lock write locked section ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra <peterz@infradead.org>
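The diffstat below is limited to kernel/sched_idletask.c, so the sched.c side of the change is not visible here. As a rough, non-authoritative sketch of the mechanism the message describes (per-CPU deltas folded into one global atomic counter, with the do_timer() CPU later folding that sum into avenrun[]), it might look like the following. calc_load_account_active() appears in the hunk below; the global counter name calc_load_tasks, the per-rq field calc_load_active, and calc_global_load() are assumptions based on the description, while FIXED_1, FSHIFT and EXP_1/EXP_5/EXP_15 are the kernel's long-standing fixed-point load-average constants:

	/* Global sum of active tasks; names here are illustrative,
	 * reconstructed from the commit message, not from this diff. */
	static atomic_long_t calc_load_tasks;

	/* Classic fixed-point exponential moving average used for
	 * the load average, FIXED_1 == 1 << FSHIFT. */
	static unsigned long
	calc_load(unsigned long load, unsigned long exp, unsigned long active)
	{
		load *= exp;
		load += active * (FIXED_1 - exp);
		return load >> FSHIFT;
	}

	/* Called from scheduler_tick(): fold this runqueue's change in
	 * active tasks into the global counter. */
	static void calc_load_account_active(struct rq *this_rq)
	{
		long nr_active, delta;

		nr_active = this_rq->nr_running;
		nr_active += (long)this_rq->nr_uninterruptible;

		/* Only touch the shared atomic when this CPU's count changed */
		if (nr_active != this_rq->calc_load_active) {
			delta = nr_active - this_rq->calc_load_active;
			this_rq->calc_load_active = nr_active;
			atomic_long_add(delta, &calc_load_tasks);
		}
	}

	/* Run by the CPU that calls do_timer(), outside the xtime_lock
	 * write locked section; a racing reader of avenrun[] is harmless. */
	void calc_global_load(void)
	{
		long active = atomic_long_read(&calc_load_tasks);

		active = active > 0 ? active * FIXED_1 : 0;

		avenrun[0] = calc_load(avenrun[0], EXP_1, active);
		avenrun[1] = calc_load(avenrun[1], EXP_5, active);
		avenrun[2] = calc_load(avenrun[2], EXP_15, active);
	}

Folding the delta into the atomic only when the per-CPU count actually changes keeps cache-line traffic on the shared counter low, and the extra call in pick_next_task_idle() (shown in the hunk below) captures the count just before a CPU may go idle for a long stretch during which scheduler_tick() would not run.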
Diffstat (limited to 'kernel/sched_idletask.c')
-rw-r--r--    kernel/sched_idletask.c    3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_idletask.c b/kernel/sched_idletask.c
index 8a21a2e28c1..499672c10cb 100644
--- a/kernel/sched_idletask.c
+++ b/kernel/sched_idletask.c
@@ -22,7 +22,8 @@ static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int sy
static struct task_struct *pick_next_task_idle(struct rq *rq)
{
schedstat_inc(rq, sched_goidle);
-
+ /* adjust the active tasks as we might go into a long sleep */
+ calc_load_account_active(rq);
return rq->idle;
}