author		Peter Zijlstra <a.p.zijlstra@chello.nl>	2012-05-11 17:31:26 +0200
committer	Ingo Molnar <mingo@kernel.org>	2012-05-14 15:05:27 +0200
commit		556061b00c9f2fd6a5524b6bde823ef12f299ecf
tree		087891d70dbcd97cd23ac3eb92fad6a905c0f527 /kernel/sched/sched.h
parent		870a0bb5d636156502769233d02a0d5791d4366a
sched/nohz: Fix rq->cpu_load[] calculations
While investigating why the load-balancer was doing funny things I found that
the rq->cpu_load[] tables were completely screwy. A bit more digging revealed
that the updates that got through were missing ticks followed by a catch-up
of 2 ticks.
The catch-up assumes the cpu was idle during that time (since only nohz can
cause missed ticks and the machine is idle etc.), which means that especially
the higher indices were significantly lower than they ought to be.
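To make the arithmetic concrete, here is a tiny standalone sketch of the usual
per-index update, cpu_load[i] = (cpu_load[i] * (2^i - 1) + this_load) / 2^i;
the function name and the userspace setting are illustrative only, not the
kernel code:

	#include <stdio.h>

	#define CPU_LOAD_IDX_MAX 5

	/*
	 * Illustrative stand-in for the cpu_load[] update, applied once per tick:
	 * cpu_load[i] = (cpu_load[i] * (2^i - 1) + this_load) / 2^i.
	 */
	static void cpu_load_tick(unsigned long cpu_load[], unsigned long this_load)
	{
		for (int i = 0; i < CPU_LOAD_IDX_MAX; i++) {
			unsigned long scale = 1UL << i;

			cpu_load[i] = (cpu_load[i] * (scale - 1) + this_load) / scale;
		}
	}

	int main(void)
	{
		unsigned long load[CPU_LOAD_IDX_MAX] = { 1024, 1024, 1024, 1024, 1024 };

		/* The bogus catch-up: two ticks on a busy CPU accounted as idle. */
		cpu_load_tick(load, 0);
		cpu_load_tick(load, 0);

		for (int i = 0; i < CPU_LOAD_IDX_MAX; i++)
			printf("cpu_load[%d] = %lu\n", i, load[i]);
		return 0;
	}

Accounting busy ticks with this_load == 0 pulls every index down, and since the
higher indices also recover only slowly, repeated bogus catch-ups leave them
well below their true value.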
The reason for this is that it is not correct to compare against jiffies on
every jiffy on any cpu other than the cpu that updates jiffies.
This patch kludges around it by only doing the catch-up work from
nohz_idle_balance() and doing the regular update unconditionally from the
tick.
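Sketched below, as a self-contained model, is the shape of that split; only
update_idle_cpu_load() shows up in the header hunk at the bottom of this page,
so the remaining names (struct rq_model, __update_cpu_load_model(),
update_cpu_load_active_model(), the last_load_update_tick bookkeeping) are
assumptions made for illustration, not the patch itself:

	#include <stdio.h>

	#define CPU_LOAD_IDX_MAX 5

	/* Illustrative stand-in for the parts of struct rq used here. */
	struct rq_model {
		unsigned long cpu_load[CPU_LOAD_IDX_MAX];
		unsigned long last_load_update_tick;
		unsigned long load_weight;
	};

	static unsigned long jiffies_model;	/* stand-in for the global jiffies counter */

	/* Decay for any missed (idle) ticks, then fold in this_load once. */
	static void __update_cpu_load_model(struct rq_model *rq, unsigned long this_load,
					    unsigned long pending_updates)
	{
		for (int i = 0; i < CPU_LOAD_IDX_MAX; i++) {
			unsigned long scale = 1UL << i;
			unsigned long load = rq->cpu_load[i];
			unsigned long missed = pending_updates ? pending_updates - 1 : 0;

			while (missed--)
				load = load * (scale - 1) / scale;
			rq->cpu_load[i] = (load * (scale - 1) + this_load) / scale;
		}
	}

	/* nohz path: the CPU really was idle, so catching up from jiffies is valid here. */
	static void update_idle_cpu_load_model(struct rq_model *rq)
	{
		unsigned long pending_updates = jiffies_model - rq->last_load_update_tick;

		rq->last_load_update_tick = jiffies_model;
		__update_cpu_load_model(rq, 0, pending_updates);
	}

	/* Tick path: exactly one update, never a comparison against jiffies. */
	static void update_cpu_load_active_model(struct rq_model *rq)
	{
		rq->last_load_update_tick = jiffies_model;
		__update_cpu_load_model(rq, rq->load_weight, 1);
	}

	int main(void)
	{
		struct rq_model rq = {
			.cpu_load = { 1024, 1024, 1024, 1024, 1024 },
			.load_weight = 1024,
		};

		jiffies_model = 1;
		update_cpu_load_active_model(&rq);	/* ordinary busy tick */

		jiffies_model = 4;
		update_idle_cpu_load_model(&rq);	/* catch up after three idle ticks */

		for (int i = 0; i < CPU_LOAD_IDX_MAX; i++)
			printf("cpu_load[%d] = %lu\n", i, rq.cpu_load[i]);
		return 0;
	}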
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: pjt@google.com
Cc: Venkatesh Pallipadi <venki@google.com>
Link: http://lkml.kernel.org/n/tip-tp4kj18xdd5aj4vvj0qg55s2@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/sched.h')
 kernel/sched/sched.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 7282e7b5f4c..ba9dccfd24c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -876,7 +876,7 @@ extern void resched_cpu(int cpu);
 extern struct rt_bandwidth def_rt_bandwidth;
 extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
 
-extern void update_cpu_load(struct rq *this_rq);
+extern void update_idle_cpu_load(struct rq *this_rq);
 
 #ifdef CONFIG_CGROUP_CPUACCT
 #include <linux/cgroup.h>