path: root/kernel/sched_idletask.c
author	Linus Torvalds <torvalds@linux-foundation.org>	2009-12-12 11:34:10 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-12-12 11:34:10 -0800
commit	702a7c7609bec3a940b6a46b0d6ab9ce45274580 (patch)
tree	6c169691449259410b9b51a146acb0e837dae96a /kernel/sched_idletask.c
parent	053fe57ac249a9531c396175778160d9e9509399 (diff)
parent	b9889ed1ddeca5a3f3569c8de7354e9e97d803ae (diff)
Merge branch 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (21 commits)
  sched: Remove forced2_migrations stats
  sched: Fix memory leak in two error corner cases
  sched: Fix build warning in get_update_sysctl_factor()
  sched: Update normalized values on user updates via proc
  sched: Make tunable scaling style configurable
  sched: Fix missing sched tunable recalculation on cpu add/remove
  sched: Fix task priority bug
  sched: cgroup: Implement different treatment for idle shares
  sched: Remove unnecessary RCU exclusion
  sched: Discard some old bits
  sched: Clean up check_preempt_wakeup()
  sched: Move update_curr() in check_preempt_wakeup() to avoid redundant call
  sched: Sanitize fork() handling
  sched: Clean up ttwu() rq locking
  sched: Remove rq->clock coupling from set_task_cpu()
  sched: Consolidate select_task_rq() callers
  sched: Remove sysctl.sched_features
  sched: Protect sched_rr_get_param() access to task->sched_class
  sched: Protect task->cpus_allowed access in sched_getaffinity()
  sched: Fix balance vs hotplug race
  ...

Fixed up conflicts in kernel/sysctl.c (due to sysctl cleanup)
Diffstat (limited to 'kernel/sched_idletask.c')
-rw-r--r--	kernel/sched_idletask.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched_idletask.c b/kernel/sched_idletask.c
index b133a28fcde..33d5384a73a 100644
--- a/kernel/sched_idletask.c
+++ b/kernel/sched_idletask.c
@@ -97,7 +97,7 @@ static void prio_changed_idle(struct rq *rq, struct task_struct *p,
 		check_preempt_curr(rq, p, 0);
 }
 
-unsigned int get_rr_interval_idle(struct task_struct *task)
+unsigned int get_rr_interval_idle(struct rq *rq, struct task_struct *task)
 {
 	return 0;
 }
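The change adds a struct rq argument so get_rr_interval_idle matches the other per-class hooks, letting the caller resolve and lock the task's runqueue once before dispatching (see "sched: Protect sched_rr_get_param() access to task->sched_class" in the series above). A minimal standalone sketch of that calling pattern follows; the struct and table names here are illustrative stand-ins, not the kernel's actual types.

#include <stdio.h>

/* Illustrative stand-ins for kernel types, not the real definitions. */
struct rq { int cpu; };
struct task_struct { const char *comm; };

/* Method table mirroring the sched_class pattern after this change:
 * the hook receives the runqueue as well as the task, so the caller
 * can pin the task to its rq before invoking it. */
struct sched_ops {
	unsigned int (*get_rr_interval)(struct rq *rq, struct task_struct *task);
};

/* Idle tasks have no round-robin timeslice, so report 0. */
static unsigned int get_rr_interval_idle(struct rq *rq, struct task_struct *task)
{
	(void)rq;
	(void)task;
	return 0;
}

static const struct sched_ops idle_ops = {
	.get_rr_interval = get_rr_interval_idle,
};

int main(void)
{
	struct rq rq = { .cpu = 0 };
	struct task_struct idle = { .comm = "swapper" };

	/* The caller looks up the runqueue once and passes it down. */
	printf("rr interval: %u\n", idle_ops.get_rr_interval(&rq, &idle));
	return 0;
}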