path: root/kernel
2007-08-25  sched: s/sched_latency/sched_min_granularity  (Ingo Molnar)
The runtime limit and wakeup granularity used to be a function of granularity, and that was incorrectly changed to sched_latency. Fix this to make wakeup granularity a function of min-granularity, and the runtime limit equal to latency.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-25  sched: cleanup, sched_granularity -> sched_min_granularity  (Ingo Molnar)
due to adaptive granularity scheduling the role of sched_granularity has changed to "minimum granularity", so rename the variable (and the tunable) accordingly. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
2007-08-25  sched: adaptive scheduler granularity  (Peter Zijlstra)
Instead of specifying the preemption granularity, specify the wanted latency. By fixing the granularity to a constant, the wakeup latency becomes a function of the number of running tasks on the rq. Invert this relation: sysctl_sched_granularity becomes a minimum for the dynamic granularity computed from the new sysctl_sched_latency. Then use this latency to make more intelligent granularity decisions: if there are fewer tasks running, we can schedule coarser. This helps performance while still always keeping the latency target.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
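To illustrate the inverted relation, here is a small user-space sketch (the sysctl names come from the message above; the exact in-kernel arithmetic is assumed, not quoted):

    #include <stdio.h>

    /* Derive a dynamic granularity from a fixed latency target: with fewer runnable
     * tasks we can schedule coarser, and sysctl_sched_granularity acts as a floor. */
    static unsigned long dyn_granularity(unsigned long sysctl_sched_latency,
                                         unsigned long sysctl_sched_granularity,
                                         unsigned long nr_running)
    {
            unsigned long gran = sysctl_sched_latency / (nr_running ? nr_running : 1);

            return gran > sysctl_sched_granularity ? gran : sysctl_sched_granularity;
    }

    int main(void)
    {
            /* 20000 us latency target, 2000 us minimum granularity (illustrative values) */
            for (unsigned long nr = 1; nr <= 16; nr *= 2)
                    printf("nr_running=%2lu -> granularity=%lu us\n",
                           nr, dyn_granularity(20000, 2000, nr));
            return 0;
    }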
2007-08-25  sched: fix CONFIG_SCHED_DEBUG dependency of lockdep sysctls  (Peter Zijlstra)
Make the lockdep sysctls not depend on CONFIG_SCHED_DEBUG. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: fix startup penalty calculation  (Ingo Molnar)
fix task startup penalty miscalculation: sysctl_sched_granularity is unsigned int and wait_runtime is long so we first have to convert it to long before turning it negative ... Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: simplify bonus calculation #2  (Peter Zijlstra)
Current code:

    delta = calc_delta_mine(delta_exec, curr->load.weight, lw);
    delta = min((u64)delta, cfs_rq->sleeper_bonus);

Notice that this calc_delta_mine() line is exactly delta_mine, which gives:

    delta = min((u64)delta_mine, cfs_rq->sleeper_bonus);

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: simplify bonus calculation #1  (Peter Zijlstra)
Current code:

    delta = min(cfs_rq->sleeper_bonus, (u64)delta_exec);
    delta = calc_delta_mine(delta, curr->load.weight, lw);
    delta = min((u64)delta, cfs_rq->sleeper_bonus);

Drop the first min(), because we clip against sleeper_bonus in the 3rd line again. That gives:

    delta = calc_delta_mine(delta_exec, curr->load.weight, lw);
    delta = min((u64)delta, cfs_rq->sleeper_bonus);

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: tidy up and simplify the bonus balance  (Ingo Molnar)
Make the bonus balance more consistent: do not hand out a bonus if there's too much in flight already, and only deduct as much from a runner as it has the capacity for. This makes the bonus engine a zero-sum game (as intended).

This also simplifies the code:

    text    data   bss   dec     hex   filename
    34770   2998   24    37792   93a0  sched.o.before
    34749   2998   24    37771   938b  sched.o.after

and it also avoids overscheduling in sleep-happy workloads like hackbench.c.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: optimize task_tick_rt() a bit  (Dmitry Adamushko)
Mitchell Erblich suggested a quality-of-implementation change to not requeue SCHED_RR tasks if there's only a single task on the runqueue, by checking for rq->nr_running == 1. Provide a more efficient implementation of that, which checks only that particular RT priority queue.

[ From: mingo@elte.hu ]

Also, first requeue the task and then set need_resched, which results in slightly better machine-instruction ordering. Also clean up the code a bit.

Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
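As a rough illustration of why checking just the task's own priority queue is cheap, here is a user-space sketch of a single-entry test on a circular doubly-linked list (the kernel's run lists are struct list_head based; the helper names and the exact check used by the patch are assumptions for illustration):

    #include <stdio.h>

    /* A minimal circular doubly-linked list, in the spirit of the kernel's
     * struct list_head (names here are illustrative). */
    struct list_head { struct list_head *next, *prev; };

    static void list_init(struct list_head *h) { h->next = h->prev = h; }

    static void list_add_tail(struct list_head *n, struct list_head *h)
    {
            n->prev = h->prev;
            n->next = h;
            h->prev->next = n;
            h->prev = n;
    }

    /* When an entry is the only one on its queue, its prev and next both point at
     * the list head, so requeueing it would change nothing and can be skipped. */
    static int only_entry(const struct list_head *entry)
    {
            return entry->prev == entry->next;
    }

    int main(void)
    {
            struct list_head queue, a, b;

            list_init(&queue);
            list_add_tail(&a, &queue);
            printf("one task queued:  skip requeue = %d\n", only_entry(&a));  /* 1 */
            list_add_tail(&b, &queue);
            printf("two tasks queued: skip requeue = %d\n", only_entry(&a));  /* 0 */
            return 0;
    }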
2007-08-24  sched: simplify can_migrate_task()  (Sven-Thorsten Dietrich)
Remove a trivial conditional branch in the Linux scheduler's can_migrate_task() function.

    text    data   bss   dec     hex   filename
    34770   2998   24    37792   93a0  sched.o.before
    34757   2998   24    37779   9393  sched.o.after

Signed-off-by: Sven-Thorsten Dietrich <sven@thebigcorporation.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: remove HZ dependency from the granularity default  (Ingo Molnar)
Remove the HZ dependency from the granularity default. Use 10 msec for the base granularity, 1 msec for wakeup granularity and 25 msec for batch wakeup granularity. (These defaults are close to the values that the default HZ=250 setting produced previously, and HZ=250 is the most common setting.)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-24  sched: CONFIG_SCHED_GROUP_FAIR=y fixlet  (Bruce Ashfield)
When I built with CONFIG_FAIR_GROUP_SCHED=y, I needed the following change to make things right.

[ From: mingo@elte.hu ]

This config option is not upstream-configurable right now, but let's fix this for completeness.

Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-23  Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched:
  sched: tweak the sched_runtime_limit tunable
  sched: skip updating rq's next_balance under null SD
  sched: fix broken SMT/MC optimizations
  sched: accounting regression since rc1
  sched: fix sysctl directory permissions
  sched: sched_clock_idle_[sleep|wakeup]_event()
2007-08-23  Merge master.kernel.org:/pub/scm/linux/kernel/git/gregkh/driver-2.6  (Linus Torvalds)
* master.kernel.org:/pub/scm/linux/kernel/git/gregkh/driver-2.6:
  sysfs: don't warn on removal of a nonexistent binary file
  HOWTO: latest lxr url address changed
  HOWTO: korean translation of Documentation/HOWTO
  Fix Off-by-one in /sys/module/*/refcnt
  sysfs: fix locking in sysfs_lookup() and sysfs_rename_dir()
2007-08-23  sched: tweak the sched_runtime_limit tunable  (Ingo Molnar)
Michael Gerdau reported reniced task CPU usage weirdnesses. Such symptoms can be caused by limit underruns so double the sched_runtime_limit. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-23  sched: skip updating rq's next_balance under null SD  (Suresh Siddha)
Was playing with sched_smt_power_savings/sched_mc_power_savings and found out that while the scheduler domains are reconstructed when sysfs settings change, rebalance_domains() can get triggered with a null domain on other cpus, which sets next_balance to jiffies + 60*HZ, resulting in no idle/busy balancing for 60 seconds. Fix this.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-23  sched: fix broken SMT/MC optimizations  (Suresh Siddha)
On a four-package system with HT, the HT load-balancing optimizations were broken. For example, if two tasks end up running on two logical threads of one of the packages, the scheduler is not able to pull one of the tasks to a completely idle package. In this scenario, for nice-0 tasks, the imbalance calculated by the scheduler will be 512 and find_busiest_queue() will return 0 (as each cpu's load is 1024 > imbalance and has only one task running). Similarly, the MC scheduler optimizations also get fixed with this patch.

[ mingo@elte.hu: restored fair balancing by increasing the fuzz and adding it back to the power decision, without the /2 factor. ]

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-23  sched: fix sysctl directory permissions  (Eric W. Biederman)
There are two remaining gotchas:

  - The directories have impossible permissions (writeable).
  - The ctl_name for the kernel directory is inconsistent with everything else. It should be CTL_KERN.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-23  sched: sched_clock_idle_[sleep|wakeup]_event()  (Ingo Molnar)
Construct a more or less wall-clock time out of sched_clock(), by using ACPI-idle's existing knowledge about how much time we spent idling. This allows the rq clock to work around TSC-stops-in-C2, TSC-gets-corrupted-in-C3 types of problems. (Besides the scheduler's statistics, this also benefits blktrace and printk-timestamps.)

Furthermore, the precise before-C2/C3-sleep and after-C2/C3-wakeup callbacks allow the scheduler to get the most out of the period where the CPU has a reliable TSC. This results in slightly more precise task statistics.

The ACPI bits were acked by Len.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Len Brown <len.brown@intel.com>
2007-08-22  signalfd: fix interaction with posix-timers  (Oleg Nesterov)
dequeue_signal:

    if (__SI_TIMER) {
        spin_unlock(&tsk->sighand->siglock);
        do_schedule_next_timer(info);
        spin_lock(&tsk->sighand->siglock);
    }

Unless tsk == current, this is absolutely unsafe: nothing prevents tsk from exiting. If signalfd was passed to another process, do_schedule_next_timer() is just wrong.

Add yet another "tsk == current" check into dequeue_signal().

This patch fixes an oopsable bug, but breaks the scheduling of posix timers if the shared __SI_TIMER signal was fetched via signalfd attached to another sub-thread. Mostly fixed by the next patch.

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Davide Libenzi <davidel@xmailserver.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Michael Kerrisk <mtk-manpages@gmx.net>
Cc: Roland McGrath <roland@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-22  posix-timers: fix creation race  (Oleg Nesterov)
sys_timer_create() sets ->it_process and unlocks ->siglock, then checks tmr->it_sigev_notify to decide whether get_task_struct() is needed. We have already passed ->it_id to the caller, so another thread can delete this timer and free its memory in between.

As a minimal fix, move this code under ->siglock; sys_timer_delete() takes it too before calling release_posix_timer(). A proper serialization would be to take ->it_lock, but we add a partly initialized timer on posix_timers_id, which is not good.

Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-22  posix-timers: fix deletion race  (Thomas Gleixner)
timer_delete does:

    lock_timer();
    timer->it_process = NULL;
    unlock_timer();
    release_posix_timer();

timer->it_process is checked in lock_timer() to prevent access to a timer, which is on the way to be deleted, but the check happens after idr_lock is dropped. This allows release_posix_timer() to delete the timer before the lock code can check the timer:

    CPU 0                               CPU 1

    lock_timer();
    timer->it_process = NULL;
    unlock_timer();
                                        lock_timer()
                                            spin_lock(idr_lock);
                                            timer = idr_find();
                                            spin_lock(timer->lock);
                                            spin_unlock(idr_lock);
    release_posix_timer();
        spin_lock(idr_lock);
        idr_remove(timer);
        spin_unlock(idr_lock);
        free_timer(timer);
                                        if (timer->......)

Change the locking to prevent this.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-22  free_irq(): fix DEBUG_SHIRQ handling  (Andrew Morton)
If we're going to run the handler from free_irq() then we must do it with local irqs disabled. Otherwise lockdep complains that the handler is taking irq-safe spinlocks in a non-irq-safe fashion.
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
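A sketch of what that looks like in the DEBUG_SHIRQ path of free_irq() (the variable names and surrounding structure are assumed here, not quoted from the patch):

    #ifdef CONFIG_DEBUG_SHIRQ
            /* DEBUG_SHIRQ fires the handler one last time to catch drivers that free a
             * shared irq while the device can still raise it.  Run it the way a real
             * interrupt would run, with local interrupts disabled; otherwise lockdep
             * sees irq-safe locks being taken in an irq-unsafe context.
             * ('action' stands for the irqaction being freed; the name is illustrative.) */
            if (action->flags & IRQF_SHARED) {
                    unsigned long flags;

                    local_irq_save(flags);
                    action->handler(irq, dev_id);
                    local_irq_restore(flags);
            }
    #endif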
2007-08-22  futex_unlock_pi() hurts my brain and may cause application deadlock  (john stultz)
Avoid futex_unlock_pi returning -EFAULT (which results in deadlock), by clearing uval before jumping to retry_locked. Signed-off-by: John Stultz <johnstul@us.ibm.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-22  kernel/auditsc.c: fix an off-by-one  (Adrian Bunk)
This patch fixes an off-by-one in a BUG_ON() spotted by the Coverity checker. Signed-off-by: Adrian Bunk <bunk@stusta.de> Cc: Amy Griffis <amy.griffis@hp.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-22  Fix Off-by-one in /sys/module/*/refcnt  (Alexey Dobriyan)
sysfs internals were changed to not pin the module in question.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Kay Sievers <kay.sievers@vrfy.org>
Acked-by: Tejun Heo <htejun@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2007-08-21  fix - ensure we don't use bootconsoles after init has been released  (Robin Getz)
Gerd Hoffmann pointed out that my patch from yesterday can lead to a null pointer dereference if the kernel is booted with no console, and no earlyprintk defined. This fixes that issue. Signed-off-by: Robin Getz <rgetz@blackfin.uclinux.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-20  ensure we don't use bootconsoles after init has been released  (Robin Getz)
This is a followup to Gerd Hoffmann's earlyprintk cleanup patch:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=69331af79cf29e26d1231152a172a1a10c2df511

This ensures that a bootconsole is unregistered if it is not replaced. The current implementation spews garbage out the bootconsole in this case, since the bootconsole structure is normally in the init section and is freed but still used.

Signed-off-by: Robin Getz <rgetz@blackfin.uclinux.org>
Acked-by: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Mike Frysinger <vapier.adi@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-19  Remove double inclusion of linux/capability.h  (Christian Heim)
Remove the second inclusion of linux/capability.h, which has been introduced with "[PATCH] move capable() to capability.h" (commit c59ede7b78db329949d9cdcd7064e22d357560ef) Signed-off-by: Christian Heim <phreak@gentoo.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-12  Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched:
  sched: run_rebalance_domains: s/SCHED_IDLE/CPU_IDLE/
  sched: fix sleeper bonus
  sched: make global code static
2007-08-12  genirq: suppress resend of level interrupts  (Thomas Gleixner)
Level type interrupts are resent by the interrupt hardware when they are still active at irq_enable(). Suppress the resend mechanism for interrupts marked as level. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-12  genirq: cleanup mismerge artifact  (Thomas Gleixner)
Commit 5a43a066b11ac2fe84cf67307f20b83bea390f83: "genirq: Allow fasteoi handler to retrigger disabled interrupts" was erroneously applied to handle_level_irq(). This added the irq retrigger / resend functionality to the level irq handler. Revert the offending bits. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-12  sched: run_rebalance_domains: s/SCHED_IDLE/CPU_IDLE/  (Oleg Nesterov)
rebalance_domains(SCHED_IDLE) looks strange (typo); change it to CPU_IDLE. The effect of this bug was slightly more aggressive idle-balancing on SMP than intended.
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-12  sched: fix sleeper bonus  (Ingo Molnar)
Peter Zijlstra noticed that the sleeper bonus deduction code was not properly rate-limited: a task that scheduled more frequently would get a disproportionately large deduction. So limit the deduction to delta_exec.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-12  sched: make global code static  (Adrian Bunk)
This patch makes the following needlessly global code static:

  - arch_reinit_sched_domains()
  - struct attr_sched_mc_power_savings
  - struct attr_sched_smt_power_savings

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched:
  sched debug: dont print kernel address in /proc/sched_debug
  sched: fix typo in the FAIR_GROUP_SCHED branch
  sched: improve rq-clock overflow logic
2007-08-11  fix compilation with gcc 4.2  (Peter Chubb)
gcc-4.2 is a lot more picky about its symbol handling. EXPORT_SYMBOL no longer works on symbols that are undefined or defined with static scope. For example, with CONFIG_PROFILE off, I see:

    kernel/profile.c:206: error: __ksymtab_profile_event_unregister causes a section type conflict
    kernel/profile.c:205: error: __ksymtab_profile_event_register causes a section type conflict

This patch moves the EXPORTs inside the #ifdef CONFIG_PROFILE, so we only try to export symbols that are defined.

Also, in kernel/kprobes.c there's an EXPORT_SYMBOL_GPL() for jprobe_return, which if CONFIG_JPROBES is undefined is a static inline and gives the same error.

And in drivers/acpi/resources/rsxface.c, there's an ACPI_EXPORT_SYMBOL() for a static symbol. If it's static, it's not accessible from outside the compilation unit, so it should not be exported.

These three changes allow building a zx1_defconfig kernel with gcc 4.2 on IA64.

[akpm@linux-foundation.org: export jprobe_return properly]
Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
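The shape of the fix is simply keeping the export next to the definition, inside the same #ifdef. A minimal sketch (the signature and export flavour are assumptions for illustration, not the exact kernel/profile.c code):

    #ifdef CONFIG_PROFILE
    int profile_event_register(struct notifier_block *n)
    {
            /* ... real registration code ... */
            return 0;
    }
    EXPORT_SYMBOL_GPL(profile_event_register);  /* export only when the symbol is defined */
    #endif /* CONFIG_PROFILE */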
2007-08-11  timer: remove clockevents_unregister_notifier  (Miao Xie)
I found a function (clockevents_unregister_notifier) which is not called by anything in the tree.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-11  Hibernation: do not try to mark invalid PFNs as nosave  (Rafael J. Wysocki)
On some systems some PFNs reported by the early initialization code as 'nosave' may be invalid. If we try to set the corresponding bits in the hibernation bitmap, BUG_ON() in memory_bm_find_bit() will be triggered and the system won't be able to boot (cf. https://bugzilla.novell.com/show_bug.cgi?id=296242). Prevent this from happening by verifying if the 'nosave' PFNs are valid in mark_nosave_pages(). Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
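A sketch of the idea in mark_nosave_pages() (the structure and helper names here are illustrative, not necessarily the exact ones in kernel/power/snapshot.c):

    static void mark_nosave_pages(struct memory_bitmap *bm)
    {
            struct nosave_region *region;
            unsigned long pfn;

            list_for_each_entry(region, &nosave_regions, list)
                    for (pfn = region->start_pfn; pfn < region->end_pfn; pfn++)
                            /* Some PFNs reported as 'nosave' by the early init code may
                             * not exist; only mark the valid ones, instead of letting the
                             * BUG_ON() in memory_bm_find_bit() trigger at boot. */
                            if (pfn_valid(pfn))
                                    memory_bm_set_bit(bm, pfn);
    }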
2007-08-11  Fix missing numa_zonelist_order sysctl  (Lee Schermerhorn)
Misplaced #endif is hiding the numa_zonelist_order sysctl when !SECURITY. Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com> Cc: Mel Gorman <mel@csn.ul.ie> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-10  sched debug: dont print kernel address in /proc/sched_debug  (Ingo Molnar)
Arjan van de Ven pointed out that we should not print kernel addresses in world-readable /proc files - fix that. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
2007-08-10  sched: fix typo in the FAIR_GROUP_SCHED branch  (Ingo Molnar)
While there's no in-tree way to turn on group scheduling at the moment, fix a typo in it nevertheless.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-10  sched: improve rq-clock overflow logic  (Ingo Molnar)
Improve the rq-clock overflow logic: limit the absolute rq->clock delta since the last scheduler tick, instead of limiting the delta itself.

Tested by Arjan van de Ven - the whole laptop was misbehaving due to an incorrectly calibrated cpu_khz confusing sched_clock().

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
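To make the "limit the absolute delta since the last tick" idea concrete, here is a small user-space sketch (the constants and names are made up for illustration; the real rq-clock code differs in detail):

    #include <stdio.h>
    #include <stdint.h>

    #define TICK_NSEC 1000000ULL   /* pretend one scheduler tick is 1 ms */

    /* Never let the clock advance more than one tick past the timestamp of the last
     * scheduler tick, and keep it monotonic - so a wildly miscalibrated time source
     * can only push the clock forward by a bounded amount per tick. */
    static uint64_t update_clock(uint64_t clock, uint64_t last_tick, uint64_t now)
    {
            uint64_t max_clock = last_tick + TICK_NSEC;

            if (now > max_clock)
                    now = max_clock;
            if (now > clock)
                    clock = now;
            return clock;
    }

    int main(void)
    {
            uint64_t clock = 0, last_tick = 0;

            clock = update_clock(clock, last_tick, 500000);      /* normal progress  */
            printf("clock = %llu\n", (unsigned long long)clock);  /* -> 500000       */
            clock = update_clock(clock, last_tick, 50000000);     /* bogus 50 ms jump */
            printf("clock = %llu\n", (unsigned long long)clock);  /* clamped: 1000000 */
            return 0;
    }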
2007-08-09  Merge git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched  (Linus Torvalds)
* git://git.kernel.org/pub/scm/linux/kernel/git/mingo/linux-2.6-sched: (61 commits)
  sched: refine negative nice level granularity
  sched: fix update_stats_enqueue() reniced codepath
  sched: round a bit better
  sched: make the multiplication table more accurate
  sched: optimize update_rq_clock() calls in the load-balancer
  sched: optimize activate_task()
  sched: clean up set_curr_task_fair()
  sched: remove __update_rq_clock() call from entity_tick()
  sched: move the __update_rq_clock() call to scheduler_tick()
  sched debug: remove the 'u64 now' parameter from print_task()/_rq()
  sched: remove the 'u64 now' local variables
  sched: remove the 'u64 now' parameter from deactivate_task()
  sched: remove the 'u64 now' parameter from dequeue_task()
  sched: remove the 'u64 now' parameter from enqueue_task()
  sched: remove the 'u64 now' parameter from dec_nr_running()
  sched: remove the 'u64 now' parameter from inc_nr_running()
  sched: remove the 'u64 now' parameter from dec_load()
  sched: remove the 'u64 now' parameter from inc_load()
  sched: remove the 'u64 now' parameter from update_curr_load()
  sched: remove the 'u64 now' parameter from ->task_new()
  ...
2007-08-09  Revert "genirq: temporary fix for level-triggered IRQ resend"  (Linus Torvalds)
This reverts commit 0fc4969b866671dfe39b1a9119d0fdc7ea0f63e5. It was always meant to be temporary, but it's generating more useless noise than anything else, and we probably should never have done it in the generic kernel (only had the people involved test it on their own). Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-08-09  sched: refine negative nice level granularity  (Ingo Molnar)
refine the granularity of negative nice level tasks: let them reschedule more often to offset the effect of them consuming their wait_runtime proportionately slower. (This makes nice-0 task scheduling smoother in the presence of negatively reniced tasks.) Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-09  sched: fix update_stats_enqueue() reniced codepath  (Ingo Molnar)
the key has to be rescaled to /weight even if it has a positive value. (this change only affects the scheduling of reniced tasks) Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-09  sched: round a bit better  (Ingo Molnar)
round a tiny bit better in high-frequency rescheduling scenarios, by rounding around zero instead of rounding down. (this is pretty theoretical though) Signed-off-by: Ingo Molnar <mingo@elte.hu>
2007-08-09  sched: make the multiplication table more accurate  (Ingo Molnar)
do small deltas in the weight and multiplication constant table so that the worst-case numeric error is better than 1:100000000. (8 digits)

the current error table is:

    nice       mult *   inv_mult   error
    ------------------------------------------
    -20:      88761 *      48388  -0.0000000065
    -19:      71755 *      59856  -0.0000000037
    -18:      56483 *      76040   0.0000000056
    -17:      46273 *      92818   0.0000000042
    -16:      36291 *     118348  -0.0000000065
    -15:      29154 *     147320  -0.0000000037
    -14:      23254 *     184698  -0.0000000009
    -13:      18705 *     229616  -0.0000000037
    -12:      14949 *     287308  -0.0000000009
    -11:      11916 *     360437  -0.0000000009
    -10:       9548 *     449829  -0.0000000009
     -9:       7620 *     563644  -0.0000000037
     -8:       6100 *     704093   0.0000000009
     -7:       4904 *     875809   0.0000000093
     -6:       3906 *    1099582  -0.0000000009
     -5:       3121 *    1376151  -0.0000000058
     -4:       2501 *    1717300   0.0000000009
     -3:       1991 *    2157191  -0.0000000035
     -2:       1586 *    2708050   0.0000000009
     -1:       1277 *    3363326   0.0000000014
      0:       1024 *    4194304   0.0000000000
      1:        820 *    5237765   0.0000000009
      2:        655 *    6557202   0.0000000033
      3:        526 *    8165337  -0.0000000079
      4:        423 *   10153587   0.0000000012
      5:        335 *   12820798   0.0000000079
      6:        272 *   15790321   0.0000000037
      7:        215 *   19976592  -0.0000000037
      8:        172 *   24970740  -0.0000000037
      9:        137 *   31350126  -0.0000000079
     10:        110 *   39045157  -0.0000000061
     11:         87 *   49367440  -0.0000000037
     12:         70 *   61356676   0.0000000056
     13:         56 *   76695844  -0.0000000075
     14:         45 *   95443717  -0.0000000072
     15:         36 *  119304647  -0.0000000009
     16:         29 *  148102320  -0.0000000037
     17:         23 *  186737708  -0.0000000028
     18:         18 *  238609294  -0.0000000009
     19:         15 *  286331153  -0.0000000002

Signed-off-by: Ingo Molnar <mingo@elte.hu>
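For reference, the error column can be reproduced with a small user-space check: each inv_mult is chosen so that mult * inv_mult is as close as possible to 2^32 (the nice-0 row, 1024 * 4194304, is exactly 2^32), and the error is the relative deviation from that target. A sketch over a few of the rows:

    #include <stdio.h>
    #include <stdint.h>

    /* Reproduce the error column above: the relative error for each nice level is
     * (mult * inv_mult - 2^32) / 2^32. */
    int main(void)
    {
            static const struct { int nice; uint64_t mult, inv_mult; } tbl[] = {
                    { -20, 88761, 48388 }, { -10, 9548, 449829 }, { 0, 1024, 4194304 },
                    {  10,   110, 39045157 }, { 19, 15, 286331153 },
            };
            const double target = 4294967296.0;            /* 2^32 */

            for (unsigned i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
                    double err = ((double)(tbl[i].mult * tbl[i].inv_mult) - target) / target;
                    printf("%4d: %6llu * %10llu  %13.10f\n", tbl[i].nice,
                           (unsigned long long)tbl[i].mult,
                           (unsigned long long)tbl[i].inv_mult, err);
            }
            return 0;
    }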
2007-08-09  sched: optimize update_rq_clock() calls in the load-balancer  (Ingo Molnar)
optimize update_rq_clock() calls in the load-balancer: update them right after locking the runqueue(s) so that the pull functions do not have to call it. Signed-off-by: Ingo Molnar <mingo@elte.hu>