path: root/block
2010-03-30  include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h  (Tejun Heo)

percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming their availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, reverse-Christmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps:

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while for others adding it to an implementation .h or embedding .c file was more appropriate. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on the arch to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. The percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the build tests in step 7, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

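To illustrate the end state of the conversion described above — a hedged sketch only, with a hypothetical file and function names — a source file that calls kmalloc()/kfree() now includes slab.h itself rather than inheriting it via percpu.h:

    /* drivers/example/foo.c -- hypothetical file, for illustration only */
    #include <linux/errno.h>
    #include <linux/gfp.h>   /* gfp_t, GFP_KERNEL */
    #include <linux/slab.h>  /* kmalloc()/kfree(); no longer implied by percpu.h */

    static int foo_alloc_work(void)
    {
            char *buf = kmalloc(128, GFP_KERNEL);

            if (!buf)
                    return -ENOMEM;
            /* ... use buf ... */
            kfree(buf);
            return 0;
    }

A file that only passes gfp flags around, without calling slab allocators, would include only gfp.h — exactly the distinction the sweep script enforces.
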
2010-03-12  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial  (Linus Torvalds)

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (56 commits)
  doc: fix typo in comment explaining rb_tree usage
  Remove fs/ntfs/ChangeLog
  doc: fix console doc typo
  doc: cpuset: Update the cpuset flag file
  Fix of spelling in arch/sparc/kernel/leon_kernel.c no longer needed
  Remove drivers/parport/ChangeLog
  Remove drivers/char/ChangeLog
  doc: typo - Table 1-2 should refer to "status", not "statm"
  tree-wide: fix typos "ass?o[sc]iac?te" -> "associate" in comments
  No need to patch AMD-provided drivers/gpu/drm/radeon/atombios.h
  devres/irq: Fix devm_irq_match comment
  Remove reference to kthread_create_on_cpu
  tree-wide: Assorted spelling fixes
  tree-wide: fix 'lenght' typo in comments and code
  drm/kms: fix spelling in error message
  doc: capitalization and other minor fixes in pnp doc
  devres: typo fix s/dev/devm/
  Remove redundant trailing semicolons from macros
  fix typo "definetly" -> "definitely" in comment
  tree-wide: s/widht/width/g typo in comments
  ...

Fix trivial conflict in Documentation/laptops/00-INDEX

2010-03-12  cgroups: blkio subsystem as module  (Ben Blum)

Modify the Block I/O cgroup subsystem so that it can be built as a module. As the CFQ disk scheduler optionally depends on blk-cgroup, the config options in block/Kconfig, block/Kconfig.iosched, and block/blk-cgroup.h are enhanced to support the new module dependency.

Signed-off-by: Ben Blum <bblum@andrew.cmu.edu>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Paul Menage <menage@google.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

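A hedged sketch of the usual guard pattern such a change relies on (the exact guards in block/blk-cgroup.h may differ; the function signature here is an assumption). For a tristate Kconfig symbol, =m builds define CONFIG_<SYM>_MODULE rather than CONFIG_<SYM>, so headers consumed by both built-in and modular users must check both:

    /* Sketch only: declarations stay visible for both =y and =m builds. */
    #if defined(CONFIG_BLK_CGROUP) || defined(CONFIG_BLK_CGROUP_MODULE)

    void blkiocg_add_blkio_group(struct blkio_cgroup *blkcg,
                                 struct blkio_group *blkg, void *key);

    #else /* blk-cgroup disabled entirely */

    static inline void blkiocg_add_blkio_group(struct blkio_cgroup *blkcg,
                                               struct blkio_group *blkg,
                                               void *key) { }

    #endif
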
2010-03-08  Merge branch 'for-next' into for-linus  (Jiri Kosina)

Conflicts:
	Documentation/filesystems/proc.txt
	arch/arm/mach-u300/include/mach/debug-macro.S
	drivers/net/qlge/qlge_ethtool.c
	drivers/net/qlge/qlge_main.c
	drivers/net/typhoon.c

2010-03-07  Driver core: Constify struct sysfs_ops in struct kobj_type  (Emese Revfy)

Constify struct sysfs_ops. This is part of the ops structure constification effort started by Arjan van de Ven et al.

Benefits of this constification:

* prevents modification of data that is shared (referenced) by many other structure instances at runtime
* detects/prevents accidental (but not intentional) modification attempts on archs that enforce read-only kernel data at runtime
* potentially better optimized code, as the compiler can assume that the const data cannot be changed
* the compiler/linker moves const data into .rodata and therefore excludes it from false sharing

Signed-off-by: Emese Revfy <re.emese@gmail.com>
Acked-by: David Teigland <teigland@redhat.com>
Acked-by: Matt Domsch <Matt_Domsch@dell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Hans J. Koch <hjk@linutronix.de>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

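A minimal sketch of the pattern this enables — a driver declaring its sysfs_ops instance const so it can live in .rodata. The foo_* names are hypothetical; the show/store signatures match struct sysfs_ops:

    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t foo_show(struct kobject *kobj, struct attribute *attr, char *buf)
    {
            return 0; /* a real driver fills buf with the attribute value */
    }

    static ssize_t foo_store(struct kobject *kobj, struct attribute *attr,
                             const char *buf, size_t count)
    {
            return count;
    }

    /* const: one ops table shared by every foo kobject, never written at runtime */
    static const struct sysfs_ops foo_sysfs_ops = {
            .show  = foo_show,
            .store = foo_store,
    };

    static struct kobj_type foo_ktype = {
            .sysfs_ops = &foo_sysfs_ops, /* kobj_type now takes a const pointer */
    };
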
2010-03-01  block: don't access jiffies when initialising io_context  (Richard Kennedy)

As the comment says, the initial value of last_waited is never used, so there is no need to initialise it with the current jiffies. jiffies is hot enough without accessing it for no reason.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-03-01  cfq: remove 8 bytes of padding from cfq_rb_root on 64 bit builds  (Richard Kennedy)

Reorder cfq_rb_root to remove 8 bytes of padding on 64 bit builds, consequently removing 56 bytes from cfq_group and 64 bytes from cfq_data.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

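To illustrate the kind of reordering involved (the structs below are generic examples, not the real cfq_rb_root fields): on LP64, a 4-byte member sandwiched between 8-byte members forces a 4-byte hole, and grouping members by alignment removes it. Tools like pahole are typically used to spot such holes.

    /* 8 + 4 + (4 hole) + 8 + 4 + (4 tail pad) = 32 bytes on 64-bit */
    struct example_before {
            void *a;
            unsigned int x;   /* 4-byte hole follows */
            void *b;
            unsigned int y;   /* 4-byte tail pad follows */
    };

    /* 8 + 8 + 4 + 4 = 24 bytes on 64-bit: no holes, one cacheline saved
     * for every few instances embedded in a larger struct or slab. */
    struct example_after {
            void *a;
            void *b;
            unsigned int x;
            unsigned int y;
    };
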
2010-03-01  cfq-iosched: quantum check tweak  (Shaohua Li)

Currently a queue can only dispatch up to 4 requests if there are other queues. This isn't optimal; devices can handle more requests, for example AHCI can handle 31 requests. I can understand that the limit is for fairness, but we could apply a tweak: if the queue still has a lot of slice left, it sounds like we could ignore the limit. Tests show this boosts my workload (two-thread randread of an SSD) from 78m/s to 100m/s. Thanks for the suggestions from Corrado and Vivek for the patch.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

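A hedged sketch of the idea, not the committed code: the field names (cfq_quantum, slice_end, dispatched, cfq_slice_idle) are real cfq members of that era, but the exact slice arithmetic here is illustrative.

    /* Illustrative only -- cfq's real check lives in cfq_may_dispatch(). */
    static bool may_dispatch_sketch(struct cfq_data *cfqd, struct cfq_queue *cfqq,
                                    unsigned int busy_queues)
    {
            unsigned int max_dispatch = cfqd->cfq_quantum; /* default 4 */

            if (busy_queues == 1) {
                    return true; /* sole queue: effectively no limit */
            }

            /* Hypothetical tweak: if plenty of the time slice remains,
             * fairness is not at risk, so allow a deeper dispatch. */
            if (time_before(jiffies + cfqd->cfq_slice_idle, cfqq->slice_end))
                    max_dispatch *= 2;

            return cfqq->dispatched < max_dispatch;
    }
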
2010-02-28  cfq-iosched: requests "in flight" vs "in driver" clarification  (Corrado Zoccolo)

Counters for requests "in flight" and "in driver" are used asymmetrically in cfq_may_dispatch, and have slightly different meanings. We split the rq_in_flight counter (formerly sync_flight) to count both sync and async requests, in order to use this one, which is more accurate in some corner cases. The rq_in_driver counter is coalesced, since the individual sync/async counts are not used any more.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-28  cfq-iosched: rethink seeky detection for SSDs  (Corrado Zoccolo)

CFQ currently applies the same logic for detecting seeky queues and grouping them together to rotational disks and SSDs alike. For SSDs, the time to complete a request doesn't depend on the request location, but only on its size. This patch therefore changes the criterion to group queues by request size in the case of SSDs, in order to achieve better fairness.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-28  cfq-iosched: rework seeky detection  (Corrado Zoccolo)

Current seeky detection is based on the average seek length. This is suboptimal, since the average will not distinguish between:

* a process doing medium-sized seeks
* a process doing some sequential requests interleaved with larger seeks

and even a medium seek can take a lot of time, if the requested sector happens to be behind the disk head in the rotation (50% probability).

Therefore, we change the seeky queue detection to work as follows:

* Each request can be classified as sequential if it is very close to the current head position, i.e. it is likely in the disk cache (disks usually read more data than requested, and put it in cache for subsequent reads). Otherwise, the request is classified as seeky.
* A history window of the last 32 requests is kept, storing the classification result.
* A queue is marked as seeky if more than 1/8 of the last 32 requests were seeky.

This patch fixes a regression reported by Yanmin, on mmap 64k random reads.

Reported-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

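A hedged sketch of the scheme just described, assuming a 32-bit bitmap for the history window and a hypothetical distance threshold (the real code keys off the queue's last request position and its own threshold constant):

    #include <linux/bitops.h>   /* hweight32() */
    #include <linux/types.h>

    #define SEEK_THRESHOLD  (8 * 1024)  /* sectors; illustrative value */
    #define SEEKY_MIN_BITS  (32 / 8)    /* seeky if more than 1/8 of 32 */

    struct seek_state {
            u32      history;   /* one bit per request: 1 = seeky */
            sector_t last_pos;  /* head position after the previous request */
    };

    static bool queue_is_seeky(struct seek_state *s, sector_t rq_pos)
    {
            sector_t dist = (rq_pos > s->last_pos) ? rq_pos - s->last_pos
                                                   : s->last_pos - rq_pos;

            /* Shift this request's verdict into the 32-request window. */
            s->history = (s->history << 1) | (dist > SEEK_THRESHOLD);
            s->last_pos = rq_pos;

            /* Marked seeky if more than 1/8 of the last 32 were seeky. */
            return hweight32(s->history) > SEEKY_MIN_BITS;
    }

The bitmap makes the classification cheap (one shift, one popcount) and self-aging: each new request pushes the oldest verdict out of the window.
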
2010-02-26  block: Consolidate phys_segment and hw_segment limits  (Martin K. Petersen)

Except for SCSI, no device drivers distinguish between physical and hardware segment limits. Consolidate the two into a single segment limit.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-26  block: Rename blk_queue_max_sectors to blk_queue_max_hw_sectors  (Martin K. Petersen)

The block layer calling convention is blk_queue_<limit name>. blk_queue_max_sectors predates this practice, leading to some confusion. Rename the function to appropriately reflect that its intended use is to set max_hw_sectors. Also introduce a temporary wrapper for backwards compatibility. This can be removed after the merge window is closed.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

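The backwards-compatibility shim would look roughly like this — a hedged sketch, since the actual wrapper's placement and form may differ:

    /* New canonical name: caps the controller's per-request transfer size. */
    void blk_queue_max_hw_sectors(struct request_queue *q,
                                  unsigned int max_hw_sectors);

    /* Temporary alias so unconverted drivers keep building;
     * to be deleted once the merge window closes. */
    static inline void blk_queue_max_sectors(struct request_queue *q,
                                             unsigned int max_sectors)
    {
            blk_queue_max_hw_sectors(q, max_sectors);
    }

The inline wrapper costs nothing at runtime and lets hundreds of driver call sites be converted incrementally instead of in one flag-day patch.
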
2010-02-26  block: Add BLK_ prefix to definitions  (Martin K. Petersen)

Add a BLK_ prefix to block layer constants.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-26  block: Remove unused accessor function  (Martin K. Petersen)

blk_queue_max_hw_sectors is no longer called by any subsystem and can be removed.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-26  block: Update blk_queue_max_sectors and documentation  (Martin K. Petersen)

Clarify blk_queue_max_sectors and update documentation.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-26  cfq: Remove useless css reference get  (Gui Jianfeng)

There's no need to take a css reference here, for the caller has already called rcu_read_lock() to prevent the cgroup from being removed.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-25  Merge branch 'master' into for-2.6.34  (Jens Axboe)

Conflicts:
	include/linux/blkdev.h

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-23  blk-core: use BIO list management functions  (Akinobu Mita)

Now that the bio list management stuff is generic, convert generic_make_request to use bio lists instead of its own private bio list implementation.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

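For reference, a hedged sketch of the drain-loop pattern the generic helpers support (bio_list_init/bio_list_add/bio_list_pop are the real helpers; the surrounding function and handle_one_bio are hypothetical, not the verbatim block/blk-core.c code):

    #include <linux/bio.h>

    /* Hypothetical per-bio worker; may add follow-up bios to 'more'. */
    static void handle_one_bio(struct bio *bio, struct bio_list *more);

    static void submit_and_drain(struct bio *first)
    {
            struct bio_list pending;
            struct bio *bio;

            bio_list_init(&pending);
            bio_list_add(&pending, first);

            /* Bios queued while handling one entry are processed in turn,
             * turning recursion into iteration and bounding stack usage. */
            while ((bio = bio_list_pop(&pending)) != NULL)
                    handle_one_bio(bio, &pending);
    }
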
2010-02-23  Revert "block: improve queue_should_plug() by looking at IO depths"  (Jens Axboe)

This reverts commit fb1e75389bd06fd5987e9cda1b4e0305c782f854.

"Benjamin S." <sbenni@gmx.de> reports that the patch in question causes a big drop in sequential throughput for him, dropping from 200MB/sec down to only 70MB/sec. This needs to be investigated more fully; for now let's just revert the offending commit.

Conflicts:
	include/linux/blkdev.h

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-22  cfq: reorder cfq_queue removing padding on 64bit  (Richard Kennedy)

This removes 8 bytes of padding from struct cfq_queue on 64 bit builds, shrinking its size to 256 bytes, so it fits into one fewer cacheline and allows one more object per slab in its kmem_cache.

Signed-off-by: Richard Kennedy <richard@rsk.demon.co.uk>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
----
Patch against 2.6.33-rc8, tested on x86_64 AMDX2.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-22  Merge branch 'master' into for-2.6.34  (Jens Axboe)

2010-02-09  tree-wide: Assorted spelling fixes  (Daniel Mack)

In particular, several occurrences of funny versions of 'success', 'unknown', 'therefore', 'acknowledge', 'argument', 'achieve', 'address', 'beginning', 'desirable', 'separate' and 'necessary' are fixed.

Signed-off-by: Daniel Mack <daniel@caiaq.de>
Cc: Joe Perches <joe@perches.com>
Cc: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

2010-02-05  cfq-iosched: split seeky coop queues after one slice  (Shaohua Li)

Currently we split seeky coop queues after 1s, which is too long. The patch below marks a seeky coop queue's split_coop flag after one slice. After that, if new requests come in, the queues will be split. The patch was suggested by Corrado.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Reviewed-by: Corrado Zoccolo <czoccolo@gmail.com>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-02  cfq-iosched: Do not idle on async queues  (Vivek Goyal)

A few weeks back, Shaohua Li posted a similar patch. I am reposting it with more test results.

This patch does two things:

- Do not idle on async queues.

- It also changes the write queue depth CFQ drives (cfq_may_dispatch()). Currently, we seem to be driving a queue depth of 1 always for WRITES. This is true even if there is only one write queue in the system, and all the logic of infinite queue depth in case of a single busy queue, as well as slowly increasing queue depth based on the last delayed sync request, does not seem to be kicking in at all. This patch will allow deeper WRITE queue depths (subject to the other WRITE queue depth constraints like cfq_quantum and the last delayed sync request).

Shaohua Li had reported getting more out of his SSD. For me, I have one LUN exported from an HP EVA, and when pure buffered writes are on, I can get more out of the system. The following are test results of pure buffered writes (with end_fsync=1) with vanilla and patched kernels. These results are the average of 3 sets of runs with increasing numbers of threads.

AVERAGE[bufwfs][vanilla]
-------
job     Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
---     ---  --  ------------  -----------  -------------  -----------
bufwfs   3    1       0             0            95349        474141
bufwfs   3    2       0             0           100282        806926
bufwfs   3    4       0             0           109989    2.7301e+06
bufwfs   3    8       0             0           116642       3762231
bufwfs   3   16       0             0           118230       6902970

AVERAGE[bufwfs] [patched kernel]
-------
bufwfs   3    1       0             0           270722        404352
bufwfs   3    2       0             0           206770   1.06552e+06
bufwfs   3    4       0             0           195277   1.62283e+06
bufwfs   3    8       0             0           260960   2.62979e+06
bufwfs   3   16       0             0           299260   1.70731e+06

I also ran buffered writes along with some sequential reads and some buffered reads going on in the system on a SATA disk, because the potential risk is that we should not drive the queue depth higher in the presence of sync IO, so as to keep the max clat low. With some random and sequential reads going on in the system on one SATA disk, I did not see any significant increase in max clat. So it looks like the other WRITE queue depth control logic is doing its job. Here are the results.

AVERAGE[brr, bsr, bufw together] [vanilla]
-------
job   Set  NR  ReadBW(KB/s)  MaxClat(us)   WriteBW(KB/s)  MaxClat(us)
---   ---  --  ------------  -----------   -------------  -----------
brr    3    1      850           546345             0             0
bsr    3    1    14650           729543             0             0
bufw   3    1        0                0         23908       8274517

brr    3    2    981.333          579395            0             0
bsr    3    2    14149.7         1175689            0             0
bufw   3    2        0                0         21921   1.28108e+07

brr    3    4    898.333     1.75527e+06            0             0
bsr    3    4    12230.7     1.40072e+06            0             0
bufw   3    4        0                0       19722.3    2.4901e+07

brr    3    8      900           3160594            0             0
bsr    3    8    9282.33     1.91314e+06            0             0
bufw   3    8        0                0       18789.3      23890622

AVERAGE[brr, bsr, bufw mixed] [patched kernel]
-------
job   Set  NR  ReadBW(KB/s)  MaxClat(us)   WriteBW(KB/s)  MaxClat(us)
---   ---  --  ------------  -----------   -------------  -----------
brr    3    1      837           417973             0             0
bsr    3    1    14357.7          591275            0             0
bufw   3    1        0                0       24869.7       8910662

brr    3    2    1038.33          543434            0             0
bsr    3    2    13351.3         1205858            0             0
bufw   3    2        0                0       18626.3      13280370

brr    3    4      913       1.86861e+06            0             0
bsr    3    4    12652.3         1430974            0             0
bufw   3    4        0                0       15343.3   2.81305e+07

brr    3    8      890       2.92695e+06            0             0
bsr    3    8    9635.33     1.90244e+06            0             0
bufw   3    8        0                0       17200.3      24424392

So it looks like it might make sense to include this patch.

Thanks
Vivek

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-02-01  blk-cgroup: Fix potential deadlock in blk-cgroup  (Gui Jianfeng)

I triggered a lockdep warning as follows.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.33-rc2 #1
-------------------------------------------------------
test_io_control/7357 is trying to acquire lock:
 (blkio_list_lock){+.+...}, at: [<c053a990>] blkiocg_weight_write+0x82/0x9e

but task is already holding lock:
 (&(&blkcg->lock)->rlock){......}, at: [<c053a949>] blkiocg_weight_write+0x3b/0x9e

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&(&blkcg->lock)->rlock){......}:
       [<c04583b7>] validate_chain+0x8bc/0xb9c
       [<c0458dba>] __lock_acquire+0x723/0x789
       [<c0458eb0>] lock_acquire+0x90/0xa7
       [<c0692b0a>] _raw_spin_lock_irqsave+0x27/0x5a
       [<c053a4e1>] blkiocg_add_blkio_group+0x1a/0x6d
       [<c053cac7>] cfq_get_queue+0x225/0x3de
       [<c053eec2>] cfq_set_request+0x217/0x42d
       [<c052c8a6>] elv_set_request+0x17/0x26
       [<c0532a0f>] get_request+0x203/0x2c5
       [<c0532ae9>] get_request_wait+0x18/0x10e
       [<c0533470>] __make_request+0x2ba/0x375
       [<c0531985>] generic_make_request+0x28d/0x30f
       [<c0532da7>] submit_bio+0x8a/0x8f
       [<c04d827a>] submit_bh+0xf0/0x10f
       [<c04d91d2>] ll_rw_block+0xc0/0xf9
       [<f86e9705>] ext3_find_entry+0x319/0x544 [ext3]
       [<f86eae58>] ext3_lookup+0x2c/0xb9 [ext3]
       [<c04c3e1b>] do_lookup+0xd3/0x172
       [<c04c56c8>] link_path_walk+0x5fb/0x95c
       [<c04c5a65>] path_walk+0x3c/0x81
       [<c04c5b63>] do_path_lookup+0x21/0x8a
       [<c04c66cc>] do_filp_open+0xf0/0x978
       [<c04c0c7e>] open_exec+0x1b/0xb7
       [<c04c1436>] do_execve+0xbb/0x266
       [<c04081a9>] sys_execve+0x24/0x4a
       [<c04028a2>] ptregs_execve+0x12/0x18

-> #1 (&(&q->__queue_lock)->rlock){..-.-.}:
       [<c04583b7>] validate_chain+0x8bc/0xb9c
       [<c0458dba>] __lock_acquire+0x723/0x789
       [<c0458eb0>] lock_acquire+0x90/0xa7
       [<c0692b0a>] _raw_spin_lock_irqsave+0x27/0x5a
       [<c053dd2a>] cfq_unlink_blkio_group+0x17/0x41
       [<c053a6eb>] blkiocg_destroy+0x72/0xc7
       [<c0467df0>] cgroup_diput+0x4a/0xb2
       [<c04ca473>] dentry_iput+0x93/0xb7
       [<c04ca4b3>] d_kill+0x1c/0x36
       [<c04cb5c5>] dput+0xf5/0xfe
       [<c04c6084>] do_rmdir+0x95/0xbe
       [<c04c60ec>] sys_rmdir+0x10/0x12
       [<c04027cc>] sysenter_do_call+0x12/0x32

-> #0 (blkio_list_lock){+.+...}:
       [<c0458117>] validate_chain+0x61c/0xb9c
       [<c0458dba>] __lock_acquire+0x723/0x789
       [<c0458eb0>] lock_acquire+0x90/0xa7
       [<c06929fd>] _raw_spin_lock+0x1e/0x4e
       [<c053a990>] blkiocg_weight_write+0x82/0x9e
       [<c0467f1e>] cgroup_file_write+0xc6/0x1c0
       [<c04bd2f3>] vfs_write+0x8c/0x116
       [<c04bd7c6>] sys_write+0x3b/0x60
       [<c04027cc>] sysenter_do_call+0x12/0x32

other info that might help us debug this:

1 lock held by test_io_control/7357:
 #0: (&(&blkcg->lock)->rlock){......}, at: [<c053a949>] blkiocg_weight_write+0x3b/0x9e

stack backtrace:
Pid: 7357, comm: test_io_control Not tainted 2.6.33-rc2 #1
Call Trace:
 [<c045754f>] print_circular_bug+0x91/0x9d
 [<c0458117>] validate_chain+0x61c/0xb9c
 [<c0458dba>] __lock_acquire+0x723/0x789
 [<c0458eb0>] lock_acquire+0x90/0xa7
 [<c053a990>] ? blkiocg_weight_write+0x82/0x9e
 [<c06929fd>] _raw_spin_lock+0x1e/0x4e
 [<c053a990>] ? blkiocg_weight_write+0x82/0x9e
 [<c053a990>] blkiocg_weight_write+0x82/0x9e
 [<c0467f1e>] cgroup_file_write+0xc6/0x1c0
 [<c0454df5>] ? trace_hardirqs_off+0xb/0xd
 [<c044d93a>] ? cpu_clock+0x2e/0x44
 [<c050e6ec>] ? security_file_permission+0xf/0x11
 [<c04bcdda>] ? rw_verify_area+0x8a/0xad
 [<c0467e58>] ? cgroup_file_write+0x0/0x1c0
 [<c04bd2f3>] vfs_write+0x8c/0x116
 [<c04bd7c6>] sys_write+0x3b/0x60
 [<c04027cc>] sysenter_do_call+0x12/0x32

To prevent deadlock, we should take the locks in the following sequence: blkio_list_lock -> queue_lock -> blkcg_lock. The following patch should fix this bug.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

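A hedged sketch of the documented ordering rule: any path that needs more than one of these locks must nest them the same way. The function and the body are illustrative; only the blkio_list_lock -> queue_lock -> blkcg->lock order comes from the commit.

    /* Sketch only: 'q' and 'blkcg' stand for the queue and cgroup at hand. */
    static void weight_update_sketch(struct request_queue *q,
                                     struct blkio_cgroup *blkcg)
    {
            spin_lock(&blkio_list_lock);   /* 1: outermost  */
            spin_lock_irq(q->queue_lock);  /* 2: per-queue  */
            spin_lock(&blkcg->lock);       /* 3: innermost  */

            /* ... propagate the new weight to the policies/groups ... */

            spin_unlock(&blkcg->lock);
            spin_unlock_irq(q->queue_lock);
            spin_unlock(&blkio_list_lock);
    }

Lockdep flagged the bug precisely because one path took blkcg->lock before blkio_list_lock while another nested them the opposite way; a single global order makes the cycle impossible.
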
2010-01-29  block: Added in stricter no merge semantics for block I/O  (Alan D. Brunelle)

Updated the 'nomerges' tunable to accept a value of '2', indicating that _no_ merges at all are to be attempted (not even the simple one-hit cache).

The following table illustrates the additional benefit: 5 minute runs of a random I/O load were applied to a dozen devices on a 16-way x86_64 system.

nomerges  Throughput    %System      Improvement (tput / %sys)
--------  ------------  -----------  -------------------------
   0      12.45 MB/sec  0.669365609
   1      12.50 MB/sec  0.641519199  0.40% / 2.71%
   2      12.52 MB/sec  0.639849750  0.56% / 2.96%

Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

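For reference, nomerges is the per-device queue tunable exposed through sysfs, so the stricter mode would be selected at runtime with something like the following (device name hypothetical):

    echo 2 > /sys/block/sda/queue/nomerges
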
2010-01-11  cfq-iosched: Respect ioprio_class when preempting  (Divyesh Shah)

In cfq_should_preempt(), we currently allow some cases where a non-RT request can preempt an ongoing RT cfqq timeslice. This should not happen. Examples include:

o A sync_noidle wl type non-RT request pre-empting a sync_noidle wl type cfqq on which we are idling.
o Once we have per-cgroup async queues, a non-RT sync request pre-empting an RT async cfqq.

Signed-off-by: Divyesh Shah <dpshah@google.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-01-11  block: Stop using byte offsets  (Martin K. Petersen)

All callers of the stacking functions use 512-byte sector units rather than byte offsets. Simplify the code so the stacking functions take sectors when specifying data offsets.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-01-11  block: removed unused as_io_context  (Kirill Afonshin)

It isn't used anymore, since the anticipatory scheduler (AS) was deleted.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-01-11  block: bdev_stack_limits wrapper  (Martin K. Petersen)

DM does not want to know about partition offsets. Add a partition-aware wrapper that DM can use when stacking block devices.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Acked-by: Mike Snitzer <snitzer@redhat.com>
Reviewed-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-01-11  block: Fix discard alignment calculation and printing  (Martin K. Petersen)

Discard alignment reporting for partitions was incorrect. Update it to match the algorithm used elsewhere. The alignment can be negative (misaligned); fix the format string accordingly.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2010-01-11  block: Correct handling of bottom device misalignment  (Martin K. Petersen)

The top device misalignment flag would not be set if the added bottom device was already misaligned, as opposed to causing a stacking failure. Also massage the reporting so that an error is only returned if adding the bottom device caused the misalignment, i.e. don't return an error if the top is already flagged as misaligned.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-29  block: Honor the gfp_mask for alloc_page() in blkdev_issue_discard()  (OGAWA Hirofumi)

Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-29  block: Fix incorrect alignment offset reporting and update documentation  (Martin K. Petersen)

queue_sector_alignment_offset returned the wrong value, which caused partitions to report an incorrect alignment_offset. Since the offset alignment calculation is needed in several places, it has been split into a separate helper function. The topology stacking function has been updated accordingly. Furthermore, comments have been added to clarify how the stacking function works.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Tested-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

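A hedged sketch of what such an alignment-offset helper computes, assuming the usual definition: how far a given sector's byte position sits from the device's nearest preferred boundary. The name and parameters are illustrative, not the verbatim blkdev.h helper.

    #include <linux/types.h>

    /* granularity: preferred I/O boundary in bytes (a power of two).
     * alignment_offset: byte offset of the device's first aligned boundary. */
    static unsigned int alignment_offset_of(sector_t sector,
                                            unsigned int granularity,
                                            unsigned int alignment_offset)
    {
            /* Byte position within the current granularity window. */
            unsigned int pos = (sector << 9) & (granularity - 1);

            /* Distance back to an aligned boundary, modulo granularity,
             * accounting for the device's own offset. */
            return (granularity + alignment_offset - pos) & (granularity - 1);
    }

A result of 0 means the position is aligned; the earlier bug class here is exactly the kind of off-by-modulo mistake that makes partitions report bogus offsets.
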
2009-12-28  cfq-iosched: don't regard requests with long distance as close  (Shaohua Li)

seek_mean can sometimes be very big. Using it as the closeness criterion is then meaningless, as this doesn't improve any performance. So if it's big, let's fall back to the default value.

Reviewed-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-21  block: Fix topology stacking for data and discard alignment  (Martin K. Petersen)

The stacking code incorrectly scaled up the data offset in some cases, causing misaligned devices to report alignment. Rewrite the stacking algorithm to remedy this, and apply the same alignment principles to the discard handling.

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-18  cfq-iosched: Remove prio_change logic for workload selection  (Vivek Goyal)

o CFQ now internally divides cfq queues into three workload categories: sync-idle, sync-noidle and async. Which workload to run depends primarily on the rb_key offset across the three service trees, which is a combination of multiple things, including the time the queue got queued on the service tree.

There is one exception, though: if we switched the prio class, say we served some RT tasks and again started serving the BE class, then within the BE class we always started with the sync-noidle workload, irrespective of the rb_key offset in the service trees. This can provide better latencies for the sync-noidle workload in the presence of RT tasks.

o This patch gets rid of that exception, so which workload to run within a class always depends on the lowest rb_key across the service trees. The reason is that we now have multiple BE class groups, and if we always switch to the sync-noidle workload within a group, we can potentially starve a sync-idle workload within the group. The same is true for the async workload, which will be in the root group. Also, the workload switching within a group would become very unpredictable, as it would then depend on whether some RT workload was running in the system or not.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Acked-by: Corrado Zoccolo <czoccolo@gmail.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-18  cfq-iosched: Get rid of nr_groups  (Vivek Goyal)

o The code does not seem to be using cfqd->nr_groups. Get rid of it.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-18  cfq-iosched: Remove the check for same cfq group from allow_merge  (Vivek Goyal)

o allow_merge() already checks whether the submitting task is pointing to the same cfqq that the rq has been queued in. If everything is fine, we should not have a task in one cgroup holding a pointer to a cfqq in another cgroup.

I guess it can happen in some situations, though. The first is when a random IO queue has been moved into the root cgroup for group_isolation=0. In this case, the task's cgroup/group is different from where the cfqq actually is, but this is intentional, and in this case merging should be allowed.

The second situation is where, due to the close cooperator patches, multiple processes can be sharing a cfqq. If everything is implemented right, we should not end up in a situation where tasks from different processes in different groups are sharing the same cfqq, as we allow merging of cooperating queues only if they are in the same group.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-16  block: temporarily disable discard granularity  (Jens Axboe)

Commit 86b37281411cf1e9bc0a6b5406c45edb7bd9ea5d adds a check for misaligned stacking offsets, but it's buggy since the defaults are 0. Hence all dm devices that pass in a non-zero starting offset will be marked as misaligned and dm will complain.

A real fix is coming; in the meantime, disable the discard granularity check so that users don't worry about dm reporting misaligned devices.

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-15  Merge branch 'for-2.6.33' of git://git.kernel.dk/linux-2.6-block  (Linus Torvalds)

* 'for-2.6.33' of git://git.kernel.dk/linux-2.6-block:
  cfq: set workload as expired if it doesn't have any slice left
  Fix a CFQ crash in "for-2.6.33" branch of block tree
  cfq: Remove wait_request flag when idle time is being deleted
  cfq-iosched: commenting non-obvious initialization
  cfq-iosched: Take care of corner cases of group losing share due to deletion
  cfq-iosched: Get rid of cfqq wait_busy_done flag
  cfq: Optimization for close cooperating queue searching
  block,xd: Delay allocation of DMA buffers until device is known
  drbd: Following the hmac change to SHASH (see linux commit 8bd1209cfff)
  cfq-iosched: reduce write depth only if sync was delayed

2009-12-15  cfq: set workload as expired if it doesn't have any slice left  (Gui Jianfeng)

When a group is resumed, if it doesn't have any workload slice left, we should set workload_expires as expired. Otherwise, we might erroneously start from where we left off in the previous group. Thanks to Corrado for the idea.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-10  Fix a CFQ crash in "for-2.6.33" branch of block tree  (Vivek Goyal)

I think my previous patch introduced a bug which can lead to CFQ hitting a BUG_ON(). The offending commit in the for-2.6.33 branch is:

commit 7667aa0630407bc07dc38dcc79d29cc0a65553c1
Author: Vivek Goyal <vgoyal@redhat.com>
Date:   Tue Dec 8 17:52:58 2009 -0500

    cfq-iosched: Take care of corner cases of group losing share due to deletion

While doing some stress testing on my box, I encountered the following:

login: [ 3165.148841] BUG: scheduling while atomic: swapper/0/0x10000100
[ 3165.149821] Modules linked in: cfq_iosched dm_multipath qla2xxx igb scsi_transport_fc dm_snapshot [last unloaded: scsi_wait_scan]
[ 3165.149821] Pid: 0, comm: swapper Not tainted 2.6.32-block-for-33-merged-new #3
[ 3165.149821] Call Trace:
[ 3165.149821]  <IRQ>  [<ffffffff8103fab8>] __schedule_bug+0x5c/0x60
[ 3165.149821]  [<ffffffff8103afd7>] ? __wake_up+0x44/0x4d
[ 3165.149821]  [<ffffffff8153a979>] schedule+0xe3/0x7bc
[ 3165.149821]  [<ffffffff8103a796>] ? cpumask_next+0x1d/0x1f
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff810422d8>] __cond_resched+0x2a/0x35
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff8153b1ee>] _cond_resched+0x2c/0x37
[ 3165.149821]  [<ffffffff8100e2db>] is_valid_bugaddr+0x16/0x2f
[ 3165.149821]  [<ffffffff811e4161>] report_bug+0x18/0xac
[ 3165.149821]  [<ffffffff8100f1fc>] die+0x39/0x63
[ 3165.149821]  [<ffffffff8153cde1>] do_trap+0x11a/0x129
[ 3165.149821]  [<ffffffff8100d470>] do_invalid_op+0x96/0x9f
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff81034b4d>] ? enqueue_task+0x5c/0x67
[ 3165.149821]  [<ffffffff8103ae83>] ? task_rq_unlock+0x11/0x13
[ 3165.149821]  [<ffffffff81041aae>] ? try_to_wake_up+0x292/0x2a4
[ 3165.149821]  [<ffffffff8100c935>] invalid_op+0x15/0x20
[ 3165.149821]  [<ffffffffa000b21d>] ? cfq_dispatch_requests+0x6ba/0x93e [cfq_iosched]
[ 3165.149821]  [<ffffffff810df5a6>] ? virt_to_head_page+0xe/0x2f
[ 3165.149821]  [<ffffffff811d8c2a>] blk_peek_request+0x191/0x1a7
[ 3165.149821]  [<ffffffff811e5b8d>] ? kobject_get+0x1a/0x21
[ 3165.149821]  [<ffffffff812c8d4c>] scsi_request_fn+0x82/0x3df
[ 3165.149821]  [<ffffffff8110b2de>] ? bio_fs_destructor+0x15/0x17
[ 3165.149821]  [<ffffffff810df5a6>] ? virt_to_head_page+0xe/0x2f
[ 3165.149821]  [<ffffffff811d931f>] __blk_run_queue+0x42/0x71
[ 3165.149821]  [<ffffffff811d9403>] blk_run_queue+0x26/0x3a
[ 3165.149821]  [<ffffffff812c8761>] scsi_run_queue+0x2de/0x375
[ 3165.149821]  [<ffffffff812b60ac>] ? put_device+0x17/0x19
[ 3165.149821]  [<ffffffff812c92d7>] scsi_next_command+0x3b/0x4b
[ 3165.149821]  [<ffffffff812c9b9f>] scsi_io_completion+0x1c9/0x3f5
[ 3165.149821]  [<ffffffff812c3c36>] scsi_finish_command+0xb5/0xbe

I think I have hit the following BUG_ON() in cfq_dispatch_request():

	BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list));

Please find attached the patch to fix it. I have done some stress testing with it and have not seen it happening again.

o We should wait on a queue even after slice expiry only if it is empty. If the queue is not empty, then continue to expire it.

o If we decide to keep the queue, then make cfqq=NULL. Otherwise select_queue() will return a valid cfqq and cfq_dispatch_request() can hit the following BUG_ON():

	BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list))

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-10  cfq: Remove wait_request flag when idle time is being deleted  (Gui Jianfeng)

Remove the wait_request flag when idle time is being deleted; otherwise we'll hit this path every time a request is enqueued.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-09  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial  (Linus Torvalds)

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (42 commits)
  tree-wide: fix misspelling of "definition" in comments
  reiserfs: fix misspelling of "journaled"
  doc: Fix a typo in slub.txt.
  inotify: remove superfluous return code check
  hdlc: spelling fix in find_pvc() comment
  doc: fix regulator docs cut-and-pasteism
  mtd: Fix comment in Kconfig
  doc: Fix IRQ chip docs
  tree-wide: fix assorted typos all over the place
  drivers/ata/libata-sff.c: comment spelling fixes
  fix typos/grammos in Documentation/edac.txt
  sysctl: add missing comments
  fs/debugfs/inode.c: fix comment typos
  sgivwfb: Make use of ARRAY_SIZE.
  sky2: fix sky2_link_down copy/paste comment error
  tree-wide: fix typos "couter" -> "counter"
  tree-wide: fix typos "offest" -> "offset"
  fix kerneldoc for set_irq_msi()
  spidev: fix double "of of" in comment
  comment typo fix: sybsystem -> subsystem
  ...

2009-12-09  cfq-iosched: commenting non-obvious initialization  (Corrado Zoccolo)

Added a comment to explain the initialization of last_delayed_sync.

Signed-off-by: Corrado Zoccolo <czoccolo@gmail.com>
Acked-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-09  cfq-iosched: Take care of corner cases of group losing share due to deletion  (Vivek Goyal)

If there is a sequential reader running in a group, we wait for the next request to come in that group after slice expiry, and once the new request is in, we expire the queue. Otherwise we delete the group from the service tree and the group loses its fair share.

So far I was marking a queue as wait_busy if it had consumed its slice and was the last queue in the group. But this condition did not cover the following two cases:

1. A request completed and the slice has not expired yet. The next request comes in and is dispatched to disk. Now select_queue() hits and the slice has expired. This group will be deleted. Because the request is still in the disk, this queue will never get a chance to wait_busy.

2. A request completed and the slice has not expired yet. Before the next request comes in (delay due to think time), select_queue() hits and expires the queue, hence the group. This queue never got a chance to wait busy.

Gui was hitting boundary condition 1 and not getting fairness numbers proportional to weight. This patch puts in the checks for the above two conditions and improves the fairness numbers for a sequential workload on rotational media. The check in select_queue() takes care of case 1, and an additional check in should_wait_busy() takes care of case 2.

Reported-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

2009-12-09  cfq-iosched: Get rid of cfqq wait_busy_done flag  (Vivek Goyal)

o Get rid of the wait_busy_done flag. This flag only tells us that we were doing wait busy on a queue and that queue got a request, so expire it. That information can easily be obtained by (cfq_cfqq_wait_busy() && queue_is_not_empty). So remove this flag and keep the code simple.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

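A hedged sketch of the equivalent check: cfq_cfqq_wait_busy() and sort_list are real cfq names, while the wrapper function and the exact emptiness test are paraphrased from the commit text.

    #include <linux/rbtree.h>

    /* Sketch: a queue that was in wait-busy and has since received a
     * request no longer needs a separate wait_busy_done flag. */
    static bool wait_busy_satisfied(struct cfq_queue *cfqq)
    {
            return cfq_cfqq_wait_busy(cfqq) && !RB_EMPTY_ROOT(&cfqq->sort_list);
    }

Deriving the state from existing conditions removes a flag that had to be kept in sync manually, which is exactly the simplification the commit argues for.
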
2009-12-09  cfq: Optimization for close cooperating queue searching  (Gui Jianfeng)

It doesn't make any sense to try to find a close cooperating queue if the current cfqq is the only one in the group.

Signed-off-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
