path: root/include
Age    Commit message    Author
2006-03-26[PATCH] bitops: generic find_{next,first}{,_zero}_bit()Akinobu Mita
This patch introduces the C-language equivalents of the functions below: unsigned long find_next_bit(const unsigned long *addr, unsigned long size, unsigned long offset); unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size, unsigned long offset); unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size); unsigned long find_first_bit(const unsigned long *addr, unsigned long size); In include/asm-generic/bitops/find.h This code largely copied from: arch/powerpc/lib/bitops.c Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
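For illustration, here is a standalone, bit-at-a-time sketch of the find_next_bit() calling convention; the merged include/asm-generic/bitops/find.h code scans a word at a time, but the semantics (return the index of the next set bit, or size if none) are the same.

    #include <stdio.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Bit-at-a-time sketch; the in-kernel version works word-at-a-time. */
    static unsigned long find_next_bit(const unsigned long *addr,
                                       unsigned long size, unsigned long offset)
    {
            unsigned long i;

            for (i = offset; i < size; i++)
                    if (addr[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG)))
                            return i;
            return size;                    /* "not found" is reported as size */
    }

    int main(void)
    {
            unsigned long map[2] = { 0UL, 0x10UL };  /* one bit set in word 1 */

            /* Prints BITS_PER_LONG + 4, the index of that bit. */
            printf("%lu\n", find_next_bit(map, 2 * BITS_PER_LONG, 0));
            return 0;
    }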
2006-03-26[PATCH] bitops: generic fls64()Akinobu Mita
This patch introduces the C-language equivalent of the function: int fls64(__u64 x); In include/asm-generic/bitops/fls64.h This code largely copied from: include/linux/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
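For reference, the usual way to build fls64() on top of a 32-bit fls() looks roughly like this (a standalone sketch that uses a compiler builtin for the 32-bit half rather than the kernel's own fls()):

    #include <stdint.h>
    #include <stdio.h>

    /* 1-based index of the most significant set bit; 0 when x == 0. */
    static int fls(uint32_t x)
    {
            return x ? 32 - __builtin_clz(x) : 0;
    }

    static int fls64(uint64_t x)
    {
            uint32_t h = x >> 32;           /* high half decides first */

            if (h)
                    return fls(h) + 32;
            return fls((uint32_t)x);
    }

    int main(void)
    {
            printf("%d %d %d\n", fls64(0), fls64(1), fls64(1ULL << 40)); /* 0 1 41 */
            return 0;
    }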
2006-03-26[PATCH] bitops: generic fls()Akinobu Mita
This patch introduces the C-language equivalent of the function: int fls(int x); In include/asm-generic/bitops/fls.h This code largely copied from: include/linux/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
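A generic fls() is typically written as a branch-per-halving cascade; a standalone approximation of that shape (the in-kernel prototype takes an int):

    #include <stdio.h>

    /* 1-based position of the most significant set bit, 0 if none. */
    static int fls(unsigned int x)
    {
            int r = 32;

            if (!x)
                    return 0;
            if (!(x & 0xffff0000u)) { x <<= 16; r -= 16; }
            if (!(x & 0xff000000u)) { x <<= 8;  r -= 8;  }
            if (!(x & 0xf0000000u)) { x <<= 4;  r -= 4;  }
            if (!(x & 0xc0000000u)) { x <<= 2;  r -= 2;  }
            if (!(x & 0x80000000u)) { x <<= 1;  r -= 1;  }
            return r;
    }

    int main(void)
    {
            printf("%d %d %d\n", fls(0), fls(1), fls(0x80000000u)); /* 0 1 32 */
            return 0;
    }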
2006-03-26[PATCH] bitops: generic ffz()Akinobu Mita
This patch introduces the C-language equivalent of the function: unsigned long ffz(unsigned long word); In include/asm-generic/bitops/ffz.h This code largely copied from: include/asm-parisc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] bitops: generic __ffs()Akinobu Mita
This patch introduces the C-language equivalent of the function: unsigned long __ffs(unsigned long word); In include/asm-generic/bitops/__ffs.h This code largely copied from: include/asm-sparc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
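A standalone sketch of the same halving idea for __ffs() (index of the least significant set bit, undefined for 0); a generic ffz(word), as in the previous patch, can then simply be __ffs(~word):

    #include <limits.h>
    #include <stdio.h>

    /* Index of the first (least significant) set bit; undefined for word == 0. */
    static unsigned long __ffs(unsigned long word)
    {
            unsigned long num = 0;

    #if ULONG_MAX > 0xffffffffUL            /* 64-bit long: one extra step */
            if ((word & 0xffffffffUL) == 0) {
                    num += 32;
                    word >>= 32;
            }
    #endif
            if ((word & 0xffff) == 0) { num += 16; word >>= 16; }
            if ((word & 0xff)   == 0) { num += 8;  word >>= 8;  }
            if ((word & 0xf)    == 0) { num += 4;  word >>= 4;  }
            if ((word & 0x3)    == 0) { num += 2;  word >>= 2;  }
            if ((word & 0x1)    == 0) { num += 1; }
            return num;
    }

    #define ffz(word) __ffs(~(word))        /* first zero bit, same trick */

    int main(void)
    {
            printf("%lu %lu\n", __ffs(0x10UL), ffz(0x0fUL));  /* 4 4 */
            return 0;
    }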
2006-03-26[PATCH] bitops: generic __{,test_and_}{set,clear,change}_bit() and test_bit()Akinobu Mita
This patch introduces the C-language equivalents of the functions below: void __set_bit(int nr, volatile unsigned long *addr); void __clear_bit(int nr, volatile unsigned long *addr); void __change_bit(int nr, volatile unsigned long *addr); int __test_and_set_bit(int nr, volatile unsigned long *addr); int __test_and_clear_bit(int nr, volatile unsigned long *addr); int __test_and_change_bit(int nr, volatile unsigned long *addr); int test_bit(int nr, const volatile unsigned long *addr); In include/asm-generic/bitops/non-atomic.h This code largely copied from: asm-powerpc/bitops.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
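The non-atomic variants reduce to plain word arithmetic; a standalone sketch of the pattern (the helper macros are local to this example):

    #include <stdio.h>

    #define BITS_PER_LONG   (8 * sizeof(unsigned long))
    #define BIT_MASK(nr)    (1UL << ((nr) % BITS_PER_LONG))
    #define BIT_WORD(nr)    ((nr) / BITS_PER_LONG)

    /* Plain read-modify-write: no locking, no barriers. */
    static void __set_bit(int nr, volatile unsigned long *addr)
    {
            unsigned long *p = (unsigned long *)addr + BIT_WORD(nr);

            *p |= BIT_MASK(nr);
    }

    static int __test_and_clear_bit(int nr, volatile unsigned long *addr)
    {
            unsigned long *p = (unsigned long *)addr + BIT_WORD(nr);
            unsigned long old = *p;

            *p = old & ~BIT_MASK(nr);
            return (old & BIT_MASK(nr)) != 0;
    }

    static int test_bit(int nr, const volatile unsigned long *addr)
    {
            return (addr[BIT_WORD(nr)] >> (nr % BITS_PER_LONG)) & 1;
    }

    int main(void)
    {
            unsigned long map[2] = { 0, 0 };

            __set_bit(5, map);
            printf("%d\n", test_bit(5, map));               /* 1 */
            printf("%d\n", __test_and_clear_bit(5, map));   /* 1 */
            printf("%d\n", test_bit(5, map));               /* 0 */
            return 0;
    }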
2006-03-26[PATCH] bitops: generic {,test_and_}{set,clear,change}_bit()Akinobu Mita
This patch introduces the C-language equivalents of the functions below: void set_bit(int nr, volatile unsigned long *addr); void clear_bit(int nr, volatile unsigned long *addr); void change_bit(int nr, volatile unsigned long *addr); int test_and_set_bit(int nr, volatile unsigned long *addr); int test_and_clear_bit(int nr, volatile unsigned long *addr); int test_and_change_bit(int nr, volatile unsigned long *addr); In include/asm-generic/bitops/atomic.h This code largely copied from: include/asm-powerpc/bitops.h include/asm-parisc/bitops.h include/asm-parisc/atomic.h Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
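On architectures without atomic bit instructions, the generic version has to protect the read-modify-write somehow; the merged header borrows the parisc approach of hashed spinlocks, which this kernel-style sketch collapses to a single lock for brevity (an illustration, not the actual header):

    #include <linux/spinlock.h>
    #include <linux/types.h>

    static DEFINE_SPINLOCK(bitop_lock);     /* the real code hashes the address
                                               into an array of locks on SMP */

    static inline void set_bit(int nr, volatile unsigned long *addr)
    {
            unsigned long mask = 1UL << (nr % BITS_PER_LONG);
            unsigned long *p = (unsigned long *)addr + nr / BITS_PER_LONG;
            unsigned long flags;

            spin_lock_irqsave(&bitop_lock, flags);
            *p |= mask;
            spin_unlock_irqrestore(&bitop_lock, flags);
    }

    static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
    {
            unsigned long mask = 1UL << (nr % BITS_PER_LONG);
            unsigned long *p = (unsigned long *)addr + nr / BITS_PER_LONG;
            unsigned long old, flags;

            spin_lock_irqsave(&bitop_lock, flags);
            old = *p;
            *p = old | mask;
            spin_unlock_irqrestore(&bitop_lock, flags);
            return (old & mask) != 0;
    }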
2006-03-26[PATCH] bitops: use non atomic operations for minix_*_bit() and ext2_*_bit()Akinobu Mita
Bitmap functions for the minix filesystem and the ext2 filesystem, except ext2_set_bit_atomic() and ext2_clear_bit_atomic(), do not require atomic guarantees. But they are defined using atomic bit operations on several architectures (cris, frv, h8300, ia64, m32r, m68k, m68knommu, mips, s390, sh, sh64, sparc, sparc64, v850, and xtensa). This patch switches them to non-atomic bit operations. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] bitops: cris: remove unnecessary local_irq_restore()Akinobu Mita
remove unnecessary local_irq_restore() after cris_atomic_restore() in test_and_set_bit(). Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: Mikael Starvik <starvik@axis.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] bitops: parisc: add ()-pair in __ffz() macroAkinobu Mita
Noticed by Michael Tokarev: add a missing ()-pair in the __ffz() macro for parisc. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: Kyle McMartin <kyle@mcmartin.ca> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] bitops: alpha: use config options instead of __alpha_fix__ and __alpha_cix__Akinobu Mita
Use config options instead of gcc builtin definitions to indicate the use of the instruction set extensions (CIX and FIX). This is introduced to tell the kbuild system about the use of the optimized hweight*() routines on the alpha architecture. Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: Richard Henderson <rth@twiddle.net> Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] arm: fix undefined reference to generic_flsAkinobu Mita
This patch defines constant_fls() in place of the removed generic_fls(). Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Cc: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] more s/fucn/func/ typo fixesAkinobu Mita
s/fucntion/function/ typo fixes Signed-off-by: Akinobu Mita <mita@miraclelinux.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] isdn4linux: Siemens Gigaset drivers - tty interfaceHansjoerg Lipp
And: Tilman Schmidt <tilman@imap.cc> This patch adds the tty interface to the gigaset module. The tty interface provides direct access to the AT command set of the Gigaset devices. Signed-off-by: Hansjoerg Lipp <hjlipp@web.de> Signed-off-by: Tilman Schmidt <tilman@imap.cc> Cc: Karsten Keil <kkeil@suse.de> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] x86: kprobes-boosterMasami Hiramatsu
Current kprobes copies the original instruction at the probe point and replaces it with a breakpoint instruction (int3). When the kernel hits the probe point, the kprobe handler is invoked, and the copied instruction is single-step executed in the copied buffer (not at the original address) by kprobes. After that, kprobes checks the registers and modifies them (if needed) as if the instruction had been executed at the original address. My proposal is based on the fact that there are many instructions which do NOT require register modification after the single-step execution. When the copied instruction is one of them, kprobes just jumps back to the next instruction after the single-step execution. If so, why don't we execute those instructions directly? With the kprobe-booster patch, kprobes will execute a copied instruction directly and (if needed) jump back to the original code. This direct execution is used when the kprobe has neither a post_handler nor a break_handler, and the copied instruction can be executed directly. I sorted instructions into those which can be executed directly and those which cannot: - Call instructions are NG (cannot be executed directly). We should correct the return address pushed onto the top of the stack. - Indirect instructions except for absolute indirect jumps are NG. Those instructions change EIP unpredictably. We should check EIP and correct it. - Instructions that change EIP beyond the range of the instruction buffer are NG. - Instructions that change EIP into the tail 5 bytes of the instruction buffer (the size of a jump instruction) are NG, because we must write a jump instruction which goes back to the original kernel code in that part of the buffer. - The breakpoint instruction is NG. We should not touch EIP and should pass control to other handlers. - Absolute direct/indirect jumps are OK. - Conditional jumps are NG. - Halt and software interrupts are NG, because execution would stay in the kprobes instruction buffer. - Prefixes are NG. - Unknown/reserved opcodes are NG. - Other 1-byte instructions are OK, but they need jump-back code. - 2-byte instructions are mapped sparsely, so in this release this patch does not boost them. From Intel's IA-32 opcode map described in the IA-32 Intel Architecture Software Developer's Manual Vol. 2B, I determined that the following opcodes are not boostable: - 0FH (2-byte escape) - 70H-7FH (Jump on condition) - 9AH (Call) and 9CH (Pushf) - C0H-C1H (Grp 2: includes reserved opcode) - C6H-C7H (Grp 11: includes reserved opcode) - CCH-CEH (Software interrupt) - D0H-D3H (Grp 2: includes reserved opcode) - D6H (Reserved) - D8H-DFH (Coprocessor) - E0H-E3H (loop/conditional jump) - E8H (Call) - F0H-F3H (Prefixes and reserved) - F4H (Halt) - F6H-F7H (Grp 3: includes reserved opcode) - FEH-FFH (Grp 4, 5: includes reserved opcode) Kprobe-booster checks whether the target instruction can be boosted (can be executed directly) in the arch_copy_kprobe() function. If the target instruction can be boosted, it clears the "boostable" flag; if not, it sets the "boostable" flag to -1, the disabled status. In the resume_execution() function, if the "boostable" flag is cleared, kprobe-booster measures the size of the target instruction and sets the "boostable" flag to 1. In kprobe_handler(), kprobes checks the "boostable" flag. If the flag is 1, it resets the current kprobe and executes the instruction buffer directly instead of single-stepping. When unregistering a boosted kprobe, it calls synchronize_sched() after the "int3" is removed, so after synchronize_sched() returns we can ensure that: - interrupt handlers have finished on all CPUs.
- the instruction buffer is not being executed on any CPU. So we can release the boosted kprobe safely. Also, on a preemptible kernel, the booster is not enabled where kernel preemption is enabled, so there are no preempted threads on the instruction buffer. The description of kretprobe-booster: ==================================== In normal operation, kretprobe makes the target function return to trampoline code, and a kprobe (called trampoline_probe) has been inserted at the trampoline code. When the kernel hits this kprobe, it calls the kretprobe's handler and then returns to the original return address. The kretprobe-booster patch removes the trampoline_probe. It allows the trampoline code to call the kretprobe's handler directly instead of invoking a kprobe, and the trampoline code returns to the original return address. This new trampoline code stores and restores registers, so the kretprobe handler is still able to access those registers. The current kprobe has about 1.3 usec/probe(*) overhead, and the kprobe-booster patch reduces it to 0.6 usec/probe(*). Also, the current kretprobe has about 2.0 usec/probe(*) overhead. The kprobe-booster patch reduces it to 1.3 usec/probe(*), and the combination of the kprobe-booster and kretprobe-booster patches reduces it to 0.9 usec/probe(*). I expect the combination of both patches can cut the probing overhead roughly in half. (*) Performance numbers strongly depend on the processor model. Andrew Morton wrote: > These preempt tricks look rather nasty. Can you please describe what the > problem is, precisely? And how this code avoids it? Perhaps we can find > something cleaner. The problem is how to remove the copied instructions of the kprobe *safely* on a preemptable kernel (CONFIG_PREEMPT=y). Kprobes basically executes the following actions: (1) int3 (2) preempt_disable() (3) kprobe_prehandler() (4) copied instruction (single step) (5) kprobe_posthandler() (6) preempt_enable() (7) return to the original code During the execution of the copied instruction, preemption is disabled (from step (2) to (6)). When unregistering the probes, kprobes waits for an RCU quiescent state by using synchronize_sched() after removing the int3 instruction; thus we can ensure the copied instruction is not being executed. On the other hand, kprobe-booster executes the following actions: (1) int3 (2) preempt_disable() (3) kprobe_prehandler() (4) preempt_enable() <-- this one is added by my patch (5) copied instruction (direct execution) (6) jmp back to the original code The problem is that we have no way to prevent preemption at step (5) or (6). We cannot call preempt_disable() after step (6), because there is no room to do that. Thus, some other processes may be preempted at step (5) or (6) on a preemptable kernel. And I couldn't find an easy way to ensure that other processes' stacks do *not* hold the addresses of those instructions. (I thought of some ways to do that, but they are very costly.) So currently, I simply boost the kprobe only when the probe point already has preemption disabled. > Also, the patch adds a preempt_enable() but I don't see a corresponding > preempt_disable(). Am I missing something? It corresponds to the preempt_disable() at the top of kprobe_handler().
I copied the code of kprobe_handler() here:

    static int __kprobes kprobe_handler(struct pt_regs *regs)
    {
            struct kprobe *p;
            int ret = 0;
            kprobe_opcode_t *addr = NULL;
            unsigned long *lp;
            struct kprobe_ctlblk *kcb;

            /*
             * We don't want to be preempted for the entire
             * duration of kprobe processing
             */
            preempt_disable();   <-- HERE
            kcb = get_kprobe_ctlblk();

Signed-off-by: Masami Hiramatsu <hiramatu@sdl.hitachi.co.jp> Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] hrtimers: remove data fieldRoman Zippel
The nanosleep cleanup allows the data field of hrtimer to be removed. The callback function can use container_of() to get its own data. Since the hrtimer structure is anyway embedded in other structures, this adds no overhead. Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
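A hypothetical user of the new scheme, to show the pattern: the hrtimer is embedded in the owning object and the callback recovers it with container_of() (struct and function names here are invented; the callback signature and mode constants follow roughly the hrtimer interface of that era and were renamed in later kernels).

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>
    #include <linux/kernel.h>               /* container_of() */

    struct my_watchdog {
            struct hrtimer  timer;          /* embedded, no ->data needed */
            int             fired;
    };

    /* The callback now receives the hrtimer itself instead of a void *data. */
    static int my_watchdog_fn(struct hrtimer *timer)
    {
            struct my_watchdog *wd = container_of(timer, struct my_watchdog, timer);

            wd->fired = 1;
            return HRTIMER_NORESTART;
    }

    static void my_watchdog_arm(struct my_watchdog *wd)
    {
            hrtimer_init(&wd->timer, CLOCK_MONOTONIC, HRTIMER_REL);
            wd->timer.function = my_watchdog_fn;
            hrtimer_start(&wd->timer, ktime_set(1, 0), HRTIMER_REL);
    }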
2006-03-26[PATCH] hrtimers: remove nsec_t typedefRoman Zippel
nsec_t predates ktime_t and has mostly been superseded by it. In the few places that are left it's better to make it explicit that we're dealing with 64 bit values here. Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: John Stultz <johnstul@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] hrtimers: remove DEFINE_KTIME and ktime_to_clock_t()Roman Zippel
Now that it_real_value is gone, the last users of DEFINE_KTIME and ktime_to_clock_t are also gone, so remove them before someone starts using them again. Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] hrtimers: remove state fieldRoman Zippel
Remove the state field and encode this information in the rb_node, similar to the normal timers. Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] hrtimers: simplify nanosleepRoman Zippel
nanosleep is the only user of the expired state, so let it manage this itself, which makes the hrtimer code a bit simpler. The remaining time is also only calculated if requested. Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Acked-by: Ingo Molnar <mingo@elte.hu> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] hrtimers: pass current time to hrtimer_forward()Roman Zippel
Pass the current time to hrtimer_forward(). This makes it possible to use the softirq time from the timer base when the forward function is called from the timer callback. Other places pass the current time with a call to timer->base->get_time(). Signed-off-by: Roman Zippel <zippel@linux-m68k.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
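A sketch of a periodic callback after this change, passing the current time explicitly (my_tick() is an invented name):

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static int my_tick(struct hrtimer *timer)
    {
            ktime_t period = ktime_set(1, 0);               /* 1 second */
            ktime_t now = timer->base->get_time();          /* or the cached softirq time */

            /* Advance the expiry past 'now' in whole periods, then re-arm. */
            hrtimer_forward(timer, now, period);
            return HRTIMER_RESTART;
    }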
2006-03-26[PATCH] hrtimers: optimize softirq runqueuesThomas Gleixner
The hrtimer softirq is called from the timer softirq every tick. Retrieve the current time from xtime and wall_to_monotonic instead of calling base->get_time() for each timer base. Store the time in the base structure and provide a hook once clock source abstractions are in place and to keep the code open for new base clocks. Based on a patch from: Roman Zippel <zippel@linux-m68k.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] remove ->get_blocks() supportBadari Pulavarty
Now that get_block() can handle mapping multiple disk blocks, there is no need to have ->get_blocks(). This patch removes the fs-specific ->get_blocks() added for DIO and makes its users use get_block() instead. Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] pass b_size to ->get_block()Badari Pulavarty
Pass the amount of disk that needs to be mapped to get_block(). This way one can modify the fs ->get_block() functions to map multiple blocks at the same time. [akpm@osdl.org: performance tweak] [akpm@osdl.org: remove unneeded assignments] Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
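A hedged sketch of the new convention from a filesystem's point of view (myfs_get_block() and myfs_find_extent() are invented names): on entry bh->b_size says how much the caller would like mapped, and on return it says how much actually was mapped.

    #include <linux/fs.h>
    #include <linux/buffer_head.h>

    /* Hypothetical extent lookup: returns the number of contiguous blocks
       mapped starting at iblock, and their starting physical block in *phys. */
    static unsigned long myfs_find_extent(struct inode *inode, sector_t iblock,
                                          unsigned long max_blocks,
                                          sector_t *phys, int create);

    static int myfs_get_block(struct inode *inode, sector_t iblock,
                              struct buffer_head *bh, int create)
    {
            unsigned int blkbits = inode->i_blkbits;
            unsigned long max_blocks = bh->b_size >> blkbits;   /* requested size */
            unsigned long mapped;
            sector_t phys;

            mapped = myfs_find_extent(inode, iblock, max_blocks, &phys, create);
            if (!mapped)
                    return 0;                       /* a hole: leave bh unmapped */

            map_bh(bh, inode->i_sb, phys);          /* mark mapped, set b_blocknr */
            bh->b_size = mapped << blkbits;         /* report how much we mapped */
            return 0;
    }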
2006-03-26[PATCH] change buffer_head.b_size to size_tBadari Pulavarty
Increase the size of the buffer_head b_size field (only) for 64 bit platforms. Update some old and moldy comments in and around the structure as well. The b_size increase allows us to perform larger mappings and allocations for large I/O requests from userspace, which tie in with other changes allowing the get_block_t() interface to map multiple blocks at once. Signed-off-by: Nathan Scott <nathans@sgi.com> Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] ext3_get_blocks: support multiple blocks allocation in ext3_new_block()Mingming Cao
Change ext3_try_to_allocate() (called via ext3_new_blocks()) to try to allocate the requested number of blocks on a best-effort basis: after allocating the first block, it will always attempt to allocate the next few adjacent blocks (up to the requested size and not beyond the reservation window) at the same time. Signed-off-by: Mingming Cao <cmm@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] ext3_get_blocks: Mapping multiple blocks at a onceMingming Cao
Currently ext3_get_block() only maps or allocates one block at a time. This is quite inefficient for sequential IO workloads. I have posted an early implementation of simple multiple-block mapping and allocation with current ext3. The basic idea is to allocate the first block in the existing way and attempt to allocate the next adjacent blocks on a best-effort basis. More description of the implementation can be found here: http://marc.theaimsgroup.com/?l=ext2-devel&m=112162230003522&w=2 The following is the latest version of the patch: it breaks the original patch into 5 patches, reworks some logic, and fixes some bugs. The break-up is: [patch 1] Adding map multiple blocks at a time in ext3_get_blocks() [patch 2] Extend ext3_get_blocks() to support multiple block allocation [patch 3] Implement multiple block allocation in ext3-try-to-allocate (called via ext3_new_block()). [patch 4] Proper accounting updates in ext3_new_blocks() [patch 5] Adjust reservation window size properly (by the given number of blocks to allocate) before block allocation to increase the possibility of allocating multiple blocks in a single call. Tests done so far include fsx, tiobench and dbench. The following numbers collected from Direct IO tests (1G file creation/read) show that the system time has been greatly reduced (more than 50% on my 8-cpu system) with the patches.

    1G file DIO write:        2.6.15      2.6.15+patches
      real                    0m31.275s   0m31.161s
      user                    0m0.000s    0m0.000s
      sys                     0m3.384s    0m0.564s

    1G file DIO read:         2.6.15      2.6.15+patches
      real                    0m30.733s   0m30.624s
      user                    0m0.000s    0m0.004s
      sys                     0m0.748s    0m0.380s

Some previous tests we did on buffered IO with multiple block allocation and delayed allocation show noticeable improvement in throughput and system time. This patch: Add support for mapping multiple blocks in one call. This is useful for DIO reads and re-writes (where blocks are already allocated), and is also in line with Christoph's proposal of using getblocks() in mpage_readpage() or mpage_readpages(). Signed-off-by: Mingming Cao <cmm@us.ibm.com> Cc: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] 2TB files: change type of kstatfs entriesTakashi Sato
This fix was proposed by Trond Myklebust. He says: The type "sector_t" is heavily tied to the block layer interface as an offset/handle to a block, and is subject to a supposedly block-specific configuration option: CONFIG_LBD. Despite this, it is used in struct kstatfs to save a couple of bytes on the stack whenever we call the filesystems' ->statfs(). So kstatfs's block-related entries are invalid on statfs64 for a network filesystem which has more than 2^32-1 blocks when CONFIG_LBD is disabled. - struct kstatfs: change the type of the following entries from sector_t to u64: f_blocks f_bfree f_bavail f_files f_ffree Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Takashi Sato <sho@tnes.nec.co.jp> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
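The shape of the type change, abridged to the members the commit lists (not verbatim kernel code):

    #include <linux/types.h>

    struct kstatfs {
            long    f_type;
            long    f_bsize;
            u64     f_blocks;       /* was sector_t */
            u64     f_bfree;        /* was sector_t */
            u64     f_bavail;       /* was sector_t */
            u64     f_files;        /* was sector_t */
            u64     f_ffree;        /* was sector_t */
            /* remaining members (f_fsid, f_namelen, ...) unchanged */
    };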
2006-03-26[PATCH] 2TB files: add blkcnt_tTakashi Sato
Add blkcnt_t as the type of inode.i_blocks. This enables you to make the size of blkcnt_t either 4 bytes or 8 bytes on 32-bit architectures with CONFIG_LSF. - CONFIG_LSF: add a new configuration parameter. - blkcnt_t: on h8300, i386, mips, powerpc, s390 and sh, which define sector_t, blkcnt_t is defined as u64 if CONFIG_LSF is enabled; otherwise it is defined as unsigned long. On other architectures, it is defined as unsigned long. - inode.i_blocks: change the type from sector_t to blkcnt_t. Signed-off-by: Takashi Sato <sho@tnes.nec.co.jp> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
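Approximately, the new type and its use (the exact #ifdef nesting in the tree also depends on whether the architecture defines sector_t):

    #include <linux/types.h>

    #if defined(CONFIG_LSF)
    typedef u64             blkcnt_t;       /* 64-bit block counts on 32-bit hosts */
    #else
    typedef unsigned long   blkcnt_t;
    #endif

    struct inode {
            /* ... */
            blkcnt_t        i_blocks;       /* was sector_t */
            /* ... */
    };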
2006-03-26[PATCH] 2TB files: st_blocks is invalid when calling stat64Takashi Sato
This patch series fixes the following problems on 32 bits architecture. o stat64 returns the lower 32 bits of blocks, although userland st_blocks has 64 bits, because i_blocks has only 32 bits. The ioctl with FIOQSIZE has the same problem. o As Dave Kleikamp said, making >2TB file on JFS results in writing an invalid block number to disk inode. The cause is the same as above too. o In generic quota code dquot_transfer(), the file usage is calculated from i_blocks via inode_get_bytes(). If the file is over 2TB, the change of usage is less than expected. The cause is the same as above too. o As Trond Myklebust said, statfs64's entries related to blocks are invalid on statfs64 for a network filesystem which has more than 2^32-1 blocks with CONFIG_LBD disabled. [PATCH 3/3] We made patches to fix problems that occur when handling a large filesystem and a large file. It was discussed on the mails titled "stat64 for over 2TB file returned invalid st_blocks". Signed-off-by: Takashi Sato <sho@tnes.nec.co.jp> Cc: Dave Kleikamp <shaggy@austin.ibm.com> Cc: Jan Kara <jack@ucw.cz> Cc: Trond Myklebust <trond.myklebust@fys.uio.no> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] mempool: use mempool_create_slab_pool()Matthew Dobson
Modify well over a dozen mempool users to call mempool_create_slab_pool() rather than calling mempool_create() with extra arguments, saving about 30 lines of code and increasing readability. Signed-off-by: Matthew Dobson <colpatch@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] mempool: add mempool_create_slab_pool()Matthew Dobson
Create a simple wrapper function for the common case of creating a slab-based mempool. Signed-off-by: Matthew Dobson <colpatch@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
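The wrapper presumably reduces to the existing slab allocator callbacks, roughly as below (kmem_cache_t was the slab cache type of that era):

    #include <linux/mempool.h>
    #include <linux/slab.h>

    static inline mempool_t *mempool_create_slab_pool(int min_nr, kmem_cache_t *kc)
    {
            return mempool_create(min_nr, mempool_alloc_slab, mempool_free_slab,
                                  (void *)kc);
    }

    /* Callers (as in the previous patch) then shrink from
           pool = mempool_create(16, mempool_alloc_slab, mempool_free_slab, my_cache);
       to
           pool = mempool_create_slab_pool(16, my_cache);                            */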
2006-03-26[PATCH] mempool: add kzalloc allocatorMatthew Dobson
Add another allocator to the common mempool code: a kzalloc/kfree allocator. This will be used by the next patch in the series to replace a mempool-backed kzalloc allocator. It is also very likely that there will be more users in the future. Signed-off-by: Matthew Dobson <colpatch@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] mempool: add kmalloc allocatorMatthew Dobson
Add another allocator to the common mempool code: a kmalloc/kfree allocator. This will be used by the next patch in the series to replace duplicate mempool-backed kmalloc allocators in several places in the kernel. It is also very likely that there will be more users in the future. Signed-off-by: Matthew Dobson <colpatch@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
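The kmalloc-backed allocator pair presumably looks roughly like this, with the requested size smuggled through pool_data (the kzalloc variant of the previous patch differs only in calling kzalloc()):

    #include <linux/mempool.h>
    #include <linux/slab.h>

    /* pool_data carries the element size, cast to and from a pointer. */
    static void *mempool_kmalloc(gfp_t gfp_mask, void *pool_data)
    {
            size_t size = (size_t)pool_data;

            return kmalloc(size, gfp_mask);
    }

    static void mempool_kfree(void *element, void *pool_data)
    {
            kfree(element);
    }

    /* A pool of 32 ready-to-go 256-byte buffers would then be:
           mempool_t *pool = mempool_create(32, mempool_kmalloc, mempool_kfree,
                                            (void *)(size_t)256);                  */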
2006-03-26[PATCH] mempool: add page allocatorMatthew Dobson
This will be used by the next patch in the series to replace duplicate mempool-backed page allocators in 2 places in the kernel. It is also likely that there will be more users in the future. Signed-off-by: Matthew Dobson <colpatch@us.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] Use loff_t for size in struct proc_dir_entryManeesh Soni
Change proc_dir_entry->size to be loff_t to represent files like /proc/vmcore for 32bit systems with more than 4G memory. Needed for seeing correct size for /proc/vmcore for 32-bit systems with > 4G RAM. Signed-off-by: Maneesh Soni <maneesh@in.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] consolidate sys32/compat_adjtimexStephen Rothwell
Create compat_sys_adjtimex and use it in all appropriate places. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] create struct compat_timex and use it everywhereStephen Rothwell
We had a copy of the compatibility version of struct timex in each 64 bit architecture. This patch just creates a global one and replaces all the usages of the old ones. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Arnd Bergmann <arnd@arndb.de> Acked-by: Kyle McMartin <kyle@parisc-linux.org> Acked-by: Tony Luck <tony.luck@intel.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] VFS,fs/locks.c,NFSD4: add race_free posix_lock_file_conf() interfaceAndy Adamson
Lockd and the NFSv4 server both exercise a race condition where posix_test_lock() is called either before or after posix_lock_file() to deal with a denied lock request due to a conflicting lock. Remove the race condition for the NFSv4 server by adding a new conflicting-lock parameter to __posix_lock_file(), changing the name to __posix_lock_file_conf(). Keep the posix_lock_file() interface, add the posix_lock_file_conf() interface, and have both call __posix_lock_file_conf(). [akpm@osdl.org: Put the EXPORT_SYMBOL() where it belongs] Signed-off-by: Andy Adamson <andros@citi.umich.edu> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] hpet header sanitizationRandy Dunlap
Add __KERNEL__ block. Use __KERNEL__ to allow ioctl interface to be usable. Signed-off-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] ipmi: add full sysfs supportCorey Minyard
Add full driver model support for the IPMI driver. It links in the proper bus and device support. It adds an "ipmi" driver interface that has each BMC discovered by the driver (as a device). These BMCs appear in the devices/platform directory. If there are multiple interfaces to the same BMC, the driver should discover this and will only have one BMC entry. The BMC entry will have pointers to each interface device that connects to it. The device information (statistics and config information) has not yet been ported over to the driver model from proc, that will come later. This work was based on work by Yani Ioannou. I basically rewrote it using that code as a guide, but he still deserves credit :). [bunk@stusta.de: make ipmi_find_bmc_guid() static] Signed-off-by: Corey Minyard <minyard@acm.org> Signed-off-by: Yani Ioannou <yani.ioannou@gmail.com> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] cleanup smp_call_function UP buildCon Kolivas
net/core/flow.c: In function 'flow_cache_flush': net/core/flow.c:299: warning: statement with no effect Signed-off-by: Con Kolivas <kernel@kolivas.org> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] Make address_space_operations->invalidatepage return voidNeilBrown
The return value of this function is never used, so let's be honest and declare it as void. Some places where invalidatepage returned 0, I have inserted comments suggesting a BUG_ON. [akpm@osdl.org: JBD BUG fix] [akpm@osdl.org: rework for git-nfs] [akpm@osdl.org: don't go BUG in block_invalidate_page()] Signed-off-by: Neil Brown <neilb@suse.de> Acked-by: Dave Kleikamp <shaggy@austin.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] Make address_space_operations->sync_page return voidNeilBrown
The only user ignores the return value, and the only instance (block_sync_page) always returns 0... Signed-off-by: Neil Brown <neilb@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] EFI: keep physical table addresses in efi structureBjorn Helgaas
Almost all users of the table addresses from the EFI system table want physical addresses. So rather than doing the pa->va->pa conversion, just keep physical addresses in struct efi. This fixes a DMI bug: the efi structure contained the physical SMBIOS address on x86 but the virtual address on ia64, so dmi_scan_machine() used ioremap() on a virtual address on ia64. This is essentially the same as an earlier patch by Matt Tolentino: http://marc.theaimsgroup.com/?l=linux-kernel&m=112130292316281&w=2 except that this changes all table addresses, not just ACPI addresses. Matt's original patch was backed out because it caused MCAs on HP sx1000 systems. That problem is resolved by the ioremap() attribute checking added for ia64. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] ia64: ioremap: check EFI for valid memory attributesBjorn Helgaas
Check the EFI memory map so we can use the correct memory attributes for ioremap(). Previously, we always used uncacheable access, which blows up on some machines for regular system memory. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] EFI, /dev/mem: simplify efi_mem_attribute_range()Bjorn Helgaas
Pass the size, not a pointer to the size, to efi_mem_attribute_range(). This function validates memory regions for the /dev/mem read/write/mmap paths. The pointer allows arches to reduce the size of the range, but I think that's unnecessary complexity. Simplifying it will let me use efi_mem_attribute_range() to improve the ia64 ioremap() implementation. Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com> Cc: Matt Domsch <Matt_Domsch@dell.com> Cc: "Tolentino, Matthew E" <matthew.e.tolentino@intel.com> Cc: "Brown, Len" <len.brown@intel.com> Cc: Andi Kleen <ak@muc.de> Acked-by: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] ia64: use i386 dmi_scan.cMatt Domsch
Enable DMI table parsing on ia64. Andi Kleen has a patch in his x86_64 tree which enables the use of i386 dmi_scan.c on x86_64. dmi_scan.c functions are being used by the drivers/char/ipmi/ipmi_si_intf.c driver for autodetecting the ports or memory spaces where the IPMI controllers may be found. This patch adds equivalent changes for ia64 as to what is in the x86_64 tree. In addition, I reworked the DMI detection, such that on EFI-capable systems, it uses the efi.smbios pointer to find the table, rather than brute-force searching from 0xF0000. On non-EFI systems, it continues the brute-force search. My test system, an Intel S870BN4 'Tiger4', aka Dell PowerEdge 7250, with latest BIOS, does not list the IPMI controller in the ACPI namespace, nor does it have an ACPI SPMI table. Also note, currently shipping Dell x8xx EM64T servers don't have these either, so DMI is the only method for obtaining the address of the IPMI controller. Signed-off-by: Matt Domsch <Matt_Domsch@dell.com> Acked-by: "Luck, Tony" <tony.luck@intel.com> Cc: Andi Kleen <ak@muc.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-03-26[PATCH] Add flush_kernel_dcache_page() APIJames Bottomley
We have a problem in a lot of emulated storage in that it takes a page from get_user_pages() and does something like

    kmap_atomic(page)
    modify page
    kunmap_atomic(page)

However, nothing has flushed the kernel cache view of the page before the kunmap. We need a lightweight API to do this, so this new API would specifically be for flushing the kernel cache view of a user page which the kernel has modified. The driver would need to add flush_kernel_dcache_page(page) before the final kunmap. Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
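The usage the commit describes, written out as a small hedged helper (copy_into_user_page() is an invented name; KM_USER0 was the kmap-atomic slot convention of the time):

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Write into a page obtained from get_user_pages() via a temporary
       kernel mapping, then flush the kernel's cache view before unmapping. */
    static void copy_into_user_page(struct page *page, const void *src,
                                    unsigned int offset, unsigned int len)
    {
            char *vaddr = kmap_atomic(page, KM_USER0);

            memcpy(vaddr + offset, src, len);
            flush_kernel_dcache_page(page);         /* the new API */
            kunmap_atomic(vaddr, KM_USER0);
    }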
2006-03-26[PATCH] Add API for flushing Anon pagesJames Bottomley
Currently, get_user_pages() returns fully coherent pages to the kernel for anything other than anonymous pages. This is a problem for things like fuse and the SCSI generic ioctl SG_IO which can potentially wish to do DMA to anonymous pages passed in by users. The fix is to add a new memory management API: flush_anon_page() which is used in get_user_pages() to make anonymous pages coherent. Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
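Approximately, the hook is a no-op unless an architecture with aliasing caches overrides it, and get_user_pages() calls it next to flush_dcache_page(); a hedged sketch (make_user_page_coherent() is an invented helper, not kernel code):

    #include <linux/mm.h>
    #include <linux/highmem.h>

    /* Default definition: architectures that need real flushing define
       ARCH_HAS_FLUSH_ANON_PAGE and provide their own implementation. */
    #ifndef ARCH_HAS_FLUSH_ANON_PAGE
    static inline void flush_anon_page(struct page *page, unsigned long vmaddr)
    {
    }
    #endif

    /* Illustrative helper showing where it slots in for a looked-up user page. */
    static void make_user_page_coherent(struct page *page, unsigned long uaddr)
    {
            flush_anon_page(page, uaddr);   /* new: cover anonymous pages too */
            flush_dcache_page(page);        /* page-cache pages, as before */
    }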