path: root/include/asm-i386
Age	Commit message	Author
2006-12-08[PATCH] Generic BUG for i386Jeremy Fitzhardinge
This makes i386 use the generic BUG machinery. There are no functional changes from the old i386 implementation. The main advantage in using the generic BUG machinery for i386 is that the inlined overhead of BUG is just the ud2a instruction; the file+line(+function) information is no longer inlined into the instruction stream. This reduces cache pollution, and makes disassembly work properly. Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Andi Kleen <ak@muc.de> Cc: Hugh Dickens <hugh@veritas.com> Cc: Michael Ellerman <michael@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
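For orientation, a minimal sketch of what the generic BUG machinery boils down to on i386 (the layout and section name are illustrative approximations, not the exact kernel definitions):

	/* BUG() now emits only the trapping ud2a inline; the file/line
	 * metadata lives in a separate __bug_table section, keyed by the
	 * trap address, which the invalid-opcode handler searches when
	 * the trap actually fires. */
	struct bug_entry {
		unsigned long	bug_addr;	/* address of the ud2a insn */
		const char	*file;		/* no longer in .text       */
		unsigned short	line;
		unsigned short	flags;
	};

	#define BUG() do {						\
		asm volatile("1:\tud2\n"				\
			     ".pushsection __bug_table, \"a\"\n"	\
			     "2:\t.long 1b, %c0\n"			\
			     "\t.word %c1, 0\n"				\
			     ".popsection"				\
			     : : "i" (__FILE__), "i" (__LINE__));	\
		for (;;)						\
			;						\
	} while (0)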
2006-12-07Merge branch 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6Linus Torvalds
* 'for-linus' of git://one.firstfloor.org/home/andi/git/linux-2.6: (156 commits) [PATCH] x86-64: Export smp_call_function_single [PATCH] i386: Clean up smp_tune_scheduling() [PATCH] unwinder: move .eh_frame to RODATA [PATCH] unwinder: fully support linker generated .eh_frame_hdr section [PATCH] x86-64: don't use set_irq_regs() [PATCH] x86-64: check vector in setup_ioapic_dest to verify if need setup_IO_APIC_irq [PATCH] x86-64: Make ix86 default to HIGHMEM4G instead of NOHIGHMEM [PATCH] i386: replace kmalloc+memset with kzalloc [PATCH] x86-64: remove remaining pc98 code [PATCH] x86-64: remove unused variable [PATCH] x86-64: Fix constraints in atomic_add_return() [PATCH] x86-64: fix asm constraints in i386 atomic_add_return [PATCH] x86-64: Correct documentation for bzImage protocol v2.05 [PATCH] x86-64: replace kmalloc+memset with kzalloc in MTRR code [PATCH] x86-64: Fix numaq build error [PATCH] x86-64: include/asm-x86_64/cpufeature.h isn't a userspace header [PATCH] unwinder: Add debugging output to the Dwarf2 unwinder [PATCH] x86-64: Clarify error message in GART code [PATCH] x86-64: Fix interrupt race in idle callback (3rd try) [PATCH] x86-64: Remove unwind stack pointer alignment forcing again ... Fixed conflict in include/linux/uaccess.h manually Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07[PATCH] cleanup asm/setup.h userspace visibilityAdrian Bunk
Make the contents of the userspace asm/setup.h header consistent on all architectures: - export setup.h to userspace on all architectures - export only COMMAND_LINE_SIZE to userspace - frv: move COMMAND_LINE_SIZE from param.h - i386: remove duplicate COMMAND_LINE_SIZE from param.h - arm: - export ATAGs to userspace - change u8/u16/u32 to __u8/__u16/__u32 Signed-off-by: Adrian Bunk <bunk@stusta.de> Acked-by: Russell King <rmk@arm.linux.org.uk> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07[PATCH] Pass struct dev pointer to dma_cache_sync()Ralf Baechle
dma_cache_sync() is ill-designed in that it does not have a struct device pointer argument, which makes proper support for systems that consist of a mix of coherent and non-coherent DMA devices hard. Change dma_cache_sync to take a struct device pointer as first argument and fix all its callers to pass it. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Cc: James Bottomley <James.Bottomley@steeleye.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
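The interface change amounts to one extra parameter; a sketch of the before/after prototypes:

	/* before: no way to tell which device (and hence which bus) the
	 * buffer belongs to */
	void dma_cache_sync(void *vaddr, size_t size,
			    enum dma_data_direction direction);

	/* after this patch: callers pass the owning device, so architectures
	 * with mixed coherent/non-coherent devices can do the right thing */
	void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
			    enum dma_data_direction direction);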
2006-12-07[PATCH] Add struct dev pointer to dma_is_consistent()Ralf Baechle
dma_is_consistent() is ill-designed in that it does not have a struct device pointer argument which makes proper support for systems that consist of a mix of coherent and non-coherent DMA devices hard. Change dma_is_consistent to take a struct device pointer as first argument and fix the sole caller to pass it. Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Cc: James Bottomley <James.Bottomley@steeleye.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Greg KH <greg@kroah.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07[PATCH] remove kernel syscallsArnd Bergmann
The last thing we agreed on was to remove the macros entirely for 2.6.19, on all architectures. Unfortunately, I think nobody actually _did_ that, so they are still there. [akpm@osdl.org: x86_64 fix] Cc: David Woodhouse <dwmw2@infradead.org> Cc: Greg Schafer <gschafer@zip.com.au> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07[PATCH] lockdep: name some old style locksPeter Zijlstra
Name some of the remaining 'old_style_spin_init' locks. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
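A typical conversion looks like the following (the lock names here are made up for illustration):

	#include <linux/spinlock.h>

	/* old style: statically initialized but anonymous, so lockdep lumps
	 * it into the catch-all "old_style_spin_init" lock class */
	static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;

	/* named replacements, giving lockdep a useful per-lock class name */
	static DEFINE_SPINLOCK(example_lock2);
	static spinlock_t example_lock3 = __SPIN_LOCK_UNLOCKED(example_lock3);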
2006-12-07[PATCH] swsusp: Support i386 systems with PAE or without PSERafael J. Wysocki
Make swsusp support i386 systems with PAE or without PSE. This is done by creating temporary page tables located in resume-safe page frames before the suspend image is restored in the same way as x86_64 does it. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Cc: Andi Kleen <ak@suse.de> Cc: Dave Jones <davej@redhat.com> Cc: Nigel Cunningham <ncunningham@linuxmail.org> Cc: Pavel Machek <pavel@ucw.cz> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07[PATCH] slab: remove kmem_cache_tChristoph Lameter
Replace all uses of kmem_cache_t with struct kmem_cache. The patch was generated using the following script:

	#!/bin/sh
	#
	# Replace one string by another in all the kernel sources.
	#

	set -e

	for file in `find * -name "*.c" -o -name "*.h"|xargs grep -l $1`; do
		quilt add $file
		sed -e "1,\$s/$1/$2/g" $file >/tmp/$$
		mv /tmp/$$ $file
		quilt refresh
	done

The script was run like this:

	sh replace kmem_cache_t "struct kmem_cache"

Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2006-12-07[PATCH] silence unused pgdat warning from alloc_bootmem_node and friendsAndy Whitcroft
x86 NUMA systems only define bootmem for node 0. alloc_bootmem_node() and friends therefore ignore the passed pgdat and use NODE_DATA(0) in all cases. This leads to the following warnings, as we are not using the passed parameter:

	.../mm/page_alloc.c: In function 'zone_wait_table_init':
	.../mm/page_alloc.c:2259: warning: unused variable 'pgdat'

One option would be to define all variables used with these macros __attribute__ ((unused)), but this would leave us exposed should these become genuinely unused. The key here is that we _are_ using the value; we ignore it, but that is a deliberate action. This patch adds a nested local variable within the alloc_bootmem_node helper to which the pgdat parameter is assigned, making it 'used'. The nested local is marked __attribute__ ((unused)) to silence this same warning for it. Signed-off-by: Andy Whitcroft <apw@shadowen.org> Cc: Christoph Lameter <clameter@engr.sgi.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
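The shape of the fix, roughly (a sketch modelled on the i386 mmzone.h helper; the exact bootmem call and the name of the nested local may differ):

	/* consume the pgdat argument in a deliberately unused nested local
	 * so the caller's variable counts as "used" and the warning goes
	 * away, without hiding genuinely unused variables elsewhere */
	#define alloc_bootmem_node(pgdat, x)				\
	({								\
		struct pglist_data __attribute__((unused))		\
				*__alloc_bootmem_node__pgdat = (pgdat);	\
		__alloc_bootmem(x, SMP_CACHE_BYTES, __pa(MAX_DMA_ADDRESS)); \
	})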
2006-12-07[PATCH] mm: pagefault_{disable,enable}()Peter Zijlstra
Introduce pagefault_{disable,enable}() and use these where previously we did manual preempt increments/decrements to make the pagefault handler do the atomic thing. Currently they still rely on the increased preempt count, but do not rely on the disabled preemption; this might go away in the future. (NOTE: the extra barrier() in pagefault_disable might fix some holes on machines which have too many registers for their own good) [heiko.carstens@de.ibm.com: s390 fix] Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Nick Piggin <npiggin@suse.de> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
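Conceptually the pair is just the preempt-count bump plus compiler barriers; a sketch of the generic helpers matching the description above:

	static inline void pagefault_disable(void)
	{
		inc_preempt_count();
		/* make sure the count is visible before any faulting access */
		barrier();
	}

	static inline void pagefault_enable(void)
	{
		/* make sure all faulting accesses are done before the count drops */
		barrier();
		dec_preempt_count();
		/* re-check for preemption now that the count is back down */
		barrier();
		preempt_check_resched();
	}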
2006-12-07[PATCH] i386: replace kmalloc+memset with kzallocBurman Yan
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
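The mechanical transformation in question, at a hypothetical call site:

	/* before */
	ptr = kmalloc(sizeof(*ptr), GFP_KERNEL);
	if (ptr)
		memset(ptr, 0, sizeof(*ptr));

	/* after: one call, same GFP flags, returns zeroed memory */
	ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);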
2006-12-07[PATCH] x86-64: remove remaining pc98 codeAdrian Bunk
Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] x86-64: fix asm constraints in i386 atomic_add_returnDuncan Sands
Since v->counter is both read and written, it should be an output as well as an input for the asm. The current code only gets away with this because counter is volatile. Also, according to Documentation/atomic_ops.txt, atomic_add_return should provide a memory barrier, in particular a compiler barrier, so the asm should be marked as clobbering memory. Test case:

	#include <stdio.h>

	typedef struct { int counter; } atomic_t; /* NB: no "volatile" */

	#define ATOMIC_INIT(i)	{ (i) }
	#define atomic_read(v)	((v)->counter)

	static __inline__ int atomic_add_return(int i, atomic_t *v)
	{
		int __i = i;
		__asm__ __volatile__(
			"lock; xaddl %0, %1;"
			:"=r"(i)
			:"m"(v->counter), "0"(i));
		/* __asm__ __volatile__(
			"lock; xaddl %0, %1"
			:"+r" (i), "+m" (v->counter)
			: : "memory"); */
		return i + __i;
	}

	int main (void)
	{
		atomic_t a = ATOMIC_INIT(0);
		int x;

		x = atomic_add_return (1, &a);
		if ((x!=1) || (atomic_read(&a)!=1))
			printf("fail: %i, %i\n", x, atomic_read(&a));
	}

Signed-off-by: Duncan Sands <baldrick@free.fr> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Acked-by: David Howells <dhowells@redhat.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] unwinder: more sanity checks in Dwarf2 unwinderJan Beulich
Tighten the requirements on both input to and output from the Dwarf2 unwinder. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: always enable regparmAdrian Bunk
-mregparm=3 has been enabled by default for some time on i386, and AFAIK there aren't any problems with it left. This patch removes the REGPARM config option and sets -mregparm=3 unconditionally. Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: i386 add Intel BTS cpufeature bit and detection (take 2)Stephane Eranian
Here is a small patch for i386 which adds a cpufeature flag and detection code for Intel's Branch Trace Store (BTS) feature. This feature can be found on Intel P4 and Core 2 processors among others. It can also be used by perfmon. changelog: - add CPU_FEATURE_BTS - add Branch Trace Store detection signed-off-by: stephane eranian <eranian@hpl.hp.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] x86: fix the irqbalance quirk for E7320/E7520/E7525Siddha, Suresh B
Move the irqbalance quirks for E7320/E7520/E7525 (Errata 23 in http://download.intel.com/design/chipsets/specupdt/30304203.pdf) to early quirks. And add a PCI quirk for these platforms to check (which happens very late during the boot) if the APIC routing is indeed set to default flat mode. This fixes the breakage (in x86_64) of this quirk due to cpu hotplug which selects physical mode instead of the logical flat (as needed for this errata workaround). Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: "Li, Shaohua" <shaohua.li@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: introduce the mechanism of disabling cpu hotplug controlSiddha, Suresh B
Add 'enable_cpu_hotplug' flag and when cleared, the hotplug control file ("online") will not be added under /sys/devices/system/cpu/cpuX/ Next patch doing PCI quirks will use this. Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@suse.de> Cc: "Li, Shaohua" <shaohua.li@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: Add support for compilation for Core2Andi Kleen
gcc doesn't support -mtune=core2 yet, but it will soon. Use -mtune=generic or -mtune=i686 as a fallback. TBD: needs benchmarking for INTEL_USERCOPY etc. So far I used the same defaults as MPENTIUMM. Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] paravirt: fix missing pte updateZachary Amsden
The function ptep_get_and_clear uses an atomic instruction sequence to get and clear an active pte. Rather than add such an atomic operator to all virtual machine implementations in paravirt-ops, it is easier to support the raw atomic sequence and use either a trapping writable pagetable approach, or a post-update notification. For the post update notification, we require the pte_update function to be called after the access. Combine the 2-level and 3-level paging operators into one common function which does the post-update notification, and rename the actual atomic sequences to raw_ptep_xxx operators. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@muc.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
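The resulting shape of the common accessor (a sketch; the exact raw helper and notification names in the tree may differ slightly):

	static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
					       unsigned long addr, pte_t *ptep)
	{
		/* raw atomic get-and-clear, shared by the 2- and 3-level cases */
		pte_t pte = raw_ptep_get_and_clear(ptep);

		/* post-update notification: lets a hypervisor resync its
		 * shadow page tables; compiles to nothing on native hardware */
		pte_update(mm, addr, ptep);
		return pte;
	}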
2006-12-07[PATCH] paravirt: fix parameter names in mmu operationsZachary Amsden
Make parameter names match function argument names for the yet to be defined pte_update_defer accessor. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@muc.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] paravirt: Preparatory mmu header movementZachary Amsden
Move header includes for the nopud / nopmd types to the location of the actual pte / pgd type definitions. This allows generic 4-level page type code to be written before the split 2/3 level page table headers are included. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Andi Kleen <ak@muc.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] paravirt: Add MMU virtualization to paravirt_opsRusty Russell
Add the three bare TLB accessor functions to paravirt-ops. Most amusingly, flush_tlb is redefined on SMP, so I can't call the paravirt op flush_tlb. Instead, I chose to indicate the actual flush type, kernel (global) vs. user (non-global). Global in this sense means using the global bit in the page table entry, which makes TLB entries persistent across CR3 reloads, not global as in the SMP sense of invoking remote shootdowns, so the term is confusingly overloaded. AK: folded in fix from Zach for PAE compilation Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] paravirt: Add APIC accessors to paravirt-ops.Rusty Russell
Add APIC accessors to paravirt-ops. Unfortunately, we need two write functions, as some older broken hardware requires workarounds for Pentium APIC errata - this is the purpose of apic_write_atomic. AK: replaced __inline with inline Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] paravirt: Allow selected bug checks to be skippedRusty Russell
Allow selected bug checks to be skipped by paravirt kernels. The two most important are the F00F workaround (which is either done by the hypervisor, or not required), and the 'hlt' instruction check, which can break under some hypervisors. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] paravirt: Add startup infrastructure for paravirtualizationRusty Russell
1) Each hypervisor writes a probe function to detect whether we are running under that hypervisor. paravirt_probe() registers this function. 2) If vmlinux is booted with ring != 0, we call all the probe functions (with registers except %esp intact) in link order: the winner will not return. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Zachary Amsden <zach@vmware.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: cpu_detect extractionRusty Russell
Both lhype and Xen want to call the core of the x86 cpu detect code before calling start_kernel. (extracted from larger patch) AK: folded in start_kernel header patch Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] paravirt: Patch inline replacements for paravirt interceptsRusty Russell
It turns out that the most called ops, by several orders of magnitude, are the interrupt manipulation ops. These are obvious candidates for patching, so mark them up and create infrastructure for it. The method used is that the ops structure has a patch function, which is called for each place which needs to be patched: this returns a number of instructions (the rest are NOP-padded). Usually we can spare a register (%eax) for the binary patched code to use, but in a couple of critical places in entry.S we can't: we make the clobbers explicit at the call site, and manually clobber the allowed registers in debug mode as an extra check. And: Don't abuse CONFIG_DEBUG_KERNEL, add CONFIG_DEBUG_PARAVIRT. And: AK: Fix warnings in x86-64 alternative.c build And: AK: Fix compilation with defconfig And: From: Andrew Morton <akpm@osdl.org> Some binutils versions still like to emit references to __stop_parainstructions and __start_parainstructions. And: AK: Fix warnings about unused variables when PARAVIRT is disabled. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
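A sketch of the patching pass this describes (structure layout, field names and the NOP padding are approximations of the real infrastructure):

	struct paravirt_patch_site {
		u8 *instr;		/* call site to patch             */
		u8 instrtype;		/* which paravirt op lives here   */
		u8 len;			/* bytes available at the site    */
		u16 clobbers;		/* registers the site lets us use */
	};

	static void apply_paravirt(struct paravirt_patch_site *start,
				   struct paravirt_patch_site *end)
	{
		struct paravirt_patch_site *p;

		for (p = start; p < end; p++) {
			/* the backend emits up to p->len bytes of replacement
			 * code and returns how many it actually used ... */
			unsigned int used = paravirt_ops.patch(p->instrtype,
							       p->clobbers,
							       p->instr,
							       p->len);
			/* ... and whatever is left over gets NOP-padded
			 * (the real code uses optimized multi-byte NOPs) */
			memset(p->instr + used, 0x90, p->len - used);
		}
	}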
2006-12-07[PATCH] paravirt: header and stubs for paravirtualisationRusty Russell
Create a paravirt.h header for all the critical operations which need to be replaced with hypervisor calls, and include that instead of defining native operations, when CONFIG_PARAVIRT. This patch does the dumbest possible replacement of paravirtualized instructions: calls through a "paravirt_ops" structure. Currently these are function implementations of native hardware: hypervisors will override the ops structure with their own variants. All the pv-ops functions are declared "fastcall" so that a specific register-based ABI is used, to make inlining assembler easier. And: From: Andy Whitcroft <apw@shadowen.org> The paravirt ops introduce a 'weak' attribute onto memory_setup(). Code ordering leads to the following warnings on x86: arch/i386/kernel/setup.c:651: warning: weak declaration of `memory_setup' after first use results in unspecified behavior Move memory_setup() to avoid this. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Zachary Amsden <zach@vmware.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andy Whitcroft <apw@shadowen.org>
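At its core this is nothing more than a table of fastcall function pointers that native code fills in and a hypervisor may override; a heavily trimmed sketch (field list abbreviated, names approximate):

	struct paravirt_ops {
		const char *name;
		int paravirt_enabled;

		/* a few representative entries; the real struct has many more */
		void (fastcall *cpuid)(unsigned int *eax, unsigned int *ebx,
				       unsigned int *ecx, unsigned int *edx);
		unsigned long (fastcall *save_fl)(void);
		void (fastcall *restore_fl)(unsigned long flags);
		void (fastcall *irq_disable)(void);
		void (fastcall *irq_enable)(void);
		void (fastcall *write_cr3)(unsigned long cr3);
	};

	extern struct paravirt_ops paravirt_ops;

	/* callers go through the table instead of executing the insn directly */
	#define raw_local_irq_disable()	(paravirt_ops.irq_disable())
	#define raw_local_irq_enable()	(paravirt_ops.irq_enable())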
2006-12-07[PATCH] x86: comment magic constants in delay.hPaolo 'Blaisorblade' Giarrusso
For both i386 and x86_64, copy from arch/$ARCH/lib/delay.c comments about the used magic constants, plus a few other niceties. Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Signed-off-by: Andi Kleen <ak@suse.de>

 include/asm-i386/delay.h   | 5 ++++-
 include/asm-x86_64/delay.h | 5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)
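The magic constants being documented come from simple fixed-point scaling; the arithmetic, worked out (the surrounding macro usage is simplified here):

	/*
	 *   2^32 / 10^6 = 4294.967296  -> rounded up to 4295 = 0x10c7  (udelay)
	 *   2^32 / 10^9 = 4.294967296  -> rounded up to 5    = 0x5     (ndelay)
	 *
	 * Pre-scaling microseconds by 0x10c7 lets the delay loop compute
	 *
	 *   loops = (usecs * 0x10c7 * loops_per_jiffy * HZ) >> 32
	 *
	 * with multiplies and a shift only -- no division on the delay path.
	 */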
2006-12-07[PATCH] i386: Move memory map printing and other code to e820.cbibo,mao
This patch moves the e820 memory map printing and memmap boot param parsing functions from setup.c to e820.c, and adds limit_regions and print_memory_map declarations to the header file. Signed-off-by: bibo,mao <bibo.mao@intel.com> Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 152 ++++++++++++++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 153 -----------------------------------------------
 include/asm-i386/e820.h  |   2
 3 files changed, 155 insertions(+), 152 deletions(-)
2006-12-07[PATCH] i386: Move e820/efi memmap walking code to e820.cbibo,mao
This patch moves the e820/efi memmap table walking functions from setup.c to e820.c, and adds the corresponding extern declarations to the header file. Signed-off-by: bibo,mao <bibo.mao@intel.com> Signed-off-by: Andi Kleen <ak@suse.de>

 arch/i386/kernel/e820.c  | 115 +++++++++++++++++++++++++++++++++++++++++
 arch/i386/kernel/setup.c | 118 -----------------------------------------------
 include/asm-i386/e820.h  |   2
 3 files changed, 117 insertions(+), 118 deletions(-)
2006-12-07[PATCH] i386: Move find_max_pfn function to e820.cbibo,mao
Move more code from setup.c into e820.c Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: Use CLFLUSH instead of WBINVD in change_page_attrAndi Kleen
CLFLUSH is a lot faster than WBINVD so try to use that. Signed-off-by: Andi Kleen <ak@suse.de>
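A sketch of the per-page flush this makes possible (the helper name is invented; the real code lives in arch/i386/mm/pageattr.c):

	static void cache_flush_page(void *address)
	{
		int i;

		/* flush one 4K page a cache line at a time, instead of
		 * dumping the entire cache hierarchy with WBINVD */
		for (i = 0; i < PAGE_SIZE; i += boot_cpu_data.x86_clflush_size)
			asm volatile("clflush (%0)"
				     :: "r" (address + i) : "memory");
	}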
2006-12-07[PATCH] i386: Retrieve CLFLUSH size from CPUIDAndi Kleen
Also report it in /proc/cpuinfo, similar to x86-64. Needed for a follow-on patch. Signed-off-by: Andi Kleen <ak@suse.de>
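The detection is a single CPUID leaf; a sketch (the cpuinfo field name follows the existing x86-64 convention):

	if (c->cpuid_level >= 1) {
		unsigned int eax, ebx, ecx, edx;

		cpuid(1, &eax, &ebx, &ecx, &edx);
		/* CPUID.01H:EBX[15:8] is the CLFLUSH line size in 8-byte
		 * units, e.g. 8 * 8 = 64-byte lines on P4/Core; EDX bit 19
		 * says CLFLUSH is actually supported */
		if (edx & (1 << 19))
			c->x86_clflush_size = ((ebx >> 8) & 0xff) * 8;
	}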
2006-12-07[PATCH] i386: Implement CONFIG_PHYSICAL_ALIGNVivek Goyal
o Now CONFIG_PHYSICAL_START is being replaced with CONFIG_PHYSICAL_ALIGN. Hardcoding the kernel physical start value creates a problem in the relocatable kernel context due to boot loader limitations. For example, if somebody compiles a relocatable kernel to be run from address 4MB, but grub loads it at physical address 1MB, the kernel thinks it is relocatable and should run from the address it has been loaded at. So somebody wanting to run the kernel from a 4MB-aligned location (for improved performance reasons) can't do that. o Hence, Eric proposed that CONFIG_PHYSICAL_ALIGN probably makes more sense in the relocatable kernel context. At run time the kernel will move itself to a physical address which meets the user-specified alignment restrictions. Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
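The run-time part is then just rounding the load address up to the configured alignment; an illustrative helper:

	static unsigned long align_kernel_load_addr(unsigned long load_addr)
	{
		/* round up to the next CONFIG_PHYSICAL_ALIGN boundary; the
		 * alignment must be a power of two.  E.g. loaded at 1 MB with
		 * a 4 MB alignment, the kernel relocates itself to 4 MB. */
		return (load_addr + CONFIG_PHYSICAL_ALIGN - 1)
				& ~(CONFIG_PHYSICAL_ALIGN - 1);
	}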
2006-12-07[PATCH] i386: CONFIG_PHYSICAL_START cleanupEric W. Biederman
Defining __PHYSICAL_START and __KERNEL_START in asm-i386/page.h works, but it triggers a full kernel rebuild for the silliest of reasons. This modifies the users to directly use CONFIG_PHYSICAL_START and linux/config.h, which prevents the full-rebuild problem and makes the code much more maintainer- and, hopefully, user-friendly. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: define __pa_symbol()Eric W. Biederman
On x86_64 we have to be careful with calculating the physical address of kernel symbols, both because of compiler oddities and because the symbols live in a different range of the virtual address space. Having a definition of __pa_symbol that works on both x86_64 and i386 simplifies writing code with these kinds of dependencies that works on both architectures. So this patch adds the trivial i386 __pa_symbol definition. Added assembly magic similar to RELOC_HIDE as suggested by Andi Kleen. Just picked it up from x86_64. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com> Signed-off-by: Andi Kleen <ak@suse.de>
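On i386 the added definition is essentially a pass-through to __pa(), with RELOC_HIDE keeping the compiler from reasoning about the symbol's address (a sketch following the commit's description):

	/* RELOC_HIDE(ptr, off) returns ptr + off through inline asm, hiding
	 * the arithmetic from the optimizer; on i386 the offset is simply 0 */
	#define __pa_symbol(x)	__pa(RELOC_HIDE((unsigned long)(x), 0))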
2006-12-07[PATCH] i386: PDA: Fix math emulator for new pt_regsAndi Kleen
This patch fixes the math emulator, which had not been adjusted to match the changed struct pt_regs. AK: extracted from larger patch by Jeremy. Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: Store the interrupt regs pointer in the PDAJeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: Implement "current" with the PDAJeremy Fitzhardinge
Use the pcurrent field in the PDA to implement the "current" macro. This ends up compiling down to a single instruction to get the current task. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: Implement smp_processor_id() with the PDAJeremy Fitzhardinge
Use the cpu_number in the PDA to implement raw_smp_processor_id. This is a little simpler than using thread_info, though the cpu field in thread_info cannot be removed since it is used for things other than getting the current CPU in common code. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
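With the PDA in place, both of these accessors collapse to a single %gs-relative load; a sketch assuming the read_pda() helper described in the pda.h entry further down this log:

	/* current: one instruction, a %gs-relative load of pda.pcurrent */
	static __always_inline struct task_struct *get_current(void)
	{
		return read_pda(pcurrent);
	}
	#define current	get_current()

	/* raw_smp_processor_id(): likewise just pda.cpu_number */
	#define raw_smp_processor_id()	(read_pda(cpu_number))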
2006-12-07[PATCH] i386: Update sys_vm86 to cope with changed pt_regs and %gs usageJeremy Fitzhardinge
sys_vm86 uses a struct kernel_vm86_regs, which is identical to pt_regs, but adds an extra space for all the segment registers. Previously this structure was completely independent, so changes in pt_regs had to be reflected in kernel_vm86_regs. This changes just embeds pt_regs in kernel_vm86_regs, and makes the appropriate changes to vm86.c to deal with the new naming. Also, since %gs is dealt with differently in the kernel, this change adjusts vm86.c to reflect this. While making these changes, I also cleaned up some frankly bizarre code which was added when auditing was added to sys_vm86. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Jason Baron <jbaron@redhat.com> Cc: Chris Wright <chrisw@sous-sol.org> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: Fix places where using %gs changes the usermode ABIJeremy Fitzhardinge
There are a few places where the change in struct pt_regs and the use of %gs affect the userspace ABI. These are primarily debugging interfaces where thread state can be inspected or extracted. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: Use %gs as the PDA base-segment in the kernelJeremy Fitzhardinge
This patch is the meat of the PDA change. This patch makes several related changes: 1: Most significantly, %gs is now used in the kernel. This means that on entry, the old value of %gs is saved away, and it is reloaded with __KERNEL_PDA. 2: entry.S constructs the stack in the shape of struct pt_regs, and this is passed around the kernel so that the process's saved register state can be accessed. Unfortunately struct pt_regs doesn't currently have space for %gs (or %fs). This patch extends pt_regs to add space for gs (no space is allocated for %fs, since it won't be used, and it would just complicate the code in entry.S to work around the space). 3: Because %gs is now saved on the stack like %ds, %es and the integer registers, there are a number of places where it no longer needs to be handled specially; namely context switch, and saving/restoring the register state in a signal context. 4: And since kernel threads run in kernel space and call normal kernel code, they need to be created with their %gs == __KERNEL_PDA. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: Initialize the per-CPU data areaJeremy Fitzhardinge
When a CPU is brought up, a PDA and GDT are allocated for it. The GDT's __KERNEL_PDA entry is pointed to the allocated PDA memory, so that all references using this segment descriptor will refer to the PDA. This patch rearranges CPU initialization a bit, so that the GDT/PDA are set up as early as possible in cpu_init(). Also for secondary CPUs, GDT+PDA are preallocated and initialized so all the secondary CPU needs to do is set up the ldt and load %gs. This will be important once smp_processor_id() and current use the PDA. In all cases, the PDA is set up in head.S, before a CPU starts running C code, so the PDA is always available. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Cc: James Bottomley <James.Bottomley@SteelEye.com> Cc: Matt Tolentino <matthew.e.tolentino@intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org>
2006-12-07[PATCH] i386: Basic definitions for i386-pdaJeremy Fitzhardinge
This patch has the basic definitions of struct i386_pda, and the segment selector in the GDT. asm-i386/pda.h is more or less a direct copy of asm-x86_64/pda.h. The most interesting difference is the use of _proxy_pda, which is used to give gcc a model for the actual memory operations on the real pda structure. No actual reference is ever made to _proxy_pda, so it is never defined. Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: Andi Kleen <ak@suse.de> Cc: Chuck Ebbert <76306.1226@compuserve.com> Cc: Zachary Amsden <zach@vmware.com> Cc: Jan Beulich <jbeulich@novell.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org>
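A trimmed sketch of what such a pda.h looks like (field list abbreviated; the accessor is modelled on asm-x86_64/pda.h, as the commit notes, and simplified to the load-only case):

	struct i386_pda {
		struct i386_pda *_pda;		/* pointer to self                */
		int cpu_number;			/* backs raw_smp_processor_id()   */
		struct task_struct *pcurrent;	/* backs current                  */
		struct pt_regs *irq_regs;	/* saved interrupt register frame */
	};

	/* never actually defined anywhere: it only gives gcc a memory "model"
	 * of the PDA so the %gs-relative accesses are tracked correctly */
	extern struct i386_pda _proxy_pda;

	#define read_pda(field)						\
	({								\
		typeof(_proxy_pda.field) ret__;				\
		asm("mov %%gs:%c1, %0"					\
		    : "=r" (ret__)					\
		    : "i" (offsetof(struct i386_pda, field)),		\
		      "m" (_proxy_pda.field));				\
		ret__;							\
	})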
2006-12-07[PATCH] i386: add Intel Core related PMU MSRsStephane Eranian
- add Intel Precise Event-Based Sampling (PEBS) related MSRs - add Intel Debug Store (DS) save area related MSRs - add Intel Core microarchitecture performance counter MSRs Signed-off-by: stephane eranian <eranian@hpl.hp.com> Signed-off-by: Andi Kleen <ak@suse.de>
2006-12-07[PATCH] i386: add sleazy FPU optimizationChuck Ebbert
i386 port of the sLeAZY-fpu feature. Chuck reports that this gives him a +/- 0.4% improvement on his simple benchmark. x86_64 description follows: Right now the kernel on x86-64 has a 100% lazy fpu behavior: after *every* context switch a trap is taken for the first FPU use to restore the FPU context lazily. This is of course great for applications that have very sporadic or no FPU use (since then you avoid doing the expensive save/restore all the time). However for very frequent FPU users... you take an extra trap every context switch. The patch below adds a simple heuristic to this code: after 5 consecutive context switches of FPU use, the lazy behavior is disabled and the context gets restored every context switch. If the app indeed uses the FPU, the trap is avoided. (The chance of the 6th time slice using the FPU, after the previous 5 have done so, is obviously quite high.) After 256 switches, this is reset and lazy behavior is resumed (until there are 5 consecutive ones again). The reason for this is to give apps that do longer bursts of FPU use the lazy behavior back after some time. Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com> Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Andi Kleen <ak@suse.de>
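The heuristic as it looks at context-switch time (a sketch; the counter field and helper names are illustrative, not the exact i386 code):

	static inline void switch_fpu(struct task_struct *prev_p,
				      struct task_struct *next_p)
	{
		/* save prev's FPU state if it used the FPU this timeslice;
		 * this is also where the per-task fpu_counter is bumped or
		 * reset */
		__unlazy_fpu(prev_p);

		/* after 5 consecutive FPU-using timeslices, stop being lazy
		 * and restore the state eagerly, so the first FPU instruction
		 * of the next timeslice takes no device-not-available trap */
		if (next_p->fpu_counter > 5)
			math_state_restore();
	}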