path: root/arch/x86/kernel
2008-01-30  x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection  (Andi Kleen)
Need this in the next patch in time_init and that happens early. This includes a minor fix on i386 where early_intel_workarounds() [which is now called early_init_intel] really executes early as the comments say. Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: fix sched_clock()  (Ingo Molnar)
[ andi@firstfloor.org: build fix ] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: remove get_cycles_sync  (Andi Kleen)
rdtsc is now speculation-safe, so no need for the sync variants of the APIs. [ mingo@elte.hu: removed the nsec_barrier() complication. ] Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: read_tsc sync  (Ingo Molnar)
make native_read_tsc() always non-speculative. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: map vsyscalls early enough  (Ingo Molnar)
map vsyscalls early enough. This is important if a __vsyscall_fn function is used by other kernel code too. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: move native_read_tsc() offline  (Ingo Molnar)
move native_read_tsc() offline. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: lfence fix  (Ingo Molnar)
LFENCE is available on XMM2 or higher Intel CPUs - not XMM or higher. This caused boot failures on XMM1 && !XMM2 capable CPUs. Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: Implement support to synchronize RDTSC with LFENCE on Intel CPUs  (Andi Kleen)
According to Intel, RDTSC can always be synchronized with LFENCE on all current CPUs. Implement the necessary CPUID bit for that. It is not yet clear whether that is true for all future CPUs too, but if there is another way, the kernel can always be updated. Cc: asit.k.mallick@intel.com Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: implement support to synchronize RDTSC through MFENCE on AMD CPUs  (Andi Kleen)
According to AMD, RDTSC can be synchronized through MFENCE. Implement the necessary CPUID bit for that. Cc: andreas.herrmann3@amd.com Signed-off-by: Andi Kleen <ak@suse.de> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
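The two entries above only add the CPUID feature bits; the actual serialization is a fence instruction issued immediately before RDTSC. Below is a minimal user-space sketch of that idea, not the kernel's implementation (the kernel patches the fence in through the alternatives mechanism keyed on these bits). It assumes GCC-style inline asm and an SSE2-capable x86 CPU, and hard-codes LFENCE:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a fenced TSC read in the spirit of the patches above. */
static inline uint64_t fenced_rdtsc(void)
{
	uint32_t lo, hi;

	/* The fence keeps the TSC read from being reordered with earlier work. */
	asm volatile("lfence; rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	uint64_t t1 = fenced_rdtsc();
	uint64_t t2 = fenced_rdtsc();

	printf("delta: %llu cycles\n", (unsigned long long)(t2 - t1));
	return 0;
}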
2008-01-30  x86: clean up apic_32.c, take 2  (Hiroshi Shimamoto)
More white space and coding style clean up. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: clean up apic_32/64.c  (Hiroshi Shimamoto)
White space and coding style clean up. Make apic_32/64.c similar. Signed-off-by: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: code clarification patch to Kprobes arch code  (Quentin Barnes)
When developing the Kprobes arch code for ARM, I ran across some code found in x86 and s390 Kprobes arch code which I didn't consider as good as it could be. Once I figured out what the code was doing, I changed the code for ARM Kprobes to work the way I felt was more appropriate. I've tested the code this way in ARM for about a year and would like to push the same change to the other affected architectures.

The code in question is in kprobe_exceptions_notify(), which does:

====
	/* kprobe_running() needs smp_processor_id() */
	preempt_disable();
	if (kprobe_running() &&
	    kprobe_fault_handler(args->regs, args->trapnr))
		ret = NOTIFY_STOP;
	preempt_enable();
====

For the moment, ignore the code having the preempt_disable()/preempt_enable() pair in it. The problem is that kprobe_running() needs to call smp_processor_id(), which will assert if preemption is enabled. That sanity check by smp_processor_id() makes perfect sense since calling it with preemption enabled would return an unreliable result.

But the function kprobe_exceptions_notify() can be called from a context where preemption could be enabled. If that happens, the assertion in smp_processor_id() fires and we're dead. So what the original author did (speculation on my part!) is put in the preempt_disable()/preempt_enable() pair simply to defeat the check.

Once I figured out what was going on, I considered this an inappropriate approach. If kprobe_exceptions_notify() is called from a preemptible context, we can't be in a kprobe processing context at that time anyway, since kprobes requires preemption to already be disabled. So just check whether preemption is enabled, and if so, bail out before ever calling kprobe_running(). I wrote the ARM kprobe code like this:

====
	/* To be potentially processing a kprobe fault and to
	 * trust the result from kprobe_running(), we have to
	 * be non-preemptible. */
	if (!preemptible() && kprobe_running() &&
	    kprobe_fault_handler(args->regs, args->trapnr))
		ret = NOTIFY_STOP;
====

The above code has been working fine for ARM Kprobes for a year. So I changed the x86 code (2.6.24-rc6) to be the same way and ran the Systemtap tests on that kernel. As on ARM, Systemtap on x86 comes up with the same test results either way, so it's a neutral external functional change (as expected).

This issue has been discussed previously on the linux-arm-kernel and Systemtap mailing lists. Pointers to the two discussions:

http://lists.arm.linux.org.uk/lurker/message/20071219.223225.1f5c2a5e.en.html
http://sourceware.org/ml/systemtap/2007-q1/msg00251.html

Signed-off-by: Quentin Barnes <qbarnes@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
2008-01-30  x86: get rid of checkpatch.pl complaints on apm_32.c  (Cyrill Gorcunov)
This patch eliminates most of the code-style errors discovered by checkpatch.pl on arch/x86/kernel/apm_32.c. No code changed:

   text    data     bss     dec     hex  filename
  12142    1837      84   14063    36ef  apm_32.o.before
  12142    1837      84   14063    36ef  apm_32.o.after

md5:
2676b881ad55e387da4a995e8b9ee372  apm_32.o.before.asm
2676b881ad55e387da4a995e8b9ee372  apm_32.o.after.asm

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: coding style cleanup for kernel/bootflag.c  (Cyrill Gorcunov)
This patch eliminates checkpatch.pl complaints on bootflag.c. No code changed:

   text    data     bss     dec     hex  filename
    321       8       0     329     149  bootflag.o.before
    321       8       0     329     149  bootflag.o.after

md5:
9c1b474bcf25ddc1724a29c19880043f  bootflag.o.before.asm
9c1b474bcf25ddc1724a29c19880043f  bootflag.o.after.asm

Signed-off-by: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: unify arch/x86/kernel/Makefile(s)  (Sam Ravnborg)
Combine the 32 and 64 bit specific Makefiles in one file. While doing so, link order was (almost) preserved on 32 bit, but on 64 bit link order changed a lot. The patch was checked with defconfig + allyesconfig builds; the same .o files were linked in these configurations. To keep the Makefiles readable, a few Kconfig symbols were added/modified, and it was checked that they were not used anywhere else. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: kprobes change kprobe_handler flow  (Harvey Harrison)
Make the control flow of kprobe_handler more obvious. Collapse the separate if blocks/gotos into if/else blocks; this unifies the duplicated check for a breakpoint-instruction race with another cpu. Create two jump targets:
  preempt_out: re-enables preemption before returning ret
  out: only returns ret
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  arch/x86/kernel/io_apic_{64,32}.c: use time_before  (Julia Lawall)
The functions time_before, time_before_eq, time_after, and time_after_eq are more robust for comparing jiffies against other values.

A simplified version of the semantic patch making this change is as follows: (http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@ change_compare_np @
expression E;
@@

(
- jiffies <= E
+ time_before_eq(jiffies,E)
|
- jiffies >= E
+ time_after_eq(jiffies,E)
|
- jiffies < E
+ time_before(jiffies,E)
|
- jiffies > E
+ time_after(jiffies,E)
)

@ include depends on change_compare_np @
@@

#include <linux/jiffies.h>

@ no_include depends on !include && change_compare_np @
@@

  #include <linux/...>
+ #include <linux/jiffies.h>
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  arch/x86/kernel/apm_32.c: use time_before, time_before_eq,  (Julia Lawall)
The functions time_before, time_before_eq, time_after, and time_after_eq are more robust for comparing jiffies against other values.

A simplified version of the semantic patch making this change is as follows: (http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@ change_compare_np @
expression E;
@@

(
- jiffies <= E
+ time_before_eq(jiffies,E)
|
- jiffies >= E
+ time_after_eq(jiffies,E)
|
- jiffies < E
+ time_before(jiffies,E)
|
- jiffies > E
+ time_after(jiffies,E)
)

@ include depends on change_compare_np @
@@

#include <linux/jiffies.h>

@ no_include depends on !include && change_compare_np @
@@

  #include <linux/...>
+ #include <linux/jiffies.h>
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
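For readers outside the kernel, here is a standalone illustration of why the time_before() family in the two entries above is more robust than a raw comparison: jiffies is an unsigned counter that wraps, and the macros compare the signed difference instead. The fixed 32-bit types and the time_before32 name are just for this sketch; the kernel's macros in include/linux/jiffies.h operate on unsigned long.

#include <stdint.h>
#include <stdio.h>

/* Sketch of the signed-difference trick used by the jiffies comparison macros. */
#define time_before32(a, b)  ((int32_t)((a) - (b)) < 0)

int main(void)
{
	uint32_t now = 0xfffffff0u;        /* counter about to wrap */
	uint32_t timeout = now + 0x20u;    /* wraps around to 0x00000010 */

	/* Raw comparison is wrong across the wrap: 0xfffffff0 < 0x10 is false. */
	printf("raw now < timeout: %d\n", now < timeout);
	/* The signed delta still correctly says "now is before timeout". */
	printf("time_before32(now, timeout): %d\n", time_before32(now, timeout));
	return 0;
}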
2008-01-30  x86: kprobes remove fix_riprel #ifdef  (Harvey Harrison)
Move the #ifdef around the function definition into the function and unconditionally return on X86_32. Saves an ifdef at the one callsite. [ mingo@elte.hu: minor cleanup. ] Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: introduce REX prefix helper for kprobes  (Harvey Harrison)
Fold some small ifdefs into a helper function. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: arch/x86/kernel/cpu/mcheck/k7.c checkpatch fixes  (Andrew Morton)

#88: FILE: arch/x86/kernel/cpu/mcheck/k7.c:34:
+ rdmsr(MSR_IA32_MC0_STATUS+i*4,low, high);
                              ^
ERROR: need space after that ',' (ctx:VxV)

#142: FILE: arch/x86/kernel/cpu/mcheck/p4.c:170:
+ rdmsr(MSR_IA32_MC0_STATUS+i*4,low, high);
                              ^
ERROR: need space after that ',' (ctx:VxV)

#180: FILE: arch/x86/kernel/cpu/mcheck/p6.c:34:
+ rdmsr(MSR_IA32_MC0_STATUS+i*4,low, high);
                              ^

total: 3 errors, 0 warnings, 114 lines checked

Your patch has style problems, please review. If any of these errors are false positives report them to the maintainer, see CHECKPATCH in MAINTAINERS.

Please run checkpatch prior to sending patches

Cc: Min Zhang <mzhang@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: arch/x86/kernel/cpu/mcheck/ checkpatch fixes  (Andrew Morton)

#40: FILE: arch/x86/kernel/cpu/mcheck/k7.c:46:
+ snprintf (misc, 20, "[%08x%08x]", ahigh, alow);
WARNING: line over 80 characters

#45: FILE: arch/x86/kernel/cpu/mcheck/k7.c:50:
+ snprintf (addr, 24, " at %08x%08x", ahigh, alow);
WARNING: no space between function name and open parenthesis '('

#45: FILE: arch/x86/kernel/cpu/mcheck/k7.c:50:
+ snprintf (addr, 24, " at %08x%08x", ahigh, alow);
WARNING: no space between function name and open parenthesis '('

#48: FILE: arch/x86/kernel/cpu/mcheck/k7.c:52:
+ printk (KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",
WARNING: no space between function name and open parenthesis '('

#65: FILE: arch/x86/kernel/cpu/mcheck/p4.c:161:
+ printk (KERN_DEBUG "CPU %d: EIP: %08x EFLAGS: %08x\n"
WARNING: no space between function name and open parenthesis '('

#88: FILE: arch/x86/kernel/cpu/mcheck/p4.c:182:
+ snprintf (misc, 20, "[%08x%08x]", ahigh, alow);
WARNING: line over 80 characters

#93: FILE: arch/x86/kernel/cpu/mcheck/p4.c:186:
+ snprintf (addr, 24, " at %08x%08x", ahigh, alow);
WARNING: no space between function name and open parenthesis '('

#93: FILE: arch/x86/kernel/cpu/mcheck/p4.c:186:
+ snprintf (addr, 24, " at %08x%08x", ahigh, alow);
WARNING: no space between function name and open parenthesis '('

#96: FILE: arch/x86/kernel/cpu/mcheck/p4.c:188:
+ printk (KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",
WARNING: no space between function name and open parenthesis '('

#120: FILE: arch/x86/kernel/cpu/mcheck/p6.c:46:
+ snprintf (misc, 20, "[%08x%08x]", ahigh, alow);
WARNING: line over 80 characters

#125: FILE: arch/x86/kernel/cpu/mcheck/p6.c:50:
+ snprintf (addr, 24, " at %08x%08x", ahigh, alow);
WARNING: no space between function name and open parenthesis '('

#125: FILE: arch/x86/kernel/cpu/mcheck/p6.c:50:
+ snprintf (addr, 24, " at %08x%08x", ahigh, alow);
WARNING: no space between function name and open parenthesis '('

#128: FILE: arch/x86/kernel/cpu/mcheck/p6.c:52:
+ printk (KERN_EMERG "CPU %d: Bank %d: %08x%08x%s%s\n",

total: 0 errors, 13 warnings, 100 lines checked

Your patch has style problems, please review. If any of these errors are false positives report them to the maintainer, see CHECKPATCH in MAINTAINERS.

Please run checkpatch prior to sending patches

Cc: Min Zhang <mzhang@mvista.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  arch/x86/kernel/cpu/mcheck/p4.c: cleanups  (Min Zhang)
On SMP, the machine check exception dispatches all logical processors within a physical package to the machine-check exception handler, so the printk within each handler outputs concurrently and makes the output unreadable. Refer to the Intel system programming guide, Part 1, Section 7.8.5: http://developer.intel.com/design/processor/manuals/253668.pdf Signed-off-by: Min Zhang <mzhang@mvista.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: 32-bit EFI runtime service support: fixes in sync with 64-bit support  (Huang, Ying)
Update the 32-bit EFI runtime service support according to fixes of the x86_64 support:
- Delete efi_rt_lock because it is only used during early system boot, before SMP is initialized, so no locking is needed.
- Change local_flush_tlb() to __flush_tlb_all() to flush global page mappings.
- Clean up includes.
- Revise the Kconfig description.
- Enable the noefi kernel parameter on i386.
Signed-off-by: Huang Ying <ying.huang@intel.com> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  replace x86_read/write_per_cpu with a common function.  (Glauber de Oliveira Costa)
x86_read_per_cpu() and its write counterpart are not present on x86_64. So in this patch, we replace them with __get_cpu_var(), which is present on both. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: patching functions on 64-bit  (Glauber de Oliveira Costa)
Like i386, x86_64 also needs to include its own patching functions. (Well, if you're not in a hurry, and don't care about speed, you don't really _need_ them ;-)) So here they are. Not much different in essence from i386. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: move patching code to arch-specific file.  (Glauber de Oliveira Costa)
The core patching code for paravirt is sufficiently different between i386 and x86_64 that we move it to architecture-specific files. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: replace privileged instructions with paravirt macros  (Glauber de Oliveira Costa)
The assembly code in entry_64.S issues a bunch of privileged instructions, like cli, sti, swapgs, and others. Paravirt guests are forbidden to do so, and we then replace them with macros that will do the right thing. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: adds paravirt hook for swapgs  (Glauber de Oliveira Costa)
This patch adds a paravirt hook for the swapgs operation, which is a privileged operation on x86_64. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: provide paravirtualized hook for rdtscp  (Glauber de Oliveira Costa)
This patch adds a field in pv_cpu_ops for a paravirtualized hook for rdtscp, needed for x86_64. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
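To show the general shape of such a hook without quoting kernel headers, here is a minimal user-space sketch of the paravirt ops-table pattern: an ops struct holds a function pointer that defaults to a "native" implementation, which a hypervisor could replace. The struct name, the read_tscp field, and the stub values below are assumptions made for this sketch, not the kernel's actual pv_cpu_ops definition.

#include <stdint.h>
#include <stdio.h>

/* Sketch of an ops table with one replaceable hook. */
struct pv_cpu_ops_sketch {
	uint64_t (*read_tscp)(unsigned int *aux);
};

/* "Native" implementation: stands in for what bare hardware would do. */
static uint64_t native_read_tscp_stub(unsigned int *aux)
{
	*aux = 0;            /* TSC_AUX would come from ECX after RDTSCP */
	return 123456789ULL; /* TSC value would come from EDX:EAX */
}

/* A hypervisor could install its own function here instead. */
static struct pv_cpu_ops_sketch pv_cpu_ops_sketch = {
	.read_tscp = native_read_tscp_stub,
};

int main(void)
{
	unsigned int aux;
	uint64_t tsc = pv_cpu_ops_sketch.read_tscp(&aux);

	printf("tsc=%llu aux=%u\n", (unsigned long long)tsc, aux);
	return 0;
}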
2008-01-30  x86: change paravirt_32.c name  (Glauber de Oliveira Costa)
This patch renames paravirt_32.c to paravirt.c. The goal is to have paravirt support in x86_64, so we do it in a common file. Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86, ptrace: add buffer size checks  (Markus Metzger)
Pass the buffer size for (most) ptrace commands that pass user-allocated buffers and check that size before accessing the buffer. Unfortunately, PTRACE_BTS_GET already uses all 4 parameters. Commands that access user buffers return the number of bytes or records read or written. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86, ptrace: support 32bit-cross-64bit BTS recording  (Markus Metzger)
Support BTS recording of 32bit and 64bit tasks from 32bit or 64bit tasks. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86, ptrace: rlimit BTS buffer allocation  (Markus Metzger)
Check the rlimit of the tracing task for total and locked memory when allocating the BTS buffer. Signed-off-by: Markus Metzger <markus.t.metzger@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: move deeply indented code to reenter_kprobe  (Masami Hiramatsu)
Move some deeply indented code related to re-entrance processing from kprobe_handler() to reenter_kprobe(). Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Jim Keniston <jkenisto@us.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: add reenter_kprobe helper  (Harvey Harrison)
[ mhiramat@redhat.com: updated it to latest x86.git ] Factor common X86_32, X86_64 kprobe reenter logic from deeply indented section to helper function. Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Jim Keniston <jkenisto@us.ibm.com>
2008-01-30  x86: fix kprobe_handler reenable preemption  (Masami Hiramatsu)
Fix a preemption bug in kprobe_handler(). It has to call preempt_enable() before returning. Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: voluntary leave_mm before entering ACPI C3  (Venki Pallipadi)
Avoid TLB flush IPIs during C3 states by a voluntary leave_mm() before entering C3. The performance impact of a TLB flush on C3 should not be significant with respect to C3 wakeup latency. Also, CPUs tend to flush the TLB in hardware while in C3 anyway. On an 8 logical CPU system, running make -j2, the number of tlbflush IPIs goes down from 40 per second to ~0. The total number of interrupts during the run of this workload was ~1200 per second, which makes it ~3% savings in wakeups. There was no measurable performance or power impact, however. [ akpm@linux-foundation.org: symbol export fixes. ] Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 ptrace generic requests  (Roland McGrath)
This removes duplicated code by calling the generic ptrace_request and compat_ptrace_request functions for the things they already handle. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 core dump TLS  (Roland McGrath)
This makes ELF core dumps of 32-bit processes include a new note type NT_386_TLS (0x200) giving the contents of the TLS slots in struct user_desc format. This lets post mortem examination figure out what the segment registers mean like the debugger does with get_thread_area on a live process. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
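As a rough, self-contained illustration of what such a note looks like, the sketch below fills in an ELF note header for NT_386_TLS over an array of user_desc-sized slots. The struct tls_user_desc layout, the assumed "LINUX" owner string, and all values are made up for this sketch and are not taken from the kernel's core-dump writer.

#include <elf.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#ifndef NT_386_TLS
#define NT_386_TLS 0x200   /* note type from the commit text */
#endif

struct tls_user_desc {     /* 16-byte stand-in for struct user_desc */
	uint32_t entry_number;
	uint32_t base_addr;
	uint32_t limit;
	uint32_t flags;    /* packed bitfields in the real struct */
};

int main(void)
{
	struct tls_user_desc tls[3];
	Elf32_Nhdr nhdr;

	memset(tls, 0, sizeof(tls));
	tls[0].entry_number = 6;          /* first GDT TLS slot on 32-bit x86 */
	tls[0].base_addr = 0xb7f00000;    /* made-up segment base */

	nhdr.n_namesz = sizeof("LINUX");  /* assumed note owner, NUL included */
	nhdr.n_descsz = sizeof(tls);      /* one user_desc per TLS slot */
	nhdr.n_type   = NT_386_TLS;

	printf("note type %#x, desc %u bytes (%zu slots)\n",
	       nhdr.n_type, nhdr.n_descsz, sizeof(tls) / sizeof(tls[0]));
	return 0;
}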
2008-01-30  x86: x86 user_regset cleanup  (Roland McGrath)
This removes a bunch of dead code that is no longer needed now that the user_regset interfaces are being used for all these jobs. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 ptrace user_regset  (Roland McGrath)
This cleans up the PTRACE_*REGS* request code so each one is just a simple call to copy_regset_to_user or copy_regset_from_user. The ptrace layouts already match the user_regset formats (core dump formats). Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 user_regset_view  (Roland McGrath)
This defines task_user_regset_view and the tables describing the x86 user_regset layouts for 32 and 64. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
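For readers unfamiliar with the regset interface, here is a toy user-space rendering of the table-driven idea behind it: each register set is described once (note type, size, copy-out callback), and both ptrace-style requests and core-dump writers can be driven from the same table. All names, values, and the single-entry "view" below are a simplified sketch, not the kernel's actual regset.h interface.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_regs { uint64_t ip, sp, flags; };

/* One descriptor per register set the target exposes. */
struct toy_regset {
	unsigned int core_note_type;         /* e.g. 1 for NT_PRSTATUS */
	size_t size;                         /* bytes in this set */
	int (*get)(void *buf, size_t size);  /* copy the registers out */
};

static int toy_get_gregs(void *buf, size_t size)
{
	/* Made-up register values, standing in for a traced task's state. */
	struct toy_regs regs = { .ip = 0x401000, .sp = 0x7ffcd000, .flags = 0x246 };

	if (size < sizeof(regs))
		return -1;
	memcpy(buf, &regs, sizeof(regs));
	return 0;
}

/* The "view": a table describing every register set for this target. */
static const struct toy_regset toy_view[] = {
	{ .core_note_type = 1, .size = sizeof(struct toy_regs), .get = toy_get_gregs },
};

int main(void)
{
	struct toy_regs out;

	if (toy_view[0].get(&out, sizeof(out)) == 0)
		printf("ip=%#llx sp=%#llx flags=%#llx\n",
		       (unsigned long long)out.ip,
		       (unsigned long long)out.sp,
		       (unsigned long long)out.flags);
	return 0;
}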
2008-01-30  x86: x86 user_regset general regs  (Roland McGrath)
This adds accessor functions in the user_regset style for the general registers (struct user_regs_struct). Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 user_regset TLS  (Roland McGrath)
This adds accessor functions in the user_regset style for the TLS data. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 TLS desc_struct cleanup  (Roland McGrath)
This cleans up the TLS code to use struct desc_struct and to separate the encoding and installation magic from the interface wrappers. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 i387 cleanup  (Roland McGrath)
This removes all the old code that is no longer used after the i387 unification and cleanup. The i387_64.h is renamed to i387.h with no changes, but since it replaces the nonempty one-line stub i387.h it looks like a big diff and not a rename. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 i387 user_regset  (Roland McGrath)
This revamps the i387 code to be shared across 32-bit, 64-bit, and 32-on-64. It does so by consolidating the code in one place based on the user_regset accessor interfaces. This switches 32-bit to using the i387_64.h header and 64-bit to using the i387.c that was previously i387_32.c, but that's what took the least cleanup in each file. Here i387.h is stubbed to always include i387_64.h rather than renaming the file, to keep this diff smaller and easier to read. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: i387 renaming  (Roland McGrath)
This renames arch/x86/kernel/{i387_32.c => i387.c}. This is a pure renaming, but paves the way for merging the 32-bit and 64-bit versions of this code. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-01-30  x86: x86 i387 header cleanup  (Roland McGrath)
This moves some code into asm-x86/i387_64.h in preparation for unifying this code between 32 and 64. The 32-bit versions of some things are copied in, with some existing names changed to match the 32-bit names and share code. For 64, save_i387 is moved into an inline from i387_64.c; this matches restore_i387, which is already an inline, and makes sense since there is exactly one caller (in signal_64.c). The save_i387 function could use more cosmetic cleanup, but it is just moved verbatim in this patch. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>