path: root/arch/x86/kernel
2010-08-02  x86: Detect whether we should use Xen SWIOTLB.  (Konrad Rzeszutek Wilk)
It is paramount that we call pci_xen_swiotlb_detect before pci_swiotlb_detect as both implementations use the 'swiotlb' and 'swiotlb_force' flags. pci_xen_swiotlb_detect() inhibits the swiotlb_force and swiotlb flags so that the native SWIOTLB implementation is not enabled when running under Xen. [since v1 changed two Cc's to Acked-by] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Acked-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> [http://lkml.org/lkml/2010/7/27/374] Cc: Albert Herranz <albert_herranz@yahoo.es> Cc: Ian Campbell <Ian.Campbell@citrix.com> Cc: Thomas Gleixner <tglx@linutronix.de> Acked-by: "H. Peter Anvin" <hpa@zytor.com> [conditional http://lkml.org/lkml/2010/8/2/324] Cc: x86@kernel.org Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
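A minimal sketch of the required ordering in the DMA setup path (the placement of the calls is illustrative; only the ordering is the point here):

	/* Xen detection must run first: under Xen it clears swiotlb and
	 * swiotlb_force so the native detection below does not also
	 * enable the native SWIOTLB implementation. */
	pci_xen_swiotlb_detect();
	pci_swiotlb_detect();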
2010-08-02  x86: Fix keeping track of AMD C1E  (Michal Schmidt)
Accommodate the original C1E-aware idle routine to the different times during boot when the BIOS enables C1E. While at it, remove the synthetic CPUID flag in favor of a single global setting which denotes C1E status on the system. [ hpa: changed c1e_enabled to be a bool; clarified cpu bit 3:21 comment ] Signed-off-by: Michal Schmidt <mschmidt@redhat.com> LKML-Reference: <20100727165335.GA11630@aftab> Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Acked-by: Thomas Gleixner <tglx@linutronix.de>
2010-08-02  Merge commit 'v2.6.35' into perf/core  (Ingo Molnar)
Conflicts: tools/perf/Makefile tools/perf/util/hist.c Merge reason: Resolve the conflicts and update to latest upstream. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-08-01  x86-32, asm: Directly access per-cpu GDT  (Brian Gerst)
Use a direct per-cpu reference for the GDT instead of using a scratch register. Signed-off-by: Brian Gerst <brgerst@gmail.com> LKML-Reference: <1280594903-6341-2-git-send-email-brgerst@gmail.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2010-08-01  x86-64, asm: Directly access per-cpu IST  (Brian Gerst)
Use a direct per-cpu reference for the IST instead of using a scratch register. Signed-off-by: Brian Gerst <brgerst@gmail.com> LKML-Reference: <1280594903-6341-1-git-send-email-brgerst@gmail.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2010-08-01  x86: Export FPU API for KVM use  (Sheng Yang)
Also add some constants. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2010-08-01  x86, UV: Initialize BAU hub map  (Cliff Wickman)
Fix uninitialized uvhub_mask:
- An uninitialized bit map variable was causing initialization of non-existent hubs (this one causes boot panics).
- And the bit map was too small for large machines. This patch makes it dynamic in size.
- Fix the case where socket 0 has no enabled cpus. Don't assume every hub has a socket 0.
- uv_init_per_cpu() should be __init.
Signed-off-by: Cliff Wickman <cpw@sgi.com> Cc: <stable@kernel.org> # for .35.x LKML-Reference: <E1Oeuyt-0004XS-0y@eag09.americas.sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2010-07-30  x86, olpc: Constify an olpc_ofw() arg  (Andres Salomon)
The arguments passed to OFW shouldn't be modified; update the 'args' argument of olpc_ofw to reflect this. This saves us some later casting away of consts. Signed-off-by: Andres Salomon <dilinger@queued.net> LKML-Reference: <20100628220029.1555ac24@debian> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-30  x86, olpc: Use pr_debug() for EC commands  (Andres Salomon)
Unconditionally printing EC debug messages was helpful when we were actually debugging the EC, but during normal operation it can get pretty annoying. Using pr_debug allows us finer-grained control. Signed-off-by: Andres Salomon <dilinger@queued.net> LKML-Reference: <20100616231928.16b539f0@dev.queued.net> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-30  x86, cpu: Package Level Thermal Control, Power Limit Notification definitions  (Fenghua Yu)
Add package level thermal and power limit feature support. The two MSRs and features are new starting with Intel's Sandy Bridge processor. Please check Intel 64 and IA-32 Architectures SDM Vol 3A 14.5.6 Power Limit Notification and 14.6 Package Level Thermal Management. This patch also fixes a bug in which the THERM_INT_LOW_ENABLE bit and THERM_INT_HIGH_ENABLE bit were defined in reverse. [ hpa: fixed up against current tip:x86/cpu ] Signed-off-by: Fenghua Yu <fenghua.yu@intel.com> LKML-Reference: <1280448826-12004-2-git-send-email-fenghua.yu@intel.com> Reviewed-by: Len Brown <len.brown@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-30  x86, mtrr: Use stop machine context to rendezvous all the cpus  (Suresh Siddha)
Use the stop machine context rather than IPIs to rendezvous all the cpus for MTRR initialization that happens during cpu bringup or for MTRR modifications during runtime. This avoids a deadlock scenario (reported by Prarit) like:
cpu A holds a read_lock (tasklist_lock for example) with irqs enabled
cpu B waits for the same lock with irqs disabled using write_lock_irq
cpu C doing set_mtrr() (during AP bringup for example), which will try to rendezvous all the cpus using IPIs
This will result in C and A coming to the rendezvous point and waiting for B. B is stuck forever waiting for the lock and thus not reaching the rendezvous point. Using stop cpu (run in the process context of per cpu based keventd) to do this rendezvous avoids this deadlock scenario. Also make sure all the cpus are in the rendezvous handler before we proceed with the local_irq_save() on each cpu. This lock-step disabling of irqs on all the cpus will avoid other deadlock scenarios (for example involving the blocking smp_call_function's etc). [ This problem is very old. Marking -stable only for 2.6.35 as the stop_one_cpu_nowait() API is present only in 2.6.35. Any older kernel interested in this fix needs to do some more work in backporting this patch. ] Reported-by: Prarit Bhargava <prarit@redhat.com> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> LKML-Reference: <1280515602.2682.10.camel@sbsiddha-MOBL3.sc.intel.com> Acked-by: Prarit Bhargava <prarit@redhat.com> Cc: stable@kernel.org [2.6.35] Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
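A rough sketch of the rendezvous using the cpu-stop machinery instead of IPIs; the identifiers mtrr_rendezvous_handler, mtrr_stop_work and set_mtrr_data are illustrative, not necessarily the names used in the patch:

	static DEFINE_PER_CPU(struct cpu_stop_work, mtrr_stop_work);

	static void set_mtrr_rendezvous(struct set_mtrr_data *data)
	{
		int cpu;

		/* queue the handler on every other online cpu in the
		 * cpu-stop (process) context, then run it locally; all
		 * cpus meet in the handler before touching the MTRRs */
		for_each_online_cpu(cpu) {
			if (cpu == smp_processor_id())
				continue;
			stop_one_cpu_nowait(cpu, mtrr_rendezvous_handler,
					    data,
					    &per_cpu(mtrr_stop_work, cpu));
		}
		mtrr_rendezvous_handler(data);
	}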
2010-07-30  PCI: MSI: Restore read_msi_msg_desc(); add get_cached_msi_msg_desc()  (Ben Hutchings)
commit 2ca1af9aa3285c6a5f103ed31ad09f7399fc65d7 "PCI: MSI: Remove unsafe and unnecessary hardware access" changed read_msi_msg_desc() to return the last MSI message written instead of reading it from the device, since it may be called while the device is in a reduced power state. However, the pSeries platform code really does need to read messages from the device, since they are initially written by firmware. Therefore:
- Restore the previous behaviour of read_msi_msg_desc()
- Add new functions get_cached_msi_msg{,_desc}() which return the last MSI message written
- Use the new functions where appropriate
Acked-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
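A sketch of the intended use on x86, for example when retargeting an MSI to another CPU; the field manipulation and the cfg->vector name are illustrative:

	struct msi_msg msg;

	/* read back the last message written instead of touching the
	 * device, which may be in a low-power state */
	get_cached_msi_msg(irq, &msg);
	msg.data &= ~0xff;		/* replace the vector field */
	msg.data |= cfg->vector;
	write_msi_msg(irq, &msg);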
2010-07-29  Introduce CONFIG_XEN_PVHVM compile option  (Stefano Stabellini)
This patch introduces a CONFIG_XEN_PVHVM compile time option to enable/disable Xen PV on HVM support. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2010-07-28  x86, kgdb: Fix hw breakpoint regression  (Jason Wessel)
HW breakpoint events stopped working correctly with kgdb as a result of commit 018cbffe6819f6f8db20a0a3acd9bab9bfd667e4 (Merge commit 'v2.6.33' into perf/core). The regression occurred because the behavior changed for setting NOTIFY_STOP as the return value to the die notifier if the breakpoint was known to the HW breakpoint API. Because kgdb is using the HW breakpoint API to register HW breakpoint slots, it must also now implement the overflow_handler callback, else kgdb does not get to see the events from the die notifier. The kgdb_ll_trap function will be changed to be general purpose code which can allow an easy way to implement the hw_breakpoint API overflow callback. Signed-off-by: Jason Wessel <jason.wessel@windriver.com> Acked-by: Dongdong Deng <dongdong.deng@windriver.com> Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
2010-07-28  x86, asm: Move cmpxchg emulation code to arch/x86/lib  (H. Peter Anvin)
Move cmpxchg emulation code from arch/x86/kernel/cpu (which is otherwise CPU identification) to arch/x86/lib, where other emulation code lives already. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> LKML-Reference: <AANLkTikAmaDPji-TVDarmG1yD=fwbffcsmEU=YEuP+8r@mail.gmail.com>
2010-07-28  x86, cpu: Export AMD errata definitions  (H. Peter Anvin)
Export the AMD errata definitions, since they are needed by kvm_amd.ko if that is built as a module. Doing "make allmodconfig" during testing would have caught this. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Hans Rosenfeld <hans.rosenfeld@amd.com> LKML-Reference: <1280336972-865982-1-git-send-email-hans.rosenfeld@amd.com>
2010-07-28  x86, cpu: Use AMD errata checking framework for erratum 383  (Hans Rosenfeld)
Use the AMD errata checking framework instead of open-coding the test. Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com> LKML-Reference: <1280336972-865982-3-git-send-email-hans.rosenfeld@amd.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-28  x86, cpu: Clean up AMD erratum 400 workaround  (Hans Rosenfeld)
Remove check_c1e_idle() and use the new AMD errata checking framework instead. Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com> LKML-Reference: <1280336972-865982-2-git-send-email-hans.rosenfeld@amd.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-28  x86, cpu: AMD errata checking framework  (Hans Rosenfeld)
Errata are defined using the AMD_LEGACY_ERRATUM() or AMD_OSVW_ERRATUM() macros. The latter is intended for newer errata that have an OSVW id assigned, which it takes as first argument. Both take a variable number of family-specific model-stepping ranges created by AMD_MODEL_RANGE(). Iff an erratum has an OSVW id, OSVW is available on the CPU, and the OSVW id is known to the hardware, it is used to determine whether an erratum is present. Otherwise, the model-stepping ranges are matched against the current CPU to find out whether the erratum applies. For certain special errata, the code using this framework might have to conduct further checks to make sure an erratum is really (not) present. Signed-off-by: Hans Rosenfeld <hans.rosenfeld@amd.com> LKML-Reference: <1280336972-865982-1-git-send-email-hans.rosenfeld@amd.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
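A sketch of how an erratum is declared and tested with this framework; the OSVW id, the model-stepping ranges and the cpu_has_amd_erratum() checker name are illustrative assumptions, not copied from the patch:

	/* declared once, e.g. in amd.c */
	const int amd_erratum_example[] =
		AMD_OSVW_ERRATUM(1, AMD_MODEL_RANGE(0x0f, 0x4, 0x1, 0xff, 0xf),
				    AMD_MODEL_RANGE(0x10, 0x0, 0x0, 0xff, 0xf));

	/* tested wherever the workaround is applied */
	if (cpu_has_amd_erratum(amd_erratum_example))
		apply_workaround();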
2010-07-28  Merge remote branch 'linus/master' into x86/cpu  (H. Peter Anvin)
2010-07-28  fanotify: sys_fanotify_mark declaration  (Eric Paris)
This patch simply declares the new sys_fanotify_mark syscall: int fanotify_mark(int fanotify_fd, unsigned int flags, u64 mask, int dfd, const char *pathname) Signed-off-by: Eric Paris <eparis@redhat.com>
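A hedged usage sketch of the raw syscall; the FAN_MARK_ADD and FAN_OPEN flag names come from the wider fanotify series, not from this particular entry:

	/* start receiving FAN_OPEN events for the given path */
	ret = syscall(__NR_fanotify_mark, fanotify_fd, FAN_MARK_ADD,
		      (__u64)FAN_OPEN, AT_FDCWD, "/home");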
2010-07-28  fanotify: fanotify_init syscall declaration  (Eric Paris)
This patch defines a new syscall fanotify_init() of the form: int sys_fanotify_init(unsigned int flags, unsigned int event_f_flags, unsigned int priority) This syscall is used to create an fanotify group. This is very similar to the inotify_init() syscall. Signed-off-by: Eric Paris <eparis@redhat.com>
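A matching usage sketch for group creation, following the three-argument form declared above; the FAN_CLOEXEC flag name is assumed from the rest of the fanotify series:

	/* create an fanotify group; event fds will be opened O_RDONLY,
	 * third argument is the priority from the declaration above */
	int fanotify_fd = syscall(__NR_fanotify_init, FAN_CLOEXEC,
				  O_RDONLY, 0);
	if (fanotify_fd < 0)
		perror("fanotify_init");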
2010-07-27  Merge remote branch 'origin/x86/urgent' into x86/asm  (H. Peter Anvin)
2010-07-27  x86-32: Align IRQ stacks properly  (Christoph Hellwig)
As suggested by Steven Rostedt we need to align the IRQ stacks to the stack size, not just the page size to make them work for stack traces and other things that depend on finding the stack slot itself with 8k stacks. Signed-off-by: Christoph Hellwig <hch@lst.de> LKML-Reference: <20100727121313.GA19976@lst.de> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
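The idea, as a sketch: the union below mirrors the x86-32 irq_ctx layout, but the exact per-cpu declaration in the patch may differ. With the stack aligned to its own size, stack walkers can recover the stack base by masking the stack pointer with ~(THREAD_SIZE - 1), which breaks if an 8k stack is only page-aligned.

	union irq_ctx {
		struct thread_info	tinfo;
		u32			stack[THREAD_SIZE / sizeof(u32)];
	} __attribute__((aligned(THREAD_SIZE)));	/* was only page-aligned */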
2010-07-27  Merge branches 'iommu-api/2.6.36' and 'amd-iommu/2.6.36' into iommu/2.6.36  (Joerg Roedel)
2010-07-27  x86/amd-iommu: Export cache-coherency capability  (Joerg Roedel)
This patch exports the capability of the AMD IOMMU to force cache coherency of DMA transactions through the IOMMU-API. This is required to disable some nasty hacks in KVM when this capability is not available. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2010-07-27  x86: Convert common clocksources to use clocksource_register_hz/khz  (John Stultz)
This converts the most common of the x86 clocksources over to use clocksource_register_hz/khz. Signed-off-by: John Stultz <johnstul@us.ibm.com> LKML-Reference: <1279068988-21864-11-git-send-email-johnstul@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
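Roughly what the conversion looks like for a khz-rated clocksource such as the TSC; this is a sketch of the pattern, not the literal diff:

	/* before: pick mult/shift by hand, then register */
	clocksource_tsc.mult = clocksource_khz2mult(tsc_khz,
						    clocksource_tsc.shift);
	clocksource_register(&clocksource_tsc);

	/* after: let the timekeeping core compute mult/shift */
	clocksource_register_khz(&clocksource_tsc, tsc_khz);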
2010-07-27  timekeeping: Fix update_vsyscall to provide wall_to_monotonic offset  (John Stultz)
update_vsyscall() did not provide the wall_to_monotonic offset, so arch specific implementations tend to reference wall_to_monotonic directly. This limits future cleanups in the timekeeping core, so this patch fixes the update_vsyscall interface to provide wall_to_monotonic, allowing wall_to_monotonic to be made static as planned in Documentation/feature-removal-schedule.txt Signed-off-by: John Stultz <johnstul@us.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Anton Blanchard <anton@samba.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Tony Luck <tony.luck@intel.com> LKML-Reference: <1279068988-21864-7-git-send-email-johnstul@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
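The updated per-arch interface, roughly; the argument order is reconstructed from the description above rather than quoted from the patch:

	void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
			     struct clocksource *clock, u32 mult);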
2010-07-27  x86: Fix vtime/file timestamp inconsistencies  (John Stultz)
Due to vtime calling vgettimeofday(), it's possible that an application could call time(); create("stuff", O_RDWR); only to see the file's creation timestamp be before the value returned by time. A similar way to reproduce the issue is to compare the vsyscall time() with the syscall time(), and observe ordering issues. The modified test case from Oleg Nesterov below can illustrate this:
int main(void)
{
	time_t sec1, sec2;
	do {
		sec1 = time(&sec2);
		sec2 = syscall(__NR_time, NULL);
	} while (sec1 <= sec2);
	printf("vtime: %d.000000\n", sec1);
	printf("time: %d.000000\n", sec2);
	return 0;
}
The proper fix is to make vtime use the same time value as current_kernel_time() (which is exported via update_vsyscall) instead of vgettime(). Thanks to Jiri Olsa for bringing up the issue and catching bugs in earlier versions of this fix. Signed-off-by: John Stultz <johnstul@us.ibm.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Oleg Nesterov <oleg@redhat.com> LKML-Reference: <1279068988-21864-2-git-send-email-johnstul@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2010-07-26  xen/pvhvm: fix build problem when !CONFIG_XEN  (Jeremy Fitzhardinge)
x86_hyper_xen_hvm is only defined when Xen is enabled in the kernel config. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2010-07-26  Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip  (Linus Torvalds)
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  x86: Do not try to disable hpet if it hasn't been initialized before
  x86, i8259: Only register sysdev if we have a real 8259 PIC
2010-07-26  [CPUFREQ] powernow-k8: Limit Pstate transition latency check  (Borislav Petkov)
The Pstate transition latency check was added for broken F10h BIOSen which wrongly contain a value of 0 for transition and bus master latency. Fam11h and later, however, (will) have similar transition latency so extend that behavior for them too. Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> Signed-off-by: Dave Jones <davej@redhat.com>
2010-07-26  [CPUFREQ] Fix PCC driver error path  (Matthew Garrett)
The PCC cpufreq driver unmaps the mailbox address range if any CPUs fail to initialise, but doesn't do anything to remove the registered CPUs from the cpufreq core resulting in failures further down the line. We're better off simply returning a failure - the cpufreq core will unregister us cleanly if we end up with no successfully registered CPUs. Tidy up the failure path and also add a sanity check to ensure that the firmware gives us a realistic frequency - the core deals badly with that being set to 0. Signed-off-by: Matthew Garrett <mjg@redhat.com> Cc: Naga Chumbalkar <nagananda.chumbalkar@hp.com> Signed-off-by: Dave Jones <davej@redhat.com>
2010-07-26  [CPUFREQ] fix double freeing in error path of pcc-cpufreq  (Daniel J Blueman)
Prevent double freeing on error path. Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com> Signed-off-by: Dave Jones <davej@redhat.com>
2010-07-26  [CPUFREQ] pcc driver should check for pcch method before calling _OSC  (Matthew Garrett)
The pcc specification documents an _OSC method that's incompatible with the one defined as part of the ACPI spec. This shouldn't be a problem as both are supposed to be guarded with a UUID. Unfortunately approximately nobody (including HP, who wrote this spec) properly checks the UUID on entry to the _OSC call. Right now this could result in surprising behaviour if the pcc driver performs an _OSC call on a machine that doesn't implement the pcc specification. Check whether the PCCH method exists first in order to reduce this probability. Signed-off-by: Matthew Garrett <mjg@redhat.com> Cc: Naga Chumbalkar <nagananda.chumbalkar@hp.com> Signed-off-by: Dave Jones <davej@redhat.com>
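A sketch of the added guard; the handle and variable names are illustrative:

	acpi_handle pcch_handle;
	acpi_status status;

	/* bail out early if this firmware has no PCC interface at all,
	 * so we never issue the incompatible _OSC call */
	status = acpi_get_handle(handle, "PCCH", &pcch_handle);
	if (ACPI_FAILURE(status))
		return -ENODEV;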
2010-07-24  Merge branch 'bugzilla-16396' into release  (Len Brown)
2010-07-24  ACPI / Sleep: Allow the NVS saving to be skipped during suspend to RAM  (Rafael J. Wysocki)
Commit 2a6b69765ad794389f2fc3e14a0afa1a995221c2 (ACPI: Store NVS state even when entering suspend to RAM) caused the ACPI suspend code save the NVS area during suspend and restore it during resume unconditionally, although it is known that some systems need to use acpi_sleep=s4_nonvs for hibernation to work. To allow the affected systems to avoid saving and restoring the NVS area during suspend to RAM and resume, introduce kernel command line option acpi_sleep=nonvs and make acpi_sleep=s4_nonvs work as its alias temporarily (add acpi_sleep=s4_nonvs to the feature removal file). Addresses https://bugzilla.kernel.org/show_bug.cgi?id=16396 . Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Reported-and-tested-by: tomas m <tmezzadra@gmail.com> Signed-off-by: Len Brown <len.brown@intel.com>
2010-07-23  x86: Do not try to disable hpet if it hasn't been initialized before  (Stefano Stabellini)
hpet_disable is called unconditionally on machine reboot if hpet support is compiled in the kernel. hpet_disable only checks if the machine is hpet capable but doesn't make sure that hpet has been initialized. [ tglx: Made it a one liner and removed the redundant hpet_address check ] Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Acked-by: Venkatesh Pallipadi <venki@google.com> LKML-Reference: <alpine.DEB.2.00.1007211726240.22235@kaball-desktop> Cc: stable@kernel.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2010-07-22  x86/xen: event channels delivery on HVM.  (Sheng Yang)
Set the callback to receive evtchns from Xen, using the callback vector delivery mechanism. The traditional way for receiving event channel notifications from Xen is via the interrupts from the platform PCI device. The callback vector is a newer alternative that allows us to receive notifications on any vcpu and doesn't need any PCI support: we allocate a vector exclusively to receive events, and in the vector handler we don't need to interact with the vlapic, therefore we avoid a VMEXIT. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2010-07-22  x86: early PV on HVM features initialization.  (Sheng Yang)
Initialize basic pv on hvm features adding a new Xen HVM specific hypervisor_x86 structure. Don't try to initialize xen-kbdfront and xen-fbfront when running on HVM because the backends are not available. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2010-07-22  Merge branch 'bugzilla-15886' into release  (Len Brown)
2010-07-22  ACPI: skip checking BM_STS if the BIOS doesn't ask for it  (Len Brown)
It turns out that there is a bit in the _CST for Intel FFH C3 that tells the OS if we should be checking BM_STS or not. Linux has been unconditionally checking BM_STS. If the chip-set is configured to enable BM_STS, it can retard or completely prevent entry into deep C-states -- as illustrated by turbostat: http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/ ref: Intel Processor Vendor-Specific ACPI Interface Specification table 4 "_CST FFH GAS Field Encoding" Bit 1: Set to 1 if OSPM should use Bus Master avoidance for this C-state https://bugzilla.kernel.org/show_bug.cgi?id=15886 Signed-off-by: Len Brown <len.brown@intel.com>
2010-07-22  x86 cpufreq, perf: Make trace_power_frequency cpufreq driver independent  (Thomas Renninger)
and fix the broken case if a core's frequency depends on others. trace_power_frequency was only implemented, in a rather ungeneric way, in the acpi-cpufreq driver's target() function.
-> Move the call to trace_power_frequency to cpufreq.c:cpufreq_notify_transition() where the CPUFREQ_POSTCHANGE notifier is triggered. This will support power frequency tracing by all cpufreq drivers.
trace_power_frequency did not trace frequency changes correctly when the userspace governor was used or when CPU cores' frequencies depend on each other.
-> Moving this into the CPUFREQ_POSTCHANGE notifier and passing the cpu which gets switched automatically fixes this.
Robert Schoene provided some important fixes on top of my initial quick shot version which are integrated in this patch:
- Forgot some changes in power_end trace (TP_printk/variable names)
- Variable dummy in power_end must now be cpu_id
- Use static 64 bit variable instead of unsigned int for cpu_id
[akpm@linux-foundation.org: build fix] Signed-off-by: Thomas Renninger <trenn@suse.de> Cc: davej@codemonkey.org.uk Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Dave Jones <davej@codemonkey.org.uk> Acked-by: Arjan van de Ven <arjan@infradead.org> Cc: Robert Schoene <robert.schoene@tu-dresden.de> Tested-by: Robert Schoene <robert.schoene@tu-dresden.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2010-07-21  x86-64: Simplify loading initial_gs  (Brian Gerst)
Load initial_gs as two 32-bit values instead of splitting a 64-bit value. Signed-off-by: Brian Gerst <brgerst@gmail.com> LKML-Reference: <1279371808-24804-3-git-send-email-brgerst@gmail.com> Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2010-07-21  x86: Use symbolic MSR names  (Brian Gerst)
Use symbolic MSR names instead of hardcoding the MSR index. Signed-off-by: Brian Gerst <brgerst@gmail.com> LKML-Reference: <1279371808-24804-2-git-send-email-brgerst@gmail.com> Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2010-07-21  x86: auditsyscall: fix fastpath return value after reschedule  (Roland McGrath)
In the CONFIG_AUDITSYSCALL fast-path for x86 64-bit system calls, we can pass a bad return value and/or error indication for the system call to audit_syscall_exit(). This happens when TIF_NEED_RESCHED was set as the system call returned, so we went out to schedule() and came back to the exit-audit fast-path. The fix is to reload the user return value register from the pt_regs before using it for audit_syscall_exit(). Both the 32-bit kernel's fast path and the 64-bit kernel's 32-bit system call fast path work slightly differently: they always leave the fast path entirely to reschedule and don't return there, so they don't have the analogous bug. Reported-by: Alexander Viro <aviro@redhat.com> Signed-off-by: Roland McGrath <roland@redhat.com>
2010-07-21  x86, xsave: Make xstate_enable_boot_cpu() __init, protect on CPU 0  (H. Peter Anvin)
xstate_enable_boot_cpu() is, as the name implies, only used on the boot CPU; furthermore, it invokes alloc_bootmem(), which is __init; hence it needs to be tagged __init rather than __cpuinit. Furthermore, it is *not* safe in the long run to rely on CPU 0 only coming online during the early boot -- at some point we're going to support offlining (and re-onlining) the boot CPU, and at that point we must not call xstate_enable_boot_cpu() again. The code is a fair bit more obscure than one would like, because the __ref overrides aren't quite powerful enough. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Robert Richter <robert.richter@amd.com> LKML-Reference: <4C476236.1020302@zytor.com>
2010-07-21  x86, xsave: Add __init attribute to setup_xstate_features()  (Robert Richter)
This is called only from initialization code. Signed-off-by: Robert Richter <robert.richter@amd.com> LKML-Reference: <1279731838-1522-6-git-send-email-robert.richter@amd.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-21  x86, xsave: Make init_xstate_buf static  (Robert Richter)
The pointer is only used in xsave.c. Make it static. Signed-off-by: Robert Richter <robert.richter@amd.com> LKML-Reference: <1279731838-1522-5-git-send-email-robert.richter@amd.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2010-07-21  x86, xsave: Check cpuid level for XSTATE_CPUID (0x0d)  (Robert Richter)
The patch introduces the XSTATE_CPUID macro and adds a check that tests if XSTATE_CPUID exists. Signed-off-by: Robert Richter <robert.richter@amd.com> LKML-Reference: <1279731838-1522-4-git-send-email-robert.richter@amd.com> Acked-by: Suresh Siddha <suresh.b.siddha@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
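A sketch of the added check; the exact message and return path may differ from the patch:

	#define XSTATE_CPUID	0x0000000d

	if (boot_cpu_data.cpuid_level < XSTATE_CPUID) {
		WARN(1, "XSTATE_CPUID missing\n");
		return;
	}
	cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);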