path: root/arch/x86/kernel
2009-11-27  x86/amd-iommu: Let domain_for_device handle aliases  (Joerg Roedel)
If there is no domain associated with a device yet and the device has an alias device which already has a domain, the original device needs to get the same domain as the alias device. This patch changes domain_for_device to handle this situation and directly assign the alias device's domain to the device. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Remove iommu specific handling from dma_ops path  (Joerg Roedel)
This patch finishes the removal of all iommu specific handling code in the dma_ops path. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Remove iommu parameter from __(un)map_single  (Joerg Roedel)
With the prior changes this parameter is no longer required. This patch removes it from the function and all callers. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Make alloc_new_range aware of multiple IOMMUs  (Joerg Roedel)
Since the assumption that a dma_ops domain is only bound to one IOMMU was given up, we need to make alloc_new_range aware of it. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Remove iommu parameter from dma_ops_domain_(un)map  (Joerg Roedel)
The parameter is unused in these functions, so remove it from the parameter list. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Use check_device in get_device_resources  (Joerg Roedel)
Every call site of get_device_resources calls check_device before it, so call check_device from get_device_resources directly and simplify the code. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Use check_device for amd_iommu_dma_supported  (Joerg Roedel)
The check_device logic needs to include the dma_supported checks to be really sure. Merge the dma_supported logic into check_device and use it to implement dma_supported. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Make np-cache a global flag  (Joerg Roedel)
The non-present cache flag was IOMMU-local until now, which doesn't make sense. Make this a global flag so we can remove the last user of 'struct iommu' in the map/unmap path. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Reimplement flush_all_domains_on_iommu()  (Joerg Roedel)
This patch reimplements the function flush_all_domains_on_iommu to use the global protection domain list. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Reimplement amd_iommu_flush_all_domains()  (Joerg Roedel)
This patch reimplements the amd_iommu_flush_all_domains function to use the global protection domain list instead of flushing every domain on every IOMMU. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Implement protection domain list  (Joerg Roedel)
This patch adds code to keep a global list of all protection domains. This allows us to simplify the resume code. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
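To make this concrete, here is a minimal sketch of what such a global protection domain list typically looks like; the names (amd_iommu_pd_list, amd_iommu_pd_lock, the helpers) and the list_head member inside struct protection_domain are illustrative assumptions, not quoted from the patch:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* Illustrative sketch only -- names are assumptions. */
    struct protection_domain {
            struct list_head list;                /* assumed link into the global list */
            /* ... existing fields ... */
    };

    static LIST_HEAD(amd_iommu_pd_list);          /* all protection domains   */
    static DEFINE_SPINLOCK(amd_iommu_pd_lock);    /* protects the global list */

    static void add_domain_to_list(struct protection_domain *domain)
    {
            unsigned long flags;

            spin_lock_irqsave(&amd_iommu_pd_lock, flags);
            list_add(&domain->list, &amd_iommu_pd_list);
            spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
    }

    static void del_domain_from_list(struct protection_domain *domain)
    {
            unsigned long flags;

            spin_lock_irqsave(&amd_iommu_pd_lock, flags);
            list_del(&domain->list);
            spin_unlock_irqrestore(&amd_iommu_pd_lock, flags);
    }

With such a list in place, code like the resume path or the flush-all reimplementations above can simply walk amd_iommu_pd_list instead of iterating over per-IOMMU state.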
2009-11-27  x86/amd-iommu: Remove iommu_flush_domain function  (Joerg Roedel)
The iommu_flush_tlb_pde function does essentially the same thing, so the iommu_flush_domain function is redundant and can be removed. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Use __iommu_flush_pages for tlb flushes  (Joerg Roedel)
This patch re-implements iommu_flush_tlb functions to use the __iommu_flush_pages logic. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Make iommu_flush_pages aware of multiple IOMMUs  (Joerg Roedel)
This patch extends the iommu_flush_pages function to flush the TLB entries on all IOMMUs the domain has devices on. This basically gives up the former assumption that dma_ops domains are only bound to one IOMMU in the system. For dma_ops domains this is still true, but not for IOMMU-API managed domains. Giving this assumption up for dma_ops domains too allows code simplification. Further, it splits out the main logic into a generic function which can be used by iommu_flush_tlb too. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
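A rough sketch of the per-domain flush loop this implies, assuming a global amd_iommus[] array indexed as in the "Add an index field to struct amd_iommu" entry below and the per-domain dev_iommu[] reference counts from the "Add per IOMMU reference counting" entry; every name here is an illustrative assumption, including the hypothetical per-IOMMU helper:

    /* Illustrative sketch: flush an address range on every IOMMU that
     * currently has devices from this domain attached. */
    static void domain_flush_pages(struct protection_domain *domain,
                                   u64 address, size_t size)
    {
            int i;

            for (i = 0; i < amd_iommus_present; ++i) {
                    if (domain->dev_iommu[i] == 0)
                            continue;        /* no devices behind IOMMU i */

                    /* hypothetical per-IOMMU flush helper */
                    one_iommu_flush_pages(amd_iommus[i], domain, address, size);
            }
    }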
2009-11-27  x86/amd-iommu: Add function to complete a tlb flush  (Joerg Roedel)
This patch adds a function to the AMD IOMMU driver which completes all queued commands on all IOMMUs a specific domain has devices attached to. This is required in a later patch when per-domain flushing is implemented. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Add per IOMMU reference counting  (Joerg Roedel)
This patch adds reference counting for protection domains per IOMMU. This allows a smarter TLB flushing strategy. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
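A minimal sketch of how such per-IOMMU reference counting can be kept on the attach path; the dev_iommu[]/dev_cnt fields and the MAX_IOMMUS bound are illustrative assumptions:

    #define MAX_IOMMUS 32                        /* assumed upper bound */

    struct protection_domain {
            /* ... existing fields ... */
            unsigned dev_iommu[MAX_IOMMUS];      /* device count per IOMMU */
            unsigned dev_cnt;                    /* devices in this domain */
    };

    /* Called when a device behind IOMMU 'iommu_index' joins 'domain'. */
    static void domain_ref_iommu(struct protection_domain *domain, int iommu_index)
    {
            domain->dev_iommu[iommu_index] += 1;
            domain->dev_cnt                += 1;
    }

The flush paths can then skip any IOMMU whose counter is zero, which is the "smarter TLB flushing strategy" referred to above.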
2009-11-27  x86/amd-iommu: Add an index field to struct amd_iommu  (Joerg Roedel)
This patch adds an index field to struct amd_iommu which can be used to look it up in an array. This index will be used in struct protection_domain to keep track of which protection domain has devices behind which IOMMU. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Update copyright headers  (Joerg Roedel)
This patch updates the copyright headers in the relevant AMD IOMMU driver files to match the date of the latest changes. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  x86/amd-iommu: Separate internal interface definitions  (Joerg Roedel)
This patch moves all function declarations which are only used inside the driver code to a separate header file. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-27  hw-breakpoints: Use struct perf_event_attr to define user breakpoints  (Frederic Weisbecker)
In-kernel user breakpoints are created using functions in which we pass breakpoint parameters as individual variables: address, length and type. Although this fits well for x86, it just does not scale across architectures that may support this API later, as these may have more or different needs. Pass in a perf_event_attr structure instead, because it is meant to evolve as much as possible into a generic hardware breakpoint parameter structure. Reported-by: K.Prasad <prasad@linux.vnet.ibm.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1259294154-5197-1-git-send-regression-fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
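As a rough illustration of the attr-based style, a caller might fill in the bp_addr/bp_len/bp_type fields of a perf_event_attr and hand the whole structure to the breakpoint API; the helper names and exact signature below are assumptions for the sketch, not taken from this log:

    /* Illustrative sketch of the attr-based breakpoint setup. */
    static struct perf_event *set_write_watchpoint(struct task_struct *tsk,
                                                   unsigned long addr,
                                                   perf_overflow_handler_t triggered)
    {
            struct perf_event_attr attr;

            hw_breakpoint_init(&attr);           /* assumed helper: zero/preset attr */
            attr.bp_addr = addr;
            attr.bp_len  = HW_BREAKPOINT_LEN_4;
            attr.bp_type = HW_BREAKPOINT_W;      /* break on write */

            return register_user_hw_breakpoint(&attr, triggered, tsk);
    }

New architecture-specific needs can then be expressed as additional attr fields without changing every call site.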
2009-11-26  x86: SGI UV: Map low MMR ranges  (Jack Steiner)
Explicitly mmap the UV chipset MMR address ranges used to access blade-local registers. Although these same MMRs are also mmaped at higher addresses, the low range is more convenient when accessing blade-local registers. The low range addresses always alias to the local blade regardless of the blade id. Signed-off-by: Jack Steiner <steiner@sgi.com> LKML-Reference: <20091125162018.GA25445@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26  x86, 64-bit: Set data segments to null after switching to 64-bit mode  (Brian Gerst)
This prevents kernel threads from inheriting non-null segment selectors, and causing optimizations in __switch_to() to be ineffective. Signed-off-by: Brian Gerst <brgerst@gmail.com> Cc: Tim Blechmann <tim@klingt.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Jan Beulich <JBeulich@novell.com> LKML-Reference: <1259165856-3512-1-git-send-email-brgerst@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26  x86, mce: Add __cpuinit to hotplug callback functions  (Hidetoshi Seto)
mce_disable_cpu() and mce_reenable_cpu() are called only from mce_cpu_callback(), which is marked as __cpuinit. So these functions can be __cpuinit too. Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Cc: Andi Kleen <ak@linux.intel.com> LKML-Reference: <4B0E3C4E.4090809@jp.fujitsu.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26  x86: Limit number of per cpu TSC sync messages  (Mike Travis)
Limit the number of per cpu TSC sync messages by only printing to the console if an error occurs, otherwise print as a DEBUG message. The info message "Skipping synchronization ..." is only printed after the last cpu has booted. Signed-off-by: Mike Travis <travis@sgi.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Roland Dreier <rdreier@cisco.com> Cc: Randy Dunlap <rdunlap@xenotime.net> Cc: Tejun Heo <tj@kernel.org> Cc: Andi Kleen <andi@firstfloor.org> Cc: Greg Kroah-Hartman <gregkh@suse.de> Cc: Yinghai Lu <yhlu.kernel@gmail.com> Cc: David Rientjes <rientjes@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Cc: Jack Steiner <steiner@sgi.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <20091118002222.181053000@alcatraz.americas.sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
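The described policy boils down to the usual pattern of demoting success output to debug level while keeping failures on the console; a generic sketch (not the actual tsc_sync.c hunk, and the variable names are placeholders):

    if (tsc_warp_detected) {
            pr_warning("TSC synchronization [CPU#%d -> CPU#%d] failed\n",
                       source_cpu, target_cpu);
    } else {
            /* success is only interesting when debugging */
            pr_debug("TSC synchronization [CPU#%d -> CPU#%d] passed\n",
                     source_cpu, target_cpu);
    }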
2009-11-26  x86/hw-breakpoints: Don't lose GE flag while disabling a breakpoint  (Frederic Weisbecker)
When we schedule out a breakpoint from the cpu, we also incidentally remove the "Global exact breakpoint" flag from the breakpoint control register. This makes us lose the fine-grained precision about the origin of the instructions that may trigger breakpoint exceptions for the other breakpoints running on this cpu. Reported-by: Prasad <prasad@linux.vnet.ibm.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <1259211878-6013-1-git-send-regression-fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26  hw-breakpoints: Simplify error handling in breakpoint creation requests  (Frederic Weisbecker)
This simplifies the error handling when we create a breakpoint. We don't need to check the NULL return value corner case anymore since we have improved perf_event_create_kernel_counter() to always return an error code in the failure case. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Prasad <prasad@linux.vnet.ibm.com> LKML-Reference: <1259210142-5714-3-git-send-regression-fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-26  x86: dumpstack, 64-bit: Disable preemption when walking the IRQ/exception stacks  (Ingo Molnar)
This warning:

[ 847.140022] rb_producer D 0000000000000000 5928 519 2 0x00000000
[ 847.203627] BUG: using smp_processor_id() in preemptible [00000000] code: khungtaskd/517
[ 847.207360] caller is show_stack_log_lvl+0x2e/0x241
[ 847.210364] Pid: 517, comm: khungtaskd Not tainted 2.6.32-rc8-tip+ #13761
[ 847.213395] Call Trace:
[ 847.215847] [<ffffffff81413bde>] debug_smp_processor_id+0x1f0/0x20a
[ 847.216809] [<ffffffff81015eae>] show_stack_log_lvl+0x2e/0x241
[ 847.220027] [<ffffffff81018512>] show_stack+0x1c/0x1e
[ 847.223365] [<ffffffff8107b7db>] sched_show_task+0xe4/0xe9
[ 847.226694] [<ffffffff8112f21f>] check_hung_task+0x140/0x199
[ 847.230261] [<ffffffff8112f4a8>] check_hung_uninterruptible_tasks+0x1b7/0x20f
[ 847.233371] [<ffffffff8112f500>] ? watchdog+0x0/0x50
[ 847.236683] [<ffffffff8112f54e>] watchdog+0x4e/0x50
[ 847.240034] [<ffffffff810cee56>] kthread+0x97/0x9f
[ 847.243372] [<ffffffff81012aea>] child_rip+0xa/0x20
[ 847.246690] [<ffffffff81e43494>] ? restore_args+0x0/0x30
[ 847.250019] [<ffffffff81e43083>] ? _spin_lock+0xe/0x10
[ 847.253351] [<ffffffff810cedbf>] ? kthread+0x0/0x9f
[ 847.256833] [<ffffffff81012ae0>] ? child_rip+0x0/0x20

happens because on preempt-RCU, khungd calls show_stack() with preemption enabled. Make sure we are not preemptible while walking the IRQ and exception stacks on 64-bit. (32-bit stack dumping is preemption safe.) Signed-off-by: Ingo Molnar <mingo@elte.hu>
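The fix amounts to pinning the walker to its CPU while the per-CPU IRQ and exception stacks are examined; schematically (not the exact dumpstack_64.c change, and the walk helper is a hypothetical stand-in):

    /* Sketch: keep preemption off so smp_processor_id() and the
     * per-CPU IRQ/exception stack pointers stay valid during the walk. */
    const unsigned cpu = get_cpu();      /* disables preemption */

    walk_irq_and_exception_stacks(cpu);  /* hypothetical stand-in for the walk */

    put_cpu();                           /* re-enables preemption */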
2009-11-26  x86: dumpstack: Clean up the x86_stack_ids[][] initialization and other details  (Ingo Molnar)
Make the initialization more readable, plus tidy up a few small visual details as well. No change in functionality. LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-25  x86: Rename global percpu symbol dr7 to cpu_dr7  (Tejun Heo)
Percpu symbols now occupy the same namespace as other global symbols and, as such, short global symbols without a subsystem prefix tend to collide with local variables. The dr7 percpu variable used by x86 was hit by this. Rename it to cpu_dr7. The rename also makes it more consistent with its fellow cpu_debugreg percpu variable. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andrew Morton <akpm@linux-foundation.org> LKML-Reference: <20091125115856.GA17856@elte.hu> Signed-off-by: Ingo Molnar <mingo@elte.hu> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
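For context, a per-CPU variable of this kind is declared and accessed roughly like this (a generic sketch, not the patch itself):

    DEFINE_PER_CPU(unsigned long, cpu_dr7);      /* was: dr7 */

    static void sample_local_dr7(void)
    {
            /* read the copy belonging to the local CPU */
            unsigned long dr7 = __get_cpu_var(cpu_dr7);

            (void)dr7;
    }

The cpu_ prefix keeps the global symbol out of the way of ordinary local variables named dr7.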
2009-11-25  x86: Fix iommu=soft boot option  (FUJITA Tomonori)
The iommu=soft boot option forces the kernel to use swiotlb. (This has the side effect of enabling the swiotlb over the GART if this boot option is provided. This is the desired behavior of the swiotlb boot option and works like that for all other hw-IOMMU drivers.) Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: yinghai@kernel.org LKML-Reference: <20091125084611O.fujita.tomonori@lab.ntt.co.jp> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-24  ACPICA: Add post-order callback to acpi_walk_namespace  (Lin Ming)
The existing interface only has a pre-order callback. This change adds an additional parameter for a post-order callback which will be more useful for bus scans. ACPICA BZ 779. Also update the external calls to acpi_walk_namespace. http://www.acpica.org/bugzilla/show_bug.cgi?id=779 Signed-off-by: Lin Ming <ming.m.lin@intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Len Brown <len.brown@intel.com>
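After this change a namespace walk can supply both visitors; the sketch below approximates the extended call with a pre-order and a post-order callback (the exact parameter order is inferred from the description and should be checked against the ACPICA headers):

    static acpi_status visit_pre(acpi_handle handle, u32 level,
                                 void *context, void **rv)
    {
            return AE_OK;                /* keep descending */
    }

    static acpi_status visit_post(acpi_handle handle, u32 level,
                                  void *context, void **rv)
    {
            return AE_OK;                /* runs after the children have been visited */
    }

    static void scan_devices(void)
    {
            acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
                                ACPI_UINT32_MAX,
                                visit_pre,       /* existing pre-order callback */
                                visit_post,      /* new post-order callback     */
                                NULL, NULL);
    }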
2009-11-24  x86, mtrr: Fix sorting of mtrr after subtracting  (Yinghai Lu)
In some cases we can coalesce MTRR entries after cleanup; this may allow us to have more entries. As such, introduce clean_sort_range to sort and coalesce the MTRR entries. Signed-off-by: Yinghai Lu <yinghai@kernel.org> LKML-Reference: <4B0BB9A3.5020908@kernel.org> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-11-24  [ACPI/CPUFREQ] Introduce bios_limit per cpu cpufreq sysfs interface  (Thomas Renninger)
This interface is mainly intended (and implemented) for ACPI _PPC BIOS frequency limitations, but other cpufreq drivers can also use it for similar use cases.

Why is this needed: currently it's not obvious why cpufreq got limited. People see cpufreq/scaling_max_freq reduced, but this could have happened through:
- any userspace program writing to scaling_max_freq
- thermal limitations
- hardware (_PPC in the ACPI case) limitations

Therefore export bios_limit (in kHz) to:
- point out to the user that it's the BIOS (broken or intended) which limits frequency
- export it as a sysfs interface for userspace programs

While this was a rarely used feature on laptops, there will appear more and more server implementations providing "Green IT" features like allowing the service processor to limit the frequency. People want to know about HW/BIOS frequency limitations.

All ACPI P-state driven cpufreq drivers are covered by this patch:
- powernow-k8
- powernow-k7
- acpi-cpufreq

Tested with a patched DSDT which limits the first two cores (_PPC returns 1) via _PPC, exposed by bios_limit:

# echo 2200000 >cpu2/cpufreq/scaling_max_freq
# cat cpu*/cpufreq/scaling_max_freq
2600000
2600000
2200000
2200000
# #scaling_max_freq shows general user/thermal/BIOS limitations
# cat cpu*/cpufreq/bios_limit
2600000
2600000
2800000
2800000
# #bios_limit only shows the HW/BIOS limitation

CC: Pallipadi Venkatesh <venkatesh.pallipadi@intel.com> CC: Len Brown <lenb@kernel.org> CC: davej@codemonkey.org.uk CC: linux@dominikbrodowski.net Signed-off-by: Thomas Renninger <trenn@suse.de> Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24  [CPUFREQ] use an enum for speedstep processor identification  (Rusty Russell)
The "unsigned int processor" everywhere confused Rusty, leading to breakage when he passed in smp_processor_id(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24  [CPUFREQ] powernow-k6: set transition latency value so ondemand governor can be used  (Krzysztof Helt)
Set the transition latency to a value smaller than CPUFREQ_ETERNAL so governors other than "performance" work (like the "ondemand" one). The value is found in the "AMD PowerNow! Technology Platform Design Guide for Embedded Processors" dated December 2000 (AMD doc #24267A). The answer to one of the FAQs on page 40 states that the suggested complete transition period is 200 us. Tested on a K6-2+ CPU with a K6-3 core (model 13, stepping 4). Signed-off-by: Krzysztof Helt <krzysztof.h1@wp.pl> Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24  [CPUFREQ] cpumask: don't put a cpumask on the stack in x86...cpufreq/powernow-k8.c  (Rusty Russell)
It's still mugging the current process's cpumask, but as the comment in 1ff6e97f1d says, it's not a trivial fix. So, at least we can use a cpumask_var_t to do the Wrong Thing the Right Way :) Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> To: cpufreq@vger.kernel.org Cc: Mark Langsdorf <mark.langsdorf@amd.com> Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24  [CPUFREQ] Enable ACPI PDC handshake for VIA/Centaur CPUs  (Harald Welte)
In commit 0de51088e6a82bc8413d3ca9e28bbca2788b5b53, we introduced the use of acpi-cpufreq on VIA/Centaur CPUs by removing a vendor check for VENDOR_INTEL. However, as it turns out, at least the Nano CPUs also need the PDC (processor driver capabilities) handshake in order to activate the methods required for acpi-cpufreq. Since arch_acpi_processor_init_pdc() contains another vendor check for Intel, the PDC is not initialized on VIA CPUs. The resulting behavior of a current mainline kernel on such systems is: acpi-cpufreq loads and it indicates CPU frequency changes; however, the CPU stays at a single frequency. This trivial patch ensures that init_intel_pdc() is called on Intel and VIA/Centaur CPUs alike. Signed-off-by: Harald Welte <HaraldWelte@viatech.com> Signed-off-by: Dave Jones <davej@redhat.com>
2009-11-24  perf_events, x86: Fix validate_event bug  (Stephane Eranian)
The validate_event() function was failing on valid event combinations. The function was assuming that if x86_schedule_event() returned 0, it meant an error. But x86_schedule_event() returns the counter index, and 0 is a perfectly valid value. An error is returned if the function returns a negative value. Furthermore, validate_event() was also failing for event groups because event->pmu was not set until after hw_perf_event_init(). Signed-off-by: Stephane Eranian <eranian@google.com> Cc: peterz@infradead.org Cc: paulus@samba.org Cc: perfmon2-devel@lists.sourceforge.net Cc: eranian@gmail.com LKML-Reference: <4b0bdf36.1818d00a.07cc.25ae@mx.google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
--
arch/x86/kernel/cpu/perf_event.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
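The broken check is the classic "index zero treated as failure" mistake; schematically (not the literal diff, with cpuc and fake_event as placeholders):

    /* x86_schedule_event() returns a counter index (>= 0) or a negative error. */

    /* before (broken): counter index 0 -- a valid result -- is treated as an error */
    if (!x86_schedule_event(cpuc, &fake_event))
            goto reject;

    /* after (fixed): only a negative return value signals an error */
    if (x86_schedule_event(cpuc, &fake_event) < 0)
            goto reject;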
2009-11-24  x86: Move find_smp_config() earlier and avoid bootmem usage  (Yinghai Lu)
Move the find_smp_config() call to before bootmem is initialized. Use reserve_early() instead of reserve_bootmem() in it. This simplifies the code: we only need to call find_smp_config() once and can remove the now-unneeded reserve parameter from x86_init_mpparse::find_smp_config. We thus also reduce x86's dependency on bootmem allocations. Signed-off-by: Yinghai Lu <yinghai@kernel.org> LKML-Reference: <4B0BB9F2.70907@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  x86, platform: Change is_untracked_pat_range() to bool; cleanup init  (H. Peter Anvin)
- Change is_untracked_pat_range() to return bool.
- Clean up the initialization of is_untracked_pat_range() -- by default, we simply point it at is_ISA_range() directly.
- Move is_untracked_pat_range to the end of struct x86_platform, since it is the newest field.
Signed-off-by: H. Peter Anvin <hpa@zytor.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Jack Steiner <steiner@sgi.com> LKML-Reference: <20091119202341.GA4420@sgi.com>
2009-11-23  x86, cpu: mv display_cacheinfo -> cpu_detect_cache_sizes  (Borislav Petkov)
display_cacheinfo() doesn't display anything anymore and it is used to detect CPU cache sizes. Rename it accordingly. Signed-off-by: Borislav Petkov <petkovbb@gmail.com> LKML-Reference: <20091121130145.GA31357@liondog.tnic> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2009-11-23  x86: UV SGI: Don't track GRU space in PAT  (Jack Steiner)
GRU space is always mapped as WB in the page table. There is no need to track the mappings in the PAT. This also eliminates the "freeing invalid memtype" messages when the GRU space is unmapped. Signed-off-by: Jack Steiner <steiner@sgi.com> LKML-Reference: <20091119202341.GA4420@sgi.com> [ v2: fix build failure ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  x86: UV RTC: Always enable RTC clocksource  (Dimitri Sivanich)
Always enable the RTC clocksource on UV systems. Signed-off-by: Dimitri Sivanich <sivanich@sgi.com> LKML-Reference: <20091120214826.GA20016@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  x86: Tighten conditionals on MCE related statistics  (Jan Beulich)
irq_thermal_count is only maintained when X86_THERMAL_VECTOR is enabled, and neither X86_THERMAL_VECTOR nor X86_MCE_THRESHOLD needs extra wrapping in X86_MCE conditionals. Signed-off-by: Jan Beulich <jbeulich@novell.com> Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> Cc: Yong Wang <yong.y.wang@intel.com> Cc: Suresh Siddha <suresh.b.siddha@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Arjan van de Ven <arjan@infradead.org> LKML-Reference: <4B06AFA902000078000211F8@vpn.id2.novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  x86: SGI UV: Fix BAU initialization  (Cliff Wickman)
A memory mapped register that affects the SGI UV Broadcast Assist Unit's interrupt handling may sometimes be uninitialized. Remove the condition on its initialization, as that condition can be randomly satisfied by a hardware reset. Signed-off-by: Cliff Wickman <cpw@sgi.com> Cc: <stable@kernel.org> LKML-Reference: <E1NBGB9-0005nU-Dp@eag09.americas.sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  hw-breakpoint: Attribute authorship of hw-breakpoint related files  (K.Prasad)
Attribute authorship to developers of hw-breakpoint related files. Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <20091123154713.GA5593@in.ibm.com> [ v2: moved it to latest -tip ] Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  x86/amd-iommu: attach devices to pre-allocated domains early  (Joerg Roedel)
For some devices the ACPI table may define unity map requirements which must be met when the IOMMU is enabled. So we need to attach devices to their domains as early as possible so that these mappings are in place when needed. This patch assigns the domains right after they are allocated. Otherwise this can result in I/O page faults before a driver binds to a device while the BIOS is still using it. Cc: stable@kernel.org Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-23  x86/amd-iommu: un__init iommu_setup_msi  (Joerg Roedel)
This function may be called on the resume path and therefore cannot be dropped after booting. Cc: stable@kernel.org Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2009-11-23  perf events: Do not generate function trace entries in perf code  (Ingo Molnar)
Decreases perf overhead when function tracing is enabled, by about 50%. Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Mike Galbraith <efault@gmx.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-11-23  x86, numa: Use near(er) online node instead of roundrobin for NUMA  (Yinghai Lu)
CPU to node mapping is set via the following sequence:
1. numa_init_array(): set up a roundrobin mapping from cpu to online node.
2. init_cpu_to_node(): set the mapping according to apicid_to_node[] (from the SRAT); this only handles nodes that are online and leaves CPUs on nodes without RAM (i.e. not online) on the roundrobin mapping.
3. Later, srat_detect_node() for Intel/AMD will use the first_online node or a nearby node.
The problem is that setup_per_cpu_areas() is not called between steps 2 and 3, so the per_cpu area for a CPU on a node with RAM may end up on a different node, possibly two hops away. So try to optimize this: add find_near_online_node() and call it from init_cpu_to_node(). Signed-off-by: Yinghai Lu <yinghai@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: David Rientjes <rientjes@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> LKML-Reference: <4B07A739.3030104@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
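A sketch of what a "nearest online node" helper can look like using the generic node_online()/node_distance() primitives; this is an illustration of the idea, not the exact patch:

    #include <linux/kernel.h>
    #include <linux/nodemask.h>
    #include <linux/topology.h>

    /* Pick the online node closest (by SLIT distance) to 'node'. */
    static int find_near_online_node(int node)
    {
            int n, val;
            int best_val  = INT_MAX;
            int best_node = -1;

            for_each_online_node(n) {
                    val = node_distance(node, n);

                    if (val < best_val) {
                            best_val  = val;
                            best_node = n;
                    }
            }

            return best_node;
    }

init_cpu_to_node() can then map a CPU whose own node is not online to the node returned by this helper, instead of leaving it on the roundrobin assignment.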