path: root/arch/powerpc
Age  Commit message  Author
2009-09-11  Merge branch 'perfcounters-core-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'perfcounters-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (60 commits) perf tools: Avoid unnecessary work in directory lookups perf stat: Clean up statistics calculations a bit more perf stat: More advanced variance computation perf stat: Use stddev_mean in stead of stddev perf stat: Remove the limit on repeat perf stat: Change noise calculation to use stddev x86, perf_counter, bts: Do not allow kernel BTS tracing for now x86, perf_counter, bts: Correct pointer-to-u64 casts x86, perf_counter, bts: Fail if BTS is not available perf_counter: Fix output-sharing error path perf trace: Fix read_string() perf trace: Print out in nanoseconds perf tools: Seek to the end of the header area perf trace: Fix parsing of perf.data perf trace: Sample timestamps as well perf_counter: Introduce new (non-)paranoia level to allow raw tracepoint access perf trace: Sample the CPU too perf tools: Work around strict aliasing related warnings perf tools: Clean up warnings list in the Makefile perf tools: Complete support for dynamic strings ...
2009-09-11  Merge branch 'core-locking-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (32 commits) locking, m68k/asm-offsets: Rename signal defines locking: Inline spinlock code for all locking variants on s390 locking: Simplify spinlock inlining locking: Allow arch-inlined spinlocks locking: Move spinlock function bodies to header file locking, m68k: Calculate thread_info offset with asm offset locking, m68k/asm-offsets: Rename pt_regs offset defines locking, sparc: Rename __spin_try_lock() and friends locking, powerpc: Rename __spin_try_lock() and friends lockdep: Remove recursion stattistics lockdep: Simplify lock_stat seqfile code lockdep: Simplify lockdep_chains seqfile code lockdep: Simplify lockdep seqfile code lockdep: Fix missing entries in /proc/lock_chains lockdep: Fix missing entry in /proc/lock_stat lockdep: Fix memory usage info of BFS lockdep: Reintroduce generation count to make BFS faster lockdep: Deal with many similar locks lockdep: Introduce lockdep_assert_held() lockdep: Fix style nits ...
2009-09-11  Merge branch 'core-iommu-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'core-iommu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (59 commits) x86/gart: Do not select AGP for GART_IOMMU x86/amd-iommu: Initialize passthrough mode when requested x86/amd-iommu: Don't detach device from pt domain on driver unbind x86/amd-iommu: Make sure a device is assigned in passthrough mode x86/amd-iommu: Align locking between attach_device and detach_device x86/amd-iommu: Fix device table write order x86/amd-iommu: Add passthrough mode initialization functions x86/amd-iommu: Add core functions for pd allocation/freeing x86/dma: Mark iommu_pass_through as __read_mostly x86/amd-iommu: Change iommu_map_page to support multiple page sizes x86/amd-iommu: Support higher level PTEs in iommu_page_unmap x86/amd-iommu: Remove old page table handling macros x86/amd-iommu: Use 2-level page tables for dma_ops domains x86/amd-iommu: Remove bus_addr check in iommu_map_page x86/amd-iommu: Remove last usages of IOMMU_PTE_L0_INDEX x86/amd-iommu: Change alloc_pte to support 64 bit address space x86/amd-iommu: Introduce increase_address_space function x86/amd-iommu: Flush domains if address space size was increased x86/amd-iommu: Introduce set_dte_entry function x86/amd-iommu: Add a gneric version of amd_iommu_flush_all_devices ...
2009-09-05  powerpc: Fix i8259 interrupt driver kernel crash on ML510  (Roderick Colenbrander)
This patch fixes a NULL pointer dereference caused by the removal of the 'ack()' hook for level interrupts in the Xilinx interrupt driver: a recent change to the Xilinx interrupt controller removed that hook for level IRQs. Signed-off-by: Roderick Colenbrander <thunderbird2k@gmail.com> Signed-off-by: Grant Likely <grant.likely@secretlab.ca> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-09-04  Merge branch 'amd-iommu/2.6.32' of ↵  (Ingo Molnar)
git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux-2.6-iommu into core/iommu
2009-09-03  perf_counter/powerpc: Fix cache event codes for POWER7  (Paul Mackerras)
I had the codes for L1 D-cache load accesses and misses swapped around, and the wrong codes for LL-cache accesses and misses. This corrects them. Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: <stable@kernel.org> LKML-Reference: <19103.8514.709300.585484@cargo.ozlabs.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-09-02  Merge branch 'perfcounters/urgent' into perfcounters/core  (Ingo Molnar)
Merge reason: We are going to modify a place modified by perfcounters/urgent. Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-31  locking, powerpc: Rename __spin_try_lock() and friends  (Heiko Carstens)
Needed to avoid namespace conflicts when the common code function bodies of _spin_try_lock() etc. are moved to a header file where the function name would be __spin_try_lock(). Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Horst Hartmann <horsth@linux.vnet.ibm.com> Cc: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: David Miller <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Roman Zippel <zippel@linux-m68k.org> Cc: <linux-arch@vger.kernel.org> LKML-Reference: <20090831124415.918799705@de.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-27  powerpc/ps3: Update ps3_defconfig  (Geoff Levand)
Update ps3_defconfig. o Refresh for 2.6.31. o Remove MTD support. o Add more HID drivers. Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-27  powerpc/ps3: Add missing check for PS3 to rtc-ps3 platform device registration  (Geert Uytterhoeven)
On non-PS3, we get: | kernel BUG at drivers/rtc/rtc-ps3.c:36! because the rtc-ps3 platform device is registered unconditionally in a kernel with builtin support for PS3. Reported-by: Sachin Sant <sachinp@in.ibm.com> Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Acked-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
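A minimal sketch of the kind of guard described above, assuming the check sits in the routine that registers the platform device; the use of firmware_has_feature(FW_FEATURE_PS3_LV1) is an assumption about how PS3 hardware is detected, not a quote of the actual patch:

    /* needs <linux/platform_device.h>, <linux/err.h>, <asm/firmware.h> */
    static int __init ps3_rtc_init(void)
    {
        struct platform_device *pdev;

        /* assumed fix: bail out when the kernel is not running on a PS3 */
        if (!firmware_has_feature(FW_FEATURE_PS3_LV1))
            return -ENODEV;

        pdev = platform_device_register_simple("rtc-ps3", -1, NULL, 0);
        if (IS_ERR(pdev))
            return PTR_ERR(pdev);
        return 0;
    }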
2009-08-18  perf_counter: powerpc: Add callchain support  (Paul Mackerras)
This adds support for tracing callchains for powerpc, both 32-bit and 64-bit, and both in the kernel and userspace, from PMU interrupt context. The first three entries stored for each callchain are the NIP (next instruction pointer), LR (link register), and the contents of the LR save area in the second stack frame (the first is ignored because the ABI convention on powerpc is that functions save their return address in their caller's stack frame). Because leaf functions don't have to save their return address (LR value) and don't have to establish a stack frame, it's possible for either or both of LR and the second stack frame's LR save area to have valid return addresses in them. This is basically impossible to disambiguate without either reading the code or looking at auxiliary information such as CFI tables. Since we don't want to do either of those things at interrupt time, we store both LR and the second stack frame's LR save area. Once we get past the second stack frame, there is no ambiguity; all return addresses we get are reliable. For kernel traces, we check whether they are valid kernel instruction addresses and store zero instead if they are not (rather than omitting them, which would make it impossible for userspace to know which was which). We also store zero instead of the second stack frame's LR save area value if it is the same as LR. For kernel traces, we check for interrupt frames, and for user traces, we check for signal frames. In each case, since we're starting a new trace, we store a PERF_CONTEXT_KERNEL/USER marker so that userspace knows that the next three entries are NIP, LR and the second stack frame for the interrupted context. We read user memory with __get_user_inatomic. On 64-bit, if this PMU interrupt occurred while interrupts are soft-disabled, and there is no MMU hash table entry for the page, we will get an -EFAULT return from __get_user_inatomic even if there is a valid Linux PTE for the page, since hash_page isn't reentrant. Thus we have code here to read the Linux PTE and access the page via the kernel linear mapping. Since 64-bit doesn't use (or need) highmem there is no need to do kmap_atomic. On 32-bit, we don't do soft interrupt disabling, so this complication doesn't occur and there is no need to fall back to reading the Linux PTE, since hash_page (or the TLB miss handler) will get called automatically if necessary. Note that we cannot get PMU interrupts in the interval during context switch between switch_mm (which switches the user address space) and switch_to (which actually changes current to the new process). On 64-bit this is because interrupts are hard-disabled in switch_mm and stay hard-disabled until they are soft-enabled later, after switch_to has returned. So there is no possibility of trying to do a user stack trace when the user address space is not current's address space. Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
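A schematic view of the entry layout described above for one interrupted kernel context; store_entry(), read_lr_save() and valid_kernel_addr() are placeholder helpers for illustration, not the functions in the actual arch/powerpc/kernel/perf_callchain.c:

    /* start a new trace for the interrupted context */
    store_entry(entry, PERF_CONTEXT_KERNEL);
    store_entry(entry, regs->nip);      /* NIP: next instruction pointer */
    store_entry(entry, regs->link);     /* LR */

    /* LR save area in the second stack frame; store 0 if it merely
     * duplicates LR or is not a valid kernel text address */
    lr_save = read_lr_save(second_frame);
    if (lr_save == regs->link || !valid_kernel_addr(lr_save))
        lr_save = 0;
    store_entry(entry, lr_save);

    /* beyond the second frame the return addresses are unambiguous and
     * can be walked normally */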
2009-08-18  powerpc: Allow perf_counters to access user memory at interrupt time  (Paul Mackerras)
This provides a mechanism to allow the perf_counters code to access user memory in a PMU interrupt routine. Such an access can cause various kinds of interrupt: SLB miss, MMU hash table miss, segment table miss, or TLB miss, depending on the processor. This commit only deals with 64-bit classic/server processors, which use an MMU hash table. 32-bit processors are already able to access user memory at interrupt time. Since we don't soft-disable on 32-bit, we avoid the possibility of reentering hash_page or the TLB miss handlers, since they run with interrupts disabled. On 64-bit processors, an SLB miss interrupt on a user address will update the slb_cache and slb_cache_ptr fields in the paca. This is OK except in the case where a PMU interrupt occurs in switch_slb, which also accesses those fields. To prevent this, we hard-disable interrupts in switch_slb. Interrupts are already soft-disabled at this point, and will get hard-enabled when they get soft-enabled later. This also reworks slb_flush_and_rebolt: to avoid hard-disabling twice, and to make sure that it clears the slb_cache_ptr when called from other callers than switch_slb, the existing routine is renamed to __slb_flush_and_rebolt, which is called by switch_slb and the new version of slb_flush_and_rebolt. Similarly, switch_stab (used on POWER3 and RS64 processors) gets a hard_irq_disable() to protect the per-cpu variables used there and in ste_allocate. If a MMU hashtable miss interrupt occurs, normally we would call hash_page to look up the Linux PTE for the address and create a HPTE. However, hash_page is fairly complex and takes some locks, so to avoid the possibility of deadlock, we check the preemption count to see if we are in a (pseudo-)NMI handler, and if so, we don't call hash_page but instead treat it like a bad access that will get reported up through the exception table mechanism. An interrupt whose handler runs even though the interrupt occurred when soft-disabled (such as the PMU interrupt) is considered a pseudo-NMI handler, which should use nmi_enter()/nmi_exit() rather than irq_enter()/irq_exit(). Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
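A sketch of the pseudo-NMI guard described in the last paragraph; exactly where the check sits in the hash-fault path is an assumption:

    /*
     * If we got here from a (pseudo-)NMI context such as the PMU
     * interrupt, don't call hash_page() (it is complex and takes locks);
     * report a failed access and let the exception-table fixup handle it.
     * in_nmi() only works because such handlers use nmi_enter()/nmi_exit()
     * rather than irq_enter()/irq_exit().
     */
    if (in_nmi())
        return -EFAULT;

    return hash_page(ea, access, trap);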
2009-08-18  powerpc/32: Always order writes to halves of 64-bit PTEs  (Paul Mackerras)
On 32-bit systems with 64-bit PTEs, the PTEs have to be written in two 32-bit halves. On SMP we write the higher-order half and then the lower-order half, with a write barrier between the two halves, but on UP there was no particular ordering of the writes to the two halves. This extends the ordering that we already do on SMP to the UP case as well. The reason is that with the perf_counter subsystem potentially accessing user memory at interrupt time to get stack traces, we have to be careful not to create an incorrect but apparently valid PTE even on UP. Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
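A sketch of the ordering rule being extended to UP; the field names are illustrative (the real code writes the two 32-bit words of the PTE directly):

    /* Write the word that does NOT carry the present/valid bits first,
     * then a write barrier, then the word that does.  That way an
     * interrupt-time walker (e.g. a perf_counter stack dump) can never
     * see a PTE that looks valid but is only half written. */
    ptep->high = (u32)(new_pte >> 32);
    wmb();
    ptep->low  = (u32)new_pte;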
2009-08-10  Merge branch 'perfcounters-fixes-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (27 commits) perf_counter: Zero dead bytes from ftrace raw samples size alignment perf_counter: Subtract the buffer size field from the event record size perf_counter: Require CAP_SYS_ADMIN for raw tracepoint data perf_counter: Correct PERF_SAMPLE_RAW output perf tools: callchain: Fix bad rounding of minimum rate perf_counter tools: Fix libbfd detection for systems with libz dependency perf: "Longum est iter per praecepta, breve et efficax per exempla" perf_counter: Fix a race on perf_counter_ctx perf_counter: Fix tracepoint sampling to be part of generic sampling perf_counter: Work around gcc warning by initializing tracepoint record unconditionally perf tools: callchain: Fix sum of percentages to be 100% by displaying amount of ignored chains in fractal mode perf tools: callchain: Fix 'perf report' display to be callchain by default perf tools: callchain: Fix spurious 'perf report' warnings: ignore empty callchains perf record: Fix the -A UI for empty or non-existent perf.data perf util: Fix do_read() to fail on EOF instead of busy-looping perf list: Fix the output to not include tracepoints without an id perf_counter/powerpc: Fix oops on cpus without perf_counter hardware support perf stat: Fix tool option consistency: rename -S/--scale to -c/--scale perf report: Add debug help for the finding of symbol bugs - show the symtab origin (DSO, build-id, kernel, etc) perf report: Fix per task mult-counter stat reporting ...
2009-08-10  powerpc/dma: pci_set_dma_mask() shouldn't fail if mask fits in RAM  (Benjamin Herrenschmidt)
On an iMac G5, the b43 driver is failing to initialise because trying to set the dma mask to 30-bit fails. Even though there's only 512MiB of RAM in the machine anyway: https://bugzilla.redhat.com/show_bug.cgi?id=514787 We should probably let it succeed if the available RAM in the system doesn't exceed the requested limit. Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
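A sketch of the test implied above; top_of_ram() stands in for whatever helper returns the highest RAM address (lmb_end_of_DRAM() in kernels of this era) and is an assumption, not the actual patch:

    /* Let dma_set_mask()/pci_set_dma_mask() accept a "small" mask as long
     * as every byte of RAM is still addressable under that mask. */
    static int dma_mask_fits_ram(u64 dma_mask)
    {
        return (top_of_ram() - 1) <= dma_mask;
    }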
2009-08-09  Merge branch 'kvm-updates/2.6.31' of git://git.kernel.org/pub/scm/virt/kvm/kvm  (Linus Torvalds)
* 'kvm-updates/2.6.31' of git://git.kernel.org/pub/scm/virt/kvm/kvm: KVM: Avoid redelivery of edge interrupt before next edge KVM: MMU: limit rmap chain length KVM: ia64: fix build failures due to ia64/unsigned long mismatches KVM: Make KVM_HPAGES_PER_HPAGE unsigned long to avoid build error on powerpc KVM: fix ack not being delivered when msi present KVM: s390: fix wait_queue handling KVM: VMX: Fix locking imbalance on emulation failure KVM: VMX: Fix locking order in handle_invalid_guest_state KVM: MMU: handle n_free_mmu_pages > n_alloc_mmu_pages in kvm_mmu_change_mmu_pages KVM: SVM: force new asid on vcpu migration KVM: x86: verify MTRR/PAT validity KVM: PIT: fix kpit_elapsed division by zero KVM: Fix KVM_GET_MSR_INDEX_LIST
2009-08-09  perf_counter/powerpc: Fix oops on cpus without perf_counter hardware support  (Paul Mackerras)
If we have the powerpc perf_counter backend compiled in, but the cpu we are running on is one where we don't support the PMU, we currently oops in hw_perf_group_sched_in if we try to use any counters, because ppmu is NULL in that case, and we unconditionally dereference ppmu. This fixes the problem by adding a check if ppmu is NULL at the beginning of hw_perf_group_sched_in, and also at the beginning of the other functions that get called from the perf_counter core, i.e. hw_perf_disable, hw_perf_enable, and hw_perf_counter_setup. Signed-off-by: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: benh@kernel.crashing.org Signed-off-by: Ingo Molnar <mingo@elte.hu>
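The shape of the fix, as described above, is an early return in each entry point called from the perf_counter core; a sketch:

    void hw_perf_disable(void)
    {
        if (!ppmu)
            return;    /* no PMU backend for this CPU */
        /* ... existing code ... */
    }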
2009-08-06  perf_counter/powerpc: Check oprofile_cpu_type for NULL before using it  (Benjamin Herrenschmidt)
If the current CPU doesn't support performance counters, cur_cpu_spec->oprofile_cpu_type can be NULL. The current perf_counter modules don't test for that case and would thus crash at boot time. Bug reported by David Woodhouse. Reported-by: David Woodhouse <dwmw2@infradead.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Paul Mackerras <paulus@samba.org> LKML-Reference: <19066.48028.446975.501454@cargo.ozlabs.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-08-05  KVM: Make KVM_HPAGES_PER_HPAGE unsigned long to avoid build error on powerpc  (Stephen Rothwell)
Eliminates this build error: arch/powerpc/kvm/../../../virt/kvm/kvm_main.c:1178: error: integer overflow in expression Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Avi Kivity <avi@redhat.com>
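A sketch of the kind of one-character change involved; the exact shift expression in the header is not shown in the log, so HPAGE_ORDER here is a placeholder:

    /* before: the literal 1 is an int, so the whole expression is
     * evaluated (and can overflow) in 32-bit signed arithmetic */
    #define KVM_HPAGES_PER_HPAGE    (1 << HPAGE_ORDER)

    /* after: 1UL forces unsigned long arithmetic */
    #define KVM_HPAGES_PER_HPAGE    (1UL << HPAGE_ORDER)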
2009-07-29  powerpc: Update defconfigs for embedded 6xx/7xxx, 8xx, 8{3,5,6}xxx  (Kumar Gala)
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-07-29  powerpc/86xx: Update GE Fanuc sbc310 default configuration  (Martyn Welch)
General update of defconfig including the following notable changes: - Enable Highmem support. - Support for PCMCIA based daughter card. Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-07-29  powerpc/86xx: Update defconfig for GE Fanuc's PPC9A  (Martyn Welch)
General update of defconfig including the following notable changes: - Enable GPIO access via sysfs on GE Fanuc's PPC9A. - Enable Highmem support. - Support for PCMCIA based daughter card. Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-07-29  powerpc/83xx: Fix PCI IO base address on MPC837xE-RDB boards  (Anton Vorontsov)
U-Boot maps PCI IO at 0xe0300000, while current dts files specify 0xe2000000. This leads to the following oops with CONFIG_8139TOO_PIO=y. 8139too Fast Ethernet driver 0.9.28 Machine check in kernel mode. Caused by (from SRR1=41000): Transfer error ack signal Oops: Machine check, sig: 7 [#1] MPC837x RDB [...] NIP [00000900] 0x900 LR [c0439df8] rtl8139_init_board+0x238/0x524 Call Trace: [cf831d90] [c0439dcc] rtl8139_init_board+0x20c/0x524 (unreliable) [cf831de0] [c043a15c] rtl8139_init_one+0x78/0x65c [cf831e40] [c0235250] pci_call_probe+0x20/0x30 [...] This patch fixes the issue by specifying the correct PCI IO base address. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-07-29  powerpc/85xx: Don't scan for TBI PHY addresses on MPC8569E-MDS boards  (Anton Vorontsov)
Sometimes (e.g. when there are no UEMs attached to a board) fsl_pq_mdio_find_free() fails to find a spare address for a TBI PHY, this is because get_phy_id() returns bogus 0x0000ffff values (0xffffffff is expected), and therefore mdio bus probing fails with the following message: fsl-pq_mdio: probe of e0082120.mdio failed with error -16 And obviously ethernet doesn't work after this. This patch solves the problem by adding tbi-phy node into mdio node, so that we won't scan for spare addresses, we'll just use a fixed one. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-07-29  powerpc/85xx: Fix ethernet link detection on MPC8569E-MDS boards  (Anton Vorontsov)
Linux isn't able to detect link changes on ethernet ports that were used by U-Boot. This is because U-Boot wrongly clears interrupt polarity bit (INTPOL, 0x400) in the extended status register (EXT_SR, 0x1b) of Marvell PHYs. There is no easy way for PHY drivers to know IRQ line polarity (we could extract it from the device tree and pass it to phydevs, but that'll be quite a lot of work), so for now just reset the PHYs to their default states. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-07-29  powerpc/mm: Fix SMP issue with MMU context handling code  (Kumar Gala)
In switch_mmu_context(), if we call steal_context_smp() to get a context to use, we shouldn't fall through and then call steal_context_up(). Doing so can be problematic in that the 'mm' that steal_context_up() ends up using will not get marked dirty in the stale_map[] for other CPUs that might have used that mm. Thus we could end up with stale TLB entries on the other CPUs, which can cause all kinds of havoc. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
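A sketch of the intended control flow, reconstructed from the description above rather than copied from the patch (the 'again' retry label is assumed):

    if (id == MMU_NO_CONTEXT) {
    #ifdef CONFIG_SMP
        if (num_online_cpus() > 1) {
            id = steal_context_smp(id);
            if (id == MMU_NO_CONTEXT)
                goto again;    /* retry; do not fall into the UP path */
        } else
    #endif
            id = steal_context_up(id);
    }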
2009-07-28  powerpc: remove unused swiotlb_phys_to_bus() and swiotlb_bus_to_phys()  (FUJITA Tomonori)
phys_to_dma() and dma_to_phys() are used instead of swiotlb_phys_to_bus() and swiotlb_bus_to_phys(). Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-07-28  x86, IA64, powerpc: add phys_to_dma() and dma_to_phys()  (FUJITA Tomonori)
This adds two functions, phys_to_dma() and dma_to_phys() to x86, IA64 and powerpc. swiotlb uses them. phys_to_dma() converts a physical address to a dma address. dma_to_phys() does the opposite. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-07-28  powerpc: remove unnecessary swiotlb_arch_address_needs_mapping  (FUJITA Tomonori)
swiotlb doesn't use swiotlb_arch_address_needs_mapping(); it uses dma_capable(), so we can remove the unnecessary swiotlb_arch_address_needs_mapping(). We can also remove swiotlb_addr_needs_map() and is_buffer_dma_capable() in swiotlb_pci_addr_needs_map(); dma_capable() handles the checks that both provide. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-07-28  powerpc: add dma_capable() to replace is_buffer_dma_capable()  (FUJITA Tomonori)
dma_capable() eventually replaces is_buffer_dma_capable(), which tells if a memory area is dma-capable or not. The problem of is_buffer_dma_capable() is that it doesn't take a pointer to struct device so it doesn't work for POWERPC. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
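The usual shape of such a helper; this is a plausible generic sketch, not necessarily the powerpc implementation added by this commit:

    static inline bool dma_capable(struct device *dev, dma_addr_t addr,
                                   size_t size)
    {
        if (!dev->dma_mask)
            return false;
        /* the whole buffer must fall under the device's DMA mask */
        return addr + size <= *dev->dma_mask;
    }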
2009-07-28  swiotlb: remove unnecessary swiotlb_bus_to_virt  (FUJITA Tomonori)
swiotlb_bus_to_virt is unnecessary; we can use swiotlb_bus_to_phys and phys_to_virt instead. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Acked-by: Becky Bruce <beckyb@kernel.crashing.org>
2009-07-27  mm: Pass virtual address to [__]p{te,ud,md}_free_tlb()  (Benjamin Herrenschmidt)
mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() Upcoming patches to support the new 64-bit "BookE" powerpc architecture will need the virtual address corresponding to a PTE page when freeing it, due to the way the HW table walker works. Basically, the TLB can be loaded with "large" pages that cover the whole virtual space (well, sort-of, half of it actually) represented by a PTE page, and which contain an "indirect" bit indicating that this TLB entry RPN points to an array of PTEs from which the TLB can then create direct entries. Thus, in order to invalidate those when PTE pages are deleted, we need the virtual address to pass to tlbilx or tlbivax instructions. The old trick of sticking it somewhere in the PTE page struct page sucks too much; the address is almost readily available at all call sites, and almost everybody implements these as macros, so we may as well add the argument everywhere. I added it to the pmd and pud variants for consistency. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV] Acked-by: Nick Piggin <npiggin@suse.de> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
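An illustration of the interface change; the macro body shown is the trivial generic case (architectures that need the address, like 64-bit BookE, actually use it):

    /* before */
    #define __pte_free_tlb(tlb, pte)          pte_free((tlb)->mm, (pte))

    /* after: callers also pass the virtual address that the PTE page
     * mapped; implementations that don't need it simply ignore it */
    #define __pte_free_tlb(tlb, pte, address) pte_free((tlb)->mm, (pte))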
2009-07-15  powerpc: Fix another bug in move of altivec code to vector.S  (Andreas Schwab)
When load_up_altivec was moved to vector.S, a typo in a comment led to a thinko that set the wrong variable. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-15  powerpc: Fix booke user_disable_single_step()  (Dave Kleikamp)
On BookE processors, gdb sees spurious SIGTRAPs when setting a watchpoint. user_disable_single_step() simply quits when the DAC is non-zero. It should instead clear the DBCR0_IC and DBCR0_BT bits in the dbcr0 register and clear TIF_SINGLESTEP from the thread flags. Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
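A sketch of the fix implied by the description (exact placement within user_disable_single_step() is an assumption):

    /* clear the instruction-completion and branch-taken debug events,
     * and drop the single-step thread flag */
    task->thread.dbcr0 &= ~(DBCR0_IC | DBCR0_BT);
    clear_tsk_thread_flag(task, TIF_SINGLESTEP);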
2009-07-12  headers: smp_lock.h redux  (Alexey Dobriyan)
* Remove smp_lock.h from files which don't need it (including some headers!) * Add smp_lock.h to files which do need it * Make smp_lock.h include conditional in hardirq.h It's needed only for one kernel_locked() usage which is under CONFIG_PREEMPT This will make hardirq.h inclusion cheaper for every PREEMPT=n config (which includes allmodconfig/allyesconfig, BTW) Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
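A sketch of the conditional include described above; the exact guard used in hardirq.h may differ:

    /* linux/hardirq.h: smp_lock.h is only needed for kernel_locked(),
     * which is only referenced under CONFIG_PREEMPT */
    #ifdef CONFIG_PREEMPT
    # include <linux/smp_lock.h>
    #endif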
2009-07-10  Merge branch 'perfcounters-fixes-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (50 commits) perf report: Add "Fractal" mode output - support callchains with relative overhead rate perf_counter tools: callchains: Manage the cumul hits on the fly perf report: Change default callchain parameters perf report: Use a modifiable string for default callchain options perf report: Warn on callchain output request from non-callchain file x86: atomic64: Inline atomic64_read() again x86: atomic64: Clean up atomic64_sub_and_test() and atomic64_add_negative() x86: atomic64: Improve atomic64_xchg() x86: atomic64: Export APIs to modules x86: atomic64: Improve atomic64_read() x86: atomic64: Code atomic(64)_read and atomic(64)_set in C not CPP x86: atomic64: Fix unclean type use in atomic64_xchg() x86: atomic64: Make atomic_read() type-safe x86: atomic64: Reduce size of functions x86: atomic64: Improve atomic64_add_return() x86: atomic64: Improve cmpxchg8b() x86: atomic64: Improve atomic64_read() x86: atomic64: Move the 32-bit atomic64_t implementation to a .c file x86: atomic64: The atomic64_t data type should be 8 bytes aligned on 32-bit too perf report: Annotate variable initialization ...
2009-07-10  sched: INIT_PREEMPT_COUNT  (Peter Zijlstra)
Pull the initial preempt_count value into a single definition site. Maintainers for: alpha, ia64 and m68k, please have a look, your arch code is funny. The header magic is a bit odd, but similar to the KERNEL_DS one, CPP waits with expanding these macros until the INIT_THREAD_INFO macro itself is expanded, which is in arch/*/kernel/init_task.c where we've already included sched.h so we're good. Cc: tony.luck@intel.com Cc: rth@twiddle.net Cc: geert@linux-m68k.org Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Matt Mackall <mpm@selenic.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
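A sketch of what the single definition site looks like; the exact value is an assumption based on the description, not quoted from the patch:

    /* <linux/sched.h>: initial preempt_count for a new task; preemption
     * stays disabled until the scheduler has set the task up */
    #define INIT_PREEMPT_COUNT    (1 + PREEMPT_ACTIVE)

    /* each arch's INIT_THREAD_INFO then simply uses: */
    .preempt_count = INIT_PREEMPT_COUNT,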
2009-07-08  powerpc: Don't use alloc_bootmem() in init_IRQ() path  (Anton Vorontsov)
This patch fixes various badnesses like this for all interrupt controllers: ------------[ cut here ]------------ Badness at c04db9dc [verbose debug info unavailable] NIP: c04db9dc LR: c04db9ac CTR: 00000000 REGS: c053de30 TRAP: 0700 Not tainted (2.6.31-rc1-00432-ge69b2b5-dirty) MSR: 00021000 <ME,CE> CR: 22020084 XER: 00000000 TASK = c0500480[0] 'swapper' THREAD: c053c000 GPR00: 00000001 c053dee0 c0500480 00000000 00000050 00000020 3fffffff 00000000 GPR08: 00000001 c0540000 e0080080 00000000 22000084 64183600 3ff8f800 00000000 GPR16: 841b0240 449a0303 00000000 00000000 00000000 00000000 00000000 c04f5bf4 GPR24: 00000000 00000000 00000000 00000050 00000020 00000000 3fffffff 00000050 NIP [c04db9dc] alloc_arch_preferred_bootmem+0x48/0x74 LR [c04db9ac] alloc_arch_preferred_bootmem+0x18/0x74 Call Trace: [c053dee0] [c000a5a4] __of_address_to_resource+0x44/0xd0 (unreliable) [c053def0] [c04dba58] ___alloc_bootmem_nopanic+0x50/0x108 [c053df20] [c04dbb28] ___alloc_bootmem+0x18/0x50 [c053df30] [c04d5de0] qe_ic_init+0x5c/0x1b0 [c053df70] [c04d77b0] mpc85xx_mds_pic_init+0xb8/0x10c [c053dfb0] [c04cf374] init_IRQ+0x28/0x3c p.s. commit 85355bb272db31a3f2dd99d547eef794805e1319 ("powerpc: Fix mpic alloc warning") missed some alloc_bootmem() instances, this is now fixed. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Acked-by: Timur Tabi <timur@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc: Fix spin_event_timeout() to be robust over context switches  (Grant Likely)
The current implementation of spin_event_timeout() can be interrupted by an IRQ or a context switch after testing the condition but before checking the timeout. This can cause the loop to report a timeout even though the condition actually became true in the middle. This patch adds one final check of the condition on exit from the loop if the last test of the condition was still false. Signed-off-by: Grant Likely <grant.likely@secretlab.ca> Acked-by: Timur Tabi <timur@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
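A sketch of the idea (not the real macro in the powerpc headers): after the loop exits, test the condition one last time, so that being preempted between the condition test and the timeout test cannot turn a success into a reported timeout. wait_for_bit() and its arguments are made up for illustration:

    static int wait_for_bit(volatile u32 __iomem *reg, u32 bit,
                            unsigned long loops)
    {
        while (!(in_be32(reg) & bit) && loops--)
            cpu_relax();
        /* final check: the condition may have become true just as the
         * timeout expired */
        return (in_be32(reg) & bit) != 0;
    }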
2009-07-08  powerpc: Use pr_devel() in do_dcache_icache_coherency()  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 2036 368 8 2412 96c arch/powerpc/mm/pgtable.o size after: text data bss dec hex filename 1677 248 8 1933 78d arch/powerpc/mm/pgtable.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
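Roughly how the two macros differ (a sketch of the kernel.h definitions of that era): pr_debug() can be patched in at runtime when CONFIG_DYNAMIC_DEBUG=y, so its call site and format string are always built in, while pr_devel() compiles to nothing unless the file defines DEBUG:

    #ifdef DEBUG
    #define pr_devel(fmt, ...) \
        printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__)
    #else
    #define pr_devel(fmt, ...) \
        ({ if (0) printk(KERN_DEBUG pr_fmt(fmt), ##__VA_ARGS__); 0; })
    #endif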
2009-07-08  powerpc/cell: Use pr_devel() in axon_msi.c  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 7083 1616 0 8699 21fb arch/powerpc/../axon_msi.o size after: text data bss dec hex filename 5772 1208 0 6980 1b44 arch/powerpc/../axon_msi.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc: Use pr_devel() in arch/powerpc/mm/gup.c  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 3252 384 0 3636 e34 arch/powerpc/mm/gup.o size after: text data bss dec hex filename 2576 96 0 2672 a70 arch/powerpc/mm/gup.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc: Cleanup & use pr_devel() in arch/powerpc/mm/slb.c  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 3261 416 4 3681 e61 arch/powerpc/mm/slb.o size after: text data bss dec hex filename 2861 248 4 3113 c29 arch/powerpc/mm/slb.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc/perf_counter: Remove duplicated #include  (Huang Weiyi)
Remove duplicated #includes in arch/powerpc/kernel/mpc7450-pmu.c and arch/powerpc/kernel/ppc970-pmu.c. Signed-off-by: Huang Weiyi <weiyi.huang@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc: Use pr_devel() in arch/powerpc/mm/mmu_context_nohash.c  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 1508 48 28 1584 630 powerpc/mm/mmu_context_nohash.o size after: text data bss dec hex filename 1088 0 28 1116 45c powerpc/mm/mmu_context_nohash.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc/pseries: Use pr_devel() in xics.c  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 7720 5488 296 13504 34c0 platforms/pseries/xics.o size after: text data bss dec hex filename 7535 5456 296 13287 33e7 platforms/pseries/xics.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc: Remove unnecessary semicolons  (Joe Perches)
Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  powerpc/pseries: Use pr_devel() in pseries LPAR HPTE routines  (Michael Ellerman)
pr_debug() can now result in code being generated even when DEBUG is not defined. That's not really desirable in some places. In particular, pSeries_lpar_hpte_insert() goes from 185 instructions to 77 instructions as a result of this patch. Luckily that code isn't called very often ... With CONFIG_DYNAMIC_DEBUG=y: size before: text data bss dec hex filename 7284 1552 296 9132 23ac platforms/pseries/lpar.o size after: text data bss dec hex filename 5806 1096 296 7198 1c1e platforms/pseries/lpar.o Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-07-08  Merge commit 'jwb/merge' into merge  (Benjamin Herrenschmidt)
2009-07-05  powerpc/44x: Fix build error with -Werror for Warp platform  (Josh Boyer)
With -Werror enabled during the build, the warp.c file fails to build due to the temp_isr function not containing a return statement. This fixes the build error and documents that the function never returns. Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>