path: root/arch/powerpc
Age | Commit message | Author
2012-03-05 | KVM: PPC: Only get pages when actually needed, not in prepare_memory_region() | Paul Mackerras
This removes the code from kvmppc_core_prepare_memory_region() that looked up the VMA for the region being added and called hva_to_page to get the pfns for the memory. We have no guarantee that there will be anything mapped there at the time of the KVM_SET_USER_MEMORY_REGION ioctl call; userspace can do that ioctl and then map memory into the region later.

Instead we defer looking up the pfn for each memory page until it is needed, which generally means when the guest does an H_ENTER hcall on the page. Since we can't call get_user_pages in real mode, if we don't already have the pfn for the page, kvmppc_h_enter() will return H_TOO_HARD and we then call kvmppc_virtmode_h_enter() once we get back to kernel context. That calls kvmppc_get_guest_page() to get the pfn for the page, and then calls back to kvmppc_h_enter() to redo the HPTE insertion.

When the first vcpu starts executing, we need to have the RMO or VRMA region mapped so that the guest's real mode accesses will work. Thus we now have a check in kvmppc_vcpu_run() to see if the RMO/VRMA is set up and if not, call kvmppc_hv_setup_rma(). It checks if the memslot starting at guest physical 0 now has RMO memory mapped there; if so it sets it up for the guest, otherwise on POWER7 it sets up the VRMA. The function that does that, kvmppc_map_vrma, is now a bit simpler, as it calls kvmppc_virtmode_h_enter instead of creating the HPTE itself.

Since we are now potentially updating entries in the slot_phys[] arrays from multiple vcpu threads, we now have a spinlock protecting those updates to ensure that we don't lose track of any references to pages.

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
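To make the flow concrete, here is a toy model of the bail-out-and-retry pattern the commit describes. It is a sketch only: the pfn cache, helper names and printed output are invented stand-ins; just the H_TOO_HARD bail-out from real mode and the virtual-mode retry mirror the text above.

    /* Toy model of the bail-out-and-retry pattern (not the actual kernel code). */
    #include <stdio.h>

    #define H_SUCCESS   0
    #define H_TOO_HARD  9999           /* PAPR "retry in virtual mode" value */
    #define NR_GFNS     16

    static long pfn_cache[NR_GFNS];    /* 0 = translation not yet resolved */

    /* Real-mode handler: may only use already-cached translations. */
    static long realmode_h_enter(unsigned long gfn)
    {
            if (!pfn_cache[gfn])
                    return H_TOO_HARD;          /* can't call get_user_pages() here */
            printf("HPTE inserted for gfn %lu -> pfn %ld\n", gfn, pfn_cache[gfn]);
            return H_SUCCESS;
    }

    /* Virtual-mode wrapper: resolve the pfn (get_user_pages() in the kernel),
     * then redo the insertion. */
    static long virtmode_h_enter(unsigned long gfn)
    {
            pfn_cache[gfn] = 0x1000 + gfn;      /* stand-in for a real lookup */
            return realmode_h_enter(gfn);
    }

    int main(void)
    {
            unsigned long gfn = 3;

            if (realmode_h_enter(gfn) == H_TOO_HARD)
                    virtmode_h_enter(gfn);      /* back in kernel context */
            return 0;
    }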
2012-03-05 | KVM: PPC: Make the H_ENTER hcall more reliable | Paul Mackerras
At present, our implementation of H_ENTER only makes one try at locking each slot that it looks at, and doesn't even retry the ldarx/stdcx. atomic update sequence that it uses to attempt to lock the slot. Thus it can return the H_PTEG_FULL error unnecessarily, particularly when the H_EXACT flag is set, meaning that the caller wants a specific PTEG slot.

This improves the situation by making a second pass when no free HPTE slot is found, where we spin until we succeed in locking each slot in turn and then check whether it is full while we hold the lock. If the second pass fails, then we return H_PTEG_FULL.

This also moves lock_hpte to a header file (since later commits in this series will need to use it from other source files) and renames it to try_lock_hpte, which is a somewhat less misleading name.

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
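The two-pass search described above can be sketched with ordinary C11 atomics. This is a simplified model of a PTEG's eight slots, not the kernel's HPTE layout or the actual try_lock_hpte (which uses an ldarx/stdcx. loop on a lock bit in the HPTE itself):

    /* Sketch of the two-pass search over the 8 slots of a PTEG:
     * pass 1 makes one try-lock attempt per slot, pass 2 spins on each
     * slot in turn before declaring the group full. */
    #include <stdatomic.h>
    #include <stdbool.h>

    #define SLOTS_PER_PTEG 8

    struct slot {
            atomic_flag lock;
            bool        valid;      /* slot already holds an HPTE */
    };

    /* Returns the index of a locked free slot, or -1 meaning H_PTEG_FULL. */
    static int find_free_slot(struct slot pteg[SLOTS_PER_PTEG])
    {
            /* Pass 1: one try-lock attempt per slot. */
            for (int i = 0; i < SLOTS_PER_PTEG; i++) {
                    if (atomic_flag_test_and_set(&pteg[i].lock))
                            continue;               /* busy, don't retry yet */
                    if (!pteg[i].valid)
                            return i;               /* locked and free */
                    atomic_flag_clear(&pteg[i].lock);
            }
            /* Pass 2: spin until each slot is locked, then check it. */
            for (int i = 0; i < SLOTS_PER_PTEG; i++) {
                    while (atomic_flag_test_and_set(&pteg[i].lock))
                            ;                       /* spin */
                    if (!pteg[i].valid)
                            return i;
                    atomic_flag_clear(&pteg[i].lock);
            }
            return -1;                              /* genuinely full */
    }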
2012-03-05 | KVM: PPC: Add an interface for pinning guest pages in Book3s HV guests | Paul Mackerras
This adds two new functions, kvmppc_pin_guest_page() and kvmppc_unpin_guest_page(), and uses them to pin the guest pages where the guest has registered areas of memory for the hypervisor to update (i.e. the per-cpu virtual processor areas, SLB shadow buffers and dispatch trace logs) and then unpin them when they are no longer required. Although it is not strictly necessary to pin the pages at this point, since all guest pages are already pinned, later commits in this series will mean that guest pages aren't all pinned. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Keep page physical addresses in per-slot arrays | Paul Mackerras
This allocates an array for each memory slot that is added to store the physical addresses of the pages in the slot. This array is vmalloc'd and accessed in kvmppc_h_enter using real_vmalloc_addr(). This allows us to remove the ram_pginfo field from the kvm_arch struct, and removes the 64GB guest RAM limit that we had. We use the low-order bits of the array entries to store a flag indicating that we have done get_page on the corresponding page, and therefore need to call put_page when we are finished with the page. Currently this is set for all pages except those in our special RMO regions. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
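The low-order-bit trick relies on the stored physical addresses being page aligned, which leaves the bottom bits free for flags. A minimal sketch (the flag name and layout here are illustrative, not necessarily the exact bits the kernel uses):

    /* Sketch: pack a "we did get_page()" flag into the low bits of a
     * page-aligned physical address stored in the per-slot array. */
    #define PAGE_SHIFT      12
    #define PAGE_MASK       (~((1UL << PAGE_SHIFT) - 1))
    #define GOT_PAGE_FLAG   0x1UL   /* illustrative flag name */

    static unsigned long make_entry(unsigned long phys, int got_page)
    {
            return (phys & PAGE_MASK) | (got_page ? GOT_PAGE_FLAG : 0);
    }

    static unsigned long entry_phys(unsigned long entry)
    {
            return entry & PAGE_MASK;
    }

    static int entry_needs_put_page(unsigned long entry)
    {
            return entry & GOT_PAGE_FLAG;   /* call put_page() when done */
    }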
2012-03-05 | KVM: PPC: Keep a record of HV guest view of hashed page table entries | Paul Mackerras
This adds an array that parallels the guest hashed page table (HPT), that is, it has one entry per HPTE, used to store the guest's view of the second doubleword of the corresponding HPTE. The first doubleword in the HPTE is the same as the guest's idea of it, so we don't need to store a copy, but the second doubleword in the HPTE has the real page number rather than the guest's logical page number. This allows us to remove the back_translate() and reverse_xlate() functions. This "reverse mapping" array is vmalloc'd, meaning that to access it in real mode we have to walk the kernel's page tables explicitly. That is done by the new real_vmalloc_addr() function. (In fact this returns an address in the linear mapping, so the result is usable both in real mode and in virtual mode.) There are also some minor cleanups here: moving the definitions of HPT_ORDER etc. to a header file and defining HPT_NPTE for HPT_NPTEG << 3. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Make wakeups work again for Book3S HV guests | Paul Mackerras
When commit f43fdc15fa ("KVM: PPC: booke: Improve timer register emulation") factored out some code in arch/powerpc/kvm/powerpc.c into a new helper function, kvm_vcpu_kick(), an error crept in which causes Book3s HV guest vcpus to stall. This fixes it. On POWER7 machines, guest vcpus are grouped together into virtual CPU cores that share a single waitqueue, so it's important to use vcpu->arch.wqp rather than &vcpu->wq. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Avoid patching paravirt template code | Liu Yu-B13201
Currently we patch all of the code, including the paravirt template code. This isn't safe for the scratch area and hurts performance. Signed-off-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: use hardware hint when loading TLB0 entries | Scott Wood
The hardware maintains a per-set next victim hint. Using this reduces conflicts, especially on e500v2 where a single guest TLB entry is mapped to two shadow TLB entries (user and kernel). We want those two entries to go to different TLB ways. sesel is now only used for TLB1. Reported-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: Fix TLBnCFG in KVM_CONFIG_TLB | Scott Wood
The associativity, not just total size, can differ from the host hardware. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Book3S: PR: Fix signal check race | Alexander Graf
As Scott put it:

> If we get a signal after the check, we want to be sure that we don't
> receive the reschedule IPI until after we're in the guest, so that it
> will cause another signal check.

We need to have interrupts disabled from the point we do signal_check() all the way until we actually enter the guest. This patch fixes potential signal loss races.

Reported-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: align vcpu_kick with x86 | Alexander Graf
Our vcpu kick implementation differs a bit from x86 which resulted in us not disabling preemption during the kick. Get it a bit closer to what x86 does. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Use get/set for to_svcpu to help preemption | Alexander Graf
When running the 64-bit Book3s PR code without CONFIG_PREEMPT_NONE, we were doing a few things wrong, most notably access to PACA fields without making sure that the pointers stay stable across the access (preempt_disable()). This patch moves to_svcpu towards a get/put model which allows us to disable preemption while accessing the shadow vcpu fields in the PACA. That way we can run preemptible and everyone's happy! Reported-by: Jörg Sommer <joerg@alea.gnuu.de> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
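The shape of the get/put model is simply to bracket every access to the per-CPU shadow state with preempt_disable()/preempt_enable(). A sketch of the pattern, with a hypothetical type and lookup name rather than the real svcpu accessors:

    /* kernel primitives assumed to exist (they are macros in the real kernel) */
    void preempt_disable(void);
    void preempt_enable(void);

    struct my_shadow_vcpu;                              /* placeholder type */
    struct my_shadow_vcpu *this_cpu_shadow_vcpu(void);  /* hypothetical per-CPU lookup */

    /* The caller may be migrated between CPUs unless preemption is off,
     * so pin it for the whole duration of the PACA access. */
    static inline struct my_shadow_vcpu *shadow_vcpu_get(void)
    {
            preempt_disable();              /* per-CPU pointer now stays stable */
            return this_cpu_shadow_vcpu();
    }

    static inline void shadow_vcpu_put(struct my_shadow_vcpu *svcpu)
    {
            preempt_enable();               /* may be migrated again after this */
    }

    /* Usage:
     *      struct my_shadow_vcpu *svcpu = shadow_vcpu_get();
     *      ... read/write svcpu fields ...
     *      shadow_vcpu_put(svcpu);
     */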
2012-03-05 | KVM: PPC: Book3s: PR: No irq_disable in vcpu_run | Alexander Graf
Somewhere during merges we ended up from

  local_irq_enable()
  foo();
  local_irq_disable()

to always keeping irqs enabled during that part. However, we now have the following code:

  foo();
  local_irq_disable()

which disables interrupts without the surrounding code enabling them again! So let's remove that disable and be happy.

Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Book3s: PR: Disable preemption in vcpu_run | Alexander Graf
When entering the guest, we want to make sure we're not getting preempted away, so let's disable preemption on entry, but enable it again while handling guest exits. Reported-by: Jörg Sommer <joerg@alea.gnuu.de> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: booke: Improve timer register emulation | Scott Wood
Decrementers are now properly driven by TCR/TSR, and the guest has full read/write access to these registers. The decrementer keeps ticking (and setting the TSR bit) regardless of whether the interrupts are enabled with TCR. The decrementer stops at zero, rather than going negative. Decrementers (and FITs, once implemented) are delivered as level-triggered interrupts -- dequeued when the TSR bit is cleared, not on delivery. Signed-off-by: Liu Yu <yu.liu@freescale.com> [scottwood@freescale.com: significant changes] Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
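A simplified model of the level-triggered behaviour described above, with locally defined TSR/TCR bits (the bit positions are illustrative; in the kernel they come from reg_booke.h):

    #include <stdint.h>
    #include <stdbool.h>

    #define TSR_DIS (1u << 27)   /* decrementer event pending (illustrative value) */
    #define TCR_DIE (1u << 26)   /* decrementer interrupt enable (illustrative value) */

    struct timer_state { uint32_t tcr, tsr, dec; };

    /* The decrementer keeps ticking and latches TSR[DIS] even if TCR[DIE]
     * is off; it stops at zero instead of going negative. */
    static void dec_tick(struct timer_state *t)
    {
            if (t->dec == 0)
                    return;                 /* stopped at zero */
            if (--t->dec == 0)
                    t->tsr |= TSR_DIS;      /* latch the event */
    }

    /* Level-triggered: the interrupt is asserted for as long as the TSR bit
     * is set and the interrupt is enabled; clearing TSR dequeues it. */
    static bool dec_irq_asserted(const struct timer_state *t)
    {
            return (t->tsr & TSR_DIS) && (t->tcr & TCR_DIE);
    }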
2012-03-05 | KVM: PPC: Paravirtualize SPRG4-7, ESR, PIR, MASn | Scott Wood
This allows additional registers to be accessed by the guest in PR-mode KVM without trapping. SPRG4-7 are readable from userspace. On booke, KVM will sync these registers when it enters the guest, so that accesses from guest userspace will work. The guest kernel, OTOH, must consistently use either the real registers or the shared area between exits. This also applies to the already-paravirted SPRG3.

On non-booke, it's not clear to what extent SPRG4-7 are supported (they're not architected for book3s, but exist on at least some classic chips). They are copied in the get/set regs ioctls, but I do not see any non-booke emulation. I also do not see any syncing with real registers (in PR-mode) including the user-readable SPRG3. This patch should not make that situation any worse.

Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: booke: Paravirtualize wrtee | Scott Wood
Also fix wrteei 1 paravirt to check for a pending interrupt. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: booke: Fix int_pending calculation for MSR[EE] paravirt | Scott Wood
int_pending was only being lowered if a bit in pending_exceptions was cleared during exception delivery -- but for interrupts, we clear it during IACK/TSR emulation. This caused paravirt for enabling MSR[EE] to be ineffective. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: booke: Check for MSR[WE] in prepare_to_enter | Scott Wood
This prevents us from inappropriately blocking in a KVM_SET_REGS ioctl -- the MSR[WE] will take effect when the guest is next entered. It also causes SRR1[WE] to be set when we enter the guest's interrupt handler, which is what e500 hardware is documented to do. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Move prepare_to_enter call site into subarch code | Scott Wood
This function should be called with interrupts disabled, to avoid a race where an exception is delivered after we check, but the resched kick is received before we disable interrupts (and thus doesn't actually trigger the exit code that would recheck exceptions).

booke already does this properly in the lightweight exit case, but not on initial entry. For now, move the call of prepare_to_enter into subarch-specific code so that booke can do the right thing here.

Ideally book3s would do the same thing, but I'm having a hard time seeing where it does any interrupt disabling of this sort (plus it has several additional call sites), so I'm deferring the book3s fix to someone more familiar with that code. book3s behavior should be unchanged by this patch.

Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Rename deliver_interrupts to prepare_to_enter | Scott Wood
This function also updates paravirt int_pending, so rename it to be more obvious that this is a collection of checks run prior to (re)entering a guest. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: booke: check for signals in kvmppc_vcpu_run | Scott Wood
Currently we check prior to returning from a lightweight exit, but not prior to initial entry. book3s already does a similar test. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: booke: Do Not start decrementer when SPRN_DEC set 0 | Bharat Bhushan
As per the specification, the decrementer interrupt does not happen when DEC is written with 0. Also, when DEC is zero, no decrementer is running. So we should not start the hrtimer for the decrementer when DEC = 0. Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: Fix DEC truncation for greater than 0xffff_ffff/1000 | Bharat Bhushan
kvmppc_emulate_dec() uses dec_nsec of type unsigned long and does the calculation below:

  dec_nsec = vcpu->arch.dec;
  dec_nsec *= 1000;

This will truncate if the DEC value "vcpu->arch.dec" is greater than 0xffff_ffff/1000. For example, with tb_ticks_per_usec = 0x4a we cannot set the decrementer to more than ~58ms.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Acked-by: Liu Yu <yu.liu@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
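Widening to a 64-bit type before the multiply is the usual fix for this kind of truncation. A small standalone illustration of the overflow and the corrected arithmetic (the sample DEC value is made up; only the ~58 ms ceiling calculation comes from the commit text):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t dec = 5000000;               /* > 0xffffffff/1000, so it wraps */

            uint32_t bad  = dec * 1000u;          /* 32-bit multiply truncates */
            uint64_t good = (uint64_t)dec * 1000; /* widen first, as in the fix */

            printf("truncated: %u ns\n", bad);                         /* 705032704, wrong */
            printf("correct:   %llu ns\n", (unsigned long long)good);  /* 5000000000 */

            /* With tb_ticks_per_usec = 0x4a (74), 0xffffffff/1000 ticks is about
             * 4294967 / 74 ~= 58040 us, i.e. the ~58 ms ceiling from the commit. */
            return 0;
    }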
2012-03-05 | PPC: Fix race in mtmsr paravirt implementation | Bharat Bhushan
The current implementation of mtmsr and mtmsrd is racy in that it does:

  * check (int_pending == 0)
  ---> host sets int_pending = 1 <---
  * write shared page
  * done

while instead we should check for int_pending after the shared page is written.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
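The fix is purely an ordering change: publish the new MSR value in the shared page first, then sample int_pending, so that a host update between the two steps is still seen. A sketch with C11 atomics standing in for the guest/host shared page (the struct and field names are illustrative):

    #include <stdatomic.h>
    #include <stdint.h>

    struct shared_page {
            _Atomic uint64_t msr;
            _Atomic int      int_pending;
    };

    /* Racy order (before the fix): check int_pending, then write the shared
     * MSR; the host can set int_pending between the two and the event is lost.
     * Fixed order: write the shared MSR first, then check int_pending. */
    static int paravirt_mtmsr(struct shared_page *sp, uint64_t new_msr)
    {
            atomic_store(&sp->msr, new_msr);        /* write shared page first */
            if (atomic_load(&sp->int_pending))      /* check afterwards */
                    return 1;                       /* trap to the host to deliver */
            return 0;                               /* done, no exit needed */
    }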
2012-03-05 | KVM: PPC: E500: Support hugetlbfs | Alexander Graf
With hugetlbfs support emerging on e500, we should also support KVM backing its guest memory with it. This patch adds support for hugetlbfs into the e500 shadow mmu code. Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: Don't hardcode PIR=0 | Scott Wood
The hardcoded behavior prevents proper SMP support. User space shall specify the vcpu's PIR as the vcpu id. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: tlbsx: fix tlb0 esel | Scott Wood
It should contain the way, not the absolute TLB0 index. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: MMU API | Scott Wood
This implements a shared-memory API for giving host userspace access to the guest's TLB. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: clear up confusion between host and guest entries | Scott Wood
Split out the portions of tlbe_priv that should be associated with host entries into tlbe_ref. Base victim selection on the number of hardware entries, not guest entries. For TLB1, where one guest entry can be mapped by multiple host entries, we use the host tlbe_ref for tracking page references. For the guest TLB0 entries, we still track it with gtlb_priv, to avoid having to retranslate if the entry is evicted from the host TLB but not the guest TLB. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: Eliminate preempt_disable in local_sid_destroy_all | Scott Wood
The only place it makes sense to call this function already needs to have preemption disabled. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: PPC: e500: don't translate gfn to pfn with preemption disabled | Scott Wood
Delay allocation of the shadow pid until we're ready to disable preemption and write the entry. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: provide synchronous registers in kvm_run | Christian Borntraeger
On some cpus the overhead for virtualization instructions is in the same range as a system call. Having to call multiple ioctls to get/set registers will make certain userspace-handled exits more expensive than necessary. Let's provide a section in kvm_run that works as a shared save area for guest registers.

We also provide two 64bit flags fields (architecture specific) that will specify:
1. which parts of these fields are valid.
2. which registers were modified by userspace.

Each bit for these flag fields will define a group of registers (like general purpose) or a single register.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
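Conceptually the shared save area is a register block plus two bitmap fields, one saying which groups the kernel filled in and one saying which groups userspace dirtied. A schematic layout (the struct, field and flag names here are illustrative, not the exact kvm_run ABI):

    #include <stdint.h>

    /* Illustrative flag bits: each bit names a register group. */
    #define SYNC_REGS_GPRS  (1ULL << 0)
    #define SYNC_REGS_CRS   (1ULL << 1)

    struct sync_regs_area {
            uint64_t valid;           /* which groups the kernel filled in */
            uint64_t dirty;           /* which groups userspace modified */
            uint64_t gprs[16];        /* the shared register copies */
            uint64_t crs;
    };

    /* Userspace: tweak a GPR without an extra SET_REGS ioctl round trip. */
    static void patch_gpr(struct sync_regs_area *s, int n, uint64_t val)
    {
            s->gprs[n] = val;
            s->dirty  |= SYNC_REGS_GPRS;    /* tell the kernel to load it back */
    }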
2012-03-05 | KVM: s390: ucontrol: export SIE control block to user | Carsten Otte
This patch exports the s390 SIE hardware control block to userspace via the mapping of the vcpu file descriptor. In order to do so, a new arch callback named kvm_arch_vcpu_fault is introduced for all architectures. It allows mapping architecture-specific pages. Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-03-05 | KVM: s390: add parameter for KVM_CREATE_VM | Carsten Otte
This patch introduces a new config option for user controlled kernel virtual machines. It introduces a parameter to KVM_CREATE_VM that allows setting bits that alter the capabilities of the newly created virtual machine. The parameter is passed to kvm_arch_init_vm for all architectures. The only valid modifier bit for now is KVM_VM_S390_UCONTROL. This requires CAP_SYS_ADMIN privileges and creates a user controlled virtual machine on s390 architectures. Signed-off-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-22 | powerpc: Fix various issues with return to userspace | Benjamin Herrenschmidt
We have a few problems when returning to userspace. This is a quick set of fixes for 3.3, I'll look into a more comprehensive rework for 3.4. This fixes:

- We kept interrupts soft-disabled when schedule'ing or calling do_signal when returning to userspace as a result of a hardware interrupt.
- Rename do_signal to do_notify_resume like all other archs (and do_signal_pending back to do_signal, which it was before Roland changed it).
- Add the missing call to key_replace_session_keyring() to do_notify_resume().

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-22 | powerpc: Fix program check handling when lockdep is enabled | Michael Ellerman
In commit 54321242afe ("Disable interrupts early in Program Check"), we switched from enabling to disabling interrupts in program_check_common. Whereas ENABLE_INTS leaves r3 untouched, if lockdep is enabled DISABLE_INTS calls into lockdep code and will clobber r3. That means we pass a bogus struct pt_regs* into program_check_exception() and all hell breaks loose. So load our regs pointer into r3 after we call DISABLE_INTS. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-22 | powerpc: Remove references to cpu_*_map | Rusty Russell
This has been obsolescent for a while; time for the final push. In adjacent context, replaced old cpus_* with cpumask_*. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-16 | powerpc/perf: power_pmu_start restores incorrect values, breaking frequency events | Anton Blanchard
perf on POWER stopped working after commit e050e3f0a71b (perf: Fix broken interrupt rate throttling). That patch exposed a bug in the POWER perf_events code.

Since the PMCs count upwards and take an exception when the top bit is set, we want to write 0x80000000 - left in power_pmu_start. We were instead programming in left which effectively disables the counter until we eventually hit 0x80000000. This could take seconds or longer.

With the patch applied I get the expected number of samples:

  SAMPLE events: 9948

Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: <stable@kernel.org>
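Because the PMC counts upward and raises its exception when the top bit becomes set, programming 0x80000000 - left makes the interrupt arrive after exactly "left" events. A small standalone illustration of the wrong versus correct value (the sample period is made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t left = 100000;               /* events until the next sample */

            uint32_t wrong = left;                /* what the buggy code wrote */
            uint32_t right = 0x80000000u - left;  /* what power_pmu_start should write */

            /* Events needed before the counter reaches 0x80000000 and interrupts: */
            printf("wrong: %u events\n", 0x80000000u - wrong); /* ~2.1 billion */
            printf("right: %u events\n", 0x80000000u - right); /* exactly 100000 */
            return 0;
    }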
2012-02-16 | powerpc: Disable interrupts early in Program Check | Benjamin Herrenschmidt
Program Check exceptions are the result of WARNs, BUGs, some types of breakpoints, kprobes, and other illegal instructions. We want interrupts (and thus preemption) to remain disabled while doing the initial stage of testing the reason and branching off to a debugger or kprobe, so we are still on the original CPU, which makes debugging easier in various cases.

This is how the code was intended, hence the local_irq_enable() right in the middle of program_check_exception(). However, the assembly exception prologue for that exception was incorrectly marked as enabling interrupts, which defeats that (and records a redundant enable with lockdep).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-16 | powerpc: Remove legacy iSeries from ppc64_defconfig | Stephen Rothwell
Since we are heading towards removing the Legacy iSeries platform, start by no longer building it for ppc64_defconfig. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-16 | powerpc/fsl/pci: Fix PCIe fixup regression | Benjamin Herrenschmidt
Upstream changes to the way PHB resources are registered broke the resource fixup for FSL boards. We can no longer rely on the resource pointer array for the PHB's pci_bus structure, so let's leave it alone and go straight for the PHB resources instead. This also makes the code generally more readable. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-16 | powerpc: Fix kernel log of oops/panic instruction dump | Ira Snyder
A kernel oops/panic prints an instruction dump showing several instructions before and after the instruction which caused the oops/panic. The code intended that the faulting instruction be enclosed in angle brackets, however a bug caused the faulting instruction to be interpreted by printk() as the message log level. To fix this, the KERN_CONT log level is added before the actual text of the printed message.

=== Before the patch ===
[ 1081.587266] Instruction dump:
[ 1081.590236] 7c000110 7c0000f8 5400077c 552907f6 7d290378 992b0003 4e800020 38000001
[ 1081.598034] 3d20c03a 9009a114 7c0004ac 39200000
[ 1081.602500] 4e800020 3803ffd0 2b800009
<4>[ 1081.587266] Instruction dump:
<4>[ 1081.590236] 7c000110 7c0000f8 5400077c 552907f6 7d290378 992b0003 4e800020 38000001
<4>[ 1081.598034] 3d20c03a 9009a114 7c0004ac 39200000
<98090000>[ 1081.602500] 4e800020 3803ffd0 2b800009

=== After the patch ===
[ 51.385216] Instruction dump:
[ 51.388186] 7c000110 7c0000f8 5400077c 552907f6 7d290378 992b0003 4e800020 38000001
[ 51.395986] 3d20c03a 9009a114 7c0004ac 39200000 <98090000> 4e800020 3803ffd0 2b800009
<4>[ 51.385216] Instruction dump:
<4>[ 51.388186] 7c000110 7c0000f8 5400077c 552907f6 7d290378 992b0003 4e800020 38000001
<4>[ 51.395986] 3d20c03a 9009a114 7c0004ac 39200000 <98090000> 4e800020 3803ffd0 2b800009

Signed-off-by: Ira W. Snyder <iws@ovro.caltech.edu> Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
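The fix leans on printk()'s KERN_CONT marker: a continuation fragment prefixed with KERN_CONT is appended to the current line instead of being parsed for a log level, so a fragment that happens to start with "<98090000>" is printed literally. A rough sketch of the pattern, not the kernel's actual dump code:

    #include <linux/printk.h>

    /* Every continuation fragment gets KERN_CONT so that a fragment beginning
     * with "<98090000>" is printed literally instead of being taken as a
     * printk log level (which split the line in the "before" output above). */
    static void dump_instructions(const unsigned int *instrs, int n, int fault_idx)
    {
            printk(KERN_INFO "Instruction dump:");
            for (int i = 0; i < n; i++) {
                    if (i % 8 == 0)
                            printk(KERN_CONT "\n");
                    if (i == fault_idx)
                            printk(KERN_CONT "<%08x> ", instrs[i]);
                    else
                            printk(KERN_CONT "%08x ", instrs[i]);
            }
            printk(KERN_CONT "\n");
    }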
2012-02-14 | powerpc/pseries/eeh: Fix crash when error happens during device probe | Thadeu Lima de Souza Cascardo
EEH may happen during a PCI driver probe. If the driver is trying to access some register in a loop, the EEH code will try to print the driver name. But the driver pointer in struct pci_dev is not set until probe returns successfully. Use a function to test whether the device and the driver pointer are NULL before accessing the driver's name. Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-14 | powerpc/pseries: Fix partition migration hang in stop_topology_update | Brian King
This fixes a hang that was observed during live partition migration. Since stop_topology_update must not be called from an interrupt context, call it earlier in the migration process. The hang observed can be seen below:

WARNING: at kernel/timer.c:1011
Modules linked in: ip6t_LOG xt_tcpudp xt_pkttype ipt_LOG xt_limit ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_raw xt_NOTRACK ipt_REJECT xt_state iptable_raw iptable_filter ip6table_mangle nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables ip6table_filter ip6_tables x_tables ipv6 fuse loop ibmveth sg ext3 jbd mbcache raid456 async_raid6_recov async_pq raid6_pq async_xor xor async_memcpy async_tx raid10 raid1 raid0 scsi_dh_alua scsi_dh_rdac scsi_dh_hp_sw scsi_dh_emc dm_round_robin dm_multipath scsi_dh sd_mod crc_t10dif ibmvfc scsi_transport_fc scsi_tgt scsi_mod dm_snapshot dm_mod
NIP: c0000000000c52d8 LR: c00000000004be28 CTR: 0000000000000000
REGS: c00000005ffd77d0 TRAP: 0700 Not tainted (3.2.0-git-00001-g07d106d)
MSR: 8000000000021032 <ME,CE,IR,DR> CR: 48000084 XER: 00000001
CFAR: c00000000004be20
TASK = c00000005ec78860[0] 'swapper/3' THREAD: c00000005ec98000 CPU: 3
GPR00: 0000000000000001 c00000005ffd7a50 c000000000fbbc98 c000000000ec8340
GPR04: 00000000282a0020 0000000000000000 0000000000004000 0000000000000101
GPR08: 0000000000000012 c00000005ffd4000 0000000000000020 c000000000f3ba88
GPR12: 0000000000000000 c000000007f40900 0000000000000001 0000000000000004
GPR16: 0000000000000001 0000000000000000 0000000000000000 c000000001022310
GPR20: 0000000000000001 0000000000000000 0000000000200200 c000000001029e14
GPR24: 0000000000000000 0000000000000001 0000000000000040 c00000003f74bc80
GPR28: c00000003f74bc84 c000000000f38038 c000000000f16b58 c000000000ec8340
NIP [c0000000000c52d8] .del_timer_sync+0x28/0x60
LR [c00000000004be28] .stop_topology_update+0x20/0x38
Call Trace:
[c00000005ffd7a50] [c00000005ec78860] 0xc00000005ec78860 (unreliable)
[c00000005ffd7ad0] [c00000000004be28] .stop_topology_update+0x20/0x38
[c00000005ffd7b40] [c000000000028378] .__rtas_suspend_last_cpu+0x58/0x260
[c00000005ffd7bf0] [c0000000000fa230] .generic_smp_call_function_interrupt+0x160/0x358
[c00000005ffd7cf0] [c000000000036ec8] .smp_ipi_demux+0x88/0x100
[c00000005ffd7d80] [c00000000005c154] .icp_hv_ipi_action+0x5c/0x80
[c00000005ffd7e00] [c00000000012a088] .handle_irq_event_percpu+0x100/0x318
[c00000005ffd7f00] [c00000000012e774] .handle_percpu_irq+0x84/0xd0
[c00000005ffd7f90] [c000000000022ba8] .call_handle_irq+0x1c/0x2c
[c00000005ec9ba20] [c00000000001157c] .do_IRQ+0x22c/0x2a8
[c00000005ec9bae0] [c0000000000054bc] hardware_interrupt_entry+0x18/0x1c
Exception: 501 at .cpu_idle+0x194/0x2f8
LR = .cpu_idle+0x194/0x2f8
[c00000005ec9bdd0] [c000000000017e58] .cpu_idle+0x188/0x2f8 (unreliable)
[c00000005ec9be90] [c00000000067ec18] .start_secondary+0x3e4/0x524
[c00000005ec9bf90] [c0000000000093e8] .start_secondary_prolog+0x10/0x14
Instruction dump:
ebe1fff8 4e800020 fbe1fff8 7c0802a6 f8010010 7c7f1b78 f821ff81 78290464
80090014 5400019e 7c0000d0 78000fe0 <0b000000> 4800000c 7c210b78 7c421378

Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-14 | powerpc/powernv: Disable interrupts while taking phb->lock | Michael Ellerman
We need to disable interrupts when taking the phb->lock. Otherwise we could deadlock with pci_lock taken from an interrupt. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-14 | powerpc: Fix WARN_ON in decrementer_check_overflow | Benjamin Herrenschmidt
We use __get_cpu_var() which triggers a false positive warning in smp_processor_id() thinking interrupts are enabled (at this point, they are soft-enabled but hard-disabled). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-14 | powerpc/wsp: Fix IRQ affinity setting | Benjamin Herrenschmidt
We call the cache_hwirq_map() function with a linux IRQ number but it expects a HW irq number. This triggers a BUG on multi-chip setups in addition to not doing the right thing. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-14 | powerpc: Implement GET_IP/SET_IP | Srikar Dronamraju
With this change, helpers such as instruction_pointer() et al. get defined in the generic header in terms of GET_IP. Removed the unnecessary definition of profile_pc in the !CONFIG_SMP case, as suggested by Mike Frysinger. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Acked-by: Mike Frysinger <vapier@gentoo.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-14 | powerpc/wsp: Permanently enable PCI class code workaround | Benjamin Herrenschmidt
It appears that on the Chroma card, the class code of the root complex is still wrong even on DD2 or later chips. This could be a firmware issue, but that breaks resource allocation so let's unconditionally fix it up. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>