|
Tie in the Emma Mobile GPIO driver "em-gio" to
support the GPIOs on Emma Mobile EV2.
A static IRQ range is used to allow boards to
hook up their platform devices to the GPIOs.
DT support is still on the TODO for the GPIO driver,
so only platform device support is included here.
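A minimal sketch of how a board file might hook a platform device up to one
of these statically numbered GPIO IRQs; the device name, GPIO number and IRQ
base below are placeholders, not values from this patch:

#include <linux/kernel.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

#define EXAMPLE_GPIO_IRQ_BASE	100	/* assumed start of the static IRQ range */

static struct resource example_resources[] = {
	{
		/* em-gio GPIO 5, used as the device interrupt */
		.start	= EXAMPLE_GPIO_IRQ_BASE + 5,
		.end	= EXAMPLE_GPIO_IRQ_BASE + 5,
		.flags	= IORESOURCE_IRQ,
	},
};

static struct platform_device example_device = {
	.name		= "example-device",
	.id		= -1,
	.resource	= example_resources,
	.num_resources	= ARRAY_SIZE(example_resources),
};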
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
|
|
This is V3 of Emma Mobile EV2 SMP support.
At this point only the most basic form of SMP operation
is supported. TWD and CPU Hotplug support is excluded.
Tied to both the Emma Mobile EV2 and the KZM9D board
due to the need to switch on the board type in platsmp.c and
the newly introduced need for static mappings.
The static mappings are needed to allow hardware
access early during boot when SMP is initialized.
This early requirement forces us to also map in
the SMU registers.
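For illustration, such a static mapping on ARM is declared through a map_desc
entry; the addresses below are placeholders, not the real EV2/SMU values:

static struct map_desc emev2_io_desc_example[] __initdata = {
	{
		.virtual	= 0xe0100000,			/* placeholder virtual address */
		.pfn		= __phys_to_pfn(0xe0100000),	/* placeholder SMU physical base */
		.length		= SZ_1M,
		.type		= MT_DEVICE,
	},
};

/* registered early, e.g. from the machine's .map_io callback:
 *	iotable_init(emev2_io_desc_example, ARRAY_SIZE(emev2_io_desc_example));
 */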
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
|
|
V3 of basic KZM9D board support. At this point it is a fairly
thin layer that makes use of the Emma Mobile EV2 SoC code.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
|
|
This is V3 of the Emma Mobile EV2 SoC support.
Included here is support for serial and timer
devices which is just about enough to boot a kernel.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
|
|
There is nothing that is protected by this lock and it's getting in the
way of RT.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
These old mail addresses don't exist any more.
This changes all occurrences to the authors' current addresses.
Signed-off-by: Oskar Schirmer <oskar@scara.com>
Cc: Chris Zankel <chris@zankel.net>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
Despite lots of investigation into why this is needed, we neither
know the cause nor have an elegant cure. The only answer found on this
laptop is to mark the problem region as used so that Linux doesn't
put anything there.
Currently all affected users add reserve= command lines, and anyone
who doesn't know about this has to find the magic page that documents it.
Automate it instead.
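Illustrative only: the general shape of such an automatic reservation is a
small quirk that marks the region busy before anything can be allocated from
it. Every identifier and address below is a placeholder, not the code from
this patch:

static void __init example_reserve_bad_region(void)
{
	/* keep the kernel from placing anything in the problem range */
	memblock_reserve(0x20000000, 16 << 20);	/* placeholder base/size */
}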
Signed-off-by: Alan Cox <alan@linux.intel.com>
Tested-and-bugfixed-by: Arne Fitzenreiter <arne@fitzenreiter.de>
Resolves-bug: https://bugzilla.kernel.org/show_bug.cgi?id=10231
Link: http://lkml.kernel.org/r/20120515174347.5109.94551.stgit@bluebook
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM: SoC fixes from Olof Johansson:
"I will stop trying to predict when we're done with fixes for a
release.
Here's another small batch of three patches for arm-soc:
- A fix for a boot time WARN_ON() due to irq domain conversion on
PRIMA2
- Fix for a regression in Tegra SMP spinup code due to swapped
register offsets
- Fixed config dependency for mv_cesa crypto driver to avoid build
breakage"
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: PRIMA2: fix irq domain size and IRQ mask of internal interrupt controller
crypto: mv_cesa requires on CRYPTO_HASH to build
ARM: tegra: Fix flow controller accesses
|
|
'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf, x86 and scheduler updates from Ingo Molnar.
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
tracing: Do not enable function event with enable
perf stat: handle ENXIO error for perf_event_open
perf: Turn off compiler warnings for flex and bison generated files
perf stat: Fix case where guest/host monitoring is not supported by kernel
perf build-id: Fix filename size calculation
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, kvm: KVM paravirt kernels don't check for CPUID being unavailable
x86: Fix section annotation of acpi_map_cpu2node()
x86/microcode: Ensure that module is only loaded on supported Intel CPUs
* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
sched: Fix KVM and ia64 boot crash due to sched_groups circular linked list assumption
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into usb-next
|
|
Commit ff9a184c ("ARM: 7400/1: vfp: clear fpscr length and stride bits
on entry to sig handler") flushes the VFP state prior to entering a
signal handler so that a VFP operation inside the handler will trap and
force a restore of ABI-compliant registers. Reflushing and disabling VFP
on the sigreturn path is predicated on the saved thread state indicating
that VFP was used by the handler -- however for SMP platforms this is
only set on context-switch, making the check unreliable and causing VFP
register corruption in userspace since the register values are not
necessarily those restored from the sigframe.
This patch unconditionally flushes the VFP state after a signal handler.
Since we already perform the flush before the handler and the flushing
itself happens lazily, the redundant flush when VFP is not used by the
handler is essentially a nop.
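In code terms the sigreturn path now always flushes the hardware VFP state,
roughly (simplified sketch, not the literal diff):

/* before: only flushed when the saved state said the handler used VFP */
/* after:  flush unconditionally; the flush is lazy, so it is a nop when
 *         the handler never touched VFP */
vfp_flush_hwstate(thread);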
Reported-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
A zero value for prot_sect in the memory types table implies that
section mappings should never be created for the memory type in question.
This is checked for in alloc_init_section().
With LPAE, we set a bit to mask access flag faults for kernel mappings.
This breaks the aforementioned (!prot_sect) check in alloc_init_section().
This patch fixes this bug by first checking for a non-zero
prot_sect before setting the PMD_SECT_AF flag.
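A minimal sketch of the guarded form (the surrounding loop over mem_types is
implied; illustrative, not the literal hunk):

/* Only OR in the access flag for types that allow section mappings at
 * all, so prot_sect == 0 keeps meaning "never create a section mapping
 * for this memory type". */
if (mem_types[i].prot_sect)
	mem_types[i].prot_sect |= PMD_SECT_AF;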
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
That old mail address doesn't exist any more.
This changes all occurrences to a current address.
Signed-off-by: Oskar Schirmer <oskar@scara.com>
Acked-by: Sebastian Heß <shess@hessware.de>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
It's been broken forever (i.e. it's not scheduling in a power
aware fashion), as reported by Suresh and others sending
patches, and nobody cares enough to fix it properly ...
so remove it to make space free for something better.
There are various problems with the code as it stands today, first
and foremost the user interface, which is bound to topology
levels and has multiple values per level. This results in a
state explosion which the administrator or distro needs to
master, and almost nobody does.
Furthermore, large configuration state spaces aren't good: either
the thing doesn't just work right because it is under so many
impossible-to-meet constraints, or, even if an achievable state
exists, workloads have to be aware of it precisely and can never
meet it for dynamic workloads.
So pushing this kind of decision to user-space was a bad idea
even with a single knob - it's exponentially worse with knobs
on every node of the topology.
There is a proposal to replace the user interface with a single
3 state knob:
sched_balance_policy := { performance, power, auto }
where 'auto' would be the preferred default which looks at things
like Battery/AC mode and possible cpufreq state or whatever the hw
exposes to show us power use expectations - but there's been no
progress on it in the past many months.
Aside from that, the actual implementation of the various knobs
is known to be broken. There have been sporadic attempts at
fixing things, but these always stop short of reaching a mergeable
state.
Therefore remove it wholesale, in the hope of spurring
people who care to come forward once again and work on a
coherent replacement.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1326104915.2442.53.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
The current code initializes the 'irq' member of plat_sci_port,
but the correct member is 'irqs'.
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Reported by sfr on -next merge.
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
The old code causes a 3.4 kernel warning because the irq domain size is wrong:
------------[ cut here ]------------
WARNING: at kernel/irq/irqdomain.c:74 irq_domain_legacy_revmap+0x24/0x48()
Modules linked in:
[<c0013f50>] (unwind_backtrace+0x0/0xf8) from [<c001e7d8>] (warn_slowpath_common+0x54/0x64)
[<c001e7d8>] (warn_slowpath_common+0x54/0x64) from [<c001e804>] (warn_slowpath_null+0x1c/0x24)
[<c001e804>] (warn_slowpath_null+0x1c/0x24) from [<c005c3c4>] (irq_domain_legacy_revmap+0x24/0x48)
[<c005c3c4>] (irq_domain_legacy_revmap+0x24/0x48) from [<c005c704>] (irq_create_mapping+0x20/0x120)
[<c005c704>] (irq_create_mapping+0x20/0x120) from [<c005c880>] (irq_create_of_mapping+0x7c/0xf0)
[<c005c880>] (irq_create_of_mapping+0x7c/0xf0) from [<c01a6c48>] (irq_of_parse_and_map+0x2c/0x34)
[<c01a6c48>] (irq_of_parse_and_map+0x2c/0x34) from [<c01a6c68>] (of_irq_to_resource+0x18/0x74)
[<c01a6c68>] (of_irq_to_resource+0x18/0x74) from [<c01a6ce8>] (of_irq_count+0x24/0x34)
[<c01a6ce8>] (of_irq_count+0x24/0x34) from [<c01a7220>] (of_device_alloc+0x58/0x158)
[<c01a7220>] (of_device_alloc+0x58/0x158) from [<c01a735c>] (of_platform_device_create_pdata+0x3c/0x80)
[<c01a735c>] (of_platform_device_create_pdata+0x3c/0x80) from [<c01a7468>] (of_platform_bus_create+0xc8/0x190)
[<c01a7468>] (of_platform_bus_create+0xc8/0x190) from [<c01a74cc>] (of_platform_bus_create+0x12c/0x190)
---[ end trace 1b75b31a2719ed32 ]---
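A rough sketch of the intended shape of the fix, assuming the legacy
irq_domain registration API; the size and base values are placeholders, not
the PRIMA2 ones:

/* register the legacy domain with the controller's full interrupt count
 * so irq_domain_legacy_revmap() covers every hwirq instead of warning */
domain = irq_domain_add_legacy(np, 64 /* placeholder size */,
			       first_irq, 0 /* first hwirq */,
			       &irq_domain_simple_ops, NULL);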
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
|
|
To remove duplicate code, have the ftrace arch_ftrace_update_code()
use the generic ftrace_modify_all_code(). This requires that the
default ftrace_replace_code() become a weak function so that an
arch may override it.
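Roughly, the resulting shape is (sketch spread over the generic and arch files):

/* kernel/trace/ftrace.c: the default is now weak, so an arch may override it */
void __weak ftrace_replace_code(int enable)
{
	/* generic record-by-record patching */
}

/* arch code: arch_ftrace_update_code() just calls the generic helper */
void arch_ftrace_update_code(int command)
{
	ftrace_modify_all_code(command);
}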
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The IRQ16-IRQ31 interrupt numbers are currently located around 800, since
1ee8299a9ec1ce5137a044c7768293007b9a3267
(ARM: mach-shmobile: Use 0x3400 as INTCS vector offset).
But the PINT0/1 IRQ numbers are also located around 800, since
0df1a838d678fc6ab49f983a19e905f6a42297a0
(ARM: mach-shmobile: sh73a0 PINT IRQ base fix).
This patch relocates the PINT0/1 IRQ numbers to around 700, which is unused,
and adds a table of the current IRQ layout in the comment area.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Acked-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
|
|
There is no need to save any active fpu state to the task structure
memory if the task is dead. Just drop the state instead.
For example, this saved some 1770 xsaves during the system boot
of a two-socket Xeon system.
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1336692811-30576-4-git-send-email-suresh.b.siddha@intel.com
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
Code paths like fork(), exit() and signal handling flush the fpu
state explicitly to the structures in memory.
The BUG_ON() in __sanitize_i387_state() checks that the fpu state
is no longer live. But on preempt kernels the task can be scheduled
out and back in at any point, and the preload_fpu logic during context switch
can make the fpu registers live again.
For example, consider a 64-bit task-A which uses the fpu frequently, so
its fpu_counter is mostly non-zero. During its time slice the kernel
uses the fpu via kernel_fpu_begin()/kernel_fpu_end(). After this, in the same
scheduling slice, task-A gets a signal to handle. During the signal
setup path we get preempted just before the sanitize_i387_state() call
in arch/x86/kernel/xsave.c:save_i387_xstate(), and when we come back
the fpu registers are live again, which can hit the BUG_ON().
Similarly during core dump, other threads can context-switch in and out
(because of spurious wakeups while waiting for the coredump to finish in
kernel/exit.c:exit_mm()) and the main thread dumping core can run into this
bug when it finds some other thread with its fpu registers live on some other cpu.
So remove the paranoid check for now, even though it caught a bug in the
multi-threaded core dump case (fixed in the previous patch).
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1336692811-30576-3-git-send-email-suresh.b.siddha@intel.com
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
The historical prepare_to_copy() is mostly a no-op, duplicated across the majority
of architectures, with the rest following the x86 model of flushing the extended
register state (such as the fpu) there.
Remove it and use arch_dup_task_struct() instead.
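For reference, a sketch of the x86 flavour after the change (illustrative body,
not the exact patch):

int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
	unlazy_fpu(src);	/* flush the extended register state to memory,
				 * as prepare_to_copy() used to do */
	*dst = *src;		/* then copy the task struct */
	return 0;
}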
Suggested-by: Oleg Nesterov <oleg@redhat.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1336692811-30576-1-git-send-email-suresh.b.siddha@intel.com
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <chris@zankel.net>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Mike Frysinger <vapier@gentoo.org>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.chen@sunplusct.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
commit 97ce2c88f9ad42e3c60a9beb9fca87abf3639faa (jump-label: initialize
jump-label subsystem much earlier) breaks MIPS. The jump_label_init()
call was moved before trap_init() which is where we initialize
flush_icache_range().
In order to be good citizens, we move cache initialization earlier so
that we don't jump through a null flush_icache_range function pointer
when doing the jump label initialization.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3822/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3821/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3820/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Follow-on patches require this.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3818/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This is used in subsequent patches.
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3819/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Pull kvm powerpc fixes from Marcelo Tosatti:
"Urgent KVM PPC updates, quoting Alexander Graf:
There are a few bugs in 3.4 that really should be fixed before
people can be all happy and fuzzy about KVM on PowerPC. These fixes
are:
* fix POWER7 bare metal with PR=y
* fix deadlock on HV=y book3s_64 mode in low memory cases
* fix invalid MMU scope of PR=y mode on book3s_64, possibly leading
to memory corruption"
* git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: PPC: Book3S HV: Fix bug leading to deadlock in guest HPT updates
powerpc/kvm: Fix VSID usage in 64-bit "PR" KVM
KVM: PPC: Book3S: PR: Fix hsrr code
KVM: PPC: Fix PR KVM on POWER7 bare metal
KVM: PPC: Book3S: PR: Handle EMUL_ASSIST
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile
Pull two Tile arch fixes from Chris Metcalf:
"These are both bug-fixes, one to avoid some issues in how we invoke
the "pending userspace work" flags on return to userspace, and the
other to provide the same signal handler arguments for tilegx32 that
we do for tilegx64."
* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
arch/tile: apply commit 74fca9da0 to the compat signal handling as well
arch/tile: fix up some issues in calling do_work_pending()
|
|
Currently IA64 has an assembler implementation of sigrtprocmask.
Having a single architecture implement this in assembler language
is a serious maintenance problem that inhibits further evolution of the
signal subsystem. Everyone who wants to make deep changes to signals
would need to learn that assembler language.
Whatever performance improvement IA64 gets from this, it cannot be worth
the price in maintainability.
We have some locking problems in signal that need to be fixed,
but this roadblock needs to be removed first.
So just disable the special assembler IA64 implementation and fall back to a
normal syscall there.
Acked-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Currently the inject_pending_event() call during guest entry happens after
kvm_mmu_reload(). This is for historical reasons - we used to call
inject_pending_event() in atomic context, while kvm_mmu_reload() needs task
context.
A problem is that nested vmx can cause the mmu context to be reset, if event
injection is intercepted and causes a #VMEXIT instead (the #VMEXIT resets
CR0/CR3/CR4). If this happens, we end up with invalid root_hpa, and since
kvm_mmu_reload() has already run, no one will fix it and we end up entering
the guest this way.
Fix by reordering event injection to be before kvm_mmu_reload(). Use
->cancel_injection() to undo if kvm_mmu_reload() fails.
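Schematically, the reordered entry path looks like this (simplified from
vcpu_enter_guest(), with unrelated steps trimmed):

inject_pending_event(vcpu);

r = kvm_mmu_reload(vcpu);
if (unlikely(r)) {
	/* undo the injection rather than entering with an invalid root_hpa */
	kvm_x86_ops->cancel_injection(vcpu);
	goto out;
}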
https://bugzilla.kernel.org/show_bug.cgi?id=42980
Reported-by: Luke-Jr <luke-jr+linuxbugs@utopios.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Fixes klibc build on ia64 after 85f8f7759e418c814ee2ceacf73eddb9bed39492.
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: maximilian attems <max@stro.at>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Change EFER to be a single u64 field instead of two u32 fields; change
the order to maintain alignment. Note that on x86-64 cr4 is really
also a 64-bit quantity, although we can only set the low 32 bits from
the trampoline code since it is still executing in 32-bit mode at that
point.
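Conceptually (the struct and field names here are placeholders for the header
layout described above):

struct example_trampoline_header {
	u64	efer;	/* was two u32 fields; kept 64-bit aligned */
	u32	cr4;	/* really 64-bit on x86-64, but only the low 32 bits
			 * can be set from the 32-bit trampoline code */
};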
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
|
|
If we've determined we can't do what the user asked, trying to do
something else isn't going to make the user's life better.
Without this the screen scrolls a bit and then you get a panic
anyway, and it's nice not to have so much scroll after the real
problem in bug reports.
Link: http://lkml.kernel.org/r/1337190206-12121-1-git-send-email-pjones@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
GETCPU(2) says:
int getcpu(unsigned *cpu, unsigned *node, struct getcpu_cache *tcache);
...
When either cpu or node is NULL nothing is written to the respective pointer.
But the fast system call path had no checks for NULL, and would
thus return -EFAULT if either (or both) of these were NULL.
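The required behaviour, expressed in C for clarity (the real fast path is
ia64 assembly):

if (cpu)
	*cpu = raw_smp_processor_id();
if (node)
	*node = cpu_to_node(raw_smp_processor_id());
/* a NULL pointer is simply skipped instead of triggering -EFAULT */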
Reported-by: Mike Frysinger <vapier@gentoo.org>
Tested-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
When the 32-bit compat code was deleted, we should also have removed
the task_size element from the thread structure - threads can only
be 64-bit now, so no need to keep track of how much virtual address
space each task can have ... everyone gets 0xa000000000000000.
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Keep all the realmode code together, including initialization (only
the rm/ subdirectory is actually built as real-mode code, anyway.)
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
|
|
Move the bits that aren't actually common out of trampoline_common.S
and into the arch-specific files. Furthermore, make sure the page
directory is first in the .bss section for trampoline_64.S in order to
not waste an entire page of memory.
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
|
|
Some AMD processors apparently #GP(0) if EFER.LMA is set in WRMSR,
rather than ignoring it. Thus, we need to mask it out.
Reported-by: Ingo Molnar <mingo@kernel.org>
Tested-by: Borislav Petkov <bp@alien8.de>
Cc: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1336501366-28617-24-git-send-email-jarkko.sakkinen@intel.com
|
|
* renesas/board2:
ARM: shmobile: fix smp build
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
I got build errors with the new version now because machine_is_kzm9g is no longer
defined:
arch/arm/mach-shmobile/platsmp.c: In function 'shmobile_smp_get_core_count':
arch/arm/mach-shmobile/platsmp.c:29:2: error: implicit declaration of function 'of_machine_is_compatible'
Replace the missing function with a call to of_machine_is_compatible.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
Acked-by: Magnus Damm <magnus.damm@gmail.com>
|
|
irq_of_parse_and_map does not have an empty definition for the
!CONFIG_OF case, so we should not try to call it then:
arch/arm/mach-exynos/common.c: In function 'combiner_init':
arch/arm/mach-exynos/common.c:576:3: warning: implicit declaration of function 'irq_of_parse_and_map'
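A minimal sketch of the guard (the node variable is illustrative):

#ifdef CONFIG_OF
	if (np)
		irq = irq_of_parse_and_map(np, 0);
#endif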
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
|
|
This passes siginfo and mcontext to tilegx32 signal handlers that
don't have SA_SIGINFO set just as we have been doing for tilegx64.
Cc: stable@vger.kernel.org
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
|
|
First, we were at risk of handling thread-info flags, in particular
do_signal(), when returning from kernel space. This could happen
after a failed kernel_execve(), or when forking a kernel thread.
The fix is to test in do_work_pending() for user_mode() and return
immediately if so; we already had this test for one of the flags,
so I just hoisted it to the top of the function.
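The first fix boils down to an early bail-out at the top of do_work_pending(),
roughly:

if (!user_mode(regs))
	return 0;	/* nothing to do when returning to kernel space */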
Second, if a ptraced process updated the callee-saved registers
in the ptregs struct and then processed another thread-info flag, we
would overwrite the modifications with the original callee-saved
registers. To fix this, we add a register to note if we've already
saved the registers once, and skip doing it on additional passes
through the loop. To avoid a performance hit from the couple of
extra instructions involved, I modified the GET_THREAD_INFO() macro
to be guaranteed to be one instruction, then bundled it with adjacent
instructions, yielding an overall net savings.
Reported-By: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
|
|
Using RCU for lockless shadow walking can increase the amount of memory
in use by the system, since RCU grace periods are unpredictable. We also
have an unconditional write to a shared variable (reader_counter), which
isn't good for scaling.
Replace that with a scheme similar to x86's get_user_pages_fast(): disable
interrupts during lockless shadow walk to force the freer
(kvm_mmu_commit_zap_page()) to wait for the TLB flush IPI to find the
processor with interrupts enabled.
We also add a new vcpu->mode, READING_SHADOW_PAGE_TABLES, to prevent
kvm_flush_remote_tlbs() from avoiding the IPI.
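Schematically, the lockless walk is now bracketed like this (simplified):

local_irq_disable();
vcpu->mode = READING_SHADOW_PAGE_TABLES;
smp_mb();	/* publish the mode before touching any SPTEs */

/* ... walk the shadow page tables without holding mmu_lock ... */

smp_mb();	/* finish the reads before dropping the mode */
vcpu->mode = OUTSIDE_GUEST_MODE;
local_irq_enable();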
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
On x86_64, we can defer %ds and %es reload to the heavyweight context switch,
since nothing in the lightweight paths uses the host %ds or %es (they are
ignored by the processor). Furthermore we can avoid the load if the segments
are null, by letting the hardware load the null segments for us. This is the
expected case.
On i386, we could avoid the reload entirely, since the entry.S paths take care
of reload, except for the SYSEXIT path which leaves %ds and %es set to __USER_DS.
So we set them to the same values as well.
Saves about 70 cycles out of 1600 (around 4%; noisy measurements).
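Sketched for the x86-64 exit path (selector variable names are assumptions for
illustration):

/* only reload if the host actually had non-null selectors loaded */
if (host_ds_sel)
	loadsegment(ds, host_ds_sel);
if (host_es_sel)
	loadsegment(es, host_es_sel);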
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
The vmx exit code unconditionally restores %ds and %es to __USER_DS. This
can override the user's values, since %ds and %es are not saved and restored
in x86_64 syscalls. In practice, this isn't dangerous since nobody uses
segment registers in long mode, least of all programs that use KVM.
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
The first thing bcm47xx_fill_sprom does is initialize (zero fill) the SPROM. For
BCMA SOC, this wipes out any values previously read by bcm47xx_fill_sprom_ethernet
(see arch/mips/bcm47xx/setup.c - bcm47xx_get_sprom_bcma). Move the initialization
of SPROM so it is called prior to filling in any values.
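Schematically, the BCMA path now runs in this order (call signatures simplified):

memset(sprom, 0, sizeof(*sprom));		/* initialize first ...        */
bcm47xx_fill_sprom_ethernet(sprom, prefix);	/* ... so these values survive */
bcm47xx_fill_sprom(sprom, prefix);		/* no longer zero-fills sprom  */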
Signed-off-by: Nathan Hintz <nlhintz@hotmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
|
|
When the boardrev with a prefix is not available, try to read it
without a prefix. This is based on code from the Broadcom SDK.
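Roughly (helper name and buffer handling are assumptions):

snprintf(buf, sizeof(buf), "%sboardrev", prefix);
if (nvram_getenv(buf, val, sizeof(val)) < 0)
	nvram_getenv("boardrev", val, sizeof(val));	/* fall back, no prefix */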
Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
|