|
This is a first cut at a generic DWARF unwinder for the kernel. It's
still lacking DWARF64 support, and the DWARF expression support hasn't
been tested very well, but it generates proper stack traces on SH for
WARN_ON() and NULL dereferences.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
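A minimal, hypothetical C sketch of what a CFI-driven unwinder does: look up
the FDE covering the current pc, apply the CFA rule to find the caller's
frame, apply the return-address rule, and repeat. Every name below (sketch_*)
is invented for illustration and is not the actual arch/sh API added here.

  /* Illustration only: hypothetical helpers standing in for FDE lookup
   * and CFI rule evaluation. */
  struct sketch_frame {
      unsigned long pc;   /* address executing in this frame */
      unsigned long cfa;  /* canonical frame address */
  };

  static int sketch_find_fde(unsigned long pc) { return 0; }
  static unsigned long sketch_compute_cfa(unsigned long pc, unsigned long cfa) { return cfa; }
  static unsigned long sketch_return_address(unsigned long pc, unsigned long cfa) { return 0; }

  static void sketch_unwind(unsigned long pc, unsigned long sp)
  {
      struct sketch_frame f = { .pc = pc, .cfa = sp };

      /* Walk outwards until no CFI record covers the current pc. */
      while (sketch_find_fde(f.pc)) {
          f.cfa = sketch_compute_cfa(f.pc, f.cfa);   /* caller's CFA */
          f.pc = sketch_return_address(f.pc, f.cfa); /* saved return address */
          if (!f.pc)
              break;
      }
  }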
|
|
Updated to use the fixed BSS linker script macros from this
thread:
http://www.spinics.net/lists/kernel/msg913238.html
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This patch converts the sh architecture to use the new linker script
macros in include/asm-generic/vmlinux.lds.h.
Signed-off-by: Tim Abbott <tabbott@ksplice.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: linux-sh@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Include empty_zero_page in _text. This fixes a problem
introduced by c3e2586b794b12ffcdf69b4e547030b51e18e6d9,
which resulted in a broken boot on R2D-Plus.
Signed-off-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
.init_ramfs ought to be .init.ramfs; fix it up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Tie this in to the Makefile directly, where we already know what we are
running on. This tidies up the linker script a bit, and is prep work for
unifying the arch/sh/boot/compressed linker scripts.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Just forcefully rename the _32 variant overtop, and kill off the now
unused _64 version.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Some cleanups to the SH linker script. This reorders some of the
data sections for more optimal placement, applies general tabification,
and plugs in omitted generic definitions.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Back when the SH kernel supported embedding a ramdisk in the
pre-initramfs days, it was placed in a special section and made to
look like a regular initrd. Since that was removed ages ago, kill
off the remaining cruft that was missed.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
These were previously discarded at link time, though as with MIPS
we keep them around until runtime to satisfy .rodata references.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
With the PERCPU() macro introduction, .data.cacheline_aligned was
inheriting PAGE_SIZE alignment; fix that up to use L1_CACHE_BYTES
again. Likewise, the initramfs section wants PAGE_SIZE alignment.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
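For context, data lands in .data.cacheline_aligned via the __cacheline_aligned
annotation, which is roughly an aligned+section attribute; a simplified sketch
of the pattern (the real definition is in <linux/cache.h>, and the 32-byte
line size here is only an assumption):

  /* Simplified sketch; the real __cacheline_aligned lives in <linux/cache.h>. */
  #define SKETCH_L1_CACHE_BYTES 32  /* assumption: 32-byte L1 line */

  #define __sketch_cacheline_aligned                              \
      __attribute__((__aligned__(SKETCH_L1_CACHE_BYTES),          \
                     __section__(".data.cacheline_aligned")))

  /* Variables tagged this way end up in .data.cacheline_aligned, which the
   * linker script should align to L1_CACHE_BYTES rather than PAGE_SIZE. */
  static unsigned long sketch_hot_counter __sketch_cacheline_aligned;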
|
|
The uClinux MTD device uses _ebss; add the symbol and the corresponding
export.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
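A hedged C-side sketch of what adding and exporting a linker-provided symbol
looks like (the symbol itself is defined by the linker script at the end of
.bss; the surrounding code here is illustrative):

  #include <linux/module.h>

  extern char _ebss[];        /* provided by the linker script */
  EXPORT_SYMBOL(_ebss);

  /* e.g. a filesystem image appended to the kernel starts right after .bss */
  static unsigned long sketch_appended_image_start(void)
  {
      return (unsigned long)_ebss;
  }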
|
|
Follow Al Viro's m68k change from l-k:
i.e. tell modpost that entry point code (which has to be outside
of .init.text for external reasons) is OK to refer to .init.*.
This shuts up some section mismatch warnings from modpost.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
The per cpu data section contains two types of data. One set is
exclusively accessed by the local cpu, and the other set is per cpu
but also shared by remote cpus. In the current kernel, these two sets are
not clearly separated out. This can potentially cause the same data
cacheline to be shared between the two sets of data, which will result in
unnecessary bouncing of the cacheline between cpus.
One way to fix the problem is to cacheline align the remotely accessed per
cpu data, both at the beginning and at the end. Because of the padding at
both ends, this will likely cause some memory wastage, and the
interface to achieve this is not clean.
This patch:
Moves the remotely accessed per cpu data (which is currently marked
as ____cacheline_aligned_in_smp) into a different section, where all the data
elements are cacheline aligned. As such, this cleanly differentiates the
local-only data from the remotely accessed data.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
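A hedged C illustration of the distinction this patch draws between local-only
and remotely accessed per-cpu data (the exact macro spellings in this tree are
an assumption):

  #include <linux/percpu.h>
  #include <linux/cache.h>

  struct sketch_shared {
      unsigned long remote_hits;  /* also updated by other cpus */
  };

  /* Local-only per-cpu data: touched only by its owning cpu, so it may
   * freely share cachelines with other local-only per-cpu variables. */
  static DEFINE_PER_CPU(unsigned long, sketch_local_counter);

  /* Remotely accessed per-cpu data: placed in the separate, cacheline
   * aligned per-cpu section so it never shares a line with the
   * local-only data above. */
  static DEFINE_PER_CPU_SHARED_ALIGNED(struct sketch_shared, sketch_shared_stats);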
|
|
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
.machvec.init can be misaligned with the recent machvec changes;
forcibly align it on the boundary that it expects, as before.
Signed-off-by: Takashi YOSHII <takashi.yoshii.ze@hitachi.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This fixes up much of the machvec handling, allowing for it to be
overloaded on boot. Making practical use of this still requires
some Kconfig munging, however.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
With this consolidation we can now modify the .data
section definition in one spot for all archs.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
|
|
Move definition of .text section to asm-generic.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
|
|
Wire up GENERIC_BUG for SH. This moves off of the special bug
frame and on to the generic struct bug_entry. Roughly the same
semantics are retained, and we can kill off some of the verbose
BUG() reporting code.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
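For reference, the generic bug_entry records, for each BUG()/WARN() site,
little more than the trap address plus (with CONFIG_DEBUG_BUGVERBOSE) file and
line; a simplified sketch of the idea, not the exact asm-generic definition:

  /* Simplified sketch of the GENERIC_BUG scheme; see
   * include/asm-generic/bug.h for the real layout. */
  struct sketch_bug_entry {
      unsigned long  bug_addr;  /* address of the trapping instruction */
      const char    *file;      /* with CONFIG_DEBUG_BUGVERBOSE */
      unsigned short line;
      unsigned short flags;     /* e.g. "this is only a warning" */
  };

  /* Each BUG()/WARN() site emits one such entry into the __bug_table
   * section; the trap handler looks up the faulting address there and
   * prints file/line, so no verbose per-arch bug frame is needed. */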
|
|
Let's allow page-alignment in general for per-cpu data (wanted by Xen, and
Ingo suggested KVM as well).
Because larger alignments can use more room, we increase the max per-cpu
memory to 64k rather than 32k: it's getting a little tight.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Previously this was using a hardcoded 32, use L1_CACHE_BYTES for
cacheline alignment instead.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Update all arch/*/kernel/vmlinux.lds.S to not include space for initramfs
when CONFIG_BLK_DEV_INITRAMFS is not selected. This saves another 4 kbytes
on most platforms (some reserve PAGE_SIZE for initramfs).
Signed-off-by: Jean-Paul Saman <jean-paul.saman@nxp.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This had a bogus .data.idt reference; fix it up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Add a vmlinux.lds.h helper macro for defining the eight-level initcall table,
and teach all the architectures to use it.
This is a prerequisite for a patch which performs initcall synchronisation for
multithreaded-probing.
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
[ Added AVR32 as well ]
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
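As background on the C side: each initcall is a function pointer emitted into
a numbered .initcallN.init input section, and the linker script macro just
lists those sections in level order. A hedged sketch of the pattern (the real
macros are in <linux/init.h> and differ in detail):

  typedef int (*sketch_initcall_t)(void);

  /* Sketch of how an initcall lands in a numbered section. */
  #define sketch_define_initcall(level, fn)                       \
      static sketch_initcall_t __sketch_initcall_##fn             \
      __attribute__((__used__,                                    \
                     __section__(".initcall" level ".init"))) = fn

  static int sketch_driver_init(void)
  {
      return 0;
  }
  /* Level 6 corresponds to device_initcall()/module_init(). */
  sketch_define_initcall("6", sketch_driver_init);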
|
|
This enables support for 4K stacks on SH.
Currently this depends on DEBUG_KERNEL, but likely all boards
will switch to this as the default in the future.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
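The mechanism behind such an option is usually just a smaller THREAD_SIZE; a
hedged sketch (the actual SH values and config symbol are assumptions here):

  /* Sketch only: how a 4K-stacks option typically halves the kernel stack. */
  #define SKETCH_PAGE_SIZE 4096UL

  #ifdef CONFIG_4KSTACKS
  #define SKETCH_THREAD_SIZE (SKETCH_PAGE_SIZE)       /* one page */
  #else
  #define SKETCH_THREAD_SIZE (SKETCH_PAGE_SIZE << 1)  /* two pages */
  #endif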
|
|
This adds a DEBUG_STACK_USAGE and DEBUG_STACKOVERFLOW for SH.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
nommu needs to be able to shift PAGE_OFFSET, so we switch it to a
non-user-visible CONFIG_PAGE_OFFSET and use that in the few places
where it matters.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
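For context, PAGE_OFFSET is where the kernel's linear mapping starts, and it
feeds the physical/virtual conversion helpers; a hedged sketch of the usual
pattern (the SH headers are not reproduced exactly):

  /* Sketch: the user-invisible Kconfig value becomes PAGE_OFFSET, and the
   * address conversion helpers are defined relative to it. */
  #define SKETCH_PAGE_OFFSET CONFIG_PAGE_OFFSET

  #define sketch_pa(x) ((unsigned long)(x) - SKETCH_PAGE_OFFSET)
  #define sketch_va(x) ((void *)((unsigned long)(x) + SKETCH_PAGE_OFFSET))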
|
|
We had a special .stack section in the ld script that
was being used to position r15 initially. This is
nonsensical, as we can just use a THREAD_SIZE offset
from the init_thread_union instead (as every other arch
does).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
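In other words, the initial stack pointer falls out of the statically
allocated init stack; a hedged sketch of the idea:

  #include <linux/sched.h>  /* init_thread_union, THREAD_SIZE */

  /* Sketch: the initial r15 is simply the top of the init task's
   * statically allocated stack, so no special .stack section is needed
   * in the linker script. */
  static inline unsigned long sketch_init_stack_top(void)
  {
      return (unsigned long)&init_thread_union + THREAD_SIZE;
  }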
|
|
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
|
|
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.
Let it rip!
|