Age | Commit message | Author |
|
- move all sparc64/mm/ files to arch/sparc/mm/
- files whose names clash with sparc32 counterparts are named _64.c
- add files to sparc/mm/Makefile, preserving link order
- delete the now unused sparc64/mm/Makefile
- sparc64 now finds mm/ under arch/sparc
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The kernel always executes in the TSO memory model now,
so none of this stuff is necessary any more.
With helpful feedback from Nick Piggin.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
1) Several exported symbols need extern declarations; they are exported
not for C code but for assembler routines.
2) PAGE_EXEC isn't used, so delete it.
3) Several constants wider than 32 bits need "UL" markers.
Signed-off-by: David S. Miller <davem@davemloft.net>
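A minimal, hypothetical illustration of points 1) and 3) above; the symbol
and constant names are made up, not the ones this commit touched:
    #include <linux/module.h>

    /* 1) A symbol referenced only from assembler still needs an extern
     *    declaration visible at its definition, otherwise sparse/gcc warn
     *    about a global with no previous declaration.                     */
    extern unsigned long example_asm_symbol;    /* normally in a header    */
    unsigned long example_asm_symbol;           /* definition in C         */
    EXPORT_SYMBOL(example_asm_symbol);

    /* 3) Constants wider than 32 bits get an explicit "UL" suffix so their
     *    type is unambiguous and sparse's "constant is so big it is
     *    unsigned long" warning goes away.                                */
    #define EXAMPLE_PA_MASK     0x000001fffffff000UL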
|
|
1) set_brkpt() is not referenced anywhere and, to my knowledge, hasn't
been used for many years, so just delete it.
2) Add an extern declaration for do_sparc64_fault() in asm/pgtable_64.h.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This was just needed to work around an ancient gcc bug that
we don't care about any more.
It was also causing a sparse warning:
arch/sparc64/mm/tlb.c:22:52: warning: Using plain integer as NULL pointer
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6
Conflicts:
arch/sparc/kernel/of_device.c
|
|
Don't clutter up the tree with sstate_blah() scattered all over the
place.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Three main things:
1) Make the prober an arch initcall instead of using a hard-coded
invocation from paging_init().
2) Shrink the table size; the fpu ident stuff was never used.
3) Use named struct initializers in the table.
Signed-off-by: David S. Miller <davem@davemloft.net>
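A rough sketch of points 1) and 3); cpu_type_probe and the table entry
layout are illustrative assumptions, not the exact code:
    #include <linux/init.h>

    /* 3) Table entries written with named (designated) initializers.      */
    static const struct cpu_ident {
            int manuf, impl;
            const char *name;
    } cpu_table[] = {
            { .manuf = 0x17, .impl = 0x10, .name = "TI UltraSparc I" },
            /* ... */
    };

    /* 1) Probing runs as an arch initcall rather than being called
     *    explicitly from paging_init().                                   */
    static int __init cpu_type_probe(void)
    {
            /* walk cpu_table and latch the matching entry */
            return 0;
    }
    arch_initcall(cpu_type_probe);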
|
|
This driver is now limited to just doing the basic clock board and FHC
chip initialization and registering the platform devices for the
per-board LEDs, which are driven by the new LEDS_SUNFIRE driver.
The IRQ register handling is already confined purely to the device
tree code.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
First, lmb_enforce_memory_limit() interprets its argument
(mostly, heh) as a size limit, not an address limit. So pass
the raw cmdline_memory_size value into it. And we don't
need to check it against zero; lmb_enforce_memory_limit() does
that for us.
Next, free_initmem() needs special handling when the kernel
command line trims the available memory. The problem case is
when the trimmed-out memory is where the kernel image itself
resides.
When that memory is trimmed out, we don't add those physical
ram areas to the sparsemem active ranges, amongst other things,
which means that this free_initmem() code would free up invalid
page structs, resulting in either crashes or hangs.
Just quick-fix this by not freeing initmem at all if "mem="
was given on the boot command line.
Signed-off-by: David S. Miller <davem@davemloft.net>
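A sketch of the quick fix described above, assuming cmdline_memory_size
holds the parsed "mem=" value; the surrounding code is illustrative:
    extern unsigned long cmdline_memory_size;

    void free_initmem(void)
    {
            /* If "mem=" trimmed memory, the init pages may lie in a region
             * that never made it into the sparsemem active ranges, so
             * their page structs are not valid -- just leave initmem
             * in place.                                                    */
            if (cmdline_memory_size)
                    return;

            /* ... normal path: release the pages between __init_begin and
             *     __init_end back to the page allocator ...               */
    }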
|
|
If 'start' does not begin on a page boundary, we can overshoot
past 'end'.
Signed-off-by: David S. Miller <davem@davemloft.net>
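The failure mode, shown with a generic loop rather than the exact kernel
function: stepping an unaligned 'start' by PAGE_SIZE while testing for
inequality can jump straight over 'end'.
    #include <linux/mm.h>

    static void walk_pages(unsigned long start, unsigned long end)
    {
            /* Round 'start' up to a page boundary first; combined with a
             * "<" comparison this cannot overshoot 'end'.                 */
            start = PAGE_ALIGN(start);

            while (start < end) {
                    /* ... handle the page at 'start' ... */
                    start += PAGE_SIZE;
            }
    }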
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Based upon a report and initial patch by Friedrich Oslage.
The intention is to provide this facility for
__trigger_all_cpu_backtrace even if MAGIC_SYSRQ is not set.
The only part that should have MAGIC_SYSRQ ifdef protection is the
sparc_globalreg_op sysrq registration and the code immediately around it.
Signed-off-by: David S. Miller <davem@davemloft.net>
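Roughly the ifdef scoping being described, sketched from the names in the
message; the handler signature and registration details are assumptions for
kernels of that era:
    #include <linux/init.h>
    #include <linux/sysrq.h>

    /* Always built, so __trigger_all_cpu_backtrace() is usable even
     * without MAGIC_SYSRQ.                                                 */
    void __trigger_all_cpu_backtrace(void)
    {
            /* capture and print the global register state of every cpu */
    }

    #ifdef CONFIG_MAGIC_SYSRQ
    static void sysrq_handle_globreg(int key, struct tty_struct *tty)
    {
            __trigger_all_cpu_backtrace();
    }

    static struct sysrq_key_op sparc_globalreg_op = {
            .handler        = sysrq_handle_globreg,
            .help_msg       = "Globalregs",
            .action_msg     = "Show Global CPU Regs",
    };

    static int __init sparc_globreg_init(void)
    {
            return register_sysrq_key('y', &sparc_globalreg_op);
    }
    core_initcall(sparc_globreg_init);
    #endif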
|
|
Based upon a bug report by Mariusz Kozlowski.
It uses smp_call_function_masked() now, which has a preemption-disabled
requirement.
Signed-off-by: David S. Miller <davem@davemloft.net>
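A minimal sketch of meeting that requirement; the exact
smp_call_function_masked() signature is assumed from the message, and the
caller/callback names are made up:
    #include <linux/smp.h>
    #include <linux/preempt.h>

    static void trigger_capture(void *info)
    {
            /* runs on each target cpu */
    }

    static void send_capture_request(cpumask_t mask)
    {
            /* smp_call_function_masked() must not be called with
             * preemption enabled, hence the explicit disable/enable.       */
            preempt_disable();
            smp_call_function_masked(mask, trigger_capture, NULL, 1);
            preempt_enable();
    }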
|
|
All the call sites are #if 0'd out and we have a much more
useful global cpu dumping facility these days. smp_report_regs()
is way too verbose to be usable.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Record one more level of stack frame program counter.
Particularly when lockdep and all sorts of spinlock debugging are
enabled, figuring out the caller of spin_lock() is difficult when the
cpu is stuck on the lock.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove arch-specific show_mem() in favor of the generic version.
This also removes the following redundant information display:
- free swap pages, printed by show_swap_cache_info()
- pages in swapcache, printed by show_swap_cache_info()
- dirty pages, writeback pages, mapped pages, slab pages,
pagetables pages, printed by show_free_areas()
where show_mem() calls show_free_areas(), which calls
show_swap_cache_info().
Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Straightforward extensions for huge pages located in the PUD instead of
the PMD.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The goal of this patchset is to support multiple hugetlb page sizes. This
is achieved by introducing a new struct hstate, which
encapsulates the important hugetlb state and constants (e.g. huge page
size, number of huge pages currently allocated, etc).
The hstate structure is then passed around to the code which requires these
fields; that code will do the right thing regardless of the exact hstate it
is operating on.
This patch adds the hstate structure, with a single global instance of it
(default_hstate), and does the basic work of converting hugetlb to use the
hstate.
Future patches will add more hstate structures to allow for different
hugetlbfs mounts to have different page sizes.
[akpm@linux-foundation.org: coding-style fixes]
Acked-by: Adam Litke <agl@us.ibm.com>
Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
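A condensed sketch of the hstate idea; the real structure carries more
bookkeeping, and the field names here are only illustrative:
    struct hstate {
            unsigned int order;             /* huge page = PAGE_SIZE << order */
            unsigned long nr_huge_pages;    /* currently allocated            */
            unsigned long free_huge_pages;  /* sitting unused in the pool     */
    };

    static struct hstate default_hstate;    /* the single global instance     */

    static inline unsigned long huge_page_size(struct hstate *h)
    {
            return (unsigned long)PAGE_SIZE << h->order;
    }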
|
|
There are a lot of places that define either a single bootmem descriptor or an
array of them. Use only one central array with MAX_NUMNODES items instead.
Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Kyle McMartin <kyle@parisc-linux.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
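The consolidation amounts to roughly this; the declaration and hook-up are
a sketch, and the array name follows the bootmem_node_data convention but
should be treated as an assumption here:
    #include <linux/bootmem.h>
    #include <linux/mmzone.h>

    /* One shared array instead of per-architecture descriptors.            */
    bootmem_data_t bootmem_node_data[MAX_NUMNODES] __initdata;

    /* Each architecture then just points its node at the shared slot:      */
    /*      NODE_DATA(nid)->bdata = &bootmem_node_data[nid];                */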
|
|
Adrian Bunk reported that enabling 4MB page size breaks the build.
The problem is that MAX_ORDER combined with the page shift exceeds the
SECTION_SIZE_BITS we use in asm-sparc64/sparsemem.h
There are several ways I suppose we could work around this. For one
we could define a CONFIG_FORCE_MAX_ZONEORDER to decrease MAX_ORDER in
these higher page size cases.
But I also know that these page size cases are broken wrt. TLB miss
handling especially on pre-hypervisor systems, and there isn't an easy
way to fix that.
These options were meant to be fun experimental hacks anyway, and
only 8K and 64K make any sense to support.
So remove 512K and 4M base page size support. Of course, we still
support these page sizes for huge pages.
Signed-off-by: David S. Miller <davem@davemloft.net>
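For reference, the constraint that trips, with the 4MB arithmetic spelled
out; the sanity check is reproduced from memory, so treat its exact wording
as an assumption:
    /* Generic sparsemem sanity check, approximately:                       */
    #if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS
    #error Allocator MAX_ORDER exceeds SECTION_SIZE
    #endif

    /* With a 4MB base page, PAGE_SHIFT is 22; with the default MAX_ORDER
     * of 11 the largest buddy block spans 22 + (11 - 1) = 32 address bits,
     * which is more than the per-section span asm-sparc64/sparsemem.h
     * provides.                                                            */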
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It's not even passed on to smp_call_function() anymore, since that
was removed. So kill it.
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
|
|
When a cpu really is stuck in the kernel, it can often be
impossible to figure out which cpu is stuck where. The
worst case is when the stuck cpu has interrupts disabled.
Therefore, implement a global cpu state capture that uses
SMP message interrupts which are not disabled by the
normal IRQ enable/disable APIs of the kernel.
As long as we can get a sysrq 'y' to the kernel, we can
get a dump. Even if the console interrupt cpu is wedged,
we can trigger it from userspace using /proc/sysrq-trigger.
The output is made compact so that this facility is more
useful on high cpu count systems, which is where this
facility will likely find itself the most useful :)
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch removes the CVS keywords that weren't updated for a long time
from comments.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This fixes the missing ram regression reported by
Mikael Pettersson <mikpe@it.uu.se>; many thanks for
all of his help in diagnosing this.
The second argument to lmb_reserve() is a size,
not an end address.
Tested-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: David S. Miller <davem@davemloft.net>
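The distinction the fix hinges on, sketched with the kernel-image
reservation; kern_base/kern_size are typical names, used here
illustratively:
    /* lmb_reserve(base, size) -- the second argument is a length.          */
    lmb_reserve(kern_base, kern_size);                   /* correct         */

    /* Passing an end address instead reserves far more than intended:      */
    /* lmb_reserve(kern_base, kern_base + kern_size);       -- wrong        */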
|
|
Read all of the OF memory and translation tables, then read
the physical available memory list twice.
When making these requests, OF can allocate more memory to
do its job, which can remove pages from the available
memory list.
So fetch in all of the tables at once, and fetch the available
list last to make sure we read a stable value.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We die because we forget to convert initrd_start and
initrd_end to virtual addresses.
Reported by Mikael Pettersson.
Signed-off-by: David S. Miller <davem@davemloft.net>
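What the missing conversion looks like, as a fragment; ramdisk_image and
ramdisk_size stand in for the values handed over by the bootloader:
    /* The bootloader reports physical addresses; initrd_start/initrd_end
     * are expected to be kernel virtual addresses, so add PAGE_OFFSET.     */
    initrd_start = ramdisk_image + PAGE_OFFSET;
    initrd_end   = initrd_start + ramdisk_size;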
|
|
The identical online_page() implementations from all architectures got
moved to mm/memory_hotplug.c - except for the sparc64 one, which was
even dead code due to MEMORY_HOTPLUG not being available there.
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
sparc64: remove duplicated include
sparc: Add kgdb support.
kgdbts: Sparc needs sstep emulation.
sparc32: Kill smp_message_pass() and related code.
sparc64: Kill PIL_RESERVED, unused.
sparc64: Split entry.S up into separate files.
|
|
Current limitations:
1) On SMP, single stepping has some fundamental issues,
shared with other sw single-step architectures such
as mips and arm.
2) On 32-bit sparc we don't support SMP kgdb yet. That
requires some reworking of the IPI mechanisms and
infrastructure on that platform.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Now that we indicate the "restart system call" in the
trap type field of pt_regs->magic, we don't need to
set the %l6 boolean in all of the trap return paths.
And we therefore don't need to pass it to do_notify_resume().
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently there is only code to parse NUMA attributes on
sun4v/niagara systems, but later on we will add such parsing
for older systems.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We have to do it like this before we can move the PROM and MDESC device
tree code over to using lmb_alloc().
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This allows us to kill the incredibly complicated and stupid function
trim_pavail().
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Call lmb_add() on available regions, and call lmb_reserve()
on the main kernel image and the ramdisk (if any).
Signed-off-by: David S. Miller <davem@davemloft.net>
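Roughly what that registration looks like, as a fragment; pavail,
kern_base, kern_size and the initrd variables reflect common sparc64 names
but are assumptions here:
    /* Tell lmb about every available physical range ...                    */
    for (i = 0; i < pavail_ents; i++)
            lmb_add(pavail[i].phys_addr, pavail[i].reg_size);

    /* ... then carve out what must never be handed back.                   */
    lmb_reserve(kern_base, kern_size);                    /* kernel image   */
    #ifdef CONFIG_BLK_DEV_INITRD
    if (initrd_start)
            lmb_reserve(initrd_start, initrd_end - initrd_start);
    #endif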
|
|
And add some comments explaining all of the quirks involved in
the way the bootloader provides this information.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
ext4 uses ZERO_PAGE(0) to zero out blocks. We need to export
different symbols in different arches for the usage of ZERO_PAGE
in modules.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
NR_PAGEFLAGS specifies the number of page flags we are using. From that we
can calculate the number of bits left over that can be used for zone, node (and
maybe the section id). There is no need anymore for FLAGS_RESERVED if we use
NR_PAGEFLAGS.
Use the new methods to make NR_PAGEFLAGS available via the preprocessor.
NR_PAGEFLAGS is used to calculate field boundaries in the page flags fields.
These field widths have to be available to the preprocessor.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: David Miller <davem@davemloft.net>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
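The kind of layout arithmetic NR_PAGEFLAGS feeds, sketched; the width
macros exist in the page-flags layout code, but the exact check shown is an
approximation:
    /* Bits of page->flags not consumed by flag bits can hold the
     * section, node and zone numbers.                                      */
    #if SECTIONS_WIDTH + NODES_WIDTH + ZONES_WIDTH > BITS_PER_LONG - NR_PAGEFLAGS
    #error "Not enough bits in page->flags"
    #endif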
|
|
Noticed by Andrew Morton.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Reported by Mariusz Kozlowski.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add 'UL' markers to DCU_* macros.
Declare C functions called from assembler in entry.h
Declare C functions called from within the sparc64 arch
code in include/asm-sparc64/*.h headers as appropriate.
Remove unused routines in traps.c
Signed-off-by: David S. Miller <davem@davemloft.net>
|