Age | Commit message | Author |
|
Fix build errors caused by a missing export of flush_icache_range(), as
seen in kisskb.ellerman.id.au/kisskb/buildresult/11677809/:
ERROR: "flush_icache_range" [drivers/misc/lkdtm.ko] undefined!
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Chris Zankel <chris@zankel.net>
Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa]
Cc: Noam Camus <noamc@ezchip.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Acked-by: Zhigang Lu <zlu@tilera.com> [tile]
Cc: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
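The fix amounts to exporting the symbol so that modular users such as
lkdtm can resolve it at load time. A minimal sketch of the kind of change
involved (the exact file it lands in varies per architecture):

    #include <linux/module.h>
    #include <asm/cacheflush.h>

    /* Let modules such as lkdtm.ko resolve flush_icache_range() at load time. */
    EXPORT_SYMBOL(flush_icache_range);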
|
|
Commit bcf24e1daa94 ("mmc: omap_hsmmc: use the generic config for
omap2plus devices") enabled building the driver on other platforms for
compile testing.
sh-allmodconfig now fails with:
include/linux/omap-dma.h:171:8: error: expected identifier before numeric constant
make[4]: *** [drivers/mmc/host/omap_hsmmc.o] Error 1
This happens because SuperH #defines "CCR", which is one of the enum
values in include/linux/omap-dma.h. There's a similar issue with "CCR2"
on sh2a.
As "CCR" and "CCR2" are too generic names for global #defines, prefix
them with "SH_" to fix this.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
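To illustrate the clash (the register value below is a placeholder, not
the real SuperH definition): a bare global macro from the arch headers
gets substituted into the unrelated enum, which then no longer parses.

    /* SuperH header before the fix (placeholder address):          */
    /*   #define CCR 0xff00001c                                     */
    /* With that macro visible, the enum member below preprocesses  */
    /* to "0xff00001c," and the header fails to compile.            */

    /* include/linux/omap-dma.h */
    enum {
            CCR,    /* fine once the arch-side macro is renamed */
    };

    /* SuperH header after the fix: */
    #define SH_CCR 0xff00001c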
|
|
Signed-off-by: Cong Wang <amwang@redhat.com>
|
|
This resolves a problem seen when using the Android dynamic linker.
Sometimes the dynamic linker would segfault at startup; this was
eventually traced to the handling of a COW fault for a page which was
being modified by the linker. If there was no cache aliasing between the
kernel and user mappings of the page, the page was not flushed, leaving
the newly copied data in the D-cache. However, when executing
instructions from that page, the I-cache is filled directly from external
memory rather than from the D-cache, causing garbage to be executed.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
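A hedged sketch of the idea behind the fix: after the COW copy is made
through the kernel mapping, the new data must be pushed out of the
D-cache (and stale I-cache lines invalidated) for executable mappings,
even when no user/kernel alias exists. The helper below is illustrative
only, not the actual SH implementation:

    #include <linux/highmem.h>
    #include <linux/mm.h>
    #include <asm/cacheflush.h>

    static void cow_copy_page(struct page *to, struct page *from,
                              struct vm_area_struct *vma)
    {
            void *vto = kmap_atomic(to);
            void *vfrom = kmap_atomic(from);

            copy_page(vto, vfrom);

            /*
             * The I-cache fetches from memory, not from the D-cache, so the
             * copied instructions must be written back before user space can
             * safely execute them -- regardless of any aliasing.
             */
            if (vma->vm_flags & VM_EXEC)
                    flush_icache_range((unsigned long)vto,
                                       (unsigned long)vto + PAGE_SIZE);

            kunmap_atomic(vfrom);
            kunmap_atomic(vto);
    }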
|
|
This follows the ARM change c01778001a4f5ad9c62d882776235f3f31922fdd
("ARM: 6379/1: Assume new page cache pages have dirty D-cache") for the
same rationale:
There are places in Linux where writes to newly allocated page
cache pages happen without a subsequent call to flush_dcache_page()
(several PIO drivers including USB HCD). This patch changes the
meaning of PG_arch_1 to be PG_dcache_clean and always flushes the
D-cache for a newly mapped page in update_mmu_cache().
This addresses issues seen with executing binaries from MMC, in
addition to some of the other HCDs that don't explicitly do cache
management for their pipe-in buffers.
Requested-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
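Roughly, the new scheme treats PG_arch_1 as a "D-cache clean" flag that
new page cache pages do not have set, so the first time such a page is
mapped the fault path flushes it. A hedged sketch along the lines of the
helper behind update_mmu_cache(), assuming SH's __flush_purge_region()
writeback-and-invalidate primitive:

    #include <linux/mm.h>
    #include <asm/cacheflush.h>

    #define PG_dcache_clean PG_arch_1       /* arch-private alias, as on ARM */

    void __update_cache(struct vm_area_struct *vma,
                        unsigned long address, pte_t pte)
    {
            unsigned long pfn = pte_pfn(pte);
            struct page *page;

            if (!pfn_valid(pfn))
                    return;

            page = pfn_to_page(pfn);
            /* Newly allocated page cache pages start out "not clean". */
            if (!test_and_set_bit(PG_dcache_clean, &page->flags))
                    __flush_purge_region(page_address(page), PAGE_SIZE);
    }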
|
|
This enables support for the hardware synonym avoidance handling on SH-X3
CPUs for the case where D-cache aliases are possible. I-cache handling is
retained, but we enable broadcasting of the block invalidations, since
the I-cache is otherwise not kept coherent on SMP.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
flush_cache_all() gets called when we do some early ioremapping.
Unfortunately, on SDK7786 the interrupt controller itself requires
ioremapping, leading to a bit of a chicken-and-egg scenario. For now,
don't bother with IPI crosscalls if there aren't any other CPUs online.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
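A hedged sketch of the guard (names follow SH's local/global flusher
split; the exact call site differs):

    #include <linux/smp.h>
    #include <asm/cacheflush.h>

    void flush_cache_all(void)
    {
            /*
             * During early ioremap the interrupt controller may not be
             * mapped yet, so only go through the IPI path once secondary
             * CPUs are actually online.
             */
            if (num_online_cpus() > 1)
                    on_each_cpu(local_flush_cache_all, NULL, 1);
            else
                    local_flush_cache_all(NULL);
    }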
|
|
With some of the cache rework an address aliasing optimization was added,
but this managed to fail on certain mappings, resulting in pages with
PG_dcache_dirty set never writing back their D-cache lines. This patch
reverts to the earlier behaviour of simply always writing back when the
dirty bit is set.
Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
These still require more testing, so revert them for now. We keep the
off-by-1 in the fixmap colouring and drop the rest.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
The previous implementation of clear_user_highpage and copy_user_highpage
checked whether there was a D-cache aliasing issue between the user and
kernel mappings of a page, and if there was, they always did a flush with
writeback of the dirtied kernel alias.
However, as we now have the ability to map a page into kernel space with
the same cache colour as the user mapping, there is no need to write back
this data.
Currently we also invalidate the kernel alias as a precaution, though I'm
not sure whether this is actually required.
Also correct the definition of FIX_CMAP_END so that the mappings created
by kmap_coherent() are actually at the correct colour.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
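A hedged sketch of why the writeback can go away: when the destination is
mapped through kmap_coherent() at the same colour as the user virtual
address, the user mapping hits the very lines the kernel just wrote, so
there is nothing to write back. The simplified copy below ignores the
aliasing fast paths of the real code:

    #include <linux/highmem.h>
    #include <asm/cacheflush.h>

    void copy_user_highpage(struct page *to, struct page *from,
                            unsigned long vaddr, struct vm_area_struct *vma)
    {
            void *vfrom, *vto;

            /* Kernel-side mapping at the user's cache colour: no alias dirtied. */
            vto = kmap_coherent(to, vaddr);
            vfrom = kmap_atomic(from);

            copy_page(vto, vfrom);

            kunmap_atomic(vfrom);
            /* Precautionary invalidate of the kernel alias; possibly not needed. */
            kunmap_coherent(vto);
    }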
|
|
This gets the build fixed up for the sh64 cache-enabled case.
The cache-disabled case still needs further abstraction so that the
I-cache and D-cache can be disabled independently.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
These were previously hidden in sh_ksyms_32, despite also being needed
for sh64 now that the cache.c code is shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Conflicts:
arch/sh/mm/cache-sh4.c
|
|
Add code to handle the cache-disabled case. This fixes breakage introduced
by 37443ef3f0406e855e169c87ae3f4ffb4b6ff635 ("sh: Migrate SH-4 cacheflush
ops to function pointers."). Without this patch, configuring the caches
off with CONFIG_CACHE_OFF=y makes kfr2r09 and migo-r lock up in fbdev
deferred I/O or early user space.
Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
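A hedged sketch of the shape of the fix (hook names as used in the SH
cache code, details simplified): keep the default no-op flushers when the
caches are configured off, rather than overriding them with the SH-4
routines.

    #include <linux/init.h>
    #include <asm/cacheflush.h>

    void __init sh4_cache_init(void)
    {
            /*
             * The shared cache code pre-initializes every local_flush_*
             * hook to a safe no-op; only install the real SH-4 flushers
             * when the caches are in use, so a CONFIG_CACHE_OFF=y kernel
             * keeps the no-op behaviour instead of poking disabled
             * cache hardware.
             */
    #if !defined(CONFIG_CACHE_OFF)
            local_flush_dcache_page  = sh4_flush_dcache_page;
            local_flush_icache_range = sh4_flush_icache_range;
            /* ...remaining SH-4 flushers wired up the same way... */
    #endif
    }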
|
|
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.
This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.
Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
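In essence the check goes from "is this page still mapped?" to "does this
page still have dirty D-cache lines recorded against it?". A hedged
before/after sketch of that test, again assuming SH's
__flush_purge_region() helper and PG_dcache_dirty as the arch alias for
PG_arch_1:

    #include <linux/mm.h>
    #include <asm/cacheflush.h>

    /* Sketch of the check in the update_mmu_cache() path. */
    static void lazy_dcache_writeback(struct page *page)
    {
    #if 0   /* before: the flush was skipped once the mapping had gone away */
            if (!page_mapping(page))
                    return;
    #endif
            /*
             * After: the flag alone decides, so a mapping torn down early
             * (e.g. swap cache freed before the fault completes) still gets
             * its dirty D-cache lines written back.
             */
            if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
                    __flush_purge_region(page_address(page), PAGE_SIZE);
    }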
|
|
The I-cache flush in the VM_EXEC case was added way back when as a sanity
measure, and in practice we only care about evicting aliases from the
D-cache. As a result, it's possible to drop the I-cache flush completely
here.
After careful profiling it has also become apparent that all of the work
associated with hunting down aliases and doing ranged flushing generates
more overhead than simply blasting away the entire D-cache, particularly
if there are many mm's that need to be iterated over. As a result, just
move back to flush_dcache_all() in these cases, which restores the old
behaviour and vastly simplifies the path.
Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but it applies to all of the platforms, so move the check up to
a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
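A hedged sketch of the generic short-circuit this describes: CPUs that
probe zero D-cache aliases bail out before any crosscall is issued, and
the aliasing CPUs simply flush the whole D-cache rather than walking
ranges.

    #include <linux/mm_types.h>
    #include <asm/processor.h>      /* boot_cpu_data on SH */
    #include <asm/cacheflush.h>

    void flush_cache_mm(struct mm_struct *mm)
    {
            /* No D-cache aliases on this CPU: nothing to do, no IPIs sent. */
            if (boot_cpu_data.dcache.n_aliases == 0)
                    return;

            /* Otherwise fall back to flushing the entire D-cache. */
            cacheop_on_each_cpu(local_flush_cache_mm, mm, 1);
    }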
|
|
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This could be handled more optimally,
but it is a safe fallback until all of the corner cases have been dealt
with.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding some optimizations.
One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() goes
away, relying on the regular page fault path to handle the lazy D-cache
writeback if necessary.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
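A hedged sketch of such a helper -- essentially an on_each_cpu() that
leaves the IRQ state alone; the name follows the SH cache code:

    #include <linux/preempt.h>
    #include <linux/smp.h>

    static void cacheop_on_each_cpu(void (*func)(void *info), void *info, int wait)
    {
            preempt_disable();

            /* Crosscall the other CPUs (a no-op on UP)... */
            smp_call_function(func, info, wait);

            /*
             * ...and run locally without touching the IRQ state, leaving
             * any interrupt disabling to the flusher itself.
             */
            func(info);

            preempt_enable();
    }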
|
|
This does a bit of rework to make the cache flushers SMP-aware. The
function-pointer-based flushers are renamed to local variants, with the
exported interface implemented commonly and wrapping them as necessary.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
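For example, the exported entry points reduce to thin wrappers around the
local variants; a hedged sketch, with the argument marshalling through
SH's flusher_data block simplified:

    #include <asm/cacheflush.h>

    void flush_dcache_page(struct page *page)
    {
            cacheop_on_each_cpu(local_flush_dcache_page, page, 1);
    }

    void flush_icache_range(unsigned long start, unsigned long end)
    {
            struct flusher_data data = {
                    .addr1 = start,
                    .addr2 = end,
            };

            cacheop_on_each_cpu(local_flush_icache_range, &data, 1);
    }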
|
|
Now that the SH-5 code is more or less behaving with the new cacheflush
interface, wire up the initialization code.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This paves the way for allowing individual CPUs to override the specific
flushing routines that they care about, without having to depend on weak
aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
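Concretely, the shared code carries a set of function pointers
initialized to a no-op, which each CPU family's init code then overrides
for the ops it actually implements; a hedged sketch of the pattern:

    /* Shared cache code: every op defaults to a safe no-op... */
    static void cache_noop(void *args) { }

    void (*local_flush_cache_all)(void *args) = cache_noop;
    void (*local_flush_cache_page)(void *args) = cache_noop;
    void (*local_flush_dcache_page)(void *args) = cache_noop;
    void (*local_flush_icache_range)(void *args) = cache_noop;

    /*
     * ...and a CPU family's init code (e.g. sh4_cache_init()) simply
     * assigns its own sh4_flush_*() implementations over the hooks it
     * provides -- no weak aliases required.
     */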
|
|
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().
This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
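The probe boils down to computing an alias mask from the index bits that
sit above the page size; a hedged sketch of the computation now done
centrally, with field names as in SH's struct cache_info:

    #include <asm/cache.h>
    #include <asm/page.h>

    static void compute_alias(struct cache_info *c)
    {
            /*
             * Index bits above PAGE_SIZE can select different cache lines
             * for the same physical page: those bits are the aliases.
             */
            c->alias_mask = ((c->sets - 1) << c->entry_shift) & ~(PAGE_SIZE - 1);
            c->n_aliases = c->alias_mask ? (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
    }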
|
|
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
|
This builds in the newly created cache.c (renamed from pg-mmu.c) for both
MMU and NOMMU configurations. The kmap_coherent() stubs and the alias
information recorded by each CPU family take care of doing the right
thing while enabling the code to be commonly shared.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|