Age | Commit message | Author |
|
On Feroceon the L2 cache becomes non-coherent with the CPU
when the L1 caches are disabled. Thus the L2 needs to be invalidated
after both L1 caches are disabled.
On kexec, before starting the code that relocates the kernel,
the L1 caches are disabled in cpu_proc_fin (cpu_feroceon_proc_fin for Feroceon),
but afterwards the L2 cache is never invalidated, because inv_all is not set
in cache-feroceon-l2.c.
So kernel relocation and decompression may have (and usually do have) errors.
Setting the inv_all function enables L2 invalidation and fixes the issue.
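A minimal sketch of the shape of such a fix, assuming the existing whole-cache
invalidate helper in cache-feroceon-l2.c; apart from inv_all and the file name,
the identifiers below are illustrative:

	/* Expose a whole-cache invalidate through the outer_cache ops so
	 * the outer-cache invalidate issued on the kexec path (after the
	 * L1 caches are disabled) actually reaches the Feroceon L2. */
	static void feroceon_l2_inv_all(void)
	{
		l2_inv_all();
	}

	void __init feroceon_l2_init(int __l2_wt_override)
	{
		/* ... existing inv_range/clean_range/flush_range setup ... */
		outer_cache.inv_all = feroceon_l2_inv_all;	/* previously never set */
	}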
Cc: <stable@vger.kernel.org>
Signed-off-by: Illia Ragozin <illia.ragozin@grapecom.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Avoid namespace conflicts with drivers over the CP15 definitions by
moving the CP15-related prototypes and definitions to a private header
file.
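For illustration, a sketch of the sort of definition being relocated; the
header name (asm/cp15.h) and the exact contents are assumptions here, modelled
on the usual CP15 control-register accessors:

	/* arch/arm/include/asm/cp15.h (sketch): control register accessors
	 * kept out of the driver-visible headers. */
	static inline unsigned long get_cr(void)
	{
		unsigned long val;
		asm("mrc p15, 0, %0, c1, c0, 0	@ get CR" : "=r" (val) : : "cc");
		return val;
	}

	static inline void set_cr(unsigned long val)
	{
		asm volatile("mcr p15, 0, %0, c1, c0, 0	@ set CR"
			     : : "r" (val) : "cc");
	}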
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Since commit 3e4d3af501 "mm: stack based kmap_atomic()", it is actively
wrong to rely on fixed kmap type indices (namely KM_L2_CACHE) as
kmap_atomic() totally ignores them and a concurrent instance of it may
happily reuse any slot for any purpose. Because kmap_atomic() is now
able to deal with reentrancy, we can get rid of the ad hoc mapping here.
While the code is made much simpler, there is a needless cache flush
introduced by the use of __kunmap_atomic(). It is not clear whether the
performance gained by removing that flush would be worth the cost in code
maintenance (I don't think there are that many highmem users on that
platform anyway), but that should be reconsidered when/if someone cares
enough to do some measurements.
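A sketch of the simplified mapping this allows; the helper names are
illustrative, and kmap_atomic_pfn() is the pfn-based variant of the
stack-based kmap_atomic() mentioned above:

	static inline unsigned long l2_get_va(unsigned long paddr)
	{
	#ifdef CONFIG_HIGHMEM
		/* The stack-based kmap_atomic() picks its own slot, so no
		 * fixed KM_L2_CACHE index is passed (or needed) any more. */
		void *vaddr = kmap_atomic_pfn(paddr >> PAGE_SHIFT);
		return (unsigned long)vaddr + (paddr & ~PAGE_MASK);
	#else
		return __phys_to_virt(paddr);
	#endif
	}

	static inline void l2_put_va(unsigned long vaddr)
	{
	#ifdef CONFIG_HIGHMEM
		/* This is where the needless cache flush mentioned above can
		 * come from, depending on the path taken inside it. */
		__kunmap_atomic((void *)vaddr);
	#endif
	}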
Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
|
|
Strictly speaking, an MCR instruction does not produce any output.
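In GCC inline-assembly terms, the value handed to the coprocessor therefore
belongs in the input operand list, not the output list. A sketch (the specific
CP15 encoding shown is illustrative):

	static inline void l2_clean_pa(unsigned long addr)
	{
		/* Wrong:  __asm__("mcr p15, 1, %0, c15, c9, 3" : "=r" (addr));
		 * The MCR only reads %0, so declare it as an input instead: */
		__asm__("mcr p15, 1, %0, c15, c9, 3" : : "r" (addr));
	}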
Signed-off-by: Nicolas Pitre <nico@marvell.com>
|
|
I get random oopses on my Kirkwood board at startup when the L2 cache is
enabled. FYI, I'm using Marvell U-Boot version 3.4.16.
Each boot produces the same oops, but anything that changes the kernel
size (even just changing the initramfs) makes the oops different.
I noticed that nothing invalidates the L2 cache before enabling it;
doing so fixes my problem.
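A sketch of the resulting enable sequence; apart from the idea of invalidating
before enabling, the register accessors and the enable-bit name below are
illustrative:

	static void __init enable_l2(void)
	{
		u32 u = read_extra_features();		/* illustrative accessor */

		if (!(u & L2_ENABLE_BIT)) {		/* illustrative bit name */
			/* Invalidate whatever stale lines the boot loader left
			 * behind *before* the L2 is switched on, so they can
			 * never be hit once lookups start. */
			l2_inv_all();
			write_extra_features(u | L2_ENABLE_BIT);
		}
	}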
Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
|
|
The choice is between looping over the physical range and performing
single cache line operations, or mapping highmem pages somewhere, as
cache range ops are possible only on virtual addresses.
Because L2 range ops are much faster, we go with the latter by factoring
out the physical-to-virtual address conversion and using a fixmap entry
for it in the HIGHMEM case (sketched below).
Possible future optimizations to avoid the pte setup cost:
- do the pte setup for highmem pages only
- determine a threshold for doing line-by-line processing on physical
addresses when the range is small
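A sketch of that conversion, with the ad hoc fixmap mapping done only in the
HIGHMEM case; the helper name and the exact fixmap plumbing are illustrative
of the interfaces of that era:

	static inline unsigned long l2_start_va(unsigned long paddr)
	{
	#ifdef CONFIG_HIGHMEM
		/* Install a throw-away pte in a dedicated fixmap slot purely so
		 * the range operation has a virtual address to be issued on;
		 * this is the pte setup cost mentioned above. */
		unsigned long vaddr = __fix_to_virt(FIX_KMAP_BEGIN + KM_L2_CACHE);
		set_pte_ext(TOP_PTE(vaddr),
			    pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL), 0);
		local_flush_tlb_kernel_page(vaddr);
		return vaddr + (paddr & ~PAGE_MASK);
	#else
		return __phys_to_virt(paddr);
	#endif
	}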
Signed-off-by: Nicolas Pitre <nico@marvell.com>
|
|
Same fix as commit c7cf72dcadb: when 'start' and 'end' are less than a
cacheline apart and 'start' is unaligned, we are done after cleaning and
invalidating the first cacheline.
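A sketch of the corrected structure (helper names are illustrative); the key
change is the added 'start < end' guard, so a short, unaligned range that fits
inside one cache line is finished after the first clean+invalidate:

	static void feroceon_l2_inv_range(unsigned long start, unsigned long end)
	{
		/* Clean and invalidate a partial first cache line. */
		if (start & (CACHE_LINE_SIZE - 1)) {
			l2_clean_inv_pa(start & ~(CACHE_LINE_SIZE - 1));
			start = (start | (CACHE_LINE_SIZE - 1)) + 1;
		}

		/* Clean and invalidate a partial last cache line, but only if
		 * the first-line handling above did not already cover it. */
		if (start < end && (end & (CACHE_LINE_SIZE - 1))) {
			l2_clean_inv_pa(end & ~(CACHE_LINE_SIZE - 1));
			end &= ~(CACHE_LINE_SIZE - 1);
		}

		/* Invalidate all full cache lines between 'start' and 'end'. */
		while (start < end) {
			l2_inv_pa(start);
			start += CACHE_LINE_SIZE;
		}

		dsb();
	}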
Cc: <stable@kernel.org>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
- Make sure that the coprocessor instructions for range ops are contiguous
and not reordered (see the sketch after this list).
- s/invalidate_and_disable_dcache/flush_and_disable_dcache/
- Don't re-enable I/D caches if they were not enabled initially.
- Change some masks to shifts for better generated code.
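A sketch of what "contiguous and not reordered" means in practice: both CP15
writes of a range operation are issued from a single asm statement with
interrupts disabled, so nothing can come between the start-address write and
the end-address write (the encodings and helper name are illustrative):

	static inline void l2_clean_pa_range(unsigned long start, unsigned long end)
	{
		unsigned long flags;

		/* One asm statement keeps the compiler from reordering or
		 * separating the two writes; masking interrupts keeps any
		 * other CP15 access from slipping in between them. */
		raw_local_irq_save(flags);
		__asm__("mcr p15, 1, %0, c15, c9, 4\n\t"
			"mcr p15, 1, %1, c15, c9, 5"
			: : "r" (start), "r" (end));
		raw_local_irq_restore(flags);
	}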
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Acked-by: Lennert Buytenhek <buytenh@marvell.com>
|
|
This patch performs the equivalent include directory shuffle for
plat-orion, and fixes up all users.
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
|
|
This patch adds support for the unified Feroceon L2 cache controller
as found in e.g. the Marvell Kirkwood and Marvell Discovery Duo
families of ARM SoCs.
Note that:
- Page table walks are outer uncacheable on Kirkwood and Discovery
Duo, since the ARMv5 spec provides no way to indicate outer
cacheability of page table walks (specifying it in TTBR[4:3] is
an ARMv6+ feature).
This requires adding L2 cache clean instructions to
proc-feroceon.S (dcache_clean_area(), set_pte()) as well as to
tlbflush.h ({flush,clean}_pmd_entry()). The latter case is handled
by defining a new TLB type (TLB_FEROCEON) which is almost identical
to the v4wbi one but provides a TLB_L2CLEAN_FR flag.
- The Feroceon L2 cache controller supports L2 range (i.e. 'clean L2
range by MVA' and 'invalidate L2 range by MVA') operations, and this
patch uses those range operations for all Linux outer cache
operations, as they are faster than the regular per-line operations.
L2 range operations are not interruptible on this hardware, which
avoids potential livelock issues, but can be bad for interrupt
latency, so there is a compile-time tunable (MAX_RANGE_SIZE) which
allows you to select the maximum range size to operate on at once.
(Valid range is between one cache line and one 4KiB page, and must
be a multiple of the line size.)
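For illustration, a sketch of how such a tunable is typically applied when an
outer-cache operation is chopped into bounded range ops (the function name is
an assumption):

	static unsigned long calc_range_end(unsigned long start, unsigned long end)
	{
		unsigned long range_end = end;

		/* Cap the amount of data covered by one (uninterruptible)
		 * range operation to bound interrupt latency. */
		if (range_end > start + MAX_RANGE_SIZE)
			range_end = start + MAX_RANGE_SIZE;

		/* Keep each range operation within a single 4KiB page. */
		if (range_end > (start | (PAGE_SIZE - 1)) + 1)
			range_end = (start | (PAGE_SIZE - 1)) + 1;

		return range_end;
	}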
Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
|