Age | Commit message | Author |
|
Remove the legacy APB DMA driver. APB DMA support has moved to the
dmaengine-based Tegra APB DMA driver, and all clients have been
converted to it as well.
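
For illustration, a minimal sketch of what a converted client looks
like on the generic dmaengine API; the function and parameter names
here are hypothetical, and real Tegra clients additionally set up a
dma_slave_config:

#include <linux/dmaengine.h>
#include <linux/errno.h>

static int start_tx(struct dma_chan *chan, dma_addr_t buf, size_t len)
{
	struct dma_async_tx_descriptor *desc;

	/* Generic slave-DMA request; replaces driver-private enqueue calls. */
	desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					   DMA_PREP_INTERRUPT);
	if (!desc)
		return -ENOMEM;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);
	return 0;
}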
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
|
Rename the clock entry of the Tegra APBDMA driver from tegra-dma to
tegra-apbdma. The new name better reflects the move to the
dmaengine-based DMA driver.
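
A minimal sketch of such a lookup-table rename, assuming a
clkdev-style table; the table and clk variable names are hypothetical:

#include <linux/clkdev.h>

extern struct clk tegra_apbdma_clk;	/* hypothetical clock object */

static struct clk_lookup tegra_clk_lookups[] = {
	/* was: CLKDEV_INIT("tegra-dma", NULL, &tegra_apbdma_clk) */
	CLKDEV_INIT("tegra-apbdma", NULL, &tegra_apbdma_clk),
};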
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
|
Use clk_prepare/clk_unprepare as required by the generic clk framework.
Tested on Ventana and Cardhu.
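
A minimal sketch of the conversion (wrapper names hypothetical):

#include <linux/clk.h>

/* Before: clk_enable()/clk_disable() alone. With the generic clk
 * framework the clock must also be prepared/unprepared, which may
 * sleep and therefore belongs in process context. */
static int tegra_dma_clk_on(struct clk *clk)
{
	int ret;

	ret = clk_prepare(clk);		/* new: prepare before enabling */
	if (ret)
		return ret;

	ret = clk_enable(clk);
	if (ret)
		clk_unprepare(clk);
	return ret;
}

static void tegra_dma_clk_off(struct clk *clk)
{
	clk_disable(clk);
	clk_unprepare(clk);		/* new: unprepare after disabling */
}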
Signed-off-by: Prashant Gaikwad <pgaikwad@nvidia.com>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
|
|
It is not required to move the DMA requestor to the INVALID option
before stopping the DMA.
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
In order to read an accurate channel transfer count
from the APB DMA engine, the DMA controller must be
paused first.
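
A sketch of the resulting read sequence; the register offsets, bits,
and delay below are placeholders, not the real APB DMA layout. The
point is the ordering: pause, settle, read, resume.

#include <linux/io.h>
#include <linux/delay.h>

#define APB_DMA_GEN		0x000		/* hypothetical offsets */
#define GEN_ENABLE		(1u << 31)
#define APB_DMA_CHAN_STA	0x004
#define STA_COUNT_MASK		0xfffc

static unsigned int tegra_dma_transfer_count(void __iomem *dma_base,
					     void __iomem *chan_base)
{
	unsigned int count;

	/* Pause the whole controller so the count cannot change mid-read. */
	writel(0, dma_base + APB_DMA_GEN);
	udelay(20);				/* let in-flight bursts drain */

	count = readl(chan_base + APB_DMA_CHAN_STA) & STA_COUNT_MASK;

	/* Resume DMA. */
	writel(GEN_ENABLE, dma_base + APB_DMA_GEN);

	return count;
}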
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
Tegra2 hangs if APB registers are accessed from the CPU during an
APB DMA operation. The workaround is to use APB DMA to read/write the
registers instead.
There is a dependency loop between fuses, clocks, and APBDMA. If DMA
is enabled, fuse reads must go through APBDMA to avoid corruption due
to a hardware bug. APBDMA requires a clock to be enabled. Clocks must
read a fuse to determine allowable CPU frequencies.
Separate out the fuse DMA initialization, and allow the fuse read
and write functions to be called without using DMA before the DMA
initialization has been completed. Access to the fuses before APBDMA
is initialized won't hit the hardware bug because nothing else can be
using DMA.
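
A sketch of the resulting fallback logic (the DMA read helper here is
hypothetical):

#include <linux/io.h>

static bool fuse_dma_initialized;	/* set once the DMA path is ready */

static u32 fuse_dma_readl(void __iomem *base, unsigned long offset);	/* hypothetical */

static u32 fuse_readl(void __iomem *fuse_base, unsigned long offset)
{
	/* Before APB DMA is initialized nothing else can be using DMA,
	 * so a direct CPU read cannot trigger the hardware bug. */
	if (!fuse_dma_initialized)
		return readl(fuse_base + offset);

	/* Once DMA is up, route reads through APB DMA instead. */
	return fuse_dma_readl(fuse_base, offset);
}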
Original fuse register access code from Varun Wadekar
<vwadekar@nvidia.com>, improved by Colin Cross <ccross@android.com>,
and later moved to a separate driver by Jon Mayo <jmayo@nvidia.com>.
Major refactoring/cleanup by Olof Johansson <olof@lixom.net>.
Changes since v1:
* fix 'return false' on error condition
* dequeue dma ops in case of timeout
From: Jon Mayo <jmayo@nvidia.com>
Signed-off-by: Jon Mayo <jmayo@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Stephen Warren <swarren@nvidia.com>
|
|
Since we'll do opportunistic allocations before the DMA subsystem is
enabled, we want silent failures and retries instead.
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
None of them are used externally.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
|
|
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
|
|
tegra_dma_init currently simply bails out early if any initialization fails.
This skips various data-structure initialization. In turn, this means that
tegra_dma_allocate_channel can still hand out channels. In this case, when
tegra_dma_free_channel is called, which calls tegra_dma_cancel,
walking ch->list will oops since the list's next/prev pointers may
still be NULL.
To solve this, add an explicit "initialized" flag, only set this once _init
has fully completed successfully, and have _allocate_channel refuse to hand
out channels if this is not set.
While at it, simplify _init:
* Remove redundant memsets
* Use bitmap_fill to mark all channels as in-use up-front, and remove
some now-redundant bitmap initialization loops.
* Only mark a channel as free once all channel-related initialization has
completed.
Finally, the successful exit path from _init always has ret==0, so just
hard-code that return. The error path still returns ret.
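
A condensed sketch of the result (helper names hypothetical; the
channel-count constant follows the old mach/dma.h):

#include <linux/bitmap.h>
#include <linux/init.h>

#define NV_DMA_MAX_CHANNELS	32

struct tegra_dma_channel;
static struct tegra_dma_channel *find_free_channel(int mode);	/* hypothetical */
static int init_channels(void);					/* hypothetical */

static DECLARE_BITMAP(channel_usage, NV_DMA_MAX_CHANNELS);
static bool tegra_dma_initialized;	/* set only after init fully succeeds */

struct tegra_dma_channel *tegra_dma_allocate_channel(int mode)
{
	if (!tegra_dma_initialized)
		return NULL;		/* refuse to hand out channels */
	return find_free_channel(mode);
}

static int __init tegra_dma_init(void)
{
	int ret;

	/* Mark every channel in-use up front; a channel is freed only
	 * once its own initialization has completed. */
	bitmap_fill(channel_usage, NV_DMA_MAX_CHANNELS);

	ret = init_channels();
	if (ret)
		return ret;		/* error path returns ret */

	tegra_dma_initialized = true;
	return 0;			/* success path always has ret == 0 */
}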
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Commit 0cf6230af909a86f81907455eca2a5c9b8f68fe6 ("ARM: tegra: Move
tegra_common_init to tegra_init_early") makes the Tegra APB DMA engine
fail to initialize correctly. The reason is that tegra_init_early
calls tegra_dma_init, which calls request_threaded_irq; this fails
since the IRQ hasn't yet been marked valid, which only happens in
tegra_init_irq, called after tegra_init_early.
This used to work OK, since tegra_init_early was tegra_common_init, which
got called after tegra_init_irq, basically from the beginning of
tegra_harmony_init.
Solve this by converting tegra_dma_init to a postcore_initcall. This makes
it execute late enough that IRQs are marked valid, and avoids having to
add it back to every machine's init function.
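
The change boils down to this (sketch):

#include <linux/init.h>

static int __init tegra_dma_init(void)
{
	/* By postcore_initcall time, tegra_init_irq has already run and
	 * the IRQs are marked valid, so request_threaded_irq succeeds. */
	/* ... controller setup, channel setup, IRQ requests ... */
	return 0;
}
postcore_initcall(tegra_dma_init);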
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
|
|
If a request already in the queue is passed to tegra_dma_enqueue_req,
tegra_dma_req.node->{next,prev} will end up pointing to itself instead
of at tegra_dma_channel.list, which is how the end of the list
should be set up. When the DMA request completes and is list_del'd,
the list head will still point at it, yet the node's next/prev will
contain the list poison values. When the next DMA request completes,
a kernel panic will occur when those poison values are dereferenced.
This makes the DMA driver more robust in the face of buggy clients.
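
A sketch of the kind of check that provides this robustness
(structures trimmed to the relevant fields):

#include <linux/list.h>
#include <linux/errno.h>

struct tegra_dma_channel {
	struct list_head list;		/* queue of pending requests */
};

struct tegra_dma_req {
	struct list_head node;
};

int tegra_dma_enqueue_req(struct tegra_dma_channel *ch,
			  struct tegra_dma_req *req)
{
	struct tegra_dma_req *cur;

	/* Reject a request that is already queued, so list_add_tail()
	 * can never leave node->{next,prev} pointing at itself. */
	list_for_each_entry(cur, &ch->list, node) {
		if (cur == req)
			return -EEXIST;	/* buggy client: already queued */
	}

	list_add_tail(&req->node, &ch->list);
	return 0;
}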
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Sometimes, due to high interrupt latency in the continuous mode
of DMA transfer, the half buffer complete interrupt is handled
after DMA has transferred the full buffer. When this is detected,
stop DMA immediately and restart with the next buffer if the next
buffer is ready.
Originally fixed by Victor (Weiguo) Pan <wpan@nvidia.com>.
In place of the simple spin_lock()/spin_unlock() in the interrupt
thread, use spin_lock_irqsave() and spin_unlock_irqrestore(); the lock
is shared between normal process context and interrupt context.
Originally fixed by Laxman Dewangan <ldewangan@nvidia.com>.
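
The locking pattern, in sketch form:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(ch_lock);	/* shared with the interrupt handler */

static void update_channel_state(void)
{
	unsigned long flags;

	/* irqsave/irqrestore, because the same lock is taken from both
	 * process context and interrupt context; a plain spin_lock()
	 * here could deadlock against the interrupt path. */
	spin_lock_irqsave(&ch_lock, flags);
	/* ... touch shared channel state ... */
	spin_unlock_irqrestore(&ch_lock, flags);
}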
The use of shadow registers caused memory corruption at physical
address 0 because the enable bit was not shadowed, and assuming it
needed to be set would enable an unconfigured dma block. Most of the
register accesses don't need to know the previous state of the
registers, and the few places that do need to modify only a few bits
in the registers are the same ones that were sometimes incorrectly
setting the enable bit. This patch converts tegra_dma_update_hardware
to set the entire register, converts the other users to
read-modify-write, and drops the shadow registers completely.
It also fixes missing locking in tegra_dma_allocate_channel.
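
Sketch of the read-modify-write that replaces the shadow copy (register
offset and bit layout hypothetical):

#include <linux/io.h>

#define APB_DMA_CHAN_CSR	0x000
#define CSR_REQ_SEL_SHIFT	16
#define CSR_REQ_SEL_MASK	(0x1f << CSR_REQ_SEL_SHIFT)

static void set_req_sel(void __iomem *chan_base, u32 req_sel)
{
	/* Read the live register rather than a shadow copy, so bits we
	 * do not own (such as the enable bit) are never guessed at. */
	u32 csr = readl(chan_base + APB_DMA_CHAN_CSR);

	csr &= ~CSR_REQ_SEL_MASK;
	csr |= req_sel << CSR_REQ_SEL_SHIFT;
	writel(csr, chan_base + APB_DMA_CHAN_CSR);
}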
Signed-off-by: Colin Cross <ccross@android.com>
|
|
Signed-off-by: Colin Cross <ccross@android.com>
|
|
The APB DMA block handles DMA transfers to and from some peripherals
in the Tegra SoC. It reads from sequential addresses on the memory
bus, and writes repeatedly to the same address on the APB bus.
Two transfer modes are supported: oneshot, for transferring a known
size to or from a peripheral, and continuous, for streaming data.
In continuous mode, a callback occurs when the buffer is half full
to allow the existing data to be handled and a new request queued.
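
A sketch of a continuous-mode client; the completion hook follows the
pre-dmaengine mach/dma.h API this driver exposed, but treat the exact
field and function names as approximate:

#include <mach/dma.h>	/* old pre-dmaengine Tegra DMA API */

static void rx_complete(struct tegra_dma_req *req)
{
	/* Runs at the half-buffer mark: consume the finished half while
	 * the DMA engine keeps filling the other half. */
}

static int start_streaming(struct tegra_dma_channel *ch,
			   struct tegra_dma_req *req)
{
	/* ch was allocated in continuous mode, e.g.
	 * tegra_dma_allocate_channel(TEGRA_DMA_MODE_CONTINUOUS). */
	req->complete = rx_complete;
	return tegra_dma_enqueue_req(ch, req);
}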
v2 changes:
dma API no longer uses PTR_ERR
Signed-off-by: Erik Gilling <konkers@android.com>
Signed-off-by: Colin Cross <ccross@android.com>
|