3640 Commits

Author SHA1 Message Date
Dan Carpenter
4219ff33b2 dmaengine: stm32-dmamux: Fix a NULL vs IS_ERR() check in probe
devm_ioremap_resource() doesn't return NULL; it returns error pointers.

Fixes: df7e762db5f6 ("dmaengine: Add STM32 DMAMUX driver")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-08 16:17:39 +05:30
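
The corrected pattern, as a minimal sketch (variable names illustrative):

    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))                /* error pointer, never NULL */
            return PTR_ERR(base);
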
Pierre-Yves MORDRET
a4ffb13c89 dmaengine: Add STM32 MDMA driver
This patch adds the driver for the STM32 MDMA controller.

Master Direct Memory Access (MDMA) provides high-speed data transfer
between memory and memory or between peripherals and memory.

MDMA controller provides a master AXI interface for main memory and
peripheral registers access (system access port) and a master AHB
interface only for Cortex-M7 TCM memory access (TCM access port).

MDMA works in conjunction with the standard DMA controllers (DMA1 or DMA2).
It offers up to 64 channels, each dedicated to managing memory access
requests from one of the DMA stream memory buffers or from other
peripherals (with an integrated FIFO).

Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-10-08 14:45:34 +05:30
Sylvain Lesne
edf10919e5 dmaengine: altera: fix spinlock usage
Since this lock is acquired in both process and IRQ context, failing
to disable IRQs when trying to acquire the lock in process context can
lead to deadlocks.

Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-28 13:11:46 +05:30
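
The usual fix, sketched with illustrative lock and list names: use the
IRQ-saving variants in process context so the IRQ handler cannot preempt
a lock holder on the same CPU:

    unsigned long flags;

    spin_lock_irqsave(&dmadev->lock, flags);   /* disables local IRQs */
    list_add_tail(&desc->node, &dmadev->pending_list);
    spin_unlock_irqrestore(&dmadev->lock, flags);
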
Sylvain Lesne
d9ec46416d dmaengine: altera: fix response FIFO emptying
Commit 6084fc2ec478 ("dmaengine: altera: Use macros instead of structs
to describe the registers") introduced a minus sign before a register
offset.

This leads to soft-locks of the DMA controller, since reading the last
status byte is required to pop the response from the FIFO. Failing to
do so will lead to a full FIFO, which means that the DMA controller
will stop processing descriptors.

Signed-off-by: Sylvain Lesne <lesne@alse-fr.com>
Reviewed-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-28 13:11:46 +05:30
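
The shape of the bug, with hypothetical register names (the driver's
actual offsets differ):

    /* broken: the stray minus sign reads a bogus address, so the
     * response is never popped and the FIFO eventually fills up */
    status = ioread32(dev->base - RESP_STATUS_OFF);

    /* fixed: reading the last status byte at the correct offset
     * pops the response from the FIFO */
    status = ioread32(dev->base + RESP_STATUS_OFF);
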
Russell King
73d2a3cef4 dmaengine: sa11x0: add DMA filters
Add DMA filters for the sa11x0 DMA channels.  This will allow us to
migrate away from directly using the DMA filter function in drivers.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-27 16:11:51 +05:30
Pierre-Yves MORDRET
df7e762db5 dmaengine: Add STM32 DMAMUX driver
This patch implements the STM32 DMAMUX driver.

The DMAMUX request multiplexer allows routing a DMA request line between
the peripherals and the DMA controllers of the product. The routing
function is performed by a programmable multi-channel DMA request line
multiplexer. Each channel selects a unique DMA request line,
unconditionally or synchronously with events from its DMAMUX
synchronization inputs. The DMAMUX may also be used as a DMA request
generator from programmable events on its input trigger signals.
Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-27 16:01:35 +05:30
Sricharan R
6b4faeac05 dmaengine: qcom-bam: Process multiple pending descriptors
The bam dmaengine has a circular FIFO to which we
add hw descriptors that describe the transaction.
The FIFO has space for about 4096 hw descriptors.

Currently we add one descriptor, wait for it to
complete with an interrupt, and then add the next
pending descriptor. In this way the FIFO is underutilized,
since only one descriptor is processed at a time even
though there is space in the FIFO for the BAM to process more.

Instead, keep adding descriptors to the FIFO until it is full;
that allows the BAM to continue working on the next descriptor
immediately after signalling the completion interrupt for the
previous descriptor.

Also, when the client has not set DMA_PREP_INTERRUPT for
a descriptor, do not configure the BAM to trigger an interrupt
upon completion of that descriptor. This way we get an interrupt
only for the descriptor for which DMA_PREP_INTERRUPT was
requested, and at that point signal completion of all the previously
completed descriptors. So we still do callbacks for all requested
descriptors; only the number of interrupts is reduced.

CURRENT:

            --------     --------    ---------------
            |DESC 0|     |DESC 1|    |DESC 2 + INT |
            --------     --------    ---------------
               |            |              |
               |            |              |
INTERRUPT:   (INT)        (INT)          (INT)
CALLBACK:     (CB)         (CB)           (CB)

		MTD_SPEEDTEST READ PAGE: 3560 KiB/s
		MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
		IOZONE READ: 2456 KB/s
		IOZONE WRITE: 1230 KB/s

	bam dma interrupts (after tests): 96508

CHANGE:

        --------     --------    ---------------
        |DESC 0|     |DESC 1|    |DESC 2 + INT |
        --------     --------    ---------------
                                       |
                                       |
                                     (INT)
                                     (CB for 0, 1, 2)

		MTD_SPEEDTEST READ PAGE: 3860 KiB/s
		MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
		IOZONE READ: 2677 KB/s
		IOZONE WRITE: 1308 KB/s

	bam dma interrupts (after tests): 58806

Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Reviewed-by: Andy Gross <andy.gross@linaro.org>
Tested-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-25 11:47:26 +05:30
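
From the client side, the batching above corresponds to requesting an
interrupt only on the final descriptor; a hypothetical client sketch:

    for (i = 0; i < n; i++) {
            unsigned long flags = (i == n - 1) ? DMA_PREP_INTERRUPT : 0;
            struct dma_async_tx_descriptor *txd;

            txd = dmaengine_prep_slave_sg(chan, &sg[i], 1,
                                          DMA_MEM_TO_DEV, flags);
            if (!txd)
                    break;
            dmaengine_submit(txd);
    }
    dma_async_issue_pending(chan);   /* one IRQ, callbacks for all */
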
Peter Ujfalusi
2ccb4837c9 dmaengine: ti-dma-crossbar: Fix possible race condition with dma_inuse
When looking for an unused xbar_out lane we should also protect the
set_bit() call with the same mutex, to guard against concurrent threads
picking the same ID.

Fixes: ec9bfa1e1a796 ("dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 23:03:42 +05:30
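
A sketch of the fix (names follow the driver, details illustrative):
both the search and the set_bit() now sit under the same mutex, so two
threads cannot pick the same free lane:

    mutex_lock(&xbar->mutex);
    map->xbar_out = find_first_zero_bit(xbar->dma_inuse,
                                        xbar->dma_requests);
    if (map->xbar_out == xbar->dma_requests) {
            mutex_unlock(&xbar->mutex);
            kfree(map);
            return ERR_PTR(-ENOMEM);
    }
    set_bit(map->xbar_out, xbar->dma_inuse);   /* inside the lock now */
    mutex_unlock(&xbar->mutex);
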
Corentin Labbe
8f3b00347b dmaengine: sun6i: use of_device_get_match_data
Using of_device_get_match_data() reduces the code size a bit.
Furthermore, it prevents an improbable NULL dereference when
of_match_device() returns NULL.

Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:53:38 +05:30
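
The transformation, roughly (config type and table names illustrative):

    /* before: two steps, and a NULL match would be dereferenced */
    match = of_match_device(sun6i_dma_match, &pdev->dev);
    sdc->cfg = match->data;            /* crashes if match == NULL */

    /* after: one call that tolerates a missing match */
    sdc->cfg = of_device_get_match_data(&pdev->dev);
    if (!sdc->cfg)
            return -ENODEV;
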
Peter Ujfalusi
87a2f622cc dmaengine: edma: Align the memcpy acnt array size with the transfer
Memory-to-memory transfers do not have any special alignment needs
with regard to the acnt array size, but if one of the areas is in a
memory-mapped region (like PCIe memory), we need to make sure that the
acnt array size is aligned with the memcpy parameters.

Before the "dmaengine: edma: Optimize memcpy operation" change the memcpy
was set up in a different way: acnt == number of bytes in a word based on
__ffs(src | dest | len), with bcnt and ccnt looping the necessary number of
words to complete the transfer.

Instead of reverting the commit we can fix it to make sure that the ACNT
size is aligned to the transfer.

Fixes: df6694f80365a ("dmaengine: edma: Optimize memcpy operation")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:51:07 +05:30
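
The pre-optimization scheme described above, as a rough sketch (not the
driver's exact code; EDMA's 16-bit count limits are ignored here):

    acnt = 1 << __ffs(src | dest | len);   /* widest word size that src,
                                            * dest and len share */
    words = len / acnt;                    /* bcnt and ccnt then loop over
                                            * this many words */
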
Nicolin Chen
f9d4a398f1 dmaengine: imx-sdma: Correct src_addr_widths and directions
The driver already supports DMA_DEV_TO_DEV in sdma_config(), and
DMA_SLAVE_BUSWIDTH_2_BYTES and DMA_SLAVE_BUSWIDTH_1_BYTE in
sdma_prep_slave_sg(), so this patch adds them to the lists.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-21 22:45:42 +05:30
Lars-Peter Clausen
f3ae7d9155 dmaengine: xilinx_dma: Move enum xdma_ip_type to driver file
The enum xdma_ip_type is only used inside the Xilinx DMA driver and not
exported to any consumers (nor should it be). So move it from the global
header to the driver file itself.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17 18:59:54 +05:30
Lars-Peter Clausen
008913dbeb dmaengine: axi-dmac: Fix software cyclic mode
When running in software cyclic mode the driver currently does not go back
to the first segment once the last segment has been reached, effectively
making the transfer non-cyclic.

Fix this by going back to the first segment once the last segment has been
reached for cyclic transfers.

Special care needs to be taken to avoid a segment being submitted multiple
times concurrently, which could happen for transfers with a number of
segments smaller than the DMA controller's internal queue.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17 18:58:18 +05:30
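
A sketch of the wrap-around in the submit path (field names only loosely
follow the driver):

    if (desc->num_submitted == desc->num_sgs) {
            if (desc->cyclic)
                    desc->num_submitted = 0;   /* wrap to first segment */
            else
                    chan->next_desc = NULL;    /* non-cyclic: finished */
    }
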
Lars-Peter Clausen
63ab76dbbd dmaengine: axi-dmac: Only use hardware cyclic mode for single segment transfers
In hardware cyclic mode the submitted segment is repeated. This means
hardware cyclic mode can only be used if the transfer has a single segment.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-17 18:58:18 +05:30
Linus Torvalds
cd7b34fe1c dmaengine updates for 4.14-rc1
  - Removal of DMA_SG support as we have no users for this feature
  - New driver for Altera / Intel mSGDMA IP core
  - Support for memset in dmatest and qcom_hidma driver
  - Updates for non-cyclic mode in k3dma, a bunch of updates in bam_dma and bcm sba-raid
  - Constify device ids across drivers
 -----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQIcBAABAgAGBQJZsXteAAoJEHwUBw8lI4NHPwcP/iihF1n7jOQVtUm3zxPvUV+n
 GzU7+rqAEDLKBaIttK28LIjgvg0AC/4aiEsosfCzTjpkzMHteRw00YyplwF7/wdM
 O0owKOIub4PriDiL6d/SWFnhcWwv0/KLbyKscQcOwwvkksG/mwMn1VfW7alCrz1w
 81TOQaW9SxLxL7guJU0aQHljkudT53l8Dgsp55iC9Ccz515Iuu7dQm3DnSG3sYjJ
 Ct4u4MWWzDmmKKpbDoYe/Z+fiQT0WKuGfI7QHURVnw5qLo2sDKREWGbThhRG/lZj
 YlnLQnkjWwLU5dyX1MyIWipPxe83sjf/7OwJ7XUlLjD6o+lNEuQxjmNkVAh0hNRc
 dgrXRuqPRJMW40uOvAMDHTkexxikWc5ggt5LN9dIYDOdaS4Ch5ewf19SRi9pSDap
 FZeIWY1FWwQCAU7HQMwSYyRLBjlmEmeSkElkXCd+2wu5aH2oKOMUMbUIYcqL4fjD
 qMAR7kfn6e92fDT1gR1ZKL79Cfe9zsCQA3XmecpC/HwqiE3XtfZuDY/73cXD0MeO
 SbJUCv4ldPGjrTKBHvs0wiWbxi5Mj5sXglmSaD0lEhtMsOfhPHY2BGatTzSmKKwO
 WwmKAvM8qElQZy2Eh25dvlE04yAOofoJb6Pf/AraQOLTUkMyF8wRWEpltjUuttM9
 VzQLvh8s25naKM5mOAM2
 =88SI
 -----END PGP SIGNATURE-----

Merge tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma

Pull dmaengine updates from Vinod Koul:
 "This one features the usual updates to the drivers and one good part
  of removing DMA_SG from the core as it has no users.

  Summary:

   - Remove DMA_SG support as we have no users for this feature
   - New driver for Altera / Intel mSGDMA IP core
   - Support for memset in dmatest and qcom_hidma driver
   - Updates for non-cyclic mode in k3dma, a bunch of updates in bam_dma,
     bcm sba-raid
   - Constify device ids across drivers"

* tag 'dmaengine-4.14-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (52 commits)
  dmaengine: sun6i: support V3s SoC variant
  dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
  dmaengine: rcar-dmac: document R8A77970 bindings
  dmaengine: xilinx_dma: Fix error code format specifier
  dmaengine: altera: Use macros instead of structs to describe the registers
  dmaengine: ti-dma-crossbar: Fix dra7 reserve function
  dmaengine: pl330: constify amba_id
  dmaengine: pl08x: constify amba_id
  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
  dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
  dmaengine: bcm-sba-raid: Add debugfs support
  dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
  dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
  dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
  dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
  dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
  dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
  dmaengine: bcm-sba-raid: Increase number of free sba_request
  dmaengine: bcm-sba-raid: Allow arbitrary number free sba_request
  dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
  ...
2017-09-07 14:03:05 -07:00
Vinod Koul
41bd0314fa Merge branch 'topic/dmatest' into for-linus 2017-09-06 21:55:10 +05:30
Vinod Koul
346ea25e81 Merge branch 'topic/qcom' into for-linus 2017-09-06 21:54:48 +05:30
Vinod Koul
05890d550c Merge branch 'topic/ppc4xx' into for-linus 2017-09-06 21:54:41 +05:30
Vinod Koul
918a21eeec Merge branch 'topic/of' into for-linus 2017-09-06 21:54:31 +05:30
Vinod Koul
dccafbf2d3 Merge branch 'topic/k3dma' into for-linus 2017-09-06 21:54:24 +05:30
Vinod Koul
f6cc35eefe Merge branch 'topic/ioat' into for-linus 2017-09-06 21:54:16 +05:30
Vinod Koul
07e24b8559 Merge branch 'topic/bcm' into for-linus 2017-09-06 21:54:09 +05:30
Vinod Koul
a431cbafcb Merge branch 'topic/altera' into for-linus 2017-09-06 21:54:01 +05:30
Icenowy Zheng
a702e47eab dmaengine: sun6i: support V3s SoC variant
Allwinner V3s has a DMA engine similar to the ones from A31, but with
fewer channels and DRQs.

Add support for it.

Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Acked-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05 09:07:20 +05:30
Icenowy Zheng
0430a7c753 dmaengine: sun6i: make gate bit in sun8i's DMA engines a common quirk
Originally we enabled a special gate bit when the compatible string
indicated A23/33.

But according to BSP sources and user manuals, more SoCs will need this
gate bit.

So make it a common quirk configured in the config struct.

Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz>
Reviewed-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05 09:07:20 +05:30
Lars-Peter Clausen
574897dc14 dmaengine: xilinx_dma: Fix error code format specifier
'err' is a signed int and error codes are typically negative numbers, so
use '%d' instead of '%u' to format the error code in the error message.

Fixes: ba16db36b5dd ("dmaengine: vdma: Add clock support")
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Acked-by: Kedareswara rao Appana <appanad@xilinx.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-09-05 09:03:21 +05:30
Stefan Roese
6084fc2ec4 dmaengine: altera: Use macros instead of structs to describe the registers
This patch moves from a struct declaration for the DMA controller
registers to macros with offsets from the base address. This is mainly
done to remove the sparse warnings, since the function parameter of
ioread32/iowrite32 is "void __iomem *" instead of a pointer to struct
members. With this patch applied, no sparse warning is seen anymore.

Please note that the struct for the descriptors is still kept in place,
as the code largely accesses the struct members as internal variables
before the complete struct is copied into the descriptor FIFO of the
DMA controller.

Additionally, this patch removes two "variable xxx set but not used"
warnings seen when compiling with "W=1". The registers need to be read
to flush the response FIFO, but nothing needs to be done with the values.
So the code is correct here and the warnings are false positives.

Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-29 08:17:43 +05:30
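
The shape of the change, with illustrative names and offsets:

    /* before: dereferencing struct members loses the __iomem
     * annotation, so sparse warns */
    status = ioread32(&csr->status);

    /* after: the __iomem base pointer is used directly */
    #define MSGDMA_CSR_STATUS    0x00
    #define MSGDMA_CSR_CONTROL   0x04

    status = ioread32(base + MSGDMA_CSR_STATUS);
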
Alexander Smirnov
a2f6721b42 dmaengine: ti-dma-crossbar: Fix dra7 reserve function
DMA crossbar uses the 'xbar->dma_inuse' variable to manage allocated routes.
Each bit represents a respective DMA channel. If the channel is free, the
bit is set to '0'; if the channel is allocated, the bit should be set to '1'.

In the reserve function, the bits for the requested DMA channels are cleared,
so they are not really reserved, but freed and become ready for allocation.

Signed-off-by: Alexander Smirnov <asmirnov@ilbers.de>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 22:02:05 +05:30
Arvind Yadav
b753351ec8 dmaengine: pl330: constify amba_id
amba_id structures are not supposed to change at runtime, and all
functions work with const amba_id, so mark the non-const structs as const.

Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 21:11:08 +05:30
Arvind Yadav
b80fa1217c dmaengine: pl08x: constify amba_id
amba_id structures are not supposed to change at runtime, and all
functions work with const amba_id, so mark the non-const structs as const.

Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 21:11:08 +05:30
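
The change is mechanical; an illustrative (not verbatim) table:

    static const struct amba_id pl08x_ids[] = {   /* 'const' added */
            {
                    .id   = 0x00041080,
                    .mask = 0x000fffff,
                    .data = &vendor_pl080,
            },
            { 0, 0 },
    };
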
Anup Patel
ecbf9ef15a dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_COMPLETED
The SBA_REQUEST_STATE_COMPLETED state was added to keep track
of sba_requests which got completed but cannot be freed because the
underlying Async Tx descriptor was not ACKed by the DMA client.

Instead of the above, we can free a sba_request with a non-ACKed
Async Tx descriptor, and sba_alloc_request() will ensure that
it always allocates sba_requests with ACKed Async Tx descriptors.
This alternative approach makes the SBA_REQUEST_STATE_COMPLETED state
redundant, hence this patch removes it.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
29e0f486d9 dmaengine: bcm-sba-raid: Explicitly ACK mailbox message after sending
We should explicitly ACK the mailbox message because after
sending a message we can learn the send status via the error
attribute of brcm_message.

This will also help SBA-RAID use the "txdone_ack" method
whenever the mailbox controller supports it.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
8529a927e2 dmaengine: bcm-sba-raid: Add debugfs support
This patch adds debugfs support to report driver stats, which
in turn will help debug hang or error situations.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
b99c238669 dmaengine: bcm-sba-raid: Remove redundant SBA_REQUEST_STATE_RECEIVED
The SBA_REQUEST_STATE_RECEIVED state is now redundant because
received sba_requests are immediately freed or moved to the completed
list in sba_process_received_request().

This patch removes the redundant SBA_REQUEST_STATE_RECEIVED state.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
d6ffd2395a dmaengine: bcm-sba-raid: Re-factor sba_process_deferred_requests()
Currently, sba_process_deferred_requests() handles both pending
and completed sba_requests, which is unnecessary overhead for
sba_issue_pending() because completed sba_request handling is
not required there.

This patch breaks sba_process_deferred_requests() into two parts:
sba_process_received_request() and _sba_process_pending_requests().

sba_issue_pending() will only process pending sba_requests
by calling _sba_process_pending_requests(). This will improve
sba_issue_pending().

sba_receive_message() will only process received sba_requests
by calling sba_process_received_request() for each received
sba_request. sba_process_received_request() will also call
_sba_process_pending_requests() after handling a received sba_request,
because we might have pending sba_requests not submitted by the
previous call to sba_issue_pending().

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
fd8eb5395f dmaengine: bcm-sba-raid: Pre-ack async tx descriptor
We should pre-ack the async tx descriptor at the time of
allocating a sba_request (just like other RAID drivers do).

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
6df8f913d2 dmaengine: bcm-sba-raid: Peek mbox when we have no free requests
When setting up a RAID array on several NVMe disks we observed that
sba_alloc_request() starts failing (due to no free requests being left)
and RAID array setup becomes very slow.

To improve performance, we now peek the mbox channel when we have
no free requests. This improves the performance of RAID array setup
because mbox requests that were completed but not yet processed by the
mbox completion worker will be processed immediately by the mbox
channel peek.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
eb67744b9a dmaengine: bcm-sba-raid: Alloc resources before registering DMA device
We should allocate DMA channel resources before registering the
DMA device in sba_probe() because we can get a DMA request soon
after registering the DMA device. If DMA channel resources are
not allocated before the first DMA request, the SBA-RAID driver
will crash.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
f83385142c dmaengine: bcm-sba-raid: Improve sba_issue_pending() run duration
The pending sba_request list can become very long in real-life usage
(e.g. setting up a RAID array), which can cause sba_issue_pending() to
run for a long duration.

This patch adds a common sba_process_deferred_requests() to process a
few completed and pending requests at a time so that it finishes in a
short duration. We use this common sba_process_deferred_requests() in
both sba_issue_pending() and sba_receive_message().

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
5346aafcef dmaengine: bcm-sba-raid: Increase number of free sba_request
Currently, we have only 1024 free sba_requests created
by sba_prealloc_channel_resources(). This is too low,
and the prep_xxx() callbacks start failing more often
at the time of RAID array setup over NVMe disks.

This patch sets the number of free sba_requests created by
sba_prealloc_channel_resources() to be:
<number_of_mailbox_channels> x 8192

Due to the above, we will have a sufficient number of free
sba_requests, and failing prep_xxx() callbacks become very
unlikely.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
5655e00f5c dmaengine: bcm-sba-raid: Allow arbitrary number of free sba_request
Currently, we cannot have an arbitrary number of free sba_requests
because sba_prealloc_channel_resources() allocates an array of
sba_request using devm_kcalloc(), and kcalloc() cannot provide
memory beyond a certain size.

This patch removes "reqs" (the sba_request array) from sba_device
and makes "cmds" a variable-length array (instead of a pointer) in
sba_request. This helps sba_prealloc_channel_resources() to
allocate a sba_request and its associated SBA commands in one
allocation, which in turn allows an arbitrary number of free
sba_requests.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
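
A sketch of the flexible-array layout (field and size names illustrative):

    struct sba_request {
            /* ... list node, flags, async tx descriptor ... */
            struct brcm_sba_command cmds[];   /* was: *cmds */
    };

    req = devm_kzalloc(sba->dev,
                       sizeof(*req) +
                       max_cmds_per_req * sizeof(req->cmds[0]),
                       GFP_KERNEL);
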
Anup Patel
abfa251afe dmaengine: bcm-sba-raid: Remove reqs_free_count from sba_device
The reqs_free_count member of sba_device is not used anywhere,
hence there is no point in tracking the number of free sba_requests.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
e7ae72aa65 dmaengine: bcm-sba-raid: Remove redundant resp_dma from sba_request
Both resp and resp_dma are redundant in sba_request because
resp is unused and resp_dma carries the same information present
in tx.phys of sba_request. This patch removes both resp and
resp_dma from sba_request.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
10f1a33080 dmaengine: bcm-sba-raid: Remove redundant next_count from sba_request
The next_count in sba_request is redundant because the same information
is captured by next_pending_count. This patch removes next_count
from sba_request.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
57a2850859 dmaengine: bcm-sba-raid: Common flags for sba_request state and fence
This patch merges the sba_request state and fence into common
sba_request flags. The sba_request flags not only save
memory but can also be extended in future without adding
new members.

We also make each sba_request state a separate bit in the
sba_request flags to help debug situations where a
sba_request is accidentally in two states.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Anup Patel
e4274cfa42 dmaengine: bcm-sba-raid: Reduce locking context in sba_alloc_request()
We don't need to hold "sba->reqs_lock" for a long time
in sba_alloc_request(), because lock protection is not
required when initializing members of "struct sba_request".

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
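
A sketch of the narrowed critical section (names follow the driver
loosely): only the list manipulation is locked; member initialization
happens once the request is already private to us:

    spin_lock_irqsave(&sba->reqs_lock, flags);
    req = list_first_entry_or_null(&sba->reqs_free_list,
                                   struct sba_request, node);
    if (req)
            list_move_tail(&req->node, &sba->reqs_alloc_list);
    spin_unlock_irqrestore(&sba->reqs_lock, flags);

    if (!req)
            return NULL;

    /* no lock needed: nobody else can see this request yet */
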
Anup Patel
e897091ab9 dmaengine: bcm-sba-raid: Minor improvements in comments
This patch makes the following improvements to the comments:
1. Make section comments consistent across the driver by
avoiding " SBA " in some of the comments.
2. Add/update a few more section comments.

Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:44:24 +05:30
Abhishek Sahu
749d0d4bbb dmaengine: qcom: bam_dma: add command descriptor flag
If the DMA_PREP_CMD flag is passed to prep_slave_sg, the peripheral
driver has indicated that the data is in BAM command descriptor format,
and the BAM driver should set the CMD bit for each of the HW descriptors.

Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 16:40:18 +05:30
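
A hypothetical peripheral client marking its scatterlist as BAM command
descriptors:

    txd = dmaengine_prep_slave_sg(chan, cmd_sgl, cmd_nents,
                                  DMA_MEM_TO_DEV,
                                  DMA_PREP_CMD | DMA_PREP_INTERRUPT);
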
Vinod Koul
3eeb515636 dmaengine: remove BUG_ON while registering devices
The DMAengine core uses BUG_ON to check for mandatory operations and for
ones based on capabilities. Remove these BUG_ONs and move to error
returns, logging the errors gracefully.

Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-28 09:39:46 +05:30
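
The shape of the replacement checks, close to but not verbatim from the
patch:

    if (dma_has_cap(DMA_MEMCPY, device->cap_mask) &&
        !device->device_prep_dma_memcpy) {
            dev_err(device->dev,
                    "Device claims capability DMA_MEMCPY, but op is not defined\n");
            return -EIO;
    }
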
Kuninori Morimoto
5e857047ba dmaengine: rcar-dmac: initialize all data before registering IRQ handler
Anton Volkov noticed that engine->dev is NULL before
of_dma_controller_register() in probe.
Thus there might be a NULL pointer dereference in
rcar_dmac_chan_start_xfer while accessing chan->chan.device->dev, which
is equal to (&dmac->engine)->dev.
For the same reason, the same and similar things can happen if we don't
initialize all necessary data before registering the IRQ handler.
To make the code safer, this patch initializes all necessary data
before registering the IRQ handler.

Reported-by: Anton Volkov <avolkov@ispras.ru>
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2017-08-25 12:27:07 +05:30