Commit Graph

8 Commits

Author SHA1 Message Date
Arnd Bergmann
44fb3026ad ARM: tegra: Add EMC driver for v4.2-rc1

Merge tag 'tegra-for-4.2-emc' of git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux into next/drivers

Merge "ARM: tegra: Add EMC driver for v4.2-rc1" from Thierry Reding:

This introduces the EMC driver that's required to scale the external
memory frequency.

* tag 'tegra-for-4.2-emc' of git://git.kernel.org/pub/scm/linux/kernel/git/tegra/linux:
  memory: tegra: Add EMC frequency debugfs entry
  memory: tegra: Add EMC (external memory controller) driver
  memory: tegra: Add API needed by the EMC driver
  of: Add Tegra124 EMC bindings
  of: Document timings subnode of nvidia,tegra-mc
2015-05-13 17:59:35 +02:00
Mikko Perttunen
9c77a81f21 memory: tegra: Add EMC frequency debugfs entry
This file in debugfs can be used to get or set the EMC frequency.
Reading the file will return the currently set frequency in Hz, while
writing the file sets the specified frequency rounded to the next
highest frequency supported by the board.

This will be very useful when tuning memory scaling.
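
A minimal sketch of how such a debugfs attribute could sit on top of the clock
framework; the structure, directory layout and function names here are
assumptions for illustration, not the actual driver code:

#include <linux/clk.h>
#include <linux/debugfs.h>
#include <linux/fs.h>

/* hypothetical driver state; the real driver's layout may differ */
struct tegra_emc {
        struct clk *clk;
        struct dentry *debugfs_dir;
};

static int tegra_emc_rate_get(void *data, u64 *rate)
{
        struct tegra_emc *emc = data;

        /* report the currently set EMC frequency in Hz */
        *rate = clk_get_rate(emc->clk);
        return 0;
}

static int tegra_emc_rate_set(void *data, u64 rate)
{
        struct tegra_emc *emc = data;

        /* the clock framework rounds the request to a supported frequency */
        return clk_set_rate(emc->clk, rate);
}

DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_rate_fops, tegra_emc_rate_get,
                        tegra_emc_rate_set, "%llu\n");

static void tegra_emc_debugfs_init(struct tegra_emc *emc)
{
        emc->debugfs_dir = debugfs_create_dir("emc", NULL);
        debugfs_create_file("rate", 0644, emc->debugfs_dir, emc,
                            &tegra_emc_rate_fops);
}

With something along these lines, reading the file returns the current rate
and writing a value requests a rate change through the clock framework,
matching the behaviour described above.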

Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
[treding@nvidia.com: add "emc" debugfs directory]
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-05 11:39:48 +02:00
Mikko Perttunen
73a7f0a906 memory: tegra: Add EMC (external memory controller) driver
This implements the functionality needed to change the rate of the memory
bus clock.
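
As a rough illustration of the rate-change flow, a hedged sketch is given
below; the structures, register offsets and burst-register count are
placeholders, not the actual driver's layout:

#include <linux/clk.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/kernel.h>

struct emc_timing {
        unsigned long rate;             /* EMC frequency in Hz */
        u32 burst_values[8];            /* register values for this frequency */
};

struct tegra_emc {
        struct clk *clk;
        void __iomem *regs;
        struct emc_timing *timings;     /* sorted by ascending rate */
        unsigned int num_timings;
};

/* placeholder offsets for the burst registers */
static const unsigned long emc_burst_regs[8] = {
        0x2c, 0x30, 0x34, 0x38, 0x3c, 0x40, 0x44, 0x48,
};

static struct emc_timing *emc_find_timing(struct tegra_emc *emc,
                                          unsigned long rate)
{
        unsigned int i;

        /* pick the lowest supported frequency that satisfies the request */
        for (i = 0; i < emc->num_timings; i++)
                if (emc->timings[i].rate >= rate)
                        return &emc->timings[i];

        return NULL;
}

static int emc_set_rate(struct tegra_emc *emc, unsigned long rate)
{
        struct emc_timing *timing = emc_find_timing(emc, rate);
        unsigned int i;

        if (!timing)
                return -EINVAL;

        /* program the burst registers for the new frequency... */
        for (i = 0; i < ARRAY_SIZE(emc_burst_regs); i++)
                writel(timing->burst_values[i], emc->regs + emc_burst_regs[i]);

        /* ...then switch the EMC clock through the clock framework */
        return clk_set_rate(emc->clk, timing->rate);
}

The real driver also has to sequence the register writes carefully around the
actual clock switch; the sketch only shows the lookup-and-program structure.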

Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-05 11:12:17 +02:00
Mikko Perttunen
3d9dd6fdd2 memory: tegra: Add API needed by the EMC driver
The EMC driver needs to know the number of external memory devices and
also needs to update the EMEM configuration based on the new rate of the
memory bus.

To know how to update the EMEM configuration, the driver looks up the
values of the burst registers in the device tree for a given timing.
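
A hedged sketch of what parsing such a timing node might look like; the
property names follow the description above but may not match the final
bindings exactly:

#include <linux/kernel.h>
#include <linux/of.h>

struct mc_timing {
        u32 rate;               /* memory bus frequency this timing applies to */
        u32 emem_config[8];     /* values for the EMEM configuration registers */
};

static int mc_load_timing(struct device_node *node, struct mc_timing *timing)
{
        int err;

        err = of_property_read_u32(node, "clock-frequency", &timing->rate);
        if (err)
                return err;

        /* the per-timing register values are stored as a u32 array */
        return of_property_read_u32_array(node, "nvidia,emem-configuration",
                                          timing->emem_config,
                                          ARRAY_SIZE(timing->emem_config));
}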

Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-05 11:10:19 +02:00
Tomeu Vizoso
6f0a4d0c26 memory: tegra: Disable ARBITRATION_EMEM interrupt
Disable this interrupt because, as the TRM says, it is intended purely for
development purposes, and because the sheer number of interrupts fired can
seriously disrupt userspace when testing the lower frequencies supported by
the EMC.

From the TRM:

"There is one performance warning type interrupt: ARBITRATION_EMEM. It
fires when the MC detects that a request has been pending in the Row
Sorter long enough to hit the DEADLOCK_PREVENTION_SLACK_THRESHOLD. In
addition to true performance problems, this interrupt may fire in
situations such as clock-change where the EMC backpressures pending
traffic for long periods of time. This interrupt helps developers
identify and debug performance issues and configuration issues."
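
The change itself amounts to leaving that bit out of the interrupt mask the
driver programs. A hedged sketch, with placeholder register offset and bit
positions (the authoritative values come from the TRM):

#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/types.h>

#define MC_INTMASK                      0x004
#define MC_INT_DECERR_EMEM              BIT(6)
#define MC_INT_SECURITY_VIOLATION       BIT(8)
#define MC_INT_ARBITRATION_EMEM         BIT(9)

static void mc_setup_interrupts(void __iomem *regs)
{
        /* unmask the error interrupts the driver wants to handle... */
        u32 mask = MC_INT_DECERR_EMEM | MC_INT_SECURITY_VIOLATION;

        /*
         * ...but leave ARBITRATION_EMEM masked: it is a performance warning
         * aimed at development and can fire continuously at low EMC rates.
         */
        writel(mask, regs + MC_INTMASK);
}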

Signed-off-by: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-04 15:09:36 +02:00
Thierry Reding
242b1d7133 memory: tegra: Add Tegra132 support
The memory controller on Tegra132 is very similar to the one found on
Tegra124. But the Denver CPUs don't have an outer cache, so dcache
maintenance is done slightly differently.
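
One way the difference can be expressed is as per-SoC data selected by the
compatible string; a hedged sketch, with the flag name being an assumption
used purely for illustration:

#include <linux/of_device.h>

struct mc_soc_data {
        bool has_outer_cache;   /* false on the Denver-based Tegra132 */
};

static const struct mc_soc_data tegra124_data = { .has_outer_cache = true };
static const struct mc_soc_data tegra132_data = { .has_outer_cache = false };

static const struct of_device_id mc_of_match[] = {
        { .compatible = "nvidia,tegra124-mc", .data = &tegra124_data },
        { .compatible = "nvidia,tegra132-mc", .data = &tegra132_data },
        { /* sentinel */ }
};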

Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-04 12:54:23 +02:00
Thierry Reding
e660df07ab memory: tegra: Add SWGROUP names
Subsequent patches will add debugfs files that print the status of the
SWGROUPs. Add a new names field and complement the SoC tables with the
names of the individual SWGROUPs.
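
Conceptually the change adds a human-readable name alongside each SWGROUP in
the per-SoC tables; a minimal sketch with assumed field names, and with IDs
and register offsets as placeholders rather than the real Tegra values:

#include <linux/types.h>

struct tegra_swgroup {
        const char *name;       /* human-readable name for debugfs output */
        unsigned int swgroup;   /* SWGROUP ID */
        unsigned int reg;       /* per-SWGROUP register offset */
};

static const struct tegra_swgroup tegra124_swgroups[] = {
        { .name = "dc",   .swgroup = 1,  .reg = 0x240 },
        { .name = "dcb",  .swgroup = 2,  .reg = 0x244 },
        { .name = "hdmi", .swgroup = 10, .reg = 0x274 },
        /* ... */
};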

Signed-off-by: Thierry Reding <treding@nvidia.com>
2015-05-04 12:54:23 +02:00
Thierry Reding
8918465163 memory: Add NVIDIA Tegra memory controller support
The memory controller on NVIDIA Tegra exposes various knobs that can be
used to tune the behaviour of the clients attached to it.

Currently this driver sets up the latency allowance registers to the HW
defaults. Eventually an API should be exported by this driver (via a
custom API or a generic subsystem) to allow clients to register latency
requirements.

This driver also registers an IOMMU (SMMU) that's implemented by the
memory controller. It is supported on Tegra30, Tegra114 and Tegra124
currently. Tegra20 has a GART instead.

The Tegra SMMU operates on memory clients and SWGROUPs. A memory client
is a unidirectional, special-purpose DMA master. A SWGROUP represents a
set of memory clients that form a logical functional unit corresponding
to a single device. Typically a device has two clients: one client for
read transactions and one client for write transactions, but there are
also devices that have only read clients, albeit often several of them
(such as the display controllers).

Because there is no 1:1 relationship between memory clients and devices,
the driver keeps a per-SoC table of memory clients and the SWGROUPs that
they belong to. Note that this is an exception, made because the SMMU is
tightly integrated with the rest of the Tegra SoC. The
use of these tables is discouraged in drivers for generic IOMMU devices
such as the ARM SMMU because the same IOMMU could be used in any number
of SoCs and keeping such tables for each SoC would not scale.
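
To make the client/SWGROUP relationship concrete, here is a hedged sketch of
what such a per-SoC table might look like; the names and numbers are
illustrative, not the actual tables:

#include <linux/types.h>

struct tegra_mc_client {
        unsigned int id;        /* hardware client ID */
        const char *name;       /* e.g. a display controller read client */
        unsigned int swgroup;   /* the SWGROUP (logical device) it belongs to */
};

/*
 * A display controller typically exposes several read clients and no write
 * client, so multiple entries map to the same SWGROUP.
 */
static const struct tegra_mc_client tegra124_mc_clients[] = {
        { .id = 0x01, .name = "display0a", .swgroup = 1 },
        { .id = 0x02, .name = "display0b", .swgroup = 1 },
        { .id = 0x03, .name = "display0c", .swgroup = 1 },
        /* ... one such table is kept per supported SoC ... */
};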

Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
2014-12-04 16:11:47 +01:00