Commit Graph

42 Commits

Paul Mundt
31c9efde78 sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page().
PHYSADDR() runs into issues in 32-bit mode when we do not have the
legacy P1/P2 areas mapped; as such, we need to use page_to_phys()
directly, which also happens to do the right thing in legacy 29-bit mode.
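
A minimal sketch of the change (variable naming illustrative):

	/* before: only valid when the legacy P1/P2 windows are mapped */
	unsigned long phys = PHYSADDR(page_address(page));

	/* after: correct in both 29-bit and 32-bit (PMB) modes */
	unsigned long phys = page_to_phys(page);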

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:10:28 +09:00
Paul Mundt
654d364e26 sh: sh4_flush_cache_mm() optimizations.
The i-cache flush in the case of VM_EXEC was added way back when as a
sanity measure, and in practice we only care about evicting aliases from
the d-cache. As a result, it's possible to drop the i-cache flush
completely here.

Careful profiling has also shown that all of the work associated with
hunting down aliases and doing ranged flushing ends up generating more
overhead than simply blasting away the entire dcache, particularly if
there are many mm's that need to be iterated over. Given that, just move
back to flush_dcache_all() in these cases, which restores the old
behaviour and vastly simplifies the path.

Additionally, on platforms without aliases at all, this can simply be
nopped out. Presently we have the alias check in the SH-4 specific
version, but this is true for all of the platforms, so move the check up
to a generic location. This cuts down quite a bit on superfluous cacheop
IPIs.
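
A rough sketch of the resulting generic wrapper (the helper names follow
the SMP rework in this series and are illustrative):

	void flush_cache_mm(struct mm_struct *mm)
	{
		/* No d-cache aliases: nothing to evict, skip the cacheop IPIs. */
		if (boot_cpu_data.dcache.n_aliases == 0)
			return;

		cacheop_on_each_cpu(local_flush_cache_mm, mm, 1);
	}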

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 14:04:06 +09:00
Paul Mundt
682f88ab74 sh: Cleanup whitespace damage in sh4_flush_icache_range().
There was quite a lot of tab->space damage done here from a former patch,
clean it up once and for all.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-09 13:19:46 +09:00
Paul Mundt
983f4c514c Revert "sh: Kill off now redundant local irq disabling."
This reverts commit 64a6d72213.

Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the MIPS r4k caches, on which this code was based.
This fixes up a deadlock that showed up in some IRQ-context cases.
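
The cache op broadcast consequently moves to a preempt-based wrapper
rather than on_each_cpu(), roughly (a sketch; the helper name is
illustrative):

	static void cacheop_on_each_cpu(void (*func)(void *info), void *info, int wait)
	{
		preempt_disable();
		smp_call_function(func, info, wait);	/* remote CPUs */
		func(info);				/* local CPU, IRQs left enabled */
		preempt_enable();
	}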

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 21:12:55 +09:00
Paul Mundt
ac6a0cf671 Merge branch 'master' into sh/smp
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-09-01 13:54:14 +09:00
Matt Fleming
ce3f7cb96e sh: Fix dcache flushing for N-way write-through caches.
This generalizes the special-cased 2-way write-through dcache flusher to
N ways and moves it into the generic path. Assignment is done at runtime
via a check for the CCR_CACHE_WT bit, in the same path as the per-way
writeback flushers.
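
Conceptually, the flusher is picked at runtime from the cache mode,
along these lines (a sketch; symbol names approximate, the writeback
per-way assignment is unchanged):

	/* write-through: one flusher regardless of associativity */
	if (ctrl_inl(CCR) & CCR_CACHE_WT)
		__flush_dcache_segment_fn = __flush_dcache_segment_writethrough;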

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-09-01 13:32:48 +09:00
Paul Mundt
e76a0136a3 sh: Fix up sh4_flush_dcache_page() build on UP.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-27 11:31:16 +09:00
Stuart Menefy
ffad9d7a54 sh: Fix problems with cache flushing when cache is in write-through mode
Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 18:39:39 +09:00
Stuart Menefy
a5cf9e2444 sh: Improve comments in SH4 cache flushing code
This is a pure documentation change, explaining why the cache flushing
code for the SH4 is implemented the way it is.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-24 17:36:24 +09:00
Paul Mundt
64a6d72213 sh: Kill off now redundant local irq disabling.
on_each_cpu() takes care of IRQ and preempt handling, the localized
handling in each of the called functions can be killed off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 18:21:07 +09:00
Paul Mundt
f26b2a562b sh: Make cache flushers SMP-aware.
This does a bit of rework for making the cache flushers SMP-aware. The
function pointer-based flushers are renamed to local variants with the
exported interface being commonly implemented and wrapping as necessary.
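
As an illustration of the wrapping, a ranged op bundles its arguments and
hands them to the local flusher on each CPU (a sketch; struct and helper
names approximate):

	void flush_icache_range(unsigned long start, unsigned long end)
	{
		struct flusher_data data = {
			.vma   = NULL,
			.addr1 = start,
			.addr2 = end,
		};

		cacheop_on_each_cpu(local_flush_icache_range, &data, 1);
	}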

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-21 17:23:14 +09:00
Paul Mundt
c139a59587 sh: Fix up cache-sh4 build on SMP.
mapping is unused on the SMP build, triggering a build error. Move it
under the ifdef.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-20 15:24:41 +09:00
Paul Mundt
37443ef3f0 sh: Migrate SH-4 cacheflush ops to function pointers.
This paves the way for allowing individual CPUs to overload the
individual flushing routines that they care about without having to
depend on weak aliases. SH-4 is converted over initially, as it wires
up pretty much everything. The majority of the other CPUs will simply use
the default no-op implementation with their own region flushers wired up.
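
In rough terms, the flush entry points become function pointers that each
CPU family fills in at init time (a sketch; exact declarations
approximate):

	/* arch/sh/mm/cache.c: defaults that no-op platforms can leave alone */
	void (*flush_cache_all)(void);
	void (*flush_dcache_page)(struct page *page);

	/* SH-4 overrides the ones it cares about */
	void __init sh4_cache_init(void)
	{
		flush_cache_all   = sh4_flush_cache_all;
		flush_dcache_page = sh4_flush_dcache_page;
		/* ... */
	}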

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 12:29:49 +09:00
Paul Mundt
916e97834e sh: Kill off unused flush_icache_user_range().
We use flush_cache_page() outright in copy_to_user_page(), and nothing
else needs it, so just kill it off. SH-5 still defines its own version,
but that too will go away in the same fashion once it converts over.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:38:05 +09:00
Paul Mundt
0b445dcaf3 sh: Don't export flush_dcache_all().
flush_dcache_all() is used internally by the SH-4 cache code, it is not
part of the exported cache API, so make it static and don't export it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:22:50 +09:00
Paul Mundt
27d59ec170 sh: Move alias computation to shared cache init.
This migrates the alias computation and printing of probed cache
parameters from the SH-4 code to the shared cpu_cache_init().

This permits other platforms with aliases to make use of the same
probe logic without having to roll their own, and also produces
consistent output regardless of platform.
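
The shared probe computation boils down to something like this (a sketch;
fields follow struct cache_info):

	static void compute_alias(struct cache_info *c)
	{
		/* index bits above the page size are the aliasing colours */
		c->alias_mask = ((c->sets - 1) << c->entry_shift) & ~(PAGE_SIZE - 1);
		c->n_aliases  = c->alias_mask ? (c->alias_mask >> PAGE_SHIFT) + 1 : 0;
	}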

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:11:16 +09:00
Paul Mundt
ecba106058 sh: Centralize the CPU cache initialization routines.
This provides a central point for CPU cache initialization routines.
This replaces the antiquated p3_cache_init() method, which the vast
majority of CPUs never cared about.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 11:05:42 +09:00
Paul Mundt
e7b8b7f16e sh: NO_CONTEXT ASID optimizations for SH-4 cache flush.
This optimizes for the cases when a CPU does not yet have a valid ASID
context associated with it, as in this case there is no work for any of
flush_cache_mm()/flush_cache_page()/flush_cache_range() to do. Based on
the MIPS implementation.
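
The early-out is essentially (sketch):

	if (cpu_context(smp_processor_id(), mm) == NO_CONTEXT)
		return;		/* no ASID yet, so nothing of this mm can be cached */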

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-15 02:21:16 +09:00
Paul Mundt
8174252752 sh: Split out SH-4 __flush_xxx_region() ops.
This splits out the SH-4 __flush_xxx_region() functions and defines them
as weak symbols. This allows us to provide optimized versions without
having to ifdef cache-sh4.c to death.
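
With weak symbols, a CPU provides an optimized replacement simply by
defining a strong version of the same function; illustrative shape only:

	/* generic SH-4 default; an optimized variant can override it */
	void __weak __flush_wback_region(void *start, int size)
	{
		/* write back d-cache lines covering [start, start + size) */
	}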

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-08-04 18:06:01 +09:00
Paul Mundt
2277ab4a1d sh: Migrate from PG_mapped to PG_dcache_dirty.
This inverts the delayed dcache flush a bit to be more in line with other
platforms. At the same time this also gives us the ability to do some
more optimizations and cleanup. Now that the update_mmu_cache() callsite
only tests for the bit, the implementation can gradually be split out and
made generic, rather than relying on special implementations for each of
the peculiar CPU types.
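
The delayed-flush arrangement converged on is the usual pattern from
Documentation/cachetlb.txt, roughly as below (a sketch;
flush_kernel_alias() is a hypothetical stand-in for the real flusher):

	void flush_dcache_page(struct page *page)
	{
		struct address_space *mapping = page_mapping(page);

		/* nobody has it mapped into user space yet: just mark it dirty */
		if (mapping && !mapping_mapped(mapping))
			set_bit(PG_dcache_dirty, &page->flags);
		else
			flush_kernel_alias(page);	/* hypothetical: flush now */
	}

	/* update_mmu_cache() later completes the deferred flush: */
	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
		flush_kernel_alias(page);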

SH7705 in 32kB mode and SH-4 still need slightly different handling, but
this is something that can remain isolated in the varying page copy/clear
routines. On top of that, SH-X3 is dcache coherent, so there is no need
to bother with any of these tests in the PTEAEX version of
update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-07-22 19:20:49 +09:00
Paul Mundt
205a3b4328 sh: uninline flush_icache_all().
This uses jump_to_uncached() which is now given the noinline attribute
due to the special section mapping. Kill off the inline attribute to
fix up compilation failure.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-09-08 10:35:06 +09:00
Chris Smith
09b5a10c19 sh: Optimized flush_icache_range() implementation.
Add an implementation of flush_icache_range() suitable for signal
handlers and kprobes. Remove flush_cache_sigtramp() and change signal.c
to use
flush_icache_range().
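
Typical caller pattern after this change (sketch): once the trampoline or
breakpoint has been written, flush just that range:

	/* 'addr'/'len' describe the code that was just written */
	flush_icache_range((unsigned long)addr, (unsigned long)addr + len);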

Signed-off-by: Chris Smith <chris.smith@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-07-28 18:10:32 +09:00
Stuart Menefy
cbaa118ecf sh: Preparation for uncached jumps through PMB.
Presently most of the 29-bit physical parts do P1/P2 segmentation
with a 1:1 cached/uncached mapping, jumping between the two to
control the caching behaviour. This provides the basic infrastructure
to maintain this behaviour on 32-bit physical parts that don't map
P1/P2 at all, using a shiny new linker section and corresponding
fixmap entry.
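
Callers keep the same bracketed form on both 29-bit and 32-bit parts
(sketch; the CCR manipulation shown is purely illustrative):

	jump_to_uncached();
	ctrl_outl(ctrl_inl(CCR) | CCR_CACHE_ENABLE, CCR);
	back_to_cached();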

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 13:18:59 +09:00
Paul Mundt
ab27f62002 sh: Calculate cache aliases on L2 caches.
Calculate the number of cache aliases on probed L2 caches, and while
we're at it, print out the detected statistics at boot time for these
also.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-09-24 17:00:45 +09:00
Paul Mundt
d10040f7eb sh: Fix alias calculation for non-aliasing cases.
There was an off-by-1 on the cache alias detection logic on SH-4,
which caused n_aliases to always be 1 even when the page size
precluded the existence of aliases.

With this corrected, a 64KB page size happily reports n_aliases == 0
and hits the appropriate fast paths in the flushing routines.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-09-24 16:38:25 +09:00
Paul Mundt
7ec9d6f8c0 sh: Avoid smp_processor_id() in cache desc paths.
current_cpu_data uses smp_processor_id() in order to find the
corresponding cpu_data. As the cache descs are all currently
identical, just have this look at probed results from the boot
CPU.
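
The change amounts to reading the boot CPU's probed values (sketch):

	/* before: implies smp_processor_id() on every access */
	way_size = current_cpu_data.dcache.way_size;

	/* after: the descriptors are identical across CPUs anyway */
	way_size = boot_cpu_data.dcache.way_size;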

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-09-21 18:05:20 +09:00
Paul Mundt
f0b859e3d6 sh: Reclaim beginning of P3 space for vmalloc area.
The first 1MB of P3 space was reserved and used for page colouring,
as we've reworked that to use fixmaps, we can reclaim the space and
hand it back to VMALLOC_START.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-07-25 10:43:47 +09:00
Paul Mundt
8cf1a74305 sh: Add kmap_coherent()/kunmap_coherent() interface for SH-4.
This wires up kmap_coherent() and kunmap_coherent() on SH-4, and
moves away from the p3map_mutex and reserved P3 space, opting to
use fixmaps for colouring instead.

The copy_user_page()/clear_user_page() implementations are moved
to this, which fixes the nasty blowups with spinlock debugging
as a result of having some of these calls nested under the page
table lock.
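
The copy paths map the target page at a colour-matched fixmap address
before touching it, roughly as follows (a sketch; the non-aliasing fast
path is elided, and kunmap_coherent()'s argument list has varied between
kernel versions):

	void copy_user_page(void *to, void *from, unsigned long address,
			    struct page *page)
	{
		void *vto = kmap_coherent(page, address);	/* congruent mapping */

		copy_page(vto, from);
		kunmap_coherent();
	}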

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-07-24 13:28:26 +09:00
Paul Mundt
39e688a94b sh: Revert lazy dcache writeback changes.
These ended up causing too many problems on older parts; revert for
now.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-03-05 19:46:47 +09:00
Paul Mundt
11c1965687 sh: Fixup cpu_data references for the non-boot CPUs.
There are a lot of bogus cpu_data-> references that only end up working
for the boot CPU, convert these to current_cpu_data to fixup SMP.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-02-13 10:54:45 +09:00
Paul Mundt
26b7a78c55 sh: Lazy dcache writeback optimizations.
This converts the lazy dcache handling to the model described in
Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks
used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a
bonus, this slightly cuts down on the cache flushing frequency.

With that and the PTEA handling out of the way, the update_mmu_cache()
implementations can be consolidated, and we no longer have to worry
about which configuration the cache is in for the SH7705 case.

And finally, explicitly disable the lazy writeback on SMP (SH-4A).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-02-13 10:54:44 +09:00
Paul Mundt
37bda1da45 sh: Convert remaining remap_area_pages() users to ioremap_page_range().
A couple of these were missed.
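
The conversion itself is mechanical (sketch; arguments illustrative):

	/* before */
	remap_area_pages(vaddr, phys_addr, size, flags);

	/* after */
	ioremap_page_range(vaddr, vaddr + size, phys_addr, __pgprot(flags));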

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-12 08:42:08 +09:00
Paul Mundt
510c72ad2d sh: Fixup various PAGE_SIZE == 4096 assumptions.
There were a number of places that made evil PAGE_SIZE == 4k
assumptions that ended up breaking when trying to play with
8k and 64k page sizes; this fixes those up.

The most significant change is the way we load THREAD_SIZE,
previously this was done via:

	mov	#(THREAD_SIZE >> 8), reg
	shll8	reg

to avoid a memory access and allow the immediate load. With
a 64k PAGE_SIZE, we're out of range for the immediate load
size without resorting to special instructions available in
later ISAs (movi20s and so on). The "workaround" for this is
to bump up the shift to 10 and insert a shll2, which gives a
bit more flexibility while still being much cheaper than a
memory access.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06 10:45:39 +09:00
Paul Mundt
52e27782e1 sh: p3map_sem sem2mutex conversion.
Simple sem2mutex conversion for the p3map semaphores.
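
The conversion pattern (sketch; array and index names illustrative):

	/* before: semaphore used as a lock */
	down(&p3map_sem[colour]);
	/* ... remap and flush through the P3 window ... */
	up(&p3map_sem[colour]);

	/* after: a real mutex, initialized with mutex_init() at boot */
	mutex_lock(&p3map_mutex[colour]);
	/* ... */
	mutex_unlock(&p3map_mutex[colour]);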

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-12-06 10:45:37 +09:00
Paul Mundt
33573c0e32 sh: Fix occasional flush_cache_4096() stack corruption.
Disable IRQs in flush_cache_4096() during the cache purge. Under certain
workloads we would take an IRQ in the middle of a purge operation, the
cachelines would be left in an inconsistent state, and this led to
occasional stack corruption.
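
The fix brackets the purge with IRQ disabling so an interrupt can no
longer land mid-operation (sketch):

	unsigned long flags;

	local_irq_save(flags);
	/* ... purge the 4K worth of cache lines for this colour ... */
	local_irq_restore(flags);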

Signed-off-by: Takeo Takahashi <takahashi.takeo@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27 18:37:30 +09:00
Paul Mundt
28ccf7f91b sh: Selective flush_cache_mm() flushing.
flush_cache_mm() wraps into flush_cache_all(), which is rather
excessive given that the number of PTEs within the specified context is
generally quite low.  Optimize by walking the mm's VMA list and
selectively flushing the VMA ranges from the dcache. Invalidate the
icache only if a VMA sets VM_EXEC.
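
Schematically, the new path is a per-VMA walk instead of a global flush
(sketch; the two range helpers are hypothetical stand-ins for the real
flushers):

	struct vm_area_struct *vma;

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		/* hypothetical helpers standing in for the ranged flushers */
		dcache_flush_vma_range(vma, vma->vm_start, vma->vm_end);
		if (vma->vm_flags & VM_EXEC)
			icache_inv_vma_range(vma, vma->vm_start, vma->vm_end);
	}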

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27 18:30:07 +09:00
Paul Mundt
298476220d sh: Add control register barriers.
Currently when making changes to control registers, we
typically need some time for changes to take effect (8
nops, generally).  However, for sh4a we simply need to
do an icbi.

This is a simple patch implementing a general-purpose ctrl_barrier()
which functions as a control register write barrier. There's some
additional documentation in the patch itself, but it's pretty
self-explanatory.
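
The barrier ends up defined roughly as follows (sketch):

	#ifdef CONFIG_CPU_SH4A
	#define ctrl_barrier()	__icbi()	/* a single icbi suffices on sh4a */
	#else
	#define ctrl_barrier()	\
		__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
	#endif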

There were also some places where we were not doing the
barrier, which didn't seem to have any adverse effects on
legacy parts, but certainly did on sh4a. It's safer to have
the barrier in place for legacy parts as well in these cases,
though this does make flush_tlb_all() more expensive (by an
order of 8 nops).  We can ifdef around the flush_tlb_all()
case for now if it's clear that all legacy parts won't have
a problem with this.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27 14:57:44 +09:00
Richard Curnow
b638d0b921 sh: Optimized cache handling for SH-4/SH-4A caches.
This reworks some of the SH-4 cache handling code to more easily
accommodate newer-style caches (particularly for the greater than
direct-mapped case), as well as optimizing some of the old code.

Signed-off-by: Richard Curnow <richard.curnow@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27 14:09:26 +09:00
Paul Mundt
fdfc74f9fc sh: Support for SH-4A memory barriers.
SH-4A supports 'synco' as a barrier; sprinkle it around
the cache ops as necessary.
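
For reference, the SH-4A barrier boils down to the synco instruction
(sketch):

	#define mb()	__asm__ __volatile__ ("synco" : : : "memory")
	#define rmb()	mb()
	#define wmb()	mb()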

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27 14:05:52 +09:00
Paul Mundt
a252710fc5 sh: flush_cache_range() cleanup and optimizations.
flush_cache_range() wasn't page-aligning the end of the range; we
can't assume that it will always be page-aligned, and we ended up
getting unaligned faults in some rare call paths.

Additionally, we add a small optimization to just purge the
dcache entirely if the range is large enough that the page
table walking will take longer. We use an arbitrary value of
64 pages for the large range size, as per sh64.
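
In outline (a sketch; 64 is the cutoff mentioned above):

	if (((end - start) >> PAGE_SHIFT) >= 64) {
		flush_dcache_all();	/* cheaper than walking the page tables */
		return;
	}

	start &= PAGE_MASK;
	end = PAGE_ALIGN(end);	/* previously left unaligned */
	/* ... ranged flush over [start, end) ... */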

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2006-09-27 11:29:55 +09:00
Jörn Engel
6ab3d5624e Remove obsolete #include <linux/config.h>
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-30 19:25:36 +02:00
Linus Torvalds
1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00