Commit Graph

Paul Mundt
9dbe00a56a sh: Fix up IRQ re-enabling for the need_resched() case.
In the case where need_resched() is set between the cpu_idle() and
pm_idle() calls, we were missing an else case to simply re-enable local
IRQs and bail out. This was noticed by the irqs_disabled() warning,
even though IRQs were being re-enabled elsewhere.
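
Roughly, the fixed control flow looks like this (a simplified sketch of
the idle loop, not the exact code):

	local_irq_disable();
	if (!need_resched() && pm_idle) {
		/* pm_idle() re-enables IRQs as part of waking up */
		pm_idle();
	} else {
		/* previously missing: bail out with IRQs back on */
		local_irq_enable();
	}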

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:55:59 +09:00
Paul Mundt
0e6d4986e7 sh: Make check_pgt_cache() more aggressive while idling.
This follows the x86 change and moves check_pgt_cache() up under the
!need_resched() tight loop, rather than simply calling into it when
exiting idle.
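
The shape of the change, sketched (simplified from the real loop):

	while (!need_resched()) {
		check_pgt_cache();	/* trim page table quicklists while idle */
		cpu_relax();		/* stands in for the platform idle op */
	}

Previously check_pgt_cache() only ran once the loop had been exited.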

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:27:58 +09:00
Paul Mundt
f533c3d340 sh: Idle loop chainsawing for SMP-based light sleep.
This does a bit of chainsawing of the idle loop code to get light sleep
working on SMP. Previously this was forcing secondary CPUs into sleep
mode, from which they never came back if they didn't have their own local
timers. Given that we use clockevents broadcasting by default, the CPU
managing the clockevents can't have IRQs disabled before entering its
sleep state.

This unfortunately leaves us with the age-old need_resched() race in
between local_irq_enable() and cpu_sleep(), but at present this is
unavoidable. After some more experimentation it may be possible to layer
SR.BL bit manipulation on top of this scheme to inhibit the race
condition, but given the current potential for missing wakeups, this is
left as a future exercise.
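
The race window in question, sketched:

	local_irq_enable();
	/*
	 * If an interrupt fires here and sets need_resched(), the wakeup
	 * is missed and we enter light sleep anyway. Setting SR.BL around
	 * this sequence is the candidate future fix mentioned above.
	 */
	cpu_sleep();	/* light sleep; woken by the next interrupt */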

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:20:58 +09:00
Paul Mundt
94eab0bb20 sh: Force boot CPU in to light sleep mode for SH-X3 SMP.
All of the secondary CPUs are forced into light sleep mode, but we were
missing the same initialization for the boot CPU. This resulted in
inconsistent sleep modes depending on which CPU we were on, confusing the
idle loop when not polling.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 17:19:08 +09:00
Paul Mundt
abeaf33a41 Merge branch 'sh/stable-updates'
Conflicts:
	arch/sh/mm/cache-sh4.c
2009-10-16 15:14:50 +09:00
Thomas Gleixner
52a94909f0 sh: Remove BKL from landisk gio.
The open function got the BKL via the big push down. Replace it with
preempt_disable()/preempt_enable(), as this is sufficient on a UP machine.

The ioctl can be unlocked because there is no functionality which
requires serialization. The usage by multiple callers is broken with
and without the BKL due to the local static variable addr.
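
A sketch of what the open path becomes (openCnt stands in for the
driver's open counter; details approximated):

	static int gio_open(struct inode *inode, struct file *filp)
	{
		int ret = -EBUSY;

		preempt_disable();
		if (openCnt == 0) {	/* only a single opener is allowed */
			openCnt++;
			ret = 0;
		}
		preempt_enable();

		return ret;
	}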

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:42:33 +09:00
Magnus Damm
5fb80ae8bd sh: disabled cache handling fix.
Add code to handle the cache disabled case. Fixes breakage introduced by
37443ef3f0 ("sh: Migrate SH-4 cacheflush ops to function pointers.").
Without this patch, configuring caches off with CONFIG_CACHE_OFF=y makes
kfr2r09 and migo-r lock up in fbdev deferred io or early user space.
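
The fix boils down to pointing the flush ops at a no-op when caches are
disabled, along these lines (a sketch in the style of the
function-pointer conversion):

	static void cache_noop(void *args) { }

	void __init cpu_cache_init(void)
	{
	#if defined(CONFIG_CACHE_OFF)
		local_flush_dcache_page  = cache_noop;
		local_flush_icache_range = cache_noop;
	#endif
	}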

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:38:48 +09:00
Valentin Sitdikov
a7a7c0e1d1 sh: Fix up single page flushing to use PAGE_SIZE.
Presently the SH-4 cache flushing code uses flush_cache_4096() for most
of the real flushing work, which breaks down to a fixed 4096-byte unroll
and increment. Not only is this sub-optimal for larger page sizes, it has
also uncovered a bug in sh4_flush_dcache_page() when large page sizes are
used and we have no cache aliases -- resulting in only a part of the
page's D-cache lines being written back.
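
The idea, sketched (the real routine also deals with aliases and uses an
unrolled cache-line loop):

	static void __flush_dcache_one_page(unsigned long addr)
	{
		unsigned long end = addr + PAGE_SIZE;	/* was a fixed 4096 */

		do {
			/* write back one D-cache line */
			__asm__ __volatile__("ocbwb @%0" : : "r" (addr));
			addr += L1_CACHE_BYTES;
		} while (addr < end);
	}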

Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-16 14:15:38 +09:00
Paul Mundt
731ba3301d sh: Count NMIs in irq_cpustat_t.
This plugs in support for NMI counting per-CPU via irq_cpustat_t.
Modelled after the x86 implementation.
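
The shape of the change, after the x86 layout (the nmi_count() accessor
shown here is an assumption):

	typedef struct {
		unsigned int __softirq_pending;
		unsigned int __nmi_count;	/* arch dependent */
	} ____cacheline_aligned irq_cpustat_t;

	#define nmi_count(cpu)	__IRQ_STAT((cpu), __nmi_count)

The NMI entry path then just increments nmi_count(smp_processor_id()),
and the per-CPU totals can be reported alongside the other IRQ stats.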

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 16:42:28 +09:00
Paul Mundt
56bfc42f6c sh: TS_RESTORE_SIGMASK conversion.
Replace TIF_RESTORE_SIGMASK with TS_RESTORE_SIGMASK and define our own
set_restore_sigmask() function.  This saves the costly SMP-safe set_bit
operation, which we do not need for the sigmask flag since TIF_SIGPENDING
always has to be set too.

Based on the x86 and powerpc change.
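
Modelled on the x86 version, the helper looks roughly like this (the
TS_RESTORE_SIGMASK value is illustrative):

	#define TS_RESTORE_SIGMASK	0x0008	/* restore sigmask in do_signal() */

	static inline void set_restore_sigmask(void)
	{
		struct thread_info *ti = current_thread_info();

		/* ti->status is only ever touched by the owning task, so a
		 * plain OR suffices; TIF_SIGPENDING keeps the atomic op. */
		ti->status |= TS_RESTORE_SIGMASK;
		set_bit(TIF_SIGPENDING, (unsigned long *)&ti->flags);
	}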

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 16:05:42 +09:00
Paul Mundt
b195e46677 Merge branch 'sh/stable-updates' 2009-10-14 15:53:08 +09:00
Paul Mundt
457b646189 sh: Fix a TRACE_IRQS_OFF typo.
The resume_userspace path had TRACE_IRQS_OFF written incorrectly and so
never handled the transition properly. This was fixed once before, but it
seems to have made it back into the tree. Fix it for good.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 15:50:28 +09:00
Paul Mundt
4d2947f7c6 sh: Optimize the setup_rt_frame() I-cache flush.
Only the return code needs to be flushed, and only via the legacy path;
otherwise the flush is just a useless invalidation. This makes the
behaviour consistent across all of the trampoline setup paths.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 15:49:45 +09:00
Paul Mundt
a66c2edea5 sh: Populate initial secondary CPU info from boot_cpu_data.
The secondary CPU info was seeing corrupted results due to not entering
all of the setup paths taken by the boot CPU. So we just memcpy() the
boot cpu data over directly, and then fix up the per-CPU bits.
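
The gist of it (field names approximated):

	/* in the secondary bring-up path: */
	struct sh_cpuinfo *c = &cpu_data[cpu];

	memcpy(c, &boot_cpu_data, sizeof(struct sh_cpuinfo));

	/* ...then fix up the bits that really are per-CPU */
	c->loops_per_jiffy = loops_per_jiffy;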

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 15:44:12 +09:00
Paul Mundt
2908df9e2c sh: Tidy up SMP cpuinfo.
Trivial change to clean up the cpuinfo pretty-printing on SMP; this adds
a newline between CPUs.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 15:43:52 +09:00
Paul Mundt
eaa47704d9 sh: Use boot_cpu_data for FPU tests in sigcontext paths.
We do not want to use smp_processor_id() from these paths, as they trip
preempt BUGs. Switch the test over to the boot cpu directly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 15:43:41 +09:00
Paul Mundt
d780613acc sh: Only invalidate the I-cache range for secondary CPUs stack_start.
Secondary CPUs already take care of the D-cache bits through the common
cache initialization path, and the only thing that is necessary after
twiddling around with stack_start is ensuring that the I-cache changes
are visible (particularly since this tends to be the only part lacking
coherency).
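
Sketched against the smp.c bring-up path (field names approximated; tsk
is the secondary's idle task):

	stack_start.sp = tsk->thread.sp;
	stack_start.thread_info = tsk->stack;
	stack_start.start_kernel_fn = start_secondary;

	/* Only the I-cache needs attention here; the D-cache side is
	 * handled by the common cache initialization path. */
	flush_icache_range((unsigned long)&stack_start,
			   (unsigned long)&stack_start + sizeof(stack_start));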

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 11:51:28 +09:00
Paul Mundt
36c8719926 sh: Provide CALLER_ADDRx definitions even when ftrace is disabled.
Despite being located in the ftrace header, the CALLER_ADDRx definitions
are used by generic code. As such, we have to provide them generically, and
given that there is no real dependence on ftrace in the first place, the
definitions can just be moved out.
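
The definitions in question are the usual generic ones:

	#define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
	#define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
	#define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))

They simply move out from under the CONFIG_FUNCTION_TRACER conditional so
that generic users see them regardless of the tracer configuration.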

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-14 11:49:49 +09:00
Paul Mundt
e4b053d96a sh: ftrace: Make code modification NMI safe.
This cribs the x86 implementation of ftrace_nmi_enter() and friends to
make ftrace_modify_code() NMI safe, particularly on SMP configurations.

For additional notes on the problems involved, see the comment below
ftrace_call_replace().
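
The scheme, sketched from the x86 version (do_mod_code_write() is a
stand-in for the actual write helper):

	static atomic_t nmi_running = ATOMIC_INIT(0);
	static int mod_code_write;	/* a patch is in flight */

	void ftrace_nmi_enter(void)
	{
		atomic_inc(&nmi_running);
		smp_mb();		/* order against mod_code_write */
		if (mod_code_write)
			do_mod_code_write();	/* NMI applies the patch too */
	}

	void ftrace_nmi_exit(void)
	{
		smp_mb();
		atomic_dec(&nmi_running);
	}

	static void wait_for_nmi(void)
	{
		/* the modifier spins until in-flight NMIs have drained */
		while (atomic_read(&nmi_running))
			cpu_relax();
	}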

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 16:52:50 +09:00
Paul Mundt
c8afde7f40 sh: Don't profile return_address().
This adds return_address.c to the -pg exclusion list; as this is the
building block for CALLER_ADDRx, we do not want to profile it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 16:31:08 +09:00
Paul Mundt
5a3abba77d sh: Tidy up the dwarf module helpers.
This enables us to build the dwarf unwinder both with modules enabled and
disabled in addition to reducing code size in the latter case. The
helpers are also consolidated, and modified to resemble the BUG module
helpers.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 13:32:19 +09:00
Paul Mundt
ac4fac8cb2 sh: Generalize CALLER_ADDRx support.
This splits out the unwinder implementation and adds a new
return_address() abstraction modelled after the ARM code. The DWARF
unwinder is tied in to this, and NULL is returned in the case where the
requested depth cannot be supported.

This enables us to get correct behaviour with the unwinder enabled,
as well as disabling the arbitrary depth support when frame pointers are
enabled, as arbitrary depths with __builtin_return_address() are not
supported regardless.

With this abstraction it's also possible to layer on a simplified
implementation with frame pointers in the event that the unwinder isn't
enabled, although this is left as a future exercise.
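
A sketch of the abstraction (the real unwind loop is more involved, and
the return_addr field name is assumed):

	void *return_address(unsigned int depth)
	{
	#ifdef CONFIG_DWARF_UNWINDER
		struct dwarf_frame *frame = NULL;
		unsigned long ra = 0;
		unsigned int i;

		/* depth + 1: step over return_address() itself */
		for (i = 0; i <= depth + 1; i++) {
			frame = dwarf_unwind_stack(ra, frame);
			if (!frame || !frame->return_addr)
				return NULL;
			ra = frame->return_addr;
		}

		return (void *)ra;
	#else
		return NULL;	/* arbitrary depths unsupported */
	#endif
	}

CALLER_ADDRx then becomes ((unsigned long)return_address(x)).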

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 13:10:14 +09:00
Paul Mundt
5852b203ef Merge branch 'sh/stable-updates' 2009-10-13 12:45:08 +09:00
Paul Mundt
9922262242 sh: ftrace: Fix up syscall tracepoint support.
Sync up with latest core changes in the syscalls tracing area:

- tracing: Map syscall name to number (syscall_name_to_nr())
- tracing: Call arch_init_ftrace_syscalls at boot
- tracing: add support tracepoint ids (set_syscall_{enter,exit}_id())

Taken from the s390 change.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 12:42:48 +09:00
Paul Mundt
95019b48ad Merge branch 'sh/stable-updates' 2009-10-13 11:27:08 +09:00
Paul Mundt
964f7e5a56 sh: force dcache flush if dcache_dirty bit set.
This too follows the ARM change, given that the issue at hand applies to
all platforms that implement lazy D-cache writeback.

This fixes up the case when a page mapping disappears between the
flush_dcache_page() call (when PG_dcache_dirty is set for the page) and
the update_mmu_cache() call -- such as in the case of swap cache being
freed early. This kills off the mapping test in update_mmu_cache() and
switches to simply testing for PG_dcache_dirty.
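
The relevant update_mmu_cache() fragment, sketched (flush helper name
approximated):

	unsigned long pfn = pte_pfn(pte);

	if (pfn_valid(pfn)) {
		struct page *page = pfn_to_page(pfn);

		/* Previously this also required a live page_mapping(),
		 * which misses the swap-cache-freed-early case. */
		if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
			__flush_dcache_page(page);
	}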

Reported-by: Nitin Gupta <ngupta@vflare.org>
Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 11:18:34 +09:00
Paul Mundt
af67c3a9e6 sh: update die() output.
This follows the ARM change, as SH had all of the same issues:

Make die() better match x86:
- add printing of the last accessed sysfs file
- ensure console_verbose() is called under the lock
- ensure we panic outside of oops_exit()

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-13 10:57:52 +09:00
Paul Mundt
7a0064d672 Merge branch 'sh/ftrace' of git://github.com/mfleming/linux-2.6 2009-10-13 10:31:50 +09:00
Paul Mundt
8ec006c587 Merge branch 'sh/dwarf-unwinder'
Conflicts:
	arch/sh/kernel/dwarf.c
2009-10-12 08:50:07 +09:00
Paul Mundt
5ab78ff693 Merge branch 'sh/dwarf-unwinder' of git://github.com/mfleming/linux-2.6 into sh/dwarf-unwinder 2009-10-12 08:42:46 +09:00
Matt Fleming
d26cddbbd2 sh: tracing: Use the DWARF unwinder for CALLER_ADDRx
The major reason for implementing the DWARF unwinder in the first place
was so that we could stop using __builtin_return_address(n), which
doesn't work on SH for n > 0.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2009-10-11 17:56:17 +01:00
Matt Fleming
c2d474d6f8 sh: Remove any reference to recursive functions from comments
Originally, dwarf_unwind_stack() was a recursive function and it seems
that some of the old comments were never updated.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2009-10-11 17:12:32 +01:00
Matt Fleming
ed4fe7f488 sh: Fix memory leak in dwarf_unwind_stack()
If we broke out of the while (1) loop because the return address of
"frame" was zero, then "frame" needs to be free'd before we return.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2009-10-11 17:12:28 +01:00
Matt Fleming
a6a2f2ad67 sh: Teach the DWARF unwinder about modules
Pass a module's .eh_frame section to the DWARF unwinder at module load
time so that the section's FDEs and CIEs can be registered with the
DWARF unwinder. This allows us to unwind the stack through module code
when generating backtraces.
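
A sketch of the module-load hook (the dwarf_parse_section() name and
signature are approximated):

	int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
			    struct module *me)
	{
		const char *secstrings = (void *)hdr +
			sechdrs[hdr->e_shstrndx].sh_offset;
		unsigned int i;

		for (i = 1; i < hdr->e_shnum; i++) {
			const char *name = secstrings + sechdrs[i].sh_name;

			if (!strcmp(name, ".eh_frame")) {
				char *start = (char *)sechdrs[i].sh_addr;
				char *end = start + sechdrs[i].sh_size;

				/* register this module's FDEs/CIEs */
				dwarf_parse_section(start, end, me);
				break;
			}
		}

		return 0;
	}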

Signed-off-by: Matt Fleming <matt@console-pimps.org>
2009-10-11 16:41:44 +01:00
Paul Mundt
3d4e0cfb33 sh: Reinstate ILSEL -> IRL intc mappings for SH-X3 proto CPU.
In the multi-evt conversion for the SH-X3 proto CPU, IRLs were dropped
down to a single unique masking source, which ended up blowing up on
ILSEL-based IRQs, which have special semantics that otherwise confuse the
intc code. While this does result in intc spewing about not having a
unique masking source, we don't really care.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 22:45:41 +09:00
Paul Mundt
2a8bc92345 sh: Shut up CONFIG_32BIT=n compiler warnings.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 22:24:55 +09:00
Matt Fleming
20b5014b3e sh: Fold fixed-PMB support into dynamic PMB support
The initialisation process differs for CONFIG_PMB and for
CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB
entries that were allocated by the bootloader.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:34 +09:00
Matt Fleming
ef269b3276 sh: Fix the offset from P1SEG/P2SEG where we map RAM
We need to map the gap between 0x00000000 and __MEMORY_START in the PMB,
as well as RAM.

With this change my 7785LCR board can switch to 32bit MMU mode at
runtime.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:26 +09:00
Matt Fleming
3105121949 sh: Remap physical memory into P1 and P2 in pmb_init()
Eventually we'll have complete control over what physical memory gets
mapped where and we can probably do other interesting things. For now
though, when the MMU is in 32-bit mode, we map physical memory into the
P1 and P2 virtual address ranges with the same semantics as they have in
29-bit mode.
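
Conceptually (sizes and cache flags illustrative):

	/* in pmb_init(): */
	/* P1: cached view of physical memory */
	pmb_remap(P1SEG, 0, 512 << 20, PMB_C);
	/* P2: the same memory, uncached, as in 29-bit mode */
	pmb_remap(P2SEG, 0, 512 << 20, PMB_WT | PMB_UB);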

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:52:03 +09:00
Matt Fleming
edd7de803c sh: Get rid of the kmem cache code
Unfortunately, at the point during boot when we want to be setting up
the PMB entries, the kmem subsystem hasn't been initialised.

We now match pmb_map slots with pmb_entry_list slots. When we find an
empty slot in pmb_map, we set the bit, thereby acquiring the
corresponding pmb_entry_list entry. There is a benefit in using this
static array of struct pmb_entry objects; we don't need to acquire any
locks in order to traverse the list.
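
The allocator then reduces to a lock-free bitmap scan, roughly:

	static DECLARE_BITMAP(pmb_map, NR_PMB_ENTRIES);
	static struct pmb_entry pmb_entry_list[NR_PMB_ENTRIES];

	static struct pmb_entry *pmb_alloc_entry(void)
	{
		unsigned int pos;

	repeat:
		pos = find_first_zero_bit(pmb_map, NR_PMB_ENTRIES);
		if (pos == NR_PMB_ENTRIES)
			return NULL;		/* no free slots */
		if (test_and_set_bit(pos, pmb_map))
			goto repeat;		/* raced; try again */

		return &pmb_entry_list[pos];
	}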

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:47 +09:00
Matt Fleming
8386aebb9e sh: Make most PMB functions static
There's no need to export the internal PMB functions for allocating,
freeing and modifying PMB entries, etc. This way we can restrict the
interface for PMB.

Also remove the static from pmb_init() so that we have more freedom in
setting up the initial PMB entries and turning on MMU 32bit mode.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:37 +09:00
Matt Fleming
b336f124b1 sh: CONFIG_PMB doesn't mean the MMU is in 32bit mode
CONFIG_PMB will eventually allow the MMU to be switched between 29-bit
and 32-bit mode dynamically at runtime.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:23 +09:00
Matt Fleming
1f69b6af91 sh: Prepare for dynamic PMB support
To allow the MMU to be switched between 29-bit and 32-bit mode at
runtime, some constants need to be swapped for functions that return a
runtime value.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:12 +09:00
Matt Fleming
8bd642b17b sh: Obliterate the P1 area macros
Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the
idea that all addresses in P1SEG are untranslated, so we can access an
address's physical page as an offset from P1SEG. This doesn't work for
CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used
for PMB mappings and so can be translated to any physical address.

Likewise, replace a P1SEGADDR() use with virt_to_phys().
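
The two forms, side by side:

	/* Old: assumes the physical page is just the low 29 bits */
	#define PHYSADDR(a)	(((unsigned long)(a)) & 0x1fffffff)

	/* New: goes through the kernel's virt/phys conversion, which
	 * remains correct when P1/P2 pages are remapped via the PMB */
	unsigned long paddr = __pa(vaddr);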

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:51:02 +09:00
Matt Fleming
067784f623 sh: Allocate PMB entry slot earlier
Simplify set_pmb_entry() by removing the possibility of not finding a
free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so
that if there are no free slots we fail at allocation time, rather than
in set_pmb_entry().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:49:57 +09:00
Paul Mundt
5e3679c594 Merge branch 'sh/cachetlb' 2009-10-10 21:36:53 +09:00
Guennadi Liakhovetski
a469f627c1 SH: add support for the RJ54N1CB0C camera for the kfr2r09 platform
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 21:21:59 +09:00
Matt Fleming
a2767cfb1d sh: Don't allocate smaller sized mappings on every iteration
Currently, we've got the less-than-ideal situation where, if we need to
allocate a 256MB mapping, we'll allocate four entries like so:

	 entry 1: 128MB
	 entry 2:  64MB
	 entry 3:  16MB
	 entry 4:  16MB

This is because as we execute the loop in pmb_remap() we will
progressively try mapping the remaining address space with smaller and
smaller sizes. This isn't good because the size we use on one iteration
may be the perfect size to use on the next iteration, for instance when
the initial size is divisible by one of the PMB mapping sizes.

With this patch, we now only need two entries in the PMB to map 256MB of
address space:

	  entry 1: 128MB
	  entry 2: 128MB
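
The fix is to keep reusing a mapping size for as long as it still fits,
rather than unconditionally stepping down to the next smaller size on
every iteration. Sketched (pmb_map_one() is a stand-in for the entry
setup, and size is assumed to be a multiple of the smallest PMB size):

	static const unsigned long pmb_sizes[] = {
		512 << 20, 128 << 20, 64 << 20, 16 << 20,
	};
	unsigned int i = 0;

	while (size) {
		/* find the largest PMB size that still fits */
		while (pmb_sizes[i] > size)
			i++;

		pmb_map_one(vaddr, phys, pmb_sizes[i]);

		vaddr += pmb_sizes[i];
		phys  += pmb_sizes[i];
		size  -= pmb_sizes[i];
		/* note: i is deliberately not advanced here */
	}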

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:26:35 +09:00
Matt Fleming
2bea7ea7d5 sh: Try PMB mapping based on physical address, not mapping size
We should favour PMB mappings when the physical address cannot be
reached with 29 bits.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:25:10 +09:00
Matt Fleming
fc2bdefdde sh: Plug PMB alloc memory leak
If we fail to allocate a PMB entry in pmb_remap() we must remember to
clear and free any PMB entries that we may have previously allocated,
e.g. if we were allocating a multiple entry mapping.
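
The error path then unwinds anything already mapped, along these lines
(link handling approximated; pmbp is the chain built up so far):

	pmbe = pmb_alloc(vaddr, phys, pmb_flags);
	if (IS_ERR(pmbe)) {
		struct pmb_entry *p, *next;

		/* roll back every entry set up for this mapping */
		for (p = pmbp; p; p = next) {
			next = p->link;
			clear_pmb_entry(p);
			pmb_free(p);
		}

		return PTR_ERR(pmbe);
	}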

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 11:24:09 +09:00