2833 Commits

Paul Mundt
94ea5e449a sh: wire up SET/GET_UNALIGN_CTL.
This hooks up the SET/GET_UNALIGN_CTL knobs cribbing the bulk of it from
the PPC and ia64 implementations. The thread flags happen to be the
logical inverse of what the global fault mode is set to, so this works
out pretty cleanly. By default the global fault mode is used, with tasks
now being able to override their own settings via prctl().

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-23 12:56:30 +09:00
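
As an illustration of the per-task control described above, a minimal userspace sketch using the generic prctl(2) constants (PR_SET_UNALIGN/PR_GET_UNALIGN and the PR_UNALIGN_* flags); on sh these now override the global fault mode:

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
        unsigned int mode = 0;

        /* Ask the kernel to silently fix up unaligned accesses. */
        if (prctl(PR_SET_UNALIGN, PR_UNALIGN_NOPRINT, 0, 0, 0) < 0)
                perror("PR_SET_UNALIGN");

        /* Read the per-task setting back; the kernel stores it via &mode. */
        if (prctl(PR_GET_UNALIGN, (unsigned long)&mode, 0, 0, 0) == 0)
                printf("unaligned access mode: %#x\n", mode);

        return 0;
}
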
Paul Mundt
7c1b2c6890 sh: allow alignment fault mode to be configured at kernel boot.
Follow the ARM change, which is what our alignment helpers are based on
in the first place.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-23 11:48:50 +09:00
Dominik Brodowski
3b7a17fcda resource/PCI: mark struct resource as const
Now that we return the new resource start position, there is no
need to update "struct resource" inside the align function.
Therefore, mark the struct resource as const.

Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
2010-02-22 16:16:57 -08:00
Dominik Brodowski
b26b2d494b resource/PCI: align functions now return start of resource
As suggested by Linus, align functions should return the start
of a resource, not void. An update of "res->start" is no longer
necessary.

Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
2010-02-22 16:16:56 -08:00
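
Taken together, the two resource/PCI changes above give the alignment hooks a pure-function shape: take a const resource, return the (possibly adjusted) start. A sketch of a callback in the new style; the constraint inside the body is purely hypothetical:

#include <linux/ioport.h>
#include <linux/types.h>

/*
 * New-style align callback: the resource is const and the adjusted
 * start address is returned rather than written into res->start.
 */
static resource_size_t demo_align_resource(void *data,
                                           const struct resource *res,
                                           resource_size_t size,
                                           resource_size_t align)
{
        resource_size_t start = res->start;

        /* Hypothetical constraint: keep I/O regions out of the low 4K. */
        if ((res->flags & IORESOURCE_IO) && start < 0x1000)
                start = 0x1000;

        return start;   /* the caller updates the resource from this */
}
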
Kuninori Morimoto
16afc9fb02 sh: sh7724: Update FSI/SPU2 clock
When FSI and Network (= NFS file system) were used at the same time,
the I/O of FSI was unstable.  This patch updates the SPU2 clock (which
is used for FSI) to solve this issue.  Special thanks to Jeremy.

Signed-off-by: Jeremy Baker <Jeremy.Baker@renesas.com>
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-22 19:14:18 +09:00
Magnus Damm
6f26d19fce sh: always enable sh7724 vpu_clk and set to 166MHz on Ecovec
Update the sh7724 processor code to always enable vpu_clk.

On the Ecovec board, set the vpu_clk to 166 Mhz.

The 166MHz setting results in a divide-by-6 setup for
vpu_clk and improves the VPU performance compared to the
power-on-reset/bootloader configuration.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-22 19:11:23 +09:00
Magnus Damm
7be85c6eb4 sh: add sh7724 kick callback to clk_div4_table
This patch adds a ->kick() callback to clk_div4_table
and ties it into sh_clk_div4_set_rate(). A sh7724
specific kick function is also added that updates the
KICK bit whenever div4 clocks in FRQCRA and FRQCRB
have been set. Allows us to set the VPU clock.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-22 19:11:22 +09:00
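
A rough sketch of the mechanism: the ->kick() hook lives in the clk_div4_table structure (introduced by the next entry in this list) and sh_clk_div4_set_rate() invokes it after writing the new divider. The register handle and KICK bit value below are illustrative placeholders, not verified sh7724 details:

#include <linux/io.h>
#include <linux/clk.h>

struct clk_div_mult_table;              /* divider tables, as used by clock-cpg */

/* div4-specific data plus an optional per-SoC ->kick() hook. */
struct clk_div4_table {
        struct clk_div_mult_table *div_mult_table;
        void (*kick)(struct clk *clk);
};

#define DEMO_FRQCR_KICK 0x80000000      /* illustrative KICK bit position */

static void __iomem *demo_frqcra;       /* stands in for the FRQCRA register */

/* Hypothetical sh7724-style kick: latch freshly written div4 dividers. */
static void demo_sh7724_kick(struct clk *clk)
{
        __raw_writel(__raw_readl(demo_frqcra) | DEMO_FRQCR_KICK, demo_frqcra);
}
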
Magnus Damm
0a5f337ecd sh: introduce struct clk_div4_table
This patch introduces struct clk_div4_table. The structure
will be used to keep div4-specific data, and with this patch
it replaces the struct clk_div_mult_table pointer arg used by
the sh_clk_div4_register() functions.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-22 19:11:20 +09:00
Magnus Damm
de7ca2144c sh: clock-cpg div4 set_rate() shift fix
Make sure the div4 bitfield is shifted according
to the enable_bit value in sh_clk_div4_set_rate().

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-22 19:11:19 +09:00
Russell King
4b3073e1c5 MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies.  We do this via make_coherent() by making the pages
uncacheable.

This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
  to construct a pointer to the pte again.  Passing a pte_t * is much
  more elegant.  Maybe we might even replace the pte argument with the
  pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want.  I want that
  -instead- of the PTE value, because I have issue on some ppc cases,
  for I$/D$ coherency, where set_pte_at() may decide to mask out the
  _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.

Includes a fix from Stephen Rothwell:

  sparc: fix fallout from update_mmu_cache API change

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-20 16:41:46 +00:00
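
The interface change itself is small; a sketch of the new hook shape (the stub body stands in for whatever cache/TLB work an architecture actually does):

#include <linux/mm.h>

/*
 * After this change the hook receives a pointer to the mapped PTE
 * rather than a pte_t value, so implementations can re-read it or
 * walk from it directly (the old form was: ..., unsigned long addr,
 * pte_t pte).
 */
void update_mmu_cache(struct vm_area_struct *vma,
                      unsigned long address, pte_t *ptep)
{
        /* e.g. sync I-cache/TLB state for 'address' based on *ptep */
}
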
Matt Fleming
8c563a30cd sh: Turn on speculative return for SH7785 and SH7786
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 18:54:18 +09:00
Paul Mundt
77f36fcc03 Merge branch 'sh/pmb-dynamic' 2010-02-18 18:35:20 +09:00
Paul Mundt
d01447b319 sh: Merge legacy and dynamic PMB modes.
This implements a bit of rework for the PMB code, which permits us to
kill off the legacy PMB mode completely. Rather than trusting the boot
loader to do the right thing, we do a quick verification of the PMB
contents to determine whether to have the kernel set up the initial
mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control
of them and make them match the kernel's initial mapping configuration.
This is accomplished by breaking the initialization phase out into
multiple steps: synchronization, merging, and resizing. With the recent
rework, the synchronization code establishes page links for compound
mappings already, so we build on top of this for promoting mappings and
reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also
permit us to dynamically resize the uncached mapping without any
particular headaches. The smallest page size is more than sufficient for
mapping all of kernel text, and as we're careful not to jump to any far
off locations in the setup code the mapping can safely be resized
regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 18:13:51 +09:00
Paul Mundt
2e450643d7 sh: Use uncached I/O helpers in PMB setup.
The PMB code is an example of something that spends an absurd amount of
time running uncached when only a couple of operations really need to be.
This switches over to the shiny new uncached helpers, permitting us to
spend far more time running cached.

Additionally, MMUCR twiddling is perfectly safe from cached space given
that it's paired with a control register barrier, so fix that up, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 13:26:05 +09:00
Paul Mundt
b8f7918f33 sh: Provide uncached I/O helpers.
There are lots of registers that can only be updated from the uncached
mapping, so we add some helpers for those cases in order to make it
easier to ensure that we only make the jump when it's absolutely
necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 13:23:30 +09:00
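
A hedged sketch of what one such helper can look like, built on the existing jump_to_uncached()/back_to_cached() pair so that only the register write itself runs from the uncached alias (the helper name is illustrative):

#include <linux/io.h>
#include <linux/types.h>
#include <asm/system.h>         /* jump_to_uncached()/back_to_cached() */

/* Update a register that may only be written while running uncached. */
static inline void demo_writel_uncached(u32 val, void __iomem *addr)
{
        jump_to_uncached();
        __raw_writel(val, addr);
        back_to_cached();
}
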
Paul Mundt
d53a0d33bc sh: PMB locking overhaul.
This implements some locking for the PMB code. A high level rwlock is
added for dealing with rw accesses on the entry map while a per-entry
data structure spinlock is added to deal with the PMB entry changing out
from underneath us.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 21:17:02 +09:00
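
A rough sketch of the two-level scheme: a global rwlock over the entry map, plus a per-entry spinlock for changes to an individual slot. Field and function names here are illustrative, not necessarily the in-tree ones:

#include <linux/spinlock.h>

struct demo_pmb_entry {
        unsigned long vpn, ppn, flags;
        int entry;                      /* hardware slot index */
        spinlock_t lock;                /* guards this slot's state */
        struct demo_pmb_entry *link;    /* compound-mapping chain */
};

static DEFINE_RWLOCK(demo_pmb_rwlock);  /* guards the entry map */

/* Typical reader path: shared lock on the map, then the entry lock. */
static void demo_pmb_sync_entry(struct demo_pmb_entry *pmbe)
{
        unsigned long flags;

        read_lock(&demo_pmb_rwlock);
        spin_lock_irqsave(&pmbe->lock, flags);
        /* ... re-program or resync this one hardware slot ... */
        spin_unlock_irqrestore(&pmbe->lock, flags);
        read_unlock(&demo_pmb_rwlock);
}
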
Mike Frysinger
e7b8e675d9 tracing: Unify arch_syscall_addr() implementations
Most implementations of arch_syscall_addr() are the same, so create a
default version in common code and move the one piece that differs (the
syscall table) to asm/syscall.h.  New arch ports don't have to waste
time copying & pasting this simple function.

The s390/sparc versions need to be different, so document why.

Signed-off-by: Mike Frysinger <vapier@gentoo.org>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
LKML-Reference: <1264498803-17278-1-git-send-email-vapier@gentoo.org>
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
2010-02-17 13:07:21 +01:00
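
The unified default boils down to a table lookup; a sketch of roughly what the common version looks like, assuming the architecture exposes sys_call_table via asm/syscall.h as the commit describes:

#include <linux/init.h>
#include <asm/syscall.h>        /* arch now exports sys_call_table here */

/*
 * Default arch_syscall_addr(): index straight into the syscall table.
 * s390 and sparc keep their own variants for the reasons documented
 * in the commit.
 */
unsigned long __init arch_syscall_addr(int nr)
{
        return (unsigned long)sys_call_table[nr];
}
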
Paul Mundt
0065b96775 sh: Fix up dynamically created write-through PMB mappings.
Write-through PMB mappings still require the cache bit to be set, even if
they're to be flagged with a different cache policy and bufferability
bit. To reduce some of the confusion surrounding the flag encoding we
centralize the cache mask based on the system cache policy while we're at
it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 18:05:23 +09:00
Paul Mundt
d7813bc9e8 sh: Build PMB entry links for existing contiguous multi-page mappings.
This plugs in entry sizing support for existing mappings and then builds
on top of that for linking together entries that are mapping contiguous
areas. This will ultimately permit us to coalesce mappings and promote
head pages while reclaiming PMB slots for dynamic remapping.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 17:56:38 +09:00
Paul Mundt
9edef28653 sh: uncached mapping helpers.
This adds some helper routines for uncached mapping support. This
simplifies some of the cases where we need to check the uncached mapping
boundaries in addition to giving us a centralized location for building
more complex manipulation on top of.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 16:28:00 +09:00
Paul Mundt
51becfd962 sh: PMB tidying.
Some overdue cleanup of the PMB code, killing off unused functionality
and duplication sprinkled about the tree.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 15:33:30 +09:00
Paul Mundt
7bdda6209f sh: Fix up more 64-bit pgprot truncation on SH-X2 TLB.
Both the store queue API and the PMB remapping take unsigned long for
their pgprot flags, which cuts off the extended protection bits. In the
case of the PMB this isn't really a problem since the cache attribute
bits that we care about are all in the lower 32 bits, but we do it just
to be safe. The store queue remapping on the other hand depends on the
extended prot bits for enabling userspace access to the mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 13:23:00 +09:00
Magnus Damm
838a4a9dce sh: fix sh7723 SDHI support using INTC force_disable
Update the sh7723 INTC tables with force_enable support
to mask out pending unsupported SDHI interrupt sources.

Without this patch the kernel locks up due to a pending
SDHI interrupt that the tmio_mmc driver cannot handle.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 12:45:44 +09:00
Magnus Damm
e9125ac0bf sh: fix sh7722 SDHI support using INTC force_disable
Update the sh7722 INTC tables with force_enable support
to mask out pending unsupported SDHI interrupt sources.

Without this patch the kernel locks up due to a pending
SDHI interrupt that the tmio_mmc driver cannot handle.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 12:45:43 +09:00
Paul Mundt
49f3bfe933 sh: Setup boot CPU VBR early to enable early page faults.
vmemmap and the vmsplit code amongst others need to be able to take page
faults much earlier than trap_init() time, so move this in to the early
CPU initialization. VBR setup for secondary CPUs is already handled
through start_secondary(), so we only need to do this for the boot CPU.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 12:33:22 +09:00
Paul Mundt
1d5cfcdff7 sh: Kill off some superfluous legacy PMB special casing.
The __va()/__pa() offsets and the boot memory offsets are consistent for
all PMB users, so there is no need to special case these for legacy PMB.
Kill the special casing off and depend on CONFIG_PMB across the board.
This also fixes up yet another addressing bug for sh64.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 21:43:38 +09:00
Paul Mundt
efd54ea315 sh: Merge the legacy PMB mapping and entry synchronization code.
This merges the code for iterating over the legacy PMB mappings and the
code for synchronizing software state with the hardware mappings. There's
really no reason to do the same iteration twice, and this also buys us
the legacy entry logging facility for the dynamic PMB case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 18:39:30 +09:00
Paul Mundt
55cef91a5d sh: Prevent fixed slot PMB remapping from clobbering boot entries.
The PMB initialization code walks the entries and synchronizes the
software PMB state with the hardware mappings, preserving the slot index.
Unfortunately pmb_alloc() only tested the bit position in the entry map
and failed to set it, resulting in subsequent remaps being dynamically
assigned a slot that trampled an existing boot mapping, with general
badness ensuing.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 17:14:04 +09:00
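
The bug pattern is a slot bitmap that was checked but never claimed; a hedged sketch of the corrected allocation step (the map name and entry count are illustrative):

#include <linux/bitops.h>
#include <linux/errno.h>

#define DEMO_NR_PMB_ENTRIES 16

static unsigned long demo_pmb_map;      /* one bit per hardware PMB slot */

/* Claim a specific slot, or the first free one when pos is negative. */
static int demo_pmb_alloc_entry(int pos)
{
        if (pos < 0)
                pos = find_first_zero_bit(&demo_pmb_map, DEMO_NR_PMB_ENTRIES);

        if (pos >= DEMO_NR_PMB_ENTRIES || test_bit(pos, &demo_pmb_map))
                return -ENOSPC;

        /* The missing piece: actually mark the slot as taken. */
        __set_bit(pos, &demo_pmb_map);

        return pos;
}
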
Nobuhiro Iwamatsu
319c2cc761 sh: Fix zImage boot using fixed PMB.
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 13:50:26 +09:00
Magnus Damm
fb1e776050 sh: fix sh7724 SDHI support using INTC force_disable
Update the sh7724 INTC tables with force_enable support
to mask out pending unsupported SDHI interrupt sources.

Without this patch the kernel locks up due to a pending
SDHI interrupt that the tmio_mmc driver cannot handle.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 13:38:57 +09:00
Paul Mundt
04c8697355 sh: Fix up legacy PMB mode offset calculation.
The change for fixing up sh64 inadvertently inverted the logic for legacy
PMB, fix that back up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-15 16:10:57 +09:00
Paul Mundt
028c5d5d59 Merge branch 'sh/stable-updates' 2010-02-15 14:49:37 +09:00
Paul Mundt
4b505db9c4 sh64: fix tracing of signals.
This follows the parisc change to ensure that tracehook_signal_handler()
is aware of when we are single-stepping in order to ptrace_notify()
appropriately. While this was implemented for 32-bit SH, sh64 neglected
to make use of TIF_SINGLESTEP when it was folded in with the 32-bit code,
resulting in ptrace_notify() never being called.

As sh64 uses all of the other abstractions already, this simply plugs in
the thread flag in the appropriate enable/disable paths and fixes up the
tracehook notification accordingly. With this in place, sh64 is brought
in line with what 32-bit is already doing.

Reported-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-15 14:17:45 +09:00
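
A sketch of the notification fix in terms of the tracehook interface of that era; the surrounding signal-delivery function is abbreviated and its name is hypothetical:

#include <linux/tracehook.h>
#include <linux/sched.h>

/*
 * After the handler frame has been set up, report it to the tracer.
 * The final argument is what sh64 was missing: whether the task is
 * single-stepping, taken from TIF_SINGLESTEP, so that ptrace_notify()
 * fires when it should.
 */
static void demo_signal_delivered(int sig, siginfo_t *info,
                                  struct k_sigaction *ka,
                                  struct pt_regs *regs)
{
        tracehook_signal_handler(sig, info, ka, regs,
                                 test_thread_flag(TIF_SINGLESTEP));
}
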
Paul Mundt
19f6b8b44e sh64: fix up memory offset calculation.
The linker script offsets were broken by the recent 29/32-bit
integration, so this fixes it up for sh64.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-12 15:41:45 +09:00
Paul Mundt
b0f3ae03ac sh: Isolate uncached mapping support.
This splits out the uncached mapping support under its own config option,
presently only used by 29-bit mode and 32-bit + PMB. This will make it
possible to optionally add an uncached mapping on sh64 as well as booting
without an uncached mapping for 32-bit.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-12 15:40:00 +09:00
Paul Mundt
a4dad4c75c sh: update sdk7786 defconfig.
This plugs in USB and PCI and other bits for SDK7786.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-10 16:06:42 +09:00
Paul Mundt
7578a4c625 sh: Fix up multi-resource mapping for SH7786 PCIe.
This reworks some of the SH7786 PCIe initialization code to dynamically
setup and size the various resource windows, as opposed to the original
code that simply wired in a couple of them statically.

At the same time, we tidy up the initialization code a bit, kill off some
read-only register twiddling that was gleaned from the bus analyzer, and
also propagate the physical slot/channel mapping.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-10 16:00:58 +09:00
Magnus Damm
801cd56e3e sh: break out enable/reparent div4 clocks on sh7723
Break out sh7723 div4 clocks for SIU and IRDA as
reparent / enable clocks. Similar to the SIU clock
patch for sh7722 by Guennadi.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-09 18:24:31 +09:00
Magnus Damm
3844eadcfd sh: sh7724/Ecovec24/KFR2R09/MS7724SE SDHI vector merge
Merge the SDHI vectors in the sh7724 INTC table
and update the SDHI platform data for Ecovec24,
KFR2R09 and MS7724SE.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-09 18:24:30 +09:00
Magnus Damm
e3e80046e0 sh: sh7723/AP325 SDHI vector merge
Merge the SDHI vectors in the sh7723 INTC table
and update the SDHI platform data for AP325.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-09 18:24:30 +09:00
Magnus Damm
8d9adabac3 sh: sh7722/Migo-R SDHI vector merge
Merge the SDHI vectors in the sh7722 INTC table
and update the SDHI platform data for Migo-R.

Signed-off-by: Magnus Damm <damm@opensource.se>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-09 18:24:29 +09:00
Paul Mundt
7561f2dd39 sh: Fix up SH7786 PCI resource definitions.
This adds in some of the missing memory resources for channels 1/2 and
gets the code building again for the recent changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 16:36:56 +09:00
Paul Mundt
13fd7aeb9a Merge branches 'sh/dwarf-unwinder', 'sh/g3-prep' and 'sh/stable-updates' 2010-02-08 11:48:10 +09:00
Paul Mundt
2e18e04798 Merge branch 'sh/dmaengine'
Conflicts:
	arch/sh/drivers/dma/dma-sh.c
2010-02-08 11:34:03 +09:00
Matt Fleming
858918b77b sh: Optimise FDE/CIE lookup by using red-black trees
Now that the DWARF unwinder is being used to provide perf callstacks,
unwinding speed is an issue. It is no longer being used only in
exceptional circumstances where we don't care about runtime performance,
e.g. when panicking, so it makes sense to improve performance where
possible.

With this patch I saw a 42% improvement in unwind time when calling
return_address(1). Greater improvements will be seen as the number of
levels unwound increases as each unwind is now cheaper.

Note that insertion time has doubled but that's just the price we pay
for keeping the trees balanced. However, this is a one-time cost for
kernel boot/module load and so the improvements in lookup time dominate
the extra time we spend keeping the trees balanced.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 11:29:15 +09:00
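
A hedged sketch of the lookup side of such an rbtree, keyed on the address range each FDE covers; the FDE fields below follow the DWARF naming but the glue code is illustrative:

#include <linux/rbtree.h>

struct demo_dwarf_fde {
        unsigned long initial_location;         /* start PC covered */
        unsigned long address_range;            /* length covered */
        struct rb_node node;
};

static struct rb_root demo_fde_root = RB_ROOT;

/* O(log n) search for the FDE covering 'pc', replacing a linear scan. */
static struct demo_dwarf_fde *demo_dwarf_lookup_fde(unsigned long pc)
{
        struct rb_node *n = demo_fde_root.rb_node;

        while (n) {
                struct demo_dwarf_fde *fde =
                        rb_entry(n, struct demo_dwarf_fde, node);

                if (pc < fde->initial_location)
                        n = n->rb_left;
                else if (pc >= fde->initial_location + fde->address_range)
                        n = n->rb_right;
                else
                        return fde;
        }

        return NULL;
}
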
Matt Fleming
1af0b2fc67 sh: Remove superfluous setup_frame_reg call
There's no need to set up the frame pointer again in
call_handle_tlbmiss. The frame pointer will already have been set up in
handle_interrupt.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 10:47:11 +09:00
Matt Fleming
944a343861 sh: Don't continue unwinding across interrupts
Unfortunately, due to poor DWARF info in current toolchains, unwinding
through interrupts cannot be done reliably. The problem is that the
DWARF info for function epilogues is wrong.

Take this standard epilogue sequence,

80003cc4:       e3 6f           mov     r14,r15
80003cc6:       26 4f           lds.l   @r15+,pr
80003cc8:       f6 6e           mov.l   @r15+,r14
						<---- interrupt here
80003cca:       f6 6b           mov.l   @r15+,r11
80003ccc:       f6 6a           mov.l   @r15+,r10
80003cce:       f6 69           mov.l   @r15+,r9
80003cd0:       0b 00           rts

If we take an interrupt at the highlighted point, the DWARF info will
bogusly claim that the return address can be found at some offset from
the frame pointer, even though the frame pointer was just restored. The
worst part is if the unwinder finds a text address at the bogus stack
address - unwinding will continue, for a bit, until it finally comes
across an unexpected address on the stack and blows up.

The only solution is to stop unwinding once we've calculated the
function that was executing when the interrupt occurred. This PC can be
easily calculated from pt_regs->pc.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 10:47:04 +09:00
Matt Fleming
1dca56f138 sh: Setup frame pointer in handle_exception path
In order to allow the DWARF unwinder to unwind through exceptions, we
need to set up the frame pointer register (r14).

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 10:46:53 +09:00
Matt Fleming
142698282c sh: Correct the offset of the return address in ret_from_exception
The address that ret_from_exception and ret_from_irq will return to is
found in the stack slot for SPC, not PR. This error was causing the
DWARF unwinder to pick up the wrong return address on the stack and then
unwind using the unwind tables for the wrong function.

While I'm here I might as well add CFI annotations for the other
registers since they could be useful when unwinding.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 10:46:46 +09:00
Guennadi Liakhovetski
cfefe99795 sh: implement DMA_SLAVE capability in SH dmaengine driver
Tested to work with a SIU ASoC driver on sh7722 (migor).

Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-08 09:40:26 +09:00
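
A hedged client-side sketch of how an ASoC/SIU-style user would request a slave channel now that the driver advertises DMA_SLAVE; the filter function and the slave configuration pointer are hypothetical:

#include <linux/dmaengine.h>

/* Hypothetical filter: hand driver-specific slave data to the channel. */
static bool demo_dma_filter(struct dma_chan *chan, void *slave_data)
{
        chan->private = slave_data;
        return true;
}

static struct dma_chan *demo_request_slave_channel(void *slave_data)
{
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_SLAVE, mask);   /* the capability added here */

        return dma_request_channel(mask, demo_dma_filter, slave_data);
}
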