The __INIT directive just before kernel_entry was ignored for most platforms.
This patch fixes it and gets rid of this warning:
WARNING: vmlinux.o(.text+0x478): Section mismatch: reference to .init.text:start_kernel (between '_stext' and 'run_init_process')
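For reference, the class of problem modpost warns about looks like this (a
minimal sketch with hypothetical names, not code from this patch): resident
code referencing a function that is discarded after boot.

	static int __init early_setup(void)	/* placed in .init.text */
	{
		return 0;
	}

	int resident_caller(void)		/* stays in .text */
	{
		return early_setup();	/* .text -> .init.text reference */
	}

Once the __INIT directive takes effect, kernel_entry lands in .init.text
alongside start_kernel, so their reference is no longer a cross-section
mismatch.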
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Replaces the deprecated __attribute_used__ with __used. Also makes some
style adjustments to abide by the kernel coding conventions.
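The conversion itself is mechanical; a minimal before/after sketch (the
variable here is hypothetical):

	/* before: deprecated spelling */
	static char info[] __attribute_used__ = "example";

	/* after: the current annotation */
	static char info[] __used = "example";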
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The per cpu data section contains two types of data: one set that is
exclusively accessed by the local cpu, and another set that is per cpu but
also shared with remote cpus. In the current kernel, these two sets are
not clearly separated. This can cause the same cacheline to be shared
between the two sets of data, which will result in unnecessary bouncing
of the cacheline between cpus.
One way to fix the problem is to cacheline align the remotely accessed per
cpu data, both at the beginning and at the end. Because of the padding at
both ends, this will likely cause some memory wastage, and the interface
to achieve it is not clean.
This patch:
Moves the remotely accessed per cpu data (currently marked
____cacheline_aligned_in_smp) into a different section, in which all the data
elements are cacheline aligned. This cleanly separates the local-only data
from the remotely accessed data.
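A sketch of how such a declaration can look, modeled on the
DEFINE_PER_CPU_SHARED_ALIGNED macro this series introduces (exact section
name and spelling per the actual patch):

	#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)			\
		__attribute__((__section__(".data.percpu.shared_aligned")))	\
		__typeof__(type) per_cpu__##name				\
		____cacheline_aligned_in_smp

Data defined this way lands in its own cacheline-aligned section, while
local-only DEFINE_PER_CPU data stays densely packed in the regular per cpu
section.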
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Identical implementations of PTRACE_POKEDATA go into a common
generic_ptrace_pokedata() function.
AFAICS, this fixes a bug on xtensa where a successful PTRACE_POKEDATA would
nevertheless return EPERM.
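The consolidated helper is essentially the following (a sketch; for xtensa
the fix is that the return value now comes from this common code instead of
an erroneous EPERM path):

	int generic_ptrace_pokedata(struct task_struct *tsk, long addr, long data)
	{
		int copied;

		copied = access_process_vm(tsk, addr, &data, sizeof(data), 1);
		return (copied == sizeof(data)) ? 0 : -EIO;
	}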
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If the kernel OOPSed or BUGed then it probably should be considered
tainted. Thus, all subsequent OOPSes and SysRq dumps will report the
tainted kernel. This saves a lot of time explaining oddities in the
calltraces.
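The mechanism is a one-line hook in each architecture's die path; a minimal
sketch with an illustrative function name (TAINT_DIE and its 'D' report flag
are the substance of the change):

	void my_arch_die(const char *str, struct pt_regs *regs)
	{
		add_taint(TAINT_DIE);	/* later oopses/SysRq dumps show 'D' */
		show_regs(regs);
		do_exit(SIGSEGV);
	}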
Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Added parisc patch from Matthew Wilson -Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
While the PC speaker is wired up to the i8254, there is more to the i8254
than just the PC speaker, so this code was getting in the way under its
current name.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
To support multiple TC microthreads acting as "CPUs" within a VPE,
VPE-wide interrupt mask bits must be specially manipulated during
interrupt handling. To support legacy drivers and interrupt controller
management code, SMTC has a "backstop" to track and if necessary restore
the interrupt mask. This has some performance impact on interrupt service
overhead. Disable it only if you know what you are doing.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Currently the number of unaligned instructions is counted but not used.
Add a /debug/mips/unaligned_instructions file to show the value, and add
/debug/mips/unaligned_action to control the behavior upon an unaligned
access. Possible actions are:
0: silently fixup the unaligned access.
1: send SIGBUS.
2: dump registers, process name, etc. and fixup.
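A sketch of the debugfs plumbing, assuming a mips_debugfs_dir dentry created
elsewhere (the /debug paths above assume debugfs is mounted on /debug):

	static u32 unaligned_instructions;	/* bumped by the fixup path */
	static u32 unaligned_action;		/* 0, 1 or 2 as listed above */

	static int __init debugfs_unaligned(void)
	{
		debugfs_create_u32("unaligned_instructions", S_IRUGO,
				   mips_debugfs_dir, &unaligned_instructions);
		debugfs_create_u32("unaligned_action", S_IRUGO | S_IWUSR,
				   mips_debugfs_dir, &unaligned_action);
		return 0;
	}
	__initcall(debugfs_unaligned);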
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patch to add MIPS common support for the PMC-Sierra MSP71xx devices.
Signed-off-by: Marc St-Jean <Marc_St-Jean@pmc-sierra.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This cuts the cost of RDHWR $29, which is used to obtain the TLS pointer
and so far has been emulated in software, down to a single-cycle
operation.
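For context, this is the access being accelerated; a sketch of reading the
TLS pointer via RDHWR (hardware register 29 is UserLocal; function name
illustrative):

	static inline void *rdhwr_tls_pointer(void)
	{
		void *tp;

		__asm__(".set	push\n\t"
			".set	mips32r2\n\t"
			"rdhwr	%0, $29\n\t"
			".set	pop"
			: "=r" (tp));
		return tp;
	}

Previously every such read trapped into the kernel's RDHWR emulation; with
this change it retires like an ordinary instruction.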
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Convert old/obsolete NORET_TYPE and ATTRIB_NORET macros to use the
newer standard of "__noreturn" as defined in compiler-gcc.h.
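The transformation is purely textual; a before/after sketch with a
hypothetical prototype:

	/* before */
	NORET_TYPE void ATTRIB_NORET die(const char *str, struct pt_regs *regs);

	/* after */
	void __noreturn die(const char *str, struct pt_regs *regs);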
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The SMP load-balancer uses the boot-time migration-cost estimation
code to attempt to improve the quality of balancing. The reason for
this code is that the discrete priority queues do not preserve
the order of scheduling accurately, so the load-balancer skips
tasks that were running on a CPU 'recently'.
This code is fundamentally fragile: the boot-time migration cost detector
doesn't really work on systems with large L3 caches, it caused boot
delays on large systems, and the whole cache-hot concept made the
balancing code pretty nondeterministic as well.
(And hey, I wrote most of it, so I can say it out loud that it sucks ;-)
Under CFS the same cache-affinity goal can be achieved without any
special cache-hot special-case: tasks are sorted in the 'timeline' tree
and the SMP balancer picks tasks from the left side of the tree, so the
most cache-cold task is balanced automatically.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
The idle loop goes to sleep using the WAIT instruction if !need_resched().
This suffers from a race condition: if, just after need_resched() has
returned 0, an interrupt sets TIF_NEED_RESCHED, we have already completed
the test and go to sleep anyway. This would be trivial to fix by
disabling interrupts during that sequence, as in:
	local_irq_disable();
	if (!need_resched())
		__asm__("wait");
	local_irq_enable();
but the processor architecture leaves it undefined whether a processor
calling WAIT with interrupts disabled will ever restart its pipeline, and
indeed some processors have made use of the freedom provided by the
architecture definition. This has been resolved, and the Config7.WII bit
indicates that the use of WAIT is safe on 24K, 24KE and 34K cores. It is
also safe on 74K starting with revision 2.1.0, so enable the use of WAIT
with interrupts disabled for 74K based on a c0_prid of at least that.
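A sketch of the resulting selection logic, under assumed names in the style
of mipsregs.h (MIPS_CONF7_WII for the Config7 bit, r4k_wait_irqoff for the
interrupts-disabled idle variant, c pointing at current_cpu_data):

	switch (c->cputype) {
	case CPU_24K:
	case CPU_34K:
		/* the core advertises that WAIT with IE=0 is safe */
		if (read_c0_config7() & MIPS_CONF7_WII)
			cpu_wait = r4k_wait_irqoff;
		break;
	case CPU_74K:
		/* no WII bit: gate on the revision instead */
		if ((c->processor_id & 0xff) >= rev_2_1_0)	/* hypothetical cutoff */
			cpu_wait = r4k_wait_irqoff;
		break;
	}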
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
C0_status doesn't need to be initialized at this point anyway; the register
will be initialized later.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We used to avoid WAIT entirely on the 20K but really only need to do
this on early revisions of the 20K. Without this, a 20K was a bit of a
power hog. Well, in the lower-power power hog category ;-)
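A sketch of the shape of the tightened check (the revision cutoff is purely
illustrative, not the value from the patch):

	case CPU_20KC:
		/* only early 20K revisions have a broken WAIT */
		if ((c->processor_id & 0xff) > 0x20)	/* hypothetical cutoff */
			cpu_wait = r4k_wait;
		break;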
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
For some platforms its definitions may conflict. So that's the one-liner.
The rest is 10 square kilometers of collateral damage fixup that this
include used to paper over.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Add support for a performance counter overflow interrupt that is separate
from the timer interrupt.
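On MIPS32R2 cores the separate interrupt is typically discovered from the
IntCtl register; a sketch assuming the usual mipsregs.h field names:

	if (cpu_has_mips_r2) {
		/* IPPCI: the pin the performance counter overflow raises */
		cp0_perfcount_irq = (read_c0_intctl() >> INTCTLB_IPPCI) & 7;
		if (cp0_perfcount_irq == cp0_compare_irq)
			cp0_perfcount_irq = -1;	/* shared with the timer */
	}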
Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Some non-DSP-enabled cores (24K / 34K) can generate a DSP exception where
they are actually expected to produce a reserved instruction exception.
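A sketch of the handler-level workaround (shape illustrative): on a DSP-less
core the exception is treated exactly like a reserved instruction, i.e. the
offending task gets SIGILL.

	asmlinkage void do_dsp(struct pt_regs *regs)
	{
		if (cpu_has_dsp)
			panic("Unexpected DSP exception");

		/* no DSP ASE: deliver what an RI exception would */
		force_sig(SIGILL, current);
	}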
Signed-off-by: Chris Dearman <chris@mips.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This did corrupt register s0, which the caller of self_ipi expects to
be unchanged. This is a kernel bug that will only be triggered by
compilers that compile __smtc_ipi_replay to use s0 across the invocation
of self_ipi. Gcc 4.1.2 does this, for example.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This fixes the warning:
arch/mips/kernel/traps.c:931: warning: 'do_default_vi' defined but not used
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
* O32 fadvise64() passes long long arguments in register pairs. Add a
sys32 version for the 64 bit kernel.
* N32 readahead() can pass a long long argument in one register. There
is no need to use sys32_readahead.
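A sketch of the shape of such an o32 wrapper, assuming the usual merge_64()
helper that joins an o32 register pair into one 64-bit value (argument
layout illustrative):

	asmlinkage long sys32_fadvise64_64(int fd, int __pad,
		unsigned long a2, unsigned long a3,	/* offset pair */
		unsigned long a4, unsigned long a5,	/* len pair */
		int advice)
	{
		return sys_fadvise64_64(fd, merge_64(a2, a3),
					merge_64(a4, a5), advice);
	}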
Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>