Commit Graph

22886 Commits

Michael Neuling
ce48b21007 powerpc: Add VSX context save/restore, ptrace and signal support
This patch extends the floating point save and restore code to use the
VSX load/stores when VSX is available.  This will make FP context
save/restore marginally slower on FP-only code when VSX is available,
as it has to load/store 128 bits rather than just 64 bits.

Code that mixes FP, VMX and VSX will see a consistent architected state.

The signals interface is extended to enable access to VSR 0-31
doubleword 1 after discussions with tool chain maintainers.  Backward
compatibility is maintained.

The ptrace interface is also extended to allow access to VSR 0-31 full
registers.

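For illustration, a ptrace user might fetch the new state along these
lines (a minimal sketch, assuming the powerpc PTRACE_GETVSRREGS request
and a buffer of one 64-bit doubleword per VSR; the exact layout is
defined by the kernel headers):

	#include <sys/ptrace.h>
	#include <sys/types.h>
	#include <stdint.h>

	/* Sketch: read VSR 0-31 doubleword 1 from a stopped tracee.
	 * PTRACE_GETVSRREGS comes from the powerpc asm/ptrace.h. */
	static long read_vsx_state(pid_t pid, uint64_t vsr_dw1[32])
	{
		return ptrace(PTRACE_GETVSRREGS, pid, NULL, vsr_dw1);
	}
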
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:50 +10:00
Michael Neuling
72ffff5b17 powerpc: Add VSX assembler code macros
This adds macros for the VSX load/store instructions, as most
binutils versions are not going to support them for a while.

Also add VSX register save/restore macros and vsr[0-63] register definitions.

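Schematically, such a macro hand-assembles the instruction with .long,
packing the register fields itself, including the sixth VSR-number bit
that the legacy 5-bit fields can't hold (a sketch; 0x7c000798 is the
stxvd2x encoding, primary opcode 31, extended opcode 972):

	/* Sketch: hand-encode stxvd2x.  xs is a VSR number (0-63); bit 5
	 * of xs travels in the low bit of the XX1-form instruction. */
	#define VSX_XX1(xs, a, b)  ((((xs) & 0x1f) << 21) | ((a) << 16) | \
				    ((b) << 11) | (((xs) >> 5) & 1))
	#define STXVD2X(xs, a, b)  .long (0x7c000798 | VSX_XX1((xs), (a), (b)))
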
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:48 +10:00
Michael Neuling
b962ce9d26 powerpc: Add VSX CPU feature
Add a VSX CPU feature.  Also add code to detect from the device tree
whether VSX is available.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:47 +10:00
Michael Neuling
c6e6771b87 powerpc: Introduce VSX thread_struct and CONFIG_VSX
The layout of the new VSR registers, and how they overlap the legacy
FPR and VR registers, is as follows:

                   VSR doubleword 0               VSR doubleword 1
          ----------------------------------------------------------------
  VSR[0]  |             FPR[0]            |                              |
          ----------------------------------------------------------------
  VSR[1]  |             FPR[1]            |                              |
          ----------------------------------------------------------------
          |              ...              |                              |
          |              ...              |                              |
          ----------------------------------------------------------------
  VSR[30] |             FPR[30]           |                              |
          ----------------------------------------------------------------
  VSR[31] |             FPR[31]           |                              |
          ----------------------------------------------------------------
  VSR[32] |                             VR[0]                            |
          ----------------------------------------------------------------
  VSR[33] |                             VR[1]                            |
          ----------------------------------------------------------------
          |                              ...                             |
          |                              ...                             |
          ----------------------------------------------------------------
  VSR[62] |                             VR[30]                           |
          ----------------------------------------------------------------
  VSR[63] |                             VR[31]                           |
          ----------------------------------------------------------------

VSX has 64 128-bit registers.  The first 32 registers overlap with the
FP registers and hence extend each of them by an additional 64 bits.
The second 32 registers overlap with the VMX registers.

This commit introduces the thread_struct changes required to reflect
this register layout.  Ptrace and signals code is updated so that the
floating point registers are correctly accessed from the thread_struct
when CONFIG_VSX is enabled.

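In C terms, the overlap for the low 32 VSRs looks roughly like this
(an illustrative sketch only, not the actual thread_struct definition):

	#include <stdint.h>

	/* Illustrative: VSR[i] (i = 0..31) is the architected FPR[i] in
	 * doubleword 0 plus an extra 64 bits in doubleword 1. */
	union vsr_low {
		struct {
			uint64_t fpr;	/* doubleword 0: legacy FPR[i] */
			uint64_t dw1;	/* doubleword 1: VSX-only half */
		} parts;
		uint64_t dw[2];		/* the full 128-bit VSR[i] */
	};
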
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:46 +10:00
Michael Neuling
6f3d8e6947 powerpc: Make load_up_fpu and load_up_altivec callable
Make load_up_fpu and load_up_altivec callable so they can be reused by
the VSX code.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:45 +10:00
Michael Neuling
10e343925a powerpc: Move altivec_unavailable
Move the altivec_unavailable code to make room at 0xf40, where the
vsx_unavailable exception will be.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:44 +10:00
Michael Neuling
9c75a31c35 powerpc: Add macros to access floating point registers in thread_struct.
We are going to change where the floating point registers are stored
in the thread_struct, so in preparation add some macros to access the
floating point registers.  Update all code to use these new macros.

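At this point the accessor can be a trivial indirection, which later
grows an extra index once the FP save area is widened for VSX (a
sketch; the macro name follows this description, the exact definitions
live in the kernel headers):

	/* Before the layout change: a plain array reference. */
	#define TS_FPR(i)	fpr[i]

	/* After CONFIG_VSX the same accessor can hide a second index,
	 * e.g. fpr[i][TS_FPROFFSET], without touching any callers. */
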
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:43 +10:00
Michael Neuling
9e7511861c powerpc: Fix MSR setting in 32 bit signal code
If we set the SPE MSR bit in save_user_regs we can blow away the VEC
bit.  This doesn't matter in reality, as they are in fact the same
bit, but it looks bad.

Also, when we add VSX in a later patch, we need to be able to set two
separate MSR bits here.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:42 +10:00
Tony Breeds
9b09c6d909 powerpc: Change the default link address for pSeries zImage kernels
Currently we set the start of the .text section to 4MB for pSeries.
In situations where the zImage is > 8MB we'll fail to boot (due to
overlapping with OF).  Move .text in a zImage from 4MB to 64MB
(well past OF).

We still will not be able to load large zImages unless we also move
OF; to that end, add a note to the zImage ELF to move OF to 32MB.  If
this is the very first kernel booted then we'll need to move OF
manually by setting real-base.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:32 +10:00
Michael Ellerman
c230328def powerpc: Use an alternative feature section in entry_64.S
Use an alternative feature section in _switch.  There are three cases
handled here: either we don't have an SLB, in which case we jump over
the entire code section, or we do, in which case we either do or don't
have 1TB segments.

Boot tested on Power3, Power5 and Power5+.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:31 +10:00
Michael Ellerman
362e7701fd powerpc: Add self-tests of the feature fixup code
This commit adds tests of the feature fixup code; they are run during
boot if CONFIG_FTR_FIXUP_SELFTEST=y.  Some of the tests manually invoke
the patching routines to check their behaviour, and others use the
macros and so are patched during the normal patching done during boot.

Because we have two sets of macros with different names, we use a macro
to generate the test of the macros, very niiiice.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:30 +10:00
Michael Ellerman
9b1a735de6 powerpc: Add logic to patch alternative feature sections
This commit adds the logic to patch alternative sections.  This is fairly
straightforward, except for branches.  Relative branches that jump from
inside the else section to outside of it need to be translated as they're
moved; otherwise they will jump to the wrong location.

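The translation boils down to recomputing the displacement against the
instruction's new address; roughly (a sketch for I-form branches, using
a hypothetical branch_target() helper):

	/* Sketch: move an I-form branch from src to dest, preserving its
	 * absolute target.  branch_target() (hypothetical) adds the
	 * sign-extended 26-bit displacement to src. */
	static unsigned int translate_branch(unsigned int insn,
					     unsigned long src,
					     unsigned long dest)
	{
		unsigned long target = branch_target(insn, src);
		long offset = target - dest;	/* new displacement */

		return (insn & ~0x03fffffc) | (offset & 0x03fffffc);
	}
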
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:29 +10:00
Michael Ellerman
fac23fe4be powerpc: Introduce infrastructure for feature sections with alternatives
The current feature section logic only supports nop'ing out code; this means
if you want to choose at runtime between instruction sequences, one or both
cases will have to execute the nop'ed out contents of the other section, eg:

BEGIN_FTR_SECTION
	or	1,1,1
END_FTR_SECTION_IFSET(FOO)
BEGIN_FTR_SECTION
	or	2,2,2
END_FTR_SECTION_IFCLR(FOO)

and the resulting code will be either,

	or	1,1,1
	nop

or,
	nop
	or	2,2,2

For small code segments this is fine, but for larger code blocks and in
performance-critical code segments, it would be nice to avoid the nops.
This commit starts to implement logic to allow the following:

BEGIN_FTR_SECTION
	or	1,1,1
FTR_SECTION_ELSE
	or	2,2,2
ALT_FTR_SECTION_END_IFSET(FOO)

and the resulting code will be either,

	or	1,1,1
or,
	or	2,2,2

We achieve this by extending the existing FTR macros. The current feature
section semantic just becomes a special case, ie. if the else case is empty
we nop out the default case.

The key limitation is that the size of the else case must be less than or
equal to the size of the default case. If the else case is smaller the
remainder of the section is nop'ed.

We let the linker put the else case code in with the rest of the text,
so that relative branches from the else case are more likely to link;
this has the disadvantage that we can't free the unused else cases.

This commit introduces the required macro and linker script changes, but
does not enable the patching of the alternative sections.

We also need to update two hand-made section entries in reg.h and timex.h.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:28 +10:00
Michael Ellerman
51c52e8669 powerpc: Split out do_feature_fixups() from cputable.c
The logic to patch CPU feature sections lives in cputable.c, but these days
it's used for CPU features as well as firmware features.  Move it into
its own file for neatness and as preparation for some additions.

While we're moving the code, we pull the loop body logic into a separate
routine, and remove a comment which doesn't apply anymore.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:24 +10:00
Michael Ellerman
b7bcda631e powerpc: Add PPC_NOP_INSTR, a hash define for the preferred nop instruction
A bunch of code has hard-coded the value for a "nop" instruction; it
would be nice to have a #define for it.

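On powerpc the preferred nop is "ori 0,0,0", so the define is just that
one encoding:

	#define PPC_NOP_INSTR	0x60000000	/* ori 0,0,0 == nop */
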
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:23 +10:00
Michael Ellerman
ae0dc73625 powerpc: Add tests of the code patching routines
Add tests of the existing code patching routines, as well as the new
routines added in the last commit.  The self-tests are run late in boot
when CONFIG_CODE_PATCHING_SELFTEST=y, which depends on DEBUG_KERNEL=y.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:22 +10:00
Michael Ellerman
411781a290 powerpc: Add new code patching routines
This commit adds some new routines for patching code, which will be used
in a following commit.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:21 +10:00
Michael Ellerman
053a858efa powerpc: Make create_branch() return errors if the branch target is too large
If you pass create_branch() a target that is more than 32MB - 4 bytes
above, or more than 32MB below, the branch site, then it's impossible
to create an immediate branch.  The current code doesn't check, which
will lead to us creating a branch to somewhere else - which is bad.

For code that cares to check, we return 0, which is easy to check for;
and for code that doesn't, at least we'll be creating an illegal
instruction rather than a branch to some random address.

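An immediate (I-form) branch carries a signed, word-aligned 26-bit
displacement, which is where the +/- 32MB limit comes from; a simplified
sketch of the check (ignoring absolute-branch handling):

	/* Sketch: return the branch instruction, or 0 if the target is
	 * unreachable with an immediate branch. */
	static unsigned int create_branch(unsigned long addr,
					  unsigned long target, int flags)
	{
		long offset = target - addr;

		if (offset < -0x02000000 || offset > 0x01fffffc ||
		    (offset & 0x3))
			return 0;	/* out of range or misaligned */

		return 0x48000000 | (flags & 0x3) | (offset & 0x03fffffc);
	}
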
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:19 +10:00
Michael Ellerman
e7a57273c6 powerpc: Allow create_branch() to return errors
Currently create_branch() creates a branch instruction for you, and
patches it into the call site.  In some circumstances it would be nice
to be able to create the instruction and patch it later, and also some
code might want to check for errors in the branch creation before
doing the patching.  A future commit will change create_branch() to
check for errors.

For callers that don't care, replace create_branch() with
patch_branch(), which just creates the branch and patches it directly.

While we're touching all the callers, change to using unsigned int *,
as this seems to match usage better.  That allows (and requires) us to
remove the volatile in the definition of vector in powermac/smp.c and
mpc86xx_smp.c; that's correct because now that we're passing vector as
an unsigned int *, the compiler knows that its value might change
across the patch_branch() call.

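patch_branch() itself can then be little more than the composition of
the two steps (a sketch consistent with the description above;
patch_instruction() stands for the store plus the required icache flush):

	/* Sketch: build the branch and patch it straight into place. */
	void patch_branch(unsigned int *addr, unsigned long target, int flags)
	{
		patch_instruction(addr,
				  create_branch((unsigned long)addr,
						target, flags));
	}
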
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:19 +10:00
Michael Ellerman
aaddd3eaca powerpc: Move code patching code into arch/powerpc/lib/code-patching.c
We currently have a few routines for patching code in asm/system.h, because
they didn't fit anywhere else. I'd like to clean them up a little and add
some more, so first move them into a dedicated C file - they don't need to
be inlined.

While we're moving the code, drop create_function_call(), it's intended
caller never got merged and will be replaced in future with something
different.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:18 +10:00
Kumar Gala
f0c426bc35 powerpc: Move common module code into its own file
Refactor common code between ppc32 and ppc64 module handling into a
shared file.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:05 +10:00
Dave Kleikamp
87e9ab13c3 powerpc: hash_huge_page() should get the WIMG bits from the lpte
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:02 +10:00
Joel Schopp
0cb9901377 powerpc: Tell firmware we support architecture V2.06
Add the bits to the architecture-vec so that ibm,client-architecture
lets the firmware know we support the 2.06 architecture.

Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:28:00 +10:00
Joel Schopp
635f5a6354 powerpc: Add cputable entry for Power7 architected mode
Add an entry for Power7 architected mode and add "(raw)" to Power7 raw
mode to distinguish it more clearly.

Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-07-01 11:27:59 +10:00
Paul Mackerras
3a8247cc2c powerpc: Only demote individual slices rather than whole process
At present, if we have a kernel with a 64kB page size, and some
process maps something that has to be mapped with 4kB pages (such as a
cache-inhibited mapping on POWER5+, or the eHCA infiniband queue-pair
pages), we change the process to use 4kB pages everywhere.  This hurts
the performance of HPC programs that access eHCA from userspace.

With this patch, the kernel will only demote the slice(s) containing
the eHCA or cache-inhibited mappings, leaving the remaining slices
able to use 64kB hardware pages.

This also changes the slice_get_unmapped_area code so that it is
willing to place a 64k-page mapping into (or across) a 4k-page slice
if there is no better alternative, i.e. if the program specified
MAP_FIXED or if there is not sufficient space available in slices that
are either empty or already have 64k-page mappings in them.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-01 11:27:57 +10:00
Michael Neuling
e952e6c4d6 powerpc: Add cputable entry for POWER7
Add a cputable entry for the POWER7 processor.

Also tell firmware that we know about POWER7.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:31:11 +10:00
Becky Bruce
316a405841 powerpc: Get rid of bitfields in ppc_bat struct
While working on the 36-bit physical support, I noticed that there
was exactly one line of code that actually referenced the bitfields.
So I got rid of them and redefined ppc_bat as a struct of 2 u32's:
batu and batl.  I also got rid of the previous union that held the
bitfield structs and a word representation of the batu/l values.

This seems like a nicer solution than adding a bunch of new bitfields
to support extended BAT addressing that would never get used; just
leaving the struct as-is would have been incomplete in the face of
large physical addressing.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:31:05 +10:00
Becky Bruce
7c5c4325d2 powerpc: Change BAT code to use phys_addr_t
Currently, the physical address is an unsigned long, but it should
be phys_addr_t in set_bat, [v/p]_mapped_by_bat.  Also, create a
macro that can convert a large physical address into the correct
format for programming the BAT registers.

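In outline, the change is to the types rather than the logic
(prototypes sketched from the description; the exact signatures live
in the 32-bit mm code):

	/* Before: large physical addresses are silently truncated. */
	void set_bat(int index, unsigned long virt, unsigned long phys,
		     unsigned int size, int flags);

	/* After: the full physical address width reaches the BAT setup,
	 * and a conversion macro folds it into the register format. */
	void set_bat(int index, unsigned long virt, phys_addr_t phys,
		     unsigned int size, int flags);
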
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:31:03 +10:00
Arnd Bergmann
36c35be332 powerpc: Increase CRASH_HANDLER_MAX
There are now two potential callers of machine_crash_shutdown,
so increase the limit accordingly.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:31:00 +10:00
Arnd Bergmann
5acb08070d powerpc/cell: Disable ptcal in case of crash kdump
We need to disable ptcal before starting a new kernel after a crash,
in order to avoid overwriting data in the kdump kernel.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:58 +10:00
Arnd Bergmann
cf2076012f powerpc/pseries: Call pseries_kexec_setup only on pseries
The pseries_kexec_setup function overwrites some ppc_md
pointers, so make sure it only gets called when running on
the right architecture.

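The usual idiom is an early platform check in the initcall itself,
sketched below (machine_is() is powerpc's real test; the body is
illustrative):

	static int __init pseries_kexec_setup(void)
	{
		if (!machine_is(pseries))
			return 0;	/* other platform: leave ppc_md alone */

		/* ... install the pseries kexec hooks into ppc_md ... */
		return 0;
	}
	device_initcall(pseries_kexec_setup);
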
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:57 +10:00
Benjamin Herrenschmidt
41743a4e34 powerpc: Free a PTE bit on ppc64 with 64K pages
This frees a PTE bit when using 64K pages on ppc64.  This is done
by getting rid of the separate _PAGE_HASHPTE bit.  Instead, we just test
if any of the 16 sub-page bits is set.  For non-combo pages (ie. real
64K pages), we set SUB0 and the location encoding in that field.

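Schematically, the test that replaces the dedicated bit might look like
this (a sketch; PTE_SUBPAGE_MASK is a hypothetical name for the 16
sub-page presence bits described above):

	/* Sketch: "has a hash PTE" becomes "any 4K sub-page present". */
	static inline int pte_has_hashpte(unsigned long pte)
	{
		return (pte & PTE_SUBPAGE_MASK) != 0;
	}
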
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:53 +10:00
Nick Piggin
b1e2270ffe spufs: Convert nopfn to fault
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:44 +10:00
Segher Boessenkool
1976aef970 powerpc: Get rid of CROSS32{AS,LD,OBJCOPY}
CROSS32AS and CROSS32LD are never used (instead, CROSS32CC is used
with proper command line options).

CROSS32OBJCOPY isn't used anymore either, since the "wrapper" stuff
was added.

Remove these unused variables.

Signed-off-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-06-30 22:30:39 +10:00
Paul Mackerras
e9a4b6a3f6 Merge branch 'linux-2.6' 2008-06-30 10:16:50 +10:00
Paul Mackerras
441dbb500b Merge branch 'next' of master.kernel.org:/pub/scm/linux/kernel/git/jwboyer/powerpc-4xx 2008-06-30 09:57:05 +10:00
Kumar Gala
dee805532a powerpc: Add dma nodes to 83xx, 85xx and 86xx boards
Added DMA nodes for the elo/elo-plus DMA engines.

Renamed the interrupt controller alias in mpc832x_rdb.dts to ipic so that
it's the same as on all the other boards.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-27 16:04:29 -05:00
Kumar Gala
f82796214a powerpc/booke: Add kprobes support for booke style processors
This patch is based on work done by Madhvesh R. Sulibhavi back in
March 2007.

We refactor some of the single step handling since it differs between
"classic" and "booke" powerpc cores.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 03:35:46 -05:00
Kumar Gala
b76e59d1fb powerpc/kprobes: Some minor fixes
* Mark __flush_icache_range as a function that can't be probed, since it's
  used by the kprobe code.

* Fix an issue with single stepping and async exceptions.  We need to
  ensure that we don't get an async exception (external, decrementer, etc.)
  while we are attempting to single step the probe point.

  Added a check to ensure we only handle a single step if it's really
  intended for the instruction in question.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 03:35:33 -05:00
Anton Vorontsov
d14b3dd619 powerpc/QE: use arch_initcall to probe QUICC Engine GPIOs
It was discussed that a global arch_initcall() is the preferred way to
probe QE GPIOs, so let's use it.

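That is, registration moves into a single global initcall, roughly (a
sketch; qe_add_gpiochips is an assumed name for the probe routine):

	/* Sketch: probe all QE GPIO banks once, from a global initcall,
	 * instead of from each board's platform setup code. */
	static int __init qe_add_gpiochips(void)
	{
		/* walk the device tree for QE GPIO controller nodes ... */
		return 0;
	}
	arch_initcall(qe_add_gpiochips);
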
Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 01:49:09 -05:00
Vitaly Bordug
2308c954f5 powerpc/85xx: Update pin setup for 8560ads
Port B and C pin programming is changed to get the SCC2 UART and FCC3
Ethernet working.

Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 01:49:07 -05:00
Kumar Gala
aba11fc50c powerpc/e500mc: flush L2 on NAP for e500mc
If we have an L2CSR register (e500mc) we need to flush the L2 before going
to nap.  We use the HW flush mechanism provided in that register.

The code reuses the CPU_FTR_604_PERF_MON bit, as it is no longer used by
any code in the kernel.  Additionally, we didn't reuse the existing L2CR
feature bit, as that is intended for the 7xxx L2CR register, while L2CSR
is part of the new Freescale "Book-E" registers.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 01:49:03 -05:00
Kumar Gala
fc4033b2f8 powerpc/85xx: add DOZE/NAP support for e500 core
The e500 core enters the DOZE/NAP power-saving modes when it goes into
the cpu_idle routine.

The default power-saving mode is DOZE; if the user runs

echo 1 > /proc/sys/kernel/powersave-nap

the system will change to NAP mode.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-26 01:48:56 -05:00
Bryan Wu
8d0a60032f Blackfin arch: fix up section mismatch warning
--
WARNING: vmlinux.o(.text+0x721a): Section mismatch in reference from the function ___fill_code_cplbtab() to the function .init.text:_fill_cplbtab()
The function ___fill_code_cplbtab() references
the function __init _fill_cplbtab().
This is often because ___fill_code_cplbtab lacks a __init
annotation or the annotation of _fill_cplbtab is wrong.

WARNING: vmlinux.o(.text+0x72a2): Section mismatch in reference from the function ___fill_data_cplbtab() to the function .init.text:_fill_cplbtab()
The function ___fill_data_cplbtab() references
the function __init _fill_cplbtab().
This is often because ___fill_data_cplbtab lacks a __init
annotation or the annotation of _fill_cplbtab is wrong.

(The same warning repeats for ___fill_code_cplbtab at .text+0x7238,
0x7250 and 0x7264, and for ___fill_data_cplbtab at .text+0x72bc,
0x72d4 and 0x72e8.)
--

Signed-off-by: Bryan Wu <cooloney@kernel.org>
2008-06-25 12:41:51 +08:00
Sonic Zhang
71a7d15562 Blackfin arch: fix bug - kernel boot fails when Spinlock and rw-lock debugging enabled
Initialize the lock of bad_irq_desc properly.
The contents of the irq_desc array are replaced by bad_irq_desc in the
Blackfin arch irqchip init code, so initialize the lock properly, as
the common irq init code does.

Signed-off-by: Sonic Zhang <sonic.zhang@analog.com>
Signed-off-by: Bryan Wu <cooloney@kernel.org>
2008-06-25 12:02:07 +08:00
Linus Torvalds
bd8c540fe8 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
  [IA64] Eliminate NULL test after alloc_bootmem in iosapic_alloc_rte()
  [IA64] Handle count==0 in sn2_ptc_proc_write()
  [IA64] Fix boot failure on ia64/sn2
2008-06-24 18:12:33 -07:00
Linus Torvalds
919c0d14ae Merge branch 'kvm-updates-2.6.26' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm
* 'kvm-updates-2.6.26' of git://git.kernel.org/pub/scm/linux/kernel/git/avi/kvm:
  KVM: Remove now unused structs from kvm_para.h
  x86: KVM guest: Use the paravirt clocksource structs and functions
  KVM: Make kvm host use the paravirt clocksource structs
  x86: Make xen use the paravirt clocksource structs and functions
  x86: Add structs and functions for paravirt clocksource
  KVM: VMX: Fix host msr corruption with preemption enabled
  KVM: ioapic: fix lost interrupt when changing a device's irq
  KVM: MMU: Fix oops on guest userspace access to guest pagetable
  KVM: MMU: large page update_pte issue with non-PAE 32-bit guests (resend)
  KVM: MMU: Fix rmap_write_protect() hugepage iteration bug
  KVM: close timer injection race window in __vcpu_run
  KVM: Fix race between timer migration and vcpu migration
2008-06-24 18:09:06 -07:00
Linus Torvalds
9bf8a943ad Merge branch 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  xen: remove support for non-PAE 32-bit
2008-06-24 11:21:47 -07:00
Gerd Hoffmann
f6e16d5ad4 x86: KVM guest: Use the paravirt clocksource structs and functions
This patch updates the kvm guest code to use the pvclock structs
and functions, thereby making it compatible with Xen.

The patch also fixes an initialization bug: on SMP systems the
per-cpu area has two different locations early at boot and after CPU
bringup.  kvmclock must take that into account when registering the
physical address with the host.

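Conceptually, each CPU must (re)register its own clock page once the
per-cpu area has reached its final location, e.g. (a sketch with
illustrative details; MSR_KVM_SYSTEM_TIME is the real kvmclock MSR,
the rest is assumption):

	/* Sketch: run on each CPU at bringup, so the address registered
	 * with the host is the final per-cpu location. */
	static void kvm_register_clock(void)
	{
		int cpu = smp_processor_id();
		u64 pa = __pa(&per_cpu(hv_clock, cpu));

		/* low bit = enable; the host starts updating this page */
		native_write_msr(MSR_KVM_SYSTEM_TIME,
				 (u32)pa | 1, (u32)(pa >> 32));
	}
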
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-06-24 21:02:33 +03:00
Gerd Hoffmann
50d0a0f987 KVM: Make kvm host use the paravirt clocksource structs
This patch updates the kvm host code to use the pvclock structs.
It also makes the paravirt clock compatible with Xen.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
2008-06-24 21:02:32 +03:00