Intel systems report the cache level data from CPUID 4 in sysfs.
Add a CPUID 4 emulation for AMD CPUs to report the same
information for them. This allows programs to read this
information in a uniform way.
The AMD way of reporting this is less flexible, so some assumptions are
hardcoded (e.g. no L3).
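For reference, a rough sketch (not code from this patch) of how a CPUID 4
subleaf is decoded once the emulation provides it; the field layout is the
documented one that the emulation has to mimic:

unsigned int index = 0;                 /* cache index / subleaf */
unsigned int eax, ebx, ecx, edx;
unsigned int type, level, ways, partitions, line_size, sets, size;

cpuid_count(4, index, &eax, &ebx, &ecx, &edx);
type       = eax & 0x1f;                /* 0 means no more caches */
level      = (eax >> 5) & 0x7;
ways       = ((ebx >> 22) & 0x3ff) + 1;
partitions = ((ebx >> 12) & 0x3ff) + 1;
line_size  = (ebx & 0xfff) + 1;
sets       = ecx + 1;
size       = ways * partitions * line_size * sets;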
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Previously the apicid<->coreid split was computed based on the maximum
number of cores. Now use a new CPUID field that AMD defined for this
purpose. On most systems right now it will be 0, in which case the old
method is still used.
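A sketch of the idea, assuming the new field is ApicIdCoreIdSize in
CPUID 0x80000008 ECX[15:12] (the fallback helper name is made up):

unsigned int eax, ebx, ecx, edx, core_bits;

cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
core_bits = (ecx >> 12) & 0xf;          /* ApicIdCoreIdSize */
if (!core_bits)
        /* field not provided: derive the split from the maximum
           number of cores, as before */
        core_bits = core_bits_from_max_cores();  /* placeholder */
/* coreid = apicid & ((1 << core_bits) - 1); the rest is the package */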
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
This reverts commits
3e3318dee0 [PATCH] swsusp: x86_64 mark special saveable/unsaveable pages
b6370d96e0 [PATCH] swsusp: i386 mark special saveable/unsaveable pages
ce4ab0012b [PATCH] swsusp: add architecture special saveable pages support
because not only do they apparently cause page faults on x86, but the
infrastructure also doesn't compile on powerpc.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Copy the softirq bits of preempt_count from the current context into
the hardirq context when using 4K stacks, so that the softirq_count()
macro works correctly and softirq CPU time accounting is fixed.
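The fix amounts to something like this in the 4K-stacks IRQ entry path
(field names follow the i386 irq_ctx code of the time; treat it as a
sketch):

/* carry the softirq bits of the interrupted context's preempt_count
   over into the separate hardirq stack's thread_info, so that
   softirq_count() still sees whether a softirq was being processed */
irqctx->tinfo.preempt_count =
        (irqctx->tinfo.preempt_count & ~SOFTIRQ_MASK) |
        (curctx->tinfo.preempt_count &  SOFTIRQ_MASK);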
Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
As described in a previous patch and documented in mm/filemap.h,
copy_from_user_inatomic* shouldn't zero out the tail of the buffer after an
incomplete copy.
This patch implements that change for i386.
For the _nocache version, a new __copy_user_intel_nocache is defined,
similar to __copy_user_zeroing_intel_nocache, and this is ultimately
used for the copy.
For the regular version, __copy_from_user_ll_nozero is defined, which
uses __copy_user and __copy_user_intel - the latter needs casts to
reposition the __user annotations.
If copy_from_user_inatomic is given a constant length of 1, 2, or 4,
then we still zero the destination on failure. This didn't seem worth
the effort of fixing, as the places where it is used really don't care.
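A minimal usage sketch of the new semantics (the non-atomic fallback
helper is a placeholder, not something this patch adds):

/* the atomic copy may return a non-zero "bytes not copied" count and,
   with this change, leaves the tail of the destination untouched
   instead of zeroing it */
char *kaddr = kmap_atomic(page, KM_USER0);
unsigned long left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
kunmap_atomic(kaddr, KM_USER0);
if (left)
        left = copy_rest_nonatomically(page, offset, buf, bytes); /* placeholder */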
Signed-off-by: Neil Brown <neilb@suse.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add cpu_relax() to infinite loops in crash.c and doublefault.c. This is
the safest change.
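For reference, the pattern is simply (the condition is illustrative):

/* spin politely: cpu_relax() is "rep; nop" (PAUSE) on x86 and implies
   a compiler barrier, so the condition is re-read every iteration */
while (!done)
        cpu_relax();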
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Add cpu_relax() to various smpboot.c init loops. cpu_relax() always
implies a barrier (according to Arjan), so remove the explicit barriers
as well.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Clean up and refactor i386 sub-architecture setup.
This change moves all the code from the
asm-i386/mach-*/setup_arch_pre/post.h headers into
arch/i386/mach-*/setup.c. mach-*/setup_arch_pre.h is renamed to
setup_arch.h and now contains only things which belong in header files.
This is purely code motion; there should be no functional changes at
all.
Several functions in arch/i386/kernel/setup.c needed to be made non-static
so that they're visible to the code in mach-*/setup.c. asm-i386/setup.h is
used to hold the prototypes for these functions.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Christian Limpach <Christian.Limpach@cl.cam.ac.uk>
Cc: Martin Bligh <mbligh@google.com>
Cc: James Bottomley <James.Bottomley@steeleye.com>
Cc: Andrey Panin <pazke@donpac.ru>
Cc: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Remove VM_LOCKED before remap_pfn_range() calls in device drivers and
get rid of VM_SHM.
remap_pfn_range() already sets VM_IO. There is no need to set VM_SHM
since it does nothing, and VM_LOCKED is of no use since
remap_pfn_range() does not place pages on the LRU; the pages are
therefore never subject to swap anyway. Remove all the vm_flags
settings before calling remap_pfn_range().
After removing all the vm_flags settings, no use of VM_SHM is left.
Drop it.
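A sketch of what a driver mmap handler looks like after this cleanup
(foo_mmap and pfn are placeholders for the driver's own handler and
physical page frame):

static int foo_mmap(struct file *file, struct vm_area_struct *vma)
{
        /* no manual vm_flags fiddling: remap_pfn_range() already sets
           VM_IO, and VM_SHM/VM_LOCKED bought nothing here anyway */
        return remap_pfn_range(vma, vma->vm_start, pfn,
                               vma->vm_end - vma->vm_start,
                               vma->vm_page_prot);
}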
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Convert a few stragglers over to for_each_possible_cpu(), remove
for_each_cpu().
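The conversion itself is mechanical, e.g. (the per-cpu variable is made
up):

int cpu;

/* was: for_each_cpu(cpu) */
for_each_possible_cpu(cpu)
        per_cpu(example_counter, cpu) = 0;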
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Considering that there isn't a lot of hw we can depend on during resume,
this is about as good as it gets.
This is x86-only for now, although the basic concept (and most of the
code) will certainly work on almost any platform.
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6: (65 commits)
ACPI: suppress power button event on S3 resume
ACPI: resolve merge conflict between sem2mutex and processor_perflib.c
ACPI: use for_each_possible_cpu() instead of for_each_cpu()
ACPI: delete newly added debugging macros in processor_perflib.c
ACPI: UP build fix for bugzilla-5737
Enable P-state software coordination via _PDC
P-state software coordination for speedstep-centrino
P-state software coordination for acpi-cpufreq
P-state software coordination for ACPI core
ACPI: create acpi_thermal_resume()
ACPI: create acpi_fan_suspend()/acpi_fan_resume()
ACPI: pass pm_message_t from acpi_device_suspend() to root_suspend()
ACPI: create acpi_device_suspend()/acpi_device_resume()
ACPI: replace spin_lock_irq with mutex for ec poll mode
ACPI: Allow a WAN module enable/disable on a Thinkpad X60.
sem2mutex: acpi, acpi_link_lock
ACPI: delete unused acpi_bus_drivers_lock
sem2mutex: drivers/acpi/processor_perflib.c
ACPI add ia64 exports to build acpi_memhotplug as a module
ACPI: asus_acpi_init(): propagate correct return value
...
Manual resolve of conflicts in:
arch/i386/kernel/cpu/cpufreq/acpi-cpufreq.c
arch/i386/kernel/cpu/cpufreq/speedstep-centrino.c
include/acpi/processor.h
list_splice_init(list, head) does unneeded work if it is known that
list_empty(head) == 1. We can use list_replace_init() instead.
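In other words (illustrative snippet):

/* when list_empty(head) is already known to be true, this ...    */
list_splice_init(list, head);
/* ... can be replaced by the cheaper call below: 'head' takes over
   the entries of 'list' and 'list' is reinitialised to empty      */
list_replace_init(list, head);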
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The hardirq_ctx and softirq_ctx variables are written to during init
only, so mark them __read_mostly.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Default values for boolean and tristate options can only be 'y', 'm' or 'n'.
This patch removes the wrong default for SCHED_SMT.
Signed-off-by: Jean-Luc Leger <jean-luc.leger@dspnet.fr.eu.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Move do_suspend_lowlevel to the correct segment. If it is in the same
hugepage as read-only data, mark_rodata_ro() will make it
non-executable.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
flush_tlb_all() uses on_each_cpu(), which disables and re-enables
interrupts. During suspend/resume this would wrongly re-enable
interrupts.
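A sketch of the workaround in the suspend/resume path, where interrupts
must stay off:

/* flush only this CPU's TLB; no on_each_cpu(), so the IRQ state is
   left untouched */
__flush_tlb_all();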
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Pages (Reserved/ACPI NVS/ACPI Data) below max_low_pfn are currently
saved/restored by S4. We should mark the 'Reserved' pages as not
saveable.
Pages (Reserved/ACPI NVS/ACPI Data) above max_low_pfn are currently not
saved/restored by S4. We should save the 'ACPI NVS/ACPI Data' pages.
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Nigel Cunningham <nigel@suspend2.net>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
New CPU flags for the next generation of the crypto engine, as found in
VIA C7 processors.
Signed-off-by: Michal Ludvig <michal@logix.cz>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Sometimes thread_info and task_struct get out of sync with each other.
Printing task.thread_info in show_registers() can help spot this. And
when the task_struct is corrupt, task.comm can contain garbage, so
print only as many characters as the field can hold.
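A sketch of the resulting print (format details are illustrative):

/* bound the comm print to the field size and show both views of
   thread_info so a mismatch is visible in the oops output */
printk("Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)\n",
       (int)sizeof(current->comm), current->comm, current->pid,
       current_thread_info(), current, task_thread_info(current));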
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Never allow int3 traps from V8086 mode to enter the kprobes handler.
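The guard amounts to something like the following in the int3 notifier
path (a sketch; VM_MASK is the EFLAGS VM bit on i386):

/* an int3 from a vm86 task is not ours: never treat it as a kprobe
   breakpoint, let the normal trap path handle it */
if (regs->eflags & VM_MASK)
        return 0;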
Signed-off-by: Zachary Amsden <zach@vmware.com>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: Chuck Ebbert <76306.1226@compuserve.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
We need to check for vm86 mode before looking at the selector privilege
bits. In vm86 mode the segment limit is always base + 64k and only the
low 16 bits of EIP are significant.
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Zachary Amsden <zach@vmware.com>
Cc: Rohit Seth <rohitseth@google.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Use proper defines instead of open-coded values.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Constify structs and add one __initdata annotation.
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
PCI code was outside of CONFIG_PCI; also add __initdata to cyrix_55x0
(since it is accessed within an __init function only).
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
The i386 page fault handler does not allow enough slack when checking for
userspace access below the current stack pointer. This prevents use of the
enter instruction by user code. Fix this by allowing enough slack for
"enter $65535,$31" to execute.
Problem reported by Tomasz Malesinski <tmal@mimuw.edu.pl>
Tested using this program, based on the original from Tomasz:
.file "ovflow.S"
.version "01.01"
gcc2_compiled.:
.section .rodata
.LC0:
.string "asdf\n"
.text
.align 4
.globl main
.type main,@function
main:
nest_level=0
.rept 30
enter $0,$nest_level
nest_level=nest_level+1
.endr
enter $65535,$30
enter $65535,$31
addl $-12,%esp
pushl $.LC0
call printf
addl $16,%esp
.L2:
.rept 32
leave
.endr
ret
.Lfe1:
.size main,.Lfe1-main
.ident "GCC: (GNU) 2.95.4 20011002 (Debian prerelease)"
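The relaxed check in the fault handler looks roughly like this (a
sketch; "enter $65535,$31" can touch up to 65536 + 31*4 bytes below the
stack pointer before the first ordinary access):

/* the old code allowed only ~32 bytes of slack below %esp; allow
   enough for the worst-case enter instruction before giving up */
if (address + 65536 + 32 * sizeof(unsigned long) < regs->esp)
        goto bad_area;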
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
On i386, kernel irq balance doesn't work.
1) In function do_irq_balance, after the kernel finds the min_loaded
cpu but before calling set_pending_irq to actually pin selected_irq to
the target cpu, it does a cpus_and with irq_affinity[selected_irq].
Later, when the irq is acked, the kernel calls
move_native_irq => desc->handler->set_affinity to change the irq
affinity. However, every function implementing
hw_interrupt_type->set_affinity(unsigned int irq, cpumask_t cpumask)
always sets irq_affinity[irq] to cpumask. The next time do_irq_balance
runs, it does the cpus_and with irq_affinity[selected_irq] again, but
by then irq_affinity[selected_irq] has already been narrowed to the
single cpu selected by the first balancing pass (see the sketch below).
2) Function balance_irq in arch/i386/kernel/io_apic.c has the same
issue.
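The problematic interaction, boiled down (simplified names):

/* do_irq_balance(): constrain the choice by the current affinity */
cpus_and(allowed_mask, cpu_online_map, irq_affinity[selected_irq]);
/* ... pick min_loaded from allowed_mask, then ... */
set_pending_irq(selected_irq, cpumask_of_cpu(min_loaded));

/* later, on ack: move_native_irq() -> desc->handler->set_affinity(),
   which does irq_affinity[selected_irq] = cpumask_of_cpu(min_loaded);
   from then on the cpus_and above can only yield that single CPU */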
[akpm@osdl.org: cleanups]
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
If CONFIG_FRAME_POINTER is enabled and one does a dump_stack() during
early SMP init, an infinite stackdump and a bootup hang happen:
[<c0104e7f>] show_trace+0xd/0xf
[<c0104e96>] dump_stack+0x15/0x17
[<c01440df>] save_trace+0xc3/0xce
[<c014527d>] mark_lock+0x8c/0x4fe
[<c0145df5>] __lockdep_acquire+0x44e/0xaa5
[<c0146798>] lockdep_acquire+0x68/0x84
[<c1048699>] _spin_lock+0x21/0x2f
[<c010d918>] prepare_set+0xd/0x5d
[<c010daa8>] generic_set_all+0x1d/0x201
[<c010ca9a>] mtrr_ap_init+0x23/0x3b
[<c010ada8>] identify_cpu+0x2a7/0x2af
[<c01192a7>] smp_store_cpu_info+0x2f/0xb4
[<c01197d0>] start_secondary+0xb5/0x3ec
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[<c104ec11>] end_of_stack_stop_unwind_function+0x1/0x4
[...]
Due to "end_of_stack_stop_unwind_function" recursing back to itself in the
EBP stackframe-walker. So avoid this type of recursion when walking the
stack .
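A sketch of the guard in the EBP frame walker (frame_start and
valid_frame are illustrative placeholders):

unsigned long ebp = frame_start, new_ebp;

while (valid_frame(ebp)) {                      /* placeholder check */
        print_symbol("%s\n", *(unsigned long *)(ebp + 4)); /* return addr */
        new_ebp = *(unsigned long *)ebp;        /* saved caller frame */
        if (new_ebp <= ebp)                     /* frame points at itself
                                                   or backwards: stop */
                break;
        ebp = new_ebp;
}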
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
When multiple updates matching a given CPU are found in the update file, the
action taken by the microcode update driver was inappropriate:
- when lower-revision microcode was found before a matching or
higher-revision one, the driver would needlessly complain that it would
not downgrade the CPU
- when microcode matching the currently installed revision was found
before newer-revision code, no update would actually take place
To change this behavior, the driver now decides on possible updates and
issues messages only after the entire input has been parsed.
Additionally, this adds back (in different places, and conditional upon
a new module option) some messages removed by a previous patch.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: Tigran Aivazian <tigran_aivazian@symantec.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Only drm, framebuffer, mtrr parts + misc files here and there.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
- avoid an expensive modulo (integer division), which happened because
APM_MAX_EVENTS is 20 (not a power of 2); see the sketch after this list
- kill compiler warnings by initializing two variables
- add __read_mostly to some important static variables that are read
often (by the idle loop etc.)
- constify several structures
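The modulo replacement mentioned above is the usual wrap-by-compare (a
sketch; the index name is illustrative):

/* "index = (index + 1) % APM_MAX_EVENTS" becomes: */
if (++index >= APM_MAX_EVENTS)
        index = 0;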
Signed-off-by: Andreas Mohr <andi@lisas.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Use the x86 cache-bypassing copy instructions for copy_from_user().
Some performance data:
Total of GLOBAL_POWER_EVENTS (CPU cycle samples)
2.6.12.4.orig 1921587
2.6.12.4.nt 1599424
1599424/1921587=83.23% (16.77% reduction)
BSQ_CACHE_REFERENCE (L3 cache miss)
2.6.12.4.orig 57427
2.6.12.4.nt 20858
20858/57427=36.32% (63.7% reduction)
L3 cache miss reduction of __copy_from_user_ll
samples %
37408 65.1412 vmlinux __copy_from_user_ll
23 0.1103 vmlinux __copy_user_zeroing_intel_nocache
23/37408=0.061% (99.94% reduction)
Top 5 of 2.6.12.4.nt
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples % app name symbol name
128392 8.0274 vmlinux __copy_user_zeroing_intel_nocache
64206 4.0143 vmlinux journal_add_journal_head
59746 3.7355 vmlinux do_get_write_access
47674 2.9807 vmlinux journal_put_journal_head
46021 2.8774 vmlinux journal_dirty_metadata
pattern9-0-cpu4-0-09011728/summary.out
Counted BSQ_CACHE_REFERENCE events (cache references seen by the bus unit) with a unit mask of 0x3f (multiple flags) count 3000
samples % app name symbol name
69755 4.2861 vmlinux __copy_user_zeroing_intel_nocache
55685 3.4215 vmlinux journal_add_journal_head
52371 3.2179 vmlinux __find_get_block
45504 2.7960 vmlinux journal_put_journal_head
36005 2.2123 vmlinux journal_stop
pattern9-0-cpu4-0-09011744/summary.out
Counted BSQ_CACHE_REFERENCE events (cache references seen by the bus unit) with a unit mask of 0x200 (read 3rd level cache miss) count 3000
samples % app name symbol name
1147 5.4994 vmlinux journal_add_journal_head
881 4.2240 vmlinux journal_dirty_data
872 4.1809 vmlinux blk_rq_map_sg
734 3.5192 vmlinux journal_commit_transaction
617 2.9582 vmlinux radix_tree_delete
pattern9-0-cpu4-0-09011731/summary.out
iozone results:
original 2.6.12.4 CPU time = 207.768 sec
cache aware CPU time = 184.783 sec
(total of three runs)
184.783/207.768=88.94% (11.06% reduction)
original:
pattern9-0-cpu4-0-08191720/iozone.out: CPU Utilization: Wall time 45.997 CPU time 64.527 CPU utilization 140.28 %
pattern9-0-cpu4-0-08191741/iozone.out: CPU Utilization: Wall time 46.878 CPU time 71.933 CPU utilization 153.45 %
pattern9-0-cpu4-0-08191743/iozone.out: CPU Utilization: Wall time 45.152 CPU time 71.308 CPU utilization 157.93 %
cache aware:
pattern9-0-cpu4-0-09011728/iozone.out: CPU Utilization: Wall time 44.842 CPU time 62.465 CPU utilization 139.30 %
pattern9-0-cpu4-0-09011731/iozone.out: CPU Utilization: Wall time 44.718 CPU time 59.273 CPU utilization 132.55 %
pattern9-0-cpu4-0-09011744/iozone.out: CPU Utilization: Wall time 44.367 CPU time 63.045 CPU utilization 142.10 %
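For reference, the heart of a cache-bypassing copy is a non-temporal
store; a minimal standalone sketch (SSE2, length assumed to be a
multiple of 4) looks like:

#include <stddef.h>

static void nt_copy(void *dst, const void *src, size_t len)
{
        size_t i;

        /* movnti stores bypass the cache, so the copied data does not
           evict useful lines on its way to memory */
        for (i = 0; i < len; i += 4) {
                unsigned int v = *(const unsigned int *)((const char *)src + i);

                asm volatile("movnti %1, %0"
                             : "=m" (*(unsigned int *)((char *)dst + i))
                             : "r" (v));
        }
        asm volatile("sfence" ::: "memory");    /* order the NT stores */
}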
Signed-off-by: Hiro Yoshioka <hyoshiok@miraclelinux.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
sys_move_pages() support for 32bit (i386 plus x86_64 compat layer)
Add support for move_pages() on i386 and also add the compat functions
necessary to run 32-bit binaries on x86_64.
Add compat_sys_move_pages to the x86_64 32-bit compat layer. Note that
the layer was not up to date, so I added the missing pieces; I am not
sure this is done the right way.
[akpm@osdl.org: compile fix]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Consolidate the various arch-specific implementations of pxm_to_node() and
node_to_pxm() into a single generic version.
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@muc.de>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: "Brown, Len" <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq:
[CPUFREQ] Fix ondemand vs suspend deadlock
[CPUFREQ] Fix powernow-k8 SMP kernel on UP hardware bug.
[PATCH] redirect speedstep-centrino maintainer mail to cpufreq list
[CPUFREQ] correct powernow-k8 fid/vid masks for extended parts
[CPUFREQ] Clarify powernow-k8 cpu_family statements
I haven't really maintained this driver for a while, and I'm not
keeping up with the latest in Intel power management. I get a steady
stream of mail which I don't really do anything useful with; the
cpufreq list seems like a better destination, unless someone wants to
get the mail directly.
Also clean up a couple of ancient comments which don't really apply
anymore (as far as I know, nobody has ever damaged a CPU with this
driver).
Signed-off-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Dave Jones <davej@redhat.com>
On 15 Jun 2006 03:45:10 +0200, Andi Kleen wrote:
> Anyways I would say that if the BIOS can't get MCFG right then
> it's likely not been validated on that board and shouldn't be used.
According to Petr Vandrovec:
... "What is important (and checked) is address of MMCONFIG reported by MCFG
table... Unfortunately code does not bother with printing that address :-(
"Another problem is that code has hardcoded that MMCONFIG area is 256MB large.
Unfortunately for the code PCI specification allows any power of two between 2MB
and 256MB if vendor knows that such amount of busses (from 2 to 128) will be
sufficient for system. With notebook it is quite possible that not full 8 bits
are implemented for MMCONFIG bus number."
So here is a patch. Unfortunately my system still fails the test because
it doesn't reserve any part of the MMCONFIG area, but this may fix others.
Booted on x86_64, only compiled on i386. x86_64 still remaps the max area
(256MB) even though only 2MB is checked... but 2.6.16 had no check at all
so it is still better.
PCI: reduce size of x86 MMCONFIG reserved area check
1. Print the address of the MMCONFIG area when the test for that area
being reserved fails.
2. Only check if the first 2MB is reserved, as that is the minimum.
Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
This is a bit late (your patch was posted about a year ago), but a
co-worker of mine spotted part of the code that looks like a memory
leak. Looking at the code, it seems that pci_mmcfg_config should be
freed if MMCONFIG is above 4GB.
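A sketch of the intended fix (the 4GB condition is a placeholder for
the actual check):

/* bail out of MMCONFIG setup when the area is above 4GB, but free the
   table allocated while parsing MCFG instead of leaking it */
if (mmcfg_above_4gb) {                          /* placeholder */
        kfree(pci_mmcfg_config);
        pci_mmcfg_config = NULL;
        pci_mmcfg_config_num = 0;
        return;
}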
From: Konrad Rzeszutek <konradr@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
When a PCI device is disabled via pci_disable_device(), it's still
left decoding its BAR resource ranges even though its driver
will likely have released those regions (and may even have
unloaded). pci_enable_device() already explicitly enables
BAR resource decode for the device being enabled. This patch
disables resource decode for the PCI device being disabled,
making it symmetric with the enable call.
I saw this while doing something else, not because of a problem report.
Still, it seems to be the correct thing to do.
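A sketch of the symmetric disable (standard PCI_COMMAND accessors and
bits):

/* stop the device from decoding its I/O and memory ranges, mirroring
   what pci_enable_device() turned on */
u16 pci_command;

pci_read_config_word(dev, PCI_COMMAND, &pci_command);
if (pci_command & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) {
        pci_command &= ~(PCI_COMMAND_IO | PCI_COMMAND_MEMORY);
        pci_write_config_word(dev, PCI_COMMAND, pci_command);
}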
Signed-off-by: Rajesh Shah <rajesh.shah@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
The fid/vid masks for the extended parts are slightly incorrect and can
result in incorrect fid/vid codes being applied. No instances of this
problem have been reported in the field, but it could be a problem with
future parts.
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Dave Jones <davej@redhat.com>
This patch clarifies the meaning of the cpu_family if statements in the
hw pstate driver patch for powernow-k8.
Signed-off-by: Mark Langsdorf <mark.langsdorf@amd.com>
Signed-off-by: Dave Jones <davej@redhat.com>