linux/arch/x86/lib
Jiri Olsa 26afb7c661 x86, 64-bit: Fix copy_[to/from]_user() checks for the userspace address limit
As reported in BZ #30352:

  https://bugzilla.kernel.org/show_bug.cgi?id=30352

there's a kernel bug related to reading the last allowed page on x86_64.

The _copy_to_user() and _copy_from_user() functions use the following
check for the address limit:

  if (buf + size >= limit)
	fail();

while it should be more permissive:

  if (buf + size > limit)
	fail();

That's because size is the number of bytes being read from or
written to the buf address, the buf address itself included.
So the copy function will never actually touch the limit
address, even when "buf + size == limit".
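
To make the boundary case concrete, here is a small stand-alone sketch
(purely illustrative - the LIMIT value and the reject_old()/reject_new()
helpers are made up, this is not the kernel implementation) comparing the
two checks for a one-page buffer that ends exactly at the limit:

  #include <assert.h>

  #define LIMIT 0x7ffffffff000UL      /* made-up address limit */

  /* old check: rejects a copy that merely ends at LIMIT */
  static int reject_old(unsigned long buf, unsigned long size)
  {
          return buf + size >= LIMIT;
  }

  /* fixed check: rejects only a copy that would touch LIMIT itself */
  static int reject_new(unsigned long buf, unsigned long size)
  {
          return buf + size > LIMIT;
  }

  int main(void)
  {
          unsigned long last_page = LIMIT - 4096;

          /* a one-page copy touches bytes last_page .. LIMIT - 1 only */
          assert(reject_old(last_page, 4096) == 1);  /* wrongly rejected  */
          assert(reject_new(last_page, 4096) == 0);  /* correctly allowed */
          return 0;
  }

Both asserts hold, which is exactly the off-by-one difference the patch
removes.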

The following program fails to use the last page as a buffer
due to the wrong limit check:

 #include <stdio.h>
 #include <sys/mman.h>
 #include <sys/socket.h>
 #include <assert.h>

 #define PAGE_SIZE       (4096)
 #define LAST_PAGE       ((void*)(0x7fffffffe000))

 int main()
 {
        int fds[2], err;
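        /* map the last valid user page, right below the guard page */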
        void * ptr = mmap(LAST_PAGE, PAGE_SIZE, PROT_READ | PROT_WRITE,
                          MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
        assert(ptr == LAST_PAGE);
        err = socketpair(AF_LOCAL, SOCK_STREAM, 0, fds);
        assert(err == 0);
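        /* send() forces a copy_from_user() of the whole page */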
        err = send(fds[0], ptr, PAGE_SIZE, 0);
        perror("send");
        assert(err == PAGE_SIZE);
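        /* recv() forces a copy_to_user() back into the same page */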
        err = recv(fds[1], ptr, PAGE_SIZE, MSG_WAITALL);
        perror("recv");
        assert(err == PAGE_SIZE);
        return 0;
 }

The other place that checks the address limit is the access_ok()
function, which works properly. There is just a misleading comment
on the __range_not_ok() macro - which this patch fixes as well.

The last page of the user-space address range is a guard page;
Brian Gerst observed that this guard page exists because of an erratum
on K8 cpus (#121 Sequential Execution Across Non-Canonical Boundary
Causes Processor Hang).

However, the test code uses the last valid page before the guard page.
The bug is that, because of the off-by-one check, the last byte before
the guard page cannot be read or written; the guard page itself is left
in place.

This bug would normally not show up because the last page is
part of the process stack and never accessed via syscalls.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Acked-by: Brian Gerst <brgerst@gmail.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/1305210630-7136-1-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2011-05-18 12:49:00 +02:00
.gitignore x86: Gitignore: arch/x86/lib/inat-tables.c 2009-11-04 13:11:28 +01:00
atomic64_32.c x86-32: Rewrite 32-bit atomic64 functions in assembly 2010-02-25 20:47:30 -08:00
atomic64_386_32.S x86: Use {push,pop}_cfi in more places 2011-02-28 18:06:22 +01:00
atomic64_cx8_32.S x86: Use {push,pop}_cfi in more places 2011-02-28 18:06:22 +01:00
cache-smp.c x86, lib: Add wbinvd smp helpers 2010-01-22 16:05:42 -08:00
checksum_32.S x86: Use {push,pop}_cfi in more places 2011-02-28 18:06:22 +01:00
clear_page_64.S x86, mem: clear_page_64.S: Support clear_page() with enhanced REP MOVSB/STOSB 2011-05-17 15:40:27 -07:00
cmpxchg8b_emu.S x86: Provide an alternative() based cmpxchg64() 2009-09-30 22:55:59 +02:00
cmpxchg16b_emu.S percpu: Omit segment prefix in the UP case for cmpxchg_double 2011-03-27 19:25:36 -07:00
cmpxchg.c x86, asm: Merge cmpxchg_486_u64() and cmpxchg8b_emu() 2010-07-28 17:05:11 -07:00
copy_page_64.S x86, alternatives: Use 16-bit numbers for cpufeature index 2010-07-07 10:36:28 -07:00
copy_user_64.S x86, 64-bit: Fix copy_[to/from]_user() checks for the userspace address limit 2011-05-18 12:49:00 +02:00
copy_user_nocache_64.S
csum-copy_64.S x86: Clean up csum-copy_64.S a bit 2011-03-18 10:44:26 +01:00
csum-partial_64.c x86: Fix common misspellings 2011-03-18 10:39:30 +01:00
csum-wrappers_64.c
delay.c x86: udelay: Use this_cpu_read to avoid address calculation 2011-01-04 06:08:55 +01:00
getuser.S
inat.c x86: AVX instruction set decoder support 2009-10-29 08:47:46 +01:00
insn.c x86: AVX instruction set decoder support 2009-10-29 08:47:46 +01:00
iomap_copy_64.S
Makefile percpu, x86: Add arch-specific this_cpu_cmpxchg_double() support 2011-02-28 11:20:49 +01:00
memcpy_32.c x86, mem: Optimize memmove for small size and unaligned cases 2010-09-24 18:57:11 -07:00
memcpy_64.S x86, mem: memcpy_64.S: Optimize memcpy by enhanced REP MOVSB/STOSB 2011-05-17 15:40:29 -07:00
memmove_64.S x86, mem: memmove_64.S: Optimize memmove by enhanced REP MOVSB/STOSB 2011-05-17 15:40:30 -07:00
memset_64.S x86, mem: memset_64.S: Optimize memset by enhanced REP MOVSB/STOSB 2011-05-17 15:40:31 -07:00
mmx_32.c
msr-reg-export.c x86, msr: change msr-reg.o to obj-y, and export its symbols 2009-09-04 10:00:09 -07:00
msr-reg.S x86, msr: Fix msr-reg.S compilation with gas 2.16.1, on 32-bit too 2009-09-03 21:26:34 +02:00
msr-smp.c x86, msr: msrs_alloc/free for CONFIG_SMP=n 2009-12-16 15:36:32 -08:00
msr.c x86, msr: msrs_alloc/free for CONFIG_SMP=n 2009-12-16 15:36:32 -08:00
putuser.S
rwlock_64.S
rwsem_64.S x86-64: Add CFI annotations to lib/rwsem_64.S 2011-02-28 18:06:21 +01:00
semaphore_32.S x86: Fix a bogus unwind annotation in lib/semaphore_32.S 2011-03-02 08:16:44 +01:00
string_32.c
strstr_32.c
thunk_32.S x86: Remove unused bits from lib/thunk_*.S 2011-02-28 18:06:22 +01:00
thunk_64.S x86: Remove unused bits from lib/thunk_*.S 2011-02-28 18:06:22 +01:00
usercopy_32.c x86: Turn the copy_from_user check into an (optional) compile time warning 2009-10-01 11:31:04 +02:00
usercopy_64.c
x86-opcode-map.txt x86: Add Intel FMA instructions to x86 opcode map 2009-10-29 08:47:47 +01:00