commit ce7eebe5c3deef8e19c177c24ee75843256e69ca upstream.
The compilation only looks for linux/magic.h from the default include
paths, which do not include the source tree. This results in a build
error if linux/magic.h is not available or not installed.
For example, this build error occurs on CentOS 5.
$ make -C tools/lib/lk V=1
[...]
gcc -o debugfs.o -c -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6
-D_FORTIFY_SOURCE=2 -Wbad-function-cast -Wdeclaration-after-statement
-Wformat-security -Wformat-y2k -Winit-self -Wmissing-declarations
-Wmissing-prototypes -Wnested-externs -Wno-system-headers
-Wold-style-definition -Wpacked -Wredundant-decls -Wshadow
-Wstrict-aliasing=3 -Wstrict-prototypes -Wswitch-default -Wswitch-enum
-Wundef -Wwrite-strings -Wformat -fPIC -D_LARGEFILE64_SOURCE
-D_FILE_OFFSET_BITS=64 debugfs.c
debugfs.c:8:25: error: linux/magic.h: No such file or directory
The only symbol from linux/magic.h needed by debugfs.c is DEBUGFS_MAGIC,
and that is already defined in debugfs.h. linux/magic.h isn't providing
any extra symbols and the include can simply be dropped. This is similar to the approach taken by
perf, which has its own magic.h wrapper at
tools/perf/util/include/linux/magic.h
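For reference, debugfs.h can carry the same fallback definition as
include/uapi/linux/magic.h, so dropping the include loses nothing. A
minimal illustration (not the actual patch):
	#ifndef DEBUGFS_MAGIC
	#define DEBUGFS_MAGIC	0x64626720	/* same value as linux/magic.h */
	#endif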
Signed-off-by: Vinson Lee <vlee@twitter.com>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Vinson Lee <vlee@freedesktop.org>
Link: http://lkml.kernel.org/r/1379546200-17028-1-git-send-email-vlee@freedesktop.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c0f04d88e46d14de51f4baebb6efafb7d59e9f96 upstream.
In writeback mode, when we get a cache flush we need to make sure we
issue a flush to the backing device.
The code for sending down an extra flush was wrong - by cloning the bio
we were probably getting flags that didn't make sense for a bare flush,
and also the old code was firing for FUA bios, for which we don't need
to send a flush to the backing device.
This was causing data corruption somehow - the mechanism was never
determined, but this patch fixes it for the users that were seeing it.
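A rough sketch of the corrected behavior (the variable and callback
names here are hypothetical, not the actual bcache diff): forward a
flush only for REQ_FLUSH bios, and build a bare, freshly allocated
flush instead of cloning the incoming bio and inheriting its flags.
	if (bio->bi_rw & REQ_FLUSH) {
		/* bare flush, no cloned flags or payload; FUA-only
		 * bios no longer take this path */
		struct bio *flush = bio_alloc(GFP_NOIO, 0);

		flush->bi_bdev    = backing_bdev;	/* hypothetical */
		flush->bi_rw      = WRITE_FLUSH;
		flush->bi_end_io  = flush_endio;	/* hypothetical */
		flush->bi_private = cl;			/* hypothetical */
		submit_bio(WRITE_FLUSH, flush);
	}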
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 84786438ed17978d72eeced580ab757e4da8830b upstream.
btree_sort_fixup() was overly clever, because it was trying to avoid
pulling a key off the btree iterator in more than one place.
This led to a really obscure bug where we'd break early from the loop in
btree_sort_fixup() if the current key overlapped with keys in more than
one older set, and the next key it overlapped with was zero size.
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a698e08c82dfb9771e0bac12c7337c706d729b6d upstream.
GFP_NOIO means we could be getting called recursively - mca_alloc() ->
mca_data_alloc() - definitely can't use mutex_lock(bucket_lock) then.
Whoops.
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 1394d6761b6e9e15ee7c632a6d48791188727b40 upstream.
bch_journal_meta() was missing the flush to make the journal write
actually go down (instead of waiting up to journal_delay_ms)...
Whoops
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c2a4f3183a1248f615a695fbd8905da55ad11bba upstream.
Background writeback works by scanning the btree for dirty data and
adding those keys into a fixed size buffer, then for each dirty key in
the keybuf writing it to the backing device.
When read_dirty() finishes and it's time to scan for more dirty data, we
need to wait for the outstanding writeback IO to finish - those writes still
take up slots in the keybuf (so that foreground writes can check for
them to avoid races). Without that wait, we'd continually rescan even
though we'd be able to add at most a key or two to the keybuf, and that
takes locks that starve foreground IO. Doh.
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit c426c4fd46f709ade2bddd51c5738729c7ae1db5 upstream.
The journal replay code didn't handle this case, causing it to go into
an infinite loop...
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit aee6f1cfff3ce240eb4b43b41ca466b907acbd2e upstream.
sysfs attributes with unusual characters have crappy failure modes
in Squeeze (udev 164); later versions of udev are unaffected.
This should make these characters more unusual.
Signed-off-by: Gabriel de Perthuis <g2p.code@gmail.com>
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6d9d21e35fbfa2934339e96934f862d118abac23 upstream.
That switch statement was obviously wrong, leading to some sort of weird
spinning on rare occasion with discards enabled...
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 49475555848d396a0c78fb2f8ecceb3f3f263ef1 upstream.
The superblock lock that replaced the removed (un)lock_super() calls was left
uninitialized for the Seventh Edition UNIX filesystem in the following commit (3.7):
c07cb01 sysv: drop lock/unlock super
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2f6cf0de0281d210061ce976f2d42d246adc75bb upstream.
The memcpy() in bio_copy_data() was using the wrong offset vars, leading
to data corruption in weird unusual setups.
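The shape of the fix, roughly (a sketch, not the verbatim diff): the
copy must use the running per-segment offsets rather than the bvecs'
base offsets.
	/* wrong: ignores progress already made within the current bvecs */
	memcpy(dst_p + dst_bv->bv_offset, src_p + src_bv->bv_offset, bytes);

	/* right: the running offsets track partial progress */
	memcpy(dst_p + dst_offset, src_p + src_offset, bytes);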
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2cf55125c64d64cc106e204d53b107094762dfdf upstream.
This fixes a serious bug affecting all hash types with a net element -
specifically, if a CIDR value is deleted such that none of the same size
exist any more, all larger (less-specific) values will then fail to
match. Adding back any prefix with a CIDR equal to or more specific than
the one deleted will fix it.
Steps to reproduce:
ipset -N test hash:net
ipset -A test 1.1.0.0/16
ipset -A test 2.2.2.0/24
ipset -T test 1.1.1.1 #1.1.1.1 IS in set
ipset -D test 2.2.2.0/24
ipset -T test 1.1.1.1 #1.1.1.1 IS NOT in set
This is due to the fact that the nets counter was unconditionally
decremented prior to the iteration that shifts up the entries. Now, we
first check if there is a succeeding entry and, if not, decrement it and
return. Otherwise, we proceed to iterate and then zero the last element,
which, in most cases, will already be zero.
Signed-off-by: Oliver Smith <oliver@8.c.9.b.0.7.4.0.1.0.0.2.ip6.arpa>
Signed-off-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d4a516560fc96a9d486a9939bcb567e3fdce8f49 upstream.
In theory the linux cred in a gssproxy reply can include up to
NGROUPS_MAX gids, 256K of data. In the common case we expect it to be
shorter. So do as the nfsv3 ACL code does and let the xdr code allocate
the pages as they come in, instead of allocating a lot of pages that
won't typically be used.
Tested-by: Simo Sorce <simo@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 9dfd87da1aeb0fd364167ad199f40fe96a6a87be upstream.
The reply to a gssproxy can include up to NGROUPS_MAX gid's, which will
take up more than a page. We therefore need to allocate an array of
pages to hold the reply instead of trying to allocate a single huge
buffer.
Tested-by: Simo Sorce <simo@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 6a36978e6931e6601be586eb313375335f2cfaa3 upstream.
The encoding of linux creds is a bit confusing.
Also: I think in practice it doesn't really matter whether we treat any
of these things as signed or unsigned, but unsigned seems more
straightforward: uid_t/gid_t are unsigned and it simplifies the ngroups
overflow check.
Tested-by: Simo Sorce <simo@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 778e512bb1d3315c6b55832248cd30c566c081d7 upstream.
We can use the normal coding infrastructure here.
Two minor behavior changes:
- we're assuming no wasted space at the end of the linux cred.
That seems to match gss-proxy's behavior, and I can't see why
it would need to do differently in the future.
- NGROUPS_MAX check added: note groups_alloc doesn't do this,
this is the caller's responsibility.
Tested-by: Simo Sorce <simo@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f3cff25f05f2ac29b2ee355e611b0657482f6f1d upstream.
'samples' is a 64-bit operand, but do_div()'s second parameter is 32-bit.
do_div() silently truncates the high 32 bits, so the calculated result
is invalid.
If the low 32 bits of 'samples' happen to be zero, the truncated divisor
is zero and do_div() produces a kernel crash.
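A minimal illustration of the distinction (not the cfq patch itself):
do_div() takes a 32-bit divisor and divides its 64-bit dividend in
place, so a 64-bit divisor such as 'samples' has to go through
div64_u64() instead.
	#include <linux/math64.h>

	static u64 mean_of(u64 total, u64 samples)
	{
		/* do_div(total, samples) would truncate 'samples' to 32
		 * bits and crash if its low 32 bits happen to be zero */
		return div64_u64(total, samples);	/* 64-bit / 64-bit */
	}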
Signed-off-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Jonghwan Choi <jhbird.choi@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 89365e6c9ad4c0e090e4c6a4b67a3ce319381d89 upstream.
Need to check for /dev/zero.
Most likely more strings are missing too.
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1366848182-30449-1-git-send-email-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Vinson Lee <vlee@freedesktop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7cb2ef56e6a8b7b368b2e883a0a47d02fed66911 upstream.
I am working with a tool that simulates oracle database I/O workload.
This tool (orion to be specific -
<http://docs.oracle.com/cd/E11882_01/server.112/e16638/iodesign.htm#autoId24>)
allocates hugetlbfs pages using shmget() with SHM_HUGETLB flag. It then
does aio into these pages from flash disks using various common block
sizes used by database. I am looking at performance with two of the most
common block sizes - 1M and 64K. aio performance with these two block
sizes plunged after Transparent HugePages was introduced in the kernel.
Here are performance numbers:
              pre-THP        2.6.39         3.11-rc5
1M read       8384 MB/s      5629 MB/s      6501 MB/s
64K read      7867 MB/s      4576 MB/s      4251 MB/s
I have narrowed the performance impact down to the overheads introduced by
THP in __get_page_tail() and put_compound_page() routines. perf top shows
>40% of cycles being spent in these two routines. Every time direct I/O
to hugetlbfs pages starts, kernel calls get_page() to grab a reference to
the pages and calls put_page() when I/O completes to put the reference
away. THP introduced a significant amount of locking overhead to get_page()
and put_page() when dealing with compound pages because hugepages can be
split underneath get_page() and put_page(). It added this overhead
irrespective of whether it is dealing with hugetlbfs pages or transparent
hugepages. This resulted in 20%-45% drop in aio performance when using
hugetlbfs pages.
Since hugetlbfs pages can not be split, there is no reason to go through
all the locking overhead for these pages from what I can see. I added
code to __get_page_tail() and put_compound_page() to bypass all the
locking code when working with hugetlbfs pages. This improved performance
significantly. Performance numbers with this patch:
              pre-THP        3.11-rc5       3.11-rc5 + Patch
1M read       8384 MB/s      6501 MB/s      8371 MB/s
64K read      7867 MB/s      4251 MB/s      6510 MB/s
Performance with 64K read is still lower than what it was before THP, but
still a 53% improvement. It does mean there is more work to be done but I
will take a 53% improvement for now.
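A simplified sketch of the bypass being described (illustrative, not
the exact upstream diff): hugetlbfs pages can never be split, so their
compound-page refcounting can skip the THP split-protection locking and
operate on the head page directly.
	static void put_compound_page_sketch(struct page *page)
	{
		struct page *head = compound_head(page);

		if (PageHuge(head)) {	/* hugetlbfs: no split can race */
			if (put_page_testzero(head)) {
				compound_page_dtor *dtor =
					get_compound_page_dtor(head);
				(*dtor)(head);	/* last reference dropped */
			}
			return;
		}
		/* ... THP path, protected by compound_lock() ... */
	}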
Please take a look at the following patch and let me know if it looks
reasonable.
[akpm@linux-foundation.org: tweak comments]
Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com>
Cc: Pravin B Shelar <pshelar@nicira.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 8ac1c8d5deba65513b6a82c35e89e73996c8e0d6 upstream.
After commit 829199197a ("kernel/audit.c: avoid negative sleep
durations") audit emitters will block forever if the userspace daemon cannot
handle the backlog.
After the timeout the waiting loop turns into a busy loop and runs until
the daemon dies or returns to work. This is a minimal patch for that
bug.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Richard Guy Briggs <rgb@redhat.com>
Cc: Eric Paris <eparis@redhat.com>
Cc: Chuck Anderson <chuck.anderson@oracle.com>
Cc: Dan Duval <dan.duval@oracle.com>
Cc: Dave Kleikamp <dave.kleikamp@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e729eac6f65e11c5f03b09adcc84bd5bcb230467 upstream.
Refuse RW mount of a udf filesystem. So far we just silently changed it
to an RO mount, but when the media is writeable, the block layer won't notice
this change and thus will think the device is used RW and will block the eject
button of the drive. That is unexpected by users because for
non-writeable media the eject button works just fine.
The userspace mount(8) command handles this just fine and retries mounting
with MS_RDONLY set, so userspace shouldn't see any regression. Plus, any
tool mounting udf is likely confronted with the case of read-only
media, where the block layer already refuses to mount the filesystem without
MS_RDONLY set, so our behavior shouldn't be anything new for it.
Reported-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit d759bfa4e7919b89357de50a2e23817079889195 upstream.
Change all functions used in filesystem discovery during mount to use
standard kernel return values - -errno on error, 0 on success - instead
of 1 on failure and 0 on success. This allows us to pass an error number
(not just failure / success) so we can abort device scanning earlier
in case of errors like EIO or ENOMEM. Also we will be able to return
EROFS in case a writeable mount is requested but writing isn't supported.
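A hypothetical illustration of the convention (the function and its
arguments are made up, not the actual udf code): callers can now tell
hard errors apart from a plain "not found" and abort the scan early.
	static int probe_volume(struct buffer_head *bh, bool want_rw,
				bool media_writeable)
	{
		if (!bh)
			return -EIO;	/* read error: stop scanning early */
		if (want_rw && !media_writeable)
			return -EROFS;	/* RW mount requested, media can't */
		return 0;		/* success */
	}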
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5077ac3b8108007f4a2b4589f2d373cf55453206 upstream.
As the USB/PCI/MEDIA_SUPPORT dependencies can be tristate, we can't
simply make the bool menu depend on them. Everything below
the menu should also depend on them; otherwise, we risk allowing
them to be built with 'y' while only 'm' would be supported.
So, add an IF just before everything below, in order to avoid
such risks.
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a0f9354b1a319cb29c331bfd2e5a15d7f9b87fa4 upstream.
(a.k.a. Kconfig bool depending on a tristate considered harmful)
Fix various build errors when CONFIG_USB=m and media USB drivers
are builtin. In this case, CONFIG_USB_ZR364XX=y,
CONFIG_VIDEO_PVRUSB2=y, and CONFIG_VIDEO_STK1160=y.
This is caused by (from drivers/media/usb/Kconfig):
menuconfig MEDIA_USB_SUPPORT
	bool "Media USB Adapters"
	depends on USB && MEDIA_SUPPORT
	           =m     =y
so MEDIA_USB_SUPPORT=y and all following Kconfig 'source' lines
are included. By adding an "if USB" guard around most of this file,
the needed dependencies are enforced.
drivers/built-in.o: In function `zr364xx_start_readpipe':
zr364xx.c:(.text+0xc726a): undefined reference to `usb_alloc_urb'
zr364xx.c:(.text+0xc72bb): undefined reference to `usb_submit_urb'
drivers/built-in.o: In function `zr364xx_stop_readpipe':
zr364xx.c:(.text+0xc72fd): undefined reference to `usb_kill_urb'
zr364xx.c:(.text+0xc7309): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `read_pipe_completion':
zr364xx.c:(.text+0xc7acc): undefined reference to `usb_submit_urb'
drivers/built-in.o: In function `send_control_msg.constprop.12':
zr364xx.c:(.text+0xc7d2f): undefined reference to `usb_control_msg'
drivers/built-in.o: In function `pvr2_ctl_timeout':
pvrusb2-hdw.c:(.text+0xcadb6): undefined reference to `usb_unlink_urb'
pvrusb2-hdw.c:(.text+0xcadcb): undefined reference to `usb_unlink_urb'
drivers/built-in.o: In function `pvr2_hdw_create':
(.text+0xcc42c): undefined reference to `usb_alloc_urb'
drivers/built-in.o: In function `pvr2_hdw_create':
(.text+0xcc448): undefined reference to `usb_alloc_urb'
drivers/built-in.o: In function `pvr2_hdw_create':
(.text+0xcc5f9): undefined reference to `usb_set_interface'
drivers/built-in.o: In function `pvr2_hdw_create':
(.text+0xcc65a): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `pvr2_hdw_create':
(.text+0xcc666): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `pvr2_send_request_ex.part.22':
pvrusb2-hdw.c:(.text+0xccbe3): undefined reference to `usb_submit_urb'
pvrusb2-hdw.c:(.text+0xccc83): undefined reference to `usb_submit_urb'
drivers/built-in.o: In function `pvr2_hdw_remove_usb_stuff.part.25':
pvrusb2-hdw.c:(.text+0xcd3f9): undefined reference to `usb_kill_urb'
pvrusb2-hdw.c:(.text+0xcd405): undefined reference to `usb_free_urb'
pvrusb2-hdw.c:(.text+0xcd421): undefined reference to `usb_kill_urb'
pvrusb2-hdw.c:(.text+0xcd42d): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `pvr2_hdw_device_reset':
(.text+0xcd658): undefined reference to `usb_lock_device_for_reset'
drivers/built-in.o: In function `pvr2_hdw_device_reset':
(.text+0xcd664): undefined reference to `usb_reset_device'
drivers/built-in.o: In function `pvr2_hdw_cpureset_assert':
(.text+0xcd6f9): undefined reference to `usb_control_msg'
drivers/built-in.o: In function `pvr2_hdw_cpufw_set_enabled':
(.text+0xcd84e): undefined reference to `usb_control_msg'
drivers/built-in.o: In function `pvr2_upload_firmware1':
pvrusb2-hdw.c:(.text+0xcda47): undefined reference to `usb_clear_halt'
pvrusb2-hdw.c:(.text+0xcdb04): undefined reference to `usb_control_msg'
drivers/built-in.o: In function `pvr2_upload_firmware2':
(.text+0xce7dc): undefined reference to `usb_bulk_msg'
drivers/built-in.o: In function `pvr2_stream_buffer_count':
pvrusb2-io.c:(.text+0xd2e05): undefined reference to `usb_alloc_urb'
pvrusb2-io.c:(.text+0xd2e5b): undefined reference to `usb_kill_urb'
pvrusb2-io.c:(.text+0xd2e9f): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `pvr2_stream_internal_flush':
pvrusb2-io.c:(.text+0xd2f9b): undefined reference to `usb_kill_urb'
drivers/built-in.o: In function `pvr2_buffer_queue':
(.text+0xd3328): undefined reference to `usb_kill_urb'
drivers/built-in.o: In function `pvr2_buffer_queue':
(.text+0xd33ea): undefined reference to `usb_submit_urb'
drivers/built-in.o: In function `stk1160_read_reg':
(.text+0xd3efa): undefined reference to `usb_control_msg'
drivers/built-in.o: In function `stk1160_write_reg':
(.text+0xd3f4f): undefined reference to `usb_control_msg'
drivers/built-in.o: In function `stop_streaming':
stk1160-v4l.c:(.text+0xd4997): undefined reference to `usb_set_interface'
drivers/built-in.o: In function `start_streaming':
stk1160-v4l.c:(.text+0xd4a9f): undefined reference to `usb_set_interface'
stk1160-v4l.c:(.text+0xd4afa): undefined reference to `usb_submit_urb'
stk1160-v4l.c:(.text+0xd4ba3): undefined reference to `usb_set_interface'
drivers/built-in.o: In function `stk1160_isoc_irq':
stk1160-video.c:(.text+0xd509b): undefined reference to `usb_submit_urb'
drivers/built-in.o: In function `stk1160_cancel_isoc':
(.text+0xd50ef): undefined reference to `usb_kill_urb'
drivers/built-in.o: In function `stk1160_free_isoc':
(.text+0xd5155): undefined reference to `usb_free_coherent'
drivers/built-in.o: In function `stk1160_free_isoc':
(.text+0xd515d): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `stk1160_alloc_isoc':
(.text+0xd5278): undefined reference to `usb_alloc_urb'
drivers/built-in.o: In function `stk1160_alloc_isoc':
(.text+0xd52c2): undefined reference to `usb_alloc_coherent'
drivers/built-in.o: In function `stk1160_alloc_isoc':
(.text+0xd53c4): undefined reference to `usb_free_urb'
drivers/built-in.o: In function `zr364xx_driver_init':
zr364xx.c:(.init.text+0x463e): undefined reference to `usb_register_driver'
drivers/built-in.o: In function `pvr_init':
pvrusb2-main.c:(.init.text+0x4662): undefined reference to `usb_register_driver'
drivers/built-in.o: In function `stk1160_usb_driver_init':
stk1160-core.c:(.init.text+0x467d): undefined reference to `usb_register_driver'
drivers/built-in.o: In function `zr364xx_driver_exit':
zr364xx.c:(.exit.text+0x1377): undefined reference to `usb_deregister'
drivers/built-in.o: In function `pvr_exit':
pvrusb2-main.c:(.exit.text+0x1389): undefined reference to `usb_deregister'
drivers/built-in.o: In function `stk1160_usb_driver_exit':
stk1160-core.c:(.exit.text+0x13a0): undefined reference to `usb_deregister'
Suggested-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4f66c59922cbcda14c9e103e6c7f4ee616360d43 upstream.
Putting everything into VRAM seems to help.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 855f5f1d882a34e4e9dd27b299737cd3508a5624 upstream.
We were using the wrong set_property callback so we always
ended up with Full scaling even if something else (Center or
Full aspect) was selected.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 91f3a6aaf280294b07c05dfe606e6c27b7ba3c72 upstream.
The OUTPUT_ENABLE action jumps past the point in the code where
the data_offset is set on certain rs780 cards. This worked
previously because the OUTPUT_ENABLE action is always called
immediately after the ENABLE action so the data_offset remained
set. In 6f8bbaf568
(drm/radeon/atom: initialize more atom interpretor elements to 0),
we explicitly reset data_offset to 0 between atom calls which then
caused this to fail. The fix is to just skip calling the
OUTPUT_ENABLE action on the problematic chipsets. The ENABLE
action does the same thing and more. Ultimately, we could
probably drop the OUTPUT_ENABLE action all together on DCE3
asics.
fixes:
https://bugzilla.kernel.org/show_bug.cgi?id=60791
v2: only rs880 seems to be affected
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit f4e1a4d3ecbb9e42bdf8e7869ee8a4ebfa27fb20 upstream.
My commit
commit c630ccf1a1
Author: Stanislaw Gruszka <stf_xl@wp.pl>
Date: Sat Mar 16 19:19:46 2013 +0100
rt2800: rearrange bbp/rfcsr initialization
makes Maxim's machine freeze when trying to start the wireless device.
The initialization order and the sending of the MCU_BOOT_SIGNAL request,
changed in the above commit, are important. Doing things incorrectly causes
PCIe bus problems, which can freeze the machine.
This patch changes the initialization sequence to match what the vendor
driver does: function NICInitializeAsic() from
2011_1007_RT5390_RT5392_Linux_STA_V2.5.0.3_DPO (PCI devices) and
DPO_RT5572_LinuxSTA_2.6.1.3_20121022 (according to Mediatek, the latest driver
for RT8070/RT3070/RT3370/RT3572/RT5370/RT5372/RT5572 USB devices).
It fixes freezes on Maxim's system.
Resolve:
https://bugzilla.redhat.com/show_bug.cgi?id=1000679
Reported-and-tested-by: Maxim Polyakov <polyakov@dexmalabs.com>
Bisected-by: Igor Gnatenko <i.gnatenko.brain@gmail.com>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fb93df1c2d8b3b1fb16d6ee9e32554e0c038815d upstream.
The table has the following format:
typedef struct _ATOM_SRC_DST_TABLE_FOR_ONE_OBJECT  //usSrcDstTableOffset pointing to this structure
{
  UCHAR               ucNumberOfSrc;
  USHORT              usSrcObjectID[1];
  UCHAR               ucNumberOfDst;
  USHORT              usDstObjectID[1];
}ATOM_SRC_DST_TABLE_FOR_ONE_OBJECT;
usSrcObjectID[] and usDstObjectID[] are variably sized, so we
can't access them directly. Use pointers and update the offset
appropriately when accessing the Dst members.
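A minimal sketch of the access pattern described (the helper and the
packing assumptions are illustrative, not the radeon code):
	struct src_dst_table {
		u8  num_src;
		u16 src_id[1];	/* really num_src entries */
		/* u8  num_dst;  follows src_id[num_src] */
		/* u16 dst_id[]; follows num_dst         */
	} __packed;

	static const u8 *dst_count_ptr(const struct src_dst_table *t)
	{
		/* step past the variably sized Src array byte-wise */
		return (const u8 *)t->src_id + t->num_src * sizeof(u16);
	}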
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit acf88deb8ddbb73acd1c3fa32fde51af9153227f upstream.
Setting MC_MISC_CNTL.GART_INDEX_REG_EN causes hangs on
some boards on resume. The systems seem to work fine
without touching this bit so leave it as is.
v2: read-modify-write the GART_INDEX_REG_EN bit.
I suspect the problem is that we are losing the other
settings in the register.
fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=52952
Reported-by: Ondrej Zary <linux@rainbow-software.org>
Tested-by: Daniel Tobias <dan.g.tob@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 290d24576ccf1aa0373d2185cedfe262d0d4952a upstream.
We need to allocate line buffer to each display when
setting up the watermarks. Failure to do so can lead
to a blank screen. This fixes blank screen problems
on dce6 asics.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=64850
Based on an initial fix from:
Jay Cornwall <jay.cornwall@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0b31e02363b0db4e7931561bc6c141436e729d9f upstream.
We need to allocate line buffer to each display when
setting up the watermarks. Failure to do so can lead
to a blank screen. This fixes blank screen problems
on dce4.1/5 asics.
Based on an initial fix from:
Jay Cornwall <jay.cornwall@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e5b9e7503eb1f4884efa3b321d3cc47806779202 upstream.
Also add a new RADEON_INFO query to check that CP DMA packets are
supported on the compute ring.
CP DMA has been supported since the 3.8 kernel, but due to an oversight
we forgot to teach the CS checker that the CP DMA packet was legal for
the compute ring on Southern Islands GPUs.
This patch fixes a bug where the radeon driver will incorrectly reject a legal
CP DMA packet from user space. I would like to have the patch
backported to stable so that we don't have to require Mesa users to use a
bleeding edge kernel in order to take advantage of this feature which
is already present in the stable kernels (3.8 and newer).
v2:
- Don't bump kms version, so this patch can be backported to stable
kernels.
Signed-off-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 4543eda52113d1e2cc0e9bf416f79597e6ef1ec7 upstream.
Need to swap the data fetched over i2c properly. This
is the same fix as the endian fix for aux channel
transactions.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 95663948ba22a4be8b99acd67fbf83e86ddffba4 upstream.
If the LCD table contains an EDID record, properly account
for the edid size when walking through the records.
This should fix error messages about unknown LCD records.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5087f51da805f53cba7366f70d596e7bde2a5486 upstream.
Commit ea9197cc32 effectively enabled the
use of an improved DAC detection code, but introduced a regression on
the original nv50 chipset, causing a ghost monitor to be detected.
v2 (Ben Skeggs): the offending line was likely a thinko, removed it for
all chipsets (tested nv50 and nve6 to cover entire range) and added
some additional debugging.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67382
Tested-by: Martin Peres <martin.peres@labri.fr>
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 182b17c8dc4e83aab000ce86587b6810e515da87 upstream.
After a vmalloc failure in ttm_dma_tt_alloc_page_directory(),
ttm_dma_tt_init() will call ttm_tt_destroy() to cleanup, and end up
inside the driver's unpopulate() hook when populate() has never yet
been called.
On nouveau, the first issue to be hit because of this is that
dma_address[] may be a NULL pointer. After working around this,
ttm_pool_unpopulate() may potentially hit the same issue with
the pages[] array.
It seems to make more sense to avoid calling unpopulate on already
unpopulated TTMs than to add checks to all the implementations.
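A sketch of the idea (simplified, not the verbatim patch): guard the
common teardown path instead of teaching every driver hook to cope.
	/* in the generic teardown path */
	if (ttm->state == tt_unpopulated)
		return;		/* never populated: nothing to undo */
	ttm->bdev->driver->ttm_tt_unpopulate(ttm);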
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 2e8378136f28bea960cec643d3fa5d843c9049ec upstream.
When porting from UMS I mistyped this from the wrong place; AST noticed
and pointed it out, so we should fix it to be like the X.org driver.
Reported-by: Y.C. Chen <yc_chen@aspeedtech.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 101b96f32956ee99bf1468afaf572b88cda9f88b upstream.
DRM_IOCTL_MODE_GETFB is used to retrieve information about a given
framebuffer ID. It is a read-only helper and was thus declassified for
unprivileged access in:
commit a14b1b4247
Author: Mandeep Singh Baines <mandeep.baines@gmail.com>
Date: Fri Jan 20 12:11:16 2012 -0800
drm: remove master fd restriction on mode setting getters
However, alongside width, height and stride information,
DRM_IOCTL_MODE_GETFB also passes back a handle to the underlying buffer of
the framebuffer. This handle allows users to mmap() it and read or write
into it. Obviously, this should be restricted to DRM-Master.
With the current setup, *any* process with access to /dev/dri/card0 (which
means any process with access to hardware-accelerated rendering) can
access the current screen framebuffer and modify it ad libitum.
For backwards-compatibility reasons we want to keep the
DRM_IOCTL_MODE_GETFB call unprivileged. Besides, it provides quite useful
information regarding screen setup. So we simply test whether the caller
is the current DRM-Master and if not, we return 0 as handle, which is
always invalid. A following DRM_IOCTL_GEM_CLOSE on this handle will fail
with EINVAL, but we accept this. Users shouldn't test for errors during
GEM_CLOSE, anyway. And it is still better than a failing MODE_GETFB call.
v2: add capable(CAP_SYS_ADMIN) check for compatibility with i-g-t
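In sketch form, the check amounts to (simplified, close to what is
described above):
	if (file_priv->is_master || capable(CAP_SYS_ADMIN))
		ret = fb->funcs->create_handle(fb, file_priv, &r->handle);
	else
		r->handle = 0;	/* never a valid GEM handle */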
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 17e1df07df0fbc77696a1e1b6ccf9f2e5af70e40 upstream.
My g33 here seems to be shockingly good at hitting them all. This time
around kms_flip/flip-vs-panning-vs-hang blows up:
intel_crtc_wait_for_pending_flips correctly checks for gpu hangs and
if a gpu hang is pending aborts the wait for outstanding flips so that
the setcrtc call will succeed and release the crtc mutex. And the gpu
hang handler needs that lock in intel_display_handle_reset to be able
to complete outstanding flips.
The problem is that we can race in two ways:
- Waiters on the dev_priv->pending_flip_queue aren't woken up after
we've marked the reset as pending, but before we actually start the reset
work. This means that the waiter doesn't notice the pending reset
and hence will keep on hogging the locks.
Like with dev->struct_mutex and the ring->irq_queue wait queues, we
therefore need to wake up everyone that potentially holds a lock which
the reset handler needs.
- intel_display_handle_reset was called _after_ we've already
signalled the completion of the reset work. Which means a waiter
could sneak in, grab the lock and never release it (since the
pageflips won't ever get released).
Similar to resetting the gem state all the reset work must complete
before we update the reset counter. Contrary to the gem reset we
don't need to have a second explicit wake up call since that will
have happened already when completing the pageflips. We also don't
have any issues that the completion happens while the reset state is
still pending - wait_for_pending_flips is only there to ensure we
display the right frame. After gpu hang & reset events such
guarantees are out the window anyway. This is in contrast to the gem
code where too-early wake-up would result in unnecessary restarting
of ioctls.
Also, since we've gotten these various deadlocks and ordering
constraints wrong so often, throw copious amounts of comments at the
code.
This deadlock regression has been introduced in the commit which added
the pageflip reset logic to the gpu hang work:
commit 96a02917a0
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Mon Feb 18 19:08:49 2013 +0200
drm/i915: Finish page flips and update primary planes after a GPU reset
v2:
- Add comments to explain how the wake_up serves as memory barriers
for the atomic_t reset counter.
- Improve the comments a bit as suggested by Chris Wilson.
- Extract the wake_up calls before/after the reset into a little
i915_error_wake_up and unconditionally wake up the
pending_flip_queue waiters, again as suggested by Chris Wilson.
v3: Throw copious amounts of comments at i915_error_wake_up as
suggested by Chris Wilson.
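A sketch of the helper extracted in v2/v3 (simplified; the exact
structure may differ): wake every waiter that could be sitting on a
lock the reset handler needs.
	static void i915_error_wake_up(struct drm_i915_private *dev_priv,
				       bool reset_completed)
	{
		struct intel_ring_buffer *ring;
		int i;

		for_each_ring(ring, dev_priv, i)
			wake_up_all(&ring->irq_queue);

		/* pageflip waiters may hold crtc->mutex, needed by reset */
		wake_up_all(&dev_priv->pending_flip_queue);

		if (reset_completed)
			wake_up_all(&dev_priv->gpu_error.reset_queue);
	}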
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 122f46badaafbe651f05c2c0f24cadee692f761b upstream.
Since we've started to clean up pending flips when the gpu hangs in
commit 96a02917a0
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Mon Feb 18 19:08:49 2013 +0200
drm/i915: Finish page flips and update primary planes after a GPU reset
the gpu reset work now also grabs modeset locks. But since work items
on our private work queue are not allowed to do that due to the
flush_workqueue from the pageflip code, this results in a neat
deadlock:
INFO: task kms_flip:14676 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kms_flip D ffff88019283a5c0 0 14676 13344 0x00000004
ffff88018e62dbf8 0000000000000046 ffff88013bdb12e0 ffff88018e62dfd8
ffff88018e62dfd8 00000000001d3b00 ffff88019283a5c0 ffff88018ec21000
ffff88018f693f00 ffff88018eece000 ffff88018e62dd60 ffff88018eece898
Call Trace:
[<ffffffff8138ee7b>] schedule+0x60/0x62
[<ffffffffa046c0dd>] intel_crtc_wait_for_pending_flips+0xb2/0x114 [i915]
[<ffffffff81050ff4>] ? finish_wait+0x60/0x60
[<ffffffffa0478041>] intel_crtc_set_config+0x7f3/0x81e [i915]
[<ffffffffa031780a>] drm_mode_set_config_internal+0x4f/0xc6 [drm]
[<ffffffffa0319cf3>] drm_mode_setcrtc+0x44d/0x4f9 [drm]
[<ffffffff810e44da>] ? might_fault+0x38/0x86
[<ffffffffa030d51f>] drm_ioctl+0x2f9/0x447 [drm]
[<ffffffff8107a722>] ? trace_hardirqs_off+0xd/0xf
[<ffffffffa03198a6>] ? drm_mode_setplane+0x343/0x343 [drm]
[<ffffffff8112222f>] ? mntput_no_expire+0x3e/0x13d
[<ffffffff81117f33>] vfs_ioctl+0x18/0x34
[<ffffffff81118776>] do_vfs_ioctl+0x396/0x454
[<ffffffff81396b37>] ? sysret_check+0x1b/0x56
[<ffffffff81118886>] SyS_ioctl+0x52/0x7d
[<ffffffff81396b12>] system_call_fastpath+0x16/0x1b
2 locks held by kms_flip/14676:
#0: (&dev->mode_config.mutex){+.+.+.}, at: [<ffffffffa0316545>] drm_modeset_lock_all+0x22/0x59 [drm]
#1: (&crtc->mutex){+.+.+.}, at: [<ffffffffa031656b>] drm_modeset_lock_all+0x48/0x59 [drm]
INFO: task kworker/u8:4:175 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u8:4 D ffff88018de9a5c0 0 175 2 0x00000000
Workqueue: i915 i915_error_work_func [i915]
ffff88018e37dc30 0000000000000046 ffff8801938ab8a0 ffff88018e37dfd8
ffff88018e37dfd8 00000000001d3b00 ffff88018de9a5c0 ffff88018ec21018
0000000000000246 ffff88018e37dca0 000000005a865a86 ffff88018de9a5c0
Call Trace:
[<ffffffff8138ee7b>] schedule+0x60/0x62
[<ffffffff8138f23d>] schedule_preempt_disabled+0x9/0xb
[<ffffffff8138d0cd>] mutex_lock_nested+0x205/0x3b1
[<ffffffffa0477094>] ? intel_display_handle_reset+0x7e/0xbd [i915]
[<ffffffffa0477094>] ? intel_display_handle_reset+0x7e/0xbd [i915]
[<ffffffffa0477094>] intel_display_handle_reset+0x7e/0xbd [i915]
[<ffffffffa044e0a2>] i915_error_work_func+0x128/0x147 [i915]
[<ffffffff8104a89a>] process_one_work+0x1d4/0x35a
[<ffffffff8104a821>] ? process_one_work+0x15b/0x35a
[<ffffffff8104b4a5>] worker_thread+0x144/0x1f0
[<ffffffff8104b361>] ? rescuer_thread+0x275/0x275
[<ffffffff8105076d>] kthread+0xac/0xb4
[<ffffffff81059d30>] ? finish_task_switch+0x3b/0xc0
[<ffffffff810506c1>] ? __kthread_parkme+0x60/0x60
[<ffffffff81396a6c>] ret_from_fork+0x7c/0xb0
[<ffffffff810506c1>] ? __kthread_parkme+0x60/0x60
3 locks held by kworker/u8:4/175:
#0: (i915){.+.+.+}, at: [<ffffffff8104a821>] process_one_work+0x15b/0x35a
#1: ((&dev_priv->gpu_error.work)){+.+.+.}, at: [<ffffffff8104a821>] process_one_work+0x15b/0x35a
#2: (&crtc->mutex){+.+.+.}, at: [<ffffffffa0477094>] intel_display_handle_reset+0x7e/0xbd [i915]
This blew up while running kms_flip/flip-vs-panning-vs-hang-interruptible
on one of my older machines.
Unfortunately (despite the proper lockdep annotations for
flush_workqueue) lockdep still doesn't detect this correctly, so we
need to rely on chance to discover these bugs.
Apply the usual bugfix and schedule the reset work on the system
workqueue to keep our own driver workqueue free of any modeset lock
grabbing.
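In sketch form (not the literal diff), the change amounts to:
	/* before: queue_work(dev_priv->wq, &dev_priv->gpu_error.work); */
	schedule_work(&dev_priv->gpu_error.work);	/* system workqueue */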
Note that this is not a terribly serious regression since before the
offending commit we'd simply have stalled userspace forever due to
failing to abort all outstanding pageflips.
v2: Add a comment as requested by Chris.
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 5f5610f69be3a925b1f79af27150bb7377bc9ad6 upstream.
This patch fixes a NULL pointer dereference and a WARN_ON in
dummy-hcd. These things were the result of moving to the UDC core
framework, and possibly of changes to that framework.
Now unloading a gadget driver causes the UDC to be stopped after the
gadget driver is unbound, not before. Therefore the "driver" argument
to dummy_udc_stop() can be NULL, so we must not try to print the
driver's name without checking first.
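A sketch of the guard described above (simplified; 'dum' and udc_dev()
are names from the driver, the surrounding code is omitted):
	if (driver)
		dev_dbg(udc_dev(dum), "unregister gadget driver '%s'\n",
			driver->driver.name);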
Also, the UDC framework automatically unregisters the gadget when the
UDC is deleted. Therefore a sysfs attribute file attached to the
gadget must be removed before the UDC is deleted, not after.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 297502abb32e225fb23801fcdb0e4f6f8e17099a upstream.
A HID device could send a malicious output report that would cause the
logitech-dj HID driver to leak kernel memory contents to the device, or
trigger a NULL dereference during initialization:
[ 304.424553] usb 1-1: New USB device found, idVendor=046d, idProduct=c52b
...
[ 304.780467] BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
[ 304.781409] IP: [<ffffffff815d50aa>] logi_dj_recv_send_report.isra.11+0x1a/0x90
CVE-2013-2895
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0a9cd0a80ac559357c6a90d26c55270ed752aa26 upstream.
A HID device could send a malicious output report that would cause the
lenovo-tpkbd HID driver to write just beyond the output report allocation
during initialization, causing a heap overflow:
[ 76.109807] usb 1-1: New USB device found, idVendor=17ef, idProduct=6009
...
[ 80.462540] BUG kmalloc-192 (Tainted: G W ): Redzone overwritten
CVE-2013-2894
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 41df7f6d43723deb7364340b44bc5d94bf717456 upstream.
A HID device could send a malicious output report that would cause the
steelseries HID driver to write beyond the output report allocation
during initialization, causing a heap overflow:
[ 167.981534] usb 1-1: New USB device found, idVendor=1038, idProduct=1410
...
[ 182.050547] BUG kmalloc-256 (Tainted: G W ): Redzone overwritten
CVE-2013-2891
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 0ccdd9e7476680c16113131264ad6597bd10299d upstream.
If tpkbd_probe_tp() bails out, the probe() function returns an error,
but hid_hw_stop() is never called.
fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=1003998
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>