|
Pull xfs fixes from Ben Myers:
- Fix nested transactions in xfs_qm_scall_setqlim
- Clear suid/sgid bits when we truncate with size update
- Fix recovery for split buffers
- Fix block count on remote symlinks
- Add fsgeom flag for v5 superblock support
- Disable XFS_IOC_SWAPEXT for CRC enabled filesystems
- Fix dirv3 freespace block corruption
* tag 'for-linus-v3.10-rc4' of git://oss.sgi.com/xfs/xfs:
xfs: fix dir3 freespace block corruption
xfs: disable swap extents ioctl on CRC enabled filesystems
xfs: add fsgeom flag for v5 superblock support.
xfs: fix incorrect remote symlink block count
xfs: fix split buffer vector log recovery support
xfs: kill suid/sgid through the truncate path.
xfs: avoid nesting transactions in xfs_qm_scall_setqlim()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Pull aer error logging fix from Tony Luck:
"Can't call pci_get_domain_bus_and_slot() from interupt context"
* tag 'please-pull-aertracefix' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
aerdrv: Move cper_print_aer() call out of interrupt context
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64
Pull arm64 fixes from Catalin Marinas:
- Module compilation issues (symbol not exported).
- Plug a hole where user space can bring the kernel down.
* tag 'arm64-stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64:
arm64: don't kill the kernel on a bad esr from el0
arm64: treat unhandled compat el0 traps as undef
arm64: Do not report user faults for handled signals
arm64: kernel: compiling issue, need 'EXPORT_SYMBOL(clear_page)'
|
|
Rather than completely killing the kernel if we receive an esr value we
can't deal with in the el0 handlers, send the process a SIGILL and log
the esr value in the hope that we can debug it. If we receive a bad esr
from el1, we'll die() as before.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
|
|
Currently, if a compat process reads or writes from/to a disabled
cp15/cp14 register, the trap is not handled by the el0_sync_compat
handler, and the kernel will head to bad_mode, where it will die(), and
oops(). For 64 bit processes, disabled system register accesses are
currently treated as unhandled instructions.
This patch modifies entry.S to treat these unhandled traps as undefined
instructions, sending a SIGILL to userspace. This gives processes a
chance to handle this and stop using inaccessible registers, and
prevents further issues in the kernel as a result of the die().
Reported-by: Johannes Jensen <Johannes.Jensen@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Pull drm fixes from Dave Airlie:
"One qxl 32-bit warning fix, the rest is a bunch of radeon fixes from
Alex for some issues we've been seeing."
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
drm/qxl: fix build warnings on 32-bit
radeon: use max_bus_speed to activate gen2 speeds
drm/radeon: narrow scope of Apple re-POST hack
drm/radeon: don't check crtcs in card_posted() on cards without DCE
drm/radeon: fix card_posted check for newer asics
drm/radeon: fix typo in cu_per_sh on verde
drm/radeon: UVD block on SUMO2 is the same as on SUMO
|
|
Just the usual printk related warnings.
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Pull nfsd fixes from Bruce Fields:
"A couple minor fixes for the (new to 3.10) gss-proxy code.
And one regression from user-namespace changes. (XBMC clients were
doing something admittedly weird--sending -1 gid's--but something that
we used to allow.)"
* 'for-3.10' of git://linux-nfs.org/~bfields/linux:
svcrpc: fix failures to handle -1 uid's and gid's
svcrpc: implement O_NONBLOCK behavior for use-gss-proxy
svcauth_gss: fix error code in use_gss_proxy()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Peter Anvin:
- Three EFI-related fixes
- Two early memory initialization fixes
- build fix for older binutils
- fix for an eager FPU performance regression -- currently we don't
allow the use of the FPU at interrupt time *at all* in eager mode,
which is clearly wrong.
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86: Allow FPU to be used at interrupt time even with eagerfpu
x86, crc32-pclmul: Fix build with older binutils
x86-64, init: Fix a possible wraparound bug in switchover in head_64.S
x86, range: fix missing merge during add range
x86, efi: initial the local variable of DataSize to zero
efivar: fix oops in efivar_update_sysfs_entries() caused by memory reuse
efivarfs: Never return ENOENT from firmware again
|
|
With the addition of eagerfpu the irq_fpu_usable() now returns false
negatives especially in the case of ksoftirqd and interrupted idle task,
two common cases for FPU use for example in networking/crypto. With
eagerfpu=off FPU use is possible in those contexts. This is because of
the eagerfpu check in interrupted_kernel_fpu_idle():
...
* For now, with eagerfpu we will return interrupted kernel FPU
* state as not-idle. TBD: Ideally we can change the return value
* to something like __thread_has_fpu(current). But we need to
* be careful of doing __thread_clear_has_fpu() before saving
* the FPU etc for supporting nested uses etc. For now, take
* the simple route!
...
if (use_eager_fpu())
return 0;
As eagerfpu is automatically "on" on those CPUs that also have
features like AES-NI, this patch changes the eagerfpu check to return 1 in
case kernel_fpu_begin() has not been called yet. Once it has been called,
__thread_has_fpu() will start returning 0.
Notice that with eagerfpu the __thread_has_fpu is always true initially.
FPU use is thus always possible no matter what task is under us, unless
the state has already been saved with kernel_fpu_begin().
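For illustration, a minimal sketch of the revised check (the real code is
interrupted_kernel_fpu_idle() in arch/x86/kernel/i387.c; details may differ):

	static inline bool interrupted_kernel_fpu_idle(void)
	{
		/* eagerfpu: the FPU is usable from interrupt context until
		 * kernel_fpu_begin() clears the has-fpu state */
		if (use_eager_fpu())
			return __thread_has_fpu(current);	/* was: return 0; */

		return !__thread_has_fpu(current) &&
			(read_cr0() & X86_CR0_TS);
	}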
[ hpa: this is a performance regression, not a correctness regression,
but since it can be quite serious on CPUs which need encryption at
interrupt time I am marking this for urgent/stable. ]
Signed-off-by: Pekka Riikonen <priikone@iki.fi>
Link: http://lkml.kernel.org/r/alpine.GSO.2.00.1305131356320.18@git.silcnet.org
Cc: <stable@vger.kernel.org> v3.7+
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
binutils prior to 2.18 (e.g. the ones found on SLE10) don't support
assembling PEXTRD, so a macro based approach like the one for PCLMULQDQ
in the same file should be used.
This requires making the helper macros capable of recognizing 32-bit
general purpose register operands.
[ hpa: tagging for stable as it is a low risk build fix ]
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/51A6142A02000078000D99D8@nat28.tlf.novell.com
Cc: Alexander Boyko <alexander_boyko@xyratex.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Huang Ying <ying.huang@intel.com>
Cc: <stable@vger.kernel.org> v3.9
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
When the directory freespace index grows to a second block (2017
4k data blocks in the directory), the initialisation of the second
new block header goes wrong. The write verifier fires a corruption
error indicating that the block number in the header is zero. This
was being tripped by xfs/110.
The problem is that the initialisation of the new block is done just
fine in xfs_dir3_free_get_buf(), but the caller then uses a dir v2
structure to zero on-disk header fields that xfs_dir3_free_get_buf()
has already zeroed. These zeroed fields lined up with the block number in
the dir v3 header format.
While looking at this, I noticed that the struct xfs_dir3_free_hdr()
had 4 bytes of padding in it that wasn't defined as padding or being
zeroed by the initialisation. Add a pad field declaration and fully
zero the on disk and in-core headers in xfs_dir3_free_get_buf() so
that this is never an issue in the future. Note that this doesn't
change the on-disk layout, just makes the 32 bits of padding in the
layout explicit.
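As a stand-alone illustration of the general hazard (not the actual XFS
structures): field-by-field initialisation never touches implicit padding,
so whatever happened to be in memory ends up on disk unless the padding is
named and the whole header is zeroed first.

	#include <stdint.h>
	#include <string.h>

	struct hdr_implicit {		/* sizeof == 16 on common ABIs, but only  */
		uint64_t blkno;		/* 12 bytes are named: the 4 bytes of     */
		uint32_t nvalid;	/* tail padding stay uninitialised        */
	};

	struct hdr_explicit {		/* same layout, padding made explicit */
		uint64_t blkno;
		uint32_t nvalid;
		uint32_t pad;		/* always written as zero */
	};

	static void init_hdr(struct hdr_explicit *hdr, uint64_t blkno)
	{
		memset(hdr, 0, sizeof(*hdr));	/* zero everything, padding included */
		hdr->blkno = blkno;
	}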
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 5ae6e6a401957698f2bd8c9f4a86d86d02199fea)
|
|
Currently, swapping extents from one inode to another is a simple
act of switching data and attribute forks from one inode to another.
This, unfortunately in no longer so simple with CRC enabled
filesystems as there is owner information embedded into the BMBT
blocks that are swapped between inodes. Hence swapping the forks
between inodes results in the inodes having mapping blocks that
point to the wrong owner and hence are considered corrupt.
To fix this we need an extent tree block or record based swap
algorithm so that the BMBT block owner information can be updated
atomically in the swap transaction. This is a significant piece of
new work, so for the moment simply don't allow swap extent
operations to succeed on CRC enabled filesystems.
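A rough sketch of the stop-gap (the exact check and error value in
xfs_swap_extents() may differ):

	/* refuse the fork swap on v5/CRC filesystems until a block/record
	 * based swap that fixes up BMBT owners exists */
	if (xfs_sb_version_hascrc(&mp->m_sb))
		return XFS_ERROR(EINVAL);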
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 02f75405a75eadfb072609f6bf839e027de6a29a)
|
|
Currently userspace has no way of determining that a filesystem is
CRC enabled. Add a flag to the XFS_IOC_FSGEOMETRY ioctl output to
indicate that the filesystem has v5 superblock support enabled.
This will allow xfs_info to correctly report the state of the
filesystem.
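A sketch of the reporting side (the flag name XFS_FSOP_GEOM_FLAGS_V5SB is
assumed here):

	/* advertise v5 superblock (CRC) support in XFS_IOC_FSGEOMETRY output */
	geo->flags |=
		xfs_sb_version_hascrc(&mp->m_sb) ? XFS_FSOP_GEOM_FLAGS_V5SB : 0;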
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 74137fff067961c9aca1e14d073805c3de8549bd)
|
|
When CRCs are enabled, the number of blocks needed to hold a remote
symlink on a 1k block size filesystem may be 2 instead of 1. The
transaction reservation for the allocated blocks was not taking this
into account and only allocating one block. Hence when trying to
read or invalidate such symlinks, we are mapping a hole where there
should be a block and things go bad at that point.
Fix the reservation to use the correct block count, clean up the
block count calculation similar to the remote attribute calculation,
and add a debug guard to detect when we don't write the entire
symlink to disk.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 321a95839e65db3759a07a3655184b0283af90fe)
|
|
A long time ago in a galaxy far away....
.. there was a commit made to fix some ilinux specific "fragmented
buffer" log recovery problem:
http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=commitdiff;h=b29c0bece51da72fb3ff3b61391a391ea54e1603
That problem occurred when a contiguous dirty region of a buffer was
split across two pages of an unmapped buffer. It's been a
long time since that has been done in XFS, and the changes to log
entire inode buffers for CRC enabled filesystems have
re-introduced that corner case.
And, of course, it turns out that the above commit didn't actually
fix anything - it just ensured that log recovery is guaranteed to
fail when this situation occurs. And now for the gory details.
xfstest xfs/085 is failing with this assert:
XFS (vdb): bad number of regions (0) in inode log format
XFS: Assertion failed: 0, file: fs/xfs/xfs_log_recover.c, line: 1583
Largely undocumented factoid #1: Log recovery depends on all log
buffer format items starting with this format:
struct foo_log_format {
__uint16_t type;
__uint16_t size;
....
Recovery uses the size field and assumptions about 32 bit
alignment when decoding format items, so don't pay much attention to
the fact that log recovery thinks it is decoding an inode log format
item - it just uses those fields to determine what the size of the item is.
But why would it see a log format item with a zero size? Well,
luckily enough xfs_logprint uses the same code and gives the same
error, so with a bit of gdb magic, it turns out that it isn't a log
format that is being decoded. What logprint tells us is this:
Oper (130): tid: a0375e1a len: 28 clientid: TRANS flags: none
BUF: #regs: 2 start blkno: 144 (0x90) len: 16 bmap size: 2 flags: 0x4000
Oper (131): tid: a0375e1a len: 4096 clientid: TRANS flags: none
BUF DATA
----------------------------------------------------------------------------
Oper (132): tid: a0375e1a len: 4096 clientid: TRANS flags: none
xfs_logprint: unknown log operation type (4e49)
**********************************************************************
* ERROR: data block=2 *
**********************************************************************
So we've got a buffer format item (oper 130) that has two regions:
the format item itself and one dirty region. The subsequent region
after the buffer format item and its data is then what we are
tripping over, and the first bytes of it are an inode magic number.
Not a log opheader like there is supposed to be.
That means there's a problem with the buffer format item. Its dirty
data region is 4096 bytes, and it contains - you guessed it -
initialised inodes. But inode buffers are 8k, not 4k, and we log
them in their entirety. So something is wrong here. The buffer
format item contains:
(gdb) p /x *(struct xfs_buf_log_format *)in_f
$22 = {blf_type = 0x123c, blf_size = 0x2, blf_flags = 0x4000,
blf_len = 0x10, blf_blkno = 0x90, blf_map_size = 0x2,
blf_data_map = {0xffffffff, 0xffffffff, .... }}
Two regions, and a single dirty contiguous region of 64 bits. 64 *
128 = 8k, so this should be followed by a single 8k region of data.
And the blf_flags tell us that the type of buffer is a
XFS_BLFT_DINO_BUF. It contains inodes. And because it doesn't have
the XFS_BLF_INODE_BUF flag set, that means it's an inode allocation
buffer. So, it should be followed by 8k of inode data.
But we know that the next region has a header of:
(gdb) p /x *ohead
$25 = {oh_tid = 0x1a5e37a0, oh_len = 0x100000, oh_clientid = 0x69,
oh_flags = 0x0, oh_res2 = 0x0}
and so be32_to_cpu(oh_len) = 0x1000 = 4096 bytes. It's simply not
long enough to hold all the logged data. There must be another
region. There is - there's a following opheader for another 4k of
data that contains the other half of the inode cluster data - the
one we assert fail on because it's not a log format header.
So why is the second part of the data not being accounted to the
correct buffer log format structure? It took a little more work with
gdb to work out that the buffer log format structure was both
expecting it to be there but hadn't accounted for it. It was at that
point I went to the kernel code, as clearly this wasn't a bug in
xfs_logprint and the kernel was writing bad stuff to the log.
First port of call was the buffer item formatting code, and the
discontiguous memory/contiguous dirty region handling code
immediately stood out. I've wondered for a long time why the code
had this comment in it:
vecp->i_addr = xfs_buf_offset(bp, buffer_offset);
vecp->i_len = nbits * XFS_BLF_CHUNK;
vecp->i_type = XLOG_REG_TYPE_BCHUNK;
/*
* You would think we need to bump the nvecs here too, but we do not
* this number is used by recovery, and it gets confused by the boundary
* split here
* nvecs++;
*/
vecp++;
And it didn't account for the extra vector pointer. The case being
handled here is that a contiguous dirty region lies across a
boundary that cannot be memcpy()d across, and so has to be split
into two separate operations for xlog_write() to perform.
What this code assumes is that what is written to the log is two
consecutive blocks of data that are accounted in the buf log format
item as the same contiguous dirty region and so will get decoded as
such by the log recovery code.
The thing is, xlog_write() knows nothing about this, and so just
does its normal thing of adding an opheader for each vector. That
means the 8k region gets written to the log as two separate regions
of 4k each, but because nvecs has not been incremented, the buf log
format item accounts for only one of them.
Hence when we come to log recovery, we process the first 4k region
and then expect to come across a new item that starts with a log
format structure of some kind that tells us what the next data is
going to be. Instead, we hit raw buffer data and things go bad real
quick.
So, the commit from 2002 that commented out nvecs++ is just plain
wrong. It breaks log recovery completely, and it would seem the only
reason this hasn't been seen since then is that we don't log large
contiguous regions of multi-page unmapped buffers very often. Never
would be a closer estimate, at least until the CRC code came along....
So, let's fix that by restoring the nvecs accounting for the extra
region when we hit this case.....
.... and there's the problem in log recovery it is apparently working
around:
XFS: Assertion failed: i == item->ri_total, file: fs/xfs/xfs_log_recover.c, line: 2135
Yup, xlog_recover_do_reg_buffer() doesn't handle contiguous dirty
regions being broken up into multiple regions by the log formatting
code. That's an easy fix, though - if the number of contiguous dirty
bits exceeds the length of the region being copied out of the log,
only account for the number of dirty bits that region covers, and
then loop again and copy more from the next region. It's a 2 line
fix.
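A sketch of that recovery-side loop (not the literal
xlog_recover_do_reg_buffer() code; names and details may differ):

	/* A contiguous run of dirty bits may have been split across several
	 * log regions by the formatting code, so cap each copy at the region
	 * length and let the next iteration consume the remaining bits. */
	while (nbits > 0) {
		int rbits = min(nbits,
				(int)(item->ri_buf[i].i_len / XFS_BLF_CHUNK));

		memcpy(xfs_buf_offset(bp, bit << XFS_BLF_SHIFT),
		       item->ri_buf[i].i_addr, rbits << XFS_BLF_SHIFT);
		bit += rbits;
		nbits -= rbits;
		i++;		/* move on to the next logged region */
	}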
Now xfstests xfs/085 passes, we have one less piece of mystery
code, and one more important piece of knowledge about how to
structure new log format items.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 709da6a61aaf12181a8eea8443919ae5fc1b731d)
|
|
XFS has failed to kill suid/sgid bits correctly when truncating
files of non-zero size since commit c4ed4243 ("xfs: split
xfs_setattr") introduced in the 3.1 kernel. Fix it.
cc: stable kernel <stable@vger.kernel.org>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 56c19e89b38618390addfc743d822f99519055c6)
|
|
Lockdep reports:
=============================================
[ INFO: possible recursive locking detected ]
3.9.0+ #3 Not tainted
---------------------------------------------
setquota/28368 is trying to acquire lock:
(sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
but task is already holding lock:
(sb_internal){++++.?}, at: [<c11e8846>] xfs_trans_alloc+0x26/0x50
from xfs_qm_scall_setqlim()->xfs_dqread() when a dquot needs to be
allocated.
xfs_qm_scall_setqlim() is starting a transaction and then not
passing it into xfs_qm_dqget(), and so it starts its own transaction
when allocating the dquot. Splat!
Fix this by not allocating the dquot in xfs_qm_scall_setqlim()
inside the setqlim transaction. This requires getting the dquot
first (and allocating it if necessary) then dropping and relocking
the dquot before joining it to the setqlim transaction.
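The resulting order of operations, roughly (real code in
xfs_qm_scall_setqlim(); reservation details omitted):

	error = xfs_qm_dqget(mp, NULL, id, type, XFS_QMOPT_DQALLOC, &dqp);
	if (error)
		return error;
	xfs_dqunlock(dqp);	/* can't hold the dquot lock over trans alloc */

	tp = xfs_trans_alloc(mp, XFS_TRANS_QM_SETQLIM);
	/* ... xfs_trans_reserve() the setqlim log space here ... */

	xfs_dqlock(dqp);	/* relock and join; the dquot already exists */
	xfs_trans_dqjoin(tp, dqp);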
Reported-by: Michael L. Semon <mlsemon35@gmail.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit f648167f3ac79018c210112508c732ea9bf67c7b)
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen
Pull Xen fixes from Konrad Rzeszutek Wilk:
- Use proper error paths
- Clean up APIC IPI usage (incorrect arguments)
- Delay XenBus frontend resume if backend (xenstored) is not running
- Fix build error with various combinations of CONFIG_
* tag 'stable/for-linus-3.10-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
xenbus_client.c: correct exit path for xenbus_map_ring_valloc_hvm
xen-pciback: more uses of cached MSI-X capability offset
xen: Clean up apic ipi interface
xenbus: save xenstore local status for later use
xenbus: delay xenbus frontend resume if xenstored is not running
xmem/tmem: fix 'undefined variable' build error.
|
|
Tomi and I will now take care of the Framebuffer Layer.
The git tree is now on kernel.org.
Signed-off-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Florian Tobias Schandinat <FlorianSchandinat@gmx.de>
Cc: linux-fbdev@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound updates from Takashi Iwai:
"Again very calm updates at this time.
All small fixes for individual drivers, mostly ASoC codecs, in
addition to a soc-compress fix for capture streams, which is safe to
apply as there are no in-tree users yet."
* tag 'sound-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ASoC: cs42l52: fix default value for MASTERA_VOL.
ASoC: wm8994: check for array index returned
ASoC: wm8994: Fix reporting of accessory removal on WM8958
ASoC: wm8994: use the correct pointer to get the control value
ASoC: wm5110: Correct DSP4R Mixer control name
ALSA: usb-6fire: Modify firmware version check
ASoC: cs42l52: fix master playback mute mask.
ASoC: cs42l52: fix bogus shifts in "Speaker Volume" and "PCM Mixer Volume" controls.
ASoC: cs42l52: microphone bias is controlled by IFACE_CTL2 register.
ASoC: davinci: fix sample rotation
ASoC: wm5110: Add missing speaker initialisation
ASoC: soc-compress: Send correct stream event for capture start
ASoC: max98090: request IRQF_ONESHOT interrupt
|
|
The following warning was seen on 3.9 when a corrected PCIe error was being
handled by the AER subsystem.
WARNING: at .../drivers/pci/search.c:214 pci_get_dev_by_id+0x8a/0x90()
This occurred because a call to pci_get_domain_bus_and_slot() was added to
cper_print_pcie() to setup for the call to cper_print_aer(). The warning
showed up because cper_print_pcie() is called in an interrupt context and
pci_get* functions are not supposed to be called in that context.
The solution is to move the cper_print_aer() call out of the interrupt
context and into aer_recover_work_func() to avoid any warnings when calling
pci_get* functions.
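Structurally, the deferral looks roughly like this (names follow the
existing AER recovery path; exact signatures may differ):

	/* the interrupt/NMI path only queues the raw data; the pci_get_*
	 * lookup and cper_print_aer() now run later, in process context */
	static void aer_recover_work_func(struct work_struct *work)
	{
		struct aer_recover_entry entry;
		struct pci_dev *pdev;

		while (kfifo_get(&aer_recover_ring, &entry)) {
			pdev = pci_get_domain_bus_and_slot(entry.domain,
							   entry.bus, entry.devfn);
			if (!pdev)
				continue;
			cper_print_aer(pdev, entry.severity, entry.regs);
			do_recovery(pdev, entry.severity);
			pci_dev_put(pdev);
		}
	}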
Signed-off-by: Lance Ortiz <lance.ortiz@hp.com>
Acked-by: Borislav Petkov <bp@suse.de>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
|
|
Merge mn10300 fixes from David Howells.
* emailed patches from David Howells <dhowells@redhat.com>:
MN10300: Need pci_iomap() and __pci_ioport_map() defining
MN10300: ASB2305's PCI code needs the definition of XIRQ1
MN10300: Enable IRQs more in system call exit work path
MN10300: Fix ret_from_kernel_thread
|
|
Include the generic definitions of pci_iomap() and __pci_ioport_map()
otherwise we can get errors like:
lib/pci_iomap.c: In function 'pci_iomap':
lib/pci_iomap.c:37: error: implicit declaration of function '__pci_ioport_map'
lib/pci_iomap.c:37: warning: return makes pointer from integer without a cast
and:
drivers/pci/quirks.c: In function 'disable_igfx_irq':
drivers/pci/quirks.c:2893: error: implicit declaration of function 'pci_iomap'
drivers/pci/quirks.c:2893: warning: initialization makes pointer from integer without a cast
drivers/pci/quirks.c: In function 'reset_ivb_igd':
drivers/pci/quirks.c:3133: warning: assignment makes pointer from integer without a cast
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ken Cox <jkc@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The code for PCI in the ASB2305 needs the definition of XIRQ1 from proc/irq.h
otherwise the following error appears:
arch/mn10300/unit-asb2305/pci.c: In function 'unit_pci_init':
arch/mn10300/unit-asb2305/pci.c:481: error: 'XIRQ1' undeclared (first use in this function)
arch/mn10300/unit-asb2305/pci.c:481: error: (Each undeclared identifier is reported only once
arch/mn10300/unit-asb2305/pci.c:481: error: for each function it appears in.)
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ken Cox <jkc@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Enable IRQs when calling schedule() for TIF_NEED_RESCHED and
do_notify_resume(). If interrupts are not enabled when do_notify_resume() is
called, a warning can be seen (see lower down).
Whilst we're at it, resume_userspace can be made local to entry.S as it is not
called outside of there and it can be merged with the part of work_resched that
occurs after schedule() is called.
WARNING: at kernel/softirq.c:160 local_bh_enable+0x42/0xa0()
Call Trace:
local_bh_enable+0x42/0xa0
unix_release_sock+0x86/0x23c
unix_release+0x20/0x28
sock_release+0x17/0x88
sock_close+0x20/0x28
__fput+0xc9/0x1fc
____fput+0xb/0x10
task_work_run+0x64/0x78
do_notify_resume+0x53d/0x544
work_notifysig+0xa/0xc
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ken Cox <jkc@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
ret_from_kernel_thread needs to set A2 to the thread_info pointer before
jumping to syscall_exit.
Without this, we never correctly start userspace.
This was caused by the rejuggling of the fork/exec paths in commit
ddf23e87a804 ("mn10300: switch to saner kernel_execve() semantics")
Reported-by: Ken Cox <jkc@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Ken Cox <jkc@redhat.com>
Acked-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pin-control fixes from Linus Walleij:
- Six patches fixing up the suspend/resume and wakeup handling of the
Samsung and Exynos drivers.
- Error path fixes for four different drivers, all on the probe()
error path.
- Make the debugfs code for pin config take the right mutex.
* tag 'pinctrl-fixes-v3.10-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: pinconf: take the right mutex
pinctrl: sunxi: fix error return code in sunxi_pinctrl_probe()
pinctrl: exynos: Handle suspend/resume of GPIO EINT registers
pinctrl: samsung: Allow per-bank SoC-specific private data
pinctrl: samsung: Add support for SoC-specific suspend/resume callbacks
pinctrl: Don't override the error code in probe error handling
ARM: EXYNOS: Fix EINT wake-up mask configuration when pinctrl is used
pinctrl: exynos: Add support for set_irq_wake of wake-up EINTs
pinctrl: samsung: fix suspend/resume functionality
|
|
into drm-next
just a few minor fixes for radeon.
* 'drm-fixes-3.10' of git://people.freedesktop.org/~agd5f/linux:
radeon: use max_bus_speed to activate gen2 speeds
drm/radeon: narrow scope of Apple re-POST hack
drm/radeon: don't check crtcs in card_posted() on cards without DCE
drm/radeon: fix card_posted check for newer asics
drm/radeon: fix typo in cu_per_sh on verde
drm/radeon: UVD block on SUMO2 is the same as on SUMO
|
|
Apparently we should not free a page that has not been allocated.
This is because alloc_xenballooned_pages will take care of freeing
the page on its own.
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
radeon currently uses a drm function to get the speed capabilities for
the bus, drm_pcie_get_speed_cap_mask. However, this is a non-standard
method of performing this detection and this patch changes it to use
the max_bus_speed attribute.
From: Lucas Kannebley Tavares <lucaskt@linux.vnet.ibm.com>
Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
This narrows the scope of the apple re-POST hack added in:
drm/radeon: re-POST the asic on Apple hardware when booted via EFI
That patch prevents UVD from working on macs when booted in EFI
mode. The original patch fixed macbook2,1 systems which were
r5xx and hence have no UVD. Limit the hack to those systems to
prevent UVD breakage on newer systems.
Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=63935
Cc: Matthew Garrett <mjg59@srcf.ucam.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Matthew Garrett <matthew.garrett@nebula.com>
|
|
Skip checking crtcs in hardware without them. Avoids checking
non-existent hardware.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
Newer asics have variable numbers of crtcs. Use that
rather than the asic family to determine which crtcs
to check. This avoids checking non-existent crtcs or
missing crtcs on certain asics.
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
Should be 5 rather than 2.
Noticed by sroland and glisse on IRC.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
The chip id for SUMO2 isn't used.
fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=63935
Tested-By: Dave Witbrodt <dawitbro@sbcglobal.net>
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
As of f025adf191924e3a75ce80e130afcd2485b53bb8 "sunrpc: Properly decode
kuids and kgids in RPC_AUTH_UNIX credentials" any rpc containing a -1
(0xffffffff) uid or gid would fail with a badcred error.
Reported symptoms were XBMC clients failing on upgrade of the NFS
server; examination of the network trace showed them sending -1 as the
gid.
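The idea of the fix, sketched (real code in net/sunrpc/svcauth_unix.c;
details differ):

	/* a -1 on the wire simply decodes to INVALID_UID/INVALID_GID instead
	 * of failing the whole request with AUTH_BADCRED; validity is only
	 * checked later, where a valid id is actually required */
	cred->cr_uid = make_kuid(&init_user_ns, svc_getnl(argv));
	cred->cr_gid = make_kgid(&init_user_ns, svc_getnl(argv));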
Reported-by: Julian Sikorski <belegdol@gmail.com>
Tested-by: Julian Sikorski <belegdol@gmail.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: stable@vger.kernel.org
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
|
|
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Commit f447d56d36af18c5104ff29dcb1327c0c0ac3634 introduced the
implementation of the PV apic ipi interface. But there were some
odd things (none of which seem to cause any real issue, but
maybe they should be cleaned up anyway):
- xen_send_IPI_mask_allbutself (and by that xen_send_IPI_allbutself)
ignore the passed in vector and only use the CALL_FUNCTION_SINGLE
vector. While xen_send_IPI_all and xen_send_IPI_mask use the vector.
- physflat_send_IPI_allbutself is declared unnecessarily. It is never
used.
This patch tries to clean up those things.
Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
Save the xenstore local status computed in xenbus_init. It can then be used
later to check if xenstored is running in this domain.
Signed-off-by: Aurelien Chartier <aurelien.chartier@citrix.com>
[Changes in v4:
- Change variable name to xen_store_domain_type]
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
If the xenbus frontend is located in a domain running xenstored, the device
resume hangs because it happens before the xenstored process is resumed. This
patch adds extra logic to the resume code to check if we are the domain
running xenstored and delay the resume if needed.
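A sketch of the check and deferral (assumed names; the real code is in
drivers/xen/xenbus/xenbus_probe_frontend.c):

	static struct workqueue_struct *xenbus_frontend_wq;

	static int xenbus_frontend_dev_resume(struct device *dev)
	{
		/* we are the domain running xenstored: finish the device
		 * resume from a workqueue, after the process has resumed */
		if (xen_store_domain_type == XS_LOCAL) {
			struct xenbus_device *xdev = to_xenbus_device(dev);

			queue_work(xenbus_frontend_wq, &xdev->work);
			return 0;
		}
		return xenbus_dev_resume(dev);
	}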
Signed-off-by: Aurelien Chartier <aurelien.chartier@citrix.com>
[Changes in v2:
- Instead of bypassing the resume, process it in a workqueue]
[Changes in v3:
- Add a struct work in xenbus_device to avoid dynamic allocation
- Several small code fixes]
[Changes in v4:
- Use a dedicated workqueue]
[Changes in v5:
- Move create_workqueue error handling to xenbus_frontend_dev_resume]
Acked-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Updates for v3.10
A series of driver specific updates, none particularly critical, plus
one fix to the compressed API code to handle capture streams properly
which is very safe for mainline as there are no current users.
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM Exynos fixes from Olof Johansson:
"Here's a shorter set of fixes for 3.10, all for Samsung Exynos
platforms.
It also includes a defconfig update so that exynos_defconfig provides
a meaningful set of drivers to boot an unmodified kernel on the
Samsung ARM-based Chromebooks."
* tag 'fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
ARM: exynos: defconfig update
ARM: SAMSUNG: Add names to fimd0 IRQ resources
ARM: EXYNOS: fix software reset logic for EXYNOS5440 SOC
ARM: EXYNOS: Fix support of Exynos4210 rev0 SoC
ARM: dts: Enabling samsung-usb2phy driver for exynos5250
|
|
This turns on a number of configs that are useful on the Chromebook, but also
good to have on in general:
* USB host and MMC drivers(!)
* I2C GPIO arbitration driver
* CYAPA trackpad driver
* simplefb
* CROS EC and keyboard drivers
* S5M8767 driver
* MAX77686 drivers
* MAX8997 driver
* DEVTMPFS + mount
* DM_CRYPT (as module)
* CRYPTOLOOP
* HIGHMEM
* PRINTK timestamps
This also turns off DEBUG_LL, and switches the hardcoded Samsung lowlevel
uart to uart 3 (which is only used to show the "uncompressing kernel"
message at boot, it seems).
Signed-off-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Doug Anderson <dianders@chromium.org>
Tested-by: Tushar Behera <tushar.behera@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
|
|
In head_64.S, a switchover has been used to handle kernel crossing
1G, 512G boundaries.
And commit 8170e6bed465b4b0c7687f93e9948aca4358a33b
x86, 64bit: Use a #PF handler to materialize early mappings on demand
said:
During the switchover in head_64.S, before #PF handler is available,
we use three pages to handle kernel crossing 1G, 512G boundaries with
sharing page by playing games with page aliasing: the same page is
mapped twice in the higher-level tables with appropriate wraparound.
But from the switchover code, when we set up the PUD table:
114 addq $4096, %rdx
115 movq %rdi, %rax
116 shrq $PUD_SHIFT, %rax
117 andl $(PTRS_PER_PUD-1), %eax
118 movq %rdx, (4096+0)(%rbx,%rax,8)
119 movq %rdx, (4096+8)(%rbx,%rax,8)
It seems line 119 has a potential bug there. For example,
if the kernel is loaded at physical address 511G+1008M, that is
000000000 111111111 111111000 000000000000000000000
and the kernel _end is 512G+2M, that is
000000001 000000000 000000001 000000000000000000000
So in this example, when using the 2nd page to setup PUD (line 114~119),
rax is 511.
In line 118, we put rdx which is the address of the PMD page (the 3rd page)
into entry 511 of the PUD table. But in line 119, the entry we calculate from
(4096+8)(%rbx,%rax,8) has exceeded the PUD page. IMO, the entry in line
119 should wrap around into entry 0 of the PUD table.
The patch fixes the bug.
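A stand-alone illustration of the index arithmetic (user-space C, not kernel
code), using the addresses from the example above:

	#include <stdio.h>
	#include <stdint.h>

	#define PUD_SHIFT	30
	#define PTRS_PER_PUD	512

	int main(void)
	{
		uint64_t start = (511ULL << 30) + (1008ULL << 20);	/* 511G + 1008M */
		uint64_t end   = (512ULL << 30) + (2ULL << 20);		/* 512G + 2M   */
		unsigned idx   = (start >> PUD_SHIFT) & (PTRS_PER_PUD - 1);

		printf("start PUD index: %u\n", idx);				/* 511 */
		printf("next  PUD index: %u\n", (idx + 1) & (PTRS_PER_PUD - 1));/* 0, not 512 */
		printf("end   PUD index: %u\n",
		       (unsigned)((end >> PUD_SHIFT) & (PTRS_PER_PUD - 1)));	/* 0 */
		return 0;
	}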
Signed-off-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Link: http://lkml.kernel.org/r/5191DE5A.3020302@cn.fujitsu.com
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: <stable@vger.kernel.org> v3.9
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
|
|
Somebody noticed LTP was complaining about O_NONBLOCK opens of
/proc/net/rpc/use-gss-proxy succeeding and then a following read
hanging.
I'm not convinced LTP really has any business opening random proc files
and expecting them to behave a certain way. Maybe this isn't really a
bug.
But in any case the O_NONBLOCK behavior could be useful for someone that
wants to test whether gss-proxy is up without waiting.
Reported-by: Jan Stancek <jstancek@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
|
|
Pull drm fixes from Dave Airlie:
"This is mostly exynos and intel fixes, along with some vblank patches
I lost from Rob a few months ago that make wayland work better on lots
of GPUs, also a qxl kconfig fix."
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (22 commits)
qxl: fix Kconfig deps - select FB_DEFERRED_IO
drm/exynos: replace request_threaded_irq with devm function
drm/exynos: remove unnecessary devm_kfree
drm/exynos: fix build warnings from ipp fimc
drm/exynos: cleanup device pointer usages
drm/exynos: wait for the completion of pending page flip
drm/exynos: use drm_send_vblank_event() helper
drm/i915: avoid premature DP AUX timeouts
drm/i915: avoid premature timeouts in __wait_seqno()
drm/i915: use msecs_to_jiffies_timeout instead of open coding the same
drm/i915: add msecs_to_jiffies_timeout to guarantee minimum duration
drm/i915: force full modeset if the connector is in DPMS OFF mode
drm/exynos: page flip fixes
drm/exynos: exynos_hdmi: Pass correct pointer to free_irq()
drm/exynos: exynos_drm_ipp: Fix incorrect usage of IS_ERR_OR_NULL
drm/exynos: exynos_drm_fbdev: Fix incorrect usage of IS_ERR_OR_NULL
drm/imx: use drm_send_vblank_event() helper
drm/shmob: use drm_send_vblank_event() helper
drm/radeon: use drm_send_vblank_event() helper
drm/nouveau: use drm_send_vblank_event() helper
...
|
|
Pull crypto fixes from Herbert Xu:
"This push fixes a crash in the new sha256_ssse3 driver as well as a
DMA setup/teardown bug in caam"
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: sha256_ssse3 - fix stack corruption with SSSE3 and AVX implementations
crypto: caam - fix inconsistent assoc dma mapping direction
|
|
Pull CIFS fixes from Steve French:
"Fixes for a couple of DFS problems, a problem with extended security
negotiation and two other small cifs fixes"
* 'for-3.10' of git://git.samba.org/sfrench/cifs-2.6:
cifs: fix composing of mount options for DFS referrals
cifs: stop printing the unc= option in /proc/mounts
cifs: fix error handling when calling cifs_parse_devname
cifs: allow sec=none mounts to work against servers that don't support extended security
cifs: fix potential buffer overrun when composing a new options string
cifs: only set ops for inodes in I_NEW state
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fixes from Steven Rostedt:
"Two more fixes:
The first one was reported by Mauro Carvalho Chehab, where if a poll()
is done against a trace buffer for a CPU that has never been online,
it will crash the kernel, as buffers are only created when a CPU comes
on line, but the trace files are for all possible CPUs.
This fix is to check if the buffer was allocated and if not return
-EINVAL.
That was the simple fix, the real fix is a bit more complex and not
for a -rc release. We could have the files created when the CPUs come
online. That would require some design changes.
The second one was reported by Peter Zijlstra. If the kernel command
line has ftrace=nop, it will lock up the system on boot up. This is
because the new design for 3.10 has the nop tracer bootstrap the
tracing subsystem. When ftrace=<trace> is defined, then when that tracer
is registered, it starts the tracing, but uses the nop tracer to clear
things out. What happened here was that ftrace=nop caused the
registering of nop to start it and use nop before it was initialized.
The only thing nop needs to have done to initialize it is to have the
tracer point its current_tracer structure member to the nop tracer.
Doing that before registering the nop tracer makes everything work."
* tag 'trace-fixes-v3.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
ring-buffer: Do not poll non allocated cpu buffers
tracing: Fix crash when ftrace=nop on the kernel command line
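A sketch of the guard added for the first fix above (the real check sits in
ring_buffer_poll_wait(); the exact shape may differ):

	/* in ring_buffer_poll_wait(), before touching buffer->buffers[cpu] */
	if (cpu != RING_BUFFER_ALL_CPUS &&
	    !cpumask_test_cpu(cpu, buffer->cpumask))
		return -EINVAL;	/* per-cpu buffer was never allocated */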
|