|
After auditing all of the NFSv4 operations, the label structure has been added
to the prototypes of the functions that can transmit label data.
Signed-off-by: Matthew N. Dodd <Matthew.Dodd@sparta.com>
Signed-off-by: Miguel Rodel Felipe <Rodel_FM@dsi.a-star.edu.sg>
Signed-off-by: Phua Eu Gene <PHUA_Eu_Gene@dsi.a-star.edu.sg>
Signed-off-by: Khin Mi Mi Aung <Mi_Mi_AUNG@dsi.a-star.edu.sg>
Signed-off-by: Steve Dickson <steved@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
The fattr handling bitmap code only uses the first two fattr words so far. This
patch adds the third word to what is sent, but does not populate it yet.
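In other words, the XDR-encoded attribute bitmap grows from two to three 32-bit
words, roughly like this sketch (bm0/bm1 stand in for the existing bitmap words;
this is not the actual xdr code):
	uint32_t bitmap[3];
	bitmap[0] = bm0;	/* existing FATTR4_WORD0_* bits */
	bitmap[1] = bm1;	/* existing FATTR4_WORD1_* bits */
	bitmap[2] = 0;		/* new third word, not yet populated */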
Signed-off-by: Miguel Rodel Felipe <Rodel_FM@dsi.a-star.edu.sg>
Signed-off-by: Phua Eu Gene <PHUA_Eu_Gene@dsi.a-star.edu.sg>
Signed-off-by: Khin Mi Mi Aung <Mi_Mi_AUNG@dsi.a-star.edu.sg>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
In order to mimic the way that NFSv4 ACLs are implemented we have created a
structure to be used to pass label data up and down the call chain. This patch
adds the new structure and new members to the required NFSv4 call structures.
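A minimal sketch of the kind of structure this refers to (field names are
illustrative, not necessarily the exact upstream definition):
	struct nfs4_label {
		uint32_t	lfs;	/* label format specifier */
		uint32_t	pi;	/* policy identifier */
		uint32_t	len;	/* length of the label data in bytes */
		char		*label;	/* security label, e.g. an SELinux context */
	};
A pointer to such a structure is then carried in the NFSv4 argument and result
structures alongside the existing fattr and ACL members.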
Signed-off-by: Matthew N. Dodd <Matthew.Dodd@sparta.com>
Signed-off-by: Miguel Rodel Felipe <Rodel_FM@dsi.a-star.edu.sg>
Signed-off-by: Phua Eu Gene <PHUA_Eu_Gene@dsi.a-star.edu.sg>
Signed-off-by: Khin Mi Mi Aung <Mi_Mi_AUNG@dsi.a-star.edu.sg>
Signed-off-by: Steve Dickson <steved@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
This enables NFSv4.2 support. To enable this code, the
CONFIG_NFS_V4_2 Kconfig option needs to be set and
the -o v4.2 mount option needs to be used.
Signed-off-by: Steve Dickson <steved@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
There is no way to differentiate whether a text mount option is passed from user
space or from the kernel. A flags field is being added to the
security_sb_set_mnt_opts hook to allow in-kernel security flags to be sent
to the LSM for processing, in addition to the text options received from mount.
This patch also updates existing code to fix compilation errors.
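Roughly, the hook gains a flags path alongside the parsed text options; the
prototype below is only a sketch of that shape (parameter names are assumptions,
not the verbatim upstream signature):
	int security_sb_set_mnt_opts(struct super_block *sb,
				     struct security_mnt_opts *opts,
				     unsigned long kern_flags,
				     unsigned long *set_kern_flags);
In-kernel callers (such as an NFS mount carrying label information) pass their
flags through kern_flags, while userspace mounts continue to supply only the
text options.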
Acked-by: Eric Paris <eparis@redhat.com>
Acked-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: David P. Quigley <dpquigl@tycho.nsa.gov>
Signed-off-by: Miguel Rodel Felipe <Rodel_FM@dsi.a-star.edu.sg>
Signed-off-by: Phua Eu Gene <PHUA_Eu_Gene@dsi.a-star.edu.sg>
Signed-off-by: Khin Mi Mi Aung <Mi_Mi_AUNG@dsi.a-star.edu.sg>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Dave reported a panic because the extent_root->commit_root was NULL in the
caching kthread. That is because we just unset it in free_root_pointers, which
is not the correct thing to do; we have to either wait for the caching kthread
to complete or hold the extent_commit_sem lock so we know the thread has exited.
This patch makes the kthreads all stop first and then does the cleanup. This
should fix the race. Thanks,
Reported-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
|
|
Commit be283b2e674a09457d4563729015adb637ce7cc1
(Btrfs: use helper to cleanup tree roots) introduced the following bug,
BUG: unable to handle kernel NULL pointer dereference at 0000000000000034
IP: [<ffffffffa039368c>] extent_buffer_get+0x4/0xa [btrfs]
[...]
Pid: 2463, comm: btrfs-cache-1 Tainted: G O 3.9.0+ #4 innotek GmbH VirtualBox/VirtualBox
RIP: 0010:[<ffffffffa039368c>] [<ffffffffa039368c>] extent_buffer_get+0x4/0xa [btrfs]
Process btrfs-cache-1 (pid: 2463, threadinfo ffff880112d60000, task ffff880117679730)
[...]
Call Trace:
[<ffffffffa0398a99>] btrfs_search_slot+0x104/0x64d [btrfs]
[<ffffffffa039aea4>] btrfs_next_old_leaf+0xa7/0x334 [btrfs]
[<ffffffffa039b141>] btrfs_next_leaf+0x10/0x12 [btrfs]
[<ffffffffa039ea13>] caching_thread+0x1a3/0x2e0 [btrfs]
[<ffffffffa03d8811>] worker_loop+0x14b/0x48e [btrfs]
[<ffffffffa03d86c6>] ? btrfs_queue_worker+0x25c/0x25c [btrfs]
[<ffffffff81068d3d>] kthread+0x8d/0x95
[<ffffffff81068cb0>] ? kthread_freezable_should_stop+0x43/0x43
[<ffffffff8151e5ac>] ret_from_fork+0x7c/0xb0
[<ffffffff81068cb0>] ? kthread_freezable_should_stop+0x43/0x43
RIP [<ffffffffa039368c>] extent_buffer_get+0x4/0xa [btrfs]
We've freed commit_root before actually getting to freeing the block groups,
where the caching thread needs a valid extent_root->commit_root.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
|
|
Dave reported a NULL pointer deref. This is caused because he thought he'd be
smart and add sanity checks to the extent_io bit operations, but he didn't
expect a tree to have a NULL mapping. To fix this we just need to init the
relocation's processed_blocks with the btree_inode->i_mapping. Thanks,
Reported-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
|
|
There is a path where btrfs_drop_inode() is called while its inode's root
is NULL: in btrfs_new_inode(), when btrfs_set_inode_index() fails,
iput() is called. We should handle this case before taking a look at
root->root_item.
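The guard amounts to something like the following sketch (not the complete
function):
	struct btrfs_root *root = BTRFS_I(inode)->root;
	/* inodes dropped on the btrfs_new_inode() error path have no root yet */
	if (!root)
		return 1;
	/* existing checks on root->root_item follow */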
Signed-off-by: Naohiro Aota <naota@elisp.net>
Reviewed-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
|
|
We get a use-after-free if we had a transaction to clean up, since there could be
delayed inodes which refer to their respective fs_root. Thanks,
Reported-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
|
|
The 'dest' abbreviation is only used in crypt_scatterlist(), while all
other functions in crypto.c use 'dst' so dest_sg should be renamed to
dst_sg.
The crypt_stat parameter is typically the first parameter in internal
eCryptfs functions so crypt_stat and dst_page should be swapped in
crypt_extent().
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
crypt_page_offset() simply initializes the two scatterlists and calls
crypt_scatterlist(), so it is simple enough to move into the only
function that calls it.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
They are identical except for whether the src_page or dst_page index is used, so
they can be safely merged if page_index is conditionally assigned.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
Combine ecryptfs_encrypt_page_offset() and
ecryptfs_decrypt_page_offset(). These two functions are functionally
identical so they can be safely merged if the caller can indicate
whether an encryption or decryption operation should occur.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
These two functions are identical except for a debug printk and whether
they call crypto_ablkcipher_encrypt() or crypto_ablkcipher_decrypt(), so
they can be safely merged if the caller can indicate if encryption or
decryption should occur.
The debug printk is useless so it is removed.
Two new #define's are created to indicate if an ENCRYPT or DECRYPT
operation is desired.
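The direction selector boils down to a pair of constants plus an extra parameter
on the merged helper, along these lines (a sketch; the exact names in crypto.c
may differ):
	#define ENCRYPT	0
	#define DECRYPT	1
	/* one combined helper; 'op' picks crypto_ablkcipher_encrypt()
	 * or crypto_ablkcipher_decrypt() internally */
	static int crypt_scatterlist(struct ecryptfs_crypt_stat *crypt_stat,
				     struct scatterlist *dst_sg,
				     struct scatterlist *src_sg, int size,
				     unsigned char *iv, int op);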
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
When reading in a page, eCryptfs would allocate a helper page, fill it
with encrypted data from the lower filesystem, and then decrypt the data
from the encrypted page and store the result in the eCryptfs page cache
page.
The crypto API supports in-place crypto operations which means that the
allocation of the helper page is unnecessary when decrypting. This patch
gets rid of the unneeded page allocation by reading encrypted data from
the lower filesystem directly into the page cache page. The page cache
page is then decrypted in-place.
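In-place operation here just means pointing the source and destination
scatterlists at the same page cache page; conceptually (a sketch, with
extent_size/extent_offset standing in for the real locals):
	struct scatterlist sg;
	/* decrypt the extent in place: the encrypted bytes read from the
	 * lower file are overwritten with the decrypted result */
	sg_init_table(&sg, 1);
	sg_set_page(&sg, page, extent_size, extent_offset * extent_size);
	rc = crypt_scatterlist(crypt_stat, &sg, &sg, extent_size, iv, DECRYPT);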
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
There is no longer a need to accept different offset values for the
source and destination pages when encrypting/decrypting an extent in an
eCryptfs page. The two offsets can be collapsed into a single parameter.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
Now that lower filesystem IO operations occur for complete
PAGE_CACHE_SIZE bytes, the calculation for converting an eCryptfs extent
index into a lower file offset can be simplified.
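With whole-page IO the mapping is just the metadata header size plus a
page-sized stride, roughly (names are illustrative):
	/* offset of this eCryptfs page's data in the lower (encrypted) file */
	loff_t lower_offset = ecryptfs_lower_header_size(crypt_stat) +
			      ((loff_t)page->index << PAGE_CACHE_SHIFT);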
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
When reading and writing encrypted pages, perform IO using the entire
page all at once rather than 4096 bytes at a time.
This only affects architectures where PAGE_CACHE_SIZE is larger than
4096 bytes.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
When encrypting eCryptfs pages and decrypting pages from the lower
filesystem, utilize the entire helper page rather than only the first
4096 bytes.
This only affects architectures where PAGE_CACHE_SIZE is larger than
4096 bytes.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tyhicks/ecryptfs
Pull ecryptfs fixes from Tyler Hicks:
- Fixes how eCryptfs handles msync to sync both the upper and lower
file
- A couple of MAINTAINERS updates
* tag 'ecryptfs-3.10-rc5-msync' of git://git.kernel.org/pub/scm/linux/kernel/git/tyhicks/ecryptfs:
eCryptfs: Check return of filemap_write_and_wait during fsync
Update eCryptFS maintainers
ecryptfs: fixed msync to flush data
|
|
If sysfs_notify is called on a binary attribute, bad things can
happen, so prevent it.
Note, no in-kernel usage of this is currently present, but in the
future, it's good to be safe.
Changes in V2:
- Also ignore sysfs_notify on dirs, links
- Use WARN_ON rather than silently failing
- Compiled and tested (huge apologies about first submission)
Signed-off-by: Nick Dyer <nick.dyer@itdev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Pull CIFS fix from Steve French:
"Fix one byte buffer overrun with prefixpaths on cifs mounts which can
cause a problem with mount depending on the string length"
* 'for-3.10' of git://git.samba.org/sfrench/cifs-2.6:
cifs: fix off-by-one bug in build_unc_path_to_root
|
|
Commit 1d2ef5901483004d74947bbf78d5146c24038fe7 caused a regression in ncpfs such
that directories could no longer be removed. This was because ncp_rmdir checked
to see if a dentry could be unhashed before allowing it to be removed. Since
1d2ef5901483004d74947bbf78d5146c24038fe7 introduced a change that incremented
dentry->d_count, causing it to always be greater than 1, unhashing would always
fail, and thus the error path in ncp_rmdir was always taken. Removing
this error path is safe, as unhashing is still accomplished by calls to dput
from vfs_rmdir.
Signed-off-by: Dave Chiluk <chiluk@canonical.com>
Signed-off-by: Petr Vandrovec <petr@vandrovec.name>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
It is possible that iput is skipped after iget during recovery.
In recover_dentry(),
	dir = f2fs_iget();
	...
	if (de && inode->i_ino == le32_to_cpu(de->ino))
		goto out;
In this case, this dir is not able to be added to the dirty_dir_inode_list.
The actual linking is done only when set_page_dirty() is called.
So let's add this newly obtained inode to the list explicitly, and put it at the
end of the recovery routine.
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
|
|
Pull more xfs updates from Ben Myers:
"Here are several fixes for filesystems with CRC support turned on:
fixes for quota, remote attributes, and recovery. There is also some
feature work related to CRCs: the implementation of CRCs for the inode
unlinked lists, disabling noattr2/attr2 options when appropriate, and
bumping the maximum number of ACLs.
I would have preferred to defer this last category of items to 3.11.
This would require setting a feature bit for the on-disk changes, so
there is some pressure to get these in 3.10. I believe this
represents the end of the CRC related queue.
- Rework of dquot CRCs
- Fix for remote attribute invalidation of a leaf
- Fix ordering of transaction replay in recovery
- Implement CRCs for inode unlinked list
- Disable noattr2/attr2 mount options when CRCs are enabled
- Bump the limitation of ACL entries for v5 superblocks"
* tag 'for-linus-v3.10-rc5' of git://oss.sgi.com/xfs/xfs:
xfs: increase number of ACL entries for V5 superblocks
xfs: disable noattr2/attr2 mount options for CRC enabled filesystems
xfs: inode unlinked list needs to recalculate the inode CRC
xfs: fix log recovery transaction item reordering
xfs: fix remote attribute invalidation for a leaf
xfs: rework dquot CRCs
|
|
State recovery currently relies on being able to find a valid
nfs_open_context in the inode->open_files list.
We therefore need to put the nfs_open_context on the list while
we're still protected by the sp->so_reclaim_seqcount in order
to avoid reboot races.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Instead of having the callers set ctx->state, do it inside
_nfs4_open_and_get_state.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
All the callers have an open_context at this point, and since we always
need one in order to do state recovery, it makes sense to use it as the
basis for the nfs4_do_open() call.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
We already check the EXEC access mode in the lower layers.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
ctx->cred == ctx->state->owner->so_cred, so let's just use the former.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Use the EXCHGID4_FLAG_BIND_PRINC_STATEID exchange_id flag to enable
stateid protection. This means that if we create a stateid using a
particular principal, then we must use the same principal if we
want to change that state.
IOW: if we OPEN a file using a particular credential, then we have
to use the same credential in subsequent OPEN_DOWNGRADE, CLOSE,
or DELEGRETURN operations that use that stateid.
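On the wire this is just an extra flag in the EXCHANGE_ID arguments,
schematically (a sketch, not the exact client code):
	/* ask the server to bind stateids to the principal that created them,
	 * so later CLOSE/OPEN_DOWNGRADE/DELEGRETURN must use the same credential */
	args->flags |= EXCHGID4_FLAG_BIND_PRINC_STATEID;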
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
This is not strictly needed, since get_deviceinfo is not allowed to
return NFS4ERR_ACCESS or NFS4ERR_WRONG_CRED, but let's do it anyway
for consistency with other pNFS operations.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
We want to use the same credential for reclaim_complete as we used
for the exchange_id call.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
We need to use the same credential as was used for the layoutget
and/or layoutcommit operations.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Ensure that we use the same credential for layoutget, layoutcommit and
layoutreturn.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
|
|
Rename ext4_da_writepages() to ext4_writepages() and use it for all
modes. We still need to iterate over all the pages in the case of
data=journalling, but in the case of nodelalloc/data=ordered (which is
what file systems mounted using ext3 backwards compatibility will use)
this will allow us to use a much more efficient I/O submission path.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
The limit of 25 ACL entries is arbitrary, but baked into the on-disk
format. For version 5 superblocks, increase it to the maximum number
of ACLs that can fit into a single xattr.
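The new limit becomes a function of the xattr size limit on v5 superblocks,
along the lines of this sketch (an approximation of the macro, not the verbatim
change):
	#define XFS_ACL_MAX_ENTRIES(mp)						\
		(xfs_sb_version_hascrc(&(mp)->m_sb)				\
			? (XATTR_SIZE_MAX - sizeof(struct xfs_acl)) /		\
			  sizeof(struct xfs_acl_entry)				\
			: 25)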
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Mark Tinguely <tinuguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 5c87d4bc1a86bd6e6754ac3d6e111d776ddcfe57)
|
|
attr2 format is always enabled for v5 superblock filesystems, so the
mount options that enable or disable it need to cause mount errors.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit d3eaace84e40bf946129e516dcbd617173c1cf14)
|
|
The inode unlinked list manipulations operate directly on the inode
buffer, and so bypass the inode CRC calculation mechanisms. Hence an
inode on the unlinked list has an invalid CRC. Fix this by
recalculating the CRC whenever we modify an unlinked list pointer in
an inode, including during log recovery. This is trivial to do and
results in unlinked list operations always leaving a consistent
inode in the buffer.
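In practice this means resealing the inode right where the unlinked pointer is
updated in the buffer, e.g. (a sketch assuming the existing xfs_dinode_calc_crc()
helper; the real call sites differ in detail):
	dip->di_next_unlinked = cpu_to_be32(next_agino);
	/* the pointer was changed directly in the inode buffer, so the
	 * inode CRC must be recalculated before the buffer is logged */
	xfs_dinode_calc_crc(mp, dip);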
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 0a32c26e720a8b38971d0685976f4a7d63f9e2ef)
|
|
There are several constraints that inode allocation and unlink
logging impose on log recovery. These all stem from the fact that
inode alloc/unlink are logged in buffers, but all other inode
changes are logged in inode items. Hence there are ordering
constraints that recovery must follow to ensure the correct result
occurs.
As it turns out, this ordering has been working more by chance
than by good management. The existing code moves all buffers except
cancelled buffers to the head of the list, and everything else to
the tail of the list. The problem with this is that it interleaves
inode items with the buffer cancellation items, and hence whether
the inode item in a cancelled buffer gets replayed is essentially
left to chance.
Further, this ordering causes problems for log recovery when inode
CRCs are enabled. It typically replays the inode unlink buffer long before
it replays the inode core changes, and so the CRC recorded in an
unlink buffer is going to be invalid and hence any attempt to
validate the inode in the buffer is going to fail. Hence we really
need to enforce the ordering that the inode alloc/unlink code has
expected log recovery to have since inode chunk de-allocation was
introduced back in 2003...
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit a775ad778073d55744ed6709ccede36310638911)
|
|
When invalidating an attribute leaf block, there might be
remote attributes that it points to. With the recent rework of the
remote attribute format, we have to make sure we calculate the
length of the attribute correctly. We aren't doing that in
xfs_attr3_leaf_inactive(), so fix it.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Mark Tinguely <tinuguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 59913f14dfe8eb772ff93eb442947451b4416329)
|
|
Calculating dquot CRCs when the backing buffer is written back just
doesn't work reliably. There are several places which manipulate
dquots directly in the buffers, and they don't calculate CRCs
appropriately, nor do they always set the buffer up to calculate
CRCs appropriately.
Firstly, if we log a dquot buffer (e.g. during allocation) it gets
logged without valid CRC, and so on recovery we end up with a dquot
that is not valid.
Secondly, if we recover/repair a dquot, we don't have a verifier
attached to the buffer and hence CRCs are not calculated on the way
down to disk.
Thirdly, calculating the CRC after we've changed the contents means
that if we re-read the dquot from the buffer, we cannot verify the
contents of the dquot are valid, as the CRC is invalid.
So, to avoid all the dquot CRC errors that are being detected by the
read verifier, change to using the same model as for inodes. That
is, dquot CRCs are calculated and written to the backing buffer at
the time the dquot is flushed to the backing buffer. If we modify
the dquot directly in the backing buffer, calculate the CRC
immediately after the modification is complete. Hence the dquot in
the on-disk buffer should always have a valid CRC.
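Concretely, every path that modifies a dquot in its buffer now ends by resealing
the CRC, e.g. (helper, offset and local names are assumptions based on the
description above):
	if (xfs_sb_version_hascrc(&mp->m_sb)) {
		struct xfs_dqblk *dqb = (struct xfs_dqblk *)ddqp;
		/* seal the dquot immediately after modifying it in the
		 * backing buffer, not at buffer writeback time */
		xfs_update_cksum((char *)dqb, sizeof(struct xfs_dqblk),
				 XFS_DQUOT_CRC_OFF);
	}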
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
(cherry picked from commit 6fcdc59de28817d1fbf1bd58cc01f4f3fac858fb)
|
|
The test_root() function could potentially loop forever due to
overflow issues. So rewrite test_root() to avoid this issue; as a
bonus, it is 38% faster when benchmarked via a test loop:
	int main(int argc, char **argv)
	{
		int i;
		for (i = 0; i < 1 << 24; i++) {
			if (test_root(i, 7))
				printf("%d\n", i);
		}
	}
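One possible overflow-safe formulation of test_root(), consistent with the
description above (a sketch; the committed version may differ in detail):
	static int test_root(unsigned int a, int b)
	{
		/* divide out b repeatedly: 'a' is a power of b iff the
		 * chain of exact divisions lands precisely on b.  No
		 * multiplication is involved, so nothing can overflow
		 * and the loop always terminates. */
		while (1) {
			if (a < b)
				return 0;
			if (a == b)
				return 1;
			if ((a % b) != 0)
				return 0;
			a = a / b;
		}
	}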
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
The group number passed to ext4_get_group_info() should be valid, but
let's add an assert to check this before we start creating a pointer
based on that group number and dereferencing it.
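The check is essentially a one-line assertion before the group descriptor
pointer is built (a sketch; the exact bound used may differ):
	/* catch callers passing a bogus group number before we start
	 * dereferencing anything derived from it */
	BUG_ON(group >= EXT4_SB(sb)->s_groups_count);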
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
Check the group number for sanity earlier, before calling routines
such as ext4_bg_has_super() or ext4_group_overhead_blocks().
Reported-by: Jonathan Salwan <jonathan.salwan@gmail.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
The bio_alloc() function can return NULL if the memory allocation
fails. So we need to check for this.
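The fix pattern is the usual allocation check, e.g. (a sketch; the GFP flags and
vector count are illustrative):
	bio = bio_alloc(GFP_NOIO, nvecs);
	if (!bio)
		return -ENOMEM;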
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|