|
Constify struct sysfs_ops.
This is part of the ops structure constification
effort started by Arjan van de Ven et al.
Benefits of this constification:
* prevents modification of data that is shared
(referenced) by many other structure instances
at runtime
* detects/prevents accidental (but not intentional)
modification attempts on archs that enforce
read-only kernel data at runtime
* potentially better optimized code as the compiler
can assume that the const data cannot be changed
* the compiler/linker moves const data into .rodata
and therefore excludes it from false sharing
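For illustration, a minimal sketch of what a constified ops table looks
like to a driver after this change (the foo_* names are hypothetical):

    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    /* Hypothetical attribute handlers, just to give the table something
     * to point at.
     */
    static ssize_t foo_attr_show(struct kobject *kobj, struct attribute *attr,
                                 char *buf)
    {
            return 0;
    }

    static ssize_t foo_attr_store(struct kobject *kobj, struct attribute *attr,
                                  const char *buf, size_t count)
    {
            return count;
    }

    /* With this patch the ops table itself can live in .rodata. */
    static const struct sysfs_ops foo_sysfs_ops = {
            .show  = foo_attr_show,
            .store = foo_attr_store,
    };

    static struct kobj_type foo_ktype = {
            .sysfs_ops = &foo_sysfs_ops,  /* kobj_type now holds a const pointer */
    };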
Signed-off-by: Emese Revfy <re.emese@gmail.com>
Acked-by: David Teigland <teigland@redhat.com>
Acked-by: Matt Domsch <Matt_Domsch@dell.com>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Acked-by: Hans J. Koch <hjk@linutronix.de>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Jens Axboe <jens.axboe@oracle.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Rename for_each_bit to for_each_set_bit in the kernel source tree, to
permit for_each_clear_bit() should that ever be added.
The patch includes a macro to map the old for_each_bit() onto the new
for_each_set_bit(). This is a (very) temporary thing to ease the migration.
[akpm@linux-foundation.org: add temporary for_each_bit()]
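For illustration, a minimal sketch of the renamed iterator (the
count_enabled() helper is hypothetical):

    #include <linux/bitops.h>
    #include <linux/kernel.h>

    static void count_enabled(const unsigned long *mask, unsigned int nbits)
    {
            unsigned int bit;

            /* Was: for_each_bit(bit, mask, nbits) -- still accepted for
             * now via the temporary compatibility macro mentioned above.
             */
            for_each_set_bit(bit, mask, nbits)
                    pr_info("bit %u is set\n", bit);
    }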
Suggested-by: Alexey Dobriyan <adobriyan@gmail.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Artem Bityutskiy <dedekind@infradead.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs-2.6
* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs-2.6: (33 commits)
quota: stop using QUOTA_OK / NO_QUOTA
dquot: cleanup dquot initialize routine
dquot: move dquot initialization responsibility into the filesystem
dquot: cleanup dquot drop routine
dquot: move dquot drop responsibility into the filesystem
dquot: cleanup dquot transfer routine
dquot: move dquot transfer responsibility into the filesystem
dquot: cleanup inode allocation / freeing routines
dquot: cleanup space allocation / freeing routines
ext3: add writepage sanity checks
ext3: Truncate allocated blocks if direct IO write fails to update i_size
quota: Properly invalidate caches even for filesystems with blocksize < pagesize
quota: generalize quota transfer interface
quota: sb_quota state flags cleanup
jbd: Delay discarding buffers in journal_unmap_buffer
ext3: quota_write cross block boundary behaviour
quota: drop permission checks from xfs_fs_set_xstate/xfs_fs_set_xquota
quota: split out compat_sys_quotactl support from quota.c
quota: split out netlink notification support from quota.c
quota: remove invalid optimization from quota_sync_all
...
Fixed trivial conflicts in fs/namei.c and fs/ufs/inode.c
|
|
Get rid of the initialize dquot operation - it is now always called from
the filesystem and if a filesystem really needs its own (which none
currently does) it can just call into its own routine directly.
Rename the now static low-level dquot_initialize helper to __dquot_initialize
and vfs_dq_init to dquot_initialize to have a consistent namespace.
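A rough sketch of how a filesystem now calls the renamed helper
(foo_unlink() is hypothetical):

    #include <linux/fs.h>
    #include <linux/quotaops.h>

    static int foo_unlink(struct inode *dir, struct dentry *dentry)
    {
            struct inode *inode = dentry->d_inode;

            /* Formerly spelled vfs_dq_init(); the filesystem calls the
             * renamed helper itself before touching quota-relevant metadata.
             */
            dquot_initialize(dir);
            dquot_initialize(inode);

            /* ... actual directory entry removal elided ... */
            return 0;
    }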
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Currently various places in the VFS call vfs_dq_init directly. This means
we tie the quota code into the VFS. Get rid of that and make the
filesystem responsible for the initialization. For most metadata operations
this is a straight forward move into the methods, but for truncate and
open it's a bit more complicated.
For truncate we currently only call vfs_dq_init for the sys_truncate case
because open already takes care of it for ftruncate and open(O_TRUNC) - the
new code causes an additional vfs_dq_init for those which is harmless.
For open the initialization is moved from do_filp_open into the open method,
which means it happens slightly earlier now, and only for regular files.
The latter is fine because we don't need to initialize it for operations
on special files, and we already do it as part of the namespace operations
for directories.
Add a dquot_file_open helper that filesystems that support generic quotas
can use to fill in ->open.
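A rough sketch of a filesystem wiring up the new helper (the foo_* name
is hypothetical; the ->open hook is the point):

    #include <linux/fs.h>
    #include <linux/quotaops.h>

    static const struct file_operations foo_file_operations = {
            .llseek = generic_file_llseek,
            .open   = dquot_file_open,  /* initializes dquots when a regular
                                         * file is opened for write */
    };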
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Get rid of the drop dquot operation - it is now always called from
the filesystem and if a filesystem really needs its own (which none
currently does) it can just call into its own routine directly.
Rename the now static low-level dquot_drop helper to __dquot_drop
and vfs_dq_drop to dquot_drop to have a consistent namespace.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Currently clear_inode calls vfs_dq_drop directly. This means
we tie the quota code into the VFS. Get rid of that and make the
filesystem responsible for the drop inside the ->clear_inode
superblock operation.
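A rough sketch of the resulting ->clear_inode (foo_clear_inode() is
hypothetical):

    #include <linux/fs.h>
    #include <linux/quotaops.h>

    static void foo_clear_inode(struct inode *inode)
    {
            /* The filesystem, not the VFS, now drops the inode's dquot
             * references.
             */
            dquot_drop(inode);

            /* ... filesystem-specific teardown elided ... */
    }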
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Get rid of the transfer dquot operation - it is now always called from
the filesystem and if a filesystem really needs its own (which none
currently does) it can just call into its own routine directly.
Rename the now static low-level dquot_transfer helper to __dquot_transfer
and vfs_dq_transfer to dquot_transfer to have a consistent namespace,
and make the new dquot_transfer return a normal negative errno value
which all callers expect.
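A rough sketch of a ->setattr path using the renamed helper
(foo_setattr() is hypothetical; the signatures shown are the ones from
this era of the kernel):

    #include <linux/fs.h>
    #include <linux/quotaops.h>

    static int foo_setattr(struct dentry *dentry, struct iattr *attr)
    {
            struct inode *inode = dentry->d_inode;
            int error;

            if ((attr->ia_valid & ATTR_UID) || (attr->ia_valid & ATTR_GID)) {
                    error = dquot_transfer(inode, attr);  /* 0 or -errno */
                    if (error)
                            return error;
            }

            /* ... remaining attribute updates elided ... */
            return 0;
    }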
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Get rid of the alloc_inode and free_inode dquot operations - they are
always called from the filesystem and if a filesystem really needs
its own (which none currently does) it can just call into its
own routine directly.
Also get rid of the vfs_dq_alloc/vfs_dq_free wrappers and always
call the lowlevel dquot_alloc_inode / dquot_free_inode routines
directly, which now lose the number argument which is always 1.
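A rough sketch of the direct calls (foo_charge_new_inode() is
hypothetical):

    #include <linux/fs.h>
    #include <linux/quotaops.h>

    static int foo_charge_new_inode(struct inode *inode)
    {
            int err;

            err = dquot_alloc_inode(inode);  /* was vfs_dq_alloc_inode();
                                              * no count argument anymore */
            if (err)
                    return err;

            /* If a later step fails, the charge is undone with:
             * dquot_free_inode(inode);
             */
            return 0;
    }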
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Get rid of the alloc_space, free_space, reserve_space, claim_space and
release_rsv dquot operations - they are always called from the filesystem
and if a filesystem really needs its own (which none currently does)
it can just call into its own routine directly.
Move shared logic into the common __dquot_alloc_space,
dquot_claim_space_nodirty and __dquot_free_space low-level methods,
and rationalize the wrappers around them to move as much code as
possible into the common block for CONFIG_QUOTA vs. not. Also rename
all these helpers to be named dquot_* instead of vfs_dq_*.
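A rough sketch of a block allocation path charging quota through the
renamed wrappers (foo_alloc_blocks() is hypothetical; with CONFIG_QUOTA
disabled the same calls compile to stubs that simply succeed):

    #include <linux/fs.h>
    #include <linux/quotaops.h>

    static int foo_alloc_blocks(struct inode *inode, unsigned int count)
    {
            int err;

            err = dquot_alloc_block(inode, count);  /* was vfs_dq_alloc_block() */
            if (err)
                    return err;

            /* ... do the on-disk allocation; on failure give the charge
             * back with dquot_free_block(inode, count);
             */
            return 0;
    }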
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
|
|
Currently we have been adding ioctl cmds/structures for ocfs2 to ocfs2_fs.h,
which is used to define the ocfs2 on-disk layout. That is a little
confusing, and the header may quickly become polluted, especially as the
ocfs2_info_request ioctls grow later (and they will grow).
As a result, such OCFS2 IOCs need to live somewhere other than
ocfs2_fs.h. A separate ocfs2_ioctl.h is added to hold these ioctl
structures and definitions; it can also be used from userspace to
invoke the ioctls.
Signed-off-by: Tristan Ye <tristan.ye@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
This patch makes ocfs2 send SIGXFSZ if the new file size exceeds the rlimit.
A process may get SIGXFSZ on one node of the cluster while a process on
another node does not, if the file size limits differ between the two nodes.
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Make use of the newly added BASTS masklog to trace ASTs and BASTs in userdlm.
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
This patch adds a new masklog and uses it allow tracing ASTs and BASTs
in the dlmglue layer. This has been found to be very useful in debugging
cluster locking issues.
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
If a node down event happens while a dlm shutdown is in progress, dlm recovery
should be done before the dlm is shut down. We can't migrate unrecovered locks,
obviously. But dlm_reco_thread only does recovery if the dlm_state is
in DLM_CTXT_JOINED.
dlm_reco_thread should do recovery if dlm_state is in DLM_CTXT_JOINED or
DLM_CTXT_IN_SHUTDOWN.
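Illustrative only, the shape of the widened check (the helper is
hypothetical; the enum values are the real dlm_ctxt_state ones):

    /* Requires the ocfs2-internal dlmcommon.h for struct dlm_ctxt. */
    static inline int dlm_should_run_recovery(struct dlm_ctxt *dlm)
    {
            return dlm->dlm_state == DLM_CTXT_JOINED ||
                   dlm->dlm_state == DLM_CTXT_IN_SHUTDOWN;
    }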
Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
In ocfs2_direct_IO_get_blocks, we only need to bug out
in case we are going to write to a refcounted extent record.
What a silly bug I introduced!
Signed-off-by: Tao Ma <tao.ma@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
Cc: stable@kernel.org
|
|
This patch fixes a compile warning in ocfs2_file_aio_write().
Signed-off-by: Coly Li <coly.li@suse.de>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Unlike ocfs2, dlmfs has no permanent storage. It can't store the
cluster stack it is supposed to be using, so it can't specify the stack
name in ocfs2_cluster_connect().
Instead, we create ocfs2_cluster_connect_agnostic(), which simply uses
the stack that is currently enabled. This is fine for dlmfs, which will
rely on the stack initialization.
We add the "stackglue" capability to dlmfs's capability list. This lets
userspace know dlmfs can be used with all cluster stacks.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Rather than directly using o2dlm, dlmfs can now use the stackglue. This
allows it to use userspace cluster stacks and fs/dlm. This commit
forces o2cb for now. A later commit will bump the protocol version and
allow non-o2cb stacks.
This is one big sed, really. LKM_xxMODE becomes DLM_LOCK_xx. LKM_flag
becomes DLM_LKF_flag.
We also learn to check that the LVB is valid before reading it. Any DLM
can lose the contents of the LVB during a complicated recovery. userdlm
should be checking this. Now it does. dlmfs will return 0 from read(2)
if the LVB was invalid.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
We want folks using dlmfs to be able to use the LVB in places other than
just write(2)/read(2). By ignoring truncate requests, we allow 'echo
"contents" > /dlm/space/lockname' to work.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Inside the stackglue, the locking protocol structure is hanging off of
the ocfs2_cluster_connection. This takes it one step further: the locking
protocol is passed into ocfs2_cluster_connect(). Now different cluster
connections can have different locking protocols with distinct asts.
Note that all locking protocols have to keep their maximum protocol
version in lock-step.
With the protocol structure set in ocfs2_cluster_connect(), there is no
need for the stackglue to have a static pointer to a specific protocol
structure. We can change initialization to only pass in the maximum
protocol version.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
With the full ocfs2_locking_protocol hanging off of the
ocfs2_cluster_connection, ast wrappers can get the ast/bast pointers
there. They don't need to get them from their plugin structure.
The user plugin still needs the maximum locking protocol version,
though. This changes the plugin structure so that it only holds the max
version, not the entire ocfs2_locking_protocol pointer.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
With the ocfs2_cluster_connection hanging off of the ocfs2_dlm_lksb, we
have access to it in the ast and bast wrapper functions. Attach the
ocfs2_locking_protocol to the conn.
Now, instead of referring to a static variable for ast/bast pointers, the
wrappers can look at the connection. This means different connections
can have different ast/bast pointers, and it reduces the need for the
static pointer.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
We're going to want it in the ast functions, so we convert union
ocfs2_dlm_lksb to struct ocfs2_dlm_lksb and let it carry the connection.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
The stackglue ast and bast functions tried to maintain the fiction that
their arguments were void pointers. In reality, stack_user.c had to
know that the argument was an ocfs2_lock_res in order to get the status
off of the lksb. That's ugly.
This changes stackglue to always pass the lksb as the argument to ast
and bast functions. The caller can always use container_of() to get the
ocfs2_lock_res or user_dlm_lock_res. The net effect to the caller is
zero. They still get back the lockres in their ast. stackglue gets
cleaner, and now can use the lksb itself.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
We're going to remove the tie between ocfs2_dlmfs and o2dlm.
ocfs2_dlmfs doesn't belong in the fs/ocfs2/dlm directory anymore. Here
we move it to fs/ocfs2/dlmfs.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
o2dlm's userspace filesystem is an easy way to use the DLM from
userspace. It is intentionally simple. For example, it does not allow
for asynchronous behavior or lock conversion. This is intentional to
keep the interface simple.
Because there is no asynchronous notification, there is no way for a
process holding a lock to know another node needs the lock. This is the
number one complaint of ocfs2_dlmfs users. Turns out, we can solve this
very easily. We add poll() support to ocfs2_dlmfs. When a BAST is
received, the lock's file descriptor will receive POLLIN.
This is trivial to implement. Userdlm already has an appropriate
waitqueue, and the lock knows when it is blocked.
We add the "bast" capability to tell userspace this is available.
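A rough userspace sketch of the new behavior (the /dlm mount point and
lock name are hypothetical; opening the file read-write takes the lock):

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            struct pollfd pfd;

            pfd.fd = open("/dlm/myspace/mylock", O_RDWR);  /* acquire the lock */
            if (pfd.fd < 0) {
                    perror("open");
                    return 1;
            }
            pfd.events = POLLIN;

            /* POLLIN is signalled when a BAST arrives, i.e. another node
             * wants this lock and we should release it soon.
             */
            if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
                    printf("another node is waiting; release the lock\n");

            close(pfd.fd);
            return 0;
    }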
Signed-off-by: Joel Becker <joel.becker@oracle.com>
Acked-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Over time, dlmfs has added some features that were not part of the
initial ABI. Unfortunately, some of these features are not detectable
via standard usage. For example, Linux's default poll always returns
POLLIN, so there is no way for a caller of poll(2) to know when dlmfs
added poll support. Instead, we provide this list of new capabilities.
Capabilities is a read-only attribute. We do it as a module parameter
so we can discover it whether dlmfs is built in, loaded, or even not
loaded (via modinfo).
The ABI features are local to this machine's dlmfs mount. This is
distinct from the locking protocol, which is concerned with inter-node
interaction.
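A rough sketch of the general technique, not the exact dlmfs code (the
parameter name and value here are assumptions):

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /* A fixed feature string exposed as a read-only module parameter: it
     * is visible via modinfo even when the module is not loaded, and under
     * /sys/module/<name>/parameters/ when it is.
     */
    static char *capabilities = "bast";  /* assumed example value */
    module_param(capabilities, charp, 0444);
    MODULE_PARM_DESC(capabilities, "features supported by this dlmfs build");
    MODULE_LICENSE("GPL");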
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
ocfs2 can store extended attribute values as large as a single file. It
does this using a standard ocfs2 btree for the large value. However,
the previous code did not handle all error cases cleanly.
There are multiple failure cases to consider.
1) We have trouble allocating space for a new xattr. This leaves us
with an empty xattr.
2) We overwrote an existing local xattr with a value root, and now we
have an error allocating the storage. This leaves us with an empty xattr
where there used to be a value. The value is lost.
3) We have trouble truncating a reused value. This leaves us with the
original entry pointing to the truncated original value. The value
is lost.
4) We have trouble extending the storage on a reused value. This leaves
us with the original value safely in place, but with more storage
allocated than needed.
This doesn't consider storing local xattrs (values that don't require a
btree). Those only fail when the journal fails.
Case (1) is easy. We just remove the xattr we added. We leak the
storage because we can't safely remove it, but otherwise everything is
happy. We'll print a warning about the leak.
Case (4) is easy. We still have the original value in place. We can
just leave the extra storage attached to this xattr. We return the
error, but the old value is untouched. We print a warning about the
storage.
Case (2) and (3) are hard because we've lost the original values. In
the old code, we ended up with values that could be partially read.
That's not good. Instead, we just wipe the xattr entry and leak the
storage. It stinks that the original value is lost, but now there isn't
a partial value to be read. We'll print a big fat warning.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
ocfs2_xattr_ibody_set() is the only remaining user of
ocfs2_xattr_set_entry(). ocfs2_xattr_set_entry() actually does two
things: it calls ocfs2_xa_set(), and it initializes the inline xattrs.
Initializing the inline space really belongs in its own call.
We lift the initialization to ocfs2_xattr_ibody_init(), called from
ocfs2_xattr_ibody_set() only when necessary. Now
ocfs2_xattr_ibody_set() can call ocfs2_xa_set() directly.
ocfs2_xattr_set_entry() goes away.
Another nice fact is that ocfs2_init_dinode_xa_loc() can trust
i_xattr_inline_size.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
ocfs2_xattr_block_set() calls into ocfs2_xattr_set_entry() with just the
HAS_XATTR flag. Most of the machinery of ocfs2_xattr_set_entry() is
skipped. All that really happens other than the call to ocfs2_xa_set()
is making sure the HAS_XATTR flag is set on the inode.
But HAS_XATTR should be set when we also set di->i_xattr_loc. And
that's done in ocfs2_create_xattr_block(). So let's move it there, and
then ocfs2_xattr_block_set() can just call ocfs2_xa_set().
While we're there, ocfs2_create_xattr_block() can take the set_ctxt for
a smaller argument list. It also learns to set HAS_XATTR_FL, because it
knows for sure. ocfs2_create_empty_xattr_block() in the reflink path
fakes a set_ctxt to call ocfs2_create_xattr_block().
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
ocfs2_xattr_set_in_bucket() doesn't need to do its own hacky space
checking. Let's let ocfs2_xa_prepare_entry() (via ocfs2_xa_set()) do
the more accurate work. Whenever it doesn't have space,
ocfs2_xattr_set_in_bucket() can try to get more space.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
ocfs2_xa_set() wraps the ocfs2_xa_prepare_entry()/ocfs2_xa_store_value()
logic. Both callers can now use the same routine. ocfs2_xa_remove()
moves directly into ocfs2_xa_set().
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
ocfs2_xa_prepare_entry() gets all the logic to add, remove, or modify
external value trees. Now, when it exits, the entry is ready to receive
a value of any size.
ocfs2_xa_remove() is added to handle the complete removal of an entry.
It truncates the external value tree before calling
ocfs2_xa_remove_entry().
ocfs2_xa_store_inline_value() becomes ocfs2_xa_store_value(). It can
store any value.
ocfs2_xattr_set_entry() loses all the allocation logic and just uses
these functions. ocfs2_xattr_set_value_outside() disappears.
ocfs2_xattr_set_in_bucket() uses these functions and makes
ocfs2_xattr_set_entry_in_bucket() obsolete. That goes away, as does
ocfs2_xattr_bucket_set_value_outside() and
ocfs2_xattr_bucket_value_truncate().
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
We're going to want to make sure our buffers get accessed and dirtied
correctly. So have the xa_loc do the work. This includes storing the
inode on ocfs2_xa_loc.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
We use the ocfs2_xattr_value_buf structure to manage external values.
It lets the value tree code do its work regardless of the containing
storage. ocfs2_xa_fill_value_buf() initializes a value buf from an
ocfs2_xa_loc entry.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Previously the xattr code would send in a fake value, containing a tree
root, to the function that installed name+value pairs. Instead, we pass
the real value to ocfs2_xa_set_inline_value(), and it notices that the
value cannot fit. Thus, it installs a tree root.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
We create two new functions on ocfs2_xa_loc, ocfs2_xa_prepare_entry()
and ocfs2_xa_store_inline_value().
ocfs2_xa_prepare_entry() makes sure that the xl_entry field of
ocfs2_xa_loc is ready to receive an xattr. The entry will point to an
appropriately sized name+value region in storage. If an existing entry
can be reused, it will be. If no entry already exists, it will be
allocated. If there isn't space to allocate it, -ENOSPC will be
returned.
ocfs2_xa_store_inline_value() stores the data that goes into the 'value'
part of the name+value pair. For values that don't fit directly, this
stores the value tree root.
A number of operations are added to ocfs2_xa_loc_operations to support
these functions. This reflects the disparate behaviors of xattr blocks
and buckets.
With these functions, the overlapping ocfs2_xattr_set_entry_local() and
ocfs2_xattr_set_entry_normal() can be replaced with a single call
scheme.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
An ocfs2 xattr entry stores the text name and value as a pair in the
storage area. Obviously names and values can be variable-sized. If a
value is too large for the entry storage, a tree root is stored instead.
The name+value pair is also padded.
Because of this, there are a million places in the code that do:
    if (needs_external_tree(value_size))
            namevalue_size = pad(name_size) + tree_root_size;
    else
            namevalue_size = pad(name_size) + pad(value_size);
Let's create some convenience functions to make the code more readable.
There are three forms. The first takes the raw sizes. The second takes
an ocfs2_xattr_info structure. The third takes an existing
ocfs2_xattr_entry.
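A rough sketch of the raw-size form, using the existing ocfs2 xattr
padding macros (the exact body in the patch may differ):

    static int namevalue_size(int name_len, uint64_t value_len)
    {
            if (value_len > OCFS2_XATTR_INLINE_SIZE)
                    return OCFS2_XATTR_SIZE(name_len) + OCFS2_XATTR_ROOT_SIZE;
            else
                    return OCFS2_XATTR_SIZE(name_len) + OCFS2_XATTR_SIZE(value_len);
    }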
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Rather than calculating strlen all over the place, let's store the
name length directly on ocfs2_xattr_info.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
struct ocfs2_xattr_info is a useful structure describing an xattr
you'd like to set. Let's put prefixes on the member fields so it's
easier to read and use.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Add ocfs2_xa_remove_entry(), which will remove an xattr entry from its
storage via the ocfs2_xa_loc descriptor.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
The ocfs2 extended attribute (xattr) code is very flexible. It can
store xattrs in the inode itself, in an external block, or in a tree of
data structures. This allows the number of xattrs to be bounded by the
filesystem size.
However, the code that manages each possible storage location is
different. Maintaining the ocfs2 xattr code requires changing each hunk
separately.
This patch is the start of a series introducing the ocfs2_xa_loc
structure. This structure wraps the on-disk details of an xattr
entry. The goal is that the generic xattr routines can use
ocfs2_xa_loc without knowing the underlying storage location.
This first pass merely implements the basic structure, initializing it,
and wiping the name+value pair of the entry.
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Add current->comm to the standard mlog() output to help with debugging.
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
When ocfs2 has to do CoW for refcounted extents, we disable direct I/O
and go through the buffered I/O path. This makes the combined check
easier to read.
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
This patch adds an extent block (metadata) stealing mechanism for
extent allocation. The mechanism is the same as inode stealing:
if there is no room in the slot-specific extent_alloc, we try to
allocate an extent block from the next slot.
Signed-off-by: Tiger Yang <tiger.yang@oracle.com>
Acked-by: Tao Ma <tao.ma@oracle.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
Connect and disconnect messages are more than informational as they are required
during root cause analysis for failures. This patch changes them from KERN_INFO
to KERN_NOTICE.
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Acked-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
The debug call printing the name of the lock resource was chopping
off the last character. This patch fixes the problem.
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Acked-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
The wrong member was compared in the contiguousness check.
Acked-by: Tao Ma <tao.ma@oracle.com>
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|
|
During recovery, the dlm frees the locks for the dead node. If it finds a
lock in a resource for the dead node, it expects that node to also have a
ref in that lock resource. If not, it BUGs.
ossbz#1175 was filed with the above BUG. Now, while it is correct that we
should be expecting the ref, I see no reason why we have to BUG. After all,
we are freeing up the lock and clearing the ref.
This patch replaces the BUG_ON with a printk(). Hopefully, that will give
us more clues next time this happens.
http://oss.oracle.com/bugzilla/show_bug.cgi?id=1175
Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com>
Acked-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
|