author     Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>   2009-05-13 15:13:42 -0400
committer  Theodore Ts'o <tytso@mit.edu>                        2009-05-13 15:13:42 -0400
commit     79ffab34391933ee3b95dac7f25c0478fa2f8f1e (patch)
tree       8bc139928e172ef2ebd38e01f97dc01f886d8526 /fs/ext4
parent     9fa7eb283c5cdc2b0f4a8cfe6387ed82e5e9a3d3 (diff)
ext4: Properly initialize the buffer_head state
These struct buffer_heads are allocated on the stack (and hence are
initialized with stack garbage). They are only used to call a
get_blocks() function, so that's mostly OK, but b_state must be
initialized to be 0 so we don't have any unexpected BH_* flags set by
accident, such as BH_Unwritten or BH_Delay.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Diffstat (limited to 'fs/ext4')
-rw-r--r--  fs/ext4/extents.c |  1
-rw-r--r--  fs/ext4/inode.c   | 15
2 files changed, 15 insertions, 1 deletion
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index e3a55eb8b26..a953214f282 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -3150,6 +3150,7 @@ retry:
 			ret = PTR_ERR(handle);
 			break;
 		}
+		map_bh.b_state = 0;
 		ret = ext4_get_blocks_wrap(handle, inode, block,
 					   max_blocks, &map_bh,
 					   EXT4_CREATE_UNINITIALIZED_EXT, 0, 0);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 2a9ffd528dd..d7ad0bb73cd 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2055,7 +2055,20 @@ static int mpage_da_map_blocks(struct mpage_da_data *mpd)
 	if ((mpd->b_state & (1 << BH_Mapped)) &&
 	    !(mpd->b_state & (1 << BH_Delay)))
 		return 0;
-	new.b_state = mpd->b_state;
+	/*
+	 * We need to make sure the BH_Delay flag is passed down to
+	 * ext4_da_get_block_write(), since it calls
+	 * ext4_get_blocks_wrap() with the EXT4_DELALLOC_RSVED flag.
+	 * This flag causes ext4_get_blocks_wrap() to call
+	 * ext4_da_update_reserve_space() if the passed buffer head
+	 * has the BH_Delay flag set.  In the future, once we clean up
+	 * the interfaces to ext4_get_blocks_wrap(), we should pass in
+	 * a separate flag which requests that the delayed allocation
+	 * statistics should be updated, instead of depending on the
+	 * state information getting passed down via the map_bh's
+	 * state bitmasks plus the magic EXT4_DELALLOC_RSVED flag.
+	 */
+	new.b_state = mpd->b_state & (1 << BH_Delay);
 	new.b_blocknr = 0;
 	new.b_size = mpd->b_size;
 	next = mpd->b_blocknr;
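
The masking expression added to mpage_da_map_blocks() can be sanity-checked in isolation. The snippet below is a hypothetical user-space check, not ext4 code, and the bit numbers are invented; it only demonstrates that ANDing the state word with (1 << BH_Delay) keeps the BH_Delay bit and discards every other BH_* flag, such as BH_Mapped.

/*
 * Hypothetical user-space check of the masking idiom above: the bit
 * numbers are invented, only the arithmetic mirrors the patch.
 */
#include <assert.h>

enum { FAKE_BH_Mapped = 5, FAKE_BH_Delay = 8 };

int main(void)
{
	unsigned long mpd_state = (1UL << FAKE_BH_Mapped) | (1UL << FAKE_BH_Delay);
	unsigned long new_state = mpd_state & (1UL << FAKE_BH_Delay);

	assert(new_state == (1UL << FAKE_BH_Delay));      /* BH_Delay survives */
	assert(!(new_state & (1UL << FAKE_BH_Mapped)));   /* BH_Mapped is dropped */
	return 0;
}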