author		Wu Fengguang <fengguang.wu@intel.com>	2010-06-06 10:38:15 -0600
committer	Wu Fengguang <fengguang.wu@intel.com>	2011-06-08 08:25:20 +0800
commit		6e6938b6d3130305a5960c86b1a9b21e58cf6144 (patch)
tree		de5546e8390ce31cd31412d2ef78ce732a42191c /include/linux/writeback.h
parent		59c5f46fbe01a00eedf54a23789634438bb80603 (diff)
writeback: introduce .tagged_writepages for the WB_SYNC_NONE sync stage
sync(2) is performed in two stages: the WB_SYNC_NONE sync and the
WB_SYNC_ALL sync. Identify the first stage with .tagged_writepages and
do livelock prevention for it, too.
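The diff below touches only the header; for context, the flag is consumed
in write_cache_pages(), roughly as in this simplified sketch (names from
mm/page-writeback.c of this era; locking and error handling elided):

	int tag;

	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
		tag = PAGECACHE_TAG_TOWRITE;
	else
		tag = PAGECACHE_TAG_DIRTY;

	if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
		tag_pages_for_writeback(mapping, index, end);

	while (!done && index <= end) {
		nr_pages = pagevec_lookup_tag(&pvec, mapping, &index, tag,
				min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1);
		if (nr_pages == 0)
			break;
		/* lock, revalidate and ->writepage() each page; pages
		 * dirtied after tag_pages_for_writeback() lack the
		 * TOWRITE tag, so a tagged pass cannot be livelocked
		 * by concurrent dirtiers */
	}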
Jan's commit f446daaea9 ("mm: implement writeback livelock avoidance
using page tagging") is a partial fix in that it only fixed the
WB_SYNC_ALL phase livelock.
Although ext4 has been tested to no longer livelock with commit
f446daaea9, that may be due to some "redirty_tail() after pages_skipped"
effect, which is by no means a guarantee for _all_ the filesystems.
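For reference, the tag-and-write scheme of f446daaea9 snapshots the dirty
set before writing anything; a simplified sketch of
tag_pages_for_writeback() from mm/page-writeback.c (a WARN_ON and the
batch-size definition elided):

	void tag_pages_for_writeback(struct address_space *mapping,
				     pgoff_t start, pgoff_t end)
	{
		unsigned long tagged;

		do {
			spin_lock_irq(&mapping->tree_lock);
			/* convert DIRTY tags to TOWRITE in bounded batches */
			tagged = radix_tree_range_tag_if_tagged(
					&mapping->page_tree, &start, end,
					WRITEBACK_TAG_BATCH,
					PAGECACHE_TAG_DIRTY,
					PAGECACHE_TAG_TOWRITE);
			spin_unlock_irq(&mapping->tree_lock);
			cond_resched();	/* keep tree_lock hold times short */
		} while (tagged >= WRITEBACK_TAG_BATCH && start);
	}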
Note that writeback_inodes_sb() is called not only by sync(); its other
callers are treated the same because they also need livelock prevention.
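As an illustration of such a caller, writeback_inodes_sb_nr() in
fs/fs-writeback.c queues WB_SYNC_NONE work with the new bit set, roughly:

	DECLARE_COMPLETION_ONSTACK(done);
	struct wb_writeback_work work = {
		.sb			= sb,
		.sync_mode		= WB_SYNC_NONE,
		.tagged_writepages	= 1,	/* the bit added here */
		.done			= &done,
		.nr_pages		= nr,
	};

	bdi_queue_work(sb->s_bdi, &work);
	wait_for_completion(&done);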
Impact: this changes the order in which pages/inodes are synced to disk.
In the WB_SYNC_NONE stage, writeback now does not move on to the next
inode until it has finished with the current one.
Acked-by: Jan Kara <jack@suse.cz>
CC: Dave Chinner <david@fromorbit.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
Diffstat (limited to 'include/linux/writeback.h')
-rw-r--r--	include/linux/writeback.h	1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 17e7ccc322a..3f6542ca619 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -47,6 +47,7 @@ struct writeback_control {
 	unsigned encountered_congestion:1;	/* An output: a queue is full */
 	unsigned for_kupdate:1;			/* A kupdate writeback */
 	unsigned for_background:1;		/* A background writeback */
+	unsigned tagged_writepages:1;		/* tag-and-write to avoid livelock */
 	unsigned for_reclaim:1;			/* Invoked from the page allocator */
 	unsigned range_cyclic:1;		/* range_start is cyclic */
 	unsigned more_io:1;			/* more io to be dispatched */