author    Wu Fengguang <fengguang.wu@intel.com>    2011-08-03 14:30:36 -0600
committer Wu Fengguang <fengguang.wu@intel.com>    2011-12-18 14:20:33 +0800
commit    bdaac4902a8225bf247ecaeac46c4b2980cc70e5 (patch)
tree      cf6c065d9e46c53b45248f634491a6d864563382
parent    82791940545be38810dfd5e03ee701e749f04aab (diff)
writeback: balanced_rate cannot exceed write bandwidth
Add an upper limit to balanced_rate according to the inequality below.
This filters out some rare but huge singular points, which at least
enables more readable gnuplot figures.

When there are N dd dirtiers,

	balanced_dirty_ratelimit = write_bw / N

So it holds that

	balanced_dirty_ratelimit <= write_bw

The singular points originate from dirty_rate in the formula below:

	balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate

where

	dirty_rate = (number of page dirties in the past 200ms) / 200ms

In the extreme case, if all dd tasks suddenly get blocked on something
else and hence no pages are dirtied at all, dirty_rate will be 0 and
balanced_dirty_ratelimit will be inf. This could happen in reality.

Note that these huge singular points are not a real threat, since they
are _guaranteed_ to be filtered out by the

	min(balanced_dirty_ratelimit, task_ratelimit)

line in bdi_update_dirty_ratelimit(). task_ratelimit is based on the
number of dirty pages, which will never _suddenly_ fly away like
balanced_dirty_ratelimit. So any weirdly large balanced_dirty_ratelimit
will be cut down to the level of task_ratelimit.

There won't be tiny singular points though, as long as the dirty pages
lie inside the dirty throttling region (above the freerun region),
because there the dd tasks will be throttled by balance_dirty_pages()
and won't be able to suddenly dirty many more pages than average.

Acked-by: Jan Kara <jack@suse.cz>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
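To make the clamp concrete, here is a minimal user-space C sketch of the
computation (the balanced_rate() helper and the sample numbers are
hypothetical illustrations, not the kernel code; only the dirty_rate | 1
divide-by-zero guard and the write_bw clamp mirror the patch):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the kernel computation, in pages/s. */
static uint64_t balanced_rate(uint64_t task_ratelimit, uint64_t write_bw,
			      uint64_t dirty_rate)
{
	/* "| 1" avoids a divide-by-zero, as in the kernel code */
	uint64_t rate = task_ratelimit * write_bw / (dirty_rate | 1);

	/* the clamp added by this patch: never exceed write_bw */
	if (rate > write_bw)
		rate = write_bw;
	return rate;
}

int main(void)
{
	uint64_t write_bw = 25600;	/* ~100MB/s in 4k pages/s (assumed) */
	uint64_t task_ratelimit = 12800;

	/* normal case: dirty_rate tracks write_bw, N = 2 dirtiers */
	printf("normal:  %llu\n", (unsigned long long)
	       balanced_rate(task_ratelimit, write_bw, 25600));

	/* singular point: all dirtiers blocked, dirty_rate -> 0 */
	printf("clamped: %llu\n", (unsigned long long)
	       balanced_rate(task_ratelimit, write_bw, 0));
	return 0;
}

With these made-up numbers the normal case prints 12800, while the
dirty_rate = 0 case would compute 327680000 and is cut down to
write_bw = 25600, exactly the singular point the patch filters out.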
-rw-r--r--  mm/page-writeback.c  5
1 file changed, 5 insertions(+), 0 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 936dc7b61dc..0ae2008eb54 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -822,6 +822,11 @@ static void bdi_update_dirty_ratelimit(struct backing_dev_info *bdi,
 	 */
 	balanced_dirty_ratelimit = div_u64((u64)task_ratelimit * write_bw,
 					   dirty_rate | 1);
+	/*
+	 * balanced_dirty_ratelimit ~= (write_bw / N) <= write_bw
+	 */
+	if (unlikely(balanced_dirty_ratelimit > write_bw))
+		balanced_dirty_ratelimit = write_bw;
 	/*
 	 * We could safely do this and return immediately: