author    Wu Fengguang <fengguang.wu@intel.com>    2008-12-23 15:21:30 -0500
committer Trond Myklebust <Trond.Myklebust@netapp.com>    2008-12-23 15:21:30 -0500
commit    136221fc3219b3805c48db5da065e8e3467175d4 (patch)
tree      62fa68f46cb765143b6159993eceb30a6b81f9b2  /fs/nfs/read.c
parent    3d44cc3e01ee1b40317f79ed54324e25c4f848df (diff)
nfs: remove redundant tests on reading new pages
aops->readpages() and its NFS helper readpage_async_filler() are only called to do readahead I/O on newly allocated pages, so there is no need to test the dirty/uptodate page flags, which are always clear at that point.

Removing the nfs_wb_page() call also fixes a readahead bug: NFS readahead has effectively been synchronous since 2.6.23, because that call clears PG_readahead, the flag that reminds the readahead code to start the next readahead asynchronously.

More background: the PG_readahead page flag shares a bit with PG_reclaim, one used on the read path and the other on the write path. clear_page_dirty_for_io() unconditionally clears PG_readahead to prevent stale readahead marks, on the assumption that it is only ever called on the write path. NFS, however, is the one and only exception: it always called clear_page_dirty_for_io() (via nfs_wb_page()) on the read path, i.e. from readpages()/readpage().

Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Wu Fengguang <wfg@linux.intel.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
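To make the flag sharing concrete, here is an illustrative sketch of the aliasing described above. It is an approximation of the page-flags code of that era, not a verbatim copy, and the body of clear_page_dirty_for_io() is elided except for the one statement the message refers to.

/*
 * Sketch only. PG_readahead has no bit of its own: it aliases PG_reclaim.
 * The alias is safe because a page is never simultaneously being read
 * ahead (read path) and written back/reclaimed (write path).
 */
#define PG_readahead PG_reclaim	/* reminder to do async readahead */

int clear_page_dirty_for_io(struct page *page)
{
	/*
	 * Assumes it is running on the write path, so any readahead mark
	 * on the page must be stale and is cleared. Because NFS used to
	 * reach here from readpage_async_filler() via nfs_wb_page(), the
	 * mark was also cleared on the read path, and readahead went
	 * synchronous.
	 */
	ClearPageReclaim(page);		/* same bit as PG_readahead */

	/* ... dirty accounting and TestClearPageDirty() elided ... */
	return 0;
}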
Diffstat (limited to 'fs/nfs/read.c')
-rw-r--r--  fs/nfs/read.c | 6
1 file changed, 0 insertions, 6 deletions
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index 40d17987d0e..f856004bb7f 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -533,12 +533,6 @@ readpage_async_filler(void *data, struct page *page)
 	unsigned int len;
 	int error;
 
-	error = nfs_wb_page(inode, page);
-	if (error)
-		goto out_unlock;
-	if (PageUptodate(page))
-		goto out_unlock;
-
 	len = nfs_page_length(page);
 	if (len == 0)
 		return nfs_return_empty_page(page);
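For context, this is roughly what the start of readpage_async_filler() looks like once the hunk above is applied. Only the lines shown in the diff are verbatim; the declarations and the elided tail are reconstructed from the surrounding code of that kernel version and may differ in detail.

static int
readpage_async_filler(void *data, struct page *page)
{
	struct nfs_readdesc *desc = (struct nfs_readdesc *)data;
	struct inode *inode = page->mapping->host;
	unsigned int len;
	int error;

	/*
	 * No nfs_wb_page()/PageUptodate() tests any more: readpages() only
	 * passes in newly allocated pages, which cannot be dirty or uptodate,
	 * and the nfs_wb_page() call was what cleared PG_readahead.
	 */
	len = nfs_page_length(page);
	if (len == 0)
		return nfs_return_empty_page(page);

	/* ... build an nfs_page request for [0, len) and queue it ... */
}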