From: Wu Fengguang

Remove the unnecessary size limit on setting read_ahead_kb, and make large
values harmless.  The stock readahead path is protected by consulting the
available memory before applying this number; the other readahead paths
already do so.

read_ahead_kb used to be capped by the queue's max_sectors, which can be
too rigid because some devices set max_sectors to values as small as 64KB.
That led to many user complaints.

Signed-off-by: Wu Fengguang
Signed-off-by: Andrew Morton
---

 block/ll_rw_blk.c |    5 -----
 mm/readahead.c    |    2 +-
 2 files changed, 1 insertion(+), 6 deletions(-)

diff -puN block/ll_rw_blk.c~readahead-remove-size-limit-on-read_ahead_kb block/ll_rw_blk.c
--- devel/block/ll_rw_blk.c~readahead-remove-size-limit-on-read_ahead_kb	2006-06-09 01:22:58.000000000 -0700
+++ devel-akpm/block/ll_rw_blk.c	2006-06-09 01:22:58.000000000 -0700
@@ -3754,12 +3754,7 @@ queue_ra_store(struct request_queue *q,
 	unsigned long ra_kb;
 	ssize_t ret = queue_var_store(&ra_kb, page, count);
 
-	spin_lock_irq(q->queue_lock);
-	if (ra_kb > (q->max_sectors >> 1))
-		ra_kb = (q->max_sectors >> 1);
-
 	q->backing_dev_info.ra_pages = ra_kb >> (PAGE_CACHE_SHIFT - 10);
-	spin_unlock_irq(q->queue_lock);
 
 	return ret;
 }
diff -puN mm/readahead.c~readahead-remove-size-limit-on-read_ahead_kb mm/readahead.c
--- devel/mm/readahead.c~readahead-remove-size-limit-on-read_ahead_kb	2006-06-09 01:22:58.000000000 -0700
+++ devel-akpm/mm/readahead.c	2006-06-09 01:22:58.000000000 -0700
@@ -155,7 +155,7 @@ EXPORT_SYMBOL_GPL(file_ra_state_init);
  */
 static inline unsigned long get_max_readahead(struct file_ra_state *ra)
 {
-	return ra->ra_pages;
+	return max_sane_readahead(ra->ra_pages);
}
 
 static inline unsigned long get_min_readahead(struct file_ra_state *ra)
_