Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area that has been largely ignored: performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (the time available and the time required for prefetching) and speculative parameters (the probabilities of the next access). The performance-maximization problem is expressed as a stretch knapsack problem. We develop an algorithm that maximizes the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative ...
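To make the resource parameters (time available, time required for prefetching) and speculative parameters (access probabilities) concrete, the following is a minimal illustrative sketch of choosing which items to prefetch within an idle-time budget. It uses a plain brute-force 0/1 knapsack-style search, not the paper's stretch knapsack algorithm, and every name in it (Candidate, prefetch_time, access_prob, access_time, time_available) is hypothetical.

# Illustrative sketch only: a brute-force knapsack-style selection of
# prefetch candidates. This is NOT the paper's stretch knapsack algorithm;
# all names and numbers below are hypothetical.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Candidate:
    name: str
    prefetch_time: float   # time required to prefetch this item
    access_prob: float     # estimated probability the item is accessed next
    access_time: float     # access time saved if the item was prefetched

def expected_improvement(selection):
    # Expected reduction in access time: probability-weighted time saved.
    return sum(c.access_prob * c.access_time for c in selection)

def best_selection(candidates, time_available):
    # Exhaustive search over subsets whose total prefetch time fits the
    # idle-time budget; adequate for a handful of candidates.
    best, best_value = (), 0.0
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            if sum(c.prefetch_time for c in subset) <= time_available:
                value = expected_improvement(subset)
                if value > best_value:
                    best, best_value = subset, value
    return best, best_value

if __name__ == "__main__":
    cands = [
        Candidate("a.html", prefetch_time=0.4, access_prob=0.6, access_time=1.2),
        Candidate("b.html", prefetch_time=0.7, access_prob=0.3, access_time=2.0),
        Candidate("c.html", prefetch_time=0.2, access_prob=0.1, access_time=0.9),
    ]
    chosen, value = best_selection(cands, time_available=1.0)
    print([c.name for c in chosen], value)

The point of the sketch is only the shape of the objective: the expected improvement in access time is the probability-weighted sum of the time saved by each prefetched item, maximized subject to the prefetch-time budget.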
Network congestion remains one of the main barriers to the continuing success of the Internet. For w...
Prefetching disk blocks to main memory will become increasingly important to overcome the widening g...
The “Memory Wall” [1] is the gap in performance between the processor and the main memory. Over the...
Speculative prefetching has been proposed to improve the response time of network access. Previous s...
To improve the accuracy of access prediction, a prefetcher for web browsing should recognize the fac...
We investigate speculative prefetching under a model in which prefetching is neither aborted nor pre...
Prefetching is a potential method to reduce waiting time for retrieving data over wireless network c...
Speculative service implies that a client's request for a document is serviced by sending, in additi...
This is the published version. Copyright © 1998 Society for Industrial and Applied Mathematics. Respon...
Efficient data supply to the processor is one of the keys to achieving high performance. However, ...
Mobile users connected to wireless networks expect performance comparable to that of users on wired networks...
Aggressive prefetching is an effective technique for reducing the execution times of disk-bound appl...
Parallel applications can benefit greatly from massive computational capability, but their performan...