* What assumptions does the paper make?
  1. Dependent disk requests come from the same process
  2. Most of a process's requests have similar spatial/temporal locality
* Basic idea of anticipatory scheduling (AS): p. 4
  - Choose a request to schedule as usual
  - Possibly delay the next request, if the process that issued the previous
    request may issue another soon
* What kinds of workloads benefit from AS?
  - Two processes each sequentially reading data from a large file?
    - AS won't have much benefit beyond readahead
  - Two processes each randomly reading from a large file?
    - Yes! Blocks within one file are probably closer together than blocks
      across files
  - Two processes each randomly writing to a large file?
    - No, buffering will coalesce the writes anyway
  - Untarring a directory?
    - Yes, many synchronous writes to the same directory/cylinder group
* SPTF: When should you delay a disk request? Equations p. 5
  - How do you predict seek time?
    - p. 6/sec. 3.6: estimates are right only ~75% of the time--but that's
      good enough. Why?
* Aged SPTF
  - Like SPTF, except don't wait if another request has aged sufficiently
* CSCAN
  - Additionally maintain the expected direction of the next seek
  - If the next seek is 80% likely to go backwards, don't wait
  - How well does this work?
    - For random reads, not very well
* Proportional-share schedulers
  - Tradition: a virtual clock that ticks more slowly for high-priority
    processes
  - Only schedule a process whose clock is within minclock+T (for relaxation
    threshold T)
  - AS heuristic: wait if SPTF would wait, or if another process's clock
    exceeds minclock+T
  - Don't wait if the current process's clock exceeds minclock+T
* Performance
  - Explain Figures 5 and 6
  - Why does the compile go slower in Figure 8? CPU overhead of processing
    timer interrupts
  - How well does the Apache web server fit AS's assumptions? Files accessed
    by a web server don't have locality, but readahead on medium-sized files
    does not kick in soon enough! This could be fixed without AS. How?
* How is the evaluation?
  - Would you use this?
  - Any hidden catches?
    (a workload where you might do very poorly) Figure 7 looks pretty good
* Will anticipatory scheduling continue to be useful in the future?
  + Seek time is improving more slowly than other system aspects ->
    deceptive idleness becomes a bigger problem
  + Faster CPUs -> shorter think times
  - Bigger track buffers/readahead -> data may be getting cached anyway
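To make the basic AS/SPTF waiting decision concrete, here is a minimal sketch in Python. All names and parameters are hypothetical illustrations, not the paper's actual interface or equations: the scheduler keeps the disk idle only when the anticipated request's expected positioning cost, plus the process's expected think time, beats the positioning cost of the best request already queued, and the wait is capped so a bad guess stays cheap.

```python
# Hedged sketch of the anticipatory "should we wait?" decision.
# Names and the exact cost comparison are illustrative assumptions.

def anticipation_delay(best_queued_cost, anticipated_cost,
                       expected_think_time, wait_limit):
    """Return how long to keep the disk idle (0.0 means dispatch now).

    best_queued_cost:    expected positioning time of the best request
                         already in the queue (e.g., SPTF's choice)
    anticipated_cost:    expected positioning time of the request we
                         guess the last-served process will issue next
    expected_think_time: how long we expect that process to "think"
                         before issuing it
    wait_limit:          hard cap on idling, so a wrong guess costs little
    """
    # Waiting pays off only if serving the anticipated request, think
    # time included, is still cheaper than serving the best queued one.
    if anticipated_cost + expected_think_time < best_queued_cost:
        return min(expected_think_time, wait_limit)
    return 0.0
```

For example, with a long seek queued (`best_queued_cost` of 10 ms) and a likely nearby follow-up (`anticipated_cost` 1 ms, think time 2 ms), the sketch idles about 2 ms instead of seeking away and back.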
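The proportional-share virtual clock discussed above can also be sketched briefly. This is a hypothetical illustration (function names and structure assumed, not from the paper): each process's clock advances inversely to its weight, and a process stays schedulable only while its clock is within the relaxation threshold T of the minimum clock.

```python
# Hypothetical sketch of a weighted virtual-clock policy with a
# relaxation threshold T; names and structure are illustrative only.

def charge(clocks, weights, pid, service_time):
    """Advance pid's virtual clock after it receives service.
    Higher weight -> clock ticks more slowly -> larger disk share."""
    clocks[pid] += service_time / weights[pid]

def eligible(clocks, pid, T):
    """A process may be scheduled only while its clock is within T
    of the minimum clock (minclock + T, the relaxation threshold)."""
    return clocks[pid] <= min(clocks.values()) + T
```

After identical service, a weight-2 process's clock is half as far ahead as a weight-1 process's, so with a small T the lighter-weighted process becomes ineligible until the other catches up. The AS twist from the notes: also anticipate when SPTF would, or when another process's clock is past minclock+T, but never idle on behalf of a process that is itself past minclock+T.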