Revisiting the I/O Bottleneck
If you have a question about this talk, please contact Henry Robinson.
As commodity computers become more powerful, with ever-increasing
amounts of RAM and now four CPU cores as standard (and further increases
expected in the near future), is it the I/O subsystem that is the
bottleneck? And if it is, then given the abundance of RAM and CPU
power available, could we use it to help alleviate the problem?
When writing Linux programs today, programmers are faced with a
plethora of choices for how to do I/O, and the “wrong” choice could
mean their program takes twice as long to execute. In this talk I
present results from experimenting with the Linux I/O APIs and show
that performance can differ substantially depending on which API is
used and how it is used. Clearly, educating programmers and rewriting
all the existing code would be one solution, but with the abundance of
RAM and CPU power available, could other techniques allow the programmer
to always use the same API, with the kernel giving it performance equal
to that of the “fastest” API?
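To illustrate the kind of API choice the abstract refers to, the sketch below (not taken from the talk; the file path, buffer size, and checksum loop are assumptions for illustration) reads the same file sequentially twice, once with a classic read() loop and once via mmap(), and times each pass.

```c
/* Sketch: compare sequential file reads via read() and mmap().
 * Illustrative only; path, buffer size, and checksum are assumptions. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "testfile";  /* assumed input file */
    struct stat st;
    uint64_t sum = 0;

    int fd = open(path, O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0) { perror(path); return 1; }

    /* Variant 1: classic buffered read() loop with a 64 KiB user buffer. */
    static char buf[1 << 16];
    double t0 = now_sec();
    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        for (ssize_t i = 0; i < n; i++)
            sum += (unsigned char)buf[i];
    printf("read(): %.3f s (sum=%llu)\n", now_sec() - t0,
           (unsigned long long)sum);

    /* Variant 2: map the whole file and touch every byte.
     * Note: this pass benefits from the page cache warmed by the first;
     * drop caches or swap the order for a fairer comparison. */
    sum = 0;
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    t0 = now_sec();
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    printf("mmap(): %.3f s (sum=%llu)\n", now_sec() - t0,
           (unsigned long long)sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```

Which variant wins depends heavily on factors such as whether the data is already in the page cache, the access pattern, and readahead behaviour, which is precisely why the choice of I/O API can make such a large difference in practice.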
This talk is part of the Computer Laboratory NetOS Group Talklets series.