
I/O Scheduling in CAM

Formal Metadata

Title: I/O Scheduling in CAM
Number of Parts: 41
License: CC Attribution - ShareAlike 3.0 Unported: You are free to use, adapt, and copy, distribute, and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose, as long as the work is attributed to the author in the manner specified by the author or licensor, and the work or content is shared, also in adapted form, only under the conditions of this license.

Content Metadata

Abstract
SSDs have many unique characteristics not present in spinning drives. Applications have different access patterns and desire different performance trade-offs. GEOM offers some scheduling facilities, but they are hampered by having no visibility into the underlying device's characteristics. Scheduling I/O in CAM allows peripheral drivers to use their detailed knowledge of a drive to schedule I/Os that are optimal for the application's needs (with hints from the application).

Netflix operates a small fleet of video servers for its video streaming service. There are two main kinds of server used in our operations: a storage appliance, which is used for long-tail access and for filling other servers, and a flash appliance for serving popular titles. Our service sees a certain amount of change each day as titles change in popularity, contracts expire or come online, and so on. While our workload is mostly reads, we also need to write to and trim the drives from time to time. With flash drives we found that any sustained write activity above a certain level led to a sudden decrease in read performance, reducing our effective capacity when this happened. With clever scheduling, one can reduce these effects to keep read performance good, though write performance will suffer.

The traditional scheduler didn't allow any efficient way to do this, short of write throttling in the application. While that does help mitigate things, when there are many threads or processes acting in parallel it can be hard for the application to coordinate everything, and the many layers between the application and the disk can interfere with even perfect coordination. Moving the throttling to the lowest layer in the system helps smooth out the bumps, and it adapts dynamically to changing workloads (you can write more if you need to read less, for example).
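As a rough illustration of that last point, the sketch below models a scheduler tick that recomputes its write budget from observed read demand, so writes automatically expand when reads slack off. This is a toy model in C, not the FreeBSD cam_iosched implementation; all names and constants here are hypothetical.

/*
 * Toy sketch of read-biased write throttling, assuming a fixed
 * per-tick I/O capacity.  NOT the actual CAM I/O scheduler.
 */
#include <stdio.h>

#define TICK_CAPACITY	100	/* hypothetical I/Os the drive absorbs per tick */
#define WRITE_FLOOR	5	/* always let a trickle of writes through */

struct sched_state {
	int reads_pending;	/* reads queued this tick */
	int write_budget;	/* writes we will release this tick */
};

/* Writes get whatever capacity the pending reads leave over. */
static void
adapt_budget(struct sched_state *s)
{
	int spare = TICK_CAPACITY - s->reads_pending;

	s->write_budget = spare > WRITE_FLOOR ? spare : WRITE_FLOOR;
}

int
main(void)
{
	struct sched_state s = { 0, 0 };
	int read_load[] = { 90, 60, 95, 20 };	/* simulated read demand */
	int i;

	for (i = 0; i < 4; i++) {
		s.reads_pending = read_load[i];
		adapt_budget(&s);
		printf("tick %d: %d reads -> %d writes allowed\n",
		    i, s.reads_pending, s.write_budget);
	}
	return (0);
}

Because the budget is recomputed every tick at the bottom of the stack, no application-level coordination is needed: a quiet read period immediately frees capacity for writes, and a read burst immediately claws it back.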