On a dedicated storage system, by and large, latency and IOPS are strongly correlated. As you dial up the number of IOPS, latency increases slowly until you saturate the storage bus or the drives themselves. At the point of saturation, pushing more IOPS simply creates a backlog ahead of the storage system (usually at the operating system layer). On a shared storage system, the picture is a lot less clear, as the behavior of other users of the system has a direct impact on the latency you experience.

To show this, let us look at an example of the latency observed on an EBS volume compared to the number of IOPS. The first graph measures the time it takes, in milliseconds, for requests to be serviced by the EBS volume, as measured by the operating system of the instance. The second graph measures the Volume Queue Length of that same EBS volume. The third graph measures the number of IOPS performed on that same volume, again as measured by the operating system of the instance.

*(Graphs not reproduced here: request latency, Volume Queue Length, and IOPS over the same period.)*

The sharing of hardware for EBS is much more of a design constraint. Because EBS data traffic must use the network, it will always be slower, as measured by its latency, than local storage, by an order of magnitude. Moreover, the network that sits between your instances and your EBS volumes is shared with other customers. While AWS has started to add dedicated network connections for storage to make EBS latency more predictable, this is not the norm as of this writing.

Once you know to expect 100 IOPS from standard EBS volumes, you can devise strategies to support your application. Common strategies include using RAID to pool together EBS volumes, getting Provisioned IOPS volumes, or skipping EBS altogether in favor of local solid-state drives (SSDs).
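The relationship between the three graphs can be sketched with Little's Law: the average number of outstanding requests (the Volume Queue Length) is roughly the arrival rate (IOPS) multiplied by the average service time (latency). The function name and the numbers below are illustrative, not measurements from any particular volume:

```python
def volume_queue_length(iops, latency_ms):
    """Little's Law: average requests in flight =
    arrival rate (per second) * average service time (seconds)."""
    return iops * (latency_ms / 1000.0)

# Below saturation: 100 IOPS serviced in roughly 10 ms each keeps
# about one request in flight at a time.
print(volume_queue_length(100, 10))   # ~1 request queued

# Past saturation, throughput stays pinned near 100 IOPS while
# latency balloons to, say, 100 ms, so a backlog of roughly ten
# requests builds up ahead of the storage system.
print(volume_queue_length(100, 100))  # ~10 requests queued
```

This is why the queue-length graph is often the earliest signal of saturation: IOPS flatten out, latency climbs, and their product, the backlog, grows.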