Disk Response Time
For 10 minutes IOMeter sends a stream of requests to read and write 512-byte data blocks at a request queue depth of 1. The total amount of data processed by the disk subsystem is much larger than its cache, so we get a sustained response time that doesn’t depend on the amount of cache memory.
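The access pattern described above can be sketched in Python as a scaled-down approximation: 512-byte requests at aligned random offsets, one outstanding request at a time, against a file standing in for the disk subsystem. Note the assumptions: the file size, duration, and the 67% read share are illustrative (the article doesn’t state the read/write mix), and a real run would go to the raw device with the OS page cache bypassed, which this sketch does not do.

```python
import os
import random
import tempfile
import time

BLOCK = 512                    # request size from the article
FILE_SIZE = 16 * 1024 * 1024   # stand-in for "much larger than cache"
DURATION = 0.5                 # seconds (the article runs for 600 s)

# Create a sparse test file of the target size.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(FILE_SIZE)
    path = f.name

fd = os.open(path, os.O_RDWR)
latencies = []
deadline = time.perf_counter() + DURATION
while time.perf_counter() < deadline:
    # Queue depth 1: issue one request, wait for it, record its latency.
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    start = time.perf_counter()
    if random.random() < 0.67:  # assumed read/write mix, not from the article
        os.pread(fd, BLOCK, offset)
    else:
        os.pwrite(fd, b"\0" * BLOCK, offset)
    latencies.append(time.perf_counter() - start)
os.close(fd)
os.unlink(path)

print(f"{len(latencies)} requests, mean response time "
      f"{sum(latencies) / len(latencies) * 1000:.3f} ms")
```

Because the OS cache serves these requests, the sketch reports microsecond-scale numbers; the point is the measurement structure, not the absolute figures.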
Read response time is a peculiar parameter. On the one hand, it is too concise a characteristic of a disk subsystem’s performance: it is next to impossible to draw any conclusions from this parameter alone. On the other hand, it indicates the disk subsystem’s reaction speed, i.e. how quickly it can react to a request and let the system use the requested data. With RAID arrays, this parameter depends first of all on the response time of the hard disk drives employed, though the controller contributes to it, too. The 3ware and Areca controllers are somewhat better than the others, being about half a millisecond faster than the worst controller (the Promise). This may seem a trifle, but a 0.5-millisecond advantage is actually very hard to achieve when the response time is below 7 milliseconds. In other words, these two controllers enjoy an 8% advantage in this test.
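The roughly 8% figure follows directly from the numbers given. The exact response times below are assumed round values consistent with the article’s "about half a millisecond faster, below 7 milliseconds":

```python
# Assumed values: the fastest controllers respond in ~6.5 ms,
# the slowest (the Promise) in ~7.0 ms.
best = 6.5   # ms
worst = 7.0  # ms

advantage = (worst - best) / best * 100
print(f"{advantage:.1f}% advantage")  # prints "7.7% advantage", i.e. roughly 8%
```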
We would also like to note the strong performance of the HighPoint and LSI with the mirror-based arrays. These controllers win a full millisecond by effectively choosing the “luckier” disk in each mirror pair. This excellent result points to well-tuned firmware algorithms that other developers should try to copy. RAID10 arrays are commonly used for storing databases, so if your database doesn’t fit entirely into the server’s system memory, you should not neglect this opportunity to improve its performance.
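The read-balancing idea behind picking the “luckier” disk can be sketched as follows. One common heuristic is to send each read to the mirror member whose head is estimated to be closest to the requested block (shortest seek). The class names and the distance metric here are illustrative, not the vendors’ actual firmware logic:

```python
class MirrorMember:
    """One drive of a mirror pair; tracks its last serviced position."""

    def __init__(self, name: str):
        self.name = name
        self.head_lba = 0  # logical block address last serviced

    def read(self, lba: int) -> str:
        self.head_lba = lba
        return f"{self.name} served LBA {lba}"


def pick_member(mirror: list, lba: int) -> MirrorMember:
    # "Luckier" disk = smallest estimated seek distance to the target.
    return min(mirror, key=lambda m: abs(m.head_lba - lba))


mirror = [MirrorMember("disk0"), MirrorMember("disk1")]
# disk0 ends up parked near LBA 500000, so a read near LBA 100
# gets routed to disk1, whose head never moved away from the start.
print(pick_member(mirror, 1000).read(1000))
print(pick_member(mirror, 500_000).read(500_000))
print(pick_member(mirror, 100).read(100))
```

Real firmware can also weigh queue depth and rotational position, but even this simple distance heuristic shows how a mirror can answer reads faster than a single drive.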
You might have predicted the leader in terms of write response time: it is the Adaptec, which showed excellent write performance throughout IOMeter: Database. This controller is a little better than its same-class opponents with every array type.
The HighPoint and Promise do poorly with RAID5 and RAID6. The former has problems with deferred writing, while the latter has no deferred writing at all, with a huge write response time as the consequence.
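Why missing deferred writing hurts so badly on RAID5 can be illustrated with a toy latency model. With deferred (write-back) caching, the controller acknowledges a write as soon as it lands in cache; without it, a small RAID5 write must read the old data and old parity, then write the new data and new parity, putting roughly two disk accesses on the critical path. The millisecond figures below are assumed round numbers, not measurements from the article:

```python
CACHE_ACK_MS = 0.05  # assumed time to accept a request into controller cache
DISK_IO_MS = 7.0     # assumed single-disk access time


def raid5_small_write_ms(deferred_writing: bool) -> float:
    if deferred_writing:
        # Write-back: acknowledged from cache; the read-modify-write
        # parity update happens later, off the critical path.
        return CACHE_ACK_MS
    # Write-through: read old data + old parity (in parallel), then
    # write new data + new parity (in parallel) -> ~2 serialized accesses.
    return 2 * DISK_IO_MS


print(f"with deferred writing:    {raid5_small_write_ms(True):.2f} ms")
print(f"without deferred writing: {raid5_small_write_ms(False):.2f} ms")
```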
Take a look at the results of the Areca with 2GB of memory. In three cases its performance is the same as with 512MB, but the 2GB configuration looks much better with RAID6. Perhaps this controller just needs a higher load to make full use of those 2 gigabytes of cache. Our test conditions may be viewed as easy on the controllers because each disk has an individual data channel and we don’t use expanders. Perhaps we would see the benefit of 2GB of memory if there were multiple disks on each channel and the 3Gbps bandwidth were not enough for all of them (this is not a far-fetched situation but a real case when a rack with two dozen disks is attached to a single external connector). Alas, we have no opportunity to check this supposition, and the 2GB cache hasn’t shown anything exceptional as yet.
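A back-of-the-envelope calculation shows why a shared 3Gbps link starves two dozen disks. A 3Gbps SATA/SAS link carries roughly 300MB/s of payload after 8b/10b encoding; the per-drive streaming rate used for comparison is an assumed figure typical of drives of that era:

```python
# 3 Gbps line rate; 8b/10b encoding means 10 line bits per data byte.
LINK_MB_PER_S = 3_000_000_000 / 10 / 1_000_000  # = 300 MB/s of payload
DISKS = 24                                      # "a rack with two dozen disks"

per_disk = LINK_MB_PER_S / DISKS
print(f"{per_disk:.1f} MB/s per disk")  # prints "12.5 MB/s per disk"
# Far below the ~100 MB/s (assumed) a single drive can stream on its own,
# so a large controller cache could smooth out the contention.
```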