Performance in Intel IOMeter
In the Database pattern the disk array processes a stream of requests to read and write 8KB data blocks at random addresses. The percentage of reads varies from 0% to 100% throughout the test, while the request queue depth varies from 1 to 256.
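To make the workload concrete, here is a minimal sketch of a Database-pattern request generator. The function name, the 100GB disk size, and the request count are my own illustrative assumptions, not IOMeter internals; only the 8KB random-address blocks and the sweep of the read percentage come from the test description.

```python
import random

def database_pattern(total_requests, read_share, block_size=8 * 1024,
                     disk_size=100 * 2**30):
    """Sketch of an IOMeter-like 'Database' stream: random-address 8KB
    blocks with a given share of reads. Names/sizes are illustrative."""
    blocks = disk_size // block_size
    for _ in range(total_requests):
        op = "read" if random.random() < read_share else "write"
        # Pick a random block-aligned offset anywhere on the disk
        offset = random.randrange(blocks) * block_size
        yield op, offset, block_size

# The test sweeps the read percentage from 0% to 100%:
for read_pct in range(0, 101, 10):
    stream = database_pattern(1000, read_pct / 100)
    reads = sum(1 for op, _, _ in stream if op == "read")
```

At each point of the sweep the controller sees a different mix of reads and writes, which is what the graphs discussed below plot along their horizontal axis.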
You can view the tabled results via the following links:
- IOMeter Database results for RAID0
- IOMeter Database results for RAID10 and RAID5
I will discuss the results for queue depths of 1, 16, and 256, grouped by array type.
One data thread and the simplest array type: there is nothing to optimize and nothing to spoil. As a result, all the controllers deliver similar performance. A couple of facts are worth noting, though. First, the modern PCI-E 4x models take the lead at high percentages of writes. Second, the AAR-1420SA and AAR-2820SA are slower than the others irrespective of the load.
It is more interesting with the RAID10. The simplest controller, the AAR-1430SA, copes best with this combination of array type and load. Only at high percentages of writes is it overtaken by the ASR-3405, probably because the latter has a cache buffer and can perform deferred writing. The AAR-2820SA is about as fast as the ASR-3405 at high percentages of reads but slows down at writing (perhaps it has problems caching random write requests: the AAR-2820SA has a weaker processor than the ASR-3405, and the computational load in this mode may hurt its performance). The ASR-44300 is the slowest in this group, showing a sudden performance slump in the most complex part of the load range, where reads and writes come in roughly equal amounts.
The two RAID5-capable controllers each show distinctive algorithms. As you know, the ideal graph here is nearly flat, with a smooth slope in its right part.
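The shape of a RAID5 graph follows from the small-write penalty: a random write costs four back-end disk operations (read old data, read old parity, write new data, write new parity), unless the controller defers and coalesces writes in its cache. A naive back-of-envelope model, with assumed per-disk numbers, shows why performance falls as the share of writes grows:

```python
def raid5_iops(disk_iops, n_disks, read_fraction, write_penalty=4):
    """Naive RAID5 random-access model: a read hits one disk, a small
    write costs `write_penalty` disk operations (read-modify-write).
    Purely illustrative; real controllers cache and coalesce writes."""
    total = disk_iops * n_disks  # aggregate back-end operations per second
    # Average disk operations consumed per host request
    cost = read_fraction + (1 - read_fraction) * write_penalty
    return total / cost

# Example with four disks at an assumed 150 IOps each:
# pure reads:  raid5_iops(150, 4, 1.0) -> 600.0
# pure writes: raid5_iops(150, 4, 0.0) -> 150.0
```

A controller that approaches this smooth curve, without dips or jags, is handling the parity calculations and write caching cleanly; a jagged graph hints at firmware problems.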
Now I will examine the controllers under a heavier load, at a queue depth of 16 commands.
The two PCI-E 4x controllers are again the best with the RAID0, behaving in a perfectly normal way. The ASR-44300 is good too, but its results sag somewhat at high percentages of writes. The AAR-1420SA is an unpleasant surprise, being far slower than its mate.
The AAR-2820SA is a surprise, too: it is about 100 IOps slower than the leaders at every ratio of reads to writes. Its processor and buffer memory must indeed be doing something wrong.
Things are confusing with the RAID10. The ASR-3405 and AAR-1430SA are on top again, the latter being better at pure reading and the former at every other load. The ASR-44300 delivers superb performance at pure reading but slows down as soon as writes appear, accelerating a little only at high percentages of writes. The AAR-2820SA is again about 100 IOps behind the leaders.
The ASR-3405 shows a nearly ideal picture of RAID5 performance at a queue depth of 16. The AAR-2820SA is not as good: it is slower, and its jagged graph indicates flaws in the controller's firmware.
The RAID0 results become very odd when the queue grows to 256 requests. The only controllers to cope with this load successfully are the AAR-1430SA and the ASR-44300 (the latter slows down somewhat at high percentages of writes). The ASR-3405 is surprisingly poor compared with its previous results. The AAR-2820SA is the worst controller again, delivering rather low performance.
The ASR-3405 has problems reordering read requests to the RAID10, just as it had with the RAID0. As a result, it is inferior to the simpler AAR-1430SA, which takes the lead. The ASR-44300 is slower than the leader, too; just as at a queue depth of 16, it slows down at loads with 10% to 60% writes. The AAR-2820SA is very slow again.
The RAID5 graphs are almost the same as at the shorter queue except that the performance of the controllers is somewhat higher now.