Performance in Intel IOMeter: Sequential Read & Write Patterns
In this test, IOMeter sends a stream of read (or write) requests to the array at a request queue depth of 4. The size of the data block changes every minute, so we can see how the sequential read/write speed depends on the data block size. The results for WriteBack mode are given in the table below:
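To make the access pattern concrete, here is a minimal Python sketch of a sequential-read sweep with four outstanding requests. It is an illustration only, not the IOMeter workload itself: the target path, block-size range, and per-step duration are assumptions, and the sketch reads through the OS cache, so its absolute numbers would differ from the controller measurements.

    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical target; point this at a raw device or a large file.
    TARGET = "/dev/sda"
    QUEUE_DEPTH = 4                                # matches the test's queue depth
    BLOCK_SIZES = [512 * 2**s for s in range(12)]  # 512 bytes .. 1MB
    STEP_SECONDS = 5.0                             # the test ran each size for a minute

    def read_chunk(fd, offset, size):
        # pread lets several workers read from one descriptor concurrently
        return len(os.pread(fd, size, offset))

    fd = os.open(TARGET, os.O_RDONLY)
    with ThreadPoolExecutor(max_workers=QUEUE_DEPTH) as pool:
        for bs in BLOCK_SIZES:
            offset, done = 0, 0
            start = time.monotonic()
            while time.monotonic() - start < STEP_SECONDS:
                # keep QUEUE_DEPTH requests in flight (batch-wise approximation)
                futures = [pool.submit(read_chunk, fd, offset + i * bs, bs)
                           for i in range(QUEUE_DEPTH)]
                done += sum(f.result() for f in futures)
                offset += QUEUE_DEPTH * bs
            mb_s = done / (time.monotonic() - start) / 2**20
            print(f"{bs:>8} B blocks: {mb_s:8.1f} MB/s")
    os.close(fd)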
The diagrams below show how performance scales with the number of disks for two groups of RAID arrays.
The RAID0 arrays did well in this test. Arrays made up of more drives enjoy an advantage only when the requested data block is large, i.e. when the controller can split it into several smaller blocks and use the hard drives in parallel. The 2- and 3-disk arrays reached their maximum speed at a data block size of just 64KB, and the 4-disk array at 128KB blocks. The read speed scales almost ideally with the number of disks in the array.
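To illustrate the splitting argument, here is a small sketch of how a striping controller might map one logical request onto its member drives. The 64KB stripe unit and the round-robin layout are assumptions for the sake of the example, not the controller's documented behavior.

    STRIPE_UNIT = 64 * 1024  # assumed bytes per drive before moving to the next

    def split_request(offset, size, num_drives, unit=STRIPE_UNIT):
        """Map a logical request to (drive, offset_on_drive, length) chunks."""
        chunks = []
        while size > 0:
            stripe_idx, within = divmod(offset, unit)  # which unit, where inside it
            drive = stripe_idx % num_drives            # round-robin across drives
            row = stripe_idx // num_drives             # stripe row on that drive
            length = min(unit - within, size)          # stay inside this unit
            chunks.append((drive, row * unit + within, length))
            offset += length
            size -= length
        return chunks

    # A 16KB request lands on a single drive, so striping gains nothing:
    print(split_request(0, 16 * 1024, 4))
    # A 256KB request covers all four drives, which can then work in parallel:
    print(split_request(0, 256 * 1024, 4))

In this scheme a request smaller than the stripe unit touches only one drive, so the extra spindles begin to help only once the blocks grow large enough to span several units, which matches the behavior described above.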
The mirroring RAID1 and RAID10 arrays perform much worse: their speeds are similar to those of the single drive and the 2-disk RAID0, respectively. Moreover, in some modes, especially at large data block sizes, the read speeds of the mirroring arrays fall far below the speeds of the configurations they are mirrors of.
The RAID5 arrays don’t look very good, either. Their behavior at blocks of 64KB and smaller is quite explicable, but the read speed slump at 128KB blocks is beyond our understanding.
The WriteThrough results are given in the table below:
Next, we will compare the performance of the 4-disk arrays under the two caching policies.
Caching should have no effect on the results here since no write requests are involved. This is generally the case, yet for each array there are one or two data block sizes where the difference between its WriteBack and WriteThrough speeds is bigger than measurement error can explain.