Performance in Intel IOMeter Sequential Read and Write Patterns
This pattern helps us explore how the controller handles streams of sequential read/write requests. The array receives a stream of read/write requests with a request queue depth of 4. Every minute the size of the data block changes, so we can see how the linear read/write speed depends on the size of the data block. The results (the correlation between the controller's data-transfer rate and the size of the data block) are listed in the following tables:
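The pattern described above can be approximated in a few lines of code. The sketch below is a simplified, single-threaded stand-in for the IOMeter access specification: it reads a file sequentially in fixed-size blocks for a given duration and reports the throughput (real IOMeter keeps 4 requests outstanding, which plain synchronous Python I/O cannot reproduce; the function name and interface are illustrative, not part of IOMeter):

```python
import time

def sequential_read_mbps(path, block_size, duration=60.0):
    """Read `path` sequentially in `block_size` chunks for `duration`
    seconds and return the average throughput in MB/s.

    Single-threaded approximation of the sequential-read pattern;
    the real test keeps a queue depth of 4 outstanding requests.
    """
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while time.monotonic() - start < duration:
            chunk = f.read(block_size)
            if not chunk:      # reached end of file: wrap around
                f.seek(0)
                continue
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / elapsed / 2**20
```

Running this once per block size (64 bytes up to 1MB, as in the tables below) reproduces the shape of the measurement, if not the queue-depth behavior.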
We split the arrays into two groups and build diagrams:
The advantages of RAID arrays that consist of many HDDs become apparent when the data block is big enough, that is, when the request is so large that the controller can break it into several smaller blocks and distribute them among the drives of the array, which work in parallel. The two-disk RAID0 reaches its maximum speed only on 512KB blocks. The three- and four-drive arrays didn't reach their maximum even on 1MB blocks.
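The splitting step described above is ordinary RAID0 striping. As a sketch (the 64KB stripe size in the example is an assumption; the review does not state the controller's stripe size), here is how one logical request maps onto per-drive chunks:

```python
def stripe_request(offset, size, stripe_size, n_drives):
    """Split one logical request into per-drive chunks, as a RAID0
    controller would. Returns a list of (drive, drive_offset, length).
    Textbook striping; actual controller firmware may differ."""
    chunks = []
    pos = offset
    end = offset + size
    while pos < end:
        stripe_index = pos // stripe_size
        drive = stripe_index % n_drives            # round-robin over drives
        within = pos % stripe_size                 # offset inside the stripe
        length = min(stripe_size - within, end - pos)
        drive_offset = (stripe_index // n_drives) * stripe_size + within
        chunks.append((drive, drive_offset, length))
        pos += length
    return chunks
```

With 64KB stripes, a 512KB request on a two-disk array produces eight chunks, four per drive, so both drives work in parallel, while a 64KB request lands on a single drive and gains nothing from the array.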
The graph of the mirror RAID1 array, as in the DataBase pattern, closely resembles that of the single drive, while the graph of RAID10 coincides with the graph of the two-disk RAID0 until the performance slump at reading 128KB data blocks. It's quite logical to infer that the algorithm of optimized reading from the mirror doesn't work with sequential requests.
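One way to see why optimized mirror reading helps random loads but not a sequential stream: if the controller alternates requests between the two copies, each drive ends up serving every other block, so neither drive can stream contiguously from the platter. A toy illustration (the round-robin policy here is an assumption; the controller's actual algorithm is undocumented):

```python
def mirror_round_robin(n_blocks):
    """Distribute a sequential stream of block reads between the two
    copies of a RAID1 mirror in round-robin fashion and return the
    block list each drive sees."""
    per_drive = {0: [], 1: []}
    for block in range(n_blocks):
        per_drive[block % 2].append(block)
    return per_drive
```

Each drive receives blocks with gaps between them (0, 2, 4, ... and 1, 3, 5, ...), so the heads must skip over the blocks served by the twin drive instead of reading straight through, which cancels the expected gain.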
When we disabled the lazy write algorithms for the disks, we found the following results:
The lack of write requests should mean that the status of the lazy write algorithm (on/off) doesn't influence the controller speed. This is largely what we see, with some reservations: for each array there are one or two block sizes where the difference in speed with and without disk caching exceeds the measurement error. I couldn't find a reasonable explanation for this fact…