Performance in Intel IOMeter DataBase Pattern
As usual, we begin by checking the controller's performance when processing mixed request streams.
This pattern sends a mixed stream of read and write requests for 8KB data blocks at random addresses. By varying the ratio of reads to writes, we can see how well the controller driver sorts the requests out.
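To make the workload concrete, here is a minimal sketch of what the DataBase pattern amounts to: 8KB transfers at random aligned offsets, with the write share swept across the test. The function name and parameters are illustrative, not IOMeter's actual internals.

```python
import random

BLOCK_SIZE = 8 * 1024  # 8KB blocks, as in the DataBase pattern

def make_requests(n_requests, write_share, disk_size, rng=random.Random(42)):
    """Generate a DataBase-like mix: 8KB requests at random aligned offsets.

    write_share is the fraction of writes; the benchmark sweeps it from
    0.0 (pure random read) to 1.0 (pure random write).
    """
    max_block = disk_size // BLOCK_SIZE
    requests = []
    for _ in range(n_requests):
        op = "write" if rng.random() < write_share else "read"
        offset = rng.randrange(max_block) * BLOCK_SIZE  # random, 8KB-aligned
        requests.append((op, offset, BLOCK_SIZE))
    return requests

# One batch of requests at a 30% write share against a 10GB target:
queue = make_requests(1000, 0.3, 10 * 1024**3)
writes = sum(1 for op, _, _ in queue if op == "write")
```

Each point on the diagrams below corresponds to one such write share, measured at several queue depths.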
The largest table comes first: the results of the controller in WriteBack mode:
The following diagrams show the dependence of the data-transfer rate on the reads/writes ratio for different request queue depths. For easier reading, I created two diagrams:
All arrays show similar speeds under a linear workload at the beginning of the graph (the Random Read mode). However, the “mirror” arrays, RAID1 and RAID10, are faster thanks to TwinStor technology, which doesn’t simply alternate read requests between the two disks of a mirror pair but distributes them intelligently: it determines which disk will serve a given request faster based on the current position of its read/write heads (or, more precisely, based on the request history).
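3ware has not published the TwinStor algorithm, but the idea described above can be sketched with a simple seek-distance heuristic: track the last address each disk serviced (as a proxy for head position) and route each read to the disk with the shorter estimated seek. The function and its logic are an assumption for illustration only.

```python
def pick_mirror_disk(last_lba, target_lba):
    """Choose which disk of a mirror pair should serve a read request.

    last_lba: most recently accessed LBA per disk, inferred from the
    request history (a rough proxy for head position).
    target_lba: address of the incoming read request.
    Returns the index of the disk with the shortest estimated seek.
    """
    seeks = [abs(target_lba - lba) for lba in last_lba]
    return seeks.index(min(seeks))

# Disk 0's head is near LBA 1000, disk 1's near LBA 900000;
# a read at LBA 5000 should therefore go to disk 0.
heads = [1000, 900000]
chosen = pick_mirror_disk(heads, 5000)
```

A plain round-robin scheme would send half of all reads on long seeks; a history-based scheme like this is what lets a mirror outrun a single disk on random reads.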
The performance of the single drive goes up as write requests become more frequent in the queue (in our case, as the probability of such a request grows). The speed of RAID0 grows in proportion to the number of disks in the array, but this proportionality only holds in modes with a large share of write requests. The RAID1 and RAID10 arrays also speed up, but more slowly. The RAID5 arrays produce a downward-sloping graph: write requests impede them greatly, and the more write requests are in the queue, the harder the controller's job becomes. Curiously, though, at a very large share of writes the controller even picks up its pace a little.
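The shapes of these graphs follow from how many physical disk operations each logical request costs: a RAID0 write is one I/O, a mirrored write is two, and a small random RAID5 write is a read-modify-write cycle of roughly four I/Os (read old data, read old parity, write new data, write new parity). The rough model below makes that arithmetic explicit; it deliberately ignores the controller's cache and request reordering, so the numbers are illustrative, not measured.

```python
def relative_iops(n_disks, write_share, level):
    """Back-of-the-envelope random-IOPS scaling for common RAID levels,
    in units of a single disk's IOPS (no caching or reordering modeled).
    """
    reads = 1.0 - write_share
    if level == "raid0":
        cost = reads + write_share          # one disk I/O per request
    elif level in ("raid1", "raid10"):
        cost = reads + 2 * write_share      # each write hits both mirrors
    elif level == "raid5":
        cost = reads + 4 * write_share      # read-modify-write: ~4 I/Os
    else:
        raise ValueError(level)
    return n_disks / cost

# At a 50% write share, four disks in RAID5 deliver well under half
# the random IOPS of the same four disks in RAID0:
r0 = relative_iops(4, 0.5, "raid0")   # → 4.0
r5 = relative_iops(4, 0.5, "raid5")   # → 1.6
```

The small uptick RAID5 shows at a very large share of writes is plausibly the WriteBack cache coalescing neighboring writes into full-stripe updates, which sidesteps the read-modify-write penalty; that effect is exactly what this simple model leaves out.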