Performance in Intel IOMeter DataBase Pattern
As usual, we will start the discussion of our controller's performance with the pattern that generates the largest amount of data.
As you remember, in this pattern we test how fast the controller can process a mixed request stream consisting of 8KB reads and writes issued at random addresses. By changing the reads-to-writes ratio we can figure out how well the controller sorts reads from writes.
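The workload just described can be sketched in a few lines of code. This is only an illustration of the test's structure, not IOMeter itself; the function name, the address span, and the request count are assumptions made for the example.

```python
import random

def database_pattern(num_requests, write_share,
                     block_size=8 * 1024, span=64 * 1024 * 1024):
    """Sketch of an IOMeter-style DataBase pattern: 8KB requests at
    random block-aligned addresses with a configurable write share.
    (Illustrative only; the span and names are assumptions.)"""
    requests = []
    for _ in range(num_requests):
        op = "write" if random.random() < write_share else "read"
        # Pick a random block-aligned address within the test span
        address = random.randrange(0, span // block_size) * block_size
        requests.append((op, address, block_size))
    return requests

# The test sweeps the write share, e.g. from 0% (pure reads) to 100%
for share in range(0, 101, 10):
    stream = database_pattern(1000, share / 100)
```

Each sweep step keeps the block size and randomness fixed and varies only the reads-to-writes ratio, which is exactly what lets the charts below isolate the controller's request-sorting behavior.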
As always, a detailed performance chart in Write Through mode comes first:
Now have a look at the graphs:
Note that under linear workload (one outstanding request) the performance of the RAID 1 array of two hard disk drives is almost identical to that of a JBOD, while the RAID 01 array runs as fast as a RAID 0 array of two HDDs.
It looks as if the controller in Write Through mode did not hold requests back for processing but dispatched them to the drives immediately, which is why read request interleaving between the drives of the mirrored pair simply did not work at all.
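The interleaving technique that apparently did not kick in here can be sketched as a simple round-robin dispatcher for a mirrored pair. This is a toy model of the general RAID 1 read-balancing idea, not the controller's actual firmware logic:

```python
class MirrorPair:
    """Toy model of request routing on a RAID 1 mirror of two drives.
    (A sketch of the general technique, not this controller's code.)"""
    def __init__(self):
        self.next_drive = 0

    def route_read(self):
        # Alternate reads between the two copies so that, under a deep
        # queue, both spindles can seek and read in parallel
        drive = self.next_drive
        self.next_drive ^= 1
        return drive

    def route_write(self):
        # A write always goes to both drives to keep the mirror consistent
        return (0, 1)

pair = MirrorPair()
reads = [pair.route_read() for _ in range(4)]  # -> [0, 1, 0, 1]
```

With only one outstanding request there is never a second read waiting to be routed to the idle drive, which is consistent with the RAID 1 array performing like a single-drive JBOD in this mode.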
At the same time, as the share of writes increases, the major contribution to the array performance is made not by the controller but by the WD360GD hard disk drives, or, to be more exact, by the lazy write algorithms of the Raptors…
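The lazy write mechanism mentioned here can be sketched as a deferred write-back buffer: the drive reports a write as complete once it is in the buffer and commits the accumulated writes to the platters later, in address order, to shorten seeks. This is an illustration of the general idea only, not Western Digital's actual firmware; the class name and capacity are assumptions.

```python
class LazyWriteCache:
    """Toy model of a drive's lazy (deferred) write algorithm.
    (Illustrates the general concept, not WD's implementation.)"""
    def __init__(self, capacity=16):
        self.buffer = []
        self.capacity = capacity

    def write(self, address, data):
        # Report completion as soon as the request lands in the buffer
        self.buffer.append((address, data))
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        # Committing in address order lets the head sweep the platter
        # in one direction instead of seeking back and forth
        for address, data in sorted(self.buffer):
            pass  # the actual media write would happen here
        self.buffer.clear()
```

Because write completion is decoupled from the mechanical work, the drive's own caching rather than the controller comes to dominate the results as the write share grows.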
During pure reading (when there are no writes at all), all arrays demonstrate very similar performance: their speed is determined mainly by the access time of the WD Raptor drives.