Performance in Intel IOMeter
In the Database pattern the disk array processes a stream of requests to read and write 8KB data blocks at random addresses. The ratio of reads to writes varies from 0% to 100% over the course of the test, while the request queue depth changes from 1 to 256.
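The workload just described can be sketched as a small request generator. This is a hypothetical model, not IOMeter itself: the test-area size and the random seed are assumptions, and only the 8KB block size and the read/write mix come from the text above.

```python
import random

# Minimal sketch (assumed parameters) of one Database-pattern workload mix:
# 8KB requests at random addresses with a given write percentage.
BLOCK = 8 * 1024        # 8KB request size, as in the Database pattern
DISK_BLOCKS = 1 << 20   # assumed size of the test area, in blocks

def make_requests(write_share, count, rng=random.Random(0)):
    """Yield (op, byte_offset) pairs for a given write percentage."""
    for _ in range(count):
        op = "write" if rng.random() * 100 < write_share else "read"
        # random-address access: any aligned 8KB block in the test area
        yield op, rng.randrange(DISK_BLOCKS) * BLOCK

# e.g. the 60%-writes point of the test
reqs = list(make_requests(write_share=60, count=1000))
```

Sweeping `write_share` from 0 to 100 and the number of outstanding requests from 1 to 256 reproduces the whole test grid.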
We’ll discuss the results for queue depths of 1, 16 and 256.
At high percentages of reads all the arrays perform similarly. The differences only show up in the right part of the diagram, where deferred writing influences the results. The speed of the RAID0 arrays grows with the number of disks in them, but strict proportionality is only observed at 80% writes and higher.
It’s interesting to watch the behavior of the mirrored arrays, RAID1 and RAID10. The former stays ahead of the single disk up to high percentages of writes. The same is true for the RAID10 in comparison with the two-disk RAID0, but it suffers a sudden slump at 60% writes.
The RAID5 arrays are no good at writing data due to the checksum calculation overhead. Note that the performance of this array type depends little on the number of disks in it. With RAID5 technology, each random write involves two read operations, two XOR operations, and two writes. All of these disk operations touch exactly two disks, irrespective of the total number of disks in the array. The controller can reduce the disk load by reading a full stripe when processing sequential data, but it has to stick to the 2R-2X-2W method for random addresses.
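The 2R-2X-2W sequence can be illustrated with a short sketch. This is a simplified model of the read-modify-write cycle, not any controller's actual firmware; the function name and the bytearray stand-ins for member drives are our own.

```python
# Illustrative sketch of a RAID5 random-block update: read old data and old
# parity (2R), XOR the old data out of the parity and the new data in (2X),
# then write new data and new parity (2W). Only two disks are touched, no
# matter how wide the array is -- which is why RAID5 write speed barely
# depends on the disk count.

def raid5_random_write(disks, stripe, data_idx, parity_idx, new_data):
    """Update one data block and its parity in place."""
    block = len(new_data)
    off = stripe * block

    old_data = disks[data_idx][off:off + block]        # 1R
    old_parity = disks[parity_idx][off:off + block]    # 2R

    # 1X: remove the old data's contribution; 2X: fold in the new data
    new_parity = bytes(p ^ od ^ nd
                       for p, od, nd in zip(old_parity, old_data, new_data))

    disks[data_idx][off:off + block] = new_data        # 1W
    disks[parity_idx][off:off + block] = new_parity    # 2W

# Tiny demo: a 4-disk array with 4-byte "blocks", disk 3 holding parity
disks = [bytearray(b"\x01\x02\x03\x04"),
         bytearray(b"\x05\x06\x07\x08"),
         bytearray(b"\x09\x0a\x0b\x0c"),
         bytearray(4)]
for i in range(4):  # initialise parity = XOR of all data blocks
    disks[3][i] = disks[0][i] ^ disks[1][i] ^ disks[2][i]

raid5_random_write(disks, stripe=0, data_idx=1, parity_idx=3,
                   new_data=b"\xff\xff\xff\xff")
# The parity invariant survives the update
assert all(disks[3][i] == disks[0][i] ^ disks[1][i] ^ disks[2][i]
           for i in range(4))
```

Note that the two untouched data disks never take part in the update, which is the whole point: adding more disks to a RAID5 array adds capacity and read spindles, but not random-write speed.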
When the queue is increased to 16 requests, the picture changes because the controller now has the opportunity to load all the disks of the array. The RAID0 arrays show performance growth proportional to the number of disks under any load, although the four-disk RAID0 doesn’t speed up as much as you might expect judging by the same-type arrays with fewer disks. These arrays also gain speed at high percentages of writes, but this trend is less conspicuous here because the left part of the graphs has risen too.
The mirrored RAID1 and RAID10 arrays are faster at reading: they are always ahead of the single disk and the two-disk RAID0, respectively, in every test mode except writes-only. Note that the RAID10 proves to be the fastest array in the reads-only mode, outperforming even the four-disk RAID0, while the RAID1 is considerably faster than the two-disk RAID0 in this mode.
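The read advantage of mirrors is easy to model. The sketch below is a simplified scheduler of our own invention, not the tested controller's logic: since every block exists on both members of a mirror, the controller can send each read to the less-loaded disk, roughly halving the per-disk queue, while writes get no such benefit because they must hit both disks.

```python
# Toy model of read dispatch on a 2-disk mirror vs. a single disk:
# each read goes to whichever mirror member has the shorter queue.

def dispatch_reads(num_reads):
    """Return (per-disk queues on a mirror, queue on a single disk)."""
    mirror = [0, 0]
    for _ in range(num_reads):
        target = 0 if mirror[0] <= mirror[1] else 1
        mirror[target] += 1
    single = num_reads  # a lone disk queues every request itself
    return mirror, single

mirror, single = dispatch_reads(16)
# The load splits evenly: each mirror member services half the reads
```

With a deep queue the controller always has pending reads to hand to both spindles, which is why the mirrored arrays only pull ahead of their striped counterparts once the queue depth grows.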
The RAID5 arrays are about as fast as the RAID0 arrays with the same number of disks at random reading, but slow down abruptly as soon as there are write requests to process. As a result, they are slower than the single disk or any other array at 70% writes and higher. Interestingly, the three-disk RAID5 is somewhat faster than the four-disk one in the writes-only mode.
There are few changes in the standings when the request queue is increased further, but all the arrays deliver even higher speeds when processing read requests. The four-disk RAID0 and RAID5 pull ahead of the RAID10 in the reads-only mode.
Here are diagrams that show the performance of each array at five different queue depths:
- Database, 1HDD (graph)
- Database, RAID0, 2HDD (graph)
- Database, RAID0, 3HDD (graph)
- Database, RAID0, 4HDD (graph)
- Database, RAID1, 2HDD (graph)
- Database, RAID10, 4HDD (graph)
- Database, RAID5, 3HDD (graph)
- Database, RAID5, 4HDD (graph)