Performance in Intel IOMeter

Database Patterns

In the Database pattern the disk array processes a stream of requests to read and write 8KB random-address data blocks. The proportion of write requests changes from 0% to 100% in 10% steps throughout the test, while the request queue depth varies from 1 to 256.
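As a side note, the shape of this workload is easy to reproduce outside of IOMeter. The following Python sketch (POSIX-only; the file name, file size and runtime are our own arbitrary choices) issues random-address 8KB reads and writes at a given write percentage from several concurrent threads that stand in for the queue depth. Unlike IOMeter it does not bypass the OS page cache, so it only illustrates the access pattern rather than reproduces our numbers.

    import os, random, threading, time

    BLOCK = 8 * 1024                    # 8KB blocks, as in the Database pattern
    FILE_SIZE = 256 * 1024 * 1024       # size of the test file (assumption)
    PATH = "testfile.bin"               # hypothetical test file name
    RUNTIME = 10                        # seconds per data point

    def worker(write_pct, stop, counter, lock):
        fd = os.open(PATH, os.O_RDWR)
        buf = os.urandom(BLOCK)
        blocks = FILE_SIZE // BLOCK
        done = 0
        while not stop.is_set():
            offset = random.randrange(blocks) * BLOCK    # random-address access
            if random.random() * 100 < write_pct:
                os.pwrite(fd, buf, offset)
            else:
                os.pread(fd, BLOCK, offset)
            done += 1
        os.close(fd)
        with lock:
            counter[0] += done

    def run_point(write_pct, queue_depth):
        # One thread per outstanding request stands in for the queue depth setting.
        stop, lock, counter = threading.Event(), threading.Lock(), [0]
        threads = [threading.Thread(target=worker, args=(write_pct, stop, counter, lock))
                   for _ in range(queue_depth)]
        for t in threads:
            t.start()
        time.sleep(RUNTIME)
        stop.set()
        for t in threads:
            t.join()
        return counter[0] / RUNTIME                      # requests per second

    if __name__ == "__main__":
        with open(PATH, "wb") as f:                      # create the test file once
            f.truncate(FILE_SIZE)
        for qd in (1, 16, 256):
            for writes in range(0, 101, 10):             # 0% to 100% writes, 10% steps
                print(qd, writes, round(run_point(writes, qd)))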

We will be discussing graphs and diagrams here; the raw results are also available in tabular format.

Everything would be perfectly normal at a queue depth of 1 if it were not for the 8-disk RAID10: both the healthy and the degraded array are too slow at 100% writes. The 4-disk RAID10 does not suffer such a performance hit at 100% writes and, as a result, delivers the same speed as the 8-disk RAID10. We cannot find an explanation for this.

The parity-based arrays (RAID5 and RAID6) are all right, too. The low response time of the HDDs we use and the high performance of the controller help these arrays maintain high speeds at high percentages of writes. Let us remind you that RAID arrays built out of SATA drives on previous-generation controllers slowed down, rather than sped up, in this test as the percentage of writes increased.
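The reason parity arrays are so sensitive to the share of writes is the classic small-write penalty: without effective write-back caching, each random write to a RAID5 turns into four disk operations and to a RAID6 into six. The toy calculation below shows the effect (the per-disk IOPS figure is an assumed, round number, not a measured value); a good controller hides much of this penalty by deferring and coalescing writes in its cache, which is exactly what fast disks and a fast processor help it do.

    def small_write_iops(disks, per_disk_iops, raid_level):
        # Back-end disk operations per logical random write, assuming no caching:
        #   RAID0  - 1 (the data block only)
        #   RAID10 - 2 (both copies in a mirror)
        #   RAID5  - 4 (read old data and parity, write new data and parity)
        #   RAID6  - 6 (the same, but with two parity blocks)
        penalty = {"RAID0": 1, "RAID10": 2, "RAID5": 4, "RAID6": 6}[raid_level]
        return disks * per_disk_iops / penalty

    # Eight drives at roughly 180 random IOPS each (an assumed figure):
    for level in ("RAID0", "RAID10", "RAID5", "RAID6"):
        print(level, round(small_write_iops(8, 180, level)))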

Comparing the LSI controller with the other controllers we have tested in our labs, we note that the LSI handles degraded arrays well, whereas the healthy RAID5 and RAID6 arrays built out of a large number of disks are not very fast on it. Is it the processor's fault, we wonder?

We increase the queue depth to 16 outstanding requests and see the graphs rise. The RAID10 arrays have problems with writing again: it looks as if deferred writing is turned off for them. On the other hand, they can effectively find the "luckier" disk in each mirror: the RAID10 arrays are much faster than the same-size RAID0 at high percentages of reads.
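The "luckier disk" optimization is straightforward to illustrate: for every read request a mirror can pick whichever copy is cheaper to service at that moment. The toy scheduler below is our own illustration (the weighting and field names are invented, not taken from LSI's firmware); losing one member of a mirror removes this choice, which is what we see with the degraded array next.

    SEEK_WEIGHT = 1_000_000    # how many LBAs of head travel one queued request "costs"

    def pick_mirror_member(members, target_lba):
        # Send the read to whichever copy is cheaper to service: closer head
        # position, shorter queue. Real firmware is more elaborate; this only
        # shows why a mirror can beat a same-size stripe at random reads.
        def cost(m):
            return abs(m["head_lba"] - target_lba) + m["queue_len"] * SEEK_WEIGHT
        return min(members, key=cost)

    disk_a = {"name": "A", "head_lba": 10_000_000, "queue_len": 2}
    disk_b = {"name": "B", "head_lba": 90_000_000, "queue_len": 0}
    print(pick_mirror_member([disk_a, disk_b], 85_000_000)["name"])    # prints: B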

The degraded RAID10 behaves in an interesting manner. It goes neck and neck with the healthy array at writing but slows down at reading because it does not have the opportunity to choose the better disk in one of the mirrors. Still, it is better than the RAID0 at pure reading, which is an excellent result.

The RAID5 and RAID6 arrays are not very good at writing, either. Their performance is too low at high percentages of writes, and the 4-disk arrays even lose to the single HDD. The competitor controllers did better in this test.

The arrays are all right at reading, though. Take note of the performance hit caused by degradation: the 8-disk arrays with one failed disk are only as fast at reading as the 4-disk arrays.

A further increase in queue depth does not help the RAID10 arrays: they still show no signs of deferred writing. The degraded array is not doing well at all: the loss of one disk has a fatal effect on its read speed.

The RAID5 and RAID6 arrays draw very neat graphs at the maximum queue depth. However, their performance at high percentages of writes is somewhat lower than we might expect judging by the results of the competitor controllers. The degraded arrays have good read speeds, by the way. The controller seems to have a high-performance processor and effective firmware algorithms, but it has too little onboard memory to do deferred writing efficiently. The competitor controllers had more memory on board.
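To make the memory point concrete: deferred writing only pays off while the controller has room to hold and merge dirty data. The toy model below is purely our own illustration of the principle, not the controller's firmware; shrink the cache and new writes soon have to wait for the disks, which is the kind of behaviour a memory-starved controller shows at high percentages of writes.

    from collections import OrderedDict

    class WriteBackCache:
        # Toy model of deferred writing: a write is acknowledged as soon as it
        # lands in controller memory and is flushed to disk later. Overwrites of
        # a block that is still dirty cost no disk I/O at all. Once the small
        # cache fills up, every new write has to wait for a flush, and behaviour
        # degenerates toward write-through.
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.dirty = OrderedDict()                 # block address -> data

        def write(self, addr, data, flush_to_disk):
            if addr in self.dirty:
                self.dirty.pop(addr)                   # absorbed entirely in cache
            elif len(self.dirty) >= self.capacity:
                old_addr, old_data = self.dirty.popitem(last=False)
                flush_to_disk(old_addr, old_data)      # the caller now waits on a disk
            self.dirty[addr] = data                    # acknowledged immediately

    cache = WriteBackCache(capacity_blocks=4)
    for block in range(8):
        cache.write(block, b"x", lambda a, d: print("forced flush of block", a))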

 