
Performance in Intel IOMeter

Database Patterns

In the Database pattern the disk array processes a stream of requests to read and write 8KB random-address data blocks. The share of reads grows from 0% to 100% in 10% steps over the course of the test, while the request queue depth varies from 1 to 256 requests.
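Below is a minimal Python sketch (our illustration, not the actual IOMeter access specifications) that enumerates this test matrix; the power-of-two queue-depth steps are an assumption, since the text only states the 1-to-256 range.

    # Enumerate the Database-pattern test matrix: 8KB random requests,
    # read share swept from 0% to 100% in 10% steps, queue depth 1 to 256.
    BLOCK_SIZE = 8 * 1024                               # 8KB random-address transfers
    READ_SHARES = range(0, 101, 10)                     # 0%, 10%, ... 100% reads
    QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32, 64, 128, 256]   # assumed power-of-two steps

    for depth in QUEUE_DEPTHS:
        for reads in READ_SHARES:
            # Each (depth, reads) cell is one access pattern: reads% random 8KB reads,
            # the rest random 8KB writes, issued with 'depth' outstanding requests.
            print(f"queue depth {depth:3d}: {reads:3d}% reads / {100 - reads:3d}% writes, "
                  f"{BLOCK_SIZE // 1024}KB random blocks")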

We will be discussing graphs and diagrams, but you can also view the data in tabular format using the following links:

Everything is all right with the RAID0 and RAID10 arrays at a queue depth of 1 request. Deferred writing works properly, so every array shows very good scalability of write performance with the number of disks in it. To remind you, a RAID10 should write about as fast as a RAID0 built out of half as many disks because each mirror pair writes its data synchronously.
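As a rough illustration of that rule of thumb, here is a toy Python model (an idealized estimate, not measured data) of how random-write throughput scales for striped and mirrored arrays:

    def random_write_scaling(level: str, disks: int) -> float:
        """Ideal small-random-write throughput relative to a single disk."""
        if level == "RAID0":
            return disks          # every disk accepts independent writes
        if level == "RAID10":
            return disks / 2      # each mirror pair writes the same data twice
        raise ValueError(level)

    for disks in (4, 8):
        print(f"{disks}-disk RAID0 : x{random_write_scaling('RAID0', disks):.0f}")
        print(f"{disks}-disk RAID10: x{random_write_scaling('RAID10', disks):.0f}")

By this model an 8-disk RAID10 should land at the level of a 4-disk RAID0 at writing.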

The RAID5 and RAID6 arrays have problems at a queue depth of 1 request. They all perform in the same way (the degraded RAID6 arrays are worse than the others, though) and are slower than the single disk. The controller seems to be processing requests without bothering to cache them. But why is there such a sudden performance growth at pure writing, then? Does it mean that the controller’s firmware wakes up and tries to cache requests after all? If so, it does that inefficiently. Overall, this is a serious problem.
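The scale of the problem is easy to see with a back-of-the-envelope calculation. The sketch below uses the textbook small-write penalties (four disk operations per random write for RAID5, six for RAID6) and an assumed per-disk IOPS figure; a controller that writes without caching falls even below such ideal numbers.

    def parity_write_iops(disks: int, disk_iops: float, penalty: int) -> float:
        """Ideal small-random-write IOPS of a parity array with no write-back cache."""
        return disks * disk_iops / penalty

    SINGLE_DISK_IOPS = 300        # assumed figure for a 15,000rpm SAS drive

    print(f"single disk  : {SINGLE_DISK_IOPS} IOPS")
    for disks in (4, 8):
        print(f"{disks}-disk RAID5 : {parity_write_iops(disks, SINGLE_DISK_IOPS, 4):.0f} IOPS")
        print(f"{disks}-disk RAID6 : {parity_write_iops(disks, SINGLE_DISK_IOPS, 6):.0f} IOPS")

Even in this ideal picture a 4-disk RAID6 cannot keep up with a single drive at random writes; without caching, the measured arrays do worse still.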

When the queue is 16 requests long, the controller delivers a predictable performance growth and reveals the specific behavior of its firmware with arrays of different types. For example, we can note that the controller can effectively choose which disk of a mirror pair can read the requested data faster. Thanks to that, the RAID10 arrays are ahead of the RAID0 ones at high percentages of reads. The degraded RAID10 is incapable of that: after the loss of a disk the controller stops looking for the “luckier” disk in a mirror pair (although it might still do that in the healthy pairs).
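The following conceptual sketch (our illustration, not the vendor’s firmware logic) shows the idea behind picking the “luckier” disk: both members of a mirror pair hold the same data, so the read can go to whichever drive promises the shorter seek.

    def pick_mirror_disk(head_positions: list[int], target_lba: int) -> int:
        """Return the index of the mirror member with the shorter expected seek."""
        distances = [abs(pos - target_lba) for pos in head_positions]
        return distances.index(min(distances))

    # Hypothetical current head positions (expressed as LBAs) of the two mirror members.
    heads = [1_000_000, 9_000_000]
    print(pick_mirror_disk(heads, 8_500_000))   # -> 1: disk 1 is closer and serves the read

A degraded pair has only one surviving member, so there is nothing to choose from, which is why the degraded RAID10 loses this advantage.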

The checksum-based arrays accelerate to a normal level at reading but still have problems with writing. They write slowly, especially the degraded arrays. The healthy RAID5 and RAID6 built out of eight disks are not fast enough, either, for arrays based on disks with as low a response time as our Fujitsu MBA3073RC.

When the queue is 256 requests long, the degraded RAID10 is ahead of the 4-disk RAID0, save at pure writing. The driver seems to be looking for the luckier disk in the healthy mirror pairs at such long queue depths. Still, the performance hit of the degraded array in comparison with the healthy one is obvious at high percentages of reads.

In the second group of arrays all the degraded arrays look poor. Their performance is low at both reading and writing. The healthy RAID6 arrays are no good, either. Even the 8-disk RAID6 is worse at writing than the single HDD.

 