
Performance in Intel IOMeter DataBase Pattern

We traditionally start out by checking the controller’s operation with mixed streams of requests.

This pattern sends a stream of requests to read and write 8KB data blocks at random addresses. By varying the ratio of reads to writes we can check how well the controller’s driver reorders them. The results for the controller in WriteBack mode are presented in the table:
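For reference, the access pattern itself is easy to reproduce: 8KB requests at random block-aligned offsets with a configurable share of writes. Here is a minimal Python sketch (the function name and parameters are ours, not IOMeter's):

```python
import random

BLOCK_SIZE = 8 * 1024  # 8KB blocks, as in the DataBase pattern

def database_pattern(num_requests, write_share, disk_size, seed=0):
    """Generate (op, offset) pairs mimicking the DataBase pattern:
    8KB requests at random block-aligned addresses, with the given
    probability of a request being a write."""
    rng = random.Random(seed)
    blocks = disk_size // BLOCK_SIZE
    requests = []
    for _ in range(num_requests):
        op = "write" if rng.random() < write_share else "read"
        offset = rng.randrange(blocks) * BLOCK_SIZE
        requests.append((op, offset))
    return requests
```

Feeding such a stream to the array at different write shares and queue depths reproduces the shape of the test.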

Let’s view these numbers as diagrams showing the dependence of the controller’s speed on the percentage of write requests at queue depths of 1, 16 and 256 requests. For better readability we divide the arrays into two groups.

As the share of write requests increases, deferred (lazy) writing becomes more effective and the speed of the single drive rises. The speed of the RAID0 arrays also grows with the number of disks per array, but it doesn’t scale exactly proportionally to the disk count even in the RandomWrite mode (100% writes). In the RandomRead mode under linear workload all the arrays perform close to each other, yet this time performance is inversely proportional to the number of HDDs in the array.
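The imperfect scaling at low queue depths has a simple combinatorial explanation: with Q requests in flight spread at random over N disks, some disks inevitably receive several requests while others sit idle. A rough balls-in-bins estimate of how many disks work in parallel (our simplification, ignoring the controller's reordering):

```python
def expected_busy_disks(n_disks, queue_depth):
    """Expected number of distinct disks hit by `queue_depth`
    simultaneous random requests striped over `n_disks` disks
    (balls-in-bins formula: N * (1 - (1 - 1/N)**Q))."""
    return n_disks * (1 - (1 - 1 / n_disks) ** queue_depth)
```

At a queue depth of 1 only one disk of the array is ever busy, which is why adding disks cannot help there; only at deep queues does the estimate approach the full disk count.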

The arrays with mirrored pairs (RAID1 and RAID10) evidently alternate read requests between the two disks of each mirror: their performance rises above that of the JBOD and the two-disc RAID0 at higher percentages of reads. When the share of writes is high, however, the RAID1 and RAID10 arrays run much slower than a single HDD or a two-disc RAID0 array, since every write must be committed to both disks of the mirror.
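To illustrate the mechanism (a simplified sketch, not the controller's actual scheduling policy): reads can be served by either copy of a mirror, while every write must go to both disks:

```python
def dispatch_mirror(requests):
    """Dispatch (op, offset) requests to a two-disk mirror (RAID1):
    reads alternate between the disks, writes go to both."""
    queues = {0: [], 1: []}
    next_read = 0  # disk that gets the next read (simple alternation)
    for op, offset in requests:
        if op == "read":
            # Either copy can serve a read; alternate between them.
            queues[next_read].append((op, offset))
            next_read ^= 1
        else:
            # A write must update both copies to keep the mirror consistent.
            queues[0].append((op, offset))
            queues[1].append((op, offset))
    return queues
```

A read-heavy stream is thus split roughly in half between the two spindles, while a write-heavy stream doubles the work per request.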

RAID5 array performance should theoretically decline as the share of write requests grows, since each small write triggers an update of the parity as well as the data. The arrays behave according to the theory in every mode except RandomWrite. Only the four-HDD array for some reason fell behind the three-HDD one.
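The theoretical expectation follows from the cost of a small write at each RAID level: in RAID5 a random write becomes a read-modify-write of data and parity, i.e. four disk operations. A back-of-the-envelope IOPS model (our simplification; `disk_iops` is a hypothetical per-disk rate, and controller caching is ignored):

```python
def raid_random_iops(n_disks, disk_iops, write_share, level):
    """Rough upper bound on random small-block IOPS for an array of
    n_disks, each capable of disk_iops single-block operations."""
    read_share = 1.0 - write_share
    if level == "raid0":
        cost = read_share + write_share        # one disk op per request
    elif level == "raid10":
        cost = read_share + 2 * write_share    # each write hits both mirror halves
    elif level == "raid5":
        # Read-modify-write: read old data + old parity, write new data + new parity.
        cost = read_share + 4 * write_share
    else:
        raise ValueError(level)
    return n_disks * disk_iops / cost
```

At 100% writes this model gives a RAID5 array only a quarter of its aggregate disk throughput, which is why its curve should sink as the write share grows.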

Now let’s increase the workload:

At the higher workload the speed of the RAID0 arrays is proportional to the number of HDDs in the array as we get close to 100% reads. However, as the share of write requests increases, the picture gets less rosy: arrays built of an odd number of hard disk drives hit their lowest speed at 50% writes, while arrays built of an even number of disks reach their minimum at 60% writes.


Discussion

Comments currently: 21
Discussion started: 02/26/06 02:52:18 PM
Latest comment: 11/27/07 10:18:14 PM
