
Performance in Intel IOMeter

Database Pattern

In the Database pattern the disk array processes a stream of requests to read and write 8KB data blocks at random addresses. The percentage of write requests changes from 0% to 100% through the test while the request queue depth varies from 1 to 256.
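For illustration, here is a minimal sketch in Python of the access pattern being described. It is our own toy reproduction, not IOMeter itself, and the test-area size and request count are arbitrary assumptions:

    import os, random

    BLOCK = 8 * 1024  # 8KB transfers, as in the Database pattern
    SPAN = 1 << 30    # size of the test area; 1GB is an arbitrary choice

    def database_pattern(f, write_share, requests=10000):
        # Issue 8KB reads and writes at random block-aligned addresses.
        for _ in range(requests):
            f.seek(random.randrange(0, SPAN - BLOCK + 1, BLOCK))
            if random.random() < write_share:
                f.write(os.urandom(BLOCK))  # write request
            else:
                f.read(BLOCK)               # read request

This synchronous loop corresponds to a queue depth of 1; IOMeter additionally keeps up to 256 such requests in flight at once.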

We’ll be discussing graphs and diagrams, focusing on the results for queue depths of 1, 16 and 256.

We are discussing RAID0 and RAID10 separately from the parity-based RAID5 and RAID6.

Things are simple here: deferred writing is at work under minimum load, and the arrays’ write performance depends on the number of disks in the stripe. Interestingly, the eight-disk RAID10 is somewhat slower than the four-disk RAID0, and the mirror of two HDDs is somewhat slower than the single HDD.

The RAID5 and RAID6 arrays go neck and neck in this test, which is good news. The graphs of same-size arrays almost merge into each other, indicating that the controller is indifferent to the additional load of calculating the second parity block.

The shape of the graphs does not impress, however. The lack of deferred writing into the controller’s cache (the consequence of the missing BBU) kills performance when writing to the parity-based arrays. Arrays of this type are usually equal to the corresponding single drive at the shortest queue depth, but here they suffer a terrible performance hit at high percentages of writes.
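It is easy to see why writes hurt so much. Without a write-back cache to coalesce requests, every small random write to RAID5 expands into four physical I/Os (read old data, read old parity, write new data, write new parity), and every write to RAID6 into six. A rough model in Python; the per-disk IOPS figure is an illustrative assumption, not a measured value:

    PER_DISK_IOPS = 170  # assumed random-I/O rate of one 15,000rpm drive

    def parity_write_iops(disks, penalty):
        # Each logical write turns into `penalty` physical I/Os
        # spread across all the disks of the array.
        return disks * PER_DISK_IOPS / penalty

    for level, penalty in (("RAID5", 4), ("RAID6", 6)):
        for disks in (4, 8):
            print(level, disks, "disks:", round(parity_write_iops(disks, penalty)), "IOPS")

Note how the four-disk RAID5 works out to exactly one drive’s worth of IOPS (four disks divided by a penalty of four), which is why such arrays normally score on a par with the single drive at the shortest queue.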

The results get rather odd when we increase the queue depth. Everything is all right when the percentage of reads is high: the RAID10 arrays read from both disks in each mirror and are therefore no different from RAID0. But what about performance scalability? The eight-disk arrays are not twice as fast as the four-disk ones, as the theory suggests they should be, and their advantage is especially small at high percentages of reads.
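The theory mentioned above is easy to state: random reads scale with the number of spindles because both halves of a mirror can serve them independently, while each RAID10 write costs two physical I/Os. Here is a minimal model in Python; the per-disk IOPS figure is an illustrative assumption, not a measured value:

    PER_DISK_IOPS = 170  # assumed random-I/O rate of one 15,000rpm drive

    def raid0_iops(disks, write_share):
        # Reads and writes both cost one physical I/O, so the mix does not matter.
        return disks * PER_DISK_IOPS

    def raid10_iops(disks, write_share):
        # A read costs one physical I/O; a write costs two (one per mirror copy).
        cost_per_request = (1 - write_share) + 2 * write_share
        return disks * PER_DISK_IOPS / cost_per_request

    print(raid10_iops(8, 0.0))  # 1360: equals an eight-disk RAID0 at pure reading
    print(raid10_iops(8, 1.0))  # 680: equals a four-disk RAID0 at pure writing

By this model an eight-disk array should deliver exactly twice the IOPS of its four-disk counterpart at any read/write mix, which is the scaling the measured results fall short of.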

Writing is not quite good, either. Deferred writing is less efficient on the four-disk arrays than on their eight-disk counterparts. Since deferred writing can only be done in the HDDs’ own caches here, perhaps the larger arrays are faster simply because they have more cache memory in total?

Scalability is far from perfect with the parity-based arrays, too. The read results make this obvious: the performance gained by going from four to eight disks is smaller than the performance of a single disk! This is a real problem because arrays of this type are supposed to speed up almost proportionally to the number of disks in them.

The arrays do better at writing thanks to the longer request queue. They do not set any performance records, but they do not look much worse than the single drive, either.

Interestingly, while the four-disk RAID5 and RAID6 are almost equal in performance, the eight-disk RAID6 is somewhat worse than the eight-disk RAID5. We wonder whether this is a small defect of the controller’s firmware or the controller simply finds it difficult to calculate two parity blocks for the eight-disk array.

When the request queue is very long, the arrays deliver higher performance and scale better. However, the four-disk arrays are not four times as fast as the single drive, and the same goes for the eight-disk arrays. Note that deferred writing is not efficient here. This is due to the caching policy of Fujitsu’s SAS drives, which do not accept much data into their caches at long queue depths.

Anyway, drawbacks notwithstanding, you can see a new level of performance here. We could not reach 1,000 operations per second with four Raptor 2 drives, but the four-disk array built out of 15,000rpm Fujitsu HDDs clears that barrier easily. The eight-disk arrays deliver a few thousand operations per second.

We see problems with scalability again, but the arrays are finally no slower than the single drive at writing.

Judging by the gap between the eight-disk RAID6 and RAID5, the controller’s processor is not powerful enough to compute two parity blocks for this number of disks.
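This is plausible: the first parity block (P) is a plain XOR across the stripe, while the second one (Q) requires a Galois-field multiplication for every byte of every data disk. Here is a toy illustration in Python using the GF(2^8) polynomial conventional for RAID6; it shows the general technique, not this controller’s actual firmware:

    def gf_mul2(b):
        # Multiply a byte by 2 in GF(2^8) modulo the 0x11d polynomial.
        b <<= 1
        return (b ^ 0x11d) & 0xff if b & 0x100 else b

    def pq_parity(disks):
        # disks: equal-length byte strings, one per data disk.
        p = bytearray(len(disks[0]))
        q = bytearray(len(disks[0]))
        for data in reversed(disks):  # Horner's rule for Q = sum of 2^i * D_i
            for i, byte in enumerate(data):
                p[i] ^= byte                  # P costs one XOR per byte
                q[i] = gf_mul2(q[i]) ^ byte   # Q adds a GF multiply per byte
        return bytes(p), bytes(q)

    p, q = pq_parity([b"\x12" * 512, b"\x34" * 512, b"\x56" * 512])

The Q calculation adds a shift, a test and a conditional XOR for every byte of every data disk, so the load grows with the array size, and that is enough to tax a weak RAID processor on an eight-disk array.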

 