Performance in Intel IOMeter

Database Patterns

In the Database pattern the disk array processes a stream of requests to read and write 8KB random-address data blocks. The share of write requests changes from 0% to 100% in 10% steps throughout the test, while the request queue depth varies from 1 to 256.
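
For reference, here is a minimal sketch of the test matrix this pattern sweeps through. The queue depths listed in the snippet are only the ones discussed below; the full test steps through depths between 1 and 256, so treat the exact list as an assumption for illustration.

# Sketch of the Database-pattern test matrix: 8KB random requests,
# the share of writes swept from 0% to 100% in 10% steps.
# Only the queue depths discussed in the text are listed here; the full
# test spans depths from 1 to 256.
BLOCK_SIZE = 8 * 1024               # 8KB random-address blocks
WRITE_SHARES = range(0, 101, 10)    # 0%, 10%, ... 100% writes
QUEUE_DEPTHS = [1, 16, 256]         # assumed subset, for illustration only

for qd in QUEUE_DEPTHS:
    for writes in WRITE_SHARES:
        reads = 100 - writes
        print(f"QD={qd:>3}  reads={reads:>3}%  writes={writes:>3}%  block={BLOCK_SIZE} B")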

We will be discussing graphs and diagrams, but the same results are also available in tabular format.

The RAID0 and RAID10 arrays behave normally at a queue depth of 1 request. The controller's write performance scales with the number of HDDs in the array. The 4-disk RAID0 almost coincides with the 8-disk RAID10 because a mirror pair is equivalent to a single disk at writing. It is also good that the degraded array doesn't differ from the healthy one.
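
A first-order model explains why the two arrays coincide: RAID0 spreads writes across all disks, while RAID10 has to commit every block to both halves of a mirror pair. The sketch below uses a made-up single-disk IOPS figure purely to show the scaling, not a measured value.

# First-order estimate of random-write performance: RAID0 scales with the
# number of disks, while RAID10 writes every block to both copies of a
# mirror pair, so N disks act like N/2. The single-disk figure is a placeholder.
SINGLE_DISK_WRITE_IOPS = 200.0   # assumed value, for illustration only

def raid0_write_iops(disks: int) -> float:
    return disks * SINGLE_DISK_WRITE_IOPS

def raid10_write_iops(disks: int) -> float:
    return (disks / 2) * SINGLE_DISK_WRITE_IOPS   # mirror pair ~ one disk

print(raid0_write_iops(4), raid10_write_iops(8))   # the two estimates coincide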

The checksum-based arrays are all right, too. The degraded RAID6 with two failed disks is considerably slower than the single HDD, while the other arrays behave normally, except that we would expect a higher write speed from the multi-disk arrays considering the fast HDDs, the controller's large amount of memory and its high-performance processor. We will discuss this in more detail in our upcoming comparative review of several RAID controllers.

When the queue is 16 requests deep, all of the arrays speed up: they use deferred writing more actively and are able to reorder requests. The standings are just as expected, but the controller could do a better job of choosing the "luckier" disk in each mirror pair for reads. We have seen other controllers deliver higher performance when reading from a RAID10 than from a same-size RAID0; here, the two array types are merely equal.
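
The controller's actual mirror-read policy is not documented here, but a common approach is to send each read to whichever copy can serve it sooner, for example the one with the shorter pending queue. A hedged sketch of that idea:

# One common mirror-read policy (a sketch, not this controller's firmware):
# send each read to the copy with the shorter pending queue. Real
# implementations may also weigh head position or recent access patterns.
def pick_mirror_copy(pending_a: int, pending_b: int) -> str:
    """Return which half of the mirror pair should serve the next read."""
    return "copy A" if pending_a <= pending_b else "copy B"

print(pick_mirror_copy(pending_a=3, pending_b=1))   # -> "copy B"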

The RAID5 and RAID6 arrays have some problems with writing: the controller should be faster with these computationally demanding array types because it has such a high-speed processor on board (we saw the effect of a fast processor in our Adaptec ASR-5808 review). The RAID6 arrays, which have to calculate two checksums for every write, suffer the most.
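
The underlying reason is the classic small-write penalty: a random write to RAID5 expands into reading the old data and old parity and then writing the new data and new parity (four disk operations), and RAID6 adds a second parity block (six). A rough estimate of the effect, with an assumed single-disk IOPS figure:

# Small-write penalty estimate: every random write costs about 4 disk I/Os
# on RAID5 (read old data, read old parity, write data, write parity) and
# about 6 on RAID6. The single-disk IOPS value is an assumed placeholder.
SINGLE_DISK_IOPS = 200   # assumed, for illustration only

def random_write_iops(disks: int, write_penalty: int) -> float:
    return disks * SINGLE_DISK_IOPS / write_penalty

print(random_write_iops(8, 4))   # 8-disk RAID5 -> 400.0
print(random_write_iops(8, 6))   # 8-disk RAID6 -> ~266.7, lower still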

One interesting thing can be noted at the longest queue depth. Maximum performance is achieved at mixed loads that combine both read and write requests. While individual HDDs find such loads the most difficult, modern RAID arrays deliver their best under them by combining effective request reordering in the long queue with a large deferred-write buffer. If you have read our RAID controller reviews, you may have noticed that the performance peak falls at different reads-to-writes ratios with different controllers. This is because each manufacturer has its own approach to firmware optimization, giving more priority either to reading or to deferred writing.
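
Request reordering itself is a generic technique: with a deep queue the controller can serve pending requests in address order, sweeping the heads across the platters instead of jumping back and forth. A textbook elevator-style sketch, not this particular controller's documented algorithm:

# Elevator-style reordering (a generic sketch, not this controller's firmware):
# serve pending requests ahead of the current head position in LBA order first,
# then sweep back, reducing total seek distance.
def reorder(pending_lbas: list[int], head_lba: int) -> list[int]:
    ahead = sorted(lba for lba in pending_lbas if lba >= head_lba)
    behind = sorted((lba for lba in pending_lbas if lba < head_lba), reverse=True)
    return ahead + behind

print(reorder([900, 10, 480, 55, 700], head_lba=400))   # [480, 700, 900, 55, 10]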

The degraded RAID10 copes well. Of course, it is not as fast as its healthy counterpart at high percentages of reads, yet its speed never falls to the level of the 4-disk RAID0.

There are no changes among the RAID5 and RAID6 arrays at high loads: they all cope with reading successfully and slow down at writing. As for the degraded arrays, the RAID5 with one failed disk slows down too much at reading.
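
Some slowdown is inherent to the degraded state: any block that resided on the failed disk has to be rebuilt on the fly by XOR-ing the surviving blocks of its stripe, so one logical read fans out into reads on every remaining drive. A toy illustration of that reconstruction:

# Reconstructing a lost RAID5 block: XOR together the surviving data blocks
# and the parity block of the same stripe. One logical read of a lost block
# therefore turns into reads on every remaining drive.
from functools import reduce

def reconstruct_missing(surviving_blocks: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), surviving_blocks)

# Toy 2-byte "blocks": parity = d0 ^ d1 ^ d2, so d0 ^ d1 ^ parity recovers d2
d0, d1, d2 = b"\x0f\x0f", b"\xf0\x0f", b"\x3c\xc3"
parity = bytes(x ^ y ^ z for x, y, z in zip(d0, d1, d2))
print(reconstruct_missing([d0, d1, parity]) == d2)   # True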

 