
Performance in Intel IOMeter

Database Pattern

In the Database pattern the disk array processes a stream of requests to read and write 8KB data blocks at random addresses. The ratio of reads to writes changes from 0% to 100% in 10% steps over the course of the test, while the request queue depth varies from 1 to 256.
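To make the workload concrete, here is a minimal sketch of what such an access pattern generator might look like. This is our own illustration in Python, not IOMeter code; the disk size, request count and read share values are assumptions made for the example:

import random

BLOCK = 8 * 1024          # 8KB request size, as in the Database pattern

def database_pattern(disk_size, read_share, count):
    """Yield (offset, is_read) pairs at random 8KB-aligned offsets."""
    blocks = disk_size // BLOCK
    for _ in range(count):
        offset = random.randrange(blocks) * BLOCK
        yield offset, random.random() < read_share

# Sweep the read share from 0% to 100% in 10% steps, as the test does.
# The 100GB volume and 1000 requests per step are assumed values.
for share in range(0, 101, 10):
    requests = list(database_pattern(100 * 2**30, share / 100, 1000))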

We’ll be discussing graphs and diagrams, but you can also view the data in tabular format using the following links:

We will discuss the results for queue depths of 1, 16 and 256 requests.

We will also be discussing the RAID0 and RAID10 arrays separately from the checksum-based RAID5 and RAID6.

Deferred writing is the only optimization at work when the load is this low, so there are no surprises. It is good to see the eight-disk RAID10 running almost as fast as the four-disk RAID0, just as theory predicts. The degraded RAID10 keeps up with them too, meaning that the failure of one disk does not cause a performance hit here.
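The theory referred to here is plain arithmetic: every write to a RAID10 must land on both disks of a mirror pair, so only half of its spindles contribute unique write throughput. A hedged model, where the 150 random IOPS per disk is an assumed placeholder rather than a measured figure:

# Back-of-the-envelope write-performance model. Each RAID10 write is
# duplicated onto both disks of a mirror, halving the effective
# spindle count. DISK_IOPS is an assumed placeholder.
DISK_IOPS = 150

def raid0_write_iops(disks):
    return disks * DISK_IOPS         # every spindle takes unique writes

def raid10_write_iops(disks):
    return disks // 2 * DISK_IOPS    # each write goes to two disks

print(raid0_write_iops(4))     # 600: four-disk RAID0
print(raid10_write_iops(8))    # 600: eight-disk RAID10 matches it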

The situation is more complicated with the checksum-based arrays. First of all, they deliver excellent results at high percentages of writes. RAID controllers usually slow down under such load because they are limited by the speed of checksum computations (and by the related overhead of extra read and write operations). We don’t see that here: the large cache and the well-thought-out architecture help the controller cope with low loads easily.

The RAID6 arrays are expectedly a little slower than the RAID5 arrays built out of the same number of disks: the capacity and bandwidth of two disks, rather than one, are spent on checksums. The RAID6 and RAID5 degraded by one disk perform well enough, but the latter slows down abruptly at pure writing. The RAID6 degraded by two failed disks (i.e. on the verge of complete failure) is sluggish.
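The overhead mentioned above is the classic small-write penalty: to update a single block, RAID5 must read the old data and old parity and then write the new data and new parity (four disk operations per host write), while RAID6 maintains two parity blocks and needs six. A rough model, again with an assumed per-disk IOPS figure:

# Small-write penalty model for parity RAID. One host write turns into
# reads of the old data and parity plus writes of the new data and
# parity. DISK_IOPS is an assumed placeholder.
DISK_IOPS = 150

def parity_write_iops(disks, parity_blocks):
    ops_per_write = 2 * (1 + parity_blocks)   # reads + writes
    return disks * DISK_IOPS / ops_per_write

print(parity_write_iops(8, 1))   # RAID5: 8 * 150 / 4 = 300
print(parity_write_iops(8, 2))   # RAID6: 8 * 150 / 6 = 200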

When the queue depth is increased to 16 requests, the RAID0 and RAID10 arrays just keep scaling their performance up. The RAID10 arrays are still as fast at writing as RAID0 arrays built out of half as many disks.

Note that at high percentages of reads the RAID10 arrays are much faster than RAID0 arrays built out of the same number of disks. This is normal: the controller can distribute read requests between the two disks of each mirror depending on which disk can serve the request quicker (i.e. whose heads are closer to the requested location on the platter).
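A toy sketch of that scheduling idea: pick the disk of the mirror pair whose heads are, by our simplified bookkeeping, closer to the requested address. Real firmware is certainly more elaborate; this only illustrates the principle:

# Mirror-aware read scheduling, illustrative only. head_positions
# tracks the last serviced LBA of each disk in the pair; the request
# goes to whichever disk has the shorter seek.
def pick_mirror(read_lba, head_positions):
    distances = [abs(read_lba - pos) for pos in head_positions]
    disk = distances.index(min(distances))
    head_positions[disk] = read_lba     # the heads move to the new spot
    return disk

heads = [0, 500_000]                    # assumed starting positions
print(pick_mirror(480_000, heads))      # -> 1: the second disk is closer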

The degraded RAID10 does well, too. Its performance hit grows in proportion to the percentage of reads in the request mix. This is easy to explain: one of its mirror pairs has only a single disk left, which hurts reading, whereas writes still land in the buffer memory and are unaffected by the loss of one disk.
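That proportionality can be reproduced with a simple expected-throughput model. All the IOPS figures below are assumptions picked for illustration, not measurements from this test:

# Degraded-RAID10 model: reads lose one spindle, writes are absorbed
# by the cache and stay unaffected. All figures are assumed.
HEALTHY_READ_IOPS = 1200    # eight readers (assumed)
DEGRADED_READ_IOPS = 1050   # seven readers (assumed)
WRITE_IOPS = 600            # unchanged by the disk failure

def mixed_iops(read_share, read_iops):
    # Weighted harmonic mean of the read and write service rates.
    return 1 / (read_share / read_iops + (1 - read_share) / WRITE_IOPS)

for share in (0.0, 0.5, 1.0):
    hit = 1 - mixed_iops(share, DEGRADED_READ_IOPS) / mixed_iops(share, HEALTHY_READ_IOPS)
    print(f"{share:.0%} reads -> {hit:.1%} performance hit")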

The RAID5 and RAID6 arrays do not feel at ease here. Their graphs zigzag, indicating imperfections in the firmware. Note that these arrays have only sped up on reads: the number of operations per second at pure writing is unchanged. Even highly efficient architectures have their limits, and pure writing is where we see this one. Still, the performance is very high compared to what we saw in our earlier test sessions.

Curiously enough, the eight-disk RAID6 is ahead of the RAID5 at pure reading, which suggests firmware problems with the latter array type.

As for the degraded arrays, the RAID6 missing one disk still maintains a high speed, but the failure of a second disk just kills its performance. The degraded RAID5 suffers an inexplicable performance hit at pure writing.

There is nothing unusual about the RAID0 and RAID10 arrays when the queue grows to 256 requests; everything is as expected. Perhaps the only surprise is the eight-disk RAID0: graphs in this test usually sag in the middle, where reads and writes are roughly balanced, but this array’s graph rises in that area!

At a queue depth this long, the graphs of the RAID5 and RAID6 arrays smooth out, making their performance more predictable. There are no surprising results here, and even the RAID6 degraded by two disks does its best to catch up with the others. The RAID5 degraded by one disk acts up a little, slowing down too much at high percentages of reads.

 