Articles: Storage
 

Now let's look at the benchmark results for four-disk arrays with lazy write disabled on the hard disk drives.

To compare the speeds of the RAID arrays in the different caching modes, we fill a table with the ratios of the controller's speed with lazy write enabled to its speed with lazy write disabled. A higher number indicates higher caching efficiency in that mode. If the number is below 1 (marked red), lazy write is harmful for the drives in the array. If the number is above 1 (marked blue), it brings a performance gain. If you see "1.0", the status of lazy write doesn't influence the array's performance.
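The ratio table described above can be sketched in a few lines of Python. Note that the speed figures below are made-up placeholders for illustration only, not the measured results from this test session:

```python
# Hypothetical WriteBack (lazy write on) vs. WriteThrough (lazy write off)
# speeds in MB/s for a few access patterns. Figures are illustrative only.
writeback = {"Random Read": 7.9, "Random Write": 22.4, "Sequential Write": 155.0}
writethrough = {"Random Read": 8.0, "Random Write": 9.1, "Sequential Write": 148.0}

def caching_ratio(wb_speed, wt_speed):
    """Ratio > 1: caching brings a gain; < 1: caching is harmful; == 1: no effect."""
    return wb_speed / wt_speed

for mode in writeback:
    r = caching_ratio(writeback[mode], writethrough[mode])
    verdict = "gain" if r > 1 else ("harmful" if r < 1 else "no effect")
    print(f"{mode}: {r:.2f} ({verdict})")
```

Modes with many write requests (like the random-write pattern above) show the largest ratios, which matches the table's color coding.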

It's clear that disabling write request caching on the drives hurts every array, though to a varying degree. The slight speed reduction in the random read mode is explained by the absence of write requests there. In the other modes, caching affects an array's performance more when there are many write requests in the queue and, as a rule, less when the queue gets longer.

Let's now compare the results. We draw graphs for each array in the WriteThrough and WriteBack modes at queue depths of 1, 16 and 256 requests.

Disabling caching for the RAID0 array reduces its speed in all modes save for random reads. The maximum performance loss reaches 528%! The only questionable mode is the 16-request queue with 10% writes. As I mentioned above, we saw the same performance reduction in our previous test sessions, too. Now we are 100% sure that it is the HDD cache that is responsible for it.

 