To assess the influence of lazy (deferred) writes on the results, let's compare the numbers we have just obtained in WriteBack mode with the results in WriteThrough mode:
To compare the speeds of the RAID arrays in the two caching modes, we fill the table with ratios of the controller's speed in WB mode to its speed in WT mode. The higher the number, the more efficient WB caching is in that mode. If the number is below 1 (marked in red), WB caching is harmful. If the number is above 1 (marked in blue), WB caching brings a performance gain. If you see "1.0", the WB and WT caching modes perform equally.
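The table-building logic above can be sketched in a few lines of Python. This is only an illustration of the ratio-and-verdict scheme; the function name and the sample speeds are made up and do not come from the article's measurements:

```python
def wb_efficiency(wb_speed, wt_speed):
    """Return the WB/WT speed ratio and a verdict matching the table's color coding.

    Speeds can be in any unit (e.g. MB/s) as long as both use the same one.
    """
    ratio = wb_speed / wt_speed
    if ratio > 1:
        verdict = "gain (blue)"       # WB caching helps in this mode
    elif ratio < 1:
        verdict = "harmful (red)"     # WB caching slows the array down
    else:
        verdict = "equal"             # WB and WT perform identically
    return round(ratio, 2), verdict

# Placeholder speeds, for illustration only:
print(wb_efficiency(60.0, 10.0))  # a write-heavy mode where WB shines
print(wb_efficiency(45.0, 50.0))  # a long-queue case where WB hurts
```

A ratio of 6.0 in the first call mirrors the "six times higher" case mentioned below; the second call mirrors the rare slowdown cases marked in red.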
As you can see, you can speed up RAID arrays by enabling WB caching in the controller's BIOS. We see no speed reduction even in the Random Read mode (where there are no write requests at all), while in the Random Write mode the speed increases as much as six-fold in some cases. Only the RAID5 array slows down with WB caching at a queue depth of 256, but such a long queue is highly improbable in real-world tasks.
The effects of WB caching are easier to grasp in graph form, so we plotted three graphs, one for each of three queue depths.
RAID0 speeds up when caching is enabled and the share of writes is high. As the number of requests in the queue grows, the gap between the WriteBack and WriteThrough modes narrows, but the advantage of WB caching is clearly visible everywhere except the Random Read mode, where there are no write requests and, accordingly, nothing to optimize!