In the Database pattern the drive processes a stream of requests to read and write 8KB data blocks at random addresses. The ratio of reads to writes shifts from 0% to 100% in 10% steps throughout the test, while the request queue depth varies from 1 to 256.
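For illustration, such a workload can be sketched as a generator of random-address 8KB requests with a given write percentage. This is only a toy model of the pattern described above; the `database_pattern` function and the disk-span constant are our own hypothetical names, not part of IOMeter:

```python
import random

BLOCK_SIZE = 8 * 1024          # the pattern uses 8KB blocks
DISK_BLOCKS = 1_000_000        # hypothetical addressable span, in blocks

def database_pattern(write_pct, count, seed=0):
    """Generate `count` random-address 8KB requests; `write_pct` percent
    of them are writes, the rest reads, in randomized order."""
    rng = random.Random(seed)
    ops = ["write"] * (count * write_pct // 100)
    ops += ["read"] * (count - len(ops))
    rng.shuffle(ops)
    for op in ops:
        offset = rng.randrange(DISK_BLOCKS) * BLOCK_SIZE
        yield (op, offset, BLOCK_SIZE)

# Sweep the read/write mix in 10% steps, as in the test; each mix
# would then be replayed at queue depths from 1 to 256.
for pct in range(0, 101, 10):
    requests = list(database_pattern(pct, count=1000))
```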
You can click this link to view the full tabled results: IOMeter: Database pattern.
We will build diagrams for request queue depths of 1, 16 and 256.
The HDDs deliver similar performance at the shortest queue depth. The Seagate 10K.1 is the only exception: it is always a little slower than the other 10,000rpm products. The Fujitsu is also somewhat worse than its opponents at processing a large share of write requests.
The Seagate 15K.1 behaves in an interesting way. Having no rivals at reading, it is slower than the others at writing. This HDD doesn’t seem to perform any deferred writing at all.
When the queue is longer, the HDDs begin to exhibit their individual traits, i.e. their firmware algorithms show through. The Seagate 10K.2 and the Hitachi C10K300 are competing for first place among the 10,000rpm products: the former is faster at high percentages of reads while the latter is better at writing. Two drives look worse than the others: the Seagate 10K.1 has weak deferred writing and the Fujitsu is too slow at reading.
The Seagate 15K.1 still does not want to show any trace of deferred writing. Having excellent results at reading, it sinks to fourth place at writing.
Hitachi’s HDDs, especially the new C10K300, are beyond competition at very long queue depths. Take note that they match the Seagate 15K.1 which has a higher spindle rotation speed. Of course, such high loads do not often occur in real servers, yet this is a good illustration of how firmware algorithms can affect performance.
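To see why a longer queue gives firmware more room to work, consider a toy shortest-seek-first reordering scheme. This is a deliberately simplified stand-in for the drives' actual, undisclosed algorithms; the function names are ours:

```python
def reorder_nearest_first(queue, head_pos):
    """Greedily pick the pending request closest to the current head
    position -- a toy model of firmware request reordering."""
    pending, ordered, pos = list(queue), [], head_pos
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))
        pending.remove(nxt)
        ordered.append(nxt)
        pos = nxt
    return ordered

def total_seek(order, head_pos):
    """Sum of head travel distances when serving `order` in sequence."""
    dist, pos = 0, head_pos
    for lba in order:
        dist += abs(lba - pos)
        pos = lba
    return dist

# With a deep queue the scheduler can cut head travel dramatically:
queue = [50, 10, 90, 20]                 # pending request addresses
print(total_seek(queue, 0))              # FIFO order: 240
print(total_seek(reorder_nearest_first(queue, 0), 0))  # reordered: 90
```

A queue depth of 1 gives the scheduler nothing to choose from, which is one reason the drives only diverge as the queue grows.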
Winding up this part of our tests, we will build a few diagrams that show the performance of each drive at five queue depths.
The Fujitsu MBB2 RC behaves like Fujitsu’s 3.5-inch SAS drives, which produced similarly shaped graphs but had more effective deferred writing.
The two drives from Hitachi show a similar behavior but we can note that the newer C10K300 has a more aggressive and effective way of reordering requests: the graphs corresponding to medium and long queue depths are higher.
Interestingly, and unlike the Fujitsu, Hitachi’s drives show a small but clear performance growth at writing at a queue depth of 256 requests, and the same is true at the lower queue depths.
The Seagate 10K.1 performs like the first 3.5-inch SAS drive from the same maker. It has low-efficiency deferred writing and no performance growth at a queue depth of 64 requests.
And as with the full-size SAS drives, the next series, the 10K.2, progresses dramatically. Deferred writing is improved and the reordering of read requests is more effective. The processing of very long queues has not changed, though: the results are even lower at a queue depth of 256 requests than at 64 requests.
Funnily enough, we have already seen Seagate drives behave like that. We are not 100% sure, but it looks as if deferred writing is turned off while all requests, not only reads as usual, undergo reordering. This strategy was quickly abandoned in 3.5-inch drives, so the 15K.2 model is likely to behave in a more ordinary way.