Performance in Intel IOMeter: DataBase Pattern
As usual, we are going to start with the DataBase pattern. If you are curious about the exact numbers obtained during the tests, please have a look at the table below, which contains the Total I/O results for five workload types (we varied the request queue depth) in 11 modes with different writes shares (from 0% to 100% in 10% steps).
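The test matrix described above can be sketched as follows. The queue depth values are assumptions: the article explicitly names 1, 16 and 256, while the remaining two depths are placeholders.

```python
# Sketch of the DataBase pattern test matrix. Queue depths 4 and 64 are
# placeholders; the article only names depths 1, 16 and 256 explicitly.
queue_depths = [1, 4, 16, 64, 256]     # five workload types
write_shares = range(0, 101, 10)       # writes share: 0% .. 100% in 10% steps

modes = [(qd, ws) for qd in queue_depths for ws in write_shares]
print(len(modes))  # 5 queue depths x 11 writes shares = 55 test modes
```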
Now let’s have a look at the HDD performance during requests processing under three key workloads:
Under linear workload (one outgoing request) our hard disk drives already show enough individuality for us to start drawing some conclusions.
Of course, the Hitachi drive feels completely at home in the marginal modes. In both RandomRead and, especially, RandomWrite, it proved much faster than the competitors. At the same time, it doesn’t seem to like those cases when the reads and writes are more or less balanced.
Maxtor drives are faster than the competitors from Western Digital in RandomRead mode, but yield to them in all other cases.
As for the performance differences between the ATA and SerialATA drives from one and the same manufacturer, the situation here is very interesting. The SerialATA drive from Western Digital appeared faster than its ATA counterpart, while with Maxtor it was just the opposite: the ATA HDD outperformed the SATA one.
Now let’s find out what happened with 16 outgoing requests.
Under this workload the performance difference between the drives is more evident; however, the overall shape of the graphs remained unchanged.
When the workload increases to the maximum of 256 outgoing requests, we can see the two Maxtor graphs merge into one. Hm… Despite the different firmware versions of the ATA and SATA hard drives, they behave absolutely identically in the DataBase pattern. The Hitachi drive managed to break all the records in RandomRead and RandomWrite modes, though in the intermediate modes it appeared only as fast as the Maxtors.
Western Digital hard drives proved to have the best balanced algorithms: their graphs are nearly straight and show clear performance growth as the writes share increases.
In fact, there is one more thing that can be seen from the DataBase results. Since this pattern works with 8KB data blocks, whose size hardly matters for today’s high-density HDDs, we can regard the request processing speed at queue=1 as the random access time. Reading an 8KB block instead of a 512-byte one adds only about 0.1ms, which can be neglected. So, if we take the Total I/O results for the RandomRead and RandomWrite modes at queue=1, we can calculate the access time for reading and for writing. And if we divide the former by the latter, we get the lazy write algorithms efficiency coefficient.
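The arithmetic above can be sketched as follows. The IOps figures used here are illustrative placeholders, not the measured results from the table: at queue=1 there is only one request in flight, so the average access time is simply 1000 ms divided by the operations-per-second rate.

```python
# Sketch of the access-time calculation described above. The IOps
# figures are illustrative placeholders, not the measured results.
def access_time_ms(total_ios_per_sec: float) -> float:
    """Average access time in ms at queue depth 1: with one request in
    flight, time per request = 1000 ms / (requests per second)."""
    return 1000.0 / total_ios_per_sec

def lazy_write_coefficient(read_iops: float, write_iops: float) -> float:
    """Read access time divided by write access time; values above 1
    mean deferred (lazy) writes complete faster than random reads."""
    return access_time_ms(read_iops) / access_time_ms(write_iops)

# Hypothetical queue=1 Total I/O results (operations per second):
random_read_iops = 78.0    # 0% writes
random_write_iops = 160.0  # 100% writes

print(f"read access time:  {access_time_ms(random_read_iops):.2f} ms")
print(f"write access time: {access_time_ms(random_write_iops):.2f} ms")
print(f"lazy write coefficient: "
      f"{lazy_write_coefficient(random_read_iops, random_write_iops):.2f}")
```

With these placeholder numbers the coefficient works out to about 2, which is the kind of value the Hitachi drive demonstrated.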
Please pay attention to the Hitachi’s results. It is the first time I can remember the lazy write efficiency coefficient exceeding 2! This is a really great job!