Random Read & Write Patterns
Now we’ll see how the drives’ performance in random read and write modes depends on the size of the data block.
We will discuss the random-address results of the disk subsystems in two ways. For small data chunks we will draw graphs showing how the number of operations per second depends on the chunk size; for large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach lets us evaluate the disk subsystem in two typical scenarios. Working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed. Working with large data blocks is close to working with small files, so the traditional measurement of speed in megabytes per second becomes more relevant.
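The two metrics are directly related: for a given block size, MB/s = IOPS × block size. The sketch below is our own minimal illustration, not the benchmark tool used in this review; it times random-address reads of a given block size against a scratch file and reports both figures.

```python
# Minimal sketch of a random-read benchmark (illustration only, not the
# tool used in the review): issue random block-aligned reads for a fixed
# interval and report both IOPS and the equivalent MB/s.
import os
import random
import tempfile
import time


def random_read_benchmark(path, block_size, duration=0.2):
    """Read random block_size-byte chunks from `path` for roughly
    `duration` seconds; return (iops, mbps)."""
    size = os.path.getsize(path)
    blocks = size // block_size  # number of aligned blocks in the file
    fd = os.open(path, os.O_RDONLY)
    ops = 0
    start = time.perf_counter()
    try:
        while time.perf_counter() - start < duration:
            # pick a random block-aligned offset within the file
            offset = random.randrange(blocks) * block_size
            os.pread(fd, block_size, offset)
            ops += 1
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - start
    iops = ops / elapsed
    mbps = iops * block_size / 1e6  # MB/s = IOPS * block size
    return iops, mbps


if __name__ == "__main__":
    # a 4 MB scratch file stands in for the drive under test
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(4 * 1024 * 1024))
        path = f.name
    for bs in (512, 4096, 65536):
        iops, mbps = random_read_benchmark(path, bs)
        print(f"{bs:6d} B blocks: {iops:10.0f} IOPS, {mbps:8.2f} MB/s")
    os.unlink(path)
```

On small blocks the IOPS figure dominates (each operation is cheap to transfer but expensive to seek), while on large blocks the MB/s figure converges toward the drive's sequential speed, which is exactly the split used in the graphs below.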
When reading small data chunks, the HDDs rank according to their access time: the Western Digital is ahead, the Hitachi is last, and the other two drives are in between.
Access time is not a decisive factor when the drive is reading large data blocks at random addresses, so the Samsung takes the lead thanks to its higher sequential read speed, and its advantage grows along with the data block size. The Hitachi and Western Digital deliver identical results, with the Toshiba following them closely.
The HDD from Western Digital is far faster than the others at writing small data blocks. The other drives stay close together, but the Hitachi slows down and falls behind as the data block size grows. Its firmware seems to stumble on such blocks, being unable to pack them fully into the buffer segments.
The HDD from Western Digital is unrivalled at writing large blocks, too. The Hitachi is in last place, but its speed grows in proportion to the data block size; as the load approaches sequential writing, the Hitachi overtakes the Samsung and nearly catches up with the Toshiba.