Random Read & Write Patterns
Now let's look at how the drives' performance in random read and write modes depends on the size of the data block.
We will discuss the results in two ways. For small data chunks we will draw graphs showing the number of operations per second as a function of the data chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second.
- IOMeter: Random Read, operations per second
- IOMeter: Random Read, MBps
- IOMeter: Random Write, operations per second
- IOMeter: Random Write, MBps
An HDD’s performance at random-address operations is proportional to its response time until the requested data block becomes large enough for the sequential read or write speed to matter more. You can spot this moment easily: starting from 2MB data blocks, the HDDs with 500GB platters (the Samsung F3 and the Hitachi 1000.C) pull ahead and leave WD’s Caviar Black series behind.
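This crossover is easy to model. The sketch below uses purely illustrative numbers (the response times and sequential rates are assumptions, not measured values from this review): for each request, total service time is the drive's response time plus the transfer time of the block, so a drive with a quicker mechanism wins on small blocks while a drive with denser platters (a higher sequential rate) wins once the blocks get big.

```python
def random_access_time(block_bytes, response_time_s, seq_rate_bps):
    """Time to service one random request: response time plus transfer time."""
    return response_time_s + block_bytes / seq_rate_bps

# Two fictional drives, loosely mirroring the comparison in the text:
# one seeks faster, the other has denser platters and reads faster sequentially.
fast_seek     = dict(response=0.012, rate=100e6)  # 12 ms, 100 MB/s
dense_platter = dict(response=0.014, rate=140e6)  # 14 ms, 140 MB/s

for size in (4096, 64 * 1024, 2 * 1024**2):
    t1 = random_access_time(size, fast_seek["response"], fast_seek["rate"])
    t2 = random_access_time(size, dense_platter["response"], dense_platter["rate"])
    winner = "fast seek" if t1 < t2 else "dense platters"
    print(f"{size >> 10:5d} KB: {winner} wins")
```

With these (assumed) figures, the fast-seeking drive wins at 4KB and 64KB, but the denser platters take over by the 2MB block size, which matches the pattern described above.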
Of course, this general rule only applies if there are no special factors like the 4KB sectors of the WD10EARS. Its write graphs go much lower than the other HDDs’ because, when processing data blocks smaller than 4 KB, it has to write back the unrequested part of a 4KB sector (to remind you, the HDD itself does not know whether there is any valuable information at that address or not). When it comes to larger data blocks, the requested block coincides with the physical sectors in only one out of eight cases. In the other seven cases, the request straddles a sector boundary, so data in two partially affected sectors has to be rewritten. It is hard to tell why, but this HDD remains much slower than its same-class opponents until the requests get so large that the load becomes effectively sequential.
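The one-in-eight figure follows directly from the geometry: a 4KB physical sector holds eight 512-byte logical sectors, so only one of the eight possible starting offsets of a 4KB request lines up with a physical sector boundary. A minimal sketch of that arithmetic (the sector sizes are the standard 512e values, not anything specific to this drive's firmware):

```python
LOGICAL = 512    # logical (addressable) sector size in bytes
PHYSICAL = 4096  # physical sector size in bytes
RATIO = PHYSICAL // LOGICAL  # 8 logical sectors per physical sector

def physical_sectors_touched(start_lba, length_bytes):
    """Count the 4KB physical sectors spanned by a request at a 512B LBA."""
    first = (start_lba * LOGICAL) // PHYSICAL
    last = (start_lba * LOGICAL + length_bytes - 1) // PHYSICAL
    return last - first + 1

# Of the 8 possible starting offsets for a 4KB write, only the aligned one
# touches a single physical sector; the other 7 straddle two sectors and
# force a read-modify-write of the partially covered ones.
aligned = sum(physical_sectors_touched(lba, 4096) == 1 for lba in range(RATIO))
print(f"{aligned} of {RATIO} starting offsets are aligned")
```

Running this prints "1 of 8 starting offsets are aligned", which is exactly the seven-out-of-eight penalty cases mentioned above.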