Random Read & Write Patterns
Now let’s see how the drive’s performance in random read and write modes depends on the size of the processed data blocks.
We will discuss the results in two ways. For small data chunks we will draw graphs showing how the number of operations per second depends on the chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach helps us evaluate the disk subsystem’s performance in two typical scenarios: working with small data chunks is typical for databases, where the number of operations per second matters more than sheer speed, while working with large data blocks is much like working with small files, where the traditional measurement of speed in megabytes per second becomes more relevant.
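The two metrics are directly related: throughput in megabytes per second is just the number of operations per second multiplied by the block size. A minimal sketch (not part of the review's methodology, function names are ours) of the conversion:

```python
# Relating the two metrics used in the review: operations per second
# (IOPS) and throughput (MB/s) for a given block size.

def iops_to_mbps(iops: float, block_size_kb: float) -> float:
    """Convert operations per second to megabytes per second."""
    return iops * block_size_kb / 1024  # 1024 KB per MB

def mbps_to_iops(mbps: float, block_size_kb: float) -> float:
    """Convert megabytes per second to operations per second."""
    return mbps * 1024 / block_size_kb

# 200 random 8KB operations per second amount to only ~1.56 MB/s,
# while 200 ops/s on 512KB blocks would be 100 MB/s -- which is why
# small-block workloads are judged by IOPS and large-block ones by MB/s.
print(iops_to_mbps(200, 8))    # 1.5625
print(iops_to_mbps(200, 512))  # 100.0
```

This also shows why a drive can look fast by one metric and slow by the other on the same workload.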
Today’s HDDs can effectively cache small data blocks when writing, delivering high performance. The modest read speed is due to the high access time.
Nothing can save the day if the interface is much slower than the hard disk drive itself. The Solo reaches its top write speed on 512KB data blocks because the USB interface cannot pump through more data. The read speed hits its ceiling at much larger data blocks.
In the Database pattern the drive processes a stream of requests to read and write 8KB random-address data blocks. The ratio of read to write requests varies from 0% to 100% with a step of 10% throughout the test, while the request queue depth varies from 1 to 256.
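To make the shape of this test concrete, here is a hedged sketch of the parameter grid the Database pattern sweeps. The list of queue depths is illustrative only; IOMeter steps through depths between 1 and 256:

```python
# Enumerating the test grid of the Database pattern: read/write ratios
# from 0% to 100% in 10% steps, each run at several queue depths.

read_ratios = list(range(0, 101, 10))   # % of requests that are reads
queue_depths = [1, 4, 16, 64, 256]      # illustrative subset of 1..256

# Every (ratio, depth) pair is one measured point in the results table.
grid = [(r, q) for q in queue_depths for r in read_ratios]
print(len(grid))  # 55 points in this sketch (11 ratios x 5 depths)
```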
You can click this link to view the tabled results for the IOMeter: Database pattern.
The graph is indicative of very low performance. The USB interface is the culprit again: unlike eSATA, USB does not support queuing requests on the drive itself. The queue can only be built in the USB driver. As a result, a noticeable performance growth only appears at long queue depths.
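The benefit of on-drive command queuing can be illustrated with a toy model (ours, not measured data): when a drive sees a queue of pending block addresses, it can service them in elevator order instead of arrival order, cutting total head travel.

```python
# Toy model of why command queuing helps a mechanical drive: reordering
# a queue of random block addresses reduces total head travel compared
# with servicing them strictly in arrival order.
import random

random.seed(0)
requests = [random.randint(0, 1_000_000) for _ in range(32)]

def total_travel(order, start=0):
    """Sum of head movements when servicing addresses in the given order."""
    travel, pos = 0, start
    for lba in order:
        travel += abs(lba - pos)
        pos = lba
    return travel

fcfs = total_travel(requests)            # no queuing: arrival order
queued = total_travel(sorted(requests))  # queued: elevator (sorted) order
print(fcfs > queued)  # True: reordering shortens total head travel
```

Over USB, no such reordering happens on the drive, so the gains the graph shows at long queue depths come only from the host-side driver keeping the drive busier.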