Random Read & Write Patterns
Now let’s see how the drives’ performance in random read and write modes depends on the size of the data block.
We will discuss the random-address results of the disk subsystems in two ways. For small data chunks we will draw graphs showing how the number of operations per second depends on the chunk size; for large chunks we will compare the data-transfer rate in megabytes per second. This approach lets us evaluate the disk subsystem in two typical scenarios: working with small data chunks is typical for databases, where the number of operations per second matters more than sheer speed, while working with large data blocks is close to working with small files, where the traditional measurement of speed in megabytes per second becomes more relevant.
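The idea behind this methodology can be sketched in a few lines of code. The snippet below is a minimal illustration, not the actual benchmark used in this review: it times random-offset reads from a scratch file at several block sizes and reports both IOPS and MB/s (a real test would target the raw device and bypass the OS cache; `os.pread` assumes a POSIX system).

```python
# Minimal sketch of a random-read micro-benchmark. A scratch file stands in
# for the drive under test, so absolute numbers are meaningless here; the
# point is how IOPS and MB/s are derived from the same timed loop.
import os, random, tempfile, time

def random_read_bench(path, block_size, n_ops=200):
    """Read n_ops blocks at random aligned offsets; return (IOPS, MB/s)."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        offsets = [random.randrange(size // block_size) * block_size
                   for _ in range(n_ops)]
        start = time.perf_counter()
        for off in offsets:
            os.pread(fd, block_size, off)   # positioned read, POSIX-only
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    iops = n_ops / elapsed
    return iops, iops * block_size / 1e6    # ops/s and MB/s

# Sweep block sizes: small blocks are judged by IOPS, large ones by MB/s.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(4 * 1024 * 1024))    # 4MB scratch file
    path = f.name
for bs in (512, 4096, 131072):              # 0.5KB, 4KB, 128KB blocks
    iops, mbps = random_read_bench(path, bs)
    print(f"{bs:>7} B: {iops:8.0f} IOPS, {mbps:6.1f} MB/s")
os.unlink(path)
```

Note that the two metrics come from the same measurement: MB/s is just IOPS multiplied by the block size, which is why small-block results are best read as operations per second and large-block results as throughput.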
Let’s start with reading.
The drives all deliver the same performance when reading data in small portions. It is only with 128KB data blocks that any difference appears: the XTreme model with its fast FireWire and eSATA interfaces pulls ahead.
The fast interfaces are unrivalled with large data blocks because the result of this test is increasingly determined by the drive’s sequential read speed.
We see an odd picture when the drives are writing data in small blocks. The USB interface of the new XTreme and Desk series proves to be much faster than the same interface on the old Desktop, and even than the theoretically faster eSATA and FireWire.
It is even weirder when the writing is done in large data blocks. The fast interfaces should theoretically win here, but instead the new models with the USB interface are ahead. This might be explained by the odd sequential write results we saw above, but why does the Desktop model have an inexplicable slump in speed? Well, these drives really do have problems with writing.