In the Database pattern the drive processes a stream of requests to read and write 8KB data blocks at random addresses. The ratio of reads to writes changes from 0% to 100% in 10% steps throughout the test, while the request queue depth varies from 1 to 256.
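To make the test pattern concrete, here is a minimal sketch of such a workload generator in Python. It is not IOMeter's actual implementation; the drive size and request count are arbitrary assumptions for illustration.

```python
import random

BLOCK_SIZE = 8 * 1024      # 8KB transfers, as in the Database pattern
DRIVE_BLOCKS = 1_000_000   # hypothetical addressable range (assumption)

def make_request(write_ratio):
    """Build one random-address request; write_ratio is the fraction of writes."""
    op = "write" if random.random() < write_ratio else "read"
    return (op, random.randrange(DRIVE_BLOCKS) * BLOCK_SIZE, BLOCK_SIZE)

# The test sweeps the share of writes from 0% to 100% in 10% steps...
write_ratios = [r / 100 for r in range(0, 101, 10)]
# ...while the queue depth varies from 1 to 256 (a few sample depths here).
queue_depths = [1, 4, 16, 64, 256]

def run_point(write_ratio, n=1000):
    """Issue n requests for one sweep point and report the observed write share."""
    issued = [make_request(write_ratio) for _ in range(n)]
    writes = sum(1 for op, _, _ in issued if op == "write")
    return writes / n
```

Each (write ratio, queue depth) pair defines one measurement point; a real benchmark would keep up to `queue_depth` requests outstanding and measure operations per second at each point.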
You can follow this link to view the tabulated IOMeter results: Database patterns.
We will build diagrams for request queue depths of 1, 16 and 256.
Once again the two SSDs from Western Digital behave identically: the junior model is clearly based on smaller-capacity chips but shares the senior one's architecture. The drives fall into two groups in this test. The first and faster group includes the SSDs from Intel: the X25-E is, as expected, better than its cousin, especially at writing. More surprisingly, the RAID array of two X25-V drives can challenge the X25-E across a wide range of loads, falling behind only at very high percentages of writes.
The rest of the SSDs make up the second, slower group. The WD drives are better at pure reading and at write percentages up to 20%, while the Kingston, for its part, keeps its performance up at high percentages of writes. Only at 60% writes does it falter, slowing down to 100 operations per second. Still, these three SSDs (and their two controllers) look much more appealing than the notorious JM602B controller, which used to be a total failure at high percentages of writes.
When the request queue is longer, the X25-V array improves its standing. The RAID0 driver seems to have highly effective deferred-writing algorithms, which give the array a large advantage over the X25-E everywhere but at extremely high percentages of writes.
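Deferred writing of this kind can be pictured as a small write-back cache in the driver: incoming writes are absorbed and coalesced, and only batches reach the drives. The sketch below is a toy illustration under that assumption, not the RAID driver's actual code.

```python
class DeferredWriteCache:
    """Toy write-back cache: absorb writes, coalesce repeat writes to the
    same address, and flush in batches (illustrative only)."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.pending = {}  # address -> data not yet written to the drives

    def write(self, addr, data):
        self.pending[addr] = data  # a repeat write to addr is coalesced
        if len(self.pending) >= self.capacity:
            return self.flush()
        return 0

    def flush(self):
        n = len(self.pending)
        self.pending.clear()  # here the batch would be issued to the drives
        return n
```

Because repeated writes to the same address collapse into one and the rest go out in batches, the host sees writes complete quickly until the write share gets so high that the cache is constantly forced to flush, which matches where the array loses its advantage.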
Winding up this part of our tests, we want to show you diagrams that illustrate each SSD’s performance at five different request queue depths.
Intel’s SSD controllers clearly work well with long request queues: they detect a long queue and scale their performance accordingly.
The effect of RAID0 is easy to observe: the performance growth in the left part of the diagram extends rightwards, towards higher percentages of writes, as the request queue grows longer.
The JM612-based controller from Toshiba is very different from its predecessors, including the JM602B, either thanks to cache memory or to serious changes in its operation algorithms. First, this SSD can reorder requests, producing a performance growth at request queue depths other than 1. The reordering is not too effective, as the graphs for queue depths of 4 and 256 requests are almost identical, but NCQ support is welcome nonetheless. Second, the controller copes successfully with low percentages of writes, which was an insurmountable task both for the JM602B controller installed in the first-generation Kingston V series and for the Toshiba T6UG1XB installed in the Kingston V+.
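The benefit of NCQ comes from the controller being free to service queued requests out of order. A classic illustration is elevator-style scheduling by logical block address; the sketch below shows the idea only and is not a claim about the JM612's actual algorithm.

```python
def ncq_reorder(queue, head_lba=0):
    """Elevator-style reordering: service queued LBAs in ascending order
    starting from the current position, then wrap around (illustrative only)."""
    ahead = sorted(lba for lba in queue if lba >= head_lba)
    behind = sorted(lba for lba in queue if lba < head_lba)
    return ahead + behind
```

With a queue depth of 1 there is never more than one outstanding request, so there is nothing to reorder and no gain; only a deeper queue gives the controller a choice of service order, which is why performance grows at depths other than 1.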
The JMicron 612 controller in the Western Digital drives is better than its predecessor, too. It also features NCQ (to a limited extent) and delivers higher performance at both reading (the read speed exceeds that of the JM618, and the difference cannot be explained by their possibly different flash memory) and writing (alas, the write speed is still much lower than that of the Intel and Indilinx controllers). We get the impression that JMicron developed this controller on the basis of the JM602 but borrowed some ideas from the Samsung PB22-J, because the graphs have a characteristic shape.