Multithreaded Read & Write Patterns
The multithreaded tests simulate one to four clients accessing the virtual disk simultaneously; the clients’ address zones do not overlap. We’ll discuss the diagrams for a queue depth of 1 as the most illustrative ones: at a queue of 2 or more requests, the speed depends little on the number of applications.
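The access pattern just described can be sketched as follows. This is a minimal illustration, not the actual test tool: the zone and block sizes are assumptions, and a temporary file stands in for the virtual disk. Each "client" thread reads sequentially inside its own zone, so the threads' offsets never overlap.

```python
import os
import tempfile
import threading

ZONE_SIZE = 1 << 20      # 1 MiB zone per client (assumed size for the sketch)
BLOCK = 64 * 1024        # 64 KiB per read request (assumed)
NUM_CLIENTS = 4

# A temporary file stands in for the virtual disk in this demonstration.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\0" * (ZONE_SIZE * NUM_CLIENTS), 0)

totals = [0] * NUM_CLIENTS

def client(i: int) -> None:
    # Sequential reads confined to zone i; no offset touches another zone.
    base = i * ZONE_SIZE
    for off in range(base, base + ZONE_SIZE, BLOCK):
        totals[i] += len(os.pread(fd, BLOCK, off))

threads = [threading.Thread(target=client, args=(i,)) for i in range(NUM_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

os.close(fd)
os.unlink(path)
print(totals)  # each client has read exactly its own 1 MiB zone
```

On real hardware, the interesting question is how the array's total throughput changes as NUM_CLIENTS grows, which is what the diagrams below show.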
There is an odd performance ceiling at about 210MBps that most of the arrays hit at one thread. We pondered this phenomenon and then looked at the table of multithreaded test results (you can do this by clicking the links above). It turned out the controller simply wasn’t given a long enough request queue: as the queue gets longer, the speeds grow. As a result, the fastest arrays can only show their best under high loads. For example, the four-disk RAID0 reaches its top speed at a queue depth of four requests, while the eight-disk RAID0 doesn’t reach its top speed even at a queue depth of eight.
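To make the queue-depth effect concrete, here is a hedged sketch of how a benchmark can keep several read requests in flight at once. Queue depth is emulated here with a thread pool (the file and block sizes are assumptions); on real hardware, a deeper queue is what lets a many-disk array keep all of its spindles busy at once, which a single outstanding request cannot do.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 64 * 1024            # 64 KiB per request (assumed)
FILE_SIZE = 8 * 1024 * 1024  # 8 MiB test region (assumed)

fd, path = tempfile.mkstemp()
os.pwrite(fd, os.urandom(FILE_SIZE), 0)

def read_at(off: int) -> int:
    # One read request; pread lets all workers share a single descriptor.
    return len(os.pread(fd, BLOCK, off))

def run(queue_depth: int) -> int:
    # With queue_depth workers, up to queue_depth requests are outstanding
    # at any moment -- the software analogue of a deeper controller queue.
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        return sum(pool.map(read_at, range(0, FILE_SIZE, BLOCK)))

bytes_qd1 = run(1)   # one request in flight at a time
bytes_qd8 = run(8)   # same total work; a real array would finish this faster

os.close(fd)
os.unlink(path)
```

Both runs read the same amount of data; the difference on a real controller would show up as wall-clock time, not as the byte count.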
But let’s get back to the multithreaded tests and see what we have when we add a second read thread. After all, we are interested in the effect this will have on speed, not in the specific numbers.
So, everything is good enough with two threads except that the four-disk RAID6 slows down abruptly. The other arrays slow down by less than 50%. The RAID0 arrays and the eight-disk arrays of every type are especially good at processing two threads.
The arrays that were good at processing two threads speed up a little here while the others slow down. There are no dramatic changes, though.
There is again an odd barrier when the arrays are writing in one thread, although the limit is lower this time. The checksum-based arrays suffer the most: their speeds are ridiculously low, about 3MBps. The RAID10 arrays are slower than the single drive, too.
The checksum-based arrays improve somewhat with the addition of a second write thread. Interestingly, the eight-disk RAID0 improves notably, too. The other arrays lose some speed, but not much; the single drive slows down more than they do.
And when we add even more threads, every array accelerates save for the four-disk RAID10, which has problems again. Multiple threads must have an effect similar to increasing the queue depth.
As for the problem arrays, they remain slow (slower than the single drive) at any combination of threads and queue depth.