Performance in FC-Test
For this test, two 32GB partitions are created on the virtual disk of the RAID array and formatted in NTFS and then in FAT32. A file-set is then created on the first partition, read from the array, copied within the same partition, and finally copied to the second partition. The time taken to perform each operation is measured and the speed of the array is calculated from it. The Windows and Programs file-sets consist of a large number of small files, whereas the other three patterns (ISO, MP3, and Install) each include a few large files.
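FC-Test reports these numbers itself, but the metric is simple: total bytes moved divided by elapsed time. A minimal Python sketch of the idea (the file counts, sizes and helper names are ours, not FC-Test's):

```python
import os
import shutil
import tempfile
import time

def throughput_mbps(total_bytes, elapsed_s):
    """FC-Test-style metric: throughput in MB/s (hypothetical helper)."""
    return total_bytes / elapsed_s / 1e6

def make_fileset(root, count, size):
    """Create a miniature 'file-set' of `count` files of `size` bytes each."""
    os.makedirs(root, exist_ok=True)
    for i in range(count):
        with open(os.path.join(root, f"f{i:04d}.bin"), "wb") as f:
            f.write(b"\0" * size)
    return count * size

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "src")

    t0 = time.perf_counter()
    nbytes = make_fileset(src, count=64, size=4096)  # many small files, Programs-like
    create_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    shutil.copytree(src, os.path.join(tmp, "near"))  # copy within the same volume
    copy_s = time.perf_counter() - t0

    print(f"create: {throughput_mbps(nbytes, create_s):.1f} MB/s, "
          f"copy: {throughput_mbps(nbytes, copy_s):.1f} MB/s")
```

Real runs use multi-gigabyte file-sets precisely so that the elapsed time dwarfs per-file overhead and cache effects.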
We’d like to note that the copying test is indicative of the array’s behavior under complex load: when copying files, the array is in fact serving two request streams at once, one for reading and one for writing.
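The two-stream nature of a copy can be sketched as a reader thread filling a bounded buffer while a writer thread drains it; the queue stands in for the copy buffer, and all names here are hypothetical:

```python
import queue
import threading

def copy_stream(read_chunk, write_chunk, chunks):
    """Model a file copy as two concurrent streams: one reading, one writing."""
    buf = queue.Queue(maxsize=4)  # stands in for the copy buffer

    def reader():
        for _ in range(chunks):
            buf.put(read_chunk())
        buf.put(None)  # sentinel: reading is done

    def writer():
        while (chunk := buf.get()) is not None:
            write_chunk(chunk)

    r = threading.Thread(target=reader)
    w = threading.Thread(target=writer)
    r.start(); w.start()
    r.join(); w.join()

# Usage: "read" from a source list, "write" into a destination list.
src = [b"block0", b"block1", b"block2"]
it = iter(src)
dst = []
copy_stream(lambda: next(it), dst.append, len(src))
print(dst == src)  # → True
```

Because both streams hit the same array at once, a copy stresses the controller much like the mixed-load patterns of a synthetic multithreaded test.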
This test produces a large amount of data, so we will only discuss the results of the Install, ISO and Programs patterns in NTFS, which illustrate the most characteristic uses of the arrays. You can use the links below to view the other results:
The RAID0 arrays are obviously the best at creating files. However, the speeds are far from what you could expect from multi-disk arrays built out of fast HDDs when writing large files. If we disregard the obviously poor performance of the single disk on the LSI controller, the write speed should equal the number of disks multiplied by 100MBps (the rate at which modern HDDs can deliver data from the platter).
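That expectation is simple arithmetic, sketched below; the 100MBps per-disk figure is the assumption stated above, and the function name is our own:

```python
PER_DISK_MBPS = 100  # assumed per-disk platter transfer rate

def ideal_raid0_mbps(disks, per_disk=PER_DISK_MBPS):
    # RAID0 stripes each request across all members, so the ideal
    # linear speed scales with the number of disks.
    return disks * per_disk

print(ideal_raid0_mbps(4))  # → 400
print(ideal_raid0_mbps(8))  # → 800
```

Measured results fall short of these ceilings because the controller, the bus, and per-file overhead all take their toll before the platters become the bottleneck.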
The four-disk RAID0 arrays have inexplicable problems with certain file-sets.
The RAID5 and RAID6 arrays are surprisingly good at writing, second only to the eight-disk RAID0. The degraded RAID5 even takes the lead: it seems to save a lot of time on checksum calculations. The same cannot be said about the RAID6 degraded by two disks, which writes very slowly.
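The checksums in question are parity blocks: for RAID5, the bytewise XOR of the data blocks in a stripe (RAID6 adds a second, differently computed syndrome). A minimal sketch of the computation, not the controller's actual code:

```python
from functools import reduce

def raid5_parity(blocks):
    """Bytewise XOR of equal-sized data blocks (the RAID5 checksum)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"\x0f\x00", b"\xf0\x00", b"\x00\xff"]
print(raid5_parity(stripe))  # → b'\xff\xff'
```

A degraded RAID5 has one disk fewer to update per stripe, which is one plausible reason it writes faster, while a RAID6 that has lost two disks must reconstruct data on every access, hence its very slow writes.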
When reading, the arrays run into the same limit we saw in the multithreaded test. The OS usually reads with a short request queue depth, and as a result the controller proves to be faster at writing than at reading.
The checksum-based arrays hit the same speed limitation at a queue depth of 1, too, although it only applies to the healthy arrays. The degraded arrays slow down because they have to recover data from the checksums. It should be noted that the RAID6 arrays (both healthy and degraded by one disk) are as fast as the RAID5.
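Recovering data on a degraded array is the same XOR run backwards: the missing block is the XOR of the surviving data blocks and the parity, and that extra pass over every surviving disk is what slows the degraded arrays down. A sketch under the same assumptions as above:

```python
def xor_blocks(*blocks):
    """Bytewise XOR of equal-sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"ab", b"cd", b"ef"
parity = xor_blocks(d0, d1, d2)       # written alongside the data
rebuilt = xor_blocks(d1, d2, parity)  # degraded read: d0's disk is gone
print(rebuilt == d0)  # → True
```

On a healthy array a read touches one disk per block; on a degraded one, every read of the lost disk's data touches all the survivors, which is why the penalty shows up even at a queue depth of 1.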
Copying within the same partition seems to be largely determined by read speed. The arrays are similar in speed, the eight-disk arrays enjoying only a small advantage.
When copying within the same partition, the RAID5 arrays are somewhat better than the RAID6 arrays with the same number of disks: the read speed limitation must combine with the RAID5 arrays’ ability to write a little faster than their opponents. Every degraded array, except for the RAID6 without two disks, performs well here.
There is nothing new in the Copy Far subtest: the standings are the same as in the previous one.