by Alexey Volkov, Nikita Nikolaichev
02/02/2004 | 11:18 PM
After a short break, we decided to resume our studies of dual-channel SerialATA RAID controllers’ performance.
We think this is necessary, since we have received three new controllers we haven’t yet tested and also because we used to run the tests with pre-release versions of the BIOSes and drivers. Since then, all companies have released official drivers and the controllers themselves are widely spread in stores.
So we’ve got eight dual-channel SATA RAID controllers for our tests. They can be split into three categories.
The first category includes two controllers, integrated into mainboard chipsets.
We will start out with the controller integrated into Intel ICH5-R South Bridge, the first “revolutionary” controller of the kind.
We tested it using versions 188.8.131.5268 of the BIOS and the driver.
VIA Technologies followed Intel’s example in creating South Bridges with integrated SATA RAID controllers. So, welcome the VT8237 South Bridge.
We tested the VIA VT8237 chip using the BIOS version 2.01 and the driver version 5.0.2195.210.
Both “chipset” controllers have an advantage over “discrete” ones since they are attached to the South Bridge directly, rather than via the PCI bus. It means that the maximum data-transfer rate to and from the HDD can be higher than the peak bandwidth of the PCI bus (133MB/s). The North and South Bridges of the VIA PT800 chipset are connected with the 8x V-Link bus that has a peak bandwidth of 533MB/s; the Intel 875/865 chipsets link the Bridges by means of the Hub-Link bus with a bandwidth of 266MB/s. Although the bandwidth of the V-Link and Hub-Link buses can be claimed by other devices of the South Bridge, the RAID controllers should “theoretically” feel better than if they were attached to the PCI bus. And the PCI bus is exactly how the remaining RAID controllers among our eight testing participants will communicate with the chipset.
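A quick back-of-the-envelope check makes the bus argument concrete. The bus bandwidths below are the peak figures quoted above; the ~50MB/s per-drive streaming speed is an assumed round number for 7200rpm SATA drives of this era, not a measured value:

```python
# Can a two-drive RAID0 array saturate the bus it hangs off?
# Bus figures are the peak bandwidths quoted in the text; the per-drive
# streaming speed of 50MB/s is an assumption for illustration only.
BUS_BANDWIDTH_MBS = {
    "PCI (32-bit/33MHz)": 133,
    "Hub-Link (Intel 875/865)": 266,
    "8x V-Link (VIA PT800)": 533,
}

def raid0_headroom(per_drive_mbs: float, drives: int, bus_mbs: float) -> float:
    """Return leftover bus bandwidth (MB/s) when the array streams at full speed.

    A small or negative value means the bus, not the drives, is the bottleneck.
    """
    return bus_mbs - per_drive_mbs * drives

for bus, bw in BUS_BANDWIDTH_MBS.items():
    print(f"{bus}: {raid0_headroom(50, 2, bw):+.0f} MB/s of headroom")
```

Even under these generous assumptions the PCI bus keeps only about 33MB/s of headroom, which the controller shares with every other PCI device, while the chipset links have bandwidth to spare.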
The second category includes RAID controllers on the Silicon Image 3112 chip. Since the Sil3112 is a PCI-to-SATA controller chip, a RAID controller built on it is a typical firmware RAID. In other words, it is implemented on the software level.
We tested Silicon Image Sil3112 controller using the BIOS version 42/4 and the driver version 1.0.032.
Adaptec 1210SA controller features a very functional BIOS – you don’t often see a fully-fledged BIOS, like that of a good SCSI RAID controller, in such a low-cost product. We tested it using the BIOS v.1.0-OB1016 and the driver v.1.00.07.
LSI SATA150-2 controller is the last one in this category. Like the controller from Adaptec, this one is based on the Silicon Image Sil3112 chip. One peculiarity of this solution is the extremely long time it takes to create a RAID1 array: about 100 hours! That’s unacceptably long, but you cannot do anything about it in the BIOS settings.
We tested this controller using the BIOS v.5.0.11011038R and the driver v.184.108.40.2063.
The third category includes Promise FT S150 TX2 plus, HighPoint RocketRAID1520, Acard 6890S.
Promise FT S150 TX2 plus is one of the most easily recognizable controllers, and it is the only one to have a driver that can change the request caching mode (Write Back / Write Through).
We tested it using the BIOS v.1.00.0.37 and the driver v.1.00.0.37.
HighPoint RocketRAID1520 is not a native SATA controller. You can see in the snapshot below that it is based around the well-known HPT 372 chip and works with SATA drives through the ubiquitous bridges from Marvell, the 88i8030 chips. It’s hard to tell if the converters affected the performance negatively in any way. As we are going to see below, the converters are not the main problem of this controller.
We tested it using BIOS v.2.355 and driver v.2.355.
The last SATA RAID controller is Acard-6890S. The controller from the obscure Acard caused us much trouble during testing. For example, it wouldn’t work with Seagate Barracuda 7200.7 SATA drives (it never got through the Winbench tests), and we had to retest all other controllers with Maxtor DM Plus 9 SATA HDDs. Moreover, this controller was very unstable throughout the tests and would occasionally crash the array.
It was tested with the BIOS and drivers version 2.10.
We didn’t include the 3Ware 8500 controller in today’s testing session for three reasons. First, it is not formally a dual-channel controller. Second, this controller carries cache memory chips onboard and thus belongs to a different class than the reviewed controllers. Third, we are going to dedicate one of our upcoming articles exclusively to the 3Ware 8500 controller.
We had to use two mainboards, since we’ve got two chipset-integrated controllers. Although the mainboards are based on different chipsets, they are on a similar level of performance, so the use of two platforms shouldn’t seriously affect the test results. And even if it does affect the results, we just can’t test the performance of a chipset-integrated controller apart from the performance of the chipset itself.
We used the following benchmarking software: WinBench99, Intel IOMeter and FC-Test v.0.5.3.
We tested RAID arrays of two types: RAID0 (stripe) and RAID1 (mirror).
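To recap what these two array types actually do with your data, here is a toy sketch of the address mapping, not any vendor’s implementation; the 64KB stripe size is a hypothetical figure chosen for illustration:

```python
# Toy model of RAID0 (stripe) and RAID1 (mirror) address mapping.
# The 64KB stripe size is an assumption for illustration, not a tested setting.
STRIPE_SIZE = 64 * 1024

def raid0_map(offset, n_disks=2, stripe=STRIPE_SIZE):
    """Map a logical byte offset to (disk index, byte offset on that disk).

    Consecutive stripes alternate between the disks, which is why sequential
    transfers can approach n_disks times the speed of a single drive.
    """
    stripe_no, within = divmod(offset, stripe)
    disk = stripe_no % n_disks
    disk_offset = (stripe_no // n_disks) * stripe + within
    return disk, disk_offset

def raid1_map(offset, n_disks=2):
    """A mirror writes every block to all disks at the same offset."""
    return [(disk, offset) for disk in range(n_disks)]
```

For example, with two disks the second 64KB stripe of the logical volume lands at the very beginning of the second disk: `raid0_map(64 * 1024)` returns `(1, 0)`.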
Before the tests, the drives were switched into the “fast” mode using the Hitachi Feature Tool. We also wrote something into each HDD sector before the tests to avoid the influence of the write verify feature on the results of our tests (see this review for details: Real Maxtor DiamondMax Plus 9 HDD with 80GB Platters Reviewed!).
For the WinBench99 tests we formatted the drives in FAT32 and NTFS as one partition with the default cluster size (formatting for FAT32 was performed with Paragon Partition Manager). We ran the tests seven times each and took the highest result for further analysis. The HDDs didn’t cool down between the tests. For FC-Test we split the array into two logical partitions, 32GB each. We used the following patterns in the IOMeter tests: Sequential Read, Sequential Write, DataBase, WorkStation, FileServer and WebServer. For a detailed description of the patterns you can refer to our previous reviews.
The following table lists the versions of the BIOS and drivers for each tested controller:
The array receives a stream of read/write requests with a request queue depth of 4. Every minute the data block size changes, so we get a dependence of the linear read/write speed on the data block size.
HighPoint RR1520 failed in the Sequential Read test – its speed hits against a mysterious ceiling of “31MB/s”. It’s like the good old times of ATA/33 returned!
Promise in the WB configuration and the VIA controller are the leaders on small blocks, while the same Promise in the WT configuration was slower than the rest of the controllers on 4KB-64KB blocks. The other controllers form a dense group.
All controllers show the same speed of 100MB/s on big data blocks, save for the loser from HighPoint.
The excellent performance of the VIA controller came as a surprise, so we checked one supposition. As you remember, the high read speed of the Promise controller is achieved by pre-processing of requests. If the driver “sees” that the array receives sequential requests, it “glues” them together, producing a higher data-transfer rate. So it was quite logical to suppose that the similar performance of the VIA controller is achieved through the same request-processing mechanism. But in that case the CPU load should be above average!
Our supposition was confirmed as the VIA controller produced a CPU workload of 77% when processing 512-byte data blocks, while other controllers, save for the Promise (WB), loaded the CPU by about 50% on the same blocks. So, as usual, every performance gain comes at a certain price.
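The “gluing” idea the driver presumably uses can be sketched in a few lines; this is a toy model of request coalescing, not the actual driver code:

```python
def coalesce(requests):
    """Merge back-to-back sequential requests (offset, size) into larger ones.

    A toy model of the "gluing" described above: scanning and merging the
    queue costs CPU time, which is the price paid for fewer, larger transfers.
    """
    merged = []
    for offset, size in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == offset:
            # This request starts exactly where the previous one ends: extend it.
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((offset, size))
    return merged
```

Three adjacent 512-byte requests thus become one 1.5KB transfer: `coalesce([(0, 512), (512, 512), (1024, 512)])` returns `[(0, 1536)]`, while requests with a gap between them are left alone.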
Let’s now turn to writing:
The Promise controller in the WB mode wins the write test (thanks to the driver’s ability to “enlarge” the write request). The integrated controller from Intel jumped above the rest of the controllers on 8KB blocks, which is a very curious fact, I should say.
The controller from HighPoint performed like any other at first, but soon fell hopelessly behind. On the other hand, it is definitely better at writing than at reading!
It is funny, but the LSI controller couldn’t keep up the tempo on the 2KB-32KB stretch. I guess the driver is the one to blame here, since the other two controllers on the Silicon Image chip showed higher speed.
This pattern serves to check out the controller’s ability to process a mixed stream of read/write requests with random-address 8KB data blocks. By changing the ratio of reads/writes we can see how well the driver sorts out the mixed stream of requests.
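A stream of this kind is easy to model; the sketch below generates random-address 8KB requests with a configurable write share, the knob the pattern sweeps. The 32GB working area is an assumption borrowed from our partitioning setup:

```python
import random

BLOCK = 8 * 1024      # the DataBase pattern uses 8KB data blocks
SPAN = 32 * 2**30     # assumed 32GB working area, per the test setup

def database_stream(write_share, count, seed=0):
    """Yield (op, offset) pairs: random-address, 8KB-aligned reads and writes.

    write_share is the fraction of write requests in the mixed stream.
    """
    rng = random.Random(seed)
    for _ in range(count):
        op = "write" if rng.random() < write_share else "read"
        offset = rng.randrange(SPAN // BLOCK) * BLOCK  # pick an aligned block
        yield op, offset
```

Feeding such a stream to an array and varying `write_share` from 0% to 100% reproduces the sweep shown in the diagrams below.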
The table is unusually large, and I gave up the idea of painting it up (it was a horrible multi-colored picture). Let’s do this in diagrams. First, let’s see how the controllers behave under the linear workload (request queue depth = 1).
All controllers start out with similar results, but as the share of writes increases, we see two controllers, the VIA and the Promise in the WT mode, getting to the top. We know them from the Sequential Read pattern, though they rank differently here. The same Promise in the WB mode joins them by the end of the diagram (100% writes).
Note that the integrated controller from VIA is the best at doing deferred writes! Is it going to be the favorite of our today’s tests?
We increase the workload to 16 outstanding requests to see the following:
The VIA controller got a bit faster, but still lost to the others! Promise in the WT mode sped up more than everyone else. The VIA controller approaches the leaders as the share of write operations grows.
Now we increase the workload to the maximum of 256 outstanding requests:
There is no definite leader, but the VIA controller fell far back at reading. When there are more read requests, the integrated controller from Intel seems preferable, while the controller from Promise in the WT mode is better than the others when the writes share is higher.
Now let’s check the performance of our testing participants in the patterns that emulate the disk subsystem workload of a typical server.
Let’s compare the average performance of the tested controllers (the arithmetic mean of the controller speed under five different workloads).
Promise in the WT mode was the best of all under server-like workloads, but the other controllers didn’t lose hopelessly to it, save for the ones from VIA and Adaptec. Anyway, we’ve got a kind of rating “ladder”.
Note also the good results shown by Acard and the superiority of the reference Silicon Image controller over the products of respected manufacturers of server RAID controllers like LSI and Adaptec, which are based on the same chip from Silicon Image.
Promise in the WT mode remained the leader in the WebServer pattern, but the Intel controller is close behind. The outsiders, VIA and Adaptec, got one more fellow in their camp: HighPoint, although VIA lost even more here. This is quite a predictable result if we recall VIA’s behavior in the DataBase pattern.
Curiously enough, the two integrated controllers are quite different: one is among the leaders, and the other is at the very bottom.
This pattern features a greater share of write requests, so we should see different results here:
Although the leader hasn’t changed (it’s still the controller from Promise in its WT incarnation), the integrated controller from VIA Technologies shares the second position with the one from Silicon Image. As we have noticed earlier, this controller likes write requests and low workloads.
Anyway, Acard and Intel remained among the leaders. The controllers from LSI, HighPoint and Adaptec are the slowest, although the gap is small.
The Workstation32 pattern differs from the regular Workstation pattern in that it uses only the first 32GB of the array’s address space.
By narrowing the “operational zone” of the test, we evened out the results of the controllers. The leader is the same, while the poor controller from VIA is now closer to the end of the list. The outsiders exchanged places: Adaptec is now faster than HighPoint.
We use the Winbench package to test the disk subsystem as if it worked in a desktop computer. We format the array of total 240GB capacity in NTFS using the standard tools (the default cluster size is 4KB) and in FAT32 using Paragon Partition Manager (the cluster size is 32KB). We also perform tests on 32 gigabytes of the array in NTFS and FAT32 (using the Disk Manager of Windows 2000 for partitioning).
So, let’s start with a FAT32 logical volume of 240GB capacity.
Here are the linear read speeds of the controllers at the Beginning and End of the logical volume:
HighPoint’s results change for the worse compared to the rest of the controllers, but this only confirms the results we received in the Sequential Read pattern.
You can check the linear read graphs for the arrays:
Next come two integral tests, Business Disk Winmark and High-End Disk Winmark.
The controller from HighPoint stands apart from the others in Business Disk Winmark. This performance may be caused by a peculiarity of its driver. The other controllers run at lower, but acceptable, speeds.
The two integrated controllers took the first two places in High-End Disk Winmark. I guess this success can be explained by the more advantageous connection type (these controllers are integrated into the chipset; they are not “PCI-devices” and, in theory, have higher bandwidth to access the memory).
HighPoint showed a surprisingly poor result. This one-sided optimization of the driver is rather strange, you know.
Let’s see what we have in NTFS:
The speeds went down in NTFS compared to FAT32, but the driver of the HighPoint works brilliantly: much better than the others. Promise (WB) and Silicon Image perform fine, too. The rest of the controllers showed very similar results.
The same is true for High-End Disk Winmark: the speeds are slower than in FAT32, and all controllers produce similar results, save for the one from HighPoint that fell behind the group.
Now, we will check the performance of our racers in a logical volume of 32GB capacity.
First come the Winbench99 results for FAT32 file system:
Again, we examine the results in Business Disk Winmark and in High-End Disk Winmark separately.
The driver of the HighPoint controller was born to run the Business Disk Winmark test! :)
The speed of the RocketRAID is about 50% higher than that of its closest rival (Intel ICH5-R); the rest of the controllers closely follow Intel’s chip.
Intel ICH5-R wins High-End Disk Winmark, while the VIA controller fell from the top into the middle of the list. The HighPoint controller couldn’t repeat its excellent performance once again.
To finish with Winbench99, let’s take a look at the results of the controllers with a 32GB NTFS volume.
The controller from HighPoint is the fastest here, just like it was in the test that used the full capacity of the array. It is followed by Promise in its WB and WT configurations, by the two integrated controllers, then by the controllers on the Silicon Image chips and the solution from Acard.
We have already seen a similar picture. The integrated controllers regained their leadership, the Silicon Image family is at the bottom, but the controller from HighPoint is the slowest of all. We would like to give credit to the Acard controller, as it showed really good result here.
So, we’ve got the daintiest dish left – File-Copy Test. We stick to our traditional methodology: we create two logical volumes, 32GB each, on the array and format them in NTFS and FAT32. We create a set of files on the first volume, then this set is read from the array, then copied into a folder on the first volume (copy-near – inside one and the same logical disk), and finally copied onto another disk (copy-far). FC-Test version 0.5.3 differs from version 0.3 in the zip emulation function. You can measure the time spent on zipping and the average speed (as we know the total size of the files included into the pattern).
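The metric behind all the copy numbers below boils down to dividing the total size of the file set by the wall-clock time of the operation. A minimal sketch of that measurement, with hypothetical directory paths and no claim to match FC-Test’s internals:

```python
import os
import shutil
import time

def copy_speed_mbs(src_dir, dst_dir):
    """Copy a file set and return the average speed in MB/s:
    total bytes copied divided by elapsed wall-clock time."""
    total = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, names in os.walk(src_dir)
        for name in names
    )
    start = time.perf_counter()
    shutil.copytree(src_dir, dst_dir)  # dst_dir must not exist yet
    elapsed = time.perf_counter() - start
    return total / elapsed / 2**20
```

Pointing `src_dir` and `dst_dir` at the same logical volume gives the “copy-near” figure; pointing them at different volumes gives “copy-far”.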
So let’s view the results. NTFS comes first. We will begin with creating and reading the files.
The worst results are marked with red, the best ones with blue.
For a more illustrative picture I drew diagrams for the Create and Read tests.
The VIA controller is definitely the best of all at creating (writing) files as it showed the highest speed on four file sets out of five. The Acard controller was the slowest at writing.
Note also the good average speed of the Intel ICH5-R and Promise (WT) controllers as well as of the LSI controller, which performed really fast at creating the Install pattern. I would also like to single out the performance of the HighPoint controller when it worked with small files (Programs and Windows patterns).
The rating list is different in the read test. The VIA controller is closely followed by the one from Silicon Image, Adaptec and, in some cases, Intel ICH5-R. It is quite natural that the integrated controllers are faster than others at reading. It is also understandable that the controllers based on the Silicon Image chip showed high read speed: they have always been good at “pumping” the PCI bus. What surprises me most is the poor result of the LSI controller (based on the same Silicon Image chip). I guess the driver is the one to blame here.
HighPoint is far behind the competitors, but it was quite predictable after the results of SequentialRead and Disk Transfer Rate tests.
Now, here are the copy results:
There are few changes: the controller from VIA is on top, and Promise (WT) competes with the solution from Intel for the second position. The reference controller from Silicon Image holds the third position. HighPoint is too slow again.
Well, let’s now switch to FAT32 and run the same tests.
The controller from VIA is brilliant again. The one from Intel is good at creating large ISO-like files, but creates smaller files much slower. Note that the Acard controller works much better in FAT32 than in NTFS.
The solution from Adaptec shows good reading capabilities in FAT32, but the others did just a little worse, save for the unlucky HighPoint.
Our File-Copy Test session ends with the actual copy patterns in FAT32.
It’s the same as in NTFS, only the overall speed is higher. The integrated controllers from Intel and VIA are on top; the Promise (WT) gets close to them in some tests.
Weighing up all pros and cons, I can’t say there is any definite leader. Here is a list of the controllers tested today with their weak and strong points emphasized for your convenience:
ACARD-6890S is an average product that never fell to the bottom of the list, but also never rose above third place. That’s fine for a first attempt – we haven’t tested any controllers from Acard before. I only wish this controller were more compatible with hard disk drives (in particular, it wouldn’t work with Seagate Barracuda 7200.7 SATA HDDs or with Western Digital 360GD).
Adaptec 1210SA lost to Silicon Image in all tests (except FC-Test), although they are based on the same chip. It is clear that the BIOS and the driver of Adaptec controller are not as fresh as those coming with the Sil reference controller.
HighPoint RocketRAID1520 flunked all the tests. It can only boast good speed in the Business Disk Winmark test and at creating files in FC-Test.
Intel ICH5-R controller showed modest results in the synthetic IOMeter test, without any remarkable moves for the better or worse, but proved to be a strong combatant in Winbench99 and FC-Test, always finding itself in the “top three”.
LSI SATA150-2 controller was better in synthetic tests than the Adaptec, but lost to it in WinBench99 and in FC-Test.
Promise FT S150 TX2 Plus did well in Intel IOMeter tests, especially in the WT mode. As for WinBench99 and FC-Test, it showed somewhat poorer results than the others. The speed of this controller greatly depended on the driver’s caching algorithms (WB/WT).
Silicon Image Sil3112 controller rarely won a test in IOMeter, but was always present in the leading trio. However, this stability vanished completely in WinBench99 and FC-Test: it would fly high in one test and slump in the next one.
The last controller (in alphabetic order) is the one integrated into the VIA VT8237 South Bridge. It wasn’t too speedy in the IOMeter tests, especially under high workloads. It did somewhat better in WinBench99, and was always among the best in FC-Test.
P.S.: If you are choosing a controller for your home/office computer, we would recommend you carefully study the results of FC-Test, since it simulates the most usual types of workload on the disk subsystem of such a computer.
We will publish the test results for a RAID1 array in our next review. Stay tuned!