Comparative Roundup: Six SAS RAID Controllers

In this roundup we compare, side by side, six SAS RAID controllers we have reviewed before: solutions from 3ware, Adaptec, Areca, HighPoint, LSI and Promise, all in one article.

by Aleksey Meyev
11/09/2009 | 09:31 AM

Testing RAID controllers is a daunting and unrewarding task. Although the process can be automated well enough, it takes a very long time. There are too many long tests producing too many numbers. But as we have already tested as many as six SAS RAID controllers of the latest generation, one from each major brand, we can’t miss the opportunity to throw them all together into a comparative review.


Right now, the market is calm. Existing controllers cope with today’s loads well enough. The PCI Express bus has ousted PCI-X while the serial interfaces SAS and SATA have replaced the parallel SCSI and PATA. The SAS 2.0 interface is going to boost bandwidth from 3 to 6Gbps but it is not here yet, so the manufacturers have some time to catch their breath and release firmware updates that fix various bugs and optimize performance. We have to confess that we compare the controllers with the firmware versions that were the newest at the time we acquired and tested them. Perhaps this is not quite fair as the controllers tested most recently have an advantage, but we do not have the opportunity to test as many as six controllers all at the same time.

Before making any comparisons, we want to explain what results we expect. First of all, we expect stable operation and high performance, preferably predictable performance. Unfortunately, stability is hard to verify. While all the controllers worked flawlessly on our testbed, some users report problems with specific combinations of controllers and hard disks. We can’t check every combination, so you should follow the official compatibility lists or just take the risk. Today’s controllers are generally less susceptible to inexplicable glitches than their predecessors. As for the performance factor, there are some things that we try to focus on.

RAID0 is the simplest case to discuss. The controller only has to cope with the load correctly, deliver a high sequential read speed and perform as many random-address operations per second as possible. Multithreaded loads and the processing of files are a special case. Practice suggests that all controllers slow down when processing multiple threads of data, but the performance hit varies. It is especially hard when the requests are coming in from a rather short queue: most controllers have surprisingly low speeds under such seemingly easy conditions. The speed delivered when processing files is usually much lower than what we see in synthetic benchmarks. Sometimes a controller may even write the same file (or set of files) faster than it reads it. This is partly due to the amount of cache memory, which has grown to 256 or even 512 megabytes, but our test loads are designed to be bigger than the controller’s total cache. Thus, it is the controller’s firmware algorithms that must be held responsible for such capricious performance.

Mirror-based arrays, RAID1, RAID 10 and RAID1E, are somewhat more complex for a RAID controller (you can learn more about RAID1E here if you are not yet familiar with it). Besides the above-mentioned things, there are a few characteristic factors that concern mirror-based arrays only.

First of all, there is the controller’s ability to use both disks of a mirror pair for reading. The read/write head of one of the two disks is going to be closer to the necessary sector, so this “luckier” disk can yield the requested data faster. As a result, a controller that can make use of this technique has an advantage at random reading: its RAID10 array does not degenerate into a RAID0 built out of half the number of disks but can compete with and even outperform a same-size RAID0.
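To make the idea concrete, here is a minimal sketch of such a dispatcher (our own illustration with a deliberately naive head-position model, not any vendor’s actual firmware):

```python
# Hypothetical sketch: send a random read to whichever disk of the mirror
# pair is presumed "luckier", i.e. whose head last serviced a nearby LBA.
# Real firmware would also weigh rotational position and outstanding commands.

def pick_mirror_disk(last_lba, target_lba):
    """last_lba: dict disk_id -> LBA of the last request serviced by that disk."""
    return min(last_lba, key=lambda disk: abs(last_lba[disk] - target_lba))

# Example: disk 0 is parked near LBA 1,000,000, disk 1 near LBA 5,000,000.
heads = {0: 1_000_000, 1: 5_000_000}
print(pick_mirror_disk(heads, 4_800_000))  # -> 1, the "luckier" disk
```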

The second characteristic feature we always pay attention to is the ability (or lack thereof) to parallelize sequential reading across both disks of a mirror pair. If, while reading a long piece of sequentially located data, the controller does not just read symmetrically from both disks but sends each even-numbered request to one disk and each odd-numbered request to the other, the resulting sequential read speed can improve greatly. The size of this improvement usually depends not only on the size of the requested file but also on the size of the chunks the file is read in. Of course, we want to see a bigger performance gain at a smaller data chunk size.
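A minimal sketch of this even/odd splitting could look as follows (again our own illustration; the 64KB request size is just an assumption):

```python
# Hypothetical even/odd interleaving of a long sequential read across a
# mirror pair: even-numbered chunks go to disk 0, odd-numbered to disk 1,
# so both spindles stream data at the same time.

CHUNK = 64 * 1024  # request size in bytes (an assumption for illustration)

def split_sequential_read(start, length):
    """Yield (disk, offset, size) requests for a sequential read of `length` bytes."""
    for i, offset in enumerate(range(start, start + length, CHUNK)):
        size = min(CHUNK, start + length - offset)
        yield (i % 2, offset, size)   # alternate between the two mirror disks

# Reading 512KB from offset 0 sends chunks 0, 2, 4, 6 to disk 0
# and chunks 1, 3, 5, 7 to disk 1.
for request in split_sequential_read(0, 512 * 1024):
    print(request)
```

The same dispatching idea underlies the third optimization described next: once the firmware has recognized two independent sequential streams, it can pin each stream to its own disk of the pair.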

The third feature is the ability to split a multithreaded load between the disks of a mirror pair. If the controller can identify this kind of load (i.e. that it is being asked to read two separate fragments of sequentially placed data), it is ideal to send each thread of requests to its own disk of the mirror pair. We guess it is clear that this technique can produce a considerable performance boost, too.

Summing up, mirror-based arrays are not so simple to work with. The controller’s firmware has to effectively identify the type of load and apply the appropriate optimizations in order to meet our requirements. And we are not too strict, actually: all such algorithms have been around for a decade, having first been introduced in ATA controllers with very weak (by today’s standards) processors.

And finally, there are the rotated-parity arrays, RAID5 and RAID6. On one hand, the latest generation of controllers finds it easier to cope with such arrays as they are equipped with high-performance processors for calculating checksums (or even two checksums, as in RAID6). On the other hand, any flaws in firmware are now much more apparent, having previously been concealed by the insufficient speed of XOR operations. These array types are in ever higher demand because few users want to lose half of the disk capacity to a mirror-based array, while the amount of crucial information that requires fault-tolerant storage is growing at a tremendous rate. Why is RAID6 so popular? It’s simple. Today’s arrays grow to impressive sizes. Even without the newest disks, you can easily build a 30-terabyte array out of Seagate’s ST31000640SS drives with SAS interface. If one disk of such a huge array fails, restoring the array takes not a few hours but a few (or even many) days. For data to be secure during those days, RAID6 is preferred to RAID5 as it can survive the loss of not one but two disks simultaneously. Yes, a RAID6 suffers a terrible performance hit when two of its disks fail, but it is still usually better than a mirror-based array because the latter offers only half the useful capacity for the same number of disks.
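To put rough numbers behind this reasoning, here is a back-of-the-envelope sketch (our own illustration, not part of the test methodology) comparing the usable capacity of the fault-tolerant array types built from the same 1TB drives:

```python
# Usable capacity of n drives of `size_tb` terabytes each, per RAID level.
def usable_capacity(n, size_tb, level):
    if level == "RAID5":
        return (n - 1) * size_tb      # one drive's worth of parity
    if level == "RAID6":
        return (n - 2) * size_tb      # two drives' worth of parity
    if level == "RAID10":
        return (n // 2) * size_tb     # every block is mirrored
    raise ValueError(level)

# Thirty 1TB ST31000640SS drives, as in the example above:
for level in ("RAID5", "RAID6", "RAID10"):
    print(level, usable_capacity(30, 1, level), "TB")
# RAID5: 29 TB, RAID6: 28 TB, RAID10: 15 TB. RAID6 survives two failures
# yet keeps almost twice the usable space of the mirror-based array.
```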

Winding up this already too long introduction, we want to answer one popular question. Are these controllers any good at all when you can go and buy solid state drives that deliver astonishing performance? Yes, they are, despite that competition. The cost of HDD storage is still much lower than that of SSDs. SSDs also have lower capacities, so you need several of them for a serious disk subsystem, in which case it is still handier to connect them via RAID controllers. The performance factor is not so clear-cut, either. While SSDs (which are indeed multi-channel devices) are much better than any HDD-based array at reading, HDDs are not so hopeless at writing. Coupled with the fact that an SSD has a limited number of rewrite cycles, a RAID array built out of SAS drives still looks like the best choice for a disk subsystem that has to handle a large number of write requests: it is going to be both cheaper and more reliable.

Testing Participants

Here are the controllers we are going to compare today:

Adaptec RAID ASR-5805
Areca-1680x-16
3ware 9690SA-8I
HighPoint RocketRAID HPT4320
LSI MegaRAID SAS 8708EM2
Promise SuperTrak EX8650

You can click the links to read a detailed review of each specific controller, so we will only give you a brief description here. Unification seems to be the main trend among the manufacturers. They produce RAID controllers in large series with a unified design, the specific models differing only in the number of ports, the amount of onboard memory and the processor frequency. The latter parameter does not vary much, though. Four out of these six models are equipped with Intel’s dual-core IOP81348: the controllers from Areca, Adaptec and HighPoint use a 1.2GHz version whereas the Promise uses an 800MHz version of the processor. 3ware and LSI keep to their own solutions: the 3ware controller uses a 266MHz AMCC PowerPC 405CR and the LSI uses an LSISAS1078 (also of the PowerPC architecture) clocked at 500MHz.

The considerable reduction in RAM prices allows installing more memory on controllers: the LSI has 128 megabytes, the HighPoint and Promise, 256 megabytes. The other controllers have as much as 512 megabytes of onboard RAM.

Moreover, the memory of the Areca controller is not soldered to its PCB but installed as a separate module. Therefore we tested this controller twice: with its default 512MB memory and with a 2GB module. You’ll be surprised to see the results.

All the controllers support the most popular RAID levels, including two-level ones like RAID50. There are only minor differences that will hardly matter to most users (e.g. some controllers support RAID3 while others do not support RAID1E). Each of the six controllers has drivers for different operating systems, and each boasts an advanced OS-based management and monitoring system. All such systems use networking protocols to manage not only local but also remote controllers (located on other computers). By the way, it is the existing infrastructure that often determines the choice of controller brand for building a new subsystem: it is much better to have a handy, centralized management system than to support equipment from multiple brands simultaneously.

All of these controllers also support a battery backup unit, and we strongly recommend using one. Its cost is usually incomparably lower than the cost of the data stored on the RAID, which may get lost in case of a power failure if deferred writing is turned on. What happens when deferred writing is turned off can be seen from the example of the Promise controller in our tests: this controller does not allow turning deferred writing on unless a BBU is installed, and we could not get one in any shop over the last half-year. Yes, this controller is greatly handicapped in this test session, but we can’t do anything about that.

Testbed and Methods

We used the following software for this test session:

Testbed configuration:

Each controller was installed into the mainboard’s PCI Express x8 slot. We used Fujitsu MBA3073RC disks, installing them into the default rack of the SC5200 case. The controllers were tested with eight HDDs in the following modes:

We had to put the results of the four-disk and degraded arrays aside because there is already too much data.

The size of the stripe for each array type was set at 64 kilobytes.
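As an aside, the sketch below (our own illustration, not taken from any controller’s documentation) shows how a 64KB stripe unit maps a logical byte offset onto a disk and an offset within that disk in a plain eight-disk RAID0:

```python
STRIPE = 64 * 1024   # stripe unit used in our tests
DISKS = 8            # drives in the array

def map_raid0(logical_offset):
    """Map a logical byte offset to (disk index, byte offset on that disk)."""
    stripe_no = logical_offset // STRIPE
    within = logical_offset % STRIPE
    disk = stripe_no % DISKS
    disk_offset = (stripe_no // DISKS) * STRIPE + within
    return disk, disk_offset

print(map_raid0(0))           # (0, 0)
print(map_raid0(64 * 1024))   # (1, 0) - the next 64KB chunk lands on the next disk
print(map_raid0(512 * 1024))  # (0, 65536) - a full 512KB stripe has wrapped around
```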

Performance in Intel IOMeter

Database Pattern

In the Database pattern the disk subsystem processes a stream of requests to read and write 8KB random-address data blocks. The ratio of read to write requests changes from 0% to 100% in 10% steps throughout the test while the request queue depth varies from 1 to 256.
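The resulting load grid can be pictured roughly as follows (our own pseudo-configuration; the exact set of queue depths shown is our assumption, and the snippet is not IOMeter’s actual script format):

```python
# The Database pattern as a parameter grid (illustration only).
BLOCK_SIZE = 8 * 1024                                # 8KB random-address requests
WRITE_SHARES = [w / 100 for w in range(0, 101, 10)]  # 0%..100% writes, step 10%
QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32, 64, 128, 256]    # outstanding requests

for depth in QUEUE_DEPTHS:
    for writes in WRITE_SHARES:
        # run the 8KB random workload with this read/write mix and queue depth
        pass
```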

You can find the numeric results in our previous reviews dedicated to each particular controller. Here, we will only work with graphs and diagrams.

We will check out the minimum load first. The request queue depth is 1.

You may wonder why the controllers differ with RAID0 at such a short queue. It is deferred writing that makes the difference. While the controllers are all alike at reading, at high percentages of writes the controller has to quickly put a lot of requests into its cache and then flush the data to the disks. The Adaptec is the winner here. The LSI and Promise are slower than the others.

It is similar with RAID10: we’ve got the same leaders and losers at high percentages of writes. The LSI is especially poor, failing to cope with pure writing.

Now we can see some difference at reading, as the controllers can choose a “luckier” disk in a mirror pair to read data from. The HighPoint is better here. The LSI is excellent at pure reading but worse at mixed reads and writes.

Now we’ve come to the rotated-parity arrays. It is simple to write a single block of data to a RAID0 or RAID10, but with RAID5 each write operation actually translates into the reading of two blocks, two XOR operations, and two write operations. The Adaptec passes this test better than the other controllers. The 3ware is good at pure writing but slower than its opponents at mixed reads/writes. The HighPoint has problems caching requests, and the Promise does not have deferred writing at all; the latter’s performance hit is catastrophic.
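The read-modify-write sequence just mentioned can be sketched as follows (a simplified model with toy in-memory “disks”, ignoring caching and the full-stripe shortcut discussed later):

```python
# Toy in-memory "disks" so the example is runnable.
disks = {d: bytearray(1024 * 1024) for d in range(8)}

def read_block(disk, addr, size):
    return bytes(disks[disk][addr:addr + size])

def write_block(disk, addr, data):
    disks[disk][addr:addr + len(data)] = data

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(data_disk, parity_disk, addr, new_data):
    """Update one block: two reads, two XOR passes, two writes."""
    old_data   = read_block(data_disk, addr, len(new_data))    # read old data
    old_parity = read_block(parity_disk, addr, len(new_data))  # read old parity
    new_parity = xor(xor(old_parity, old_data), new_data)      # two XOR passes
    write_block(data_disk, addr, new_data)                     # write new data
    write_block(parity_disk, addr, new_parity)                 # write new parity

raid5_small_write(data_disk=2, parity_disk=7, addr=0, new_data=b"\xff" * 8192)
```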

The same goes for RAID6. The controllers behave exactly like with RAID5, but slower. The algorithm now includes the calculation and writing of a second checksum, but the controllers’ processors cope with that and their standings remain the same.

By the way, the Areca with 2 gigabytes of onboard memory performs almost exactly like with 512 megabytes. Won’t we see any performance records then?

Let’s increase the queue depth to 16 requests.

The controllers are surprisingly similar with RAID0: four graphs almost coincide. The Adaptec stands out with its more effective deferred writing, though. This cannot be explained by a large amount of cache memory because the 3ware and the Areca have as much of it (and the extra 1.5GB doesn’t show up at all, again). The latter controller even has an identical processor as well.

The LSI and Promise are lagging behind again, but the gap isn’t large.

The different combinations of deferred writing, request reordering and disk selection techniques produce widely varying results. The Adaptec is ahead at writing, again. This controller seems to insist that its deferred writing is the best! The LSI is downright poor at writing, having obvious problems with caching, which can hardly be explained by its having the smallest amount of onboard memory among the tested controllers.

However, the LSI successfully competes with the HighPoint and 3ware for the title of best reader from mirror-based arrays. Take note of how much faster these three controllers are than their opponents.

So, some controllers are better for writing, others for reading. You should start planning your server disk subsystem by determining what kind of load it is going to cope with. There are universal controllers, too. The 3ware is stable at any percentage of writes.

When there is a queue of requests, the controllers can more or less effectively cache them or perform multiple operations simultaneously. Just how effective are they, though? The Adaptec is good while the HighPoint is only half as fast as the Adaptec at writing (despite having exactly the same processor). The gap is not as catastrophic as at the shortest queue depth, though. The Promise is depressingly slow due to the lack of deferred writing.

The overall picture doesn’t change much with RAID6. One thing can be noted: for all its excellent writing, the Adaptec is slower than any other controller at highest percentages of reads. We’ll see at harder loads if this is a coincidence or not.

Unfortunately, we cannot say anything special about the 2GB Areca: the increased amount of onboard memory does not show up at all. This is very odd.

The controllers all cope well, each in its own way, with the hardest load. The Adaptec still shows the best writing capability, the 3ware is ahead at reading while the Areca is right in between at mixed reads and writes, showing the most stable behavior.

Surprisingly enough, we’ve got similar standings with RAID10. The leaders are the same while the LSI turns in a poor performance. Its problems with writing show up under this heavy load as a serious performance hit at any percentage of writes.

The huge queue saves the day for the Promise. At such a long queue depth some sorting of requests is done by the driver before they reach the controller. As a result, the Promise is as fast as the HighPoint here.

The Adaptec is still superior at writing and, like with RAID6 at a queue depth of 16 requests, is not so good at high percentages of reads. This must be a characteristic trait of this controller. Its forte is in writing.

We’ve got the same leaders with RAID6 while the HighPoint is obviously slow. There must be some flaws in its firmware. Its resources are wasted somewhere.

Disk Response Time

For 10 minutes IOMeter sends a stream of requests to read and write 512-byte data blocks with a request queue depth of 1. The total number of requests processed by the disk subsystem is much larger than its cache can hold, so we get a sustained response time that does not depend on the amount of cache memory.

Read response time is a peculiar parameter. On one hand, it is too concise a characteristic of a disk subsystem’s performance: it is next to impossible to draw any conclusions from this parameter alone. On the other hand, it is indicative of the disk subsystem’s reaction speed, i.e. how fast it can react to a request and hand the requested data to the system. With RAID arrays, this parameter depends first of all on the response time of the employed hard disk drives, but the controller contributes to it, too. The 3ware and Areca controllers are somewhat better than the others, being about half a millisecond faster than the worst controller (the Promise). This seems a trifle, but it is actually very hard to achieve a 0.5-millisecond advantage when the response time is below 7 milliseconds. In other words, these two controllers enjoy roughly an 8% advantage in this test.

We would also like to note the successful performance of the HighPoint and LSI with the mirror-based arrays. These controllers win 1 millisecond by effectively choosing the “luckier” disk in a mirror pair. This excellent result indicates excellent firmware algorithms that other developers should try to copy. RAID10 arrays are usually used for storing databases. If your database doesn’t fit entirely into the server’s system memory, you should not neglect this opportunity to improve its performance.

You might have predicted the leader in terms of write response time. It is the Adaptec which showed an excellent writing performance throughout IOMeter: Database. This controller is a little bit better than same-class opponents with each array type.

The HighPoint and Promise are poor with RAID5 and RAID6. The former has problems with deferred writing while the latter has no deferred writing at all, its write response time being huge as the consequence.

Take a look at the results of the Areca with 2GB of memory. In three cases its performance is the same as with 512MB, but the 2GB version looks much better with RAID6. Perhaps this controller just needs a heavier load to make full use of those 2 gigabytes of cache. Our test conditions may be viewed as easy for the controllers because each disk has an individual data channel and we don’t use expanders. Perhaps we would see the benefit of 2GB of memory if there were multiple disks on each channel and the 3Gbps of bandwidth were not enough for all of them (this is not a far-fetched situation but a real case when a rack with two dozen disks is attached to one external connector). Alas, we have no opportunity to check this supposition, and the 2GB cache hasn’t shown anything exceptional as yet.

Random Read & Write Patterns

Now we will see how the controllers’ performance at random reading and writing depends on the size of the processed data block.

We will discuss the results in two ways. For small data chunks we will draw graphs showing the dependence of the number of operations per second on the data chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach helps us evaluate the disk subsystem’s performance in two typical scenarios: working with small data chunks is typical of databases, where the number of operations per second is more important than sheer speed; working with large data blocks is much like working with small files, where the traditional measurement of speed in megabytes per second becomes more relevant.
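The two views are related by a trivial conversion, so the same measurements are simply presented in the unit that is more telling for the given block size (a quick sketch of the arithmetic):

```python
# Converting between operations per second and MB/s for a given block size.
def iops_to_mbps(iops, block_size_bytes):
    return iops * block_size_bytes / 1_000_000

def mbps_to_iops(mbps, block_size_bytes):
    return mbps * 1_000_000 / block_size_bytes

print(iops_to_mbps(2000, 8 * 1024))        # 2000 x 8KB ops/s is only ~16 MB/s
print(mbps_to_iops(200, 2 * 1024 * 1024))  # 200 MB/s of 2MB requests is ~95 ops/s
```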

Let’s start with reading.

The controllers go neck and neck here. Indeed, we shouldn’t have expected a big difference with RAID0. On the other hand, the Areca and 3ware cope with small requests a little bit faster than the other controllers.

The LSI and HighPoint boast an indisputable advantage with RAID10. Their excellent algorithms of reading from mirror pairs leave no chance to their opponents. The 3ware is somewhat better than the rest of the controllers, getting further away from them as the size of the data block increases.

When it comes to RAID5 and RAID6, all the controllers are close to each other again. There is actually nothing to optimize here: just take data from the disks one by one, checksum calculations not being a serious issue anymore.

The controllers differ even with RAID0 when processing large data blocks as their performance is influenced by sequential read speed and look-ahead reading. The LSI is in the lead, followed by the 3ware. It is harder to see the losers. The HighPoint and Areca are poor with blocks the size of a few megabytes but speed up after that, outperforming the Promise and Adaptec.

The Areca acts up, reading faster with 512MB rather than 2GB of cache memory. We can offer only one explanation: the increased amount of cache has increased cache access latencies. We just cannot think of any other reason for this fact.

When reading rather large data blocks from RAID10, choosing the luckier disk is not a winning strategy anymore. The leaders change as a result: the 3ware is first, followed by the LSI.

The Areca is downright disappointing. In both versions this controller suffers an inexplicable performance hit. There must be some flaws in its firmware and we see them now.

The standings with RAID5 are the same as with RAID0, which is quite all right.

The overall picture remains the same with RAID6 as well. The Adaptec is the only exception, not accelerating quickly enough on very large data blocks.

Now let’s check the controllers out at random-address writing.

We’ve got pure writing here, and the controllers that cache write requests more effectively are on the winning side. The Adaptec is first, proving once again that its writing is good. The LSI and Promise are slow here.

Interestingly, the 2GB Areca is inferior to its 512MB version. Perhaps it is harder to seek data in the larger cache.

Well, we’ve got an indisputable leader at writing indeed. As for the losers, the low performance of the Promise is due to its lack of deferred writing. The performance hit of the LSI with RAID10 is inexplicable.

The controllers behave similarly with RAID5 and RAID6. The Adaptec is in the lead when processing small data blocks. It is followed by the 3ware which is faster on 512-byte blocks than on 2KB ones. This must be the performance peak of this controller’s very specific architecture. It is the HighPoint and Promise that have the biggest problems, though. The HighPoint cannot cope with writing small data blocks. It has a performance hit when the data blocks are smaller than 32KB. It has the same processor as the Adaptec, so it’s not the processor’s fault. The Promise is even worse here. Having no deferred writing, it gets stifled by the stream of small requests.

The controllers are very different when writing large data blocks. First of all, the Adaptec has problems. While excellent at handling small random-address chunks of data, it is poor at writing large chunks; it seems to have some speed limit fixed at 120MBps. The similarly specced Areca pulls ahead. The 2GB version of the Areca is considerably slower, though, and seems to have a speed limit, too.

And finally, the LSI and Promise have an unexpected reduction of speed on 512KB data blocks. The Promise then tries to catch up with the leaders on larger data chunks whereas the LSI makes no such attempt. We do not know why these controllers have problems writing full stripes (512KB is eight disks multiplied by 64KB).

None of the controllers has such serious problems when writing to RAID10 except that the Promise has a small performance reduction in the same area. The Areca is in the lead in both its versions.

The results vary wildly when the controllers are working with checksum-based arrays. The Adaptec and 2GB Areca hit some inexplicable performance limits again. The LSI also encounters such a barrier with RAID6. The Promise is much slower than the best controllers which keep close to each other. Writing large data blocks allows the Promise to work with full stripes, which helps to make up for its lack of deferred writing (to remind you, when a full stripe is being written, the checksum is calculated only once for the whole stripe, which cuts the overhead greatly). It cannot catch up with the faster controllers, though.
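The full-stripe shortcut mentioned in the parentheses can be sketched like this (again a simplified model of the principle, not any particular controller’s implementation):

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_full_stripe_parity(data_chunks):
    """Parity for a full stripe is a single XOR pass over the new data chunks,
    so no old data or old parity has to be read back from the disks."""
    return reduce(xor, data_chunks)

CHUNK = 64 * 1024
new_stripe = [bytes([d]) * CHUNK for d in range(7)]  # 7 data chunks on an 8-disk RAID5
parity = raid5_full_stripe_parity(new_stripe)
# The controller then issues eight writes (seven data + one parity) and no reads,
# unlike the read-modify-write path needed for small random writes.
```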

Sequential Read & Write Patterns

IOMeter sends a stream of read and write requests with a request queue depth of 4. The size of the requested data block is changed each minute, so that we can see how the disk subsystem’s sequential read/write speed depends on the size of the data block. This test is indicative of the maximum speed the disk subsystem can achieve.

All the controllers, save for the 3ware (whose performance is affected by its specific architecture), deliver about the same top speed but reach it on different data blocks. The Areca and HighPoint reach their maximum speed on 16KB data blocks. The LSI does the same on 64KB data blocks and the Adaptec, on 128KB blocks. The Promise needs 512KB blocks (a full stripe with 64KB for each of the eight disks) to show its top speed.

Sequential reading from RAID10 shows the controllers’ ability to parallelize read requests across the two disks of a mirror pair. None of them behaves ideally, but the Areca is obviously better and faster than the others (and it delivers the same performance irrespective of the amount of onboard memory). The Areca is also in the lead when reading small data chunks. The Adaptec and HighPoint are good, too, but the latter is too capricious about the size of the data chunk. The Promise, LSI and 3ware have disappointing results in this test.

The standings do not change when it comes to reading from RAID5 and RAID6. The Areca and HighPoint are as good at sequential reading as the Adaptec was at random writing.

The 3ware is quite a disappointment. Its top speed is lower than that of the other controllers. It is rather slow with small data chunks and has big problems with large data chunks.

The HighPoint is in the lead when writing to RAID0. The Areca follows the leader closely in the 512MB version but falls behind in the 2GB version. We can’t explain this. The same problem plagues the Adaptec which has such a low speed of writing as if it worked with a single disk rather than a RAID array.

The 2GB Areca has the same problems when writing to RAID10. The Adaptec behaves oddly. It is the best of the average-performance controllers on small data blocks and is the overall best in terms of top speed. However, it has an inexplicable performance hit on 128KB data blocks. The 512MB Areca looks best overall. It has excellent results on small blocks and a very stable and high speed with large ones. The HighPoint falls behind, again. It has problems with large data blocks.

Few controllers can deliver a high speed of sequential writing to RAID5. Good results only come from the Areca (in the 512MB version as its 2GB version has problems again) and HighPoint. The LSI and 3ware are much slower than the leaders while the Adaptec has some problems again. The Promise only gets some speed on 128KB data chunks.

The RAID6 results are similar to the RAID5 ones, so we will only point out a few main differences. The Areca works well in both versions now. The HighPoint slows down on large data blocks and the LSI has huge problems there, too. The Adaptec and Promise still show depressing results. And while the low performance of the Promise is explained by its turned-off deferred writing, we can find no explanation for the Adaptec.

Multithreaded Read & Write Patterns

The multithreaded tests simulate a situation when one to four clients access the virtual disk at the same time; the clients’ address zones do not overlap. We will discuss the diagrams for a request queue depth of 1 as the most illustrative ones: when the queue is 2 or more requests long, the speed does not depend much on the number of applications. This is also the most realistic situation. What is especially interesting, our experience suggests that not all controllers can deliver their maximum speed at the minimum queue depth.

The Areca and HighPoint are better than the others when reading one thread from RAID0. Unlike the others, they lose little to the short queue depth. The Adaptec and LSI deliver only half of their maximum speed whereas the Promise and 3ware deliver only one fourth of their best.

The controllers all slow down when reading two threads and then keep the same speed when reading three and four threads. It is the Areca that wins here, delivering maximum performance under any load. The Promise and 3ware are too slow.

When the controllers are reading one thread from RAID10, they have the same standings as with RAID0, except that the Areca enjoys a bigger advantage over the other models. It is better at reading alternately from the mirror pairs. This controller is also better than the others at reading multiple threads. Take note how much faster it is at reading two threads, indicating that the threads are effectively divided between the different disks in mirror pairs.

Among the other controllers, it is the Adaptec (at two threads) and HighPoint (at three and four threads) that have good results.

Reading from RAID5 and RAID6 is done in the same way. The Areca and HighPoint cope better with one thread. The HighPoint is somewhat better than the others at reading multiple threads. The controllers are all rather good here, excepting the Promise and 3ware which have lower speeds.

Writing to RAID0 produces a funny picture. On one hand, multithreaded writing is simpler as it is facilitated by deferred writing mechanisms. On the other hand, writing even one thread at a minimum request queue depth is not a simple job for some controllers. The Areca and HighPoint are the only models to deliver really high speeds there. And the Areca is the only controller to keep the same (or even slightly higher) speed when writing multiple threads of data. There are now three controllers that disappoint us with their low results: the Promise (the reason for its low writing performance is obvious) is joined by the Adaptec and LSI.

The Areca is good when writing to RAID10, too. The Adaptec improves with this array type as well. It is competing with the Areca when writing multiple threads. The LSI and Promise are on the losing side, again.

The controllers behave in the same way with both RAID5 and RAID6 arrays. When writing one thread, the Areca and HighPoint are much better than the others but the Areca remains a single leader at multiple threads. The 2GB version of the Areca is less than half as fast as the 512MB version of the same controller. It looks like this controller is not designed for such a large amount of onboard memory.

Lacking deferred writing, the Promise is hopelessly slow under this load.

Web-Server, File-Server and Workstation Patterns

The drives are tested under loads typical of servers and workstations.

The names of the patterns are self-explanatory. The Web-Server pattern emulates a server that receives read requests only whereas the File-Server pattern has a small share of write requests. The request queue is limited to 32 requests in the Workstation pattern.

You can view all the graphs by clicking this link. We will be discussing only summary diagrams.

When there are only read (and mostly random-address) requests in the queue, most of the arrays look identically good: the difference is less than 5%, and the type of RAID is rather unimportant as RAID6 is as fast as RAID0. There is only one important exception: RAID10 arrays are fast on those controllers that can effectively find the luckier disk in a mirror pair, i.e. on the 3ware, HighPoint and LSI.

The addition of write requests into the load makes the results more diverse. The RAID5 and RAID6 arrays are now slower than the RAID0 and RAID10 ones. And we can also see the individual peculiarities of each controller. The 3ware and HighPoint are still the best with RAID10 (the LSI falls behind because, as you can see in the graph, its performance stops growing beyond a certain request queue depth). As expected, the Promise is much worse than the other controllers with checksum-based arrays. It is only with these arrays that the HighPoint is slower than the leaders.

The same goes for the Workstation pattern but the competition is tougher due to the increased share of writes and the different order of requests in the load. As a result, the Areca wins with RAID0 whereas the HighPoint and 3ware are still better than the others with RAID10. The HighPoint and Promise fall far behind the leaders with RAID5 and RAID6.

When the test zone is limited to 32GB (i.e. to the fastest tracks of the HDDs), the standings are different. For example, the LSI and Promise are obviously slow with RAID0. The advantage of the 3ware and HighPoint with RAID10 shrinks to a minimum: it is not important to choose the luckier disk anymore because the response time is low everywhere. The Promise falls behind with RAID10, too. The overall performance of the controllers is higher with RAID5 and RAID6 but the HighPoint and Promise are still lagging behind the leaders.

Performance in FC-Test

For this test two 32GB partitions are created on the disk and formatted in NTFS and then in FAT32. A file-set is then created, read from the disk, copied within the same partition and copied into another partition. The time taken to perform these operations is measured and the speed of the disk is calculated. The Windows and Programs file-sets consist of a large number of small files whereas the other three patterns (ISO, MP3, and Install) include a few large files each.

We’d like to note that the copying test is indicative of the drive’s behavior under complex load. In fact, the disk is working with two threads (one for reading and one for writing) when copying files.

This test produces too much data, so we will only discuss the results achieved with the Install, ISO and Programs file-sets in NTFS. The rest of the results can be found in the reviews dedicated to each specific controller.

When writing the Install file-set, the Areca and Adaptec are ahead with RAID0 and RAID10 whereas the Promise and LSI are on the losing side.

The 3ware controller joins the Adaptec as a leader with RAID5 and RAID6. The LSI isn’t so bad now while the Promise is downright hopeless.

Take note that the Areca is much worse processing files with 2GB of onboard memory rather than with the default amount. That’s very, very odd.

The Areca is much better than the others with RAID0 and RAID10 but is challenged by the HighPoint with the checksum-based arrays. In every case, save for RAID0, the Adaptec and LSI perform depressingly slowly. The Promise has no chance at all in the writing test.

The controllers are similar to each other when writing smaller files. Take note of the performance hit that occurs when the average file size is reduced.

The LSI and HighPoint controllers are somewhat better than the others at reading the mixed files of the Install pattern. The 3ware and Promise are slow (the Promise should have done better even with turned-off deferred writing).

The Areca is fast with large files, enjoying a large lead over its closest pursuer, the HighPoint. The Areca’s RAID10 results show how important it is to be able to read from the mirror disks alternately.

The 3ware and Promise are slow again and the LSI looks poor when reading from RAID10 (it is the tradeoff of the excellent algorithm of finding the luckier disk).

We’ve got the same losers with small files. The leaders are new: the LSI and HighPoint are now contending for first place.

When copying the mixed Install pattern, the Adaptec, HighPoint and LSI are ahead with three array types. The LSI falls behind with RAID10, being replaced by the Areca, which takes top place with that array type. The Promise is always the worst controller due to its lack of deferred writing.

There are only two leaders with large files, namely the HighPoint and the Areca. The latter is much faster with RAID10 but somewhat slower with the other array types. The other controllers are much slower than the leaders.

Copying small files of the Programs pattern is rather difficult for all the controllers. The LSI and Adaptec are just a little better than the others.

Conclusion

However trivial this may sound, we have to say that there are no perfect things in this imperfect world. When it comes to hardware, every controller has its highs and lows, and you should take into account that some controllers are better suited to certain loads than others. You should also be aware that firmware can dramatically change a controller’s behavior. We will now name the winners and losers of this test session based only on the performance the controllers deliver with their current firmware.

The Adaptec RAID ASR-5805 and Areca-1680x-16 leave the best overall impression. These two models passed our tests in a stable manner, showing fewer flaws in their firmware algorithms. The Adaptec is overall somewhat better for databases whereas the Areca is superior at multithreaded operations and at processing files. In any case, both are worthy representatives of today’s SAS RAID controller generation. Interestingly, both are based on very similar platforms. They are equipped with the same processor and have the same amount of onboard memory.

Yes, the Areca allows upgrading its memory but our tests have not revealed any benefits from the larger amount. On the contrary, the 2GB version would often prove to be a little slower. As we have written above, 2GB of cache memory may come in handy when there are a lot of disks connected to this controller via expanders and the interface bandwidth is not high enough to fully satisfy all of them.

The 3ware 9690SA-8I and the HighPoint RocketRAID HPT4320 are good but not without blemish. The former would be a very good controller if it were not for its low performance with files; thus, it is better suited for database applications, in which it shows itself as a balanced and powerful device. The HighPoint has excellent RAID10 algorithms and very good writing, but it has too many problems with checksum-based arrays. Hopefully, these problems will be solved in the next versions of its firmware, and the choice of good controllers will then be broader.

The LSI MegaRAID SAS 8708EM2 and Promise SuperTrak EX8650 are somewhat disappointing. Of course, the Promise was handicapped in our tests due to the lack of deferred writing, but its reading performance was often too slow in comparison with its opponents, too. The LSI has too many flaws, although its processing of small files and its excellent algorithm of selecting the luckier disk in a mirror pair are impressive. Still, firmware keeps improving, so every controller has a chance to get better. On the other hand, the existing infrastructure of a specific brand’s controllers is often a more important factor in shopping decisions, unless performance is downright poor.