HighPoint RocketRAID 3220 and Promise SuperTrak EX8350 RAID Controllers Comparison

In today’s comparative article we discuss two controllers, one from Promise and one from HighPoint, and investigate one of the most interesting aspects of their operation: the performance of degraded RAID 5 and RAID 6 arrays.

by Aleksey Meyev
10/08/2008 | 05:54 PM

We haven’t reviewed RAID controllers from HighPoint (we tested the RocketRAID 2320 about two years ago) or Promise for quite a while. Now that we have resumed controller testing in our labs, we decided to take one SATA RAID controller from each brand’s lineup. HighPoint is represented by the RocketRAID 3220, which is not the top model but far from the weakest in the company’s lineup; the HighPoint website positions it as an enterprise-level product. Its opponent has similar capabilities: the SuperTrak EX8350 belongs to the latest series of Promise controllers that support SATA drives only.

 

The HighPoint RocketRAID 3220 comes with a PCI-X interface, which was considered the de-facto standard for controllers of its class not so long ago, whereas the Promise SuperTrak EX8350 uses PCI Express, which is steadily taking over the market. This takeover was to be expected: PCI Express provides higher bandwidth (especially when all 16 lanes are used), and the recently released second version of the standard doubles the data-transfer rate, leaving bandwidth to spare. The bandwidth of PCI-X, on the other hand, may be insufficient for multi-disk arrays built out of modern HDDs. PCI Express is also more appealing to mainboard makers: its slot is smaller and its wiring is simpler. No wonder it has become popular not only as the interface for graphics cards but also as a peripheral interface on server mainboards. So this is probably our last review of a PCI-X controller, unless some manufacturer suddenly releases a very interesting model with that interface.
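To put rough numbers behind that claim (a back-of-the-envelope sketch of our own, using theoretical peak figures rather than vendor measurements): a 64-bit/100MHz PCI-X slot tops out at about 800MB/s of shared bandwidth, which eight modern drives streaming at roughly 100MB/s each can already saturate, whereas a first-generation PCI Express x8 link offers about 2GB/s in each direction.

```python
# Back-of-the-envelope comparison of theoretical peak bandwidth in MB/s;
# real-world throughput is lower because of protocol overhead.
pcix_64bit_100mhz = 8 * 100    # 8-byte-wide bus at 100MHz = 800 MB/s, shared by all devices
pcie_x8_gen1      = 8 * 250    # 8 lanes at 250 MB/s per lane = 2000 MB/s per direction

drives = 8                     # both controllers reviewed here have eight ports
per_drive_stream = 100         # MB/s, roughly what a modern SATA drive sustains sequentially
array_stream = drives * per_drive_stream

print(pcix_64bit_100mhz, array_stream, pcie_x8_gen1)   # 800 800 2000
```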

What makes this review special is that we will test the controllers not only in ordinary operation modes (with one to four disks in JBOD, RAID0, 1, 10, 5 and 6) but also in degraded mode (degraded RAID5 and RAID6), i.e. when one or two disks in the array have failed. We didn’t tear a drive out of a running array; we simply powered it down. There are three such modes in this test session: a RAID5 array with one disabled disk and a RAID6 array with one or two disabled disks. The latter situation is highly unlikely, but if RAID6 can work in such a mode, we must check it out, too.

You may wonder if it makes any sense to test RAID arrays in such extraordinary operating modes. If one or two disks in the array fail, it is time to run to the nearest shop in search of a replacement! Well, some equipment cannot just be shut down in case of a failure. Many servers operate on a 24/7 basis, and shutting them down for maintenance may lead to serious financial losses. And if you choose RAID6 for its ability to keep working even when two of its disks fail, it must work well without them until your IT department replaces the disks. As the old joke goes, “In a corporation employing tens of thousands of people, someone dies every day, but that shouldn’t be a reason to halt the work each time it happens.”

So what happens with a degraded array? Writing is simple enough: the controller acts as if the array were intact, except that the data destined for the failed disk is simply not written. There is nothing wrong with this because checksums are still created for each stripe, and the information written this way can be read back afterwards. When the disk is replaced, all the information can be restored using the standard algorithm; there is no need to figure out which part of the written data is complete and merely needs redistribution and which part needs to be reconstructed.

When it comes to reading from a degraded array, the controller has to cope with an additional load: every time data that resided on the missing HDD is requested, it has to be reconstructed from the checksums. This is actually worse than it may sound: the controller must read the whole stripe containing the missing data and restore it using the XOR operation. As a result, the performance of a degraded array is not the same as the performance of a same-type array originally built out of one disk fewer.
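To make the reconstruction step concrete, here is a minimal sketch in Python (our own illustration, not any controller’s firmware): a block that resided on the failed disk is simply the XOR of all the surviving blocks of the same stripe, parity included. A rebuild is essentially this same operation repeated over every stripe of the array.

```python
# Minimal sketch of RAID5 reconstruction: the missing block equals the XOR
# of the surviving data blocks and the parity block of the same stripe.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equally sized byte blocks together."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def read_missing_block(stripe, failed_index):
    """Recover the block of the failed disk from the rest of the stripe."""
    surviving = [blk for i, blk in enumerate(stripe) if i != failed_index]
    return xor_blocks(surviving)

# Example: a four-disk RAID5 stripe with three data blocks and one parity block.
d0, d1, d2 = b"\x11" * 4, b"\x22" * 4, b"\x44" * 4
parity = xor_blocks([d0, d1, d2])      # written while the array was still healthy
stripe = [d0, d1, d2, parity]

assert read_missing_block(stripe, failed_index=1) == d1   # d1 recovered without disk 1
```

This is the extra work the controller’s processor has to do on every read that touches the failed disk, which is why a degraded four-disk array is slower than a healthy three-disk one.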

When the failed disk is finally replaced, the RAID controller has an especially hard time. Besides ordinary operations, it has to fill the new disk with the data that belongs on it. That is, the controller must go through the entire bulk of data, restoring the missing information from the checksums and writing the reconstructed data and checksums to the replacement disk. Of course, you can’t expect the controller to be fast under such a heavy load. Well, we are merciful in our tests and limit ourselves to the load the controller has to cope with when there are failed disks in the array, without the additional rebuild load. After all, you can always schedule the rebuild for a period when the load on the array is at its lowest, whereas a disk failure happens unexpectedly and, as a rule, at the most inappropriate moment.

The next section will give you more information about the controllers we are going to test today.

Testing Participants

HighPoint RocketRAID 3220

 

As we said above, the RocketRAID 3220 doesn’t represent the most advanced SATA RAID controller series from HighPoint: the developer also offers the 35xx series, which features a more advanced processor (the 800MHz Intel IOP341) and a PCI Express interface on every model. The 3220 has a good processor, too: an Intel IOP331 clocked at 500MHz. As for onboard memory, the 3220 comes with 128 megabytes of DDR2 SDRAM with error correction, whereas the senior models have 256 megabytes (the RocketRAID 3320, which is similar to the 3220 but has a PCI Express interface, is equipped with 256MB of memory, too).

The accessories are scanty: a user manual, a CD with drivers, and two cables, each of which attaches up to four SATA drives to one connector of the controller.

 

As you can see, the controller has two connectors and thus supports up to eight drives. Unfortunately, the controller doesn’t support a backup battery, unlike its PCI Express counterpart. The developer just hints that it is time for a serious user to transition to the newer interface.

Well, you are going to be satisfied with this controller’s RAID capabilities if you upgrade to it. It supports all the basic array types (JBOD, RAID0, RAID1, RAID10 and RAID5) as well as those you can only get with an advanced controller (RAID6 and RAID50). It even supports RAID3 for connoisseurs (we wonder how many users could instantly recall the specific features of this rarely used RAID type).

The last thing we’d like to note is that this series includes only low-profile controllers thanks to dense component mounting.

Promise SuperTrak EX8350

 

Unlike its opponent, the SuperTrak EX8350 belongs to the latest series of Promise’s RAID controllers with SATA support only (no SAS support). The family is large, including four-, eight-, twelve- and sixteen-port models. The 8- and 16-port models are also available in a PCI-X version (their model names then end in “00” instead of “50”).

The two controllers we are going to review today are equals in terms of resources, though. Like the RocketRAID 3220, the Promise SuperTrak EX8350 is equipped with a 500MHz processor and 128 megabytes of memory.

 

The developer installed ordinary SATA connectors on this controller, so you will find as many as eight SATA cables included in the kit. The cables end in straight rather than L-shaped connectors, which may make them inconvenient to connect and route in a low-profile server because the ports are placed very densely, in two rows, on the controller’s PCB. You will also find a few power adapters for hard drives in the box.

The Promise SuperTrak EX8350 supports a backup battery (purchased separately).

Like many other controllers from this brand, this one has a connector for an interface cable of the SuperSwap 4100 cage (for monitoring the speed of the fan in the cage and the temperature of the HDDs).

Testbed and Methods

The following benchmarks were used:

Testbed configuration:

The controllers were installed into 64-bit 100MHz PCI-X and PCI Express x8 slots depending on their interface.

 

The Western Digital WD740GD (Raptor 2) hard disks were installed into the standard boxes of the SC5200 system case. The controllers were tested with four HDDs in the following modes:

The controllers were set to Performance mode for maximum performance during the tests. This mode enables deferred writing and look-ahead reading both for the controller (in its own buffer memory) and for the disks. The Performance mode should not be used without a cache battery because there is a high risk of losing data in case of a power failure.

Performance in Intel IOMeter

Database Pattern

In the Database pattern the disk array processes a stream of requests to read and write 8KB random-address data blocks. The share of write requests changes from 0% to 100% throughout the test while the request queue depth varies from 1 to 256.
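For illustration, here is a rough sketch (our own, not IOMeter’s code) of the kind of request stream this pattern produces: 8KB requests at random offsets, with the write share stepped from 0% to 100%.

```python
# Rough illustration of the Database pattern: 8KB requests at random offsets,
# with the share of writes stepped from 0% to 100% (not IOMeter itself).
import random

BLOCK = 8 * 1024            # 8KB requests
DISK_BLOCKS = 1_000_000     # size of the test zone in 8KB blocks (arbitrary)

def database_pattern(write_share, count=1000, seed=0):
    """Yield (operation, byte offset) pairs for one step of the pattern."""
    rnd = random.Random(seed)
    for _ in range(count):
        op = "write" if rnd.random() < write_share else "read"
        yield op, rnd.randrange(DISK_BLOCKS) * BLOCK

# The test sweeps the write share in 10% steps at every queue depth and
# records the number of operations per second at each point.
for share in (step / 10 for step in range(11)):
    requests = list(database_pattern(share))
    # ...submit `requests` to the array at the chosen queue depth...
```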

We’ll be discussing diagrams but you can view tabled data using the following links:

We’ll discuss the results for queue depths of 1, 16 and 256.

The HighPoint controller looks better at minimum loads. It copes better with a single disk and delivers higher performance than its opponent with RAID1 and RAID10 arrays. We can see that the Promise controller has problems with RAID1: its RAID1 array turns out to be slower than the single drive at mixed loads (at 25% to 75% writes), indicating certain flaws in the controller’s algorithms.

The controllers are both good with RAID0 at minimum load. They are different, though. The HighPoint is better at writes-only load whereas the Promise is better when there is a small share of reads to be processed.

Both controllers have problems with RAID5. The three-disk array proves to be faster than the four-disk RAID5 with the HighPoint controller. The difference is small, yet it makes us wonder what performance RAID5 arrays with even more disks would deliver. The two arrays are roughly similar with the Promise controller, and both slow down considerably and inexplicably at high percentages of writes.

Take note of the performance hit suffered by the degraded arrays. Four minus one is not the same as three when we are talking about RAID arrays.

There are no smooth graphs in the RAID6 diagram, yet the arrays can be compared in general. The Promise copes better when in ordinary mode, losing to the opponent at pure writing only. Moreover, its degraded minus-one array proves to be faster at reading than the ordinary RAID6 on the HighPoint controller. However, the Promise controller has problems again when there is a higher percentage of writes. Perhaps its processor cannot cope with the XOR load due to some peculiarities in the firmware.

When two disks fail, the HighPoint proves to be competitive to the Promise and even superior to it at writing.

When we increase the queue depth to 16 requests, the HighPoint wins with a huge lead with the mirror arrays. The Promise is so much slower at high percentages of reads that it seems to have problems reordering requests for mirror arrays. This is the only explanation we can think of concerning the performance slump in the left part of the diagram.

The Promise controller has problems with RAID0 at every load, save for high percentages of writes. The problems are not so huge as with the mirror arrays, yet considerable anyway. This controller is especially poor when there is about the same amount of writes and reads. Is it the indication that the controller can’t put the available buffer memory to good use? You can see that the controllers are roughly similar under favorable conditions, i.e. in the right part of the diagram. Yes, the processor frequency and the amount of memory are important, but the operation algorithms are important as well. And Promise should certainly revise them for this controller.

The HighPoint is somewhat better than its opponent with RAID5, too. Both controllers have one peculiarity here, though. The three-disk arrays are faster than the four-disk ones when there are more writes than reads. A 500MHz processor must be unable to cope with four HDDs with a spindle rotation speed of 10,000rpm.

The degraded arrays are again slower than the three-disk arrays of the same type. This is especially conspicuous on the HighPoint controller.

It is the Promise controller that proves to be worse with RAID6 in ordinary operating mode. But when it comes to degraded arrays, the HighPoint overtakes the Promise at write operations while still losing at reading. Curiously enough, the HighPoint survives the loss of one disk more easily than the other controller.

When the queue is increased further to 256 requests, the HighPoint accelerates at reading. The Promise, unfortunately, still shows modest performance, especially under mixed load. It really seems to have big problems working with the request queue.

The Promise shows the same problems at reading and under mixed load when it comes to RAID0. In some cases, the performance of its four-disk array is as high as that of the competitor controller’s two-disk array. That’s a depressing result.

The picture doesn’t change much with RAID5: the HighPoint is still in the lead, and the four-disk arrays still have problems at high percentages of writes.

The overall picture is the same with RAID6, too. But we can see some characteristic points even better now. Particularly, the ordinary array on the Promise controller is considerably faster at reading but slows down under mixed load. The HighPoint delivers higher performance than its opponent with both degraded arrays.

Disk Response Time

IOMeter is sending a stream of requests to read and write 512-byte data blocks with a request queue depth of 1 for 10 minutes. The disk subsystem processes over 60 thousand requests, so the resulting response time doesn’t depend on the amount of cache memory.
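As a quick sanity check of those numbers (a back-of-the-envelope calculation of ours, not part of the methodology): at a queue depth of 1 the average response time is simply the test duration divided by the number of completed requests.

```python
# With a queue depth of 1, average response time = test duration / requests.
test_duration_s = 10 * 60        # 10-minute run
completed_requests = 60_000      # "over 60 thousand requests"

avg_response_ms = test_duration_s / completed_requests * 1000
print(f"average response time <= {avg_response_ms:.1f} ms")   # about 10 ms or less
```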

The HighPoint controller is obviously better when reading from the mirror arrays. However, it also has a worse result with the single drive, with a rather big gap.

It is simpler at writing: the HighPoint always has a lower response time, processing disk requests in a more efficient way.

The Promise controller is considerably better with RAID0 at reading but its writing speed is lower with every type of RAID0.

The controllers’ behavior with RAID5 is similar to that with RAID0: the Promise is far better at reading and worse at writing. The HighPoint controller seems to be faster at processing write requests. Take note of the performance of the degraded arrays: both have the same response time at writing as the ordinary arrays (this is normal because the situation doesn’t change from the controller’s point of view) but have a considerably higher response time at reading.

The Promise is also much better than its opponent at reading when it comes to the more complex RAID6 array type. The HighPoint behaves somewhat oddly: the response time of the degraded minus-one array is worse than the response time of the minus-two array. The HighPoint is still better at writing, especially with the degraded arrays. Interestingly, the response time increases not only at reading but also at writing when this array type degrades. You can see this with each controller.

Random Read & Write Patterns

Now we’ll see the dependence of the disk subsystems’ performance in random read and write modes on the data chunk size.

We will discuss the results of the disk subsystems when processing random-address data in two ways, based on our updated methodology. For small data chunks we will draw graphs showing the dependence of the number of operations per second on the data chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach helps us evaluate the disk subsystem’s performance in two typical scenarios: working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed, whereas working with large data blocks is close to working with small files, and the traditional measurement of speed in megabytes per second becomes more relevant.
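The two ways of presenting the results are directly related: the data-transfer rate is just the number of operations per second multiplied by the block size. A tiny illustration with made-up numbers:

```python
# MB/s = IOPS * block size; which metric is more telling depends on the block size.
def mbps(iops, block_bytes):
    return iops * block_bytes / 1_000_000

print(mbps(10_000, 8 * 1024))        # 10,000 IOPS of 8KB blocks  -> ~82 MB/s
print(mbps(100, 2 * 1024 * 1024))    # 100 IOPS of 2MB blocks     -> ~210 MB/s
```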

We will start out with reading.

IOMeter: Random Read (operations per second)

The HighPoint controller is somewhat faster when reading small data chunks from mirror arrays. But curiously enough, the Promise proves to be faster when reading very small data chunks from a single drive.

Random reading produces odd results with RAID0: the two-disk arrays prove to be the fastest on the Promise controller. On the HighPoint, it is the three-disk array that shows the highest performance. Well, the six arrays all have very similar results and the Promise controller enjoys but a small lead. The performance of each controller must be limited by its ability to process so many small-size data chunks.

The Promise is so much better with RAID5 that its degraded array is faster than the ordinary arrays on the HighPoint controller. Take note of the performance hit suffered by the degraded arrays!

The RAID6 results are similar to what we have seen with RAID5: the Promise controller is better again.

IOMeter: Random Read, MBps

Well, the HighPoint has problems reading large data blocks, at least with mirror arrays. As a consequence, this controller’s RAID10 is far slower than the single drive.

The Promise controller shows good, even though not ideal, scalability with RAID0, whereas the HighPoint has huge problems reading large data blocks – its performance is awful with every array of the three.

The Promise behaves oddly and inexplicably with RAID5: the degraded array turns out to be faster than the three-disk array. The HighPoint controller still shows very low performance.

The Promise produces wonders with RAID6, too. The degraded arrays are faster than the normal array at very large data blocks! And there is a very small difference between the minus-one and minus-two arrays. The HighPoint is overall similar but easier to explain: it looks like the degradation of the arrays on this controller brings their performance up to a more or less acceptable level (but they cannot overtake the competing controller’s arrays, of course).

Now let’s see what we have at writing.

IOMeter: Random Write (operations per second)

The HighPoint is faster at writing very small data chunks to mirror arrays but the Promise copes better with larger data chunks.

The same goes for RAID0. It’s good that both controllers show good scalability.

The three-disk RAID5 arrays are faster on each controller than the four-disk RAID5 arrays. The degraded arrays are almost as fast as the ordinary arrays: our practice agrees with the theory here. The HighPoint is ahead of its opponent everywhere.

The degraded arrays are considerably slower than the ordinary ones when it comes to RAID6. The minus-two arrays are even slower than the minus-one arrays. The controller seems to adjust its algorithms and works under a higher load when this type of array degrades. And the HighPoint controller is again far faster than the Promise.

IOMeter: Random Write, MBps

The Promise works better with very large data chunks when writing to mirror arrays. It gets closer and then overtakes its opponent as the data chunk grows larger.

The HighPoint controller has some problems with RAID0: the two-disk array is ahead of the three-disk one on some data chunks and shows record-breaking performance overall. The Promise is overall better than its opponent, though. And it behaves far more predictably.

The degraded RAID5 array is faster than the three-disk RAID5 and just a little slower than the full four-disk one on each controller. The Promise is good with every size of the data block whereas the HighPoint shows firmware defects on very large data blocks.

The same goes for RAID6: the HighPoint has problems with very large data chunks, and the degraded arrays are almost as fast as the normal ones.

Sequential Read & Write Patterns

IOMeter is sending a stream of read and write requests with a request queue depth of 4. The size of the requested data block is changed each minute, so that we could see the dependence of the array’s sequential read/write speed on the size of the data block. This test is indicative of the highest speed the array can achieve.

Both controllers deliver odd performance with the mirror arrays. The HighPoint’s performance depends greatly on the size of the data chunk. Ideally, it would be as fast as its opponent, but in practice it only manages this with data chunks of certain sizes. We are sure of one thing only: the HighPoint is obviously better with very small data chunks. It seems to be able to combine several small requests into one large one. The Promise controller has a different problem: it is surprisingly slow with one drive. On the other hand, you will hardly buy this controller to attach only one HDD to it.

It is simpler with RAID0: both controllers show good scalability on large data blocks and deliver similar speeds. Take note that the HighPoint is again far faster when processing very small data chunks.

The speed of the HighPoint controller’s RAID5 depends greatly on the data block size again, so it is only faster than its opponent on very small data chunks (this trait of its firmware will probably persist throughout the entire test session). Elsewhere it is far slower than the Promise controller. Interestingly, the degraded array is almost as fast as the normal four-disk one on the Promise controller, whereas the HighPoint’s degraded array is far slower than its normal ones.

Almost every RAID6 array shows a huge dependence of read speed on the data chunk size. The minus-two RAID6 on the Promise controller is the only exception.

Both controllers behave well when writing to mirror arrays. The HighPoint is good with small data chunks while the Promise shows higher top speeds.

The HighPoint draws a zigzagging graph with the four-disk RAID0. Both controllers show good scalability, but the top speeds of the Promise’s arrays are always higher than those of the opponent controller’s.

The Promise shows higher top speeds with RAID5, too. The degraded arrays are almost as fast as the normal arrays.

The Promise controller seems to be fond of doing something odd with RAID6. This time around, both degraded arrays are faster than the normal one.

Multithreaded Read & Write Patterns

The multithreaded tests simulate a situation when there are one to four clients accessing the virtual disk at the same time – the clients’ address zones do not overlap. We’ll discuss diagrams for a request queue of 1 as the most illustrative ones. When the queue is 2 or more requests long, the speed doesn’t depend much on the number of applications.

We guess this is the clearest possible illustration of multithreaded reading algorithms. The Promise controller simply loses a lot of speed when the number of read threads increases, whereas the HighPoint is capable of directing the two threads to the two different disks of a mirror. As a result, the latter’s read speed doubles at two threads relative to its speed at one thread. The HighPoint’s speed lowers when there are even more threads to be processed, but remains comparable to its speed at one thread.
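A simplified sketch of that trick (our guess at the idea, not the controller’s actual firmware): with as many read threads as there are copies in the mirror, each thread can be served by its own disk, so the streams do not compete for one set of heads.

```python
# Each read thread is pinned to its own copy of the mirrored data,
# so two threads on a two-disk mirror do not compete for the same heads.
def pick_mirror_disk(thread_id, mirror_disks):
    """Assign a read thread to one copy of the mirror (round-robin)."""
    return mirror_disks[thread_id % len(mirror_disks)]

mirror = ["disk0", "disk1"]
for thread in range(2):
    print(f"thread {thread} reads from {pick_mirror_disk(thread, mirror)}")
# thread 0 reads from disk0
# thread 1 reads from disk1
```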

The HighPoint is brilliant when reading in one thread from a RAID0 array while the Promise is slow. The two controllers become equals as the number of read threads is increased. Take note that the arrays made out of many disks have little advantage. The controllers’ performance scalability is low in this test.

There are odd results with RAID5. The degraded array on the Promise controller is the only one to show a high speed at one thread; the others aren’t fast at all. The degraded array on the HighPoint controller joins the leader when the number of threads is increased.

The degraded RAID6 arrays are incredibly fast again on the Promise controller.

• IOMeter: Multithreaded Write, RAID0
• IOMeter: Multithreaded Write, RAID1+RAID10
• IOMeter: Multithreaded Write, RAID5
• IOMeter: Multithreaded Write, RAID6

The HighPoint is better at multithreaded writing to mirror arrays. It refuses to slow down at all. The Promise is good at two threads but has to drop the speed when there are more threads to be processed.

We’ve got the same thing with RAID0: the Promise is very good at two threads but slows down at three and more threads. The HighPoint is indifferent to the increase in the number of write threads.

Two threads seem to be the maximum for the HighPoint controller when writing to RAID5 arrays. Take note that the degraded arrays are almost as fast as the normal ones.

It’s the same when the controllers are writing to RAID6 arrays: the Promise has a limit of two threads. The HighPoint is indifferent to the number of threads. And the degraded arrays are as fast as the normal ones.

Web-Server, File-Server, Workstation Patterns

The controllers are tested under loads typical of servers and workstations.

The names of the patterns are self-explanatory. The request queue is limited to 32 requests in the Workstation pattern. Of course, Web-Server and File-Server are nothing but generic names: the former pattern emulates the load of any server that works with read requests only, whereas the latter emulates a server that has to perform a certain percentage of writes.

The Promise has no chance in the File-Server pattern when working with mirror arrays: it is slower irrespective of the array type. Interestingly, this controller delivers the same performance with the RAID1 array and the single drive, while the HighPoint’s RAID1 is far faster than the single drive. The HighPoint seems to be able to fetch data from whichever disk can deliver it faster.

The HighPoint is ahead with RAID0 at this load.

The HighPoint is still in the lead even with RAID5. It loses more speed with the degraded array than its opponent does.

The HighPoint controller wins with RAID6, too. It loses more speed than its opponent when the array degrades but retains its leadership due to the higher performance of the original array.

The HighPoint controller increases its lead when there are no write requests.

We’ve got the same leader with RAID0, too.

The HighPoint has worse scalability than its opponent when it comes to RAID5 but it is still in the lead anyway. Take note of the big performance hit suffered by the degraded arrays.

We’ve got the same leader with the normal RAID6 arrays, but the degraded arrays are faster on the Promise controller at low loads. The performance of the degraded arrays is much lower than that of the normal arrays. The performance hit is bigger than with RAID5. This is logical as the controller has to do many more calculations.

The pattern is different but the standings are the same: the HighPoint controller enjoys a huge advantage.

The leader doesn’t change here, either.

We’ve got the same standings with RAID5, too.

There is one interesting thing here: you can see that the performance of the degraded arrays on the Promise controller does not grow but even lowers when the request queue grows longer. That is, the higher the array’s load, the worse its performance.

The results do not change much when the test zone is limited for the Workstation pattern, so we won’t comment upon them again.

Performance in FC-Test

For this test two 32GB partitions are created on the virtual disk of the RAID array and formatted in NTFS and then in FAT32. A file-set is then created on the disk, read from the disk, copied within the same partition, and then copied into another partition. The time taken to perform these operations is measured and the speed of the array is calculated. The Windows and Programs file-sets consist of a large number of small files whereas the other three patterns (ISO, MP3, and Install) each include a few large files.

We’d like to note that the copying test is indicative of the array’s behavior under complex load: when copying files, the disk subsystem is in fact working with two threads, one for reading and one for writing.

This test produces too much data, so we will only discuss the results of the Install, ISO and Programs patterns in NTFS, which illustrate the most characteristic uses of the arrays. You can use the links below to view the other results:

The Promise is better at creating files on the mirror arrays. This is especially conspicuous with RAID10, which is quite fast on the Promise but very slow on the HighPoint. The Promise has problems with the single drive, though. That’s not a big deal, but not good, either.

It is simple and clear with RAID0: the HighPoint is better at creating large files while the Promise is better at creating small files.

It’s vice versa with RAID5: the HighPoint is better with small files whereas the Promise is better with large files. Take note that the degraded arrays do not lose much speed when working with files.

The RAID6 standings are the same as with RAID5, but with one odd peculiarity. The Promise acts up with the degraded arrays again: they are faster with large files than the normal array. We wonder what was written into that firmware.

It is not so simple when we are reading files from the mirror arrays. The HighPoint fails with RAID10 whereas the Promise fails with RAID1. And there is a small oddity again: the single drive is always faster than the RAID1 array on the HighPoint. Could the controller be waiting until the data has surely been written to both disks?

The HighPoint doesn’t have problems when reading from RAID0 arrays, but on the Promise these arrays all have similar speeds: there is none of the scalability you might hope for.

The Promise is surely better with RAID5, especially with the degraded array. The controller has miraculous firmware indeed.

The same miraculous firmware algorithms work for the Promise when reading from RAID6 arrays. As a matter of fact, this is the consequence of both controllers having problems in ordinary modes, and the problems are serious judging by the modest results. We guess the owner of a Promise should take one drive out right after creating a RAID6 array.

It is at copying that the HighPoint suddenly recalls that RAID10 is supposed to be faster than RAID1. The results are only good with very large files, though. As for the Promise, RAID10 is the only array type that works normally on it.

The HighPoint goes ahead on large files with RAID0, but the Promise is almost as fast on small files.

We’ve got miracles again with RAID5: the degraded arrays are incredibly fast (because the ordinary arrays are fantastically slow, of course). This time the HighPoint shows such miracles together with the Promise.

More miracles with RAID6. And the Promise is more miraculous than the HighPoint.

The standings do not differ when we copy from one partition into another, and there are the same odd results with the RAID5 and RAID6 arrays.

Performance in WinBench 99

We use WinBench 99 to record the data-transfer graph for each array:

We’ll compare the data-transfer rates at the beginning and end of the virtual disks:

The HighPoint is surprisingly worse with RAID1. Otherwise, the results are predictable.

The graphs are exactly as they are expected to be with RAID0.

The HighPoint is surprisingly slow with the degraded RAID5. The other results are normal.

The HighPoint has problems with the degraded RAID6 arrays irrespective of the number of failed disks.

Conclusion

Both controllers we have tested today have a number of conspicuous peculiarities that should be taken into account when you are choosing a controller for a specific load. For example, the Promise SuperTrak EX8350 was better with RAID6 arrays, with real files and at multithreaded reading, but its performance under typical server loads was far from ideal. It is not good at multithreaded writing, either: you shouldn’t make the EX8350 write in more than two threads. On the other hand, it loses less speed when an array becomes degraded, and it supports a backup battery, which may be important for some users.

The HighPoint RocketRAID 3220 coped well with server loads and multithreaded writing but had problems when working with files, with degraded RAID5 and RAID6 arrays, and with very large data blocks. Neither controller was brilliant at sequential operations, so we would recommend considering other models if this type of load is important to you. The developers surely have a lot of work to do on these products, and we hope that other controllers from these brands (for example, from the later-released series with SAS support) are free from the drawbacks we have mentioned.

As for the controllers’ operation with degraded arrays, there is of course a performance hit, sometimes a very heavy one. As we might expect, it is the speed of reading from such “half-dead” arrays that suffers the most. It means that the response speed of any database stored on such an array will drop considerably. However, if you keep this fact in mind and ensure a certain reserve of performance, an array with failed disks can go on working more or less well. You just should not keep the array in its degraded state for long: replace the failed disk as soon as possible, because the next disk may fail much sooner than statistical probability would suggest.

This review may be one of the last RAID controller reviews based on our current methodology. It’s not that we are going to change our benchmarks or abandon such test sessions altogether; the fact is that modern servers are transitioning to SAS drives with high spindle rotation speeds and, accordingly, lower response times and higher performance. Even “ordinary” drives have already stepped beyond the 100MBps milestone in sequential operations, and affordable but very fast VelociRaptor drives from Western Digital have appeared. So we are considering a change of our testbed because our rather old WD Raptor2 drives can no longer load advanced modern controllers well enough.