Promise SuperTrak EX8650 SAS RAID Controller Review

We have upgraded our testbed for RAID controllers: it now includes SAS drives. The first product to be tested on the new platform is a new solution from Promise: an 8-port SAS/SATA RAID controller.

by Aleksey Meyev
11/26/2008 | 04:03 PM

Hard disk drives have become substantially larger and faster in recent years, and their cache memory is now measured in tens of megabytes. The traditional parallel interfaces, ATA and SCSI, have been ousted by the faster and more advanced serial interfaces, SATA and SAS. While SATA entered the market in a slow and steady way, SAS had a different story. It was something of a ghost interface at first: SAS controllers and SAS drives existed, but neither enjoyed any popularity. The standard itself was not to blame. Unlike the mutually incompatible parallel interfaces (the desktop PATA was mechanically incompatible with the server SCSI), SAS controllers supported SATA drives, and the slim SAS cables were far easier to route inside a system case. SAS also offered higher bandwidth, of course. The only real problem was the high price of SAS devices. The market for high-performance storage is conservative (equipment upgrades and mistakes are costly), and there was no pressing need for extra bandwidth when SAS first appeared. As a result, the new interface did not get the attention it deserved.

 

The situation changed as controllers got cheaper, HDDs got faster, and the installed equipment grew outdated. At some point the market reached a critical mass, after which SAS began a rapid conquest. Building uniform systems around a common interface proved profitable: the same SAS RAID controllers could be used in top-performance workstations as well as in fault-tolerant storage systems based on SATA drives, which offered high capacities at low cost. SAS drives have even reached the latter type of system now. For example, Seagate has announced a 1-terabyte SAS drive in its ES.2 series: it has a spindle rotation speed of 7200rpm, which is radically different from the typical SAS device that spins at 10,000 or 15,000rpm.

But let’s get back to our controllers and tests. The four Western Digital Raptor 2 drives with a spindle rotation speed of 10,000rpm that we used in our tests finally proved unable to put a serious load on modern multi-port controllers: their sequential speeds were too low, and they could not generate enough operations per second to load the controllers’ processors. So, we decided to upgrade our testbed to keep up with the times. The new testbed has not changed much from the old one, but we now use eight Fujitsu MBA3073RC hard drives with a spindle rotation speed of 15,000rpm, 16MB of cache memory, a capacity of 73.5GB, and a SAS interface. As you will see shortly, eight such drives deliver very high performance and can put a heavy load on the controller.

Now let’s take a look at the first RAID controller we are going to test on the new testbed. It is an 8-port SAS RAID controller from Promise that belongs to the company’s newest product series.

Closer Look at Promise SuperTrak EX8650

Released in early 2008, the 600 series of Promise’s controllers marked a new level for the company that had not offered SAS-supporting controllers before. The series includes controllers to suit everyone’s taste: all-hardware (with an integrated processor) models with four, eight or 16 internal ports, two models with external ports (one has eight external ports and another, four internal and four external ports), and two simple models with software implementation of RAID0 and RAID1 arrays. You can easily distinguish them by the model name: the first numeral denotes the number of the controller’s ports, the second numeral is the series number, the third numeral reports the interface (every model of the series has 5 in this place, meaning PCI Express) and the fourth numeral stands for the number of external ports. The models with an integrated processor belong to the SuperTrak EX subseries while the software controllers belong to the FastTrak TX subseries.
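If the naming scheme sounds confusing, here is a minimal sketch of how the four digits decode; the function and field names are ours, not Promise’s, and only the digit meanings come from the series description above.

```python
# A minimal sketch of the 600-series naming scheme described above.
# Function and field names are our own illustration, not Promise's.

def decode_promise_model(model: str) -> dict:
    """Decode a four-digit 600-series model number such as '8650'."""
    ports, series, interface, external = (int(d) for d in model)
    return {
        "total_ports": ports,                                          # 1st digit: number of ports
        "series": series * 100,                                        # 2nd digit: product series (6 -> 600)
        "interface": "PCI Express" if interface == 5 else "unknown",   # 3rd digit: interface
        "external_ports": external,                                    # 4th digit: external ports
    }

print(decode_promise_model("8650"))
# {'total_ports': 8, 'series': 600, 'interface': 'PCI Express', 'external_ports': 0}
```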

 

The SuperTrak EX8650 controller we’ve got for our tests is not the senior model, yet its eight ports and 800MHz Intel XScale 81348 processor should satisfy most users. By the way, note how processor frequencies have grown. We used to see frequencies of 300-500MHz, but now even the basic 4-port model of this series is equipped with a 667MHz processor, while the 16-port model and the models with external ports carry a 1200MHz processor. The increased performance of HDDs calls for a faster XOR processor so that the array is not limited by it, and thanks to ongoing progress, high-performance processors have become considerably cheaper. The chip runs hot, so it is covered with a massive heatsink on the controller.

Memory is no problem, either. The controller carries 256 megabytes of DDR2 SDRAM with error correction on board. The models with external ports have twice this amount, i.e. 512 megabytes, as does the 16-port controller. Either way, this amount of memory should be enough, especially if used effectively.

The rest of the device’s parameters are typical enough: a low-profile PCB, a PCI Express interface, and two SFF-8087 connectors for drives (you can attach up to four drives to each connector using special interface cables). As befits a serious controller, this one supports a battery backup unit, but the BBU is not included in the default kit.

Of course, the controller supports all the popular array types that can be built out of eight drives, namely RAID 0, 1, 1E, 5, 6, 10, 50, and 60. The last three are two-level combinations, i.e. arrays built out of arrays: RAID1, RAID5 or RAID6 arrays (the first numeral of the two-level array indicates the type) are striped together.
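To make the difference between these levels more tangible, here is a quick, simplified capacity estimate for eight of the 73.5GB drives used in this review. The formulas are the textbook ones and ignore metadata overhead; the four-disk sub-array size for RAID50/60 is our assumption.

```python
# Rough usable-capacity arithmetic for the array types listed above,
# using eight 73.5GB drives. Simplified: real arrays lose a little
# extra space to metadata.

DRIVE_GB = 73.5

def usable_gb(level: str, disks: int, group: int = 4) -> float:
    """Usable capacity; 'group' is the sub-array size for RAID50/60."""
    if level == "0":
        return disks * DRIVE_GB
    if level in ("1", "1E", "10"):
        return disks * DRIVE_GB / 2                               # mirrored copies
    if level == "5":
        return (disks - 1) * DRIVE_GB                             # one parity disk's worth
    if level == "6":
        return (disks - 2) * DRIVE_GB                             # two parity disks' worth
    if level == "50":
        return (disks - disks // group) * DRIVE_GB                # one parity disk per sub-array
    if level == "60":
        return (disks - 2 * (disks // group)) * DRIVE_GB          # two parity disks per sub-array
    raise ValueError(level)

for lvl in ("0", "10", "5", "6", "50", "60"):
    print(f"RAID{lvl:>2} of 8 drives: {usable_gb(lvl, 8):6.1f} GB usable")
```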

OS support is fine, too: the manufacturer’s website offers drivers and useful software for Windows, FreeBSD and Linux for download.

Testbed and Methods

The following benchmarks were used:

Testbed configuration:

We used the latest BIOS and drivers for the controller and installed it into the mainboard’s PCI Express x8 slot.

The Fujitsu MBA3073RC hard disks were installed into the standard boxes of the SC5200 system case. The controller was tested with four and eight HDDs in the following modes:

Thus we wanted to cover all the practically relevant array types, yet not overcrowd the review with redundant data. The test time and the number of diagrams would be enormous if we tested the same array types built out of every possible number of disks. So we tried to find a compromise and hopefully succeeded.

For comparison’s sake, we publish the results of a single Fujitsu MBA3073RC hard disk as a kind of a reference point.

The controller was set to Performance mode for maximum performance during the tests. This mode enables deferred writing and look-ahead reading both for the controller (in its own buffer memory) and for the disks. Unfortunately, there was a problem we couldn’t avoid: the controller came to us in its basic kit, i.e. without a battery backup unit. Although the controller lets you check the Write Back checkbox in its settings to enable deferred writing, it refuses to actually perform such writing without a battery, honestly reporting this in the log files after each server reboot. As a result, deferred writing is performed only in the cache memory of the HDDs, without the controller’s cache. The consequences of this will be shown below.

It is easy to see that deferred writing is indeed turned off. You can just take a look at the diagram of reading from the drive’s cache recorded in IOMark. We don’t use this program with RAID arrays, but it came in handy this time around (we checked out an eight-disk RAID0):

The controller’s performance with disabled caching may be interesting for people who use advanced UPSes, trust their equipment, and don’t want to buy the battery. Still, we have two gripes: the controller’s kit does not include a battery by default, and the status of deferred writing in the controller’s cache is not shown in an obvious way.

Performance in Intel IOMeter

Database Pattern

In the Database pattern the disk array processes a stream of requests to read and write 8KB random-address data blocks. The ratio of read to write requests changes from 0% to 100% throughout the test while the request queue depth varies from 1 to 256.
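For illustration only, here is a rough sketch of this access pattern (not IOMeter itself): block-aligned random 8KB requests with the write share swept across the test and several queue depths per step. The disk size and the step values are our assumptions.

```python
# Not IOMeter itself -- just a sketch of the access pattern described above:
# 8KB random-address requests, write share swept from 0% to 100%,
# queue depth stepped from 1 to 256.
import random

BLOCK = 8 * 1024                       # 8KB requests
DISK_BYTES = int(73.5 * 10**9)         # roughly one Fujitsu MBA3073RC

def make_request(write_share: int):
    """Return one (op, offset) pair for a given write percentage."""
    op = "write" if random.randrange(100) < write_share else "read"
    offset = random.randrange(0, DISK_BYTES // BLOCK) * BLOCK      # block-aligned random address
    return op, offset

for write_share in range(0, 101, 10):              # 0%, 10% ... 100% writes
    for queue_depth in (1, 2, 4, 16, 64, 256):     # outstanding requests
        batch = [make_request(write_share) for _ in range(queue_depth)]
        # a real benchmark keeps 'queue_depth' requests in flight
        # and counts completions per second (IOps)
```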

We’ll be discussing graphs and diagrams but you can view tabled data using the following links:

We’ll discuss the results for queue depths of 1, 16 and 256.

We are discussing RAID0 and RAID10 separately from the checksum-based RAID5 and RAID6.

It’s all simple here: deferred writing is at work under minimum load, and the array’s write performance depends on the number of disks in a stripe. Interestingly, the eight-disk RAID10 is somewhat slower than the four-disk RAID0: a mirror of two HDDs performs somewhat worse than a single HDD.

It’s good that the RAID5 and RAID6 arrays go neck and neck in this test. The graphs of same-size arrays almost merge into each other, indicating that the controller is indifferent to the additional load (calculation of the second checksum).

The shape of the graphs does not impress, however. The lack of deferred writing into the controller’s cache (due to the lack of a BBU) kills its performance when writing to checksum-based arrays. Arrays of this type are usually equal to the corresponding single drive at the shortest queue, but here they suffer a terrible performance hit at high percentages of writes.

The results get rather odd when we increase the queue depth. Everything is fine when the percentage of reads is high: the RAID10 arrays read from both disks in a mirror and are, as a result, no different from RAID0. But what about performance scalability? The eight-disk arrays are not twice as fast as the four-disk ones, as theory suggests they should be. Their advantage is especially small at high percentages of reads.

Writing is not quite good, either. Deferred writing is less efficient on the four-disk arrays than on their eight-disk counterparts. Since deferred writing can only be done in the HDDs’ own caches, perhaps the arrays with more disks are better simply because they have a larger total amount of cache memory?

Scalability is far from perfect with the checksum-based arrays, too. The read results make it clear: the performance gained by going from four disks to eight is smaller than the performance of a single disk! This is a real problem because these arrays are supposed to speed up almost proportionally to the number of disks in them.

The arrays are doing better at writing thanks to the request queue. They do not set any performance records but do not look far worse than the single drive.

Interestingly, while the four-disk RAID5 and RAID6 are almost equal in performance, the eight-disk RAID6 is somewhat worse than the eight-disk RAID5. We wonder whether this is a small defect in the controller’s firmware or the controller simply finds it difficult to calculate two checksums for an eight-disk array.

When the request queue is very long, the arrays deliver higher performance and show better scalability. However, the four-disk array is not four times as fast as the single drive. The same is true for the eight-disk array. Note that deferred writing is not efficient here. This is due to the caching policy of Fujitsu’s SAS drives which do not take too much data into the cache at long queues.

Anyway, drawbacks notwithstanding, you can see the new level of performance now. We could not reach 1000 operations per second with four Raptor 2 drives, but the four-disk array made out of 15,000rpm Fujitsu HDDs clears this barrier easily. The eight-disk arrays deliver a few thousand operations per second.

We see problems with scalability again, but the arrays are finally not slower than the single drive at writing.

Judging by the gap between the eight-disk RAID6 and RAID5, the controller’s processor is not powerful enough to compute two checksums for this number of disks.

Disk Response Time

IOMeter is sending a stream of requests to read and write 512-byte data blocks with a request queue depth of 1 for 10 minutes. The disk subsystem processes over 60 thousand requests, so the resulting response time doesn’t depend on the amount of cache memory.
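As a quick sanity check of what these numbers mean (our own arithmetic, not part of the methodology): at a queue depth of 1 the average response time is simply the test duration divided by the number of completed requests.

```python
# Quick arithmetic behind the response-time test (our illustration):
# 10 minutes, roughly 60 thousand requests at queue depth 1.

test_duration_s = 10 * 60        # 10 minutes
requests_done   = 60_000         # "over 60 thousand requests"

avg_response_ms = test_duration_s / requests_done * 1000
iops            = requests_done / test_duration_s

print(f"average response time ~ {avg_response_ms:.1f} ms, ~{iops:.0f} IOps")
# average response time ~ 10.0 ms, ~100 IOps
```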

The four-disk arrays are almost equal to the single drive in terms of read response time, but the eight-disk arrays are considerably worse. The controller either has to spend more time to “recall” what address is stored on what disk or just lacks performance to process so many requests. Anyway, the small lag is obvious.

Writing is logical: the larger the total cache of the array, the lower the response time, and the dependence is almost directly proportional. Of course, this doesn’t work for the checksum-based arrays because they can’t just put the data into the disks’ caches. These arrays suffered the most from the controller’s disabled cache and have a write response time of 20 milliseconds and more. Typically, the write response of such arrays is comparable to their read response, not four to five times worse.

Take note that the four-disk RAID5 and RAID6 are similar in this test whereas the eight-disk RAID6 is considerably slower. The low performance of the controller’s processor shows up again.

Random Read & Write Patterns

Now we’ll see the dependence of the disk subsystems’ performance in random read and write modes on the data chunk size.

Based on our updated methodology, we will discuss the disk subsystems’ performance with random-address data in two ways. For small data chunks we will draw graphs showing the dependence of the number of operations per second on the data chunk size. For large chunks we will compare performance in terms of data-transfer rate in megabytes per second. This approach helps us evaluate the disk subsystem in two typical scenarios: working with small data chunks is typical of databases, where the number of operations per second matters more than sheer speed, whereas working with large data blocks is close to working with small files, where the traditional measurement of speed in megabytes per second is more relevant.
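The two views are just different units for the same measurement; a one-line conversion shows the relation (our illustration):

```python
# Operations per second and megabytes per second are two views of the same
# measurement: MB/s is simply IOps multiplied by the block size.

def iops_to_mbps(iops: float, block_kb: float) -> float:
    return iops * block_kb / 1024.0

print(iops_to_mbps(1000, 8))     # 1000 IOps at 8KB blocks  ~ 7.8 MB/s
print(iops_to_mbps(400, 2048))   # 400 IOps at 2MB blocks   = 800.0 MB/s
```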

We will start out with reading.

The short depth of the queue lowers the speed of reading in small blocks: the four-disk arrays are somewhat worse than the single drive and the eight-disk arrays are much slower. Funnily enough, the RAID10 arrays are a little faster than the RAID0.

Four-disk arrays are better than eight-disk ones when it comes to the checksum-based array types, too. It is good that the RAID5 and RAID6 arrays with the same amount of disks deliver similar performance, though.

The sequential speed is important for processing large data blocks, and the multi-disk arrays go ahead. The RAID0 is now better than the RAID10.

The same goes for this group of arrays: the RAID6 arrays are slower because they have to process two checksums rather than one, so less useful data is read per unit of time than with RAID5.

And what about writing?

Writing in small data blocks depends on buffer memory, and the results are proportional to the number of disks in the array. The RAID10 arrays are about two times slower than the RAID0 ones because the caches of a mirror are not combined: the same data must be written synchronously to both disks of the pair.
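A back-of-the-envelope way to see why the mirrors lose roughly half of the drives’ total buffer space (our simplification, based on the 16MB caches of the test drives):

```python
# Each 16MB drive buffer in a mirror pair ends up holding a copy of the same
# data, so only half of the total buffer space caches unique writes.
drives, cache_mb = 8, 16

raid0_effective  = drives * cache_mb          # every buffer holds unique data
raid10_effective = drives * cache_mb // 2     # mirror pairs duplicate data

print(raid0_effective, raid10_effective)      # 128 vs 64 megabytes
```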

Performance depends on checksum computations in this group of arrays. Every array is slower than the single drive, and the four-disk arrays are faster than the eight-disk ones.

We have problems with scalability on large data blocks where the sequential write speed is important, but the most surprising of all is the four-disk RAID10. It is suddenly slower with large data chunks than the single drive. The eight-disk RAID10 is free from this problem, so it must be due to the specific combination of load and the amount of disks in the array.

The checksum-based arrays catch up thanks to the opportunity to write full stripes, which lowers the controller’s overhead.

The eight-disk RAID5 and RAID6 go neck and neck. The four-disk RAID5 is expectedly good but the four-disk RAID6 is surprisingly slow with large data chunks. Yes, the controller obviously has problems with some array types built out of four disks.

Sequential Read & Write Patterns

IOMeter is sending a stream of read and write requests with a request queue depth of 4. The size of the requested data block is changed each minute, so that we could see the dependence of the array’s sequential read/write speed on the size of the data block. This test is indicative of the highest speed the array can achieve.

Well, the arrays boast nearly ideal scalability in terms of sequential reading. The eight-disk RAID0 has a speed of over 900MBps as a result! The graphs of the eight-disk RAID10 and four-disk RAID0 nearly coincide, which is another indication that it’s all right with linear operations.

Take note that the multi-disk arrays achieve their top speed on very large data blocks only. For example, the eight-disk RAID0 is almost no different from the four-disk one with standard 64KB data blocks – it just doesn’t work at its full speed then. Of course, you can’t expect your array to deliver maximum speed with small data blocks, but you should take this into account.
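A simple model explains why: a single request only spans as many drives as it has stripe units. The 64KB stripe unit below is an assumption for illustration; the controller’s actual default may differ.

```python
# Why small requests do not light up all eight spindles: a request touches
# only ceil(request size / stripe unit) drives. The 64KB stripe unit is an
# assumed value for illustration.
from math import ceil

def drives_touched(request_kb: int, stripe_unit_kb: int = 64, drives: int = 8) -> int:
    return min(drives, ceil(request_kb / stripe_unit_kb))

for size in (64, 128, 256, 512, 1024):
    print(f"{size:>5}KB request -> {drives_touched(size)} of 8 drives busy")
```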

By the way, the controller is somewhat disappointing with very small data chunks, being slower than the single drive on the LSI controller.

Everything we’ve said in the previous paragraph is true for this group of arrays as well. That’s good because Promise’s controller was not so ideal in our last test session.

It is somewhat worse at writing. We have good scalability, yet the eight-disk RAID0 does not reach its top speed in this test: it would do so on even larger data chunks. It is somewhat odd that the eight-disk RAID10 is considerably slower than the four-disk RAID0.

The results of the four-disk RAID10 are poor. We saw this above, but now we can be sure it is due to low sequential speeds. We wonder what is in the firmware that produces performance hits on specific array configurations only.

And here is the explanation of the second anomaly we saw above: this time it is the RAID6 arrays that have problems. By the way, the eight-disk arrays seem to need even larger data chunks to reach their top speed.

Multithreaded Read & Write Patterns

The multithreaded tests simulate a situation when there are one to four clients accessing the virtual disk at the same time – the clients’ address zones do not overlap. We’ll discuss diagrams for a request queue of 1 as the most illustrative ones. When the queue is 2 or more requests long, the speed doesn’t depend much on the number of applications.

There is an odd performance ceiling of about 210MBps that most of the arrays hit at one thread. We gave this phenomenon a lot of thought and then looked at the table with the results of the multithreaded test (you can do this by clicking the links above). It turned out the controller simply wasn’t getting a long enough request queue. When the queue gets longer, the speeds are higher. As a result, the fastest arrays can only show their best under high loads. For example, the four-disk RAID0 reaches its top speed at a queue depth of four requests while the eight-disk RAID0 doesn’t reach it even at a queue depth of eight requests.

But let’s get back to the multithreaded tests and see what we have when we add a second read thread. After all, we are interested in the effect this will have on speed, not in the specific numbers.

So, the results with two threads are good enough overall except that the four-disk RAID6 suddenly slows down. The other array types slow down by less than 50%. The RAID0 arrays and the eight-disk arrays of all types are especially good at processing two threads.

The arrays that were good at processing two threads speed up a little here while the others slow down. There are no dramatic changes, though.

There is again an odd barrier when the arrays are writing in one thread although this speed limit is set lower now. This is especially bad for the checksum-based arrays. Their speeds are ridiculously low, about 3MBps. The RAID10 arrays are slower than the single drive, too.

The checksum-based arrays improve somewhat with the addition of a second write thread. Interestingly, the eight-disk RAID0 improves its performance notably, too. The other arrays lose some speed, but not too much. The single drive slows down more than them.

And when we add even more threads, every array accelerates, save for the four-disk RAID10 (this array has problems again). The multiple threads must be similar to increasing the queue depth.

As for the problem arrays, they remain slow (slower than the single drive) at any combination of threads and queue depth.

Web-Server, File-Server, Workstation Patterns

The controllers are tested under loads typical of servers and workstations.

The names of the patterns are self-explanatory. The request queue is limited to 32 requests in the Workstation pattern. Of course, Web-Server and File-Server are nothing but generic names: the former pattern emulates the load of any server working with read requests only whereas the latter emulates a server that has to perform a certain percentage of writes.

It’s all well in File-Server: good scalability and predictable results (the RAID10 is slower than the same-size RAID0 due to the write requests present in this pattern).

Interestingly, the performance of the four-disk arrays depends less on the queue depth.

This group of arrays is not as good as the previous one. It is easy to explain why the eight-disk RAID6 is slower than the RAID5 if you recall the problems at writing due to the lack of the processor’s performance. As for reading, the array just has fewer free disks because it has to store two checksums.

The RAID0 and RAID10 deservedly take first places.

When there are no write requests, the controller’s ability to read from both disks of a mirror results in the RAID0 and RAID10 arrays having almost identical performance.

Once again we can note the dependence of performance on the queue depth with the four-disk arrays: they do not accelerate any further starting from a certain depth of the queue.

The RAID6 arrays are almost as fast as the RAID5 due to the lack of write requests.

The four-disk arrays again cease to accelerate from a certain queue depth.

The overall ratings look nice and pretty, showing that read performance depends on the number of disks rather than on the type of the array.

The RAID10 is slowed down by write requests just as it should be. But it is unclear why the arrays have such poor scalability.

This amount of write requests coupled with the controller’s inability to cache them proves to be a hard trial for the checksum-based arrays. They are even slower than the single drive at short queue depths.

Take note that the eight-disk RAID6 is considerably slower than the RAID5 whereas the four-disk arrays are almost equal to each other.

Our formula gives heavier weights to the results obtained at short queue depths, so the RAID5 and RAID6 arrays prove to be slower than the single drive. Take note that the other arrays do not enjoy an overwhelming advantage here.

The general picture is the same when the test zone is limited to a 32GB partition but the speeds are higher now.

As a result, the “fast” arrays, especially the eight-disk RAID0, get further away from the single drive.

Performance in FC-Test

For this test two 32GB partitions are created on the virtual disk of the RAID array and formatted in NTFS and then in FAT32. A file-set is then created on the disk, read from the disk, copied within the same partition, and copied into another partition. The time taken to perform each operation is measured and the speed of the array is calculated. The Windows and Programs file-sets consist of a large number of small files whereas the other three patterns (ISO, MP3, and Install) include a few large files each.
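For reference, the speeds below are derived the straightforward way: total file-set size divided by elapsed time. The numbers in this sketch are made up purely for illustration, not measured results.

```python
# FC-Test-style speed: pattern size over elapsed time. Example values are
# made up for illustration only.

def fc_speed_mbps(fileset_mb: float, elapsed_s: float) -> float:
    return fileset_mb / elapsed_s

print(fc_speed_mbps(1600, 4.0))   # e.g. a 1600MB file-set processed in 4 seconds -> 400.0 MB/s
```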

We’d like to note that the copying test is indicative of the drive’s behavior under complex load. In fact, the HDD is working with two threads (one for reading and one for writing) when copying files.

This test produces too much data, so we will only discuss the results of the Install, ISO and Programs patterns in NTFS which illustrate the most characteristic use of the arrays. You can use the links below to view the other results:

SAS drives have always been slow in the write test on the LSI controller and we’ve got used to that, but the results of the Promise controller without a BBU are downright disappointing. The RAID5 and RAID6 arrays are very slow. There is one interesting thing about their results, though: they show their maximum speed not in the ISO pattern, which has the largest files, but in the Install pattern. It means the speed of these arrays depends on the file size, and the dependence is not directly proportional.

Note also that the eight-disk RAID10 is always behind the four-disk RAID0 just as in the sequential read test. The difference is small, yet noticeable.

As in the multithreaded test, the speeds are low at a small queue depth. As a result, most of the arrays deliver the same speed with large files, the four-disk RAID5 being the only exception.

The speeds decline when the file size is reduced, especially with the eight-disk arrays that are slower than the single drive. The four-disk RAID10 is surprisingly the fastest array of all, which is very odd.

Copying within the same partition is almost predictable but there are still problems with writing to the RAID5 and RAID6 arrays due to the disabled cache of the controller. The four-disk RAID10 is rather too slow and loses to the single drive – it has problems with writing, too.

It’s the same when copying from one partition to another but the speeds are lower and the eight-disk RAID10 is too bad with large files. Well, the results of the RAID0 arrays are not really good as they provide but a small advantage over the single drive. And that’s all because the controller works without a BBU.

Performance in WinBench 99

We use WinBench 99 to record data-transfer graphs:

Data-transfer graphs of RAID arrays on the Promise SuperTrak EX8650 controller:

We’ll compare the data-transfer rates at the beginning and end of the virtual disks:

The read speeds agree with the theory except that the eight-disk arrays are not much different from the four-disk RAID0.

Conclusion

Summing everything up, the Promise SuperTrak EX8650 is a good controller, but you shouldn’t use it without a battery backup unit unless you don’t care a bit about the speed of writing to your arrays. To be specific, this controller delivers good performance under server loads, can read fast from mirror arrays, and has a good implementation of RAID5 and RAID6.

Unfortunately, it is not free from drawbacks. For example, we were disappointed by the poor scalability of arrays built out of many disks. The controller prefers to have a large number of requests in the queue and, as a result, delivers low speeds with real files. It also lowers its write speed inexplicably with some combinations of array type and number of disks. So, the people at Promise still have some work to do on this model.

As for the missing BBU, we are going to retest the SuperTrak EX8650 with the battery as soon as we get one.