Adaptec 2410SA Serial ATA RAID Controller Review

We have already posted three reviews of four-channel SerialATA RAID controllers on our site. Now we’ve got hold of one more device like that, the Adaptec 2410SA. The manufacturing company claims it to be an ideal solution for workstations and entry-level servers where RAID arrays and support of four hard disk drives are important. Let’s check out how close to the ideal this solution is!

by Alexander Yuriev, Nikita Nikolaichev
03/11/2004 | 10:34 AM

Closer Look

The Adaptec SerialATA RAID 2410SA controller is based on the Intel 80302 I/O processor, which works at 100MHz and features a hardware implementation of the XOR operation for higher RAID5 performance. The device supports RAID levels 0, 1, 5, 10 and JBOD. Two Silicon Image Sil3112A chips are responsible for communication with the SATA drives: each chip provides two channels with a maximum data-transfer rate of 1.5Gbit/s. The controller also carries cache memory onboard: 64MB of PC100 ECC SDRAM. The 64-bit 66MHz PCI interface supports both 3.3V and 5V slots.

The controller comes in a stylish Adaptec package:

Besides the controller, the package includes a CD with drivers, a bracket for installing the card into low-profile system cases (in addition to the default bracket of the controller), a technical description and four 1-meter SATA cables:

The controller itself is implemented as a low-profile PCI card, which is quite logical considering its intended purpose: to serve in simple servers. The four SATA connectors are placed in a row in a notch in the upper edge of the PCB so that the SATA plugs don’t protrude beyond the dimensions of the card. Below them sit the two Silicon Image Sil3112A chips and a flash BIOS chip, while the Intel 80302 processor and two memory chips reside in the right part of the PCB.

The basic specifications look as follows:

Processor: Intel 80302 I/O processor

Memory: 64MB of integrated unbuffered PC100 ECC SDRAM (3.3V)

PCI: 64-bit 66MHz PCI 2.2 bus interface, 3.3V and 5V

SerialATA: two Silicon Image Sil3112A SATA controllers providing four ports with up to 1.5Gbit/s data-transfer rate

RAID: levels 0, 1, 5, 10 and JBOD

Form-factor: low-profile PCI

Testbed and Methods

The testbed was configured as follows:

We tested the controller in two benchmarks:

We created one partition for the total capacity of the drive in WinBench 99. We carried out each of the WinBench tests seven times and took the best result for further analysis.
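
There is no magic behind the “best of seven” approach; as a minimal sketch (the scores below are hypothetical placeholders, not our actual WinBench results):

    # Minimal sketch of the "run seven times, keep the best result" methodology.
    def best_of_runs(results):
        """Return the best (highest) score out of several benchmark runs."""
        return max(results)

    business_disk_winmark_runs = [7150, 7230, 7185, 7210, 7240, 7195, 7225]  # hypothetical
    print(best_of_runs(business_disk_winmark_runs))   # -> 7240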

For comparing the arrays’ performance in Intel IOMeter, we used the File Server and Web Server patterns.

These patterns serve for measuring the performance of the disk subsystem under a workload typical for file and web servers.

We also used the Workstation pattern created by Sergey Romanov (aka GReY). It is based on statistical data about the disk subsystem workload given in the SR Testbed 3 description; the statistics were gathered for the NTFS5 file system in three operational modes: Office, Hi-End and Boot-up.

This pattern shows how well the controller performs in a typical Windows environment.

Lastly, we checked out the controller’s ability to process sequential read/write requests of variable size and its performance in the Database pattern, which loads the disk subsystem with SQL-like requests.

Our controller had the firmware version 9964 and we used the driver version 4.0.0.5694.

The controller was installed into a PCI-X/133MHz slot (although the controller itself only supports PCI64/66MHz).

WD360GD (Raptor) hard disk drives were installed into the rails of the SC5200 system case and fastened at the bottom with four screws.

By default, we tested the controller with lazy writing into its cache enabled (WriteBack). We also enabled lazy writing for the hard disk drives.

We tested the influence of the controller caching mode – WriteBack (WB) or WriteThrough (WT) – on four-disk RAID0, RAID5 and RAID10 arrays. It turned out however that the algorithm of working with the cache buffer only affects the speed of the controller in one specific operational mode.

Moreover, the lazy write setup in the controller BIOS is arranged in such a way that when you change from WB to WT, you are by default offered to disable lazy write on all drives. In other words, it is assumed that a user who prefers WT wants an array with maximum fault tolerance, and disabling lazy write on the HDDs in the array contributes to this. But when you switch back from WT to WB, the lazy write status of the disks doesn’t change! Really, why should the controller think for you?

Having carried out a series of experiments we discovered that the controller caching mode influences the performance of the array much less than lazy write on the disks does. Just look at the diagrams:

These are the results of a RAID10 array in the DataBase pattern. Here, WriteBack and WriteThrough denote the controller caching mode, while CacheOn/CacheOff refer to the lazy write setting of the drives in the array.

As you see, the status of the controller caching algorithm only affects the result when there are only read operations in the queue, while the influence of the disk cache on the performance is substantial in all modes.

We suppose the Adaptec 2410SA is not the only controller that changes the disk cache mode (lazy write) when you toggle the controller’s own lazy write. In particular, the difference in speed of the 3ware 8500-8 Escalade controller between its WriteBack and WriteThrough modes can hardly be explained by the influence of the controller cache alone, since that cache is rather small, only 2.25MB. In all probability, the disks’ caching mode changes as you switch between these two controller modes. For this reason, and to avoid terminological confusion, we always kept lazy write enabled for the controller’s cache buffer, so from here on “WriteBack” and “WriteThrough” denote enabled and disabled lazy write on the disks in the array.

Performance in Intel IOMeter DataBase Pattern

So, we send a mixed stream of requests for reading and writing 8KB data blocks with random addresses. By changing the ratio of reads to writes we can estimate how good the controller driver is at sorting such requests out.
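
To make the workload concrete, here is a minimal sketch of how such a mixed stream can be generated; this is our own illustration rather than actual IOMeter code, and the array capacity constant is an assumption:

    import random

    BLOCK_SIZE = 8 * 1024            # 8KB requests, as in the DataBase pattern
    ARRAY_SIZE = 36 * 10**9          # assumed capacity, roughly one WD360GD Raptor

    def database_pattern(num_requests, write_share, seed=0):
        """Yield (operation, offset, size) tuples: fixed 8KB size, random addresses."""
        rng = random.Random(seed)
        blocks = ARRAY_SIZE // BLOCK_SIZE
        for _ in range(num_requests):
            op = "write" if rng.random() < write_share else "read"
            offset = rng.randrange(blocks) * BLOCK_SIZE   # block-aligned random address
            yield op, offset, BLOCK_SIZE

    # e.g. the "10% writes" point of the pattern:
    for request in database_pattern(5, write_share=0.1):
        print(request)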

The results of the controller in the WriteBack mode are listed below:

Let’s view the numbers as diagrams. The diagrams show the dependence of the data-transfer rate on the percentage of writes among the requests. We take the measurements for queue depths of 1, 16 and 256 requests. For better readability, we split the arrays into two groups:

Under the linear workload, in the random read mode, all arrays show similar performance. It is natural for the RAID0, RAID5 and JBOD arrays to behave this way in this mode, but it also means that there is no request alternation between the disks of a mirror couple in the RAID1 and RAID10 arrays.

As the share of write requests grows, the disks can perform lazy writing more efficiently, and the single disk as well as the arrays gain speed. The performance of the RAID0 array grows in proportion to the number of drives in the array, but only thanks to the lazy write algorithms enabled on the disks.

The graph for the RAID1 array nearly coincides with the graph for the single drive, which indicates the absence of a request alternation algorithm. The graph of the second array with mirror couples, RAID10, is less “smooth”. However, as the mirror couples are joined into a RAID0 array here, the speed of the RAID10 under this workload is very close to that of the two-disk RAID0.

The performance of the RAID5 array should get lower as the percentage of writes increases, since write requests slow this type of array down considerably. In our case, however, the array even increases its speed in some operational modes. This is probably because each write operation actually produces two write requests on a RAID5 array (one for the data block and one for the updated parity block), which increases the average drive workload. In such circumstances, the efficiency of the drives’ lazy write increases.

The picture changes as we increase the workload. The mirror RAID10 array never falters as it is going through the test producing a smooth and flat graph. It seems that the high speed in the higher-reads-percentage modes is achieved by alternating requests between disks of the same mirror couple (that’s what we see in the random read mode: the RAID10 is faster than the four-disk RAID0 and nearly doubles the result of the two-disk RAID0).

On the contrary, when there are more writes in the queue, the RAID0 constituent of the RAID10 array clicks into action.

Curiously, the request alternation technology doesn’t work for RAID1 array! The graph above has the same shape as the graph of the single drive. It’s really a mystery why different algorithms are applied to RAID1 and RAID10…

The longer queue puts all drives of the RAID5 arrays into play – they process read requests as fast as the RAID0 arrays. Of course, when there’s a higher share of writes, the RAID5 arrays slow down.

RAID0 arrays should be the fastest everywhere save for the random read mode, and they do show a high level of performance. However, the four-disk RAID0 stumbles at the 10% writes point. You may remember that we saw the same performance degradation with the Intel SRCS14L, Promise FT S150 TX4 and 3ware 8500 controllers.

This repeatability of the slump makes one suspect that it is a “feature” of the lazy write algorithms of the hard disk drives we use.

RAID0 arrays show excellent scalability with the number of drives. The performance slump has vanished from the graph of the four-disk array, but overall the arrays perform much as they did in the previous operational mode. Curiously, the speed of the single drive is very close to that of the two-disk RAID0, while RAID0 arrays with different numbers of drives form accurate “stairs”. This means that the controller handles a single drive differently than RAID0 arrays.

It is interesting that the four-disk RAID0, RAID5 and RAID10 arrays show very close speeds in the random read mode, although we might have expected RAID10 to have an advantage. Apparently, this type of workload gives the mirror array no advantage from request alternation. And this applies only to RAID10, as RAID1 has no alternation of read requests at all.

Now we will see the benchmark results for four-disk arrays with lazy write disabled for the hard disk drives.

To compare the speeds of the RAID arrays in different caching modes, we fill the table with the ratios of the controller’s speed with lazy write enabled to its speed with lazy write disabled. A bigger number indicates higher caching efficiency in the given mode. If the number is smaller than 1 (marked red), lazy write for the drives in the array is harmful; if it is above 1 (marked blue), it brings a performance gain. A value of “1.0” means the status of lazy write doesn’t influence the array’s performance.
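
In other words, each cell of the table is simply the ratio of two measured speeds; a minimal sketch with hypothetical IO/s figures:

    # Each cell = speed with lazy write enabled (WriteBack on the disks) divided by
    # the speed with it disabled (WriteThrough). The input numbers are hypothetical.
    def caching_efficiency(speed_wb, speed_wt):
        return round(speed_wb / speed_wt, 2)

    print(caching_efficiency(speed_wb=350.0, speed_wt=180.0))   # 1.94 -> caching helps
    print(caching_efficiency(speed_wb=170.0, speed_wt=175.0))   # 0.97 -> caching slightly hurts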

It’s clear that disabling write request caching for the drives hurts every array, although to a varying degree. The slight speed change in the random read mode is explained by the lack of write requests there. In other modes, caching affects the performance of each array more when there are a lot of write requests in the queue and, usually, less when the queue is longer.

Let’s now compare the results. We draw graphs for each array in WriteThrough and WriteBack modes for queues of 1, 16 and 256 requests.

By disabling caching for the RAID0 array we reduce its speed in all modes save for the random read. The maximum gap reaches 528%, i.e. the array with lazy write enabled is more than five times as fast! The only doubtful mode is the 16-request queue with 10% writes. As we mentioned above, we saw the same performance dip in our previous test sessions, too. Now we are 100% sure that it is the HDD cache that is responsible for it.

Without disk caching, the RAID5 array slows down just like RAID0 in all modes save for the random read. The maximum gap is somewhat smaller, 221% at most. When we disable lazy write for the drives in the array, the performance of RAID5 declines steadily as the share of writes increases.

The performance of the RAID10 array depends heavily on lazy writing being enabled for the drives. If you disable it, the gap can reach 226%!

The controller’s cache buffer showed its best in the random write mode only: the array whose disks have lazy write enabled suddenly speeds up under the linear workload in this mode.

Performance in Intel IOMeter Sequential Read and Write Patterns

This pattern helps us explore the controller’s performance at processing streams of sequential read/write requests. The array receives a stream of read/write requests with a request queue depth of 4. Once a minute the size of the data block changes, so we can see how the linear read/write speed depends on the size of the data block. The results (the controller’s data-transfer rate as a function of the data block size) are listed in the following tables:
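
Put as a minimal sketch, the pattern boils down to the following; the exact block-size steps are our assumption, and in the real benchmark each size runs for a full minute rather than for a fixed number of requests:

    # Rough sketch of the Sequential Read/Write pattern: strictly sequential
    # addresses, request size stepped from 512 bytes up to 1MB, four requests
    # kept outstanding at any moment (queue depth = 4).
    BLOCK_SIZES = [512 * 2**i for i in range(12)]   # 512B, 1KB, ... 1MB (assumed steps)
    QUEUE_DEPTH = 4

    def sequential_pattern(operation, requests_per_size=QUEUE_DEPTH):
        offset = 0
        for size in BLOCK_SIZES:
            for _ in range(requests_per_size):
                yield operation, offset, size
                offset += size                      # the next request starts where this one ends

    for request in sequential_pattern("read"):
        print(request)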

We split the arrays into two groups and build diagrams:

The advantages of RAID arrays consisting of many HDDs become apparent when the data block is big enough, that is, when the request is so big that the controller can break it into several smaller blocks and hand them to the drives of the array to process in parallel. The two-disk RAID0 reaches its maximum speed only on 512KB blocks, while the arrays of three and four drives didn’t reach theirs even on 1MB blocks.

The graph of the mirror RAID1 array, like in the DataBase pattern, closely resembles that of the single drive, while the graph of RAID10 coincides with the graph of the two-disk RAID0 up to the performance slump at reading 128KB data blocks. It’s quite logical to infer that the algorithm of optimized reading from the mirror doesn’t work with sequential requests.

When we disabled the lazy write algorithms for the disks, we found the following results:

The lack of write requests should mean that the status of the lazy write algorithm (on/off) doesn’t influence the controller’s speed. Well, this is what we see, with some reservations: for each array there are one or two block sizes where the difference in speed with and without disk caching exceeds the measurement error. We couldn’t find a reasonable explanation for this fact…

Now, let’s explore the controller’s behavior during sequential writing:

We again organize the arrays into two groups:

RAID0 arrays behave much as they did at sequential reading. Only the 256KB blocks are an exception: there, the two-disk RAID0 works faster than all other arrays. The graph of RAID1 follows the graph of the single drive. Sequential write requests are the most uncomfortable operational mode for RAID5 arrays from the performance point of view, so it is no wonder that the three-disk array loses to the single drive on 64KB blocks. RAID10 also works slower than the two-disk RAID0.

Let’s disable lazy write for the drives in the arrays and compare the results with those we have got with enabled lazy writing.

100% write requests is the ideal case to illustrate the influence of lazy write of the HDDs on the array speed. When caching is disabled, all arrays suffer a performance hit, although it is quite negligible for RAID0s and RAID10 on <16KB blocks.

Performance in Intel IOMeter FileServer and WebServer Patterns

Let’s watch the controller handling the test that imitates the workload on the disk subsystem of a typical file or web server.

The File Server pattern comes first:

The same table, but in the graphical representation:

There are only 20% write requests here, so all the arrays show high results. RAID0 arrays scale well with the number of drives in them. The four-disk RAID5 array is nearly as fast as the three-disk RAID0 array. The performance of RAID10 approaches that of the four-disk RAID0, which means the algorithm of optimized reading from the mirror works well here. The speed of RAID1 is a little lower than that of the single drive, as it doesn’t use the optimized reading algorithm.

Let’s compare the performance of the arrays using a rating system. We calculate the performance rating for each array by averaging the controller speed under each type of workload:

The four-disk RAID0 is on top, closely followed by RAID10. The three- and four-disk RAID5s are slightly slower than the two- and three-disk RAID0s, respectively. The small share of writes didn’t help RAID1 much: it even fell behind the single drive.
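
For reference, this rating is nothing more than the average of the array’s speed over the tested request queue depths; a minimal sketch with hypothetical IO/s numbers (the set of queue depths is illustrative):

    # File Server / Web Server rating: the mean of the speeds measured at each queue depth.
    def server_rating(speed_by_queue):
        return sum(speed_by_queue.values()) / len(speed_by_queue)

    four_disk_raid0 = {1: 95.0, 4: 170.0, 16: 250.0, 64: 310.0, 256: 330.0}  # hypothetical
    print(round(server_rating(four_disk_raid0), 1))   # -> 231.0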

Let’s see what we have after the lazy write algorithms have been disabled:

As you see, 20% writes is enough for the arrays to suffer a performance loss from disabling lazy writing to the drives.

This is how it tells on the array ratings:

So the performance hit amounts to 11-17%.

The next pattern imitates the work of the disk subsystem of a web server:

The graphs of the RAID0 arrays have practically not changed since the File Server pattern, while the speed of the RAID5 arrays has grown significantly, because the Web Server pattern, with no write requests at all, is the optimal operational mode for them. That’s also why the mirror RAID10 array becomes faster than the four-disk RAID0 in some cases: it uses the optimized mirror-reading algorithm, while RAID1 does not, as its results prove.

These are the ratings of the arrays, which we calculated according to the same rules as in the File Server pattern:

RAID10 and RAID5 arrays seized the opportunity offered by the absence of write requests: the RAID5s stepped up in the rating list and nearly reached the performance level of RAID0s with the same number of drives. RAID10 is the fastest of all, while RAID1 once again fell behind the single drive.

Curiously, RAID10 showed its maximum performance under a workload of 16 requests, outperforming RAID0, but slows down under higher workloads.

The status of lazy write shouldn’t affect the array performance in this test as it has no write requests and that’s exactly what we see here.

Performance in Intel IOMeter WorkStation Pattern

The WorkStation pattern imitates intensive work of a user in various applications in the NTFS5 file system.

The situation is ordinary with the RAID0 arrays: the more disks the array has, the faster it processes requests. The speed of RAID1 is close to the single drive, while RAID10 surpasses the three-disk RAID0 in a few cases only (frankly speaking, it outperforms the four-disk RAID0 too at a request queue of 4, but that’s a lucky exception). RAID5 arrays don’t show the highest speeds and work slower than the single drive with single requests. This should come as no surprise since the WorkStation pattern has a lot of random write requests, which greatly reduce the speed of RAID5.

We calculate the performance rating for the WorkStation pattern by the following formula:

Performance Rating = Total I/O (queue=1)/1 + Total I/O (queue=2)/2 + Total I/O (queue=4)/4 + Total I/O (queue=8)/8 + Total I/O (queue=16)/16 + Total I/O (queue=32)/32.
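
The same formula expressed as a tiny helper; the Total I/O values in the example are hypothetical:

    # Performance Rating = sum of Total I/O at each queue depth, divided by that depth,
    # which gives the shortest queues the biggest weight.
    def workstation_rating(total_io):
        return sum(total_io[q] / q for q in (1, 2, 4, 8, 16, 32))

    results = {1: 120.0, 2: 150.0, 4: 180.0, 8: 210.0, 16: 240.0, 32: 260.0}  # hypothetical
    print(round(workstation_rating(results), 1))   # -> 289.4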

Random write requests made the RAID5 arrays perform slower than the single drive, and the three-disk RAID5 even lost to RAID1. The RAID0 arrays ranked according to the number of their disks. The performance of RAID10 is lower than that of the three-disk RAID0 because of the numerous write requests, although it did quite well in some modes.

Let’s see the effects of lazy writing on the performance of the arrays in this pattern:

Without caching on the disks, the arrays rank in the same order, but perform 30-50% slower (depending on the array type).

Performance in WinBench 99

WinBench is going to be our last test today. This benchmarking suite helps evaluate the disk subsystem performance in desktop applications. We ran the WinBench tests twice for each file system: on a logical partition equal to the full capacity of the array and on a 32GB logical partition.

The NTFS file system comes first:

The following table compares the arrays in two integral tests: Business Disk Winmark and High-End Disk Winmark:

RAID0 and JBOD arrays rank according to the number of drives in the array. RAID1 is a bit slower than the single drive, as in the previous tests. RAID5 arrays show very poor results as the pattern abounds in write operations. RAID10 also performs rather slowly.

Now we disable lazy write and repeat the tests:

All arrays suffer a performance hit in this case.

Now let’s check the controller performance in FAT32:

The speeds of the arrays in Business Disk Winmark and High-End Disk Winmark:

The arrays took the same places as in NTFS, but their absolute speeds have grown. RAID10 outperformed the single drive – a nice surprise! A nasty exception is RAID1, which came in last.

Let’s compare these results to the ones we get when the lazy write algorithms of the hard disk drives are disabled:

When caching is disabled, all arrays suffer a performance hit.

The linear read speeds are equal for both file systems, so we draw a common diagram:

It seems like the three- and four-disk RAID0s should exchange places. However, these results are correct. At linear reading, the speed should depend linearly on the number of disks, i.e. two-, three- and four-disk RAID0 arrays should provide double, triple and quadruple the speed of the single drive. We don’t see that here. Moreover, the four-disk RAID0 performs slower at the beginning of the disk than the three-disk RAID0! The dependence of the RAID5 performance on the number of disks is obvious, although it is not proportional to the number of disks in the array. The linear speed of reading from the RAID1 array is close to the speed of the single drive, while RAID10 loses to the two-disk RAID0.

The linear read speed doesn’t actually depend on lazy write, and this held true in most modes. Curiously, the speed of reading the beginning of the RAID0 array is higher when disk caching is disabled.

These are the read graphs for each of the arrays:

Fault Tolerance

To check the controller’s ability to keep the data of the array safe when one of the drives crashes, we “simulated” a failure of one HDD in the RAID1, RAID5 and RAID10 arrays. The simulation was simple, as in our review of the Intel SRCS14L controller: we unplugged the power cable from the hard disk drive and tracked the array status in the Adaptec Storage Manager (Browser Edition) utility.

It took quite a bit of time for the array to be restored while it was still responding to user requests, so we removed all workload from the controller (i.e. terminated the benchmark) and noted the time it took the controller to restore the array.

So the controller restored the data integrity in:

Conclusion

The Adaptec SerialATA RAID 2410SA controller got through our tests quite successfully overall. RAID5 and RAID0 arrays are fast and show nice speed scalability: the array becomes faster as you add drives to it. The controller also provides a high level of data security in its fault-tolerant arrays (RAID1, RAID5 and RAID10).

Meanwhile, the RAID1 array doesn’t use an optimized technique for reading data from the mirror, and its performance is close to that of a single drive. That’s why a RAID1 array on the Adaptec SATA RAID 2410SA controller only makes sense for improving data security, not speed. The second mirror array type, RAID10, does seem to use an optimized algorithm for reading from the mirror, and its performance results are quite predictable. This gives us some hope that the next version of the driver/firmware will enable this algorithm for RAID1, too.

As for the negative aspects of the controller, or rather of its current firmware, the cache buffer affects the arrays’ speed only very slightly. We know from our Adaptec 2400A review that it’s possible to use the cache buffer more intensively.

Anyway, the Adaptec SerialATA RAID 2410SA controller will suit workstations and entry-level servers nicely, but we won’t know whether it is the best possible solution until we have reviewed all the controllers of this series. So, stay tuned!

Appendix

The manufacturer’s website claims that the Adaptec SATA RAID 2410SA controller is compatible with every widespread operating system. In particular, the CD enclosed with the device contains drivers for the following OSes:

Drivers for these operating systems, as well as driver updates, are always available for download from the manufacturer’s website (although, frankly speaking, we didn’t find drivers for SCO UnixWare and Caldera Open Unix there :).