We’ll test a new eight-channel SATA II RAID controller from Areca today, and we guess most of you have never heard of this company, although you may well have come across its products. Areca controllers are sold in Europe under the Tekram brand for some reason; the only difference from Areca’s original controllers is the text on the box.
But we’ve got an original Areca without a trace of Tekram:
The eight-channel ARC-1220 controller is based on an Intel IOP333 processor (which works at an impressive frequency of 500MHz!) and supports up to eight Serial ATA II drives (with a data-transfer rate of 3Gb/s). The controller comes with 128 megabytes of onboard memory (with no option to add more); the battery backup unit is optional. The controller allows building the following types of RAID arrays: 0, 1, 3, 5, 6, 10 and JBOD. It is designed as a PCI Express card (the PCI-X version of this controller has the model name ARC-1120).
The controller can be switched into a plain SATA controller mode through its software. That is, you can disable all of its RAID functionality, but you will hardly want to do so, because $700 seems a bit too much for just a Serial ATA controller.
And now we want to tell you about the new array type, RAID6, which is supported by the Areca controller. There is in fact nothing extraordinary about RAID6. It is a further development of the RAID5 concept of distributed parity, but the checksum for each stripe is written to two disks rather than to one. It can be illustrated with the following figure:
Here, D1, D2, etc. denote data blocks, and P1, P2 denote the parity blocks for the corresponding stripes.
Thus, the minimum number of disks required to build a RAID6 array is four. The useful capacity of a RAID6 array is (N-2) × (capacity of one disk), where N is the number of disks, which is of course smaller than that of a RAID5 array consisting of the same number of disks.
The disk usage coefficient of a four-disk RAID6 is the same as that of a RAID10, i.e. 0.5, but it becomes quite reasonable in RAID6 arrays that consist of more disks.
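The capacity arithmetic above can be sketched in a few lines of Python (not tied to this controller, just the general N-disk formulas):

```python
# Usable-capacity fractions for N identical disks in different RAID levels.

def raid5_usable(n):
    # One disk's worth of capacity goes to parity.
    return (n - 1) / n

def raid6_usable(n):
    # Two disks' worth of capacity goes to the dual parity.
    return (n - 2) / n

def raid10_usable(n):
    # Mirrored pairs: half the raw capacity, regardless of N.
    return 0.5

for n in (4, 6, 8):
    print(n, raid5_usable(n), raid6_usable(n), raid10_usable(n))
```

With four disks, RAID6 sits at 0.5 just like RAID10, but at eight disks it already reaches 0.75, which is where the "quite reasonable" trade-off kicks in.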
Why is it necessary to store a second checksum for each stripe? To enable the array to survive a simultaneous failure of two disks without losing any data! For arrays consisting of inexpensive and, accordingly, not very reliable disks, RAID6 seems to be a reasonable compromise between storage capacity and reliability. But what about its speed? Theoretically, it shouldn’t be slower than a RAID5 array, since it is the checksum calculation that takes the most time, whereas the extra write is easily made up for by the controller’s and/or disks’ cache memory. We’ll check this out right now.
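For the curious, the usual way to get two independent checksums (the article doesn't say how Areca's firmware does it, so this follows the common RAID6 math rather than anything vendor-specific) is an XOR parity P plus a Reed-Solomon-style syndrome Q over GF(2^8). A minimal per-byte sketch, treating one byte as one "disk":

```python
# Hypothetical per-byte sketch of RAID6 dual parity over GF(2^8),
# using the common reduction polynomial 0x11d and generator g = 2.

def gf_mul(a, b):
    """Multiply two elements of GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d  # reduce modulo x^8 + x^4 + x^3 + x^2 + 1
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^254 = a^-1 (multiplicative group has order 255)

def pq_parity(data):
    """P is a plain XOR; Q additionally weights disk i by g^i."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow(2, i), d)
    return p, q

def recover_two(data, p, q, x, y):
    """Rebuild data disks x and y (x < y) from the survivors plus P and Q."""
    pxy = qxy = 0
    for i, d in enumerate(data):
        if i in (x, y):
            continue
        pxy ^= d
        qxy ^= gf_mul(gf_pow(2, i), d)
    gyx = gf_pow(2, y - x)
    denom = gf_inv(gyx ^ 1)
    a = gf_mul(gyx, denom)                   # g^(y-x) / (g^(y-x) + 1)
    b = gf_mul(gf_inv(gf_pow(2, x)), denom)  # g^(-x)  / (g^(y-x) + 1)
    dx = gf_mul(a, p ^ pxy) ^ gf_mul(b, q ^ qxy)
    dy = (p ^ pxy) ^ dx
    return dx, dy

# One byte per "disk": lose disks 1 and 3, then rebuild them from P and Q.
data = [0x11, 0x22, 0x33, 0x44]
p, q = pq_parity(data)
print(recover_two(data, p, q, 1, 3))  # recovers the lost 0x22 and 0x44
```

The point of the weighted Q syndrome is that losing any two data disks leaves a solvable two-equation system, which is exactly what a single XOR parity (RAID5) cannot offer.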