Closer Look at Adaptec RAID ASR-5805

First off, the fifth series is one of the largest controller families produced by Adaptec. Currently it includes as many as seven members, ranging from the basic ASR-5405 to the 28-port ASR-52445. The nomenclature is not confusing at all once you know how to decipher the model names: the first numeral is the series number, followed by one or two numerals denoting the number of internal ports, then a numeral denoting the number of external ports, and finally a numeral denoting the interface type. Since every model is equipped with PCI Express x8, every name ends in “5”. Adaptec’s older models may have “0” at the end of the model name, which indicates PCI-X.
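
Just to illustrate the scheme, here is a minimal Python sketch that splits a series-5 model number into those fields. The parsing rules are simply our reading of the naming convention described above, not any official Adaptec tool:

```python
def decode_adaptec_model(model: str) -> dict:
    """Split a series-5 model number such as '5805' or '52445' into fields.

    first digit            -> series number
    middle one/two digits  -> internal ports
    second-to-last digit   -> external ports
    last digit             -> interface (5 = PCI Express x8, 0 = PCI-X)
    """
    return {
        "series": int(model[0]),
        "internal_ports": int(model[1:-2]),   # one or two digits
        "external_ports": int(model[-2]),
        "interface": "PCI Express x8" if model[-1] == "5" else "PCI-X",
    }

print(decode_adaptec_model("5805"))    # ASR-5805: 8 internal, 0 external
print(decode_adaptec_model("52445"))   # ASR-52445: 24 internal, 4 external
```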

The unified architecture implies unified specifications. Every model features a dual-core 1.2GHz processor, a PCI Express x8 interface, support for an optional battery backup unit (BBU), and support for up to 256 SATA/SAS devices (using appropriate racks and expanders); all models carry 512 megabytes of memory except the four-port one, which has half that amount. By the way, the previous series supported up to 128 devices only, which means that Adaptec has not only endowed the new series with faster processors but also improved the firmware.

Talking about firmware, there are array-building limitations due to the peculiarities of the controllers’ architecture. Few users are likely to run into them, but they are worth mentioning. The fifth series (and the second series, too, for that matter) supports up to 64 arrays on one controller (the third series supports only 24), up to 128 disks in one RAID0 array, and up to 32 disks in the rotated-parity arrays RAID5 and RAID6.

The series 5 controllers support nearly every popular RAID array type, namely:

  • Single disk
  • RAID0
  • RAID1
  • RAID1E
  • RAID5
  • RAID5EE
  • RAID6
  • RAID10
  • RAID50
  • RAID60

We guess a bit of clarification is necessary here. While the traditional RAID0, 1, 5, 6 and combinations thereof are well known to most users, RAID1E and RAID5EE may be less familiar to you. How do they work then?

RAID1E arrays are closely related to RAID1 as they use data mirroring, too. However, it is not whole disks that are mirrored but individual blocks of the disks. This approach allows building arrays out of three or more disks (a two-disk RAID1E is effectively an ordinary RAID1). The following diagram makes the point clearer (the letter M denotes a mirror block):

So, what do we have in the end? A RAID1E array has data redundancy and will survive the failure of one disk. As opposed to RAID1 and RAID10, it can be built out of an odd number of disks. A RAID1E also offers good read and write speeds: since the stripes are distributed among different disks, it is going to be faster than a RAID1 and at least comparable to a RAID10 (unless the controller can read data from both disks of a mirror simultaneously). You should not view RAID1E as an ideal array type, though. You lose the same share of disk capacity as with the other mirroring array types, i.e. 50%, and RAID10 is better in terms of data security: the number of disks being equal, it can in some cases survive the failure of two disks at once. Thus, the main attraction of RAID1E is that it allows building a mirrored array out of an odd number of disks and provides some performance benefits, which may come in handy if your controller cannot read from both disks of ordinary mirrors simultaneously.
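
Since the block arrangement is easy to express in code, here is a small Python sketch that prints one common RAID1E layout: data rows alternate with mirror rows shifted by one disk, so every block ends up on two different disks. This is our illustration of the general scheme; a real controller’s firmware may arrange the blocks differently.

```python
def raid1e_layout(num_disks: int, num_stripes: int):
    """Return rows of block labels; layout[row][disk] is what that disk stores."""
    layout = []
    block = 0
    for _ in range(num_stripes):
        data = [f"B{block + d}" for d in range(num_disks)]
        layout.append(data)                                # data row
        layout.append([f"{data[(d - 1) % num_disks]}(M)"   # mirror row, shifted
                       for d in range(num_disks)])         # by one disk
        block += num_disks
    return layout

# Three disks -- an odd number, impossible with plain RAID1 or RAID10:
for row in raid1e_layout(num_disks=3, num_stripes=2):
    print(" | ".join(row))
# B0    | B1    | B2
# B2(M) | B0(M) | B1(M)
# B3    | B4    | B5
# B5(M) | B3(M) | B4(M)
```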

A RAID5EE array is a far more curious thing. Like RAID5, on which it is based, it ensures data security by storing stripes with XOR checksums, and these stripes rotate, i.e. they are distributed uniformly among all the disks of the array. The difference is that a RAID5EE requires a hot-spare disk that cannot serve as a spare for other arrays. This disk does not sit idle waiting for a failure to happen, as an ordinary spare does, but takes part in the operation of the working array: data is striped across all the disks, including the spare, while the spare’s capacity is distributed uniformly among all of the disks in the array. Thus, the spare becomes a virtual one. The disk itself is utilized, but its capacity cannot be used because it is reserved by the controller and unavailable to the user. In the earlier implementation, called RAID5E, this reserved capacity was located at the end of each disk; in a RAID5EE it is spread across the entire array as uniformly distributed blocks. To make the point clearer, take a look at the diagram where P1, P2, P3 and P4 are the checksums of the appropriate full stripes while HS denotes the uniformly distributed capacity of the spare disk:
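
The same layout can also be sketched in code. The snippet below prints one plausible RAID5EE arrangement for four disks, with the parity block (P) and the distributed spare block (HS) rotating from stripe to stripe; the exact rotation order is our assumption rather than necessarily the one Adaptec’s firmware uses:

```python
def raid5ee_layout(num_disks: int, num_stripes: int):
    """layout[stripe][disk]: 'Dk' = data, 'Pi' = parity of stripe i, 'HS' = spare.

    Each full stripe carries one parity block and one hot-spare block;
    both rotate across the disks from stripe to stripe.
    """
    layout = []
    block = 0
    for s in range(num_stripes):
        p_disk = (num_disks - 1 - s) % num_disks   # parity position rotates
        hs_disk = (p_disk + 1) % num_disks         # spare block beside parity
        row = []
        for d in range(num_disks):
            if d == p_disk:
                row.append(f"P{s + 1}")
            elif d == hs_disk:
                row.append("HS")
            else:
                row.append(f"D{block}")
                block += 1
        layout.append(row)
    return layout

for row in raid5ee_layout(num_disks=4, num_stripes=4):
    print(" | ".join(row))
# HS | D0 | D1 | P1
# D2 | D3 | P2 | HS
# D4 | P3 | HS | D5
# P4 | HS | D6 | D7
```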

So, if one disk fails, the lost information is restored into the “empty” stripes (marked as HS in the diagram) and the array becomes an ordinary RAID5.
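
The rebuild works because of the ordinary XOR property of RAID5 parity: the parity block is the XOR of the data blocks in its full stripe, so any single missing block can be recomputed from the survivors. A toy example with single-byte “blocks” (the values are arbitrary):

```python
d0, d1, d2 = 0b10110010, 0b01101100, 0b11100001   # example data blocks
parity = d0 ^ d1 ^ d2                             # checksum stored in the P block

lost = d1                     # suppose the disk holding d1 fails
rebuilt = parity ^ d0 ^ d2    # XOR of the surviving blocks restores it
assert rebuilt == lost
print(f"recovered block: {rebuilt:08b}")
```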

What are the benefits of this approach? The spare disk is not idle but takes part in the array’s work, improving read speed thanks to the higher number of disks each full stripe is distributed among. The drawback of such arrays is obvious, too: the spare disk is bound to its array, so if you build several arrays on one controller, you will have to dedicate one spare disk to each of them, which increases your costs and requires more room in the server.

Now let’s get back to the controller. Thanks to the high level of integration, Adaptec has made the controllers physically quite short; the 4- and 8-port models are even low-profile. The BBU does not occupy an expansion slot in your system case but is installed on the back of the controller itself.

The BBU is actually installed on a small daughter card, which is in turn fastened to the controller with plastic screws. We strongly recommend using a BBU because data is usually too important to risk losing for the sake of petty savings. Moreover, deferred writing is highly important for the performance of RAID arrays, as we learned in our test of a Promise controller that worked without a BBU. Enabling deferred writing without a BBU is too risky: we can do so in our tests but would never run a RAID controller in this manner in real-life applications.

The list of supported operating systems is extensive and covers almost every existing OS save for the most exotic ones. Besides all versions of Windows starting from Windows 2000 (including the 64-bit versions), the controller can work under several popular Linux distributions, FreeBSD, SCO Unix and Solaris. There are also drivers for VMware. For most of these OSes there are corresponding versions of Adaptec Storage Manager, the company’s own tool for managing the controller. The tool is highly functional and allows performing any operation on the controller and its RAID arrays without entering the BIOS. Adaptec Storage Manager offers easy access to all the features and provides a lot of visual information about the controller’s operation modes.

 