Solid State Drives (SSDs) have been a subject of heated debate ever since they were announced. After all, they represent the first serious challenge to the magnetic storage devices that have long remained unrivalled on the market. We’ll take two SSDs from Samsung to see what differentiates them from the traditional Hard Disk Drive. But first, some theory.
In a hard disk drive, information is stored on magnetic platters rotating at high speed. This information is written to and read from the platters by a block of read/write heads. A microcontroller controls the movement of the heads relative to the platters, communicates with the external interface, and manages the cache buffer. Thus, the data-transfer speed depends on the rotation speed of the platters and on the areal data density. Of course, the interface has a limited bandwidth too, but the interface speed of today’s HDDs is far higher than the speed of reading from the platters, so the interface only matters for external HDDs, which are beyond the scope of this article. When processing a large number of small data blocks, the HDD’s performance is affected by the cache-management algorithms written into its firmware. The reordering of write requests and look-ahead reading influence performance greatly, because data can be stored in different parts of the platters, and when the load is random or nearly random (which is typical for many applications, from databases to any large collection of small files, such as your web browser’s cache), it takes quite a lot of time just to move the heads to the necessary spot above the platter. A mixed load, when the drive has to process requests to two or more zones of the platters at once, is also difficult: the heads have to be moved actively between the zones, and it is then that you can hear the rattle of the heads (the platters themselves do not produce much noise as they rotate).
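The benefit of reordering requests can be illustrated with a toy model. This is a minimal sketch, not any drive vendor's actual firmware; the track numbers and the one-directional SCAN ("elevator") sweep are assumptions chosen purely for illustration:

```python
# Toy model: head travel is approximated as |current_track - next_track|.
# Serving random requests in arrival order forces long back-and-forth
# seeks; an elevator-style sweep visits them in one pass.

def total_seek(start, requests):
    """Total head travel when requests are served in the given order."""
    pos, dist = start, 0
    for track in requests:
        dist += abs(pos - track)
        pos = track
    return dist

def elevator_order(start, requests):
    """One-directional sweep (SCAN): serve tracks at or above the start
    position in ascending order, then the rest in descending order."""
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

requests = [980, 12, 500, 470, 33, 710]  # hypothetical track numbers
print(total_seek(400, requests))                        # arrival order
print(total_seek(400, elevator_order(400, requests)))   # swept order
```

The sweep never travels farther than arrival order; real firmware balances this against request latency and its look-ahead cache, which the sketch ignores.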
It’s all different with Solid State Drives, which store data in flash memory chips governed by a microcontroller. Flash memory is a form of EEPROM, or Electrically Erasable Programmable Read-Only Memory. It consists of an array of cells based on NAND or NOR logic. This type of storage features a very low read time (the necessary cell only has to be located and read) but a rather high write time (the existing data must be erased from the cell before new data can be written into it). The access time grows considerably with multi-level cell (MLC) organization: as opposed to the single-level cell (SLC) design, writing requires reading all the data from the cell, modifying it, erasing the cell, and then writing the new data back. However, MLC memory is cheaper to manufacture and offers larger storage capacities.
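The read-modify-erase-write cycle described above can be sketched in a few lines. This is a toy model under assumed semantics (programming can only clear bits from 1 to 0, and erasure resets a whole block to all ones), not real NAND timing, page sizes, or geometry:

```python
# Toy flash model: a block can only change bits from 1 to 0 when
# programming; turning a bit back to 1 requires erasing the whole block.
# That mandatory erase step is why writes cost more than reads.

ERASED = 0xFF  # erased flash bytes read as all ones

class FlashBlock:
    def __init__(self, size=4):
        self.data = [ERASED] * size
        self.reads = self.programs = self.erases = 0

    def read(self, i):
        self.reads += 1
        return self.data[i]

    def erase(self):                 # block-wide: every byte back to 0xFF
        self.erases += 1
        self.data = [ERASED] * len(self.data)

    def program(self, i, value):     # can only clear bits (1 -> 0)
        self.programs += 1
        self.data[i] &= value

    def write(self, i, value):
        """Overwrite one byte via read-modify-erase-program: save the
        block, patch the byte, erase, then program everything back."""
        saved = [self.read(j) for j in range(len(self.data))]
        saved[i] = value
        self.erase()
        for j, v in enumerate(saved):
            self.program(j, v)

block = FlashBlock()
block.write(0, 0x12)
block.write(0, 0x34)   # rewriting forces another erase of the block
print(hex(block.data[0]), block.erases)
```

Note how rewriting a single byte touches the entire block; real controllers hide some of this cost with spare blocks and write buffering.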
So, what are the advantages of Solid State Drives? First of all, the low access time for read operations. Second, as a consequence of that low access time, an SSD features a very high random-read speed, comparable to its sequential read speed: with flash memory, it takes about the same time to access sequential and random cells. Third, the lack of mechanical parts should ensure low power consumption, noiseless operation and resistance to vibration. In fact, you can only render flash memory inoperable by physically breaking it.
On the downside are the lower speed of sequential operations in comparison with the HDD and the limited lifecycle of the cells: flash memory can only sustain about 100,000 rewrite operations per cell. That’s quite a lot, especially as modern controllers can distribute data across the drive to wear all cells uniformly. Still, this factor should be kept in mind until we have trustworthy information about the service life of SSDs.
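The controller behavior just mentioned is known as wear leveling, and the idea can be sketched as follows. The block count and the least-worn selection policy here are assumptions for illustration, not Samsung's actual firmware logic:

```python
# Hedged sketch of dynamic wear leveling: instead of rewriting a logical
# block in the same physical place every time, the controller remaps it
# to the least-worn physical block, so erase counts stay balanced.

class WearLeveler:
    def __init__(self, n_blocks):
        self.wear = [0] * n_blocks   # erase count per physical block
        self.mapping = {}            # logical block -> physical block

    def write(self, logical):
        # Candidate blocks: anything unmapped, plus the block this
        # logical address currently occupies.
        free = [b for b in range(len(self.wear))
                if b not in self.mapping.values()
                or self.mapping.get(logical) == b]
        target = min(free, key=lambda b: self.wear[b])
        self.wear[target] += 1       # each write costs one erase here
        self.mapping[logical] = target

wl = WearLeveler(8)
for _ in range(800):
    wl.write(0)                      # hammer a single logical block
print(max(wl.wear) - min(wl.wear))   # wear stays nearly uniform
```

Without remapping, those 800 rewrites would all land on one physical block and exhaust its ~100,000-cycle budget eight times faster than necessary.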
What applications can be suggested for these rather expensive (in terms of the cost of storing 1GB of data) drives? First of all, industrial computers and advanced supercomputers. For the former, the high tolerance to vibration is important. In fact, early flash-based drives were in demand precisely as storage devices with high tolerance to harsh environments: they were welcomed in military applications, but for everything else their price was still too high. As prices fell, this storage type became interesting for other customers as well, and eventually flash drives made it into retail.
As for large computing systems, there are a lot of benefits: high reliability, high random-read speed due to the low access time, low heat dissipation, and small dimensions. The latter two factors are crucial for single-unit (1U) server platforms. For example, the Intel SR1550AL can accommodate six 2.5” drives; if you install SSDs, you get a superb RAID array capable of processing a huge number of operations per second, and it will only take one slot in the server rack. Notebooks, especially compact ones, can benefit from SSDs as well because the size and power consumption of the drive matter there. SSDs have already begun to conquer that market, ousting slow, small-capacity 1.8” HDDs; the Apple MacBook Air and Toshiba Portege R500-10U are two examples.
So much for the theory. How do things stand with modern SSDs in practice? Let’s check it out. The SSD class will be represented by two devices from Samsung in the 2.5” form-factor with capacities of 32GB and 64GB. These SSDs will be compared with the best-in-class HDDs:
- Hitachi 7K200 (2.5” form-factor, 7200rpm spindle rotation speed, 16MB buffer, SATA interface, 200GB capacity)
- Samsung SpinPoint F1 (3.5” form-factor, 7200rpm spindle rotation speed, 32MB buffer, SATA interface, 1000GB capacity)
- Fujitsu MBA3300RC (3.5” form-factor, 15,000rpm spindle rotation speed, 16MB buffer, Serial Attached SCSI interface, 300GB capacity)
As a representative of quite another category, we took the Gigabyte i-RAM, a device that uses RAM modules for storing data. The HDDs are quite familiar to us, but the new devices, including the i-RAM, will be discussed in more detail below.