A very good report showing a large amount of effort, but it is undermined by a few important details.
1) Use of a non-representative test system. With all due respect, how many people are actually running Windows 7 on a 6+ year old single-core Pentium 4, with 1 GB of memory and a 15 GB boot disk? And of those, how many are going to invest in the cost of an SSD?
This test system is very slow by modern standards, and it does taint the results. The problems with the Marvell SATA3 drivers have been reported elsewhere as well, but all driver-efficiency issues are compounded by the slow P4 and the older Intel ICH (I/O Controller Hub). You also mention that your WinRAR tests appear to be compute limited ... on an old P4.
Some of your reported IOPS figures for 4K IO are 1/3 of the rate that I get on a 120 GB OCZ Vertex 2, and that my friend gets on a 256 GB Crucial C300. These low numbers are partially caused by the host setup/teardown time through the drivers, running on the slower, single-core P4.
Should you not sanity-check your results before publishing? You are measuring several SSDs at ~10K IOPS when several other sites, as well as the vendors' own marketing literature, report 30K+ IOPS for the same drives.
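To put rough numbers on the host-overhead argument (these figures are my own illustration, not the article's measurements): if every 4K IO pays a fixed driver/interrupt cost on the host, a slow single-core CPU can cap a 30K-IOPS drive at roughly 10K IOPS.

```python
# Illustrative sketch (my numbers, not measured): observed IOPS when each IO
# pays a fixed host-side driver/interrupt cost, with one IO in flight at a time.
def observed_iops(device_iops, host_overhead_us):
    """IOPS seen by the benchmark: device service time plus host overhead per IO."""
    device_time_us = 1e6 / device_iops   # time the drive itself needs per IO
    return 1e6 / (device_time_us + host_overhead_us)

# A drive rated at 30K IOPS, with an assumed ~67 us of host overhead per IO:
print(round(observed_iops(30000, 67)))  # 9967 -- roughly 1/3 of the rated speed
```

At queue depth 1 this single-IO model is a reasonable first approximation; with deeper queues the overhead overlaps with device time and the gap narrows, which is one more reason to report queue depth alongside IOPS.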
Again, I am not challenging your testing diligence, just the ramifications of using such a slow test system. I would suggest that a quad-core system with at least 8 GB of memory, running Win 7 64-bit, would be more representative.
2) Use of very old firmware versions. Your article starts with a discussion of how fast the SSD industry is moving and improving .. almost monthly, with the release of new firmware versions and new products, yet you choose to use an OCZ Vertex 2 firmware version that is 6 months old? As posted in another comment, version 1.24 is the current version and has been shipping in new SSDs since December. Other review sites have been publishing results on versions 1.24, 1.23, .. and you are testing version 1.1, released in June 2010? I suspect that some of the other SSDs also have non-current firmware versions.
3) No discussion of performance degradation as the SSD "ages", or of how various vendors handle the issue. How should a performance benchmark be structured so that aging effects are consistent across vendors?
The short Xbit tests only show the "new" performance levels, assuming the SSDs are "new". Have all the SSDs been returned to initial factory state before the tests, or is an older SSD being indirectly penalized because you have been running tests on it for six months and other SSDs are fresh?
Advanced features such as good TRIM support and background garbage collection can mitigate aging impacts. One of the performance issues with the Marvell SATA3 controller is that it does not pass the TRIM command, possibly amplifying aging effects. Using a RAID stripe also prevents TRIM from being passed, potentially hurting performance as the disks age, unless there is an excellent garbage-collection implementation.
Therefore, certain styles of controllers (for example, SandForce, which implement background garbage collection) would be better suited to a RAID implementation than those from a vendor that does not support garbage collection. Do your tests prove or disprove this behavior?
4) Use of a known "slow" SATA3 controller (Marvell), when better alternatives exist. The statement:
"We don’t like the Marvell controller’s drivers, but we don’t have any alternative to it until SATA 3 is implemented in mainboards’ chipsets"
This is not true. There are several good high-performance SATA3 PCI Express cards on the market, many using controller chips from LSI Logic. Purchasing a good SATA3 add-on card may not be cost effective for the consumer, but for testing purposes it would be warranted.
5) Mixing SSD capacities without investigating whether performance is capacity-sensitive. All of these SSD controllers perform IO in parallel across multiple flash memory channels; "value" versions, for example, often have depopulated channels.
Specifically, the 240 GB Crucial C300 is the largest SSD tested, and is compared to 120 GB SSDs ... and comes out the winner. Is this because of a legitimate performance advantage, or having more flash chips and memory channels, or significantly greater empty space on the test disk? Or a combination of all three?
Well, Crucial's own performance information shows that the 120 GB version has significantly lower performance, especially on the write side. Many other web sites, and the SSD vendors themselves, suggest under-provisioning as a method to improve performance. Your ~120 GB SSDs have about 8 GB of hidden space (6.7%), but the 240 GB SSD has effectively 136 GB (128 GB empty + 8 GB hidden) of spare space (56%). Could this taint the results? We don't know from your report.
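The spare-space arithmetic above can be written out explicitly (a sketch using this comment's own figures; the 8 GB of hidden over-provisioning on the 240 GB drive is my assumption, carried over from the 120 GB case):

```python
# Spare flash from the controller's point of view: empty user space plus
# hidden over-provisioning, expressed as a percentage of nominal capacity.
def spare_pct(nominal_gb, empty_gb, hidden_gb):
    return 100.0 * (empty_gb + hidden_gb) / nominal_gb

print(round(spare_pct(120, 0, 8), 1))    # 6.7  -- a full 120 GB drive
print(round(spare_pct(240, 128, 8), 1))  # 56.7 -- 240 GB drive holding the same data set
```

An eightfold difference in effective spare area gives the controller far more room for wear-leveling and garbage collection, so it is a plausible confound in the 240 GB drive's win.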
Blindly comparing a 120 GB SSD with a 240 GB disk (at 2+x the cost) can be misleading. Even within a given SSD capacity there can be differences. For example, a 120 GB SSD could be built as 16 x 8 GB or 8 x 16 GB chips ... using 16 or 8 channels, respectively. You would expect the 16-channel system to be faster, but it would probably cost more.

X-Bit could provide a valuable "bit" of information by identifying the internal topology if and when the versions differ significantly. For example, the Intel X25-V series uses only 5 flash memory channels, where the Intel X25-M uses 10; SandForce controllers can use 8 to 16 flash channels, depending on the implementation; and so forth. The 120 GB version of the Crucial C300 appears to use half as many flash chips as the 240 GB version, rather than the same number of chips at half the density, and this impacts its performance.
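As a back-of-the-envelope illustration of why the channel count matters (all timing values here are hypothetical, not vendor specs): if each flash channel can service one 4K read at a time, the IOPS ceiling scales linearly with the number of populated channels.

```python
# Hypothetical sketch: per-channel service time for one 4K read, modeled as a
# flash page read plus the transfer to the controller (illustrative values).
def channel_limited_iops(channels, page_read_us=50.0, transfer_us=25.0):
    per_channel_iops = 1e6 / (page_read_us + transfer_us)
    return channels * per_channel_iops

# Same total capacity, two layouts: 16 channels vs 8 channels.
print(channel_limited_iops(16) / channel_limited_iops(8))  # 2.0 -- twice the ceiling
```

This is why a "value" drive with depopulated channels can underperform its full-capacity sibling even with the same controller and firmware.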
[01/10/11 09:29:19 AM]