
FC-Test 2.0

As I said above, files in our FC-Test scripts are not processed in a random order. They are written to the disk sequentially (the disk is empty before the test and is formatted between the cycles of work with each pattern) and are read back in the same sequential order. Thus, the drive works under almost ideal conditions, as is clearly indicated by the results in the ISO pattern, where it almost reaches its linear read speed.

This ideal is seldom met in reality because of fragmentation: a file may be stored on the disk as several fragments rather than as a single contiguous whole, and a fragmented file is read at a somewhat lower speed than one that is not fragmented.

We can’t imitate file fragmentation at the sector level, but we can imitate it at the file level. That is, we can set up a situation in which the files are read in an order other than the one in which they are stored on the disk.

In other words, we want to read the files in a random order. This could be done with FC-Test 1.0 by preparing file-lists with a predetermined file-processing sequence and then using them to read the files from the disk, but that would be inconvenient and not very illustrative.
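To give an idea of what that manual preparation would involve, here is a minimal sketch in Python (not the actual FC-Test code): it simply shuffles a list of files so that the read order no longer matches the on-disk order. The file names and the output file name are made up for the example.

```python
import random

# Hypothetical illustration of the kind of file-list one would have to
# prepare by hand for FC-Test 1.0 to read files in a random order.
files = [f"file{i:04d}.dat" for i in range(1000)]  # order the files were written in

random.shuffle(files)  # read order no longer matches the on-disk order

# "random_read.fcl" is an assumed name; the real list format may differ.
with open("random_read.fcl", "w") as f:
    f.write("\n".join(files))
```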

Imitating random reading of files in IOMeter is not a task I would try to solve. Writing a correct pattern for such a test would take a mathematical genius, and that genius would then spend the rest of his life explaining that his pattern is indeed correct.

Instead, I’ll use the new test utility, FC-Test 2, which is being beta-tested in our labs right now.

Among other things, this test can read files with a specified degree of randomness and locality. That is, you can specify the maximum length of the file chain to be read in one pass and the maximum length of the jump within the file-list.

I simplified the operation of the script for this test: the length of the jump was limited to 100 files, while the length of the file chain was fixed within each run (the random-number generator was blocked) and varied from 1 to 10 between the runs.
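The sketch below shows roughly how such a read order can be generated. FC-Test 2’s actual algorithm is not documented here, so this is only an approximation of the behaviour just described, with made-up function and parameter names; as in our runs, the chain length is fixed and only the jump distance is random.

```python
import random

def build_read_order(num_files, chain_len, max_jump=100, seed=0):
    """Approximation: read `chain_len` consecutive files, then jump
    ahead by a random distance of at most `max_jump` files, and repeat
    until the end of the file-list is reached."""
    rng = random.Random(seed)
    order = []
    pos = 0
    while pos < num_files:
        chain = list(range(pos, min(pos + chain_len, num_files)))
        order.extend(chain)                          # one chain read in a single pass
        pos = chain[-1] + 1 + rng.randint(0, max_jump)  # jump of up to max_jump files
    return order

# Example: chain length 5, one of the points between 1 and 10
print(build_read_order(num_files=50, chain_len=5)[:20])
```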

The result is the dependence of the random read speed on the length of the file chain read in a single pass.

And these are the results I’ve got.

With rare exceptions, the WD1500AHFD is always in the lead. The WD740GD-FLC0 takes second place, and the WD740GD-FLA1 is third.

The fluctuation of the test results is easy to explain. We took the standard Programs pattern from FC-Test 1, and this pattern includes files of different sizes, so the total amount of data in a chain of files doesn’t strictly depend on the number of files in the chain. The random read speed depends on the speed of head movement as well as on the amount of data read in one pass. The first factor is straightforward: the higher the areal density (and the more platters the drive has!), the fewer tracks the files of the pattern occupy, and the narrowing of the zone in which the heads operate has a direct effect on performance.

It’s more complex with the second factor: the total size of the files read in one pass depends on which particular files are selected.
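A quick numerical illustration of that point (the file sizes below are invented and are not the real Programs pattern): chains containing the same number of files can carry very different amounts of data.

```python
import random

# Made-up file sizes from 4 KB to 8 MB, standing in for a mixed pattern.
rng = random.Random(1)
file_sizes = [rng.choice([4_096, 65_536, 1_048_576, 8_388_608]) for _ in range(100)]

chain_len = 5
for start in (0, 20, 40):
    chain_bytes = sum(file_sizes[start:start + chain_len])
    print(f"chain starting at file {start}: {chain_bytes / 1_048_576:.1f} MB")
```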

Of course, we could use files of identical size to get perfectly scalable results, but would such results have any practical meaning?

 