by Nikita Nikolaichev
06/03/2004 | 07:50 PM
Two years ago our test laboratory acquired a new benchmarking tool called FC-Test (see our article called X-bit labs Presents: FC-Test for Hard Disk Drives). The necessity of creating our own benchmark for hard disk drives had long been felt. As you know, the WinBench 99 suite hasn’t been updated for several years and the file sets it uses don’t reflect the realities of the current day. Intel IOMeter can be made to simulate nearly any workload on the disk subsystem, but it is too abstruse and esoteric for the end-user. After browsing through numerous benchmarks, we found we had to write one ourselves…
After a long and painful period of deliberating over the type of benchmark to create (synthetic or based on real applications), we decided on a test that would measure the speed of reading, writing and copying files in the Windows operating system. Our choice was due to two facts: these operations are easily understood by any PC user, and this OS family is the most widespread today.
So we wrote the test, and our first attempt at using it to examine the performance of hard disk drives confirmed that FC-Test deserved a place in our list of benchmarks (we first used FC-Test in our Western Digital External Hard Disk Drive Review).
Since then, we have run FC-Test countless times, and the stack of reports grew too thick to fit into the desk drawer. What reports do I mean? The test simply couldn’t save its results into a log file – you had to put the numbers down on paper manually and then type the data into Excel spreadsheets. Of course, custom-made forms in the same Excel made the process somewhat easier (and saved paper!), but we live in an age of technology after all!
So after the drawer couldn’t take in any more reports, we revolted! Another cause for that, besides the above-mentioned lack of a log file, was that you had to sit at the testbed computer, write down the results every 3-5 minutes and then resume the test. As you understand, this was no way to work at maximum efficiency. It’s just impossible to focus on the review itself when the benchmark calls for your attention every few minutes. It was even harder when the testing session ran simultaneously on several computers: aching legs, giddiness and so on…
Our revolution was a success, and our two demands – a log file and fully unattended operation – were taken into consideration in the development of the new version of FC-Test.
So this is the new incarnation of FC-Test:
The interface is left barren and low-key – as a regular work tool, it doesn’t need any embellishments. The first menu item worth a closer look is New:
Here you can create a new pattern (a description of a set of files) or a file-list. A file-list differs from a pattern in that it is tied to a specific logical disk: it describes a set of files that actually exist on the drive, while a pattern… Well, the screenshots below speak for themselves:
As you may guess, the left screenshot shows a pattern file opened in Notepad, while the right one contains a file-list. The pattern just says it includes 100 files of 1MB each, but the file-list not only identifies the precise location of the files (the logical disk D), it also lists each file by name. If you create a file-list by scanning a specific folder on the drive, all files in this folder, including files in its subfolders, will be included in the file-list. The relative path to every file is stored in the file-list as well.
The file-list also stores the size of each file, so the total amount of data it describes can be calculated quickly.
The Open menu item was also redesigned:
Now it offers a file filter for the supported file types (i.e. files with .ptn and .fls extensions) and contains quick-access icons for the likely locations of the desired files.
The menu also contains an item that is new to FC-Test: Run Script.
After choosing this item, you can select a file containing an algorithm for the benchmark to run and execute it. Scripts are stored in .SPT files. If you use relative file paths, the directory containing the script is treated as the current directory. The log file is also created in this directory (it has the same name as the script plus the .log extension), and the script’s progress is recorded in it in CSV format.
And that’s how a script may look from the inside:
As you see, we are dealing with a very simple command processor.
Each command takes one line of the script; blanks at the beginning and the end of the line are ignored. Operands are separated by blanks. If an operand contains a blank, it should be enclosed in single or double quotes. An operand containing single quotes should be enclosed in double ones and vice versa. Empty lines and lines that start with the “#” symbol are ignored.
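FC-Test’s actual parser is internal to the tool, but as a sketch of the rules just described, a Python equivalent might look like this (the function name is ours, not part of FC-Test):

```python
def parse_line(line):
    """Split one script line into operands according to the rules above:
    blanks separate operands; an operand containing a blank is wrapped
    in single or double quotes; blank lines and lines starting with '#'
    are ignored (None is returned for such lines)."""
    line = line.strip()
    if not line or line.startswith('#'):
        return None
    tokens, i, n = [], 0, len(line)
    while i < n:
        if line[i].isspace():
            i += 1                          # skip blanks between operands
        elif line[i] in ('"', "'"):
            quote = line[i]
            j = line.index(quote, i + 1)    # a closing quote is required
            tokens.append(line[i + 1:j])
            i = j + 1
        else:
            j = i
            while j < n and not line[j].isspace():
                j += 1
            tokens.append(line[i:j])
            i = j
    return tokens
```

For example, `parse_line("copy mylist 'target dir' tlist")` yields the four operands `copy`, `mylist`, `target dir` and `tlist`.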
First of all, let me enumerate all supported commands and then we will try to figure out how we can combine them.
Comment puts a comment into the log file.
Usage: comment <quoted text>
Compress reads the files from the list and creates a new file whose size is <compression ratio> times smaller than the total size of the files in the list. This command can create a temporary list containing the name of the newly created file.
Usage: compress <list> <target file> <compression ratio> [<target list>]
Copy copies the files specified in the list into the specified directory. It can create a temporary list of the copied files.
Usage: copy <list> <target dir> [<target list>]
Create creates files in the specified directory according to the pattern. This command can create a temporary list of created files.
Usage: create <pattern> <target dir> [<target list>]
Clean deletes the files listed in a temporary file-list and removes the list from memory. This command doesn’t work with lists loaded from a file or saved to a file.
Usage: clean <file-list>
Decompress reads data from the source file and creates new files in the specified directory according to the pattern. The amount of data read is <compression ratio> times smaller than the total size of the files in the pattern. The source file should be large enough for the operation to complete successfully. This command can create a temporary list of the created files.
Usage: decompress <pattern> <source file> <target dir> <compression ratio> [<target list>]
Pause stops the script for the specified number of seconds.
Usage: pause <seconds>
Read reads files from the list.
Usage: read <file list>
Reboot reboots the computer. After the restart, the script continues from the next line.
Usage: reboot
Recycle removes a temporary file-list from memory. It doesn’t work with lists loaded from a file or saved to a file. The files named in the list are not deleted.
Usage: recycle <file-list>
Save saves a list or pattern under the specified name. After that, the temporary list is no longer temporary, and its old name is freed for reuse.
Usage: save <list | pattern> <target file>
System executes a command or application with the given arguments and waits until the execution is complete.
Usage: system <command> [<arguments>]
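Putting these commands together, a hypothetical script for measuring the write, read and copy speeds of one file set might look like this (all pattern, directory and list names below are made up for illustration; they are not shipped with FC-Test):

```
# Hypothetical FC-Test script: write, read and copy one file set
comment "Write test: 100 x 1MB files"
create 1mb.ptn D:\FCTEST writelist
reboot
pause 60
comment "Read test"
read writelist
reboot
pause 60
comment "Copy test"
copy writelist D:\FCCOPY copylist
clean copylist
clean writelist
```

The reboot/pause pairs restart the system between measurements and give the operating system time to finish loading before the next command runs.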
As you see, the set of commands supported by the FC Test has expanded.
We added a few commands for automating the test process (Save, Recycle, Reboot) to the existing commands like Create, Read and Copy. The Pause command was required to improve the repeatability of the test results. It serves to stop the test for a while after a system reboot, so that the operating system can finish loading. Of course, this takes a different amount of time on different computers, so we determined it experimentally by measuring the interval between an FC-Test reboot (FC-Test starts up automatically after a system restart in batch mode) and the end of the OS’s accesses to the hard disk drive. And, of course, we doubled this time, just in case. :)
The System command expands the scope of the benchmark practically infinitely. We can launch any external application with this command, thus enhancing the capabilities of FC-Test as our needs require. Right now, we use it for formatting hard disks.
The Compress and Decompress commands allow testing the speed of file compression/decompression. In fact, the compression/decompression process is only simulated – the processor doesn’t perform any calculations, as our goal is to create a particular load on the disk subsystem.
So how does the load on the disk subsystem during archiving differ from the load during copying? Asymmetry!
When we’re copying files, the hard disk writes and reads the same amount of data. But when we’re archiving, we write less data than we read (if the data really get compressed). The ratio of the amount of the original data to the resulting amount is called the compression ratio.
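In other words, for a compression ratio k, the Compress command reads the whole file set but writes only 1/k of that amount. A small sketch of this arithmetic (our own illustration, not FC-Test code):

```python
def compress_io(fileset_mb, ratio):
    """Data read and written when simulating archiving: the whole
    file set is read, and an output file <ratio> times smaller
    than the input is written. Returns (read_mb, written_mb)."""
    return fileset_mb, fileset_mb / ratio

def copy_io(fileset_mb):
    """Copying is symmetrical: as much data is written as is read."""
    return fileset_mb, fileset_mb

# Archiving a 100MB file set at ratio 2: read 100MB, write 50MB.
# Copying the same file set: read 100MB, write 100MB.
```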
We suppose that a hard disk drive’s performance in these “asymmetrical” modes may differ from its performance in the copy tests. How big is this difference and why does it occur? We’ll discuss that later…
Right now, let me only say that the Decompress command simulates the decompression process: the hard disk reads one large file and writes many small files…
One of the most important commands – Comment – helps you make sense of a report file even a year after the test was run. One of its useful properties is adding a “carriage return” to the CSV file, resulting in the following formatting:
This is the result of testing a Secure Digital card – we measured the write and read speeds for three file sets: one 100MB file, then 10 files of 10MB each, and lastly 100 files of 1MB each.
In the test report, you can see a formatted table with results where each line contains the write and read speeds for each file set.
Thus, my dream of a test that writes reviews by itself, fully automatically, seems closer to reality. We only have to teach the benchmark to draw diagrams.
Now let’s think about what types of equipment we can use FC-Test with.
Well, we have long been using FC-Test to test the performance of hard disks. By the way, this is the only test in which the speeds of different hard disk drives differ by more than the measurement error (we’ll talk about the measurement error shortly). In case you’ve forgotten, we use five file sets in our HDD reviews:
We measure the speed of writing each file set on the disk, the speed of reading these files from the disk, the speed of copying the files within one logical volume (32GB size) and the speed of copying the same files to another 32GB logical volume.
We reboot the system after each speed measurement to avoid interference on the side of the OS, which caches the files in the RAM.
If we deal with a drive of less than 64GB capacity, we just partition it into two logical volumes of equal size.
We test RAID controllers in the same way as hard disk drives. In fact, we are testing the speed of the logical disks we create on a RAID array, made out of several HDDs.
Flash drives generally have much smaller capacities than hard disk drives, so there’s no sense in measuring their speed with the large patterns (ISO, Windows and Programs). Moreover, there’s no need to measure the speed of copying within a flash drive, as users don’t do that often.
Thus, we limit ourselves to tests of writing files to the drive and reading them from it. As for the file sizes, we use the following scheme. There are three file sets for flash drives below 128MB capacity:
As you understand, the FC Test can be used on USB drives, CompactFlash cards and so on.
And why can’t we use FC-Test to estimate the performance of optical drives and optical media? Of course, you don’t write much to a CD-ROM disc :), but you may try with CD-RW discs (in the UDF format) if necessary. We won’t do that, though, as there are numerous benchmarks for measuring the speed of burning CD-R/RW media.
Instead, we will read from the discs! You may know the CD WinBench test – its point is reading certain file sets from a disc. Each file set – as far as I remember, they are the files of various games – is stored in a separate folder and is read from the disc by the test.
The only drawback of this test is its age. It is old and uses old files (files used to be smaller, you know). But we have FC-Test now! It can easily supplement CD WinBench and, importantly, give you the option of changing the file sets.
How do you create your own test for CD-ROM drives?
Just take a disc with files on it, for example a Windows 2000 Professional CD. Insert it into the drive in question and launch FC-Test. After that, choose the New menu item, say you’re interested in creating a file-list, and point to the CD (or to any folder on it).
In a few seconds, we have a list of the files inside the requested folder.
Note that if the selected folder contains subfolders, the files in these subfolders are also included into the file-list. The file-list also keeps the full paths to the files as you can see in the screenshot.
Then click the Read Files button: FC-Test starts reading all the files specified in the file-list from the media and calculates the average read speed.
With the particular disc I took, reading all the files (354.71MB in total) took 94.3 seconds. The average read speed equals 3.76MB/s, or 25.077x in the conventional “X” rating of CD-ROM drives (where 1x = 150KB/s).
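Translating the measured figures into the “X” rating is simple arithmetic, taking 1x = 150KB/s = 0.15MB/s (a quick sketch of our own, with a hypothetical function name):

```python
def cd_speed(total_mb, seconds, base_mb_s=0.15):
    """Average read speed in MB/s and in the CD 'X' rating,
    where 1x = 150KB/s = 0.15MB/s."""
    mb_s = total_mb / seconds
    return mb_s, mb_s / base_mb_s

mb_s, x = cd_speed(354.71, 94.3)   # the numbers from the test above
# mb_s ≈ 3.76 MB/s, x ≈ 25.077
```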
Why is it more correct to measure the speed of reading files? Because CDs are never read linearly (that only happens when you’re grabbing audio tracks). In real applications, files are requested from CDs, so it’s important to measure not just the linear speed but also the file reading speed. As you may guess, it depends on the drive’s access time as well as on its cache buffer algorithms.
That’s possible, too. We can attach computers to the network, share a disk on the “server” and map it on the “client” computer as a network drive. Then we can check the speed of working with the network drive. Just in case, we should make the pauses after system restarts a bit longer. If you want to test Gigabit Ethernet cards, you must make sure that the server’s disk subsystem can provide a stream of 125MB/s or higher. Only then will the network card really be the bottleneck.
In a nutshell, the repeatability of the results exceeded our brightest hopes!
Of course, it greatly depends on the “stability” of the tested object. For example, the repeatability is close to ideal with flash drives. With hard disk drives, we had exactly the same dispersion as in our examination of the PCMark04 benchmarking suite (see our article called PCMark04: Benchmark for Hard Disk Drives?). Thus, copying files is no less exact than replaying traces of the disk subsystem’s activity.
Of course, the test shouldn’t be run just once if you want correct results. As for the number of repetitions, that’s your own choice.
Right now, the new version, FC-Test 1.0, is in beta testing and we are not yet offering it to the public. Some minor changes are still to be made to the interface and to the implementation of the log file. We have also planned a handful of features that will help you use FC-Test more efficiently.
For now, we are willing to offer the current version of the test to our colleagues from other hardware test laboratories. You can request it from me by e-mail.