Xbit Labs Presents: LCD Monitor Testing Methodology In-Depth

This article is an explanation of the methods we use to test LCD monitors. It provides a list of tested parameters with remarks on their meaning, and a description of the measuring instruments. We also offer references to all our previous articles discussing the testing approach we use, so that you have all the clues to our extensive testing techniques in one place.

by Oleg Artamonov
06/10/2007 | 10:28 PM


Our methods are regularly improved and updated, so some of them may not have been used in our older LCD monitor reviews: in such cases the method was developed after the review had been published. The list of changes to this article can be found at its end.

Additional Links

This article provides a basic description of LCD monitor parameters to give you some bearings as you are reading our reviews. If you want to have a deeper knowledge of an issue, you can read the following full-size publications:

If you want to view test results of a particular LCD monitor, you may try to find it in the detailed list of tested monitors that will be coming out shortly.

Testbed

We connect the tested monitor to a PC with a Sapphire Radeon X1650 graphics card (for 20” and larger monitors – this card is chosen because 30” models require Dual-Link DVI) or a Sapphire Radeon X600 (for 19” models). We use a digital connection (DVI-D) unless the monitor offers an analog input only.

Ergonomics

When applied to LCD monitors, ergonomics means the options you have to adjust the position of the monitor case. These are determined by the employed stand and may include tilt (it is usually some 5 degrees forward and 10-15 degrees backward), rotation around the vertical axis, screen height adjustment, and portrait mode.

The rotation around the vertical axis can be implemented in two ways: with a rotating circle in the sole of the base (e.g. in Samsung monitors) or with a joint in the vertical pole of the stand (e.g. in Dell monitors). The latter implementation is somewhat better since the base of the stand remains motionless.

The screen height adjustment is usually implemented through changing the length of the vertical pole of the stand although there exist more sophisticated designs like two- or three-joint folding stands in Samsung’s and some other brands’ monitors. Anyway, the height adjustment range we mention in our reviews is the distance from the surface of the desk to the bottom of the matrix as shown in the photograph.

It’s good if the stand can be fixed in the bottommost position. If it can’t, and the monitor just rests on its stand, the stand will extend to its full length with a terrible rumble as soon as you try to lift the monitor off the desk or out of its box. The fixation can be implemented as a button (Dell, HP) or a wire pin inserted into a hole in the stand (Samsung, ViewSonic). It is only intended for carrying the monitor: the screen cannot be fixed in any position other than the bottommost one.

Portrait mode is available on many monitors, yet you should be aware that widespread TN matrixes are practically unsuitable for it. Their narrow vertical viewing angles become horizontal viewing angles in this mode, which is downright unacceptable in terms of image quality.

Most monitors also allow you to replace the native stand with standard VESA-compatible mounts, which come in a lot of varieties from simple wall mounts to intricate stands with multiple joints, adjustments and degrees of freedom. The mount is fastened to four threaded holes on the monitor’s back panel, placed at the corners of a square with a side of 75 or 100 millimeters.

Another ergonomics-related aspect is how easy it is to control the monitor. This covers the onscreen menu design, the position of the control buttons, and quick access to certain functions. Quick access means that you can access, say, the brightness setting with a single press of a button, without entering the menu proper. A PC monitor is usually set up once and for all. After that, the user can occasionally adjust brightness and contrast (for example, when switching from work to games or to compensate for any changes in the ambient lighting), and it’s nice to have quick access to such frequently used features.

And finally, some monitors offer preset modes like LG’s f-Engine, Samsung’s MagicBright and NEC’s DV Mode. These are switched through with a single button. There is usually one ordinary user-defined mode and a number of factory-set modes which cannot be changed. The modes may differ not only in brightness and contrast but also in other settings like color temperature or color saturation. It is better to have only brightness/contrast presets because monitors that change more settings often have distorted colors in such predefined modes.

These are the modes offered by the Samsung SyncMaster 215TW. In the Custom mode you can select any values you like and then quickly switch to a factory preset with a press of a button on the front panel. This switching does not reset your own settings – you’ll have them again as soon as you return to the Custom mode.

Color Reproduction

Color Gamut

The term “color gamut” describes how many of the colors the human eye can perceive can be reproduced by the monitor. Our eye can recognize the so-called optical spectrum – electromagnetic radiation with wavelengths between 380 and 700 nm. Our brain perceives it as a range of spectral colors from violet to red. Other colors are created as a mix of radiation of different wavelengths, while a continuous spectrum (one containing waves of all lengths) is perceived as white light.

It is technically impossible to design a device that could synthesize an arbitrary spectrum in real time to create the desired colors, but luckily, we don’t need an exact replica of the original spectrum in order to see this or that color. Our eye has only three types of color receptors (red, green and blue) and our brain uses their signals to determine the color. It means that all we need to do is stimulate these receptors in the proper proportion, which can be done with three sources of monochromatic light (red, green and blue) of controlled intensity.

Unfortunately, such a device can only almost cover the color gamut our eye can perceive: there will remain smaller areas of the visible color range that the device cannot reproduce. Worse still, the less monochromatic the spectrum produced by each of the device’s light sources, the larger the range of colors it can never reproduce.

The so-called CIE diagram is usually used to illustrate color gamut. The horseshoe-shaped area on it represents the entire range of colors available to the human eye. Pure spectral colors lie along the edge of this “horseshoe”, while mixed colors, all the way up to white, lie closer to its center. If we place the dots corresponding to the color coordinates of the three light sources of our device (for an LCD monitor, these are the coordinates of the red, green and blue subpixels of the matrix’s RGB filter), they will form a triangle. This triangle indicates the range of colors the device in question can reproduce, and the size of this area is called its color gamut.

The size of this area has no connection to the monitor’s color depth: 18-bit (262 thousand colors), 24-bit (16.7 million colors), etc. These numbers only show how precisely we can define a color inside the triangle, but they don’t allow us to get beyond its boundaries. To expand the triangle we would need to somehow change the spectrum of the monitor’s backlight, thus shifting the coordinates of the triangle’s vertices. Contemporary LCD monitors use different types of backlight to achieve this: regular CCFL bulbs provide the smallest color gamut, bulbs with improved phosphors provide a considerably larger one, and the maximum color gamut is achieved when the backlight bulbs are replaced with light-emitting diodes.

There are a few standard color gamuts. The most frequently mentioned ones are sRGB, AdobeRGB and NTSC (quantitatively, sRGB < AdobeRGB < NTSC). sRGB is the standard color gamut for CRT and LCD monitors, and most models out there feature it. To be more exact, contemporary LCD panels usually exceed sRGB a little in the green part of the spectrum, as you can see in the picture above, but overall the difference is not dramatic. The sRGB gamut equals 72% of the NTSC gamut, and the actual gamut of most LCD monitors is around 75% NTSC. Monitors with better backlight bulbs can reach almost 97% NTSC, while monitors with LED backlight already hit 114% NTSC, and this number will most likely continue to grow.
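To give an idea of where figures like “72% of NTSC” come from, here is a small Python sketch that compares the triangle areas of two gamuts on the CIE xy diagram. The primary coordinates below are the standard published values for sRGB and NTSC 1953, not measurements of a particular monitor, and the exact percentage also depends on the color space in which the areas are compared.

```python
# Estimate "percent of NTSC" coverage by comparing triangle areas on the CIE xy diagram.
# The primaries are standard published chromaticity coordinates; a real monitor's
# primaries would be measured with a colorimeter.

def triangle_area(p1, p2, p3):
    """Area of a triangle given three (x, y) chromaticity points (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

srgb = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # sRGB R, G, B primaries
ntsc = [(0.670, 0.330), (0.210, 0.710), (0.140, 0.080)]   # NTSC 1953 primaries

coverage = triangle_area(*srgb) / triangle_area(*ntsc)
print(f"sRGB gamut is about {coverage:.0%} of NTSC by area")   # roughly 71%, close to the quoted 72%
```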

How can we benefit from a larger color gamut? The monitor can reproduce purer primary colors that models with a smaller gamut can never offer: “pure” red has less yellow in it, “pure” blue has less green, etc. It would be incorrect, however, to claim that an increase in color gamut improves color reproduction accuracy. Firstly, the latter is determined by multiple parameters, and a large color gamut is primarily a nice addition to them, in no way a decisive factor. Secondly, images (or, to be more exact, image files) usually contain no information about the color gamut they have been created for, which means they are assumed by default to be optimized for sRGB as the most widespread gamut. But since “pure” green differs between monitors with sRGB and NTSC gamuts, the latter will reproduce images optimized for the former incorrectly, and vice versa.

When it comes to professional applications, an increased color gamut may cause other, less obvious issues, which we’d better leave for professionals to handle. For home users a larger color gamut brings clearer, more natural-looking colors on the screen. Although I have to stress once again that the gamut itself doesn’t guarantee precise color reproduction – it is a nice addition to the other monitor characteristics, which should be on a high level as well.

Gradients

The lack of banding in smooth color gradients is a parameter describing the monitor’s quality of color reproduction. This is evaluated subjectively. A few horizontal gradients (from black to red, from black to blue, etc) are displayed on the monitor and the reviewer makes sure they don’t look striped at any values of brightness and contrast.

The banding is indicative of inaccurate processing of image data by the monitor’s electronics. Some say it is an indication of the monitor’s using an 18-bit matrix, but this is not so. The banding disappears on some monitors at the factory settings, so it is important to check it out at different values of brightness and contrast.

Gamma Curves

Gamma curves are graphs that show how the brightness of a pixel on the screen depends on the signal coming from the graphics card. This is a power-law dependence (the brightness is proportional to the signal raised to a fixed power) and the exponent is referred to as gamma. According to the sRGB standard, modern home monitors must have a gamma of 2.2.
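As a rough illustration of this relation, the following Python sketch builds the ideal gamma 2.2 curve and estimates the effective gamma of a display from a set of normalized luminance readings. The readings here are simulated rather than taken from a real monitor.

```python
import numpy as np

# Normalized input levels (signal from the graphics card) and the ideal gamma 2.2 curve.
signal = np.linspace(0.05, 1.0, 20)
ideal = signal ** 2.2

# Hypothetical calibrator readings, normalized so that black = 0 and white = 1
# (here simulated as a display with gamma 1.9 plus a little measurement noise).
rng = np.random.default_rng(0)
measured = signal ** 1.9 + rng.normal(0, 0.002, signal.size)

# Estimate the effective gamma by a least-squares fit in log-log coordinates:
# log(brightness) = gamma * log(signal).
gamma_est = np.polyfit(np.log(signal), np.log(np.clip(measured, 1e-4, None)), 1)[0]
print(f"estimated gamma: {gamma_est:.2f}")
# A value below 2.2 means the measured curve runs above the ideal one,
# i.e. the picture looks whitish; a value above 2.2 means a darker, higher-contrast look.
```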

The measurements are made with a ColorVision Spyder calibrator. The gamma curves are measured individually for red, blue and green and are drawn in the diagram with the corresponding color. The black curve is the ideal curve for gamma 2.2.

Below is an almost ideal result: the curves are all rather smooth and lie in a dense group. You don’t often see such an idyllic picture, though.

For example, in this diagram the curves lie close to each other but go much lower than the ideal theoretical curve. It means that the halftones in the sagging section of the curves will look much darker on the screen than they should be (the eye will perceive this as a higher-contrast image). Conversely, when the real curves go higher than the theoretical one, you’ve got a whitish picture. When the curves don’t go close to each other, it’s a problem, too.

An even more critical problem you can identify by means of the gamma curve diagram is a loss of details in lights or darks. It’s when the monitor reproduces all light halftones as pure white or all dark halftones as pure black, respectively. You can identify this in the diagram if the gamma curves coincide with the X-axis or have a characteristic bend as shown in the picture below (in the top right part of the diagram):

Such image defects usually occur when the brightness and contrast settings are reduced (loss of darks) or increased (loss of lights) beyond a certain value.

Color Temperature

Color temperature is measured with the Spyder calibrator at the default brightness and contrast and on four levels of gray, from a dark gray to pure white.

We have to do this because many LCD monitors have very different temperatures of white and gray due to the inaccurate shapes of their gamma curves. It means that if you set your monitor up so that white looks really white on its screen, you suddenly see that gray looks bluish. You try to set the monitor up again, this time basing the setup on gray, but now white has acquired a red-yellow hue. Alas, it is impossible to correct this without a hardware calibrator like our Spyder.

The gamma curves do not give you enough information about color temperature (the curves for different colors are normalized so that they can be displayed in one diagram, making their bottom left and top right points coincide), so we measure the color temperature separately and publish the results as a small table: its rows are the levels of gray we measure the temperature of, and its columns are the color temperature modes offered by the monitor.

Two things are important here: the average temperature for each mode (e.g. if the monitor is set to yield a temperature of 8000K, you are likely to think its image too cold) and the above-mentioned difference between the temperatures of different levels of gray. For the latter parameter, a difference of 100K or lower is ideal, a difference of up to 1000K is acceptable, and a difference of over 1000K is bad.
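The following Python sketch shows how one row of such a table could be summarized using the thresholds above; the temperature readings in it are hypothetical.

```python
# Hypothetical color temperature readings (in kelvins) for several gray levels
# in one of a monitor's color temperature modes.
readings = {"dark gray": 6700, "gray": 6550, "light gray": 6400, "white": 6250}

spread = max(readings.values()) - min(readings.values())
average = sum(readings.values()) / len(readings)

if spread <= 100:
    verdict = "ideal"        # white and gray look the same tone
elif spread <= 1000:
    verdict = "acceptable"
else:
    verdict = "bad"          # gray visibly shifts toward blue or yellow relative to white

print(f"average temperature: {average:.0f}K, spread: {spread}K ({verdict})")
```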

Brightness Uniformity

Brightness uniformity is a parameter specific to LCD monitors – it is considerably less important for CRTs. The problem is that when the entire screen is filled with the same color, its brightness may differ from one spot to another; for example, the corners may be darker than the center. There may be multiple reasons for that, ranging from uneven backlighting to unevenness of the matrix itself (for instance, small warps that occurred when the matrix was installed into the case).

The design of LCD matrixes implies that this brightness unevenness may differ between white and black. For example, in TN panels white corresponds to zero voltage on the liquid crystal cells, when the crystals line up along special grooves on the inner surfaces of the panel’s glass plates. For black, a voltage is applied to the cells and the crystals twist at a certain angle according to the electric field. So the crystal orientation is governed by different factors in the two cases, which results in different patterns of unevenness for white and black.

To measure the brightness uniformity, we use a sensitive photo-sensor to take screen brightness readings in 3cm increments for 19-inch monitors and at proportionally larger increments for larger models. For example, here is the measurement grid for a 20-inch monitor:

The measurements are taken in two modes: with the screen all white and all black. After that, the deviation in percent is calculated for each spot in both data arrays: for white, the deviation is counted down from the maximum measured value; for black, up from the minimum. The obtained deviations are used to build two diagrams showing the brightness distribution over the entire screen surface, which are then overlaid on a schematic image of the monitor to make the picture more illustrative. This should help you get a better idea of what the unevenness we talked about looks like in reality on your display.

Here it is important to understand that we do not try to emulate the exact look of the monitor in these diagrams: they are just diagrams with reference colors, not photos of the monitor. Some reviewers use different colors (green, yellow, orange, red, etc.) for different deviation levels (in 5% or 10% increments), but we believe this makes things hard to perceive, because you have to remember throughout the article that yellow stands for darker areas of the screen while red stands for lighter ones. That is why our reviews use the representation closest to natural: lighter areas are colored lighter and darker areas darker. However, the brightness scale is altered to make the images more illustrative, i.e. if the brightness of two dots on the diagram differs by a factor of 3, it doesn’t mean their real brightness also differs by a factor of 3 – please check the scale showing the actual brightness deviation percentages and the colors we use.

So, the diagrams above serve to estimate the brightness uniformity: how the brightness is distributed over the screen, which areas are darker – the corners or the center, and so on. For the sake of quantitative comparison between different monitor models we always provide percentage values in the text of the review: the average deviation and the maximum deviation. First we calculate the arithmetic mean of the screen brightness and then find the average and maximum deviations from it.

Speaking of particular numbers, an average deviation within 5% is considered a good result, within 7-8% acceptable, and over 8% poor. For the maximum deviation the range is more lenient: up to 15% is good, up to 20% acceptable, and over 20% poor.
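As an illustration of this calculation, here is a small Python sketch that derives the average and maximum deviations from a grid of luminance readings; the readings are hypothetical and a real measurement grid contains more points.

```python
import numpy as np

# Hypothetical luminance readings (cd/m²) taken on a grid over an all-white screen.
readings = np.array([
    [228, 234, 238, 236, 230],
    [233, 240, 245, 241, 235],
    [230, 238, 242, 239, 231],
    [224, 231, 236, 233, 226],
])

mean = readings.mean()
deviations = np.abs(readings - mean) / mean * 100   # percent deviation from the mean

print(f"average deviation: {deviations.mean():.1f}%")   # within 5% is good, 7-8% acceptable
print(f"maximum deviation: {deviations.max():.1f}%")    # up to 15% is good, 20% acceptable
```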

I would also like to say that the degree of brightness deviation may vary greatly – more than any other parameter – between particular samples of the same monitor. Unfortunately, it’s not easy to check this parameter out when shopping. The folks at the store are unlikely to turn out the lights and give you the opportunity to scrutinize the monitor in such conditions.

Response Time

Response time is perhaps the most famous parameter of LCD monitors, yet it is not a simple one.

There are currently two methods of measuring it: ISO 13406-2 and the so-called GtG method. The first is to measure the total time it takes a pixel to switch from black to white and back to black; the problem is that for many matrix manufacturing technologies this particular transition is the fastest one, so this method yields the lowest possible value rather than a representative one.

The GtG method is a more honest one. It measures the average time it takes to perform transitions between all the possible pixel states (GtG stands for Gray to Gray, meaning all transitions between halftones). So, if the matrix yields a very fast black-white-black transition but is slow on halftone transitions, the fast transition will account for an insignificant part of the total and the GtG average will describe the matrix’s real speed truthfully.

The manufacturers use whichever method provides the best result for a particular monitor (e.g. a response time of 5 milliseconds ISO is specified for modern monitors with RTC-less TN matrixes, while monitors with RTC have a specified response of 4 milliseconds GtG, although the practical difference between such monitors amounts to 300-400% rather than the 25% the numbers seem to suggest). We measure response time according to the GtG method as the more objective and illustrative measure of the matrix’s speed.

To perform the measurement, a photo-sensor is attached to the monitor screen; it tracks any changes in the brightness of the pixels beneath it.

From a technical point of view, this photo-sensor consists of a Vishay BPW21R photodiode and a current-to-voltage converter based on a low-noise AD795JR operational amplifier from Analog Devices. The photo-sensor is linear in a brightness range of 0.5 to 700 candelas per sq. m and has its own response time of about 0.3 milliseconds, which allows testing any modern LCD monitor with a high degree of precision. To work with lower levels of brightness, the photo-sensor circuit includes an amplification switch that increases its sensitivity tenfold.

The sensor sends its signal to a Velleman PCSU1000 oscilloscope which records any changes in brightness of the pixels under the sensor over time with a resolution of hundredths of a millisecond. A special program processes the oscillograms and calculates the time it took to change the brightness, i.e. the response time.
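The sketch below illustrates the basic idea of such processing: given a recorded brightness trace, it finds the time between two threshold crossings. The 10% and 90% thresholds and the trace itself are assumptions made for the sake of the example, not the exact parameters of the program we use.

```python
import numpy as np

def transition_time(t, brightness, lo=0.1, hi=0.9):
    """Time for a rising trace to go from lo to hi of its full swing (assumed 10%-90% thresholds)."""
    b0, b1 = brightness[0], brightness[-1]
    start_level = b0 + lo * (b1 - b0)
    end_level = b0 + hi * (b1 - b0)
    start_idx = np.argmax(brightness >= start_level)   # first sample past the lower threshold
    end_idx = np.argmax(brightness >= end_level)       # first sample past the upper threshold
    return t[end_idx] - t[start_idx]

# Hypothetical oscillogram: brightness settling from 20 to about 180 cd/m².
t = np.linspace(0, 20e-3, 2000)                        # 20 ms window, 10 µs resolution
brightness = 20 + 160 * (1 - np.exp(-t / 3e-3))        # simple exponential settling

print(f"transition time: {transition_time(t, brightness) * 1e3:.1f} ms")
```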

The result of the measurements is shown in our articles as a 3D histogram: its two horizontal axes show the initial and the final state of the pixel, and the height of the corresponding column shows the time it took to switch between these two states.

In most cases it suffices to have the average of all the measured transitions (i.e. the response time value as measured according to the GtG method) – and we publish this average, too.
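As an illustration of how that average relates to an ISO-style figure, here is a small Python sketch with made-up transition times between five gray levels.

```python
import numpy as np

# Hypothetical transition times (ms) between five gray levels: rows are the initial
# state, columns the final state. The diagonal (no transition) is excluded.
levels = [0, 64, 128, 192, 255]
times = np.array([
    [0.0,   9.5, 12.1, 11.3,  2.0],   # from black
    [8.7,   0.0, 10.4, 12.6,  9.1],
    [11.9,  9.8,  0.0,  9.3, 10.7],
    [12.4, 11.1,  8.9,  0.0,  8.2],
    [2.5,  10.2, 11.6,  9.0,  0.0],   # from white
])

mask = ~np.eye(len(levels), dtype=bool)          # ignore "transitions" to the same level
gtg_average = times[mask].mean()
iso_like = times[0, -1] + times[-1, 0]           # black-to-white plus white-to-black

print(f"GtG average:   {gtg_average:.1f} ms")    # representative of real-world speed
print(f"ISO-style sum: {iso_like:.1f} ms")       # only the (often fastest) black-white-black cycle
```

Even though the black-white-black cycle in this made-up matrix takes under 5 ms, the slow halftone transitions drag the GtG average up to nearly 10 ms, which is exactly the effect described above.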

Talking about specific numbers, a monitor with a response time of less than 5 milliseconds GtG can be considered very fast today. A response time of up to 10 milliseconds GtG is fast, and a response of over 10 milliseconds GtG is rather slow. For comparison, older PVA and MVA matrixes without response time compensation would have a response time of over 15 milliseconds if measured according to the GtG method. Their specified response time, according to the ISO method, was 16 or 25 milliseconds, but as you remember, the ISO method measures the time it takes to perform a full black-white-black transition whereas the GtG method averages one-way transitions only.

Thus, monitors with a specified response time of 4 milliseconds GtG are fast whereas monitors with a specified response time of 5 milliseconds ISO are slow (as explained in the previous paragraph).

RTC Error

Everything has its price, unfortunately. While endowing modern matrixes with superb speed, the Response Time Compensation mechanism has one drawback. It works by sending a special overdrive impulse to the LCD cell (follow the link at the beginning of this article to read more about how RTC works). If the overdrive impulse is too strong, a new kind of image artifact appears that cannot occur on matrixes without RTC: a dark object moving on a light-gray background leaves a short white trail (in the traditional “ghosting” effect the trail is dark, not light).

We measure the value of the RTC miss in percent. For example, if the pixel brightness should have changed from 0 to 100, but was actually increased to 150 due to an RTC miss and then returned to 100, the value of the miss is 50%. The results are shown as a 3D histogram, just like the response time diagram, separately for each transition.
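In Python, this calculation looks as simple as the following sketch, which reuses the numbers from the example above:

```python
def rtc_error(start, target, peak):
    """RTC overshoot as a percentage of the intended brightness change."""
    intended_change = target - start
    overshoot = peak - target
    return abs(overshoot) / abs(intended_change) * 100

# The example from the text: brightness should rise from 0 to 100 but peaks at 150.
print(f"{rtc_error(0, 100, 150):.0f}% RTC error")   # prints 50%
```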

These numbers matter for two reasons. First, the grosser the miss, the more the peak pixel brightness differs from the desired level and, accordingly, the more conspicuous it is for the eye. Second, the grosser the miss, the longer it takes the pixel to fall back to the desired level, which also makes the artifact more conspicuous.

Of course, we publish the average value of the RTC errors. If it is below 5%, you are unlikely to notice any RTC artifacts. If within 5-10%, there are artifacts, but not very annoying. If above 10%, the artifacts are going to be visible to the eye.

However, the level of annoyance depends on your personal perception here, like with many other things.

Viewing Angles

We currently don’t measure viewing angles due to technical reasons, yet one note is necessary.

Like with response time, there are two methods of measuring the viewing angles of an LCD matrix: by a reduction of the contrast ratio to 10:1 or to 5:1. Obviously, the latter method yields bigger numbers due to its milder limiting condition, and it is generally used for TN matrixes, which have traditionally had narrow viewing angles. So, if you are comparing a TN-based monitor with specified viewing angles of 160 degrees against a monitor on a *VA or S-IPS matrix with specified viewing angles of 178 degrees, keep in mind that the angles are measured according to different methods. If measured according to the “honest” method with a contrast drop to 10:1, the viewing angles of TN matrixes shrink to 130-140 degrees.
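The small Python sketch below illustrates how strongly the resulting figure depends on the chosen contrast threshold; the contrast-versus-angle readings are hypothetical and the response is assumed symmetric about the screen normal.

```python
# Hypothetical contrast ratio measured at various angles from the screen normal.
angle_deg = [0, 10, 20, 30, 40, 50, 60, 70, 80, 89]
contrast = [600, 520, 390, 250, 140, 70, 30, 12, 6, 3]

def half_angle(threshold):
    """Largest measured angle at which the contrast ratio still meets the threshold."""
    return max(a for a, c in zip(angle_deg, contrast) if c >= threshold)

# Assuming symmetry, the total viewing angle is twice the one-sided angle.
for threshold in (10, 5):
    print(f"viewing angle at {threshold}:1 contrast: {2 * half_angle(threshold)} degrees")
# With these numbers the same panel yields 140 degrees at 10:1 but 160 degrees at 5:1.
```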

The poor vertical viewing angle of TN matrixes looks like this (the screen displays horizontal gradients from the TFTTest program):


View of the screen straight ahead


View of the screen slightly from beneath


View of the screen slightly from above

As you can see, when viewed from below (and the angle is not large – it roughly corresponds to a person sitting at a distance of 60cm from the monitor and moving their head some 10cm up or down from the central point), the top part of the image gets dark, as if the displayed gradients were shifted to the right and had a bigger dark part, which is not actually the case.

This is the reason why TN matrixes are poorly suited for portrait mode. The poor viewing angle from below becomes a poor viewing angle from the right or left when you turn the screen into the portrait orientation, which is downright unacceptable.

This strong distortion of the image as you change your angle of view is typical of TN matrixes only. S-IPS, MVA and PVA matrixes are much better from this aspect.

Brightness and Contrast Ratio

The monitor’s brightness and contrast ratio are measured with a ColorVision Spyder calibrator at three combinations of settings: 1) maximum brightness and contrast, 2) factory settings, and 3) a 100nit level of white. The last combination is not a recommendation – we merely specify the settings at which it is achieved. It serves as a common reference point so that we can compare different monitors and check how the monitor’s parameters change relative to the factory settings, which usually produce a roughly twice higher level of white. For practical purposes, a brightness of 100 nits corresponds to typical settings for working with text in a well-lit office room.

The calibrator measures the levels of black and white. The latter is what is referred to as the monitor’s brightness. The ratio of the two is the monitor’s contrast ratio. The results are listed in a table:

As for specific numbers, you need a brightness of 80-150 nits for home use. All modern monitors cover this range easily. A brightness of 200 nits and higher may be necessary if you play games or watch movies in a brightly lit room, up to direct sunlight.

As opposed to brightness, the contrast ratio cannot be too high. The higher it is, the deeper the monitor’s black looks, which is especially noticeable when you are watching a movie in semidarkness. Moreover, a high contrast ratio partially conceals any irregularities of the backlight. In our tests, a contrast ratio of 200:1 is acceptable, and 400:1 and higher is good.

We want to note that you should not compare the results of our measurements (using a calibrator) with the specified numbers. The calibrator is not actually meant for measuring contrast, so its error can be quite high. However, our results are quite sufficient for a comparative test.

And finally, dynamic contrast technology has become popular recently. When it is enabled, the monitor adjusts the backlight brightness depending on the current image, reducing it for dark scenes and increasing it for bright ones. The ordinary contrast ratio is the ratio of white to black at the same backlight intensity, while the dynamic contrast is the ratio of white at the maximum backlight intensity to black at the minimum backlight intensity. In other words, it equals the ordinary contrast ratio multiplied by the backlight adjustment range. We measure only static contrast in our tests, but some manufacturers declare only dynamic contrast for their newer models – so don’t be surprised to see TN-based monitors with a specified contrast ratio of 2000:1 and even higher.
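To sum up the arithmetic, here is a small Python sketch that computes both figures from hypothetical luminance measurements and a hypothetical backlight adjustment range:

```python
# Static contrast is the white/black luminance ratio at a fixed backlight setting;
# dynamic contrast additionally multiplies in the backlight adjustment range.
# All values below are hypothetical.

white_luminance = 250.0        # cd/m², white at a given backlight level
black_luminance = 0.31         # cd/m², black at the same backlight level
backlight_range = 8.0          # ratio of maximum to minimum backlight intensity

static_contrast = white_luminance / black_luminance
dynamic_contrast = static_contrast * backlight_range

print(f"static contrast:  {static_contrast:.0f}:1")    # about 806:1
print(f"dynamic contrast: {dynamic_contrast:.0f}:1")    # about 6452:1
```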