Besides the matrix color depth, there’s another property that affects the color reproduction of an LCD monitor: gamma compensation. When talking about brightness and contrast above, I simplified the matter by saying that the correlation between the input signal and the pixel’s brightness is linear (L = B + x*C), but it is not actually so. It is a power dependence and looks like this: L = B + x^gamma*C, where gamma is some constant.
Gamma compensation can be said to have originated and to persist due to historical rather than technical reasons. In fact, the natural dependence between the input and output signal of a cathode-ray tube is close to a power function with an exponent of 2.5. Operating systems for the PC platform at first had no color management systems at all, so gamma=2.5 is traditionally considered the standard for the Wintel platform.
The Apple Macintosh, however, was traditionally employed for printing, processing photographs, performing color correction and so on, and the value of gamma was adjusted on that platform – reduced to 1.8. Of course, for the user to see the original colors of the image on the screen, it should first be processed with the function x = I^(1/gamma), where x is the compensated brightness, I is the original brightness of the picture and gamma is the gamma of the system this picture is being prepared for.
Thus, the picture viewed by the user will be described by the following formula: L = B + (I^(1/gamma))^gamma*C = B + I*C. That is, the user will see the original I, only corrected with respect to the contrast C and brightness B of the monitor. Since the value of gamma varied between the platforms, pictures had to be compensated in different ways, and a picture prepared on a Mac would look too dark on the PC and vice versa – a PC-oriented image would look too light on the Mac. That’s why about a decade ago Microsoft, Hewlett-Packard et al. came up with the sRGB standard, “A Standard Default Color Space for the Internet”, which set the value of gamma to 2.2 (more precisely, the gamma curve in the sRGB specification is a combination of two independent functions, but it can be described with an acceptable degree of precision by a single power function with gamma=2.2).
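The round trip – pre-compensating the image with the exponent 1/gamma and then passing it through the monitor’s power-law response – can be sketched in a few lines of Python. The value gamma = 2.2 and the sample brightness and contrast settings here are illustrative assumptions, not anything taken from a real driver:

```python
# Sketch of gamma compensation, assuming gamma = 2.2 and arbitrary
# brightness (B) and contrast (C) values chosen for illustration.
GAMMA = 2.2

def compensate(i, gamma=GAMMA):
    """Pre-process the original brightness I for a display with this gamma."""
    return i ** (1.0 / gamma)

def display(x, b=0.05, c=0.9, gamma=GAMMA):
    """The monitor's response: L = B + x^gamma * C."""
    return b + (x ** gamma) * c

# A compensated image passed through the display yields B + I*C,
# i.e. the original brightness scaled by contrast and offset by brightness:
for i in (0.1, 0.5, 0.9):
    assert abs(display(compensate(i)) - (0.05 + i * 0.9)) < 1e-9
```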
Thus, sRGB-compliant images look equally good (or, as skeptics put it, equally bad) on Macs and on older PCs (with gamma=2.5). Right now the sRGB specification is a de jure as well as a de facto standard, and modern monitors are generally calibrated for gamma=2.2.
You may be wondering why this gamma compensation is necessary from the technical point of view. Its proponents say that it increases the precision of reproduction of darks (at the expense of lights, of course): the human eye has a roughly logarithmic sensitivity characteristic, i.e. it more easily perceives a change of a dark tone than a change of a light tone of the same magnitude, so precision in the darks can be improved at the expense of the lights. But a theoretical calculation shows that with gamma equal to 2.2, a precision equivalent to 9-bit encoding is only achieved for 7% of the darkest color tones, and equivalent to 10-bit encoding for 3% (there’s no sense talking about 11-bit precision of reproduction of darks, since such colors practically don’t differ from pure black); meanwhile the color precision degrades for 75% of the lights.
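Figures of this kind are easy to check numerically. The sketch below is my own back-of-the-envelope verification, not a formula from any standard: for 8-bit values encoded with gamma = 2.2, it measures what fraction of the linear tonal range is quantized more finely than a 9-bit linear encoding would manage, and what fraction is quantized more coarsely than plain 8-bit linear encoding:

```python
GAMMA = 2.2
LEVELS = 256

# Linear-light position of every 8-bit gamma-encoded code value.
linear = [(v / (LEVELS - 1)) ** GAMMA for v in range(LEVELS)]

# Width of each quantization bin, measured in linear light.
widths = [linear[v + 1] - linear[v] for v in range(LEVELS - 1)]

# Darks: bins finer than one 9-bit linear step (1/511).
frac_9bit = sum(w for w in widths if w < 1 / 511)
# Lights: bins coarser than one 8-bit linear step (1/255).
frac_degraded = sum(w for w in widths if w > 1 / 255)

print(f"finer than 9-bit linear: {frac_9bit:.0%} of the tonal range")
print(f"coarser than 8-bit linear: {frac_degraded:.0%} of the tonal range")
```

Running this gives numbers in the same ballpark as the article’s: only a few percent of the darkest part of the range gains 9-bit-equivalent precision, while roughly three quarters of the lights lose precision compared to a linear encoding.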
This can be compared to the losses inflicted by saving an image as a medium-quality JPEG (JPEG also brings in geometrical artifacts, but that is irrelevant to my topic). Looks like everything’s all right, yes? We’ve improved the precision of darks at the expense of lights – we’ve got what we wanted?