Anisotropic texture filtering yields much sharper textures at large distances from the object and at steep viewing angles, so scenes rendered with anisotropic filtering look more lifelike and closer to reality. Remarkably, this feature can be enabled in almost any 3D application, because it requires nothing from the game engine or other system components: all the additional work is done by the graphics processor.
One of the most pleasing traits of the NVIDIA GeForce3 / GeForce3 Titanium / GeForce4 Titanium was their excellent anisotropic filtering quality; one of the most discouraging was the extremely high performance penalty once this feature was enabled.
No wonder NVIDIA paid due attention to anisotropic filtering when developing the new CineFX architecture and the GeForce FX graphics chips, trying to reduce the performance penalty without sacrificing too much quality.
As we saw during the NVIDIA GeForce FX 5800 Ultra test session, the company coped with this task very well: GeForce FX chips support three anisotropic filtering modes offering different trade-offs between speed and quality. Even in the fastest mode, the performance drop is two to three times lower than it used to be, while the image quality remains more than acceptable.
This progress came from two directions: the chip designers introduced several improvements into the anisotropic filtering algorithm (see our NVIDIA GeForce FX 5800 Ultra Review for details), and the software developers prepared a number of driver optimizations. The general idea of the software optimization is to reduce the maximum anisotropy level for individual textures or polygons, which obviously increases performance. The logic is simple: why should the whole scene undergo full anisotropic filtering when some textures or polygons do not need it? One texture may be so blurred that no filtering will improve its clarity, while another polygon may be viewed at such a steep angle that you cannot make out anything on it anyway.
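To make the idea concrete, here is a minimal sketch of such a per-polygon heuristic. Everything in it is an assumption for illustration (the function name, the angle/detail inputs, the thresholds); NVIDIA has not disclosed its actual driver logic:

```python
import math

def effective_anisotropy(requested_max, view_angle_deg, texture_detail):
    """Pick a reduced anisotropy level for one polygon (hypothetical heuristic).

    requested_max   -- anisotropy level forced in the driver (e.g. 8)
    view_angle_deg  -- angle between the view ray and the polygon surface;
                       small angles mean the surface is seen nearly edge-on
    texture_detail  -- 0.0 (completely blurred) .. 1.0 (sharp, detailed)
    """
    # A blurred texture gains nothing from high anisotropy.
    if texture_detail < 0.1:
        return 1  # fall back to plain isotropic filtering
    # The anisotropy actually needed grows as the surface tilts away:
    # roughly the ratio of the long to the short axis of the texture footprint.
    needed = 1.0 / max(math.sin(math.radians(view_angle_deg)), 1e-6)
    # Round down to the nearest supported power-of-two level.
    level = 1
    while level * 2 <= min(needed, requested_max):
        level *= 2
    return level
```

A face seen head-on (90 degrees) gets level 1, a moderately tilted one an intermediate level, and only steeply tilted, detailed surfaces receive the full requested anisotropy.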
The implementation of anisotropic filtering in the NVIDIA GeForce FX 5900 Ultra is interesting first of all from the "driver" point of view, because its hardware algorithms were borrowed from the NV30 unchanged, so any performance improvements can only come from software optimizations.
So, let's start with tri-linear filtering. The screenshots below were taken in a small test program that displays a cone with its base in the screen plane and its apex in the distance. The sides of the cone are covered with a "chess" (checkerboard) texture, and the MIP levels are highlighted:
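The core of such a test program is easy to reproduce. The sketch below (our own illustration, not the program used for the screenshots) builds a checkerboard texture and its MIP chain, tagging each level with a distinct tint so that the borders between MIP levels become visible when the texture is rendered:

```python
def checkerboard(size, cell):
    """Build a size x size grayscale checkerboard (texel values 0 or 255)."""
    return [[255 if ((x // cell + y // cell) % 2 == 0) else 0
             for x in range(size)] for y in range(size)]

def downsample(img):
    """Box-filter 2x2 downsample: each output texel averages four inputs."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) // 4
             for x in range(n)] for y in range(n)]

def mip_chain(base, tints):
    """Return the MIP chain as (tint, image) pairs, one tint per level,
    so each level can be drawn in its own color and the MIP-level
    borders stand out on screen."""
    levels, img = [], base
    for tint in tints:
        levels.append((tint, img))
        if len(img) == 1:
            break
        img = downsample(img)
    return levels
```

For an 8x8 checkerboard with 2-texel cells, the chain has levels of size 8, 4, 2, 1; by the third level the alternating cells average out to uniform gray, which is exactly why distant MIP levels look blurred.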
And now here are a few enlarged fragments with 8x anisotropic filtering forced in the driver:
You can clearly see that in the Balanced and Performance modes tri-linear filtering has degraded almost to pure bi-linear filtering: the smooth transitions between MIP levels have shrunk to narrow bands 2-3 pixels wide. Besides that, the level of detail in the Performance mode has been slightly lowered: the border of the first MIP level has moved closer to the viewer.
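The difference between full tri-linear filtering and this degraded variant comes down to how wide the blend zone between two MIP levels is. Here is a minimal sketch of that idea (the `band` parameter and function are our own illustration, not the driver's actual code):

```python
def trilinear_weight(lod, band=1.0):
    """Blend weight between MIP level floor(lod) and floor(lod)+1.

    band = 1.0 reproduces full tri-linear filtering: the two levels are
    blended smoothly across the whole distance between their borders.
    A small band (e.g. 0.1) confines the blend to a narrow strip just
    before the border, which is what the degraded, almost bi-linear
    filtering in the Balanced/Performance modes looks like.
    """
    frac = lod - int(lod)          # position between the two MIP borders
    start = 1.0 - band             # blending begins only here
    if frac <= start:
        return 0.0                 # pure bi-linear: nearer MIP level only
    return (frac - start) / band   # quick ramp to the next MIP level
```

With `band=1.0`, a pixel halfway between two MIP borders gets an even 50/50 mix; with `band=0.1`, the same pixel samples only the nearer level, and the whole transition is squeezed into the last tenth of the distance, producing the narrow visible bands on the screenshots.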
Since our tests of the NVIDIA GeForce FX 5800 Ultra with the 42.68 driver, tri-linear filtering has become noticeably worse. To prove the point, here are a few more screenshots taken from our NVIDIA GeForce FX 5800 Ultra Review:
It is evident that the further degradation of tri-linear filtering in the 43.80 driver is a way to increase the performance of the GeForce FX chips when anisotropic and tri-linear filtering are used together.
However, there will hardly be any further changes to tri-linear filtering in the driver, as this optimization resource has been completely exhausted: even now the MIP-level borders are sometimes visible in the Balanced and Performance modes. So only one option is left: to disable tri-linear filtering altogether, as the ATI R300/R350 does in its Performance mode of anisotropic filtering. Hopefully, NVIDIA will not do the same.