Vertex Pipelines, Vertex Shaders 3.0
While beefing up the NV40’s pixel processor, NVIDIA didn’t forget about the “geometrical force” of the new GeForce. The new graphics chips have twice as many vertex pipelines – six against three in the GeForce FX 5950 Ultra. New games feature more complex models and huge numbers of polygons per scene, so the doubled vertex-processing performance of the new GPU will surely be put to good use.
In the test section of this review we will try to estimate how fast the new GeForce is in games that demand high geometry-processing speed.
The functionality of the NV40’s vertex processors grew along with their performance. NVIDIA claims that the new GPU fully supports version 3.0 vertex shaders. Like pixel shaders, they can now be practically unlimited in length (well, within the limits of the Shader Model 3.0 specification), and a shader can now contain truly dynamic loops and branches. The choice of what code is executed for a particular vertex is now made during the execution of the shader rather than at compile time.
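To illustrate what dynamic flow control means in practice, here is a minimal CPU-side sketch of the idea, not actual shader code: the loop count and the branch taken are decided per vertex while the "shader" runs, not fixed at compile time. The skinning scenario and all names (`skin_vertex`, `bone_weights`) are illustrative assumptions, not NVIDIA's API.

```python
def skin_vertex(position, bone_weights, bone_offsets):
    """Blend only the bones a vertex actually uses.

    The loop length and the skip-branch are evaluated per vertex at
    execution time -- the kind of dynamic flow control that vertex
    shaders 3.0 allow in hardware.
    """
    x, y, z = position
    for weight, (ox, oy, oz) in zip(bone_weights, bone_offsets):
        if weight == 0.0:           # dynamic branch: unused bone, skip it
            continue
        x += weight * ox
        y += weight * oy
        z += weight * oz
    return (x, y, z)
```

Earlier shader models would have had to unroll such a loop and execute both sides of the branch for every vertex.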
The vertex stream frequency divider is an interesting feature of the NV40’s vertex processors. Using this divider, the vertex processors can read data from streams and update the input parameters of the vertex shader not for each processed vertex, as before, but less frequently, at an adjustable rate.
NVIDIA offers the following example of using this feature: it is possible to read streaming data (animation data, for example) at a certain frequency and use the same model-geometry data describing a soldier to create a whole army of such soldiers that wouldn’t be clones of each other, but would each have a unique appearance and animation.
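The divider idea can be sketched on the CPU in a few lines. In this hypothetical simulation (the function and stream names are my own, not NVIDIA's API), the geometry stream is read once per vertex, while the per-instance stream advances only once every `divider` vertices, so a single model plus a handful of small per-instance records yields many differently placed copies:

```python
def read_streams(geometry_stream, instance_stream, divider):
    """Simulate a vertex stream frequency divider.

    geometry_stream: per-vertex data (here: 2D positions), read every vertex.
    instance_stream: per-instance data (here: 2D offsets), read once per
                     `divider` vertices, i.e. once per model copy.
    """
    out = []
    total = len(geometry_stream) * len(instance_stream)
    for v in range(total):
        x, y = geometry_stream[v % len(geometry_stream)]   # updated every vertex
        dx, dy = instance_stream[v // divider]             # updated every `divider` vertices
        out.append((x + dx, y + dy))
    return out
```

With a three-vertex "model" and two instance offsets, `read_streams([(0, 0), (1, 0), (0, 1)], [(0, 0), (10, 0)], 3)` produces two triangles at different positions from one set of geometry.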
Last but not least, the vertex processors of the NV40 can fetch values from textures and use them during shader execution. The fetched texture sample can be employed for geometry deformation, for instance. This technique is known as Displacement Mapping. Such mapping allows for a higher level of realism and interactivity: one or several static or dynamically updated displacement maps can be used to create the illusion of a changing, “real” water surface.
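The core of displacement mapping is simple enough to sketch on the CPU. In this illustrative Python fragment (the names and the one-height-per-vertex sampling are my simplifying assumptions), each vertex is pushed along its normal by a height value that, on the NV40, would be fetched from a displacement-map texture inside the vertex shader:

```python
def displace(vertices, normals, heights, scale=1.0):
    """Move each vertex along its normal by a sampled height value.

    `heights` stands in for per-vertex samples fetched from a
    displacement map; animating those samples would animate the surface.
    """
    out = []
    for (x, y, z), (nx, ny, nz), h in zip(vertices, normals, heights):
        out.append((x + nx * h * scale,
                    y + ny * h * scale,
                    z + nz * h * scale))
    return out
```

Re-running this each frame with an updated height set is exactly how a displacement map can drive a moving water surface.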
There’s only one thing I’m sad about: having “trained” the NV40 to work with textures, NVIDIA engineers forgot to introduce support for automatic creation of new vertices. The vertex processors of Matrox’s Parhelia chip, for example, can create new vertices on their own, based on the parameters of the existing ones, thus making the geometrical description of models more precise. The GPU divides the original triangles that describe the model into smaller ones (a process known as tessellation). The degree of tessellation may depend on the distance from the viewer, and the Parhelia’s subdivision algorithm doesn’t produce any sudden changes in the model’s appearance. This is called adaptive tessellation. You can learn more about displacement mapping and adaptive tessellation in our Matrox Parhelia review.
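To make the idea concrete, here is a rough CPU-side sketch of distance-dependent subdivision, purely my own illustration of the principle, not the Parhelia's actual algorithm: a triangle close to the viewer is recursively split into four smaller ones via its edge midpoints, while a distant triangle is left alone.

```python
def tessellate(tri, viewer, max_depth=3, threshold=5.0):
    """Adaptively subdivide a triangle (a 3-tuple of 3D points).

    A triangle whose centroid is farther than `threshold` from the
    viewer is kept as-is; closer triangles are split into four via
    edge midpoints, down to `max_depth` levels.
    """
    a, b, c = tri
    centroid = tuple((a[i] + b[i] + c[i]) / 3 for i in range(3))
    dist = sum((centroid[i] - viewer[i]) ** 2 for i in range(3)) ** 0.5
    if max_depth == 0 or dist > threshold:
        return [tri]

    def mid(p, q):
        return tuple((p[i] + q[i]) / 2 for i in range(3))

    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    out = []
    for sub in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(tessellate(sub, viewer, max_depth - 1, threshold))
    return out
```

Because midpoint subdivision refines the mesh gradually with distance, the model's silhouette doesn't pop as the viewer moves, which is the point of the "adaptive" in adaptive tessellation.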
Clearly, the use of adaptive tessellation with displacement maps can help the graphics processor render more realistic scenes. Regrettably, the new generation of graphics processors from NVIDIA doesn’t support hardware tessellation.