Now let’s see what happens with DirectX 9 Pixel Shaders:

Wow! Even NV30, which used to be at least two times slower than R300, is now running neck and neck with its rival: the NVIDIA GeForce FX 5800 Ultra and ATI RADEON 9700 Pro are almost equally fast now! And the NVIDIA GeForce FX 5900 Ultra, despite its lower chip frequency, outperforms the ATI RADEON 9800 Pro!

Well, we can only admire NVIDIA’s persistence in combating the weak spots of its chips through driver optimizations, thus achieving much better results in 3DMark03.

Obviously, the Pixel Shader 2.0 test from 3DMark03 is no longer an illustrative example of the performance of NVIDIA’s pixel processors. Luckily, the polygon fillrate test also allows checking pixel shader performance:

Each chip executes the simplest Pixel Shader 1.0 and 2.0 versions equally fast, and the results correspond closely to the chips’ working frequencies: the NVIDIA GeForce FX 5900 Ultra falls behind the NVIDIA GeForce FX 5800 Ultra in proportion to their frequency difference, just as the ATI RADEON 9800 Pro outpaces the RADEON 9700 Pro in accordance with their core clock difference.
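To put rough numbers on this proportionality (assuming the reference core clocks of about 450 MHz for the GeForce FX 5900 Ultra, 500 MHz for the GeForce FX 5800 Ultra, 380 MHz for the RADEON 9800 Pro and 325 MHz for the RADEON 9700 Pro), the expected scaling is simply the ratio of the clocks: 450 / 500 = 0.9, i.e. roughly a 10% deficit for NVIDIA’s newer chip, and 380 / 325 ≈ 1.17, i.e. roughly a 17% advantage for ATI’s newer chip.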

In the case of the more complex Pixel Shader 2.0, the performance of the ATI chips is quite predictable: forcing lower floating-point precision (16 bit per component instead of 32 bit) has no effect on the result, because R300 and R350 always calculate at a fixed precision of 24 bit per component.

The NVIDIA chips showed more interesting results. With floating-point precision forced down to 16 bit per component, the NVIDIA GeForce FX 5800 Ultra and NVIDIA GeForce FX 5900 Ultra perform in accordance with their clock frequencies, i.e. clock-for-clock the two chips are equally fast here.

The most interesting things start happening when we shift to 32 bit per component: while the NVIDIA GeForce FX 5800 Ultra naturally slows down, the NVIDIA GeForce FX 5900 Ultra shows even better results!

Well, the results obtained suggest that NVIDIA has really increased FPU performance in the new NV35, as the company claimed. The changes, however, concern only 32-bit precision: the GeForce FX 5900 Ultra doesn’t slow down when shifting to full-precision calculations.
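By the way, a quick illustration of what “precision per component” means: FP32 carries a 23-bit mantissa, ATI’s fixed FP24 format a 16-bit mantissa, and FP16 only a 10-bit one. The following minimal sketch (our own illustration, not the shader code actually used by 3DMark03) truncates the mantissa of a single-precision value to these widths; it ignores the narrower exponent range of the shorter formats, so it only demonstrates the loss of mantissa precision:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Keep only 'bits' mantissa bits of an IEEE-754 single (FP32 has 23). */
static float truncate_mantissa(float x, int bits)
{
    uint32_t u;
    memcpy(&u, &x, sizeof u);            /* reinterpret the bit pattern safely */
    u &= ~((1u << (23 - bits)) - 1u);    /* zero the low-order mantissa bits   */
    memcpy(&x, &u, sizeof u);
    return x;
}

int main(void)
{
    float v = 0.123456789f;
    printf("FP32-like (23-bit mantissa): %.9f\n", v);
    printf("FP24-like (16-bit mantissa): %.9f\n", truncate_mantissa(v, 16));
    printf("FP16-like (10-bit mantissa): %.9f\n", truncate_mantissa(v, 10));
    return 0;
}

In simple color calculations such a coarser representation is often visually indistinguishable from full precision, which is exactly why forcing half precision looks like such a tempting optimization.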

Vertex Pipelines: T&L, Vertex Shaders

The first test is High Polygon Count from the 3DMark2001 SE package:

The results show that the NVIDIA GeForce FX 5900 Ultra outperforms the ATI RADEON 9800 Pro, but yields to the NVIDIA GeForce FX 5800 Ultra. The performance difference between the two NVIDIA solutions corresponds exactly to the difference in their clock speeds, which indicates that no changes have been made to the vertex processors of the new NVIDIA GeForce FX 5900 Ultra.

The Vertex Shader tests prove this supposition: the NV35 vertex shader unit hasn’t been enhanced in any way and remains the same as that of NV30.

As a result, the lag behind the ATI chips has become even bigger: the NVIDIA GeForce FX 5900 Ultra works at a lower clock frequency than its predecessor, while the ATI RADEON 9800 Pro runs at a higher frequency than the RADEON 9700 Pro.

 