NVIDIA: Test Time
NVIDIA has a hard time transitioning to the 0.13-micron tech process while also developing the new chip. December, January and February pass without a sign of an NV30-based card on the market. It is only in March (four months after the announcement!) that the monsters are spotted on shelves around the world.
It also transpires that the difficulties with the NV30 are truly colossal and the chip yield is abysmal. The chip goes into mass production with a yield of only 6-10 dies per wafer; its cost is close to $60 instead of the expected $10-20! NVIDIA is quick to shut down the production of this mockery of a revolution: the NV30's lifecycle is short, with only about 100,000 samples produced.
Among other things, NVIDIA is the first in the industry to employ fast and expensive DDR-II memory, working at the incredible frequency of 500MHz (1000MHz DDR). Relying on frequency alone, the company forgoes the 256-bit memory bus ATI implemented in its RADEON 9700/9700 PRO. It is later revealed that the 128-bit bus was not a result of haste with the graphics chip, but the original plan for the NV30, approved by the company's executives years before the announcement.
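The arithmetic behind that gamble is easy to sketch. The figures below for the NV30 come from the text above; the RADEON 9700 PRO memory clock (310MHz DDR, i.e. 620MHz effective) is the commonly cited specification, not stated in this article.

```python
def peak_bandwidth_gb(bus_width_bits: int, effective_mhz: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_width_bits / 8 * effective_mhz * 1e6 / 1e9

# GeForce FX 5800 Ultra: 128-bit bus, 1000MHz effective DDR-II
nv30 = peak_bandwidth_gb(128, 1000)
# RADEON 9700 PRO: 256-bit bus, 620MHz effective DDR (commonly cited spec)
r300 = peak_bandwidth_gb(256, 620)

print(f"NV30: {nv30:.1f} GB/s, R300: {r300:.1f} GB/s")
# NV30: 16.0 GB/s, R300: 19.8 GB/s
```

Even with the exotic 500MHz DDR-II, the narrow 128-bit bus leaves the NV30 with less peak bandwidth than its slower-clocked, wider-bus rival.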
The GeForce FX 5800 Ultra seemed to usher us into a new era, the era of cinematic-quality 3D. Nothing of the kind. The flexibility and programmability come at a high cost: the new chip is miserable at executing pixel shaders, losing hopelessly to ATI's GPU. One of the reasons is that the NV30 works most of the time in the inconvenient 4x2 mode (4 pipelines x 2 texture-mapping units) and only rarely switches to the 8x0 formula (see our GeForce FX 5800 Ultra Review).
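A rough back-of-the-envelope comparison shows why the 4x2 configuration hurts. Peak single-textured pixel fill rate is simply pipelines times core clock; the RADEON 9700 PRO figures (8 pipelines at 325MHz) are the commonly cited specs, assumed here rather than taken from the text.

```python
def pixel_fill_mpix(pipelines: int, core_mhz: int) -> int:
    """Peak pixel fill rate in megapixels per second (one pixel per pipeline per clock)."""
    return pipelines * core_mhz

# NV30 in its usual 4x2 mode at 500MHz
nv30_fill = pixel_fill_mpix(4, 500)
# RADEON 9700 PRO: 8 pipelines at 325MHz (assumed spec)
r300_fill = pixel_fill_mpix(8, 325)

print(f"NV30: {nv30_fill} Mpix/s, R300: {r300_fill} Mpix/s")
# NV30: 2000 Mpix/s, R300: 2600 Mpix/s
```

Despite a clock speed advantage of more than 50%, the NV30 in its usual mode still trails the R300 in raw pixel output.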
The huge clock frequencies cannot make up for the inefficient architecture. Moreover, the heat dissipation becomes too high for an ordinary cooler to handle. The exclusive FlowFX cooling system does cool the GeForce FX 5800 Ultra, but at the cost of the user's ears. Jeering advocates of silence and of ATI Technologies put together collages like "GeForce FX 5800 Ultra – vacuum cleaner" or "GeForce FX 5800 Ultra – hair-dryer".
This complex and sophisticated GPU is clocked at 500MHz. Its flexibility and capabilities cover all the basic DirectX 9 specifications and go far beyond them. Still, it is not the revolution NVIDIA had promised back in 2002, but rather the biggest disappointment of 2002 and early 2003.
One advantage of the NV3x series shows up soon, as NVIDIA unveils a full range of DirectX 9-compatible GPUs for every market sector. A couple of new chips, the GeForce FX 5600 and 5200, appear as early as March 2003. The former is intended as a replacement for the outdated GeForce4 Titanium series; the latter is to become the first value GPU compatible with DirectX 9.
The NV31 is architecturally similar to the NV30, but has half the number of pipelines. The chip is manufactured on the same 0.13-micron process and works at 350MHz, while the effective memory frequency reaches 700MHz. The GPU later proves incapable of competing effectively with its rival, the RADEON 9600 PRO, so NVIDIA rolls out a faster version clocked at 400MHz. This improves the speed of the 5600 Ultra somewhat, but gives it no advantage over the competitor.
The NV34 (GeForce FX 5200) is a simpler solution meant to occupy the niche of the GeForce4 MX. A cheaper 0.15-micron tech process is employed to manufacture this chip. To reduce the die area, NVIDIA resorts to draconian measures, cutting down everything possible. The memory controller suffers most, being practically deprived of data compression algorithms, which degrades the chip's performance. Notwithstanding its poor speed, the NV34 still provides DirectX 9 support, while the value VPUs from ATI Technologies – the RADEON 9000 and 9200 (RV250 and RV280) – support only DirectX 8.1.