by Anton Shilov
06/02/2008 | 08:46 AM
Intel Corp. will reveal more details about its first standalone graphics processor in a decade at the Siggraph conference in mid-August, 2008. However, even ahead of the planned international conference dedicated to graphics and interactive technologies, the world’s largest maker of x86 central processing units (CPUs) has unveiled quite a few details about its code-named Larrabee product.
The chipmaker says that Larrabee uses multiple in-order x86 CPU cores that are augmented by a wide vector processing unit, as well as fixed-function co-processors, which are most likely texture addressing units, texture filtering units, render back ends and other necessary parts of a modern graphics processing unit (GPU). According to Intel, incorporating in-order Atom-like x86 cores into a massively parallel stream processing unit provides “dramatically higher performance per watt and per unit of area than out-of-order CPUs on highly parallel workloads” and also “greatly increases the flexibility and programmability” of the architecture compared to standard GPUs.
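The performance claim rests on data parallelism: a wide vector unit executes one instruction across many data elements at once, which suits graphics workloads where the same shading math runs on every pixel. A toy sketch of the idea in Python (the 16-lane width and the `shade` step are assumptions for illustration; the article does not specify Larrabee's vector width or instruction set):

```python
# Toy illustration of SIMD-style vector processing: each "vector
# instruction" below operates on all lanes at once, the kind of
# parallelism a wide vector unit exploits on graphics workloads.
# The 16-lane width is an assumption made for illustration only.

LANES = 16

def vmul(a, b):
    """Vector multiply: one 'instruction' applied across all lanes."""
    return [x * y for x, y in zip(a, b)]

def vadd(a, b):
    """Vector add across all lanes."""
    return [x + y for x, y in zip(a, b)]

def shade(colors, intensity):
    """A simple per-pixel lighting step applied to 16 pixels at once:
    scale by light intensity, then add a constant ambient term."""
    return vadd(vmul(colors, intensity), [0.1] * LANES)

# One call processes a whole batch of 16 pixels.
pixels = [i / LANES for i in range(LANES)]
light = [0.5] * LANES
out = shade(pixels, light)
```

The point of the sketch is the shape of the computation, not the arithmetic: a scalar in-order core would loop over the 16 pixels one at a time, while a 16-lane vector unit retires the same work in a single pass per operation.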
At Siggraph, Intel will present a white paper titled “Larrabee: A Many-Core x86 Architecture for Visual Computing,” which introduces the Larrabee many-core visual computing architecture, a new software rendering pipeline implementation, a many-core programming model, and performance analysis for several applications.
Earlier this year Intel confirmed that Larrabee is neither a pure special-purpose processor nor a pure graphics processor, but a product that combines many functions. Still, it will feature fixed-function units traditional to GPUs, which will allow the chip to render currently available games.
“There’s only one way to render the huge range of DirectX and OpenGL games out there, and that’s the way they were designed to run – the conventional rasterization pipeline. That has been the goal for the Larrabee team from day one, and it continues to be the primary focus of the hardware and software teams. We take triangles, we rasterize them, we do Z tests, we do pixel shading, we write to a frame-buffer,” said Tom Forsyth, an engineer in Intel’s Visual Computing Group.
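The pipeline stages Forsyth lists — take triangles, rasterize them, run Z tests, shade pixels, write the frame buffer — can be sketched as a minimal software rasterizer. The Python below is an illustrative sketch of the conventional pipeline, not Larrabee's renderer; the barycentric coverage test is one common rasterization technique, chosen here for brevity:

```python
# Minimal software rasterizer walking the stages named in the quote:
# rasterize a triangle, depth (Z) test, "shade" each covered pixel,
# and write the result to a frame buffer. Illustrative sketch only.

W, H = 8, 8
frame = [[0] * W for _ in range(H)]            # frame buffer (color ids)
zbuf = [[float("inf")] * W for _ in range(H)]  # depth buffer

def edge(ax, ay, bx, by, px, py):
    """Signed area term; its sign says which side of edge a->b point p is on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def draw_triangle(v0, v1, v2, color):
    """Rasterize one triangle; each vertex is (x, y, z)."""
    area = edge(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1])
    if area == 0:
        return  # degenerate triangle covers no pixels
    for y in range(H):
        for x in range(W):
            # Barycentric coverage test at the pixel center.
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x + 0.5, y + 0.5)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x + 0.5, y + 0.5)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x + 0.5, y + 0.5)
            if min(w0, w1, w2) < 0 and max(w0, w1, w2) > 0:
                continue  # mixed signs: pixel center outside the triangle
            # Interpolate depth and run the Z test.
            z = (w0 * v0[2] + w1 * v1[2] + w2 * v2[2]) / area
            if z < zbuf[y][x]:
                zbuf[y][x] = z
                frame[y][x] = color  # "pixel shading" + frame-buffer write

# A near triangle should occlude a far one where they overlap.
draw_triangle((0, 0, 5.0), (7, 0, 5.0), (0, 7, 5.0), color=1)  # far
draw_triangle((2, 2, 1.0), (5, 2, 1.0), (2, 5, 1.0), color=2)  # near
```

A real pipeline adds vertex transformation, clipping, perspective-correct interpolation and programmable shaders on top of these stages, but the Z-test-then-write core is exactly the loop Forsyth describes.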