by Anna Filatova
09/27/2006 | 09:04 AM
We continue our reports from the IDF Fall 2006 sessions. This article is devoted to the keynote by Justin Rattner from Intel's Research and Development group.
He encouraged the audience to explore the implications of having tremendous computing power at their fingertips and the freedom it brings.
The demand on today’s datacenters is enormous:
But this is just the tip of the iceberg. People want to be able to open their ultra-mobile device at any time and get the information they need. They might use location data to find out what services are available nearby, share calendars across the Internet, or play online games of ever-growing complexity. A lot of this growth is being driven by new software frameworks. Another thing emerging as part of this trend is the delivery of applications on demand instead of installing them on the devices, which reduces cost and improves reliability. All these capabilities are driving the demand for datacenters up.
However, there are certain problems that need to be solved to enable successful construction and operation of these enormous datacenters.
The first important parameter is energy efficiency. The main culprit here is the significant power loss caused by the numerous conversions that take place inside the system PSU. That is why Intel considers a simple but efficient modification of the contemporary datacenter power supply to be quite handy:
The typical datacenter power system delivers only about a third of the power it draws to the actual load; the rest is lost in the many conversion stages along the way.
The new type of PSU uses a different conversion scheme, and as we can see from the picture below, the advantages of DC power over AC power are evident:
We can get a 60% improvement per megawatt:
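To see why the number of conversion stages matters so much, here is a back-of-the-envelope sketch. The per-stage efficiencies below are illustrative assumptions, not Intel's figures; the point is only that chained conversions multiply, so losing a little at each of many stages adds up quickly:

```python
# Rough sketch: overall efficiency of a chain of power conversions.
# All per-stage efficiency values are illustrative assumptions.

def chain_efficiency(stages):
    """Overall efficiency is the product of the per-stage efficiencies."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Hypothetical AC path: UPS (AC->DC->AC), PDU transformer, server PSU
# (AC->DC), on-board voltage regulators.
ac_path = [0.90, 0.98, 0.75, 0.85]

# Hypothetical DC path: one rectification step up front, then DC->DC
# regulation -- fewer stages, fewer places to lose power.
dc_path = [0.95, 0.92, 0.90]

print(f"AC path: {chain_efficiency(ac_path):.0%}")  # ~56%
print(f"DC path: {chain_efficiency(dc_path):.0%}")  # ~79%
```

With these made-up stage numbers the AC chain delivers barely more than half the input power, which is in the same ballpark as the "only a third" figure from the keynote once cooling overhead is added.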
The security of the datacenters is another important consideration. There are two data encryption scenarios:
In the first case we cannot inspect the data, because the packets are encrypted end-to-end and no key is available to decrypt them and run the checks. The second alternative allows the data to be decrypted at the end of each stage so that the traffic can be inspected.
Here is how it works in practice:
There are three keys to achieving this goal: teraops of performance, terabytes of bandwidth, and terabits of I/O. These three objectives form the basis of Intel's terascale research program.
Yesterday we already reported on the prototype wafer with 80-core chips, and Justin Rattner revealed a few more details. The 80 cores deliver one teraflop of compute performance at an energy efficiency of 10 gigaflops per watt:
Providing adequate memory bandwidth for this computing power involves stacking memory chips directly underneath the processor die. There is 256MB of static RAM for each core, delivering a data transfer rate of 2 trillion bytes per second.
The last piece is I/O bandwidth. Intel continues to make advances in electrical signaling, and its research has now reached the optical signaling stage. The project was launched 5 years ago; in 2004 Intel introduced a 1Gbit/s modulator, and a 10Gbit/s modulator has recently been demonstrated to the public. The research continues, so there are more achievements to come.
Another breakthrough in silicon photonics is the electrically pumped hybrid silicon laser:
Intel can combine 4 such lasers on a single die:
For the first time, Intel publicly demonstrated 4 hybrid silicon lasers at work. The lasers were individually tuned and linked to modulators. The signals were then fed into a single optical fiber (replacing numerous cables), and photo-detectors were used to recover the data at the other end.
Of course, we all understand the significance of this development: photonics offers incredible capacity. This is a more affordable, manufacturing-compatible approach that will enable higher data transfer rates.