Let's dwell a little on latencies. They are a sore spot of today's DRAM architectures and shouldn't be underestimated, as the RDRAM example shows: high memory access latencies translate into poor benchmark results. DDR may look better than RDRAM in this respect, but not by much. Just look: over the years from the i486 to the Pentium II, the CPU clock rate grew more than tenfold, and so did the peak memory bandwidth, yet memory access latencies grew about fivefold. So, although it's not obvious, the industry is moving in the direction Rambus pointed to: bandwidth at the expense of latency.
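A quick back-of-the-envelope sketch of why this matters (the numbers below are hypothetical; only the arithmetic is the point): the same absolute access latency costs the CPU ever more clock cycles as its frequency grows.

```python
# Illustrative sketch: converting an absolute memory access latency
# (in nanoseconds) into CPU clock cycles. The figures in the example
# calls are hypothetical, not measured data.

def latency_in_cpu_cycles(access_ns: float, cpu_mhz: float) -> float:
    """One CPU cycle lasts 1000 / cpu_mhz nanoseconds."""
    return access_ns * cpu_mhz / 1000.0

# The same 100ns access stalls a 66MHz CPU for under 7 cycles,
# but a 2GHz CPU for 200 cycles: the relative cost keeps growing.
print(latency_in_cpu_cycles(100, 66))    # ~6.6 cycles
print(latency_in_cpu_cycles(100, 2000))  # 200 cycles
```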

Still, a balance between the two has to be maintained, and measures are being taken: the Posted CAS feature and the "write latency = read latency - 1" rule. The experience gained in developing such specifications as VCM (Virtual Channel Memory) SDRAM and ESDRAM Lite is also being put to use. In both cases the idea is to add a small cache (8Kbit per bank) that would eliminate tRCD, which slows down the memory subsystem at higher frequencies, and avoid the time penalties of hitting the wrong page or bank. No progress is visible here yet, though: in the available DDR II chips the tRCD value is at least 15ns.
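The latency rules just mentioned can be written down as a toy model (a simplification, not the full DDR II state machine; Posted CAS is represented here by the additive latency AL, and all values are in memory clock cycles):

```python
# Simplified DDR II latency arithmetic: read latency is RL = AL + CL,
# and the spec fixes write latency at WL = RL - 1.

def read_latency(additive_latency: int, cas_latency: int) -> int:
    """Posted CAS lets the column command be issued early; the chip
    internally delays it by the additive latency (AL)."""
    return additive_latency + cas_latency

def write_latency(additive_latency: int, cas_latency: int) -> int:
    """DDR II fixes write latency at read latency minus one cycle."""
    return read_latency(additive_latency, cas_latency) - 1

print(read_latency(2, 4))   # RL = 6 cycles
print(write_latency(2, 4))  # WL = 5 cycles
```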

And now the last point concerning latencies: DDR II has no half-clock latencies. Combined with the restrictions on command conflicts and the fixed four-beat burst transfers, this somewhat reduces the cost of testing the end products and thus their production cost. Moreover, the specs claim that the layouts of the chips and DIMM modules are optimized for value mainboards with fewer slots.

One more lesson the industry learned watching Rambus's ordeals concerns power consumption requirements. Well, the requirements are quite obvious; Rambus just didn't bother to meet them. We should acknowledge, though, that DDR DIMM modules do come with heatsinks now, which first appeared on RIMMs. On the whole, this was one more task the DDR II developers confronted.

Everything came out very straightforwardly here. First, as we have already mentioned above, the 4x prefetch allows reducing the chips' core frequency to quite acceptable values, and the direct correspondence between clock rate and voltage still holds true. Second, the supply voltage was also lowered from today's 2.5V to 1.8V.
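The prefetch arithmetic is simple enough to write down (a sketch; with an n-bit prefetch the core fetches n bits per data line at once, so it can run at 1/n of the effective data rate, while the I/O buffers run at half the data rate, since DDR transfers on both clock edges):

```python
# Sketch of the n-bit prefetch idea applied to DDR II (n = 4).

def core_clock_mhz(data_rate_mhz: float, prefetch: int) -> float:
    # The core delivers `prefetch` bits per pin per core clock.
    return data_rate_mhz / prefetch

def io_clock_mhz(data_rate_mhz: float) -> float:
    # Double data rate: two transfers per I/O clock.
    return data_rate_mhz / 2

# A 533MHz (data rate) DDR II chip with 4x prefetch:
print(core_clock_mhz(533, 4))  # ~133MHz core
print(io_clock_mhz(533))       # ~266MHz I/O clock
```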

So, we have come to the last important architectural issue, and a very interesting one: the developers are considering introducing autocalibration. The process begins with writing a certain data set into the chip via a slow write protocol. The slow protocol has to be used right after chip initialization, because the chip is not yet calibrated and thus cannot accept four-bit bursts. Then a command enables the extended register set and starts the tuning process, changing the termination resistance of the circuit. The system then tries to read back the pre-written data set; if more tuning is necessary, the calibration procedure is repeated. The same tuning is done for timings. The point is that a given module gets tuned to its given system environment: a rather subtle way of combating signal instability, which grows along with circuit complexity. Unfortunately, this aspect of the specification is still quite vague.
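The calibration loop described above can be sketched like this (the chip model and its API are entirely hypothetical; only the control flow, i.e. slow write, read back, adjust, repeat, follows the text):

```python
# Hypothetical sketch of the DDR II autocalibration loop. ToyChip is a
# stand-in: reads only succeed once its termination setting matches a
# target value unknown to the host.

class ToyChip:
    def __init__(self, target_setting: int):
        self._target = target_setting
        self._setting = 0
        self._data = None

    def slow_write(self, pattern):
        # Slow protocol: works even on an uncalibrated chip.
        self._data = list(pattern)

    def read(self):
        # Fast reads are only reliable once the termination is tuned.
        return self._data if self._setting == self._target else None

    def adjust_termination(self):
        self._setting += 1  # step the on-die resistance one notch

def autocalibrate(chip, pattern, max_steps=16):
    chip.slow_write(pattern)             # write the test data set
    for _ in range(max_steps):
        if chip.read() == list(pattern): # try to read it back
            return True                  # calibration converged
        chip.adjust_termination()        # retune and retry
    return False

print(autocalibrate(ToyChip(target_setting=3), [1, 0, 1, 1]))  # True
```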

Well, the samples of 512Mbit DDR II chips presented by Micron, Samsung and Elpida give us a more or less clear idea of what the mass market will offer up to the second half of 2003:

Frequency: 533MHz;
Structure: 32x4, 16x8, 8x16;
Core/chip voltage: 1.8V;
Power consumption: 304mW;
Package: FBGA;
Extra features: external circuits resistance regulation.


DDR II prototype by Samsung

Well, it looks quite nice. We have got a product that combines increased frequency/performance (although we might wish for higher growth) with reduced power consumption. The latter is especially important given the current trend of making computer devices and peripherals as small as possible: take PDAs as an example. FBGA packaging, instead of today's TSOP-II, allows placing the chips closer to one another on the PCB and provides better overall signal stability; it's also a more unified variant, which matters too. According to Semico Research, three years ago 69% of all DRAM chips went into PCs, while in a couple of years the share will drop to 46%; the rest will go into communication equipment, consumer electronics and mobile devices.

 