Articles: Memory


Well, compatibility is important, but it is not everything; otherwise, we could just stick with DDR, as it is 100% compatible with itself. The general aim remains the same: performance growth, and several things have been done to achieve it. First, and most obvious, the clock rate is a little higher. It is only "a little" because the architecture is nearly the same: you cannot expect a sudden frequency boost from nearly the same design.

Anyway, after the 200MHz (400MHz effective) PC3200 DDR, the clock frequency of the first mass-produced DDR II chips will start at 200MHz/266MHz (400MHz/533MHz effective). With the standard 64-bit memory bus, this gives us 3.2-4.3GB/sec of bandwidth. Of course, the frequencies will keep growing: 333MHz (667MHz) chips are already on the schedule, so we should see 5.4GB/sec quite soon. And keeping in mind that there will be no single-channel chipsets even in the PC market by the time DDR II arrives, two memory channels will provide up to 10.8GB/sec of bandwidth! That will be enough for future processors as well as for AGP 3.0 and PCI-X. By the way, the bus bandwidth of Prescott, the next-generation x86 CPU from Intel, will be exactly 5.4GB/sec. A perfect hit.
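The peak-bandwidth figures above follow from simple arithmetic: effective data rate times bus width times the number of channels. A minimal sketch (the function name is ours, and 1GB is taken as 10^9 bytes, which is why our results come out slightly below the rounded numbers in the text):

```python
def peak_bandwidth_gb(data_rate_mt, bus_width_bits=64, channels=1):
    """Peak transfer rate in GB/s: MT/s * bytes per transfer * channels."""
    return data_rate_mt * 1e6 * (bus_width_bits // 8) * channels / 1e9

print(peak_bandwidth_gb(400))               # PC3200 DDR: 3.2 GB/s
print(peak_bandwidth_gb(533))               # DDR II at 533 MT/s: ~4.3 GB/s
print(peak_bandwidth_gb(667))               # DDR II at 667 MT/s: ~5.3 GB/s
print(peak_bandwidth_gb(667, channels=2))   # dual-channel: ~10.7 GB/s
```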

DDR II prototype by IBM and Infineon

This frequency growth became possible thanks to improved manufacturing technologies as well as some evolutionary changes in the core, such as the doubled data prefetch, which is now 4 instead of 2. As a result, the core runs at a quarter of the effective speed of the memory bus. Take a 266MHz chip with a resulting effective bus frequency of 533MHz as an example: its core frequency will be 133MHz, which is no problem for any DRAM maker today.
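The clock relationships in the example above can be sketched as follows (a hedged illustration; the function and its return convention are ours):

```python
def ddr2_clocks(data_rate_mt):
    """Return (core, io_clock, data_rate) for a DDR II chip.

    The interface transfers data on both clock edges, so the I/O clock is
    half the data rate; the 4n prefetch lets the core run at a quarter of it.
    """
    return data_rate_mt / 4, data_rate_mt / 2, data_rate_mt

core, io, rate = ddr2_clocks(533)
print(core, io, rate)  # ~133 MHz core, ~266 MHz I/O clock, 533 MT/s
```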

One more similar innovation is that commands can now be executed on any rising edge of the clock, provided they do not conflict with the preceding ones. As a result, the command bus can now work at half the frequency of the data bus. In short, everything doubles inside the doubled overall frequency.

Before going further, let's recall some basic concepts. The data stored in a chip is organized as a combination of several divisions, or banks, which are in turn split into pages. A page is a two-dimensional array (a table). Among the key parameters that determine memory performance at a given bandwidth are the CAS (column address strobe) and RAS (row address strobe) latencies. They stand for the number of clock cycles needed to access the required column and row, respectively; their intersection gives us the memory cell to be read from or written to.
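The bank/row/column hierarchy described above can be illustrated with a toy model (a hedged illustration only, not a real memory controller; the dimensions and names are ours):

```python
# Toy dimensions for a DRAM-like structure: banks -> rows (pages) -> columns.
BANKS, ROWS, COLS = 4, 8, 8

memory = [[[0] * COLS for _ in range(ROWS)] for _ in range(BANKS)]

def read(bank, row, col):
    # The row strobe selects the page, the column strobe selects the cell
    # within it; their intersection is the word that is read or written.
    return memory[bank][row][col]

memory[1][3][5] = 42
print(read(1, 3, 5))  # -> 42
```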

Now among the architectural changes we can mention Posted CAS and the rule "write latency = read latency - 1", both of which help utilize the bus more efficiently. The time interval between requests to the row and column of the data array (the RAS-to-CAS delay, tRCD) is at least 13ns for DDR, which leads to a loss of about four clock cycles at the frequencies of DDR II chips. The Posted CAS mode and the additive latency concept were introduced to combat these losses: they allow RAS and CAS commands to be executed with a certain overlap, practically without pauses.
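The "about four clocks" figure follows from converting the fixed analog tRCD delay into cycles of a given clock. A minimal sketch of that arithmetic (the function is ours; real controllers round delays up to whole cycles):

```python
import math

def trcd_clocks(trcd_ns, clock_mhz):
    """Clock cycles consumed by a tRCD delay at a given clock frequency."""
    cycle_ns = 1e3 / clock_mhz            # one clock period in nanoseconds
    return math.ceil(trcd_ns / cycle_ns)  # delays round up to whole cycles

print(trcd_clocks(13, 266))  # ~4 cycles at a 266 MHz clock
print(trcd_clocks(13, 200))  # only 3 cycles at DDR-era 200 MHz
```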
