Without getting into the ideological argument about whether desktop computing needs 64 bits, I will just list a few useful things this technology can provide: more virtual memory, a larger addressing space for physical memory, and benefits to specific applications. For example, using 64-bit operands in cryptography often means fewer operations (sometimes several times fewer), and therefore shorter calculation times.
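To see why wider operands mean fewer operations, consider how big-number arithmetic (the backbone of most cryptography) is done limb by limb. The sketch below is purely illustrative and not taken from any particular library: it adds two 128-bit integers using limbs of a chosen width, counting the limb operations. With 32-bit limbs the loop runs four times; with 64-bit limbs, only twice.

```python
# Illustrative sketch: limb-by-limb addition of big integers, the way a
# bignum library might do it in hardware registers. Wider limbs halve the
# number of limb operations for the same operand size.

def add_bignum(a, b, limb_bits):
    """Add two non-negative integers using limbs of `limb_bits` bits.
    Returns (sum, number_of_limb_additions performed)."""
    mask = (1 << limb_bits) - 1
    result, carry, shift, ops = 0, 0, 0, 0
    while a or b or carry:
        limb = (a & mask) + (b & mask) + carry  # one hardware-sized add
        result |= (limb & mask) << shift
        carry = limb >> limb_bits               # carry into the next limb
        a >>= limb_bits
        b >>= limb_bits
        shift += limb_bits
        ops += 1
    return result, ops

# Two 128-bit operands (values chosen arbitrarily for the demo)
x = (1 << 127) + 12345
y = (1 << 126) + 67890

s32, ops32 = add_bignum(x, y, 32)
s64, ops64 = add_bignum(x, y, 64)
assert s32 == s64 == x + y
print(ops32, ops64)  # → 4 2
```

The same halving applies to multiplication and modular reduction, where the operation count grows quadratically with the number of limbs, which is why the gain in cryptographic code can be considerably more than a factor of two.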
We should also keep in mind that the amount of system memory in modern computer systems is slowly approaching the addressing limit of 32-bit processors (4GB with flat memory addressing). In other words, it is better to ensure sufficient bit depth in advance than to make hasty, frantic decisions later. My own opinion on the 64-bit issue is simple: why not? It cannot do any harm, and can sometimes help a lot. So users don't actually have any complaints about 64 bits, but it transpires that someone else does, as I will explain shortly.
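The 4GB ceiling mentioned above is simple arithmetic: a 32-bit pointer can distinguish 2^32 byte addresses, i.e. 4 GiB of flat address space, while a 64-bit pointer raises that to 16 EiB. A two-line check:

```python
# A flat address space is bounded by how many distinct byte addresses a
# pointer can encode: 2**bits addresses of one byte each.
addr_32 = 2**32 // 2**30  # in GiB
addr_64 = 2**64 // 2**60  # in EiB
print(addr_32, "GiB with 32-bit pointers")  # → 4 GiB with 32-bit pointers
print(addr_64, "EiB with 64-bit pointers")  # → 16 EiB with 64-bit pointers
```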
Thus, most of today’s advanced processors (counted by brands, not by sales volumes) are actually 64-bit. Nearly all RISC designs (MIPS, Alpha, HP PA-RISC), post-RISC designs (Itanium and, with certain reservations, the processors from Transmeta) and some x86 processors (Opteron and Athlon 64) fit into this category. In fact, only the Pentium 4 remains a stronghold of 32-bitness in the modern processor world (the Xeon, with its 36-bit physical addressing via PAE, can be considered a 36-bit processor). Against this background, it is all the more interesting to watch the situation around the SPEC CPU 2004 tests.

Some informed sources say the SPEC committee has practically agreed upon the set of algorithms to include in this test. There are over 40 subtests, which require about 2GB (!) of system memory and about a day to complete one pass. The testing procedure is clearly becoming more demanding. Yet at the same time, SPEC CPU 2004 will contain no algorithms that gain significantly from 64 bits! That is a real surprise. The explanation offered is that such algorithms are of little relevance, even though 64-bit processors are emerging for the desktop right now. Right now, nearly every processor manufacturer offers 64-bit models; only one maker of mass-market processors does not. I think you can guess whom I am talking about. Yes, it is sad, but Intel took an understandable, if unpleasant, position: “We don’t like this game”. Under the company’s pressure (and thanks to its veto right), the SPEC committee has approved a set of algorithms that provokes nothing but mild amusement.
Of course, Intel is entitled to stand up for its own interests, being one of the committee’s financial sources – perhaps the biggest one – but I had hoped it would not go beyond a certain limit. Those hopes were misplaced. The situation around the new SPEC CPU 2004 closely resembles (I would even say too closely) the story of the “optimization” of SYSMark 2002: after that optimization the benchmark fell in love with one platform and became too… controversial for the others. I do fear the SPEC tests may repeat the same story. Until now, they have been highly respected cross-platform performance tests that specialists could rely upon. If that changes, there will be practically nothing to replace SPEC CPU – no other benchmark can boast the same unanimous recognition or as large a results database.
I do hope nothing of the kind happens and the SPEC committee never publishes the proposed set of tests. Otherwise, the SPEC tests would become a marketing tool rather than an adequate reflection of the efficiency of the algorithms used in modern programming. I don’t think we need such marketing methods – they are too dirty. Besides, good products shouldn’t need such “support”.