All,

Yes, the stacked DRAM stuff is interesting. Anyone visit the siXis booth at
SC08? They are stacking DRAM and FPGA dies directly onto SiCBs (Silicon
Circuit Boards). This allows for dramatically more I/Os per chip and finer
traces throughout the board, which is small but made entirely of silicon. They
promise better byte/flop ratios and more total memory per unit volume.
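To put the byte/flop point in numbers, here is a back-of-the-envelope sketch.
Every hardware figure in it (core count, clock, SIMD width, bandwidth) is an
assumed illustrative value, not a spec for any real part, and the triad is
just the usual STREAM-style stand-in for a bandwidth-hungry loop:

    /* Rough bytes-per-flop comparison: what a STREAM-style triad wants
     * versus what a hypothetical multicore socket can deliver.
     * All hardware numbers below are assumptions for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        /* a[i] = b[i] + s*c[i]: 2 flops, 3 doubles moved = 24 bytes */
        double kernel_bf = 24.0 / 2.0;

        double cores     = 8.0;   /* assumed cores per socket        */
        double ghz       = 2.5;   /* assumed clock rate, GHz         */
        double flops_clk = 4.0;   /* assumed flops/cycle/core (SIMD) */
        double bw_gbs    = 25.0;  /* assumed memory bandwidth, GB/s  */

        double peak_gflops = cores * ghz * flops_clk;
        double machine_bf  = bw_gbs / peak_gflops;

        printf("kernel wants     %.1f bytes/flop\n", kernel_bf);
        printf("machine offers   %.2f bytes/flop (%.0f GF/s, %.0f GB/s)\n",
               machine_bf, peak_gflops, bw_gbs);
        printf("triad limited to %.1f%% of peak\n",
               100.0 * machine_bf / kernel_bf);
        return 0;
    }

With those assumed numbers the triad is stuck below 3 percent of peak, and
doubling the cores without adding pins or bandwidth halves the machine's
bytes/flop again. That is the gap that putting memory on or next to the
processor is supposed to close.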
<P>rbw<BR><BR><BR><BR>----- Original Message -----<BR>From: "Eugen Leitl" <eugen@leitl.org><BR>To: info@postbiota.org, Beowulf@beowulf.org<BR>Sent: Friday, December 5, 2008 7:48:43 AM GMT -05:00 US/Canada Eastern<BR>Subject: [Beowulf] Multicore Is Bad News For Supercomputers <BR><BR><BR>(Well, duh).<BR><BR>http://www.spectrum.ieee.org/nov08/6912<BR><BR>Multicore Is Bad News For Supercomputers<BR><BR>By Samuel K. Moore<BR><BR>Image: Sandia<BR><BR>Trouble Ahead: More cores per chip will slow some programs [red] unless<BR>there’s a big boost in memory bandwidth [yellow<BR><BR>With no other way to improve the performance of processors further, chip<BR>makers have staked their future on putting more and more processor cores on<BR>the same chip. Engineers at Sandia National Laboratories, in New Mexico, have<BR>simulated future high-performance computers containing the 8-core, 16‑core,<BR>and 32-core microprocessors that chip makers say are the future of the<BR>industry. The results are distressing. Because of limited memory bandwidth<BR>and memory-management schemes that are poorly suited to supercomputers, the<BR>performance of these machines would level off or even decline with more<BR>cores. The performance is especially bad for informatics<BR>applications—data-intensive programs that are increasingly crucial to the<BR>labs’ national security function.<BR><BR>High-performance computing has historically focused on solving differential<BR>equations describing physical systems, such as Earth’s atmosphere or a<BR>hydrogen bomb’s fission trigger. These systems lend themselves to being<BR>divided up into grids, so the physical system can, to a degree, be mapped to<BR>the physical location of processors or processor cores, thus minimizing<BR>delays in moving data.<BR><BR>But an increasing number of important science and engineering problems—not to<BR>mention national security problems—are of a different sort. These fall under<BR>the general category of informatics and include calculating what happens to a<BR>transportation network during a natural disaster and searching for patterns<BR>that predict terrorist attacks or nuclear proliferation failures. These<BR>operations often require sifting through enormous databases of information.<BR><BR>For informatics, more cores doesn’t mean better performance [see red line in<BR>“Trouble Ahead”], according to Sandia’s simulation. “After about 8 cores,<BR>there’s no improvement,” says James Peery, director of computation,<BR>computers, information, and mathematics at Sandia. “At 16 cores, it looks<BR>like 2.” Over the past year, the Sandia team has discussed the results widely<BR>with chip makers, supercomputer designers, and users of high-performance<BR>computers. Unless computer architects find a solution, Peery and others<BR>expect that supercomputer programmers will either turn off the extra cores or<BR>use them for something ancillary to the main problem.<BR><BR>At the heart of the trouble is the so-called memory wall—the growing<BR>disparity between how fast a CPU can operate on data and how fast it can get<BR>the data it needs. Although the number of cores per processor is increasing,<BR>the number of connections from the chip to the rest of the computer is not.<BR>So keeping all the cores fed with data is a problem. In informatics<BR>applications, the problem is worse, explains Richard C. 
Murphy, a senior<BR>member of the technical staff at Sandia, because there is no physical<BR>relationship between what a processor may be working on and where the next<BR>set of data it needs may reside. Instead of being in the cache of the core<BR>next door, the data may be on a DRAM chip in a rack 20 meters away and need<BR>to leave the chip, pass through one or more routers and optical fibers, and<BR>find its way onto the processor.<BR><BR>In an effort to get things back on track, this year the U.S. Department of<BR>Energy formed the Institute for Advanced Architectures and Algorithms.<BR>Located at Sandia and at Oak Ridge National Laboratory, in Tennessee, the<BR>institute’s work will be to figure out what high-performance computer<BR>architectures will be needed five to 10 years from now and help steer the<BR>industry in that direction.<BR><BR>“The key to solving this bottleneck is tighter, and maybe smarter,<BR>integration of memory and processors,” says Peery. For its part, Sandia is<BR>exploring the impact of stacking memory chips atop processors to improve<BR>memory bandwidth.<BR><BR>The results, in simulation at least, are promising [see yellow line in<BR>“Trouble Ahead<BR><BR>_______________________________________________<BR>Beowulf mailing list, Beowulf@beowulf.org<BR>To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf<BR></P></div></body></html>