[Beowulf] How much RAM per core is right?
Greg Lindahl
lindahl at pbm.com
Fri Jul 18 11:43:49 PDT 2008
On Thu, Jul 17, 2008 at 11:47:16AM -0400, Gus Correa wrote:
> How much memory per core/processor is right for a Beowulf cluster node?
It really depends on your apps. Some people spend more than 50% of
their $$ on RAM, others only need a few hundred megabytes per node. A
few years ago, 1 GB/core was a number that many clusters used, but I
suspect it's crept up since then.
> In any case, at this point it seems to me that
> "get as much RAM as your money can buy and your motherboard can fit" may
> not be a wise choice.
> Is there anybody out there using 64 or 128GB per node?
Sure, because their problems call for it. For example, many CFD
computations are just trying to find steady-state airflow around an
object. These computations don't run for very many timesteps, but they
use a very big grid and huge messages.
Now in your case it sounds like you know how much RAM to buy, given
your experience on your existing machine. You can project to your new
cluster: "I have $X. If I buy P cores with 1 GB/core, that gives me an
N*N*L grid, and it will take H hours to finish a 1000-year run. OK, that
finishes too quickly, so I'll buy fewer cores and more memory, run a
bigger grid, and that takes longer..." Iterate until done.
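If it helps, here's a minimal back-of-envelope sketch of that iteration in
Python. Every constant in it (costs, bytes per grid point, time per point
per step, steps per model year) is a made-up placeholder; you'd calibrate
them against timings from your existing machine before trusting the output.

# Rough cluster-sizing sketch. All constants are hypothetical
# placeholders; replace them with numbers measured on your current runs.

BUDGET = 100_000                # "$X" total to spend
COST_PER_CORE = 250             # $ per core, incl. share of node/network
COST_PER_GB = 40                # $ per GB of RAM
BYTES_PER_POINT = 800           # memory per grid point (all fields)
SECONDS_PER_POINT_STEP = 2e-7   # per-core time per grid point per timestep
STEPS_PER_MODEL_YEAR = 50_000
MODEL_YEARS = 1000

def plan(cores, gb_per_core):
    """Return (dollars, total grid points, wall-clock hours) for one config."""
    dollars = cores * COST_PER_CORE + cores * gb_per_core * COST_PER_GB
    points = cores * gb_per_core * 2**30 // BYTES_PER_POINT   # N*N*L total
    hours = (points * STEPS_PER_MODEL_YEAR * MODEL_YEARS
             * SECONDS_PER_POINT_STEP / cores / 3600)
    return dollars, points, hours

# Iterate until done: trade cores against memory, watch cost and runtime.
for cores, gb in [(512, 1), (256, 2), (128, 4)]:
    dollars, points, hours = plan(cores, gb)
    print(f"{cores} cores x {gb} GB/core: ${dollars:,}, "
          f"{points:.2e} grid points, {hours:,.0f} h for a {MODEL_YEARS}-yr run")

Run it with a few core/memory splits under the same budget and you can see
at a glance which configuration finishes "too quickly" and which one gives
you the bigger grid you actually want.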
BTW, you said it was N**4: isn't the vertical direction treated very
differently from lat/lon?
-- greg