[Beowulf] Selection from processor choices; Requesting Guidance
laytonjb at charter.net
Thu Jun 15 15:34:32 PDT 2006
I sort of agree :)
We've seen dual-core do very well on most CFD applications. For instance,
switching to dual-core on Fluent only results in about a 5% loss of performance.
On other CFD codes the difference is in the noise. So I would recommend
dual-core.
I echo Michael's comments about 1 GB per core. I think of that as the
minimum; you should think about 2 GB per core.
As for the interconnect.... this gets a bit more involved. IMHO the choice depends
upon how many cores you will use in a single job. If you run a small number
of cores, then GigE is just great. You can even try something like GAMMA if you
want to reduce the latency. I've gotten some pretty good results with GAMMA vs.
plain-jane GigE. But be careful which GigE NICs you choose. The Intel NICs are
so much better than the standard Broadcom NICs. Plus they are tunable, so you
can adjust the drivers to your code.
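To make "tunable" concrete: interrupt coalescing is the usual knob, trading CPU overhead against latency. A rough sketch of what that tuning looks like (which parameters are actually supported depends on the driver; eth0 is a placeholder):

```shell
# Show the current interrupt-coalescing settings (driver-dependent).
ethtool -c eth0

# Reduce receive interrupt coalescing for lower latency,
# at the cost of more interrupts and CPU overhead.
ethtool -C eth0 rx-usecs 10

# With Intel's e1000 driver, a similar effect is available as a
# module parameter at load time:
#   modprobe e1000 InterruptThrottleRate=8000
```

Whether lower coalescing actually helps depends on the message pattern; for latency-bound small-message codes it usually does.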
At the next level I would recommend Level 5 (now Solarflare). Their low-latency
GigE NICs are pretty nice. One of the coolest things is that you don't have to rebuild
your codes to use them. This REALLY helps with ISV applications. One example CFD code
I tested with Level 5 showed about half of the IB performance gain
(relative to GigE) at less than 1/3 the cost. So for this code it was a
clear price/performance win.
Other codes do very well on other networks vs. IB. I can't say too much since the
results were done by my company. :)
However, I'm not always convinced that IB is the right way to go with CFD codes.
Yes, you get the ultimate performance. Yes, it allows codes to scale better (though
I've seen good CFD codes scale at over 80% efficiency on GigE at 200 CPUs). But I'm not
convinced it's a price/performance winner, particularly if each of your jobs only
runs on a modest number of processors.
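To put the latency argument in concrete terms, here's a back-of-the-envelope message-cost model (my own sketch; the latency and bandwidth figures are assumed ballpark values for hardware of this era, not measurements): time per message = latency + size/bandwidth. For the small messages typical of CFD halo exchanges, latency dominates, which is why a low-latency GigE stack closes much of the gap to IB.

```python
# Rough message-cost model: t = latency + size / bandwidth.
# The figures below are assumed ballpark values, not measurements.
NETWORKS = {
    "GigE":       {"latency_us": 50.0, "bw_MBps": 110.0},
    "GigE+GAMMA": {"latency_us": 12.0, "bw_MBps": 110.0},
    "InfiniBand": {"latency_us": 5.0,  "bw_MBps": 900.0},
}

def msg_time_us(net, size_bytes):
    """Time to send one message, in microseconds."""
    p = NETWORKS[net]
    # bytes / (MB/s) comes out in microseconds: 1 MB/s == 1 byte/us.
    return p["latency_us"] + size_bytes / p["bw_MBps"]

for size in (128, 1024, 1_000_000):
    for net in NETWORKS:
        print(f"{net:12s} {size:>8d} B  {msg_time_us(net, size):10.1f} us")
```

With these assumed numbers, a 128-byte message on plain GigE is dominated almost entirely by the 50 us latency, so GAMMA-style latency reduction buys far more than extra bandwidth would; for megabyte-sized messages the bandwidth term takes over.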
Anyway - my 2 cents :)
> Dual-CPU, single-core Opteron. Make sure that 1 GB of RAM is enough
> for your application.
> Also consider a low-latency interconnect, e.g. InfiniBand, because I
> have seen cases where CFD exchanges a lot of small messages.
> Michael Will
> SE Technical Lead
> Penguin Computing
> -----Original Message-----
> From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org]
> On Behalf Of Mikhail Kuzminsky
> Sent: Thursday, June 15, 2006 6:07 AM
> To: amjad ali
> Cc: beowulf at beowulf.org
> Subject: Re: [Beowulf] Selection from processor choices; Requesting Guidance
> In message from "amjad ali" <amjad11 at gmail.com> (Thu, 15 Jun 2006
> 04:02:12 -0400):
> >Hi ALL
> >We are going to build a true Beowulf cluster for Numerical Simulation
> >of Computational Fluid Dynamics (CFD) models at our university. My
> >question is: which of the following processor choices is best for us,
> >given a fixed budget:
> > 1.
> > One processor at each of the compute nodes
> > 2.
> > Two processors (on one mother board) at each of the compute nodes
> > 3.
> > Two Processors (each one dual-core processor) (total 4 cores on the
> > board) at each compute nodes.
> > 4.
> > four processor (on one mother board) at each of the compute nodes.
> > Initially, we are planning to use a Gigabit Ethernet switch and 1 GB
> >of RAM at each node.
> I've heard many times that memory throughput is extremely important in
> CFD, and that using one CPU/one core per node (or two single-core
> Opterons with independent memory channels) is in some cases better
> than any sharing of memory bus(es).
> Mikhail Kuzminsky
> Zelinsky Institute of Organic Chemistry
> >Please guide me on how much parallel programming will differ for the
> >above four choices of processing nodes.
> >with best regards:
> >Amjad Ali.
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf