[Beowulf] Intro question

Robert G. Brown rgb at phy.duke.edu
Wed Dec 3 07:33:27 PST 2008

On Wed, 3 Dec 2008, malcolm croucher wrote:

> It's going to be used for computational chemistry -- not academic, but more
> private / entrepreneurial. I've been doing a lot of research in this area for
> a while and was hoping to do some more on my own.

Any idea of the specific software you plan to use?  Or do you plan to
write your own?  There are lots of people on-list who can help you,
e.g. estimate the likely task granularity, if you identify your toolset
(your existing toolset only, not what you hope to invent:-).  Basically,
more important than the case
you plan to put your system(s) in is the balance between computation
(computer cores at some given clock), memory (bandwidth and contention
between cores and memory), and interprocessor communications both within
a system (between one core/thread and another) and between systems
(network based IPCs).  Each of the pathways from a core outward has an
associated cost in latency and bandwidth, and very different investment
strategies will yield the best bang for a limited supply of bucks for
different "kinds" of parallel problems.  So the very first step of
cluster engineering is typically to analyze your tasks' patterns of
computation, memory access and interprocessor communication.  Once that
is known, it is usually possible to identify (for example) whether it is
better to have fewer processors and a faster network or more processors
and a slower network.  Since a really fast network can cost as much as two
or more cores and since one has to balance network needs against ALL the
cores per chassis, this can be a significant tradeoff.  Ditto for tasks
that tend to be memory bound -- in that case one might want to opt for
fewer cores per box to ensure that each core can access memory at full
speed with minimal lost efficiency due to contention.
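
The network-versus-cores tradeoff above can be sketched with a toy cost
model.  This is only an illustration -- every number in it (work size,
message counts, latencies, bandwidths) is a made-up assumption, not a
benchmark of any real hardware:

```python
# Toy cost model for choosing between network speed and core count.
# All numbers below are made-up illustrative assumptions, not benchmarks.

def parallel_time(work_s, n_nodes, msgs_per_node, latency_s, msg_bytes, bw_bytes_s):
    """Estimated wall time: evenly split compute plus per-node communication."""
    compute = work_s / n_nodes                                    # ideal division of labor
    comms = msgs_per_node * (latency_s + msg_bytes / bw_bytes_s)  # latency + transfer cost
    return compute + comms

work = 100.0          # 100 s of serial work (hypothetical job)
size = 8_192          # 8 kB messages (hypothetical)

for msgs, grain in ((1_000, "coarse"), (100_000, "fine")):
    slow = parallel_time(work, 8, msgs, 50e-6, size, 125e6)    # ~gigabit Ethernet class
    fast = parallel_time(work, 8, msgs, 2e-6, size, 1.25e9)    # ~low-latency fabric class
    print(f"{grain}-grained: slow net {work/slow:.1f}x, fast net {work/fast:.1f}x speedup")
```

With these invented numbers the coarse-grained job barely notices the
network, while the fine-grained one loses roughly half its speedup on
the slow fabric -- which is exactly why the task analysis has to come
before the purchase order.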


> On Wed, Dec 3, 2008 at 2:11 AM, Robert G. Brown <rgb at phy.duke.edu> wrote:
>       On Tue, 2 Dec 2008, Lombard, David N wrote:
>             An acoustic concern. A 1U is quite a bit louder than
>             the normal desktop as (a) they use itty-bitty fans and
>             (b) there's no incentive to make them quiet, as nobody
>             is expected to have to put up with their screaming...
> A good point.  I actually like Greg's suggestion best -- consider
> (fewer) 2U nodes instead -- quieter, more robust, cooler.  Perhaps four,
> but that strongly depends on the kind of thing you are trying to do --
> tell us what it is, if you can do so without having to kill us, and
> we'll try to help you estimate your communications issues and likely
> bottlenecks.
> For some tasks you are best off getting as few actual boxes as possible
> with as many CPU cores per box as possible.  For others, having more
> boxes and fewer cores per box will be right.
> The reason I like four nodes with at least a couple of cores each is
> that if you don't KNOW what you are likely to need, you can (probably)
> find out with this many nodes and then "fix" your design if/when you
> scale up into production.  Otherwise you buy eight single-core nodes
> (if they still make single cores:-) and then learn that you would have
> been much better off buying a single eight-core node.  Or vice versa.
>   rgb
>       --
>       David N. Lombard, Intel, Irvine, CA
>       I do not speak for Intel Corporation; all comments are
>       strictly my own.
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
> Robert G. Brown                            Phone(cell): 1-919-280-8443
> Duke University Physics Dept, Box 90305
> Durham, N.C. 27708-0305
> Web: http://www.phy.duke.edu/~rgb
> Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php
> Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
> --
> Malcolm A.B Croucher
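
The quoted advice -- measure on a small testbed, then "fix" the design
before scaling up -- can be given numbers with a back-of-envelope
Amdahl's-law extrapolation.  The measured speedup below is a made-up
example, not real data from any chemistry code:

```python
# Back-of-envelope Amdahl's-law extrapolation from a small testbed.
# The 3.2x figure is a hypothetical measurement, not real data.

def serial_fraction(n, speedup):
    """Infer the serial fraction s from one measured speedup on n cores."""
    return (n / speedup - 1) / (n - 1)

def predicted_speedup(n, s):
    """Amdahl's law: speedup on n cores given serial fraction s."""
    return 1.0 / (s + (1.0 - s) / n)

s = serial_fraction(4, 3.2)          # e.g. 3.2x measured on a 4-core testbed
for n in (8, 16, 32):
    print(f"{n:3d} cores: predicted speedup {predicted_speedup(n, s):.1f}x")
```

A single measured point like this is crude -- it ignores communication
costs entirely -- but it is often enough to tell you whether buying more
cores will pay off at all before you commit to a full rack.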

Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu
