Reliability of Beowulf

Toof Chanon tooff2 at
Mon Sep 16 05:48:36 PDT 2002


        I am new to Beowulf and to computer
architecture in general, so please correct me if I am
wrong: I don't understand how a Beowulf PC cluster
can manage huge memory usage. Each PC, i.e. each
Beowulf node, provides only a 32-bit address space, so
for an array A(n) we need n < (2**31-1)/nb, where nb is
the number of bytes per element; e.g. for a real*8
array, at most A(268435455), depending on the (Fortran)
compiler. Currently, 2-3 GB of physical RAM is the PC
limit. Suppose I have 2 nodes; then 4 GB is the total
physical RAM. I know there are Linux kernels that
support > 2 GB of RAM, but do I also need a special
compiler that supports > 2 GB addressing? If yes, I
guess such a compiler may be too expensive, and there
would be no advantage in going Beowulf. I believe that
Beowulf is supposed to be a set of cheap and robust
machines. My point is:

How can a Beowulf cluster manage huge memory (> 2 GB)
using a common 32-bit address extent?

Is it true that Beowulf provides only parallel
processing of jobs, but does not tackle huge problems
with big array requirements/management, so that we
would have to turn to a supercomputer again?

This may be a stupid question, but I am a student
looking for a high-performance machine for doing
research, and Beowulf is the only one worth going for.

So I would like your suggestions to help convince my
supervisor to go with Beowulf for a big problem.

Any suggestion will be appreciated.


Toof Chanon

