[Beowulf] Distributed FS (Was: copying big files)

Carsten Aulbert carsten.aulbert at aei.mpg.de
Thu Aug 14 07:26:52 PDT 2008

Hi Mark

Mark Hahn wrote:

> the premise of this approach is that whoever is using the node doesn't
> mind the overhead of external accesses.  do you have a sense (or even
> measurements) on how bad this loss is (cpu, cache, memory, interconnect
> overheads)?  if you follow the reasoning that current machines are
> pretty 'fat' wrt IB bandwidth and cpu power, there's still a question
> of who does the work of raid/fec - ideally, it would be on the client
> side to minimize the imposed jitter.

As always: it depends. All our nodes have a single GigE link, but most of
their computations are non-MPI and even local to a single core, i.e.
bandwidth should not be a problem. Of course you add more heat to the
system, e.g. 1000 extra disks might draw around 10 kW sustained, but OTOH
you gain a lot, provided you can use these extra disks efficiently. I
need to look into PVFS; if it provides some kind of uniform namespace
(and maybe some kind of automatic file replication), that would already
be perfect. But I need to read up first.
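As a back-of-envelope check of the 10 kW figure, assuming a typical
sustained draw of roughly 10 W per spinning disk (an assumed value, not a
measurement):

```python
# Rough estimate of the extra sustained heat load from adding disks.
# The 10 W per-disk figure is an assumption for a typical 3.5" drive.
watts_per_disk = 10           # assumed sustained draw per disk (W)
num_disks = 1000

total_kw = num_disks * watts_per_disk / 1000  # convert W to kW
print(total_kw)               # ~10 kW of extra sustained load
```

So 1000 disks at ~10 W each indeed lands at about 10 kW, consistent with
the estimate above.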



Dr. Carsten Aulbert - Max Planck Institute for Gravitational Physics
Callinstrasse 38, 30167 Hannover, Germany
Phone/Fax: +49 511 762-17185 / -17193
http://www.top500.org/system/9234 | http://www.top500.org/connfam/6/list/31
