[Beowulf] Computation on the head node

Joe Landman landman at scalableinformatics.com
Sun May 18 17:37:34 PDT 2008


Vlad Manea wrote:
> Hi Joe,
> 
> The codes are not commercial; they are developed together with
> Mike Gurnis's group at Caltech (CIG). The codes are parallel and use
> MPICH. For the moment I use a small 24-port Gigabit switch
> from Netgear (it is full-duplex and supports jumbo frames...).
> I have a very tight budget, but next year, if one of our big proposals
> is accepted, I would go for a good switch. Can you give me a TIP here? Thx!

ProCurve 2900-48G.  Excellent properties.  A bit pricey, but overall an 
extremely good switch.
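
Once the switch is in place, it is worth spending ten minutes measuring what 
the fabric actually delivers rather than trusting the spec sheet.  A simple 
MPI ping-pong between two compute nodes will show you the port-to-port 
latency through the switch.  A rough sketch (the file name, message size, and 
iteration count are purely illustrative; build with mpicc, launch with 
mpirun -np 2 across two different nodes):

/* pingpong.c : crude MPI latency check between two nodes (illustrative) */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 1000;
    char buf[8] = {0};                 /* small message: latency dominated */
    double t0, t1;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency ~ %.1f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

On decent gigabit with reasonable NICs you typically see something in the 
tens of microseconds; if it is much worse than that, look at the NICs and 
driver settings before blaming the switch.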

> The test cases are not intended to be large;
> rather, they are at most a few GB in size (if they are 3D cases; 2D cases
> are much smaller). The compute nodes are Dell PowerEdge SC1435s
> with 4 AMD Opterons and 8 GB RAM each.  The NICs are on-board, probably not
> the best solution, I think... I can probably negotiate with Dell
> for an upgrade here to the Intel PRO/1000 PT dual-port NIC (Cu, PCIe)...
> 
> Vlad
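
The on-board NICs may well be fine; before paying for the Intel cards I 
would measure what you actually get.  The same kind of two-node MPI test, 
run with large messages, will tell you whether the path sustains something 
near wire speed, and whether jumbo frames are buying you anything.  Another 
rough sketch along the same lines (again, the name, message size, and 
iteration count are arbitrary):

/* bw.c : crude MPI point-to-point bandwidth check (illustrative) */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MSG (4*1024*1024)              /* 4 MB messages */

int main(int argc, char **argv)
{
    int rank, i, iters = 100;
    char *buf = malloc(MSG);
    double t0, t1;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, MSG);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0)
            MPI_Send(buf, MSG, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, MSG, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("bandwidth ~ %.0f MB/s\n",
               (double)MSG * iters / (t1 - t0) / 1e6);

    MPI_Finalize();
    free(buf);
    return 0;
}

Run it once with the stock MTU and again with jumbo frames enabled end to 
end; a healthy gigabit link should land somewhere around 100 MB/s or a bit 
above.  If it comes in well below that, the on-board NIC and its driver are 
the first things I would look at.
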
> ______________________
> 
> Joe Landman wrote:
>> Vlad Manea wrote:
>>> Hi Joe,
>>>
>>> Thanks. I will probably go with ROCKS.
>>> For the moment I have 5 machines with 4 AMD CPUs each,
>>> and I will use one as the head node and the other 4 as compute nodes.
>>> The cluster will be dedicated to running fluid dynamics codes.
>>
>> Hi Vlad:
>>
>>   Are these locally developed codes or commercial ones?
>>
>>> As for IO, I use a Gigabit switch with 48 Gb/s of backplane bandwidth,
>>> which will probably be sufficient for a while...
>>
>>   Possibly.  Which switch are you going to use?  For MPI you want to 
>> optimize the port-to-port latency (and make sure you have good NICs on the 
>> units).  For file IO, if these are Fluent runs, how large are the case files?  
>> We have customers with 20+ GB case files these days.
>>
>>> I also intend to use both NICs on my servers.
>>> However, the cluster I intend to build is more experimental
>>> and will probably be limited to 32 (64, if $$ available...) nodes.
>>
>>   Ok.  I might suggest focusing some of your money on the gigabit 
>> infrastructure (general case) ... good gigabit switches can have a positive 
>> impact upon multiple other subsystems.
>>
>> Joe
>>
>>>
>>> Vlad
>>> _____________
>>
>>
> 
> 
> -- 
> Dr. Vlad Constantin Manea
> Professor of Geophysics
> Computational Geodynamics Lab. <http://www.geociencias.unam.mx/geodinamica>
> Centro de Geociencias,
> Campus UNAM, Juriquilla,
> Blvd Juriquilla 3001,
> Juriquilla, Querétaro, 76230,
> México.
> phone: +52 55 5623 4104/ext.133
> fax: (55) 5623-4129


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
        http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 866 888 3112
cell : +1 734 612 4615


