[Beowulf] Purdue Supercomputer

Mark Hahn hahn at mcmaster.ca
Wed May 7 06:33:06 PDT 2008


> everything was going. This morning, we hit the last few mis-installs. Our DOA 
> nodes were around 1% of the total order..

one advantage of having the vendor pre-rack is that they usually also
pre-test.  did you consider having Dell pre-assemble the cluster, and
reject that option for cost reasons?

> The physical networking was done in a new way for us.. We used a large 
> Foundry switch and the MRJ21 cabling system for it. Each rack gets 24 nodes, 
> a 24 port passive patch panel, and 4 MRJ21 cables that run back to the

if I understand, this means each node has a 1Gb link to a large switch,
right?  I'm a little surprised this was cost-effective - what is the intended
workload of the cluster?  (I mean given that Gb is usually considered
high-latency and low-bandwidth.)  I'd be curious to hear about your
consideration of both 10G and IB.
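the per-rack cabling arithmetic in the quoted description can be
sanity-checked with a quick sketch (the six-links-per-MRJ21-trunk figure
is my assumption about that cabling system, not something stated in the
post):

```python
# Sketch: check that the quoted rack wiring is internally consistent.
nodes_per_rack = 24        # stated in the original post
mrj21_trunks_per_rack = 4  # stated in the original post

# Assumption: each MRJ21 trunk aggregates the remaining ratio of
# 1000BASE-T links; here that works out to 6 links per trunk cable.
links_per_trunk = nodes_per_rack // mrj21_trunks_per_rack

print(links_per_trunk)  # -> 6, i.e. one 1Gb link per node, 6 per trunk
```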


