[Beowulf] adding a cluster to a traditional supercomputer

Gerry Creager N5JXS gerry.creager at tamu.edu
Thu Feb 9 20:44:45 PST 2006

While we're discussing interesting variations: is anyone here familiar 
with the IBM T5+ hardware, and the potential for 8x16-way nodes with 
16GB/node... nominally shared memory but supposedly reconfigurable 
(dynamically) for NUMA?

Thoughts, experiences, suspicions are all welcome.

Thanks, gerry

Florent Calvayrac wrote:
> Dear list,
> Some colleagues have at last decided to give up upgrading their
> traditional shared-memory supercomputer, which I have always found a
> waste of money: its 32 cores give it less power than our 100+-node
> cluster, at six times the price. I find that traditional big iron is
> only justified above 128 or 256 processors, where the excellent memory
> coupling gives an advantage on certain codes, and only if it is not
> shared among n+1 users
> (so the science has to be good...).
> Anyway, those colleagues now want to spend some money on a cluster and 
> are new to the field.
> They wish to keep their current setup (excellent AC, UPS, dust-free 
> floor...)
> and above all their very good file system on the front node of their
> "old" 32-way SMP, which comes with a cluster file system,
> SAN storage with Fibre Channel, etc.
> They want to go for a bunch (depending on final budget)
> of racked Myrinet- or Infiniband-connected 4-way dual-core Opteron nodes 
> (some users need OpenMP on 8 cores), but
> the question is open for the filesystem: should they rely on NFS over
> Gigabit Ethernet and hit their "old" fileserver (which is excellent),
> with some fears for heavy-I/O codes; add Fibre Channel ports to
> some (or all) nodes, onto which the I/O-bound codes would be dispatched
> with a good PBS setup (the best solution, but expensive); or add
> another, dedicated NFS server with
> several Gigabit Ethernet cards, each serving only part of the new 
> cluster,
> this "new" fileserver itself getting its files via CFS, Fibre Channel, 
> or even NFS
> from the "old" fileserver?
> They invited me to the discussion, but having not much practice with 
> I/O-bound
> codes, I could not really answer those questions.
> Any comments, suggestions, ideas?
> Thanks in advance
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf
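
On the PBS-dispatch option below: one common way to steer I/O-bound jobs 
onto the FC-equipped subset is node properties. A rough sketch, assuming 
Torque/OpenPBS conventions (node names, the "fcio" property, and the NFS 
interface names are all illustrative, not from Florent's setup):

```shell
# server_priv/nodes on the PBS server: tag nodes that got FC ports
# with an arbitrary property, here "fcio"
node01 np=8 fcio
node02 np=8 fcio
node03 np=8
node04 np=8

# I/O-heavy jobs request the property explicitly and land on FC nodes:
qsub -l nodes=1:ppn=8:fcio heavy_io_job.sh

# ordinary compute jobs just ask for cores and can run anywhere:
qsub -l nodes=1:ppn=8 compute_job.sh

# For the dedicated-NFS-server variant, splitting load across several
# GigE cards is usually done by giving each card its own address and
# pointing each part of the cluster at a different one, e.g. in
# /etc/fstab on rack-A nodes:
#   fs-eth1:/export  /data  nfs  rw,tcp,rsize=32768,wsize=32768  0 0
# and on rack-B nodes:
#   fs-eth2:/export  /data  nfs  rw,tcp,rsize=32768,wsize=32768  0 0
```

That keeps the scheduler, not the users, responsible for matching codes 
to hardware, which matters once the cluster is shared.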

Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University	
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843
