Opinions needed on new system

Douglas Eadline deadline at plogic.com
Thu Jul 20 12:24:58 PDT 2000


On Thu, 20 Jul 2000, Jack Wathey wrote:

> 
> I'm no hardware guru, so I won't comment on your specific configuration,
> but I do have a general suggestion.  If you have not yet written the
> parallel application that you intend to run on the cluster, then consider
> buying only 2 nodes, or just scrounge up two old PCs and connect them
> with a crossover cable.  Do as much designing and coding of your
> application software as possible on that minicluster.  This could easily
> take weeks or months, depending on the complexity of your problem.  It
> will give you time to learn whatever message-passing software you plan
> to use.  During that time, prices will probably fall, and better hardware
> will probably come on the market.  When your code is fully up and running
> on your minicluster, then go out and buy the real thing.
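
For what it's worth, the network side of Jack's two-box suggestion is
small: a crossover cable between the two NICs, an entry for each machine
in /etc/hosts, and a machine list for your MPI launcher.  The names and
addresses below are just placeholders, and the -machinefile flag assumes
an MPICH-style mpirun:

  /etc/hosts (same on both boxes):

      192.168.1.1   node1
      192.168.1.2   node2

  machines file:

      node1
      node2

  $ mpirun -np 2 -machinefile machines ./yourprog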

Also, you can even write parallel code on a single system, then move it to
a cluster.  While you will not be able to test communication performance,
you can get quite a lot done on a single machine.  I have prototyped PVM
and MPI codes on my laptop.
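
For a quick sanity check, something as small as the MPI "hello world"
below (Fortran 77 style, since the application is planned in FORTRAN)
builds and runs with several processes on one box.  The mpif77/mpirun
names assume an MPICH- or LAM-style install; use whatever wrappers your
MPI provides.

      program hello
      implicit none
      include 'mpif.h'
      integer ierr, rank, nprocs
c     start MPI and ask for this process's rank and the total count
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
      print *, 'hello from rank', rank, ' of ', nprocs
c     shut MPI down cleanly
      call MPI_FINALIZE(ierr)
      end

  $ mpif77 -o hello hello.f
  $ mpirun -np 4 ./hello

All four processes run on the local machine, which is enough to shake out
most of the message-passing logic before the real cluster shows up.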

Doug

> 
> Best wishes,
> Jack
> 
> 
> On Wed, 19 Jul 2000, Jeromy Hollenshead wrote:
> 
> > We are thinking of setting up a cluster of computers, and I have spec'd out
> > the following to be sent out for bids.  Is there anything obviously wrong
> > with my setup?  We already have a 19-inch rack and a 48-port HP switch that
> > we are going to use.
> > 
> > Does anyone have any suggestions on the type of network card to ask for?
> > Are there specific chipsets that work well under Linux?  I have seen the
> > HOWTO, and it seemed any of the 100 Mbit PCI cards should work.
> > 
> > We are planning to write our application in FORTRAN.  Is the Portland Group
> > compiler/profiler/debugger suite suitable for this, or are there better options?
> > 
> > Does anyone have any experience with the new Thunderbird Processors?
> > 
> > Any suggestions?  We are only able to spend around $22,000 USD.
> > 
> > Thanks,
> > 
> > Jeromy
> >  
> > 
> > 
> > 
> > 
> > Host System
> > -----------------
> > AMD Athlon 750 Thunderbird Processor
> > 512M ECC SDRAM (PC133)
> > ATX Motherboard (KT133 chipset)
> > (2) x 10/100 Ethernet adapter
> > Matrox G400 AGP graphics card
> > RAID 5, 4x 9 GB SCSI hard drives, 10,000 rpm
> > 19 inch monitor (1280x1024)
> > keyboard, mouse (Logitech or Microsoft), CD-ROM (> 32X)
> > 3.5" floppy drive
> > 19" rack mount enclosure with hot-swappable drives for RAID
> > Chassis slides
> > Cat 5 TP network cables, 7'
> > Cables to connect floppy, hard drives, and CD-ROM
> > 
> > 
> > Eight Nodes - Each includes
> > ---------------------------------
> > AMD Athlon 750 Thunderbird Processor
> > ATX Motherboard  (KT133 chipset)
> > 512M ECC SDRAM (PC133)
> > 10/100 Ethernet adapter
> > (open pci slot for future ethernet card)
> > EIDE hard drive (10-20 GB), DMA/66
> > 3.5" floppy drive, graphics card
> > 19" rack mount enclosure
> > Chassis slides
> > Cat 5 TP network cables, 7'
> > Cables to connect floppy and hard drives
> > 
> > 
> > Software/System (pre-loaded)
> > ---------------------------------------
> > Red Hat Linux 
> > PVM and MPI installed
> > Integration of Host system with Nodes
> > Portland Group Compilers
> > (PGHPF) parallel FORTRAN for clusters
> > (PGDBG) symbolic debugger
> > (PGPROF) performance profiler
> > Batch system (such as PBS)
> > 
> > 
> 
> 
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
> 

-- 
-------------------------------------------------------------------
Paralogic, Inc.           |     PEAK     |      Voice:+610.814.2800
130 Webster Street        |   PARALLEL   |        Fax:+610.814.5844
Bethlehem, PA 18015 USA   |  PERFORMANCE |    http://www.plogic.com
-------------------------------------------------------------------




