CCL:Largest Linux Cluster? (fwd)
Craig Tierney
ctierney at hpti.com
Thu Jan 24 14:54:57 PST 2002
On Thu, Jan 24, 2002 at 02:44:59PM -0700, Art Edwards wrote:
> On Thu, Jan 24, 2002 at 10:17:28AM -0800, alvin at Maggie.Linux-Consulting.com wrote:
> >
> > hi art
> >
> > On Thu, 24 Jan 2002, Art Edwards wrote:
> >
> > > On Thu, Jan 24, 2002 at 05:55:24PM +0100, Eugene Leitl wrote:
> >
> > ...
> >
> > > > Can anyone tell me what is currently the largest linux-based workstation
> > > > cluster that has been successfully deployed and is being used for
> > > > computational chemistry studies? (largest = number of nodes regardless of
> > > > the speed of each node).
> > > >
> > > Sandia National Laboratories has C-Plant that runs Linux in addition to several
> > > layers of home-grown OS software on several thousand nodes. The basic node is a DEC
> > > Alpha EV6 with Myrinet. They use no local disk, opting for a huge disk farm.
> >
> > do you happen to know how they manage the huge disk farm???
> > - presumably raid5 systems...
> > - is each raid5 sub-system dual-hosted, so that the other cpu
> > can get to the data if one of the cpus can't get to it
> > - do all nodes access the "disk farm" through the gigabit ethernet
> > or dual-hosted scsi cables ??
> > - how does one optimize a disk farm ?? (hdparm seems too clumsy)
> >
> > -- in the old days.... 1980s ... there used to be dual-hosted
> > disk controllers where PC-HOST#1 and PC-HOST#2 could both access the same
> > physical CDC/DEC/Fujitsu drives
> > - wish I could find these dual-host scsi controllers for today's PCs
> That is part of the home-grown software. There are parallel IO ports that require
> special calls. I'm a user, not a developer, so that is the extent of my expertise.
Sandia's IO system does not fall into the 'today's PC' category.
You don't need a dual-ported SCSI controller if you have a really
big system. Why not just install 8 Fibre Channel cards in one machine and stripe
across them? Then install 8-16 (or however many you want) GigE cards to provide the
bandwidth to the ENFS servers that provide the IO to the nodes.
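
Back of the envelope, and only roughly: 8 Fibre Channel cards at on the
order of 100 MB/s each is something like 800 MB/s of disk bandwidth, and
8-16 GigE cards at roughly 100 MB/s apiece is a comparable 0.8-1.6 GB/s
out toward the nodes, so the two sides stay balanced.

The striping half is normally done below the filesystem by the kernel's
software RAID0 (md) driver, but the idea is simple enough to sketch. Here
is a toy illustration in Python, not Sandia's actual software; the
/mnt/fc* mount points, the chunk size, and the file names are made up,
and each mount point is assumed to be a filesystem on its own Fibre
Channel-attached device:

# Toy sketch: spread a file's chunks round-robin across several mount
# points so that a large write exercises every channel at once.
# All paths below are hypothetical.
import os

STRIPE_TARGETS = ["/mnt/fc0", "/mnt/fc1", "/mnt/fc2", "/mnt/fc3"]
CHUNK_SIZE = 4 * 1024 * 1024   # 4 MB stripe unit

def stripe_out(src_path, name):
    # Chunk i of src_path lands in STRIPE_TARGETS[i % N] as <name>.<i>
    chunks = 0
    with open(src_path, "rb") as src:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            target_dir = STRIPE_TARGETS[chunks % len(STRIPE_TARGETS)]
            with open(os.path.join(target_dir, "%s.%d" % (name, chunks)), "wb") as out:
                out.write(chunk)
            chunks += 1
    return chunks

if __name__ == "__main__":
    n = stripe_out("/tmp/bigfile.dat", "bigfile")
    print("wrote %d chunks across %d channels" % (n, len(STRIPE_TARGETS)))
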
Craig
>
> Art Edwards
> >
> > have fun linuxing
> > alvin
> > http://www.Linux-1U.net .. 8x 200GB IDE disks -->> 1.6TeraByte 1U Raid5 ..
> >
--
Craig Tierney (ctierney at hpti.com)