[Beowulf] High Performance for Large Database
Michael Will
mwill at penguincomputing.com
Mon Nov 15 08:47:57 PST 2004
On Monday 15 November 2004 05:26 am, Laurence Liew wrote:
> The current version of GFS has a 64-node limit... something to do with the
> maximum number of connections through a SAN switch.
Does this mean 64 nodes with direct SAN access, or 64 client nodes?
64 I/O nodes could support a larger cluster than just 128 nodes, IMHO.
Michael
> I believe the limit could be removed in RHEL v4.
>
> BTW, GFS was built for the enterprise and not specifically for HPC... the
> use of a SAN (all nodes need to be connected to a single SAN storage
> array) may be a bottleneck...
>
> I would still prefer the model of PVFS1/2 and Lustre, where the data is
> distributed amongst the compute nodes.
>
> I suspect GFS could prove useful, however, for enterprise clusters of say
> 32 - 128 nodes, where the number of I/O nodes (GFS nodes re-exporting
> over NFS) can be small (fewer than 8 nodes)... it could work well.
>
> Cheers!
> Laurence
>
> Chris Samuel wrote:
> > On Wed, 10 Nov 2004 12:08 pm, Laurence Liew wrote:
> >
> >
> >> You may wish to try GFS (open-sourced by Red Hat after acquiring
> >> Sistina)... it may give better performance.
> >
> >
> > Is anyone here using the GPL'd version of GFS on large clusters?
> >
> > I'd be really interested to hear how folks find that...
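
(For illustration only: a minimal sketch of the setup Laurence describes, with
an I/O node mounting the shared GFS volume from the SAN and re-exporting it
over NFS to the compute nodes. The device path, mount point, hostname and
subnet below are hypothetical placeholders, not taken from the thread.)

    # on a GFS I/O node: mount the shared volume from the SAN
    mount -t gfs /dev/san_vg/gfs_lv /gfs

    # /etc/exports on the I/O node: re-export to the compute-node subnet
    /gfs  10.0.0.0/255.255.255.0(rw,sync,no_root_squash)

    # pick up the new export
    exportfs -ra

    # on each compute node: mount over NFS from the I/O node
    mount -t nfs io01:/gfs /gfs

How far this scales depends on how much traffic those few I/O nodes can serve
over NFS, which is the bottleneck concern raised above.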
--
Michael Will, Linux Sales Engineer
NEWS: We have moved to a larger iceberg :-)
NEWS: 300 California St., San Francisco, CA.
Tel: 415-954-2822 Toll Free: 888-PENGUIN
Fax: 415-954-2899
www.penguincomputing.com