[Beowulf] HPC and SAN

Rivera, Angel R Angel.R.Rivera at conocophillips.com
Wed Dec 29 10:04:26 PST 2004


I would not be quite so quick to discount a SAN. We have
just received ours, and I am adding it to our cluster after
3 months of testing.  I have worked hard for almost a year
to get one in.

You can build as much complexity as you want into it, but it
does not have to be the deep, dark hole some might want you
to believe it is.

For us, it provides a consolidated location for the disks,
with sufficient spares, along with Linux head nodes that we
can monitor.

-ARR

-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org]
On Behalf Of Michael Will
Sent: Wednesday, December 29, 2004 11:06 AM
To: beowulf at beowulf.org
Cc: Leif Nixon
Subject: Re: [Beowulf] HPC and SAN


On Wednesday 29 December 2004 01:11 am, Leif Nixon wrote:
> Guy Coates <gmpc at sanger.ac.uk> writes:
> 
> > The only time SAN attached storage helps is in the case of storage
> > node failures, as you have redundant paths between storage nodes
> > and disks.
> 
> And the added complexity of a fail-over mechanism might well lower
> your total MTBF.

Speaking from experience?

The expectation when building a fail-over system is that the system's
mean time between total failures is higher, even though the mean time
between partial failures is shorter (there are more parts that can
fail).
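
To put rough numbers on that (purely illustrative figures, assuming
independent annual failure probabilities and instant, perfect
failover; not measurements from any real SAN), a quick sketch:

    # Back-of-the-envelope comparison of a single storage head vs.
    # an active/passive failover pair.  p is a hypothetical chance
    # that one head fails within a year.
    p = 0.05

    single_total = p                 # single head: any failure is a total outage
    pair_partial = 1 - (1 - p) ** 2  # pair: chance at least one head fails
    pair_total   = p ** 2            # pair: chance both heads fail (total outage)

    print("single head, total outage:   %.4f" % single_total)
    print("failover pair, any failure:  %.4f" % pair_partial)
    print("failover pair, total outage: %.4f" % pair_total)

With those made-up numbers the pair sees some failure almost twice as
often (0.0975 vs. 0.05) but a total outage far less often (0.0025),
which is the trade-off described above.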

Of course, the probability that the failover logic/software becomes
the new single point of failure is not zero either.

Michael
-- 
Michael Will, Linux Sales Engineer
NEWS: We have moved to a larger iceberg :-)
NEWS: 300 California St., San Francisco, CA.
Tel:  415-954-2822  Toll Free:  888-PENGUIN
Fax:  415-954-2899 
www.penguincomputing.com

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
