PVFS and Software RAID

Georgia Southern Beowulf Cluster Project gscluster at hotmail.com
Mon Jan 22 10:32:50 PST 2001


Hello,

Another option is the Network Block Device (NBD).  It lets you attach
block devices on remote machines (your 9 GB drives, for example) over the
network and treat each one as a local block device; stacking the software
RAID (md) driver on top of that is supposed to give you a sort of network
RAID for redundancy.  I forget the web address for NBD (it's been in the
kernel since the 2.1.x series, I believe), so please do a google.com
search or look in the kernel source tree under Documentation/nbd.txt.
Good luck; I have similar plans myself.
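
For concreteness, here is a minimal sketch of the idea, assuming the nbd
userland tools (nbd-server/nbd-client) plus the stock raidtools package.
The hostnames, ports, and devices below are made up, and exact command
syntax and device node names vary between versions, so check the docs for
whatever you actually have installed:

  # on nodeB: export its spare partition over TCP port 2000
  nodeB# nbd-server 2000 /dev/hda7

  # on nodeA: attach nodeB's partition as a local block device
  # (the device node may be named differently on older kernels)
  nodeA# nbd-client nodeB 2000 /dev/nbd0

  # /etc/raidtab on nodeA: mirror the local partition against the
  # network block device (RAID-1)
  raiddev /dev/md0
          raid-level              1
          nr-raid-disks           2
          nr-spare-disks          0
          persistent-superblock   1
          chunk-size              4
          device                  /dev/hda7
          raid-disk               0
          device                  /dev/nbd0
          raid-disk               1

  # build the mirror and put a file system on it
  nodeA# mkraid /dev/md0
  nodeA# mke2fs /dev/md0

One caveat: running md on top of NBD, with one half of the mirror on the
far side of the network, is lightly traveled territory, so test it on
scratch data before trusting it.  (A companion sketch of the per-node
mirror plus PVFS layout Walt describes below is tacked onto the end of
this message.)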

Thanks,
<><><><><><><><><><><><><><><><><><>
Georgia Southern University
Beowulf Cluster Project
gscluster at hotmail.com


>--------
>
>As far as I know, neither approach will work.  The only way I know to
>implement redundancy is to put two drives in each node, use software RAID
>to create a redundant system on each node, and then use PVFS to get
>parallel data distribution.
>
>This will protect you from disk failure, but not from node failure.  In
>the event of node failure you would have to repair or replace the node and
>bring it back up with disks intact.  This prevents data loss, but does not
>ensure high availability.
>
>If the software RAID drivers have recently gained the ability to do
>mirroring across the network, I didn't know that and I'd like to hear
>about it.
>
>Walt
>
> > I'm sure this problem has been discussed here, but maybe we could go
> > over it again.
> >
> > Problem: I have four boxes, each with a 9 GB file system (/dev/hda7).
> > I would like to use them all to create one 18 GB file system that is
> > redundant.  Is it better to use PVFS on two pairs of drives to create
> > two 18 GB file systems and then use software RAID to make these
> > redundant?  Or is it better to use software RAID to make two 9 GB
> > redundant file systems and then create the 18 GB file system with
> > PVFS?
> >
> > Kevin.
> >
> > *********************************
> >     Online and On Demand HPC
> >       Pay per Use CPU Time
> >   www.tsunamictechnologies.com
> > *********************************
> >
> >
>
>--
>Dr. Walter B. Ligon III
>Associate Professor
>ECE Department
>Clemson University
>
>
>
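
P.S.  For anyone who wants to try the two-drives-per-node layout Walt
describes above, the layering on each I/O node would look roughly like
this (device names and the data directory are placeholders, and you
should check your PVFS iod configuration for the real option names
rather than trusting mine):

  # /etc/raidtab: mirror the two local drives
  raiddev /dev/md0
          raid-level              1
          nr-raid-disks           2
          nr-spare-disks          0
          persistent-superblock   1
          chunk-size              4
          device                  /dev/hda7
          raid-disk               0
          device                  /dev/hdc7
          raid-disk               1

  # build and mount the mirror, then point the PVFS I/O daemon's
  # data directory at it
  mkraid /dev/md0
  mke2fs /dev/md0
  mount /dev/md0 /pvfs-data

PVFS then stripes files across the nodes as usual, and every stripe
lands on a mirrored device, so a single disk failure costs you nothing.
As Walt notes, though, losing a whole node still takes its stripes
offline until the node is repaired and brought back up with its disks
intact.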







