[Beowulf] GPFS on Linux (x86)

Craig Tierney ctierney at hypermall.net
Thu Sep 14 15:32:13 PDT 2006

Joachim Worringen wrote:
> Brian Dobbins wrote:
>>   I'd certainly welcome hearing more about people's experiences with
>> parallel file systems in general (though perhaps in a new thread?), as
>> despite traditionally having low I/O requirements,  I'm sure we'll be
>> heading that way in the future as well.
> HLRS in Stuttgart (Germany), together with HWW, has a good filesystem 
> workshop each year (for the 5 past years). I attended the last two. I 
> think its full name is "HWW/HLRS Global Filesystem Workshop". 
> Unfortunately, very little info on this is on the web; all presentations 
> are distributed via password-protected ftp. I don't know the exact 
> copyright of this stuff, though.
> However, this gives customers the opportunity to speak out in their 
> presentations. Vendors do the "usual" stuff there, of course, but at a 
> good technical level.
> Anyway, for some real world experience: I remember that University of 
> Cologne was very happy with Panasas: installed within a few hours, very 
> good performance for them (general computing center usage, not "very 
> large installations"), easy to expand, just works.

These are the things I like to see discussed.  I know that Los 
Alamos has a large investment in Panasas, and now it 'just works'. 
But it didn't always 'just work'.  One issue with Panasas is that it 
can be quite expensive (relatively speaking) and that you have to use 
their hardware.  Of course, using their own hardware is probably a 
big part of why getting it to work was easier.

However, what if you know you need better access times or more 
reliable disks (SCSI is still more reliable than SATA in most cases) 
and want to invest in 10k or 15k RPM SCSI disks?  What if you care 
more about a scalable metadata engine than raw bandwidth (or vice 
versa)?  What if you want to run the Panasas traffic over a 
higher-performing network, or avoid TCP/IP because of its overhead? 
Panasas isn't as flexible as some of the software-based solutions. 
However, maintaining a system like this is probably much easier than 
maintaining the software-based solutions.
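For anyone trying to decide which of those axes matters for their own
workload, a crude first cut is to measure streaming bandwidth and
metadata rate separately on whatever filesystem you already have
mounted.  The sketch below is just a rough illustration, not a real
benchmark (tools like IOzone or bonnie++ do this properly); TESTDIR is
a hypothetical mount point you would point at the filesystem under
test:

```shell
#!/bin/sh
# Rough sketch: separate streaming-bandwidth and metadata-rate tests.
# TESTDIR is a placeholder -- set it to a directory on the filesystem
# you want to measure.
TESTDIR=${TESTDIR:-/tmp/fstest}
mkdir -p "$TESTDIR"

# Streaming bandwidth: write 64 MB in 1 MB blocks; dd reports the
# throughput on its last line.
dd if=/dev/zero of="$TESTDIR/bigfile" bs=1048576 count=64 2>&1 | tail -1

# Metadata rate: create and then unlink 1000 empty files, timing the
# whole loop.  A metadata-bound workload stresses this path, not dd's.
start=$(date +%s)
i=0
while [ "$i" -lt 1000 ]; do
    : > "$TESTDIR/f$i"
    i=$((i + 1))
done
rm -f "$TESTDIR"/f*
end=$(date +%s)
echo "1000 creates+unlinks in $((end - start)) seconds"

rm -f "$TESTDIR/bigfile"
```

If the second number dominates your day-to-day pain, a scalable
metadata engine matters more to you than headline bandwidth, and vice
versa.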
