[Beowulf] Building new cluster - estimate

Mikhail Kuzminsky kus at free.net
Tue Aug 5 09:34:22 PDT 2008


In message from Matt Lawrence <matt at technoronin.com> (Mon, 4 Aug 2008 
19:35:47 -0500 (CDT)):
>On Mon, 4 Aug 2008, Joe Landman wrote:
>> I haven't seen or heard anyone claim xfs 'routinely locks up their 
>>system'. 
>> I won't comment on your friends "sharpness".  I will point out that 
>>several 
>> very large data stores/large cluster sites use xfs.  By definition, 
>>no large 
>> data store can be built with ext3 (16 TB limit with patches, 8 TB in 
>> practice), so if your sharp friend is advising you to do this ...
>
>He currently works for a phone company, so the amount of data is 
>quite large, but the usage pattern is probably quite different.  As 
>far as skill level, I would rate him much higher than any of the 
>folks I work with as far as being a sysadmin.

I have worked w/xfs for HPC since 1995: first w/SGI SMP servers under 
IRIX, and then on Linux/x86 clusters. I have never had a hang 
because of xfs.

But xfs is optimized for work w/large files; when you work w/a lot of 
relatively small files, xfs isn't the best choice.

The question of fragmentation itself is more interesting. In our xfs 
filesystems we have a set of small files (first of all, input data) in 
addition to large (usually temporary) files, so fragmentation may 
well be present.

xfs has a rich set of utilities, but AFAIK no defragmentation tool (I 
don't know what the layout looks like after an xfsdump/xfsrestore 
cycle). But which modern Linux filesystems have defragmentation 
capabilities?
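For what it's worth, the xfsprogs utilities do let you at least measure 
fragmentation, even if they don't fix it. A sketch (the device and file 
paths below are placeholders for your own filesystem, and you need 
read access to the block device):

```shell
# Report filesystem-wide fragmentation on an unmounted-safe, read-only basis.
# Prints something like: actual 12345, ideal 12000, fragmentation factor 2.80%
xfs_db -r -c frag /dev/sdb1

# Inspect the extent layout of a single suspect file (e.g. a large
# temporary file); many short extents indicate fragmentation.
xfs_bmap -v /scratch/job42/tmpfile.dat
```

Both commands only report; whether a dump/restore cycle actually 
improves the layout would have to be checked with the same tools 
afterwards.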

Mikhail Kuzminsky
Computer Assistance to Chemical Research Center
Zelinsky Institute of Organic Chemistry
Moscow      



