[Beowulf] CephFS
Olli-Pekka Lehto
olli-pekka.lehto at csc.fi
Fri Apr 10 08:19:38 PDT 2015
On 09 Apr 2015, at 22:14, Joe Landman <landman at scalableinformatics.com> wrote:
> On 04/09/2015 11:24 AM, Tom Harvill wrote:
>>
>> Hello,
>>
>> Question: is anyone on this list using CephFS in 'production'? If so,
>> what are you using it for (i.e. scratch/tmp, archive, homedirs)? In our
>> setup we use NFS-shared ZFS for /home, Lustre for /work (the
>> performance-oriented shared fs), and job-specific tmp on the worker
>> nodes' local disk.
>>
>> What I really want to know is whether anyone is using CephFS (without
>> headaches?) in a production HPC cluster where one might otherwise use
>> Lustre.
>
> Not yet ... performance is still somewhat short of where it needs to be to replace Lustre, though I expect that to change quickly. CephFS is not quite fully baked yet, but it is evolving rapidly.
>
> We ran some HPC-type trading benchmarks on it two years ago (https://stacresearch.com/news/2013/10/28/stac-report-kdb-31-scalable-informatics-ceph-storage-cloud). Ping me offline if you want more info.
>
> Disclosure: we are not disinterested observers, as we have business relations with all of the filesystem builders. So take what I write here with a few kg of NaCl.
We’re bringing Ceph into production on our compute cloud in the next few months. Still no plans to replace our main Lustre system in the near future, though.
In an environment that needs to adapt to evolving user needs, trading some performance for the flexibility that Ceph offers does not seem like a bad deal.
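For anyone curious, the client side is easy to try alongside an existing setup. A minimal /etc/fstab sketch of the kind of layout Tom describes, with hostnames, filesystem names, mount points and the Ceph key path purely illustrative:

  # ZFS-backed NFS home and Lustre scratch, as today
  nfs01:/tank/home         /home  nfs     defaults,_netdev                                  0 0
  mgs01@tcp0:/work         /work  lustre  defaults,_netdev                                  0 0
  # CephFS via the kernel client, where one might otherwise use Lustre
  mon01:6789,mon02:6789:/  /ceph  ceph    name=hpc,secretfile=/etc/ceph/hpc.secret,_netdev  0 0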
O-P