[Beowulf] Question on high performance, low cost Fileserver
Imran Khan
Imran at workstationsuk.co.uk
Tue Nov 15 05:48:09 PST 2005
Arvind,
I think you should try something like TerraGrid. TerraGrid uses a
cache-coherent implementation of iSCSI to make a standard Linux filesystem
behave as a parallel filesystem.
The advantages this brings are:
1. Standard Linux filesystem and tools, so no re-training
2. Small code base of only 40,000 lines of C, which makes the
software easy to support
3. Unlike PVFS, GFS, Lustre etc., TerraGrid does not use a
metadata controller, so it scales linearly (a sketch of the
general idea follows this list)
4. TerraGrid is the only CFS solution with a 24x7
resilient option!
5. "Terrabrick" lets you start with one brick and
expand as you need to
6. Increased reliability through support for diskless cluster nodes
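On point 3: TerraGrid's internals aren't published in this thread, so the
following is only a generic sketch of how a cluster filesystem can avoid a
central metadata controller, namely deterministic data placement, where every
client computes block locations itself. The brick names, block size, and hash
choice below are assumptions for illustration, not TerraGrid's actual scheme.

# Minimal sketch (not TerraGrid code): deterministic block placement,
# the usual technique for scaling without a central metadata server.
# Every client computes the same brick for a given (path, block) pair,
# so the data path needs no lookup service that could bottleneck.
import hashlib

BRICKS = ["brick0", "brick1", "brick2", "brick3"]  # hypothetical bricks
BLOCK_SIZE = 64 * 1024  # assumed 64 KiB stripe unit

def brick_for(path, offset):
    """Hash a file block's identity to pick its brick."""
    block = offset // BLOCK_SIZE
    digest = hashlib.md5(("%s:%d" % (path, block)).encode()).digest()
    return BRICKS[int.from_bytes(digest[:4], "big") % len(BRICKS)]

# Successive blocks of one file land on different bricks, so adding
# bricks adds aggregate bandwidth with no central lookup in the way.
for off in range(0, 4 * BLOCK_SIZE, BLOCK_SIZE):
    print(off, brick_for("/home/arvind/data.bin", off))

Because placement is a pure function of the block's identity, adding bricks
adds bandwidth without any single server becoming the bottleneck.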
Regards
Imran
_____
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On
Behalf Of ar 3107
Sent: 10 November 2005 05:36
To: beowulf at beowulf.org
Subject: [Beowulf] Question on high performance, low cost Fileserver
We are looking into designing a low-cost, high-performance storage system.
Requirements are as follows:
- Starts at 3TB; should scale, by adding more servers, to say 10-12TB
- Use commodity technologies (x86_64, IB, GE, Linux), preferably all OSS
components
- Provide high I/O that scales with the addition of storage nodes (a rough
estimate follows this list)
- To be used for hosting user home dirs, so reliability is important
- The HPC cluster starts with 6 AMD64 nodes and is expected to scale to
1000+ nodes in a year
- Preferably without FC/SAN
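As a rough illustration of the scaling-I/O requirement, here is a
back-of-envelope estimate; the per-client and per-server rates are assumptions
picked for illustration only, not measurements:

# Back-of-envelope only; both rates below are assumed, not measured.
PER_CLIENT_MB_S = 5     # sustained demand per compute node (assumed)
PER_SERVER_MB_S = 100   # roughly one GigE storage server's ceiling

for clients in (6, 100, 1000):
    demand = clients * PER_CLIENT_MB_S        # aggregate MB/s needed
    servers = -(-demand // PER_SERVER_MB_S)   # ceiling division
    print("%4d clients -> ~%d MB/s, ~%d storage servers" %
          (clients, demand, servers))

The point is that a 1000+ node cluster quickly outgrows any single server, so
throughput has to come from striping across many storage nodes.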
We do have experience with IBM GPFS, PVFS (1,2), NetApp, and PolyServe, but
not with GFS and Lustre.
PVFS is not reliable enough for home dirs (OK for scratch); GPFS cannot do
RAID-5-like striping across nodes and needs a SAN for RAID-1-like mirroring
(costs $$$); PolyServe is too expensive (per-CPU pricing).
Is GFS or Lustre suitable for the above needs? Is there any other commercial
solution?
I would like to hear the experiences and suggestions of the advanced users
on this list.
Regards,
Arvind