[Beowulf] Query regarding Linux clustering for CFD
Lombard, David N
david.n.lombard at intel.com
Fri May 20 13:03:27 PDT 2005
From: ritesh gupta on Tuesday, May 17, 2005 11:14 PM
>
> Hi,
>
> I plan to use a Linux cluster (Red Hat Linux) for a CFD
> (computational fluid dynamics) application. For the
> setup I plan to use 6 nodes, each with a single CPU and
> 2GB RAM, with onboard dual gigabit network cards for
> connectivity. Of these 6 nodes, one node will act
> as the master node. All the nodes will have two
> internal hard disks.
CFD codes are generally latency sensitive--both memory and network.
Dual-processor (DP) nodes will also work quite well, assuming you
provide sufficient memory for each CPU; how much you need is model
dependent.
Which CFD code? If it's a commercial (ISV) code, you really need to
consult the vendor for their recommendations on MPI, distro versions,
etc.
Consider running the compute nodes diskless; there are tradeoffs here,
but CFD works well with diskless nodes. Also, if you have the switch
ports, consider using both NICs on all nodes, one for application
communications and one for NFS traffic.
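If you split the traffic that way, one simple approach with LAM is to
give each node a second hostname that resolves to the IP on the MPI
NIC, and list only those names in the LAM boot schema; NFS mounts keep
using the primary names. A minimal sketch, with hypothetical addresses
and hostnames (node1/node1-mpi, etc.):

    # /etc/hosts, identical on every node
    192.168.1.1   node1       # eth0: NFS and general traffic
    192.168.2.1   node1-mpi   # eth1: MPI traffic only
    192.168.1.2   node2
    192.168.2.2   node2-mpi
    # ...and so on for the remaining nodes

    # LAM boot schema (e.g. ~/lamhosts): MPI-side names only
    node1-mpi
    node2-mpi
    # ...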
If not running diskless, configure the disks as you prefer, as they're
not likely to be significant to application performance.
> For the interconnect we plan to use gigabit
> switches, and the servers will mostly be HP
> ProLiant servers.
This is a small enough cluster that the network is not likely to be an
issue.
HP has good switches.
> I will be downloading and installing the LAM-MPI
> software from the site.
See the "Which CFD code" question above. At any rate, LAM/MPI is a good
choice...
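The day-to-day LAM/MPI workflow is small. A sketch, assuming the boot
schema above is saved as ~/lamhosts and a hypothetical solver source
file cfd_job.c (an ISV code will ship its own binaries and launch
instructions instead):

    recon -v ~/lamhosts              # check rsh/ssh access to every node
    lamboot -v ~/lamhosts            # start the LAM run-time on all nodes
    mpicc -O2 -o cfd_job cfd_job.c   # build with the LAM wrapper compiler
    mpirun -np 6 ./cfd_job           # one process per node
    lamhalt                          # shut the LAM run-time down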
> I wish to know whether I need any other software
> for clustering, or whether the LAM-MPI software can
> provide all the features for the scientific computation?
It depends. How many people will be using the cluster? If only you, or
a small number of people, you could just manually schedule the jobs.
Otherwise, consider a resource manager, such as Torque, SGE, or
similar, to schedule the cluster.
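If you do go with Torque, a job is just a shell script with PBS
directives at the top, submitted with qsub and monitored with qstat.
A minimal sketch, assuming a hypothetical executable cfd_job and five
single-CPU compute nodes:

    #!/bin/sh
    #PBS -N cfd_run
    #PBS -l nodes=5:ppn=1            # 5 compute nodes, 1 CPU each
    #PBS -l walltime=12:00:00
    cd $PBS_O_WORKDIR                # run from the submission directory
    lamboot $PBS_NODEFILE            # boot LAM on the nodes Torque assigned
    mpirun -np 5 ./cfd_job
    lamhalt

With ppn=1 the PBS node file doubles as a LAM boot schema; check your
own LAM/Torque setup before relying on that.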
Also, using OSCAR, NPACI Rocks, Warewulf, Clustermatic, Scyld, or one of
the other OSS or commercial cluster stacks may be easiest, unless one of
your goals is to build the cluster from scratch.
--
David N. Lombard
My comments represent my opinions, not those of Intel Corporation.