Hello,

If you don't want to get into cluster details and want to use the machine as a "black box" kind of tool, then I would suggest a commercial cluster scheduler/manager such as Aspen Beowulf Cluster, Platform LSF, Scali Manage, etc., or you may go for ROCKS, OSCAR, etc. That way you should have an easier, simpler life. Also have a look at PETSc (which makes use of MPI internally); see the sketch below my signature.

regards,
Amjad Ali.
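P.S. To show what I mean about PETSc handling MPI for you, here is a minimal sketch. It assumes only that PETSc and some MPI implementation (Open MPI or MVAPICH, either one) are installed; the file name hello_petsc.c and the build/run commands below are just examples, not anything specific to your cluster.

    /* hello_petsc.c - minimal sketch: PETSc starts and stops MPI for you. */
    #include <petsc.h>

    static char help[] = "Minimal PETSc/MPI sketch.\n";

    int main(int argc, char **argv)
    {
        PetscErrorCode ierr;
        PetscMPIInt    size;

        /* PetscInitialize() calls MPI_Init() internally if MPI is not already running. */
        ierr = PetscInitialize(&argc, &argv, NULL, help);
        if (ierr) return ierr;

        ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size); CHKERRQ(ierr);

        /* PetscPrintf() prints from rank 0 only, so the output is not duplicated per rank. */
        ierr = PetscPrintf(PETSC_COMM_WORLD, "PETSc running on %d MPI processes\n", (int)size); CHKERRQ(ierr);

        /* PetscFinalize() calls MPI_Finalize() if PETSc was the one that initialized MPI. */
        ierr = PetscFinalize();
        return ierr;
    }

You would compile it with your MPI compiler wrapper (mpicc) against your PETSc installation and launch it with something like "mpirun -np 16 ./hello_petsc". The point is that your engineers can work against PETSc's solvers and largely avoid writing raw MPI calls themselves.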
On Fri, Feb 22, 2008 at 8:50 PM, John P. Kosky, PhD <jpkosky@sps.aero> wrote:
> My company is taking its first foray into the world of HPC with an
> expandable-architecture, 16-processor cluster (quad-core Opterons) with one
> head node, using InfiniBand interconnects. The OS has tentatively been
> selected as 64-bit SUSE Linux. The principal purpose of the cluster is as a
> tool for spacecraft and propulsion design support. The cluster will
> therefore be running the most recent versions of commercially available
> software - initially for FEA and CFD using COMSOL Multiphysics and
> associated packages, NASTRAN, and MATLAB modules, as well as an internally
> modified and expanded commercial code for materials properties prediction,
> with emphasis on polymer modeling (Accelrys Materials Studio). Since we will
> be repeatedly running standard modeling codes on this system, we are trying
> to make it as user friendly as possible... most of our scientists and
> engineers want to use this as a tool, and not have to become cluster
> experts. The company WILL be hiring an IT sysadmin with good cluster
> experience to support the system, however...
>
> Question 1: Does anyone here know of any issues that have arisen running
> the above-named commercial packages on clusters using InfiniBand?
>
> Question 2: As far as the MPI for the system is concerned, for the system
> and application requirements described above, would Open MPI or MVAPICH be
> better for managing node usage?
>
> ANY help or advice would be greatly appreciated.
>
> Thanks in advance,
>
> John
>
> John P. Kosky, PhD
> Director of Technical Development
> Space Propulsion Systems