Thanks for your suggestions. I am summarizing the questions and answers here.

1. Cluster usage

The cluster will be used solely for running numerical simulation (number-crunching) codes written in Fortran 90 with MPI, involving finite difference calculations and fast Fourier transforms for solving the 3D Navier-Stokes equations. The code calls mpi_alltoall a lot (for the FFTs), as well as mpi_send/mpi_recv, so communication is intensive. The problem is unsteady and 3D, so computation is also heavy. A typical run can take 1-2 weeks using 8-16 nodes (depending on the problem size).
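
For reference, the communication-heavy step is essentially an all-to-all transpose between the 1D FFT passes. Here is a minimal, self-contained sketch of that pattern (not our actual code; the grid size n, the slab layout, and the buffer sizes are made up for illustration):

    program fft_transpose_sketch
      implicit none
      include 'mpif.h'
      integer, parameter :: n = 64            ! hypothetical global grid size
      integer :: ierr, nprocs, myrank, nlocal, blocksize
      real(kind=8), allocatable :: sendbuf(:), recvbuf(:)

      call mpi_init(ierr)
      call mpi_comm_size(mpi_comm_world, nprocs, ierr)
      call mpi_comm_rank(mpi_comm_world, myrank, ierr)

      nlocal = n / nprocs                     ! planes per rank (assumes nprocs divides n)
      blocksize = n * nlocal * nlocal         ! doubles exchanged with each other rank
      allocate(sendbuf(blocksize*nprocs), recvbuf(blocksize*nprocs))
      sendbuf = real(myrank, kind=8)          ! placeholder data

      ! every rank exchanges an equal-sized block with every other rank,
      ! so the switch backplane matters as much as the per-port bandwidth
      call mpi_alltoall(sendbuf, blocksize, mpi_double_precision, &
                        recvbuf, blocksize, mpi_double_precision, &
                        mpi_comm_world, ierr)

      deallocate(sendbuf, recvbuf)
      call mpi_finalize(ierr)
    end program fft_transpose_sketch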

We have been OK with the "hybrid" 25-node (Compaq Alpha & Dell Xeon 2.4 GHz) cluster running right now, which uses a 3Com 100 Mbps Ethernet switch and the LAM/MPI library.
I will post some benchmarks later.

2. Many people recommended Opteron (or at least encouraged a test run on Opteron) because it seems to be more cost-effective. I picked Xeon for the following reasons:

(1) The free Intel Fortran 90 compiler, which is also used on the other individual workstations in our lab and on some supercomputers we have access to (we are trying to stay away from the hassle of switching between compilers when writing new codes).

(2) We have a few users sharing the cluster, so we have to get "enough" nodes.

(3) Xeon seems to be more common, so it's easier to get consulting or support.

BTW, what are the common Fortran 90 compilers that people use on Opteron? Are there any comparisons with other compilers?

3. My MPI code periodically writes out data files to local disk, so I do need a hard disk on every node. Diskless sounds good (cost, maintenance, etc.), but the data size seems too big to be transferred to the head node (technically it could be done, but I would rather just use local scratch disk).
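
For what it's worth, the I/O pattern is just each rank writing its own file to node-local disk. A minimal sketch of what I mean (the /scratch path, file naming, and array size are hypothetical, not from our actual code):

    program local_io_sketch
      implicit none
      include 'mpif.h'
      integer :: ierr, myrank
      character(len=64) :: fname
      real(kind=8) :: u(32,32,32)        ! hypothetical local block of the solution

      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world, myrank, ierr)
      u = 0.0d0

      ! each rank writes only to its own node-local scratch disk,
      ! so nothing crosses the network during the dump
      write(fname, '(a,i4.4,a)') '/scratch/field_rank', myrank, '.dat'
      open(unit=10, file=fname, form='unformatted', status='replace')
      write(10) u
      close(10)

      call mpi_finalize(ierr)
    end program local_io_sketch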
4. Managed or unmanaged?

People have already recommended some switches, which I will not repeat here. However, I am still not clear on the difference between "managed" and "unmanaged" switches. Some vendors told me that I need a managed one, while others said the opposite. I will need to study this more...
5. I only have wall-clock timings of my code on various platforms, so I don't know how sensitive it is to cache size. I guess the bigger the cache, the better, because the code operates on large arrays all the time.
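
In case it helps, the timings I have are simply mpi_wtime brackets around code sections, along these lines (the do_work routine is a hypothetical placeholder for an FFT or finite difference pass):

    program timing_sketch
      implicit none
      include 'mpif.h'
      integer :: ierr, myrank
      real(kind=8) :: t0, t1

      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world, myrank, ierr)

      t0 = mpi_wtime()
      call do_work()                     ! stand-in for an FFT or RHS evaluation
      t1 = mpi_wtime()

      if (myrank == 0) print '(a,f10.4,a)', 'section took ', t1 - t0, ' s'
      call mpi_finalize(ierr)

    contains

      subroutine do_work()
        integer :: i
        real(kind=8) :: s
        s = 0.0d0
        do i = 1, 10000000
          s = s + sqrt(real(i, kind=8))
        end do
        if (s < 0.0d0) print *, s        ! prevent the loop being optimized away
      end subroutine do_work

    end program timing_sketch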

I will post more of a summary here as I find out more about these issues. Thanks.

SCH

SC Huang <schuang21@yahoo.com> wrote:
> Hi,
>
> I am about to order a new cluster using a $100K grant for running our
> in-house MPI codes. I am trying to get at least 36-40 (or more, if
> possible) nodes. The individual node configuration is:
>
> dual Xeon 2.8 GHz
> 512 KB L2 cache, 1 MB L3 cache, 533 MHz FSB
> 2 GB DDR RAM
> gigabit NIC
> 80 GB IDE hard disk
>
> The network will be based on a gigabit switch. Most vendors I talked to
> use the HP ProCurve 2148 or 4148.
>
> Can anyone comment on the configuration (and the switch) above? Any other
> comments (e.g. recommended vendors, etc.) are also welcome.
>
> Thanks!!!