<div dir="ltr">John,<br><br>Thanks for your comments.<br><div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="Ih2E3d">

> > > 2. reasonably fast interconnect; IB SDR 10 Gb/s would suffice for
> > > our computational needs (running the LAMMPS molecular dynamics and
> > > VASP DFT codes)
> > > 3. 48U rack (preferably with good thermal management)
> >
> > "Thermal management"? Servers need cold air in front and unobstructed
> > exhaust. That means open or mesh front/back (and blanking panels).
>
> Agreed. However, depending on the location, if space is tight you could
> consider an APC rack with the heavy fan exhaust door on the rear, and
> vent the hot air.
<div class="Ih2E3d"></div></blockquote><div><br><br>Space is not tight. Computer room is quite spacious but air conditioning is rudimental, no windows or water lines to dump the heat. It looks like a big problem, therefore, consider to put the system somewhere else on campus, although this is not quite convenient.<br>
<br> </div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div class="Ih2E3d"><br>
> > > - 2x Intel Xeon E5420 Harpertown 2.5 GHz quad-core CPU: 2 x $350 = $700
> > > - Dual LGA 771 Intel 5400 Supermicro motherboard: $430
>
> I'd recommend looking at the Intel Twin motherboard systems for this
> project. Two nodes per 1U chassis leaves plenty of room in the rack for
> a cluster head node, RAID arrays, a UPS, and switches. Supermicro have
> these motherboards with onboard InfiniBand, so there is no need for
> extra cards.
>
> One thing you have to think about is power density - it is no use
> cramming 40 1U systems into a rack plus switches and head nodes; it is
> going to draw far too many amps. Think two APC PDUs per cabinet at the
> very maximum. The Intel twins help here again, as they have a
> high-efficiency PSU whose losses are shared between two systems. I'm
> not sure if we would still have to spread this sort of load between
> two racks - it depends on the calculations.
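
On "it depends on the calculations": here is a back-of-the-envelope
sketch in Python. Every figure in it (per-node draw, overhead, circuit
voltage, PDU rating) is an assumption of mine, not a measurement:

    # Rough rack power draw for 40 dual-socket Harpertown nodes.
    NODES_PER_RACK = 40   # 20 twin 1U chassis
    WATTS_PER_NODE = 350  # assumed draw under load; measure a real node
    OVERHEAD_W = 1500     # assumed switches, head node, disks, PSU losses
    VOLTS = 208           # assumed circuit voltage

    total_w = NODES_PER_RACK * WATTS_PER_NODE + OVERHEAD_W
    amps = total_w / VOLTS
    print(f"{total_w} W -> {amps:.0f} A at {VOLTS} V")
    # 15500 W -> ~75 A, more than two 30 A PDUs (derated to 24 A
    # continuous each) can carry, so splitting across two racks is likely.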

> You also need to put some budget toward power and - importantly - air
> conditioning.

Many thanks, this is a very exciting opportunity. I can fit 20 1U twin
units (40 nodes) in a 42U rack, leaving plenty of space for thermal
management and other infrastructure. Do you know of any system
integrators that can build a 40-node cluster from Supermicro twin units?
Are there similar solutions for AMD CPUs?
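
For the rack-space claim, a quick count (the unit heights for the head
node, switches, and UPS below are my guesses, not quotes):

    # Rack-unit budget for a 42U cabinet with 20 twin 1U chassis.
    RACK_U = 42
    twins = 20      # 20 x 1U twin chassis = 40 compute nodes
    head = 1        # assumed 1U head node
    ib_switch = 1   # assumed 1U 24-port IB SDR switch
    gbe_switch = 1  # assumed 1U GigE management switch
    ups = 4         # assumed rack-mount UPS; sizes vary widely
    used = twins + head + ib_switch + gbe_switch + ups
    print(f"{used}U used, {RACK_U - used}U free")
    # 27U used, 15U free for PDUs, cable management, and airflow.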