Mark,
Would it be feasible to downclock your three nodes? All you physicists
know better than I that power draw and heat production are not linear
in GHz. A 1 GHz processor costs less than half as much per tick as a
2 GHz one, so if the power budget is a more pressing constraint for you
than time to completion, that might help: keep running all of your
nodes, just slower. But I've never done this myself. OTOH, as a
mathematician I don't have to :-) See http://xkcd.com/435/ ("Purity")
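
A rough back-of-envelope of why that helps (the effective capacitance
and the voltages below are assumptions for illustration, not
measurements): CMOS dynamic power goes roughly as C * V^2 * f, and
lowering the clock usually lets the core voltage drop too, so the
energy spent per tick falls as well. A minimal sketch in Python:

    # Hedged sketch of CMOS dynamic power, P ~= C_eff * V^2 * f.
    # Numbers are illustrative assumptions; leakage power is ignored.
    def dynamic_power(c_eff, volts, hertz):
        return c_eff * volts**2 * hertz

    C_EFF = 1.0  # arbitrary effective switched capacitance

    p_fast = dynamic_power(C_EFF, 1.30, 2.0e9)  # assumed 2 GHz at 1.30 V
    p_slow = dynamic_power(C_EFF, 0.90, 1.0e9)  # assumed 1 GHz at 0.90 V

    print("power ratio (1 GHz / 2 GHz): %.2f" % (p_slow / p_fast))
    print("energy-per-tick ratio:       %.2f"
          % ((p_slow / 1.0e9) / (p_fast / 2.0e9)))

With those assumed numbers the downclocked node draws about a quarter
of the dynamic power and spends a bit under half the energy per tick;
the real figures depend on how far your chips let the voltage follow
the frequency.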
Peter

On 7/2/08, Mark Kosmowski <mark.kosmowski@gmail.com> wrote:
I'm in the US. I'm almost, but not quite, ready for production runs -
still learning the software / computational theory. I'm the first
person in the research group (physical chemistry) to try to learn
plane wave methods of solid state calculation, as opposed to isolated
atom-centered approximations and periodic atom-centered calculations.

It is turning out that the package I have spent the most time learning
is perhaps not the best one for what we are doing. For a variety of
reasons, many of which are more off-topic than tac nukes and energy
efficient washing machines ;) , I'm doing my studies part-time while
working full-time in industry.

I think I have come to a compromise that can keep me in business.
Until I have a better understanding of the software and am ready for
production runs, I'll stick to a small system that can be run on one
node and leave the other two powered down. I've also applied for an
adjunct instructor position at a local college for some extra cash and
good experience. When I'm ready for production runs I can either just
bite the bullet and pay the electricity bill or seek computer time
elsewhere.

Thanks for the encouragement,

Mark E. Kosmowski

On 7/1/08, ariel sabiguero yawelak <asabigue@fing.edu.uy> wrote:
> Well Mark, don't give up!
> I am not sure what your application domain is, but if you require
> 24x7 computation, then you should not be hosting that at home.
> On the other hand, if you are not doing real computation and you just
> have a testbed at home, maybe for debugging your parallel applications
> or something similar, you might be interested in a virtualized
> solution. Several years ago, I used to "debug" some neural networks at
> home, but training sessions (up to two weeks of training) happened at
> the university. I would suggest doing something like that.
> You can always scale down your problem in several phases and save the
> complete data-set / problem for THE RUN.
>
> You are not being a heretic there, but suffering energy costs ;-)
> In more places than you may believe, useful computing nodes are being
> replaced just because of energy costs. In some application domains you
> can even lose computational power if you move from 4 nodes to a single
> quad-core (i.e. memory bandwidth problems). I know it is very nice to
> be able to do everything at home... but maybe before dropping your
> studies or working overtime to pay the electricity bill, you might
> want to consider collapsing your physical deployment into a single
> virtualized cluster (or just dispatching several threads/processes in
> a single system).
> If you collapse into a single system you have only one mainboard, one
> HDD, one power supply, one processor (physically speaking), ... and
> you can achieve almost the performance of 4 systems in one, consuming
> the power of... well, maybe even less than a single one. I don't want
> to go into discussions about performance gain/loss due to the change
> in hardware architecture.
> Invest some bucks (if you haven't done that yet) in a good power
> supply. The efficiency of OEM unbranded power supplies is really
> pathetic, maybe 45-50%, while a good power supply might be 75-80%
> efficient. Use the energy for computing, not for heating your house.
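>
> As a rough illustration of what that gap costs at the wall (the 250 W
> DC load below is an assumed number, and real efficiency also varies
> with load):
>
>     # Hedged sketch: wall draw for an assumed 250 W DC load per node.
>     dc_load_w = 250.0  # assumption, not a measurement
>     for name, eff in (("cheap OEM PSU", 0.50), ("good PSU", 0.80)):
>         wall_w = dc_load_w / eff      # AC power pulled from the outlet
>         waste_w = wall_w - dc_load_w  # dissipated as heat in the PSU
>         print("%s: %.0f W at the wall, %.0f W lost as heat"
>               % (name, wall_w, waste_w))
>
> With those assumed numbers that is roughly 500 W versus 313 W at the
> wall for the same computing, so close to 190 W saved per node, around
> the clock.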
> What I mean is that you could consider just collapsing a complete
> "small" cluster into a single system. If your application is CPU-bound
> and not I/O-bound, VMware Server could be an option, as it is free
> software (unfortunately not open, even though some patches can be made
> to the drivers). I think it is not possible to publish benchmarking
> data about VMware, but I can tell you that over long timescales, the
> performance you get in the host OS is similar to that of the guest OS.
> There are a lot of problems related to jitter, from crazy clocks to
> delays, but if your application is not sensitive to that, then you are
> OK.
> Maybe this is not a solution, but you could provide more information
> about your problem before quitting...
>
> my 2 cents....
>
> ariel
>
> Mark Kosmowski wrote:
> > At some point a cost-benefit analysis needs to be performed. If
> > my cluster at peak usage only uses 4 GB RAM per CPU (I live in
> > single-core land still and do not yet differentiate between CPU and
> > core) and my nodes all have 16 GB per CPU, then I am wasting RAM
> > resources and would be better off buying new machines and physically
> > transferring the RAM to and from them, or running more jobs each
> > distributed across fewer CPUs. Or saving on my electricity bill and
> > powering down some nodes.
> >
> > As heretical as this last sounds, I'm tempted to throw in the towel
> > on my PhD studies because I can no longer afford the power to run my
> > three-node cluster at home. Energy costs may end up being the straw
> > that breaks this camel's back.
> >
> > Mark E. Kosmowski
> >
> > > From: "Jon Aquilina" <eagles051387@gmail.com>
> > >
> > > Not sure if this applies to all kinds of scenarios that clusters
> > > are used in, but isn't it the case that the more RAM you have, the
> > > better?
> > >
> > > On 6/30/08, Vincent Diepeveen <diep@xs4all.nl> wrote:
> > >
> > > > Toon,
> > > >
> > > > Can you drop a line on how important RAM is for weather
> > > > forecasting in the latest type of calculations you're performing?
> > > >
> > > > Thanks,
> > > > Vincent
> > > >
> > > > On Jun 30, 2008, at 8:20 PM, Toon Moene wrote:
> > > >
> > > > > Jim Lux wrote:
> > > > >
> > > > > > Yep. And for good reason. Even a big DoD job is still tiny in
> > > > > > Nvidia's scale of operations. We face this all the time with
> > > > > > NASA work. Semiconductor manufacturers have no real reason to
> > > > > > produce special purpose or customized versions of their
> > > > > > products for space use, because they can sell all they can
> > > > > > make to the consumer market. More than once, I've had a phone
> > > > > > call along the lines of this:
> > > > > > "Jim: I'm interested in your new ABC321 part."
> > > > > > "Rep: Great. I'll just send the NDA over and we can talk
> > > > > > about it."
> > > > > > "Jim: Great, you have my email and my fax # is..."
> > > > > > "Rep: By the way, what sort of volume are you going to be
> > > > > > using?"
> > > > > > "Jim: Oh, 10-12.."
> > > > > > "Rep: thousand per week, excellent..."
> > > > > > "Jim: No, a dozen pieces, total, lifetime buy, or at best
> > > > > > maybe every year."
> > > > > > "Rep: Oh... <dial tone>"
> > > > > > {Well, to be fair, it's not that bad, they don't hang up on
> > > > > > you..
> > > > >
> > > > > For about a year now, it's been clear to me that weather
> > > > > forecasting (i.e., running a more or less sophisticated
> > > > > atmospheric model to provide weather predictions) is going to
> > > > > be "mainstream" in the sense that every business that needs
> > > > > such forecasts for its operations can simply run them in-house.
> > > > >
> > > > > Case in point: in December last year I bought a $1100 HP box
> > > > > (the obvious target group being teenage downloaders) which
> > > > > performs the HIRLAM limited area model *on the grid that we
> > > > > used until October 2006*.
> > > > >
> > > > > It's about twice as slow as our then-operational 50-CPU Sun
> > > > > Fire 15K.
> > > > >
> > > > > I wonder what effect this will have on CPU developments ...
> > > > >
> > > > > --
> > > > > Toon Moene - e-mail: toon@moene.indiv.nluug.nl - phone: +31 346 214290
> > > > > Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
> > > > > At home: http://moene.indiv.nluug.nl/~toon/
> > > > > Progress of GNU Fortran: http://gcc.gnu.org/ml/gcc/2008-01/msg00009.html
> > > --
> > > Jonathan Aquilina
> >

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf