Does your university have public computer labs? Do the computers run some variant of Unix?

At UMN, where I did my grad work in physics, there were a number of semi-public "Scientific Visualization" or "Large Data Analysis" labs hosted in the local supercomputer center. The center had a number of large machines that required an application and a really good rationale to use, but the smaller development labs (with 2-way to 10-way Sun Fires, similarly sized SGIs, Linux machines, etc.) basically sat vacant 5-6 days per week.

Some of the labs had a PBS queue, some had a Condor queue, and some just required that background jobs be run as "nice +19 ./a.out". My graduate work required several large parametric studies, which computationally looked like lots of Monte-Carlo-ish runs that could be done in parallel. The beauty of this was that no message passing was required: if there were 23 cores open one evening at 6 pm, and assuming no one would be doing work overnight (for the next 14 hours), I could start 23 14-hour jobs at 6 pm and have a little less than 2 weeks of CPU work done by 8 am the next morning. I used (and mentioned) the technique in the paper, http://www.pnas.org/cgi/content/full/101/37/13431 (search for "computational impotence").
<br>This only works though if your university's computer labs run a unix-ish os, and if the sysadmins are progressive. At the school where I presently teach similar endeavors have been much harder to start-up.<br><br>
Nathan Moore

On Wed, Jul 2, 2008 at 8:44 AM, Joe Landman <landman@scalableinformatics.com> wrote:
> Hi Mark
>
> Mark Kosmowski wrote:
>> I'm in the US. I'm almost, but not quite, ready for production runs -
>> still learning the software / computational theory. I'm the first
>> person in the research group (physical chemistry) to try to learn
>> plane-wave methods of solid-state calculation, as opposed to isolated
>> atom-centered approximations and periodic atom-centered calculations.
>
> Heh... my research group in grad school went through that transition in the mid-90s. We went from an LCAO-type simulation to CP-like methods. We needed a T3E to run those (then).
>
> I'd love to compare notes and see which code you are using someday. On-list or off-list is fine.
>
>> It is turning out that the package I have spent the most time learning
>> is perhaps not the best one for what we are doing. For a variety of
>> reasons, many of which are more off-topic than tac nukes and energy-
>> efficient washing machines ;), I'm doing my studies part-time while
>> working full-time in industry.
>
> More power to ya! I did mine that way too ... the writing was the hardest part. Just don't lose focus, or stop believing you can do it. When the light starts getting visible at the end of the process, it is quite satisfying.
>
> I have other words to describe this, but they require a beer lever to get them out of me ...
>
>> I think I have come to a compromise that can keep me in business.
>> Until I have a better understanding of the software and am ready for
>> production runs, I'll stick to a small system that can be run on one
>> node and leave the other two powered down. I've also applied for an
>> adjunct instructor position at a local college for some extra cash and
>> good experience. When I'm ready for production runs I can either just
>> bite the bullet and pay the electricity bill or seek computer time
>> elsewhere.
>
> Give us a shout when you want to try for time on a shared resource. Some folks here may be able to make good suggestions. RGB is a physics guy at Duke, doing lots of simulations, and might know of resources. Others here might as well.
>
> Joe
>
>>
>> Thanks for the encouragement,
>>
>> Mark E. Kosmowski
>>
>> On 7/1/08, ariel sabiguero yawelak <asabigue@fing.edu.uy> wrote:
>>> Well Mark, don't give up!
>>> I am not sure what your application domain is, but if you require 24x7
>>> computation, then you should not be hosting that at home.
>>> On the other hand, if you are not doing real computation and just have a
>>> testbed at home, maybe for debugging your parallel applications or something
>>> similar, you might be interested in a virtualized solution. Several years
>>> ago, I used to "debug" some neural networks at home, but training sessions
>>> (up to two weeks of training) happened at the university.
>>> I would suggest doing something like that.
>>> You can always scale down your problem in several phases and save the
>>> complete data set / problem for THE RUN.
>>>
>>> You are not being a heretic there, just suffering energy costs ;-)
>>> In more places than you may believe, useful computing nodes are being
>>> replaced just because of energy costs. In some application domains you
>>> can even lose computational power if you move from 4 nodes to a single
>>> quad-core (i.e. memory bandwidth problems). I know it is very nice to be
>>> able to do everything at home... but maybe before dropping your studies or
>>> working overtime to pay the electricity bill, you might want to consider
>>> collapsing your physical deployment into a single virtualized cluster
>>> (or just dispatching several threads/processes in a single system).
>>> If you collapse into a single system you have only one mainboard, one HDD, one
>>> power supply, one processor (physically speaking), ... and you can achieve
>>> almost the performance of 4 systems in one, consuming the power of... well,
>>> maybe even less than a single one. I don't want to go into discussions about
>>> performance gain/loss due to variation in the hardware architecture.
>>> Invest some bucks (if you haven't done that yet) in a good power supply.
>>> The efficiency of OEM unbranded power supplies is really pathetic, maybe
>>> 45-50%, while a good power supply might be 75-80% efficient. Use the
>>> energy for computing, not for heating your house.
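
To put rough numbers on that last point (these are assumed figures for illustration, not numbers from this thread: say ~300 W delivered to a node's components and electricity at $0.10/kWh):

  300 W delivered / 0.50 efficiency = 600 W drawn at the wall
  300 W delivered / 0.80 efficiency = 375 W drawn at the wall
  difference: ~225 W, i.e. ~160 kWh per month, or roughly $16 per node per month spent on heat
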
>>> What I mean is that you could consider just collapsing a complete "small"
>>> cluster into a single system. If your application is CPU-bound and not
>>> I/O-bound, VMware Server could be an option, as it is free software
>>> (unfortunately not open, even though some patches can be made to the
>>> drivers). I think it is not possible to publish benchmarking data about
>>> VMware, but I can tell you that over long timescales, the performance you
>>> get in the guest OS is similar to what you get in the host OS. There are a
>>> lot of problems related to jitter, from crazy clocks to delays, but if your
>>> application is not sensitive to that, then you are OK.
>>> Maybe this is not a solution, but you could provide more information about
>>> your problem before quitting...
>>>
>>> my 2 cents...
>>>
>>> ariel
>>>
>>> Mark Kosmowski wrote:
>>>
>>>> At some point a cost-benefit analysis needs to be performed. If
>>>> my cluster at peak usage only uses 4 GB RAM per CPU (I live in
>>>> single-core land still and do not yet differentiate between CPU and
>>>> core) and my nodes all have 16 GB per CPU, then I am wasting RAM
>>>> resources and would be better off buying new machines and physically
>>>> transferring the RAM to and from them, or running more jobs each
>>>> distributed across fewer CPUs. Or saving on my electricity bill and
>>>> powering down some nodes.
>>>>
>>>> As heretical as this last sounds, I'm tempted to throw in the towel on
>>>> my PhD studies because I can no longer afford the power to run my
>>>> three-node cluster at home. Energy costs may end up being the straw
>>>> that breaks this camel's back.
>>>>
>>>> Mark E. Kosmowski