<div>The quality of writing and the thoughtful insight presented on this board have kept me coming back over the years as a reader; I have observed and learned a great deal from various posts. I really appreciate the feedback. Thank you.</div>
<div> </div>
<div>Is this forum an appropriate place to discuss software concepts, issues, and questions related to modeling a problem to be implemented on a cluster, or is it mostly a place for shop talk about hardware specs?</div>
<div> </div>
<div>My goal is to spend less time on hardware and more time modeling problems in software, but I am directing some effort toward understanding the mechanics of the hardware components involved, so that I can write good code: understanding the nature of the machine, how it works, and how it fits together. This endeavor has spread my time thin, sometimes yielding information that I can neither use nor understand, so I welcome criticism to help me focus. I'll try to keep my fuzzy CS questions to a minimum.</div>
<div> </div>
<div>A little about me: for most of my twenties I lived out of a backpack, hitchhiking across the country and working various terrestrial and maritime jobs from coast to coast, and I recently completed my college degree. Currently, I work odd jobs to make ends meet, and in my free time I enjoy CS and science, illustration, and practicing classical guitar. During my undergraduate studies at JSC VT, a college professor, Martin A. Walker (now teaching chemistry at SUNY Potsdam), influenced me to install Linux on a system and to work on chemistry problems. I decided to pick an interesting problem that I could spend a long time developing, and chose a focus related to molecular biology and the hard sciences, with a smattering of math classes.</div>
<div> </div>
<div>I love the rain forest jungle of Linux, but I have become lost in it too...</div>
<div> </div>
<div>My first cluster will be built from recycled legacy x86 PCs: a classical Beowulf-style cluster running a Linux kernel. Perhaps I will start with a failover cluster before moving on to parallel high-throughput work (a first smoke test for the parallel side is sketched below)...</div>
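<div> </div>
<div>As a concrete first step once the nodes are wired together, I have been reading about MPI. Below is a minimal "hello world" sketch in C that should build against either Open MPI or MPICH (a hedge: I have not yet run it on my own hardware, and the hostfile name in the run line is just a placeholder). Each rank reports its id and the node it landed on, which makes it a handy smoke test that the launcher and the network are set up correctly.</div>
<div> </div>
<pre>
/* Minimal MPI hello-world: a smoke test for a new Beowulf cluster.
   Assumes an MPI implementation (Open MPI or MPICH) is installed on every node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                 /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    MPI_Get_processor_name(node, &len);     /* hostname of this node     */

    printf("Hello from rank %d of %d on %s\n", rank, size, node);

    MPI_Finalize();                         /* shut the runtime down     */
    return 0;
}
</pre>
<div> </div>
<div>Compiled with "mpicc hello.c -o hello" and launched with something like "mpirun -np 8 -hostfile hosts ./hello", it should print one line per process across the recycled boxes.</div>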
<div> </div>
<div>Have a nice day,</div>
<div>Jeremy</div>
<div> </div>
<div class="gmail_quote">On Sun, Sep 27, 2009 at 7:21 PM, Gerry Creager <span dir="ltr"><<a href="mailto:gerry.creager@tamu.edu">gerry.creager@tamu.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">Jeremy,<br><br>I think you'll discover that the Beowulf list tends to comprise a number of folks who are engaged in high performance, or high throughput, computing already, or are coming into the fray, now interested in learning what is composed of the art of the possible.<br>
<br>We've a nice assortment of knowledgeable folk here, who offer their expertise freely, and whose knowledge is often complementary and extensible, in that one person's experiences and knowledge are often building blocks for another's explanation.<br>
<br>We tend to run Linux, as a core OS of choice, for a variety of reasons. These include familiarity, experience and comfort levels, and in a number of cases, a systematic determination that it's the best choice for what we're doing.<br>
<br>In this post, while apparently asking for opinions about the best OS for grid or cluster computing, you point out "yet another academic OS project" (which is not to dismiss it, but simply to categorize it). EROS, from an academic perspective, looks interesting, but currently impractical.<br>
<br>You see, like you, I've a finite temporal resource, and am limited in my current job to a 168 hour work week (and by my wife and family to an even shorter one). I have invested a lot of time in *nix over the years, and have decided to my satisfaction that Linux is the best fit for my scientific efforts. Further (or better|worse, depending on outlook), I prefer CentOS these days for stability. You see, I've isolated clusters that have been running without updates for half a decade, because they're up and stable. I tend to create cluster environments that meet a particular need for performance or throughput, and which can then be administered as efficiently as possible... preferably meaning that neither I, nor my other administrators, have to spend much time with 'em. My real job isn't to play with clusters, OS's or administration, it's to obtain funding and do research using computational models.<br>
<br>Please don't take this as a slight. Instead, I'm trying to give you a flavor of *some* of the folks here, and a basis for several of the replies. We're interested, and there are almost certainly folks on this list who've investigated all aspects of what you are asking about. I trust them to answer your queries much better than I can. And don't stop asking. But do realize that we tend to spend a lot of our time trying to get the work out the door rather than searching for the next great tool that could consume all our time while we learn whether it's practical.<br>
<br>Finally, getting back to the query that started all of this, I suspect Linux, and NOT Solaris, would prove easier, by some margin. I recommend you spend a little time investigating NPACI Rocks (yes, I do use them for some clusters) as they have implementations using either Linux or Solaris, and someone's developing a Rocks Roll for grid use, or so I'm told. That could give you a fairly simple implementation path if that's what you're looking for. At first glance, EROS does not look like it's ready for prime time, so I'd not be looking that way. Of course, SOMEONE needs to try it in the cluster world, someday, but I don't have the time to be that person.<br>
<br>Good luck in your studies, and welcome to the group!<br>gerry<br><br>Jeremy Baker wrote:<br>
<blockquote class="gmail_quote" style="PADDING-LEFT: 1ex; MARGIN: 0px 0px 0px 0.8ex; BORDER-LEFT: #ccc 1px solid">
<div>
<div></div>
<div class="h5">EROS (Extremely Reliable Operating System)<br><br><br> <a href="http://www.eros-os.org/eros.html" target="_blank">http://www.eros-os.org/eros.html</a><br><br><br><br><br><br>-- <br>Jeremy Baker<br>PO 297<br>
Johnson, VT<br>05656<br></div></div></blockquote><br>-- <br>Gerry Creager -- <a href="mailto:gerry.creager@tamu.edu" target="_blank">gerry.creager@tamu.edu</a><br>
Texas Mesonet -- AATLT, Texas A&M University <br>Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983<br>Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843<br></blockquote></div><br><br clear="all">
<br>-- <br>Jeremy Baker<br>PO 297<br>Johnson, VT<br>05656<br>