<div dir="ltr"><div class="gmail_quote"><div>Thank you for your reply. I actually contributed a little bit of code to CPMD back in the day.</div><div><br></div><div>I'm going to start by trying to learn abinit. They have experimental, CUDA only, GPU support, so I may save up for some used nVidia cards at some point, maybe I can find a deal on P106 class cards.</div><div><br></div><div>I already have the three Opteron 940 boxes; I've kept them since buying them in grad school. Having said this, you remind me that my laptop is probably more powerful than those old machines. I'll use the laptop to learn abinit on and then to do small system calculations while I'm (likely slowly) getting other equipment up and running.</div><div><br></div><div>Assuming my work and writing is acceptable quality, how likely will I be to get published with just a master degree?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Message: 1<br>

Message: 1
Date: Sun, 02 Feb 2020 23:40:50 +0000
From: Jörg Saßmannshausen <sassy-work@sassy.formativ.net>
To: beowulf@beowulf.org
Subject: Re: [Beowulf] First cluster in 20 years - questions about today
Message-ID: <2382819.MDnfneh6fb@deepblue>
Content-Type: text/plain; charset="utf-8"

Hi Mark,

Being a chemist and having worked in HPC for some years now, for a change I can
make a contribution to the list as well.

I would not advise using hardware that is more than 5 years old, unless somebody
else is footing the electricity bill. The new AMDs are much faster, and since you
get more cores per node, you can run larger simulations without needing an
InfiniBand interconnect. The next question is which programs you want to use:
ORCA? NWChem? GAMESS-US? CP2K/CASTEP? They all have different requirements, and
the list is by no means exhaustive. Do you want to stick to DFT calculations, or
run wavefunction-based ones as well (like CASSCF or CASPT2)? The bottom line is
that you want something efficient and tailored to the program(s) you want to use.

Forget about Solaris. I don't know of any code other than GAMESS-US that supports
Solaris; stick to Linux. From what you said, I guess you want to use code like
CP2K, which requires a lot of memory. Again, the latest AMDs can address really
large amounts of memory, so I would suggest going for that if you really want to
be productive. You might want to consider using NVMe for scratch/swap or even as
the OS drive, and if you want to use CP2K, make sure you have enough memory and
cores.
If you just want to toy around, then by all means use old hardware, but you will
have more frustration than fun.

For your information: I am a 'gentleman' scientist, i.e. I do my research
(chemistry in my case), like most respectable scientists, in the evenings or at
weekends, and I still have a daytime job to attend to. By and large I get one
publication out per year in highly cited journals. Right now, since until
recently I had some clusters at my disposal, I have an old 8-core box with 42 GB
of RAM, which I am planning to replace this year with an AMD one for the reasons
already mentioned on the list. I wanted to do that last year, but for one reason
or another it did not work out. My desktop is an Intel(R) Core(TM) i7-4770 CPU
@ 3.40GHz machine, which also does calculations and post-processing. My
bottleneck right now is the time I need to write things up, another reason why I
am still using the old server. At least it is heating my dining room. :-)

Let me know if you have any more questions; happy to help out a colleague!

All the best

Jörg

On Saturday, 1 February 2020, 22:21:09 GMT, Mark Kosmowski wrote:
> I've been out of computation for about 20 years, since my master's degree.
> I'm getting into the game again as a private individual. When I was active,
> Opteron had just launched - I was an early adopter of amd64 because I needed
> the RAM (maybe more accurately, I needed to thoroughly thrash my swap
> drives). I never needed any cluster management software with my 3-node,
> dual-socket, single-core little baby Beowulf. (My planned domain is
> computational chemistry, and I'm hoping to get to a point where I can do ab
> initio catalyst surface reaction modeling of small molecules (not
> biomolecules).)
>
> I'm planning to add a few nodes, and it will end up being fairly
> heterogeneous. My initial plan is to add two or three multi-socket,
> multi-core nodes as well as a 48-port gigabit switch. How should I assess
> whether to have one big heterogeneous cluster vs. two smaller
> quasi-homogeneous clusters?
>
> Will it be worthwhile to learn cluster management software? If so, any
> suggestions?
>
> Should I consider Solaris or illumos? I do plan on using ZFS, especially
> for the data node, but I want as much redundancy as I can get, since I'm
> going to be using used hardware. Will the fancy Solaris cluster tools be
> useful?
>
> Also, once I get running, while I'm getting current with theory and
> software, may I inquire here about taking on a small, low-priority academic
> project to make sure the cluster side is working well?
>
> Thank you all for still being here!