[Beowulf] First cluster in 20 years - questions about today

Benson Muite benson_muite at emailplus.org
Tue Feb 4 12:46:59 PST 2020


Generally, getting published does not depend on having an academic qualification if the work is sufficiently interesting; just choose the venue appropriately. There does seem to be quite a split between domain-specific publications and HPC publications, with only a few venues able to reliably review both the HPC and the domain-specific contributions when they appear in one paper. Some codes work reasonably well in single precision (for example GROMACS), for which gaming/graphics GPUs can give quite good performance. 
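
As a rough way to see the single vs. double precision gap for yourself, the
snippet below times a matrix multiply in both precisions (a minimal sketch,
assuming only Python and numpy are installed; it runs on the CPU, so it only
hints at the much larger FP32/FP64 throughput ratio on gaming GPUs):

    import time
    import numpy as np

    def gflops(dtype, n=2048, reps=5):
        # Time an n x n matrix multiply and report sustained GFLOP/s.
        a = np.random.rand(n, n).astype(dtype)
        b = np.random.rand(n, n).astype(dtype)
        a @ b  # warm-up so the timing excludes first-call overheads
        t0 = time.perf_counter()
        for _ in range(reps):
            a @ b
        elapsed = (time.perf_counter() - t0) / reps
        return 2 * n**3 / elapsed / 1e9

    print("float32:", round(gflops(np.float32), 1), "GFLOP/s")
    print("float64:", round(gflops(np.float64), 1), "GFLOP/s")

On a CPU the two figures are usually within about a factor of two of each
other; on a consumer GPU the double precision figure can be an order of
magnitude or more behind, which is why single-precision-friendly codes are the
ones that benefit most from gaming cards.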


On Wed, Feb 5, 2020, at 10:27 AM, Mark Kosmowski wrote:
> Thank you for your reply. I actually contributed a little bit of code to CPMD back in the day.
> 
> I'm going to start by trying to learn abinit. They have experimental, CUDA-only GPU support, so I may save up for some used NVIDIA cards at some point; maybe I can find a deal on P106-class cards.
> 
> I already have the three Opteron 940 boxes; I've kept them since buying them in grad school. Having said this, you remind me that my laptop is probably more powerful than those old machines. I'll use the laptop to learn abinit on and then to do small system calculations while I'm (likely slowly) getting other equipment up and running.
> 
> Assuming my work and writing are of acceptable quality, how likely am I to get published with just a master's degree?
> 
>> Message: 1
>>  Date: Sun, 02 Feb 2020 23:40:50 +0000
>>  From: Jörg Saßmannshausen <sassy-work at sassy.formativ.net>
>>  To: beowulf at beowulf.org
>>  Subject: Re: [Beowulf] First cluster in 20 years - questions about
>>  today
>>  Message-ID: <2382819.MDnfneh6fb at deepblue>
>>  Content-Type: text/plain; charset="utf-8"
>> 
>>  Hi Mark,
>> 
>>  being a chemist and having worked in HPC for some years now, for a change I
>>  can make some contribution to the list as well.
>> 
>>  I would not advise using hardware which is over 5 years old, unless somebody
>>  else is footing the electricity bill. The new AMDs are much faster, and as you
>>  have more cores per node, you can run larger simulations without needing an
>>  InfiniBand interconnect. The next question would be which programs you want
>>  to use: ORCA? NWChem? Gamess-US? CP2K/Castep? They all have different
>>  requirements, and the list is by no means exhaustive. Do you just want to
>>  stick to DFT calculations, or wavefunction ones as well (like CASSCF, CASPT2)?
>>  The bottom line is that you want something which is efficient and tailored to
>>  the program(s) you want to use.
>> 
>>  Forget about Solaris. I don't know of any code other than Gamess-US which
>>  supports Solaris. Stick to Linux. From what you said, I guess you want to use
>>  code like CP2K, which requires a lot of memory. Again, the latest AMD CPUs can
>>  address really large amounts of memory, so I would suggest going for those if
>>  you really want to be productive. You might want to consider using an NVMe
>>  drive as scratch/swap or even as the OS drive, and, if you want to use CP2K,
>>  make sure you have enough memory and cores.
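>> 
>>  As a quick sanity check on whether a candidate box has a sensible balance of
>>  cores and memory, something like this helps (a minimal sketch, assuming
>>  Linux, since it reads /proc/meminfo; the 2 GB/core cut-off is purely an
>>  illustrative figure, not a CP2K requirement):
>> 
>>      import os
>> 
>>      def mem_total_gb():
>>          # MemTotal in /proc/meminfo is reported in kB
>>          with open("/proc/meminfo") as f:
>>              for line in f:
>>                  if line.startswith("MemTotal:"):
>>                      return int(line.split()[1]) / 1024**2
>> 
>>      cores = os.cpu_count()
>>      mem_gb = mem_total_gb()
>>      print(f"{cores} cores, {mem_gb:.1f} GB RAM, {mem_gb / cores:.2f} GB/core")
>>      if mem_gb / cores < 2.0:  # illustrative threshold only
>>          print("probably too little memory per core for large jobs")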
>>  If you just want to toy around, then by all means use old hardware, but you
>>  will have more frustration than fun.
>> 
>>  For your information: I am a 'gentleman' scientist, i.e. I do my research,
>>  chemistry in my case, like any respectable scientist in the evenings or at
>>  weekends, and I still have a daytime job to attend to. By and large I get one
>>  publication out per year in highly cited journals. Right now, since I no
>>  longer have the clusters I had at my disposal until recently, I have an old
>>  8-core box with 42 GB of RAM which I am planning to replace this year with an
>>  AMD one, for the reasons already mentioned on the list. I wanted to do that
>>  last year, but for one reason or another it did not work out. My desktop is
>>  an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz machine which also does
>>  calculations and post-processing. My bottleneck right now is the time I need
>>  to write things up, another reason why I am still using the old server. At
>>  least it is heating my dining room. :-)
>> 
>>  Let me know if you got any more questions, happy to help out a colleague!
>> 
>>  All the best
>> 
>>  Jörg
>> 
>>  Am Samstag, 1. Februar 2020, 22:21:09 GMT schrieb Mark Kosmowski:
>>  > I've been out of computation for about 20 years since my master's degree.
>>  > I'm getting into the game again as a private individual. When I was active
>>  > Opteron was just launched - I was an early adopter of amd64 because I
>>  > needed the RAM (maybe more accurately I needed to thoroughly thrash my swap
>>  > drives). I never needed any cluster management software with my 3 node,
>>  > dual socket, single core little baby Beowulf. (My planned domain is
>>  > computational chemistry and I'm hoping to get to a point where I can do ab
>>  > initio catalyst surface reaction modeling of small molecules (not
>>  > biomolecules).)
>>  > 
>>  > I'm planning to add a few nodes and it will end up being fairly
>>  > heterogeneous. My initial plan is to add two or three multi-socket,
>>  > multi-core nodes as well as a 48-port gigabit switch. How should I assess
>>  > whether to have one big heterogeneous cluster vs. two smaller
>>  > quasi-homogeneous clusters?
>>  > 
>>  > Will it be worthwhile to learn a cluster management software? If so,
>>  > suggestions?
>>  > 
>>  > Should I consider Solaris or illumos? I do plan on using ZFS, especially
>>  > for the data node, but I want as much redundancy as I can get, since I'm
>>  > going to be using used hardware. Will the fancy Solaris cluster tools be
>>  > useful?
>>  > 
>>  > Also, once I get running, while I'm getting current with theory and
>>  > software, may I inquire here about taking on a small, low-priority academic
>>  > project to make sure the cluster side is working well?
>>  > 
>>  > Thank you all for still being here!
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf
> 