[Beowulf] First cluster in 20 years - questions about today

Jörg Saßmannshausen sassy-work at sassy.formativ.net
Thu Feb 6 14:15:26 PST 2020


Hi Mark,

good to know you contributed to the CPMD code back in the day, so you are 
familiar with what I was referring to.

Regarding publishing: good question, I don't have the answer for that. In 
theory, if the work is up to the standard of the journal, you should be able 
to publish it. However, if you are not associated with a university or 
research centre, I don't know the answer here. That was one of the things I 
had to look into recently, for reasons I won't go into here.

I guess you can address this once you have something to publish. There are 
still the preprint servers for physics and chemistry.

All the best and good luck!

Jörg

On Tuesday, 4 February 2020, 21:27:51 GMT, Mark Kosmowski wrote:
> Thank you for your reply.  I actually contributed a little bit of code to
> CPMD back in the day.
> 
> I'm going to start by trying to learn abinit.  They have experimental,
> CUDA-only GPU support, so I may save up for some used nVidia cards at some
> point; maybe I can find a deal on P106-class cards.
> 
> I already have the three Opteron 940 boxes; I've kept them since buying
> them in grad school.  Having said this, you remind me that my laptop is
> probably more powerful than those old machines.  I'll use the laptop to
> learn abinit on and then to do small system calculations while I'm (likely
> slowly) getting other equipment up and running.
> 
> Assuming my work and writing are of acceptable quality, how likely am I to
> get published with just a master's degree?
> 
> > Date: Sun, 02 Feb 2020 23:40:50 +0000
> > From: Jörg Saßmannshausen <sassy-work at sassy.formativ.net>
> > To: beowulf at beowulf.org
> > Subject: Re: [Beowulf] First cluster in 20 years - questions about today
> > 
> > Hi Mark,
> > 
> > being a chemist and working in HPC for some years now, for a change I can
> > make some contribution to the list as well.
> > 
> > I would not advise using hardware which is over 5 years old, unless
> > somebody else is footing the electricity bill. The new AMDs are much
> > faster and, as you have more cores per node, you can run larger
> > simulations without needing InfiniBand interconnects. The next question
> > would be which programs do you want to use? ORCA? NWChem? Gamess-US?
> > CP2K/Castep? They all have different requirements and the list is by no
> > means exhaustive. Do you just want to stick to DFT calculations or
> > wavefunction ones as well (like CASSCF, CASPT2)? The bottom line is you
> > want something which is efficient and tailored to the program(s) you
> > want to use.
> > 
> > Forget about Solaris. I don't know any code other than Gamess-US which
> > supports Solaris. Stick to Linux. From what you said I guess you want to
> > use code like CP2K, which requires large memory. Again, the latest AMDs
> > can address really large memory, so I would suggest going for that if you
> > really want to be productive. You might want to consider using NVMe as
> > scratch/swap or even as the OS drive and, if you want to use CP2K, make
> > sure you have enough memory and cores. If you just want to toy around,
> > then by all means use old hardware, but you will have more frustration
> > than fun.
> > 
> > For your information: I am a 'gentleman' scientist, i.e. I do my research,
> > chemistry in my case, like most respectable scientists in the evenings or
> > at weekends, and I still have a daytime job to attend to. By and large I
> > get one publication out per year in highly cited journals. Right now
> > (until recently I had some clusters at my disposal) I have an old 8-core
> > box with 42 GB of RAM which I am planning to replace this year with an
> > AMD one, for the reasons already mentioned on the list. I wanted to do
> > that last year but for one reason or another it did not work out. My
> > desktop is an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz machine which also
> > does calculations and post-processing. My bottleneck right now is the
> > time I need to write up stuff, another reason why I am still using the
> > old server. At least it is heating my dining room. :-)
> > 
> > Let me know if you have any more questions, happy to help out a colleague!
> > 
> > All the best
> > 
> > Jörg
> > 
> > On Saturday, 1 February 2020, 22:21:09 GMT, Mark Kosmowski wrote:
> > > I've been out of computation for about 20 years since my master's
> > > degree.  I'm getting into the game again as a private individual.  When
> > > I was active, Opteron was just launched - I was an early adopter of
> > > amd64 because I needed the RAM (maybe more accurately I needed to
> > > thoroughly thrash my swap drives).  I never needed any cluster
> > > management software with my 3 node, dual socket, single core little baby
> > > Beowulf.  (My planned domain is computational chemistry and I'm hoping
> > > to get to a point where I can do ab initio catalyst surface reaction
> > > modeling of small molecules (not biomolecules).)
> > > 
> > > I'm planning to add a few nodes and it will end up being fairly
> > > heterogeneous.  My initial plan is to add two or three multi-socket,
> > > multi-core nodes as well as a 48 port gigabit switch.  How should I
> > > assess whether to have one big heterogeneous cluster vs. two smaller
> > > quasi-homogeneous clusters?
> > > 
> > > Will it be worthwhile to learn cluster management software?  If so, any
> > > suggestions?
> > > 
> > > Should I consider Solaris or illumos?  I do plan on using ZFS, especially
> > > for the data node, but I want as much redundancy as I can get, since I'm
> > > going to be using used hardware.  Will the fancy Solaris cluster tools be
> > > useful?
> > > 
> > > Also, once I get running, while I'm getting current with theory and
> > > software, may I inquire here about taking on a small, low priority
> > > academic project to make sure the cluster side is working well?
> > > 
> > > Thank you all for still being here!
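
PS: on your last point about making sure the cluster side is working: the
first sanity check I would run is a small MPI program across all the nodes.
Below is a minimal sketch, assuming only that an MPI implementation (e.g.
OpenMPI or MPICH) and a C compiler are installed on every node; the file and
program names are just placeholders. Each rank prints its hostname, and rank
0 then ping-pongs a 1 MiB buffer with every other rank, which also gives a
rough feel for what the gigabit interconnect delivers between node pairs.

/*
 * mpi_check.c - minimal cluster sanity check (sketch, not a benchmark).
 * Assumes an MPI implementation and C compiler on every node.
 */
#include <mpi.h>
#include <stdio.h>

#define PING_BYTES (1 << 20)   /* 1 MiB payload for the ping-pong */
#define REPS 50                /* round trips per peer */

int main(int argc, char **argv)
{
    int rank, size, namelen, peer, i;
    char name[MPI_MAX_PROCESSOR_NAME];
    static char buf[PING_BYTES];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &namelen);

    /* every rank reports in, so you can see all nodes are participating */
    printf("rank %d of %d running on %s\n", rank, size, name);
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) {
        for (peer = 1; peer < size; peer++) {
            double t0 = MPI_Wtime();
            for (i = 0; i < REPS; i++) {
                MPI_Send(buf, PING_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, PING_BYTES, MPI_CHAR, peer, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }
            double dt = (MPI_Wtime() - t0) / REPS;
            /* one round trip moves 2 * PING_BYTES bytes */
            printf("rank 0 <-> rank %d: %.3f ms round trip, ~%.1f MB/s\n",
                   peer, dt * 1e3, 2.0 * PING_BYTES / dt / 1.0e6);
        }
    } else {
        for (i = 0; i < REPS; i++) {
            MPI_Recv(buf, PING_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, PING_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

Compile with mpicc and, with OpenMPI, launch with something like
mpirun -np <N> --hostfile <hosts> ./mpi_check. If every node reports in and
the numbers between node pairs are in the region you would expect for gigabit
Ethernet (on the order of 100 MB/s), the cluster side is basically working.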


