[Beowulf] Introduction and question

fabricio fcannini at gmail.com
Mon Feb 25 06:33:57 PST 2019


On 23/02/2019 11:30, Will Dennis wrote:

Welcome, and remember:

Every cluster is sacred,
Every cluster is good.
If a cluster is wasted,
Don (Becker) gets quite irate.

PS: yes you will sing to the tune. ;)

> Hi folks,
> 
> I thought I’d give a brief introduction, and see if this list is a good 
> fit for the questions I have about my HPC-“ish” infrastructure...
> 
> I am a ~30yr sysadmin (“jack-of-all-trades” type), completely 
> self-taught (B.A. is in English, that’s why I’m a sysadmin :-P) and have 
> ended up working at an industrial research lab for a large 
> multi-national IT company (http://www.nec-labs.com). In our lab we have 
> many research groups (as detailed on the aforementioned website) and a 
> few of them are now using “HPC” technologies like Slurm, and I’ve become 
> the lead admin for these groups. Having no prior background in this 
> realm, I’m learning as fast as I can go :)
> 
> Our “clusters” are collections of 5-30 servers, bought over the years 
> and therefore heterogeneous in hardware. Each node runs a 
> locally-installed OS (i.e. not the traditional head node with 
> PXE-booted diskless minions), controlled as carefully as I can make it 
> through a standard OS install via Cobbler templates, and then further 
> configured via config management (we use Ansible.) Networking is basic 
> 10GbE between nodes (we do have InfiniBand available on one cluster, 
> but it has fallen into disuse since the project that required it 
> ended.) Storage is one or more traditional NFS servers (some use ZFS, 
> some not.) Within the past few years we have adopted the Slurm workload 
> manager for job scheduling on top of these collections, and are now up 
> to three different Slurm clusters, with, I believe, a fourth on the way.
> 
> My first question for this list is basically “do I belong here?” I feel 
> there are a lot of HPC concepts it would be good for me to learn so 
> that I can improve the various research groups’ computing environments, 
> but I’m not sure whether this list is for much larger “true HPC” 
> environments, or would be a good fit for an “HPC n00b” like me...
> 
> Thanks for reading, and let me know your opinions :)
> 
> Best,
> 
> Will
> 
> 
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
> 
