[Beowulf] Introduction and question
Douglas Eadline
deadline at eadline.org
Thu Mar 21 12:47:20 PDT 2019
It should also be pointed out that the early Beowulf
community was largely composed of engineers, computer scientists,
biologists, chemists, and physicists. All had technical backgrounds,
of course, but shared a common goal: cheaper, better, faster.
By definition the Beowulf community (1) has always been
very welcoming. New ideas and diversity have been our
strength.
--
Doug
(1) I am just now trying to define what that Beowulf Community
is, what it does, and what it can do in the future. More to follow.
> "Many employers look for people who studied humanities and learned IT by
> themselves, for their wider appreciation of human values."
>
> Mark Burgess
>
> https://www.usenix.org/sites/default/files/jesa_0201_issue.pdf
>
> On 2/23/19 4:30 PM, Will Dennis wrote:
>>
>> Hi folks,
>>
>> I thought I'd give a brief introduction, and see if this list is a
>> good fit for the questions I have about my HPC-ish
>> infrastructure...
>>
>> I am a ~30yr sysadmin (jack-of-all-trades type), completely
>> self-taught (B.A. is in English, that's why I'm a sysadmin :-P) and
>> have ended up working at an industrial research lab for a large
>> multi-national IT company (http://www.nec-labs.com). In our lab we
>> have many research groups (as detailed on the aforementioned website)
>> and a few of them are now using HPC technologies like Slurm, and
>> I've become the lead admin for these groups. Having no prior
>> background in this realm, I'm learning as fast as I can go :)
>>
>> Our clusters are collections of 5-30 servers, each collection bought
>> over a number of years and therefore heterogeneous in hardware, all
>> with a locally-installed OS (i.e. not the traditional head node with
>> PXE-booted diskless minions). The installs are as carefully controlled
>> as I can make them, using a standard OS install from Cobbler templates
>> and then further configuration via config management (we use Ansible).
>> Networking is basic 10GbE between nodes (we do have InfiniBand
>> available on one cluster, but it has fallen into disuse since the
>> project that required it ended). Storage is one or more traditional
>> NFS servers (some use ZFS, some not). Within the past few years we
>> have adopted the Slurm workload manager as the job-scheduling system
>> on top of these collections, and are now up to three different Slurm
>> clusters, with, I believe, a fourth on the way.
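As a minimal sketch of what driving one of these Slurm clusters can look
like from Python (the partition name, resource requests, and script name
below are hypothetical, and it assumes Slurm's sbatch is on the PATH):

#!/usr/bin/env python3
"""Minimal sketch: submit a batch job to a Slurm cluster via sbatch.

The partition name, resource requests, and script path are hypothetical;
on heterogeneous nodes it helps to request CPUs and memory explicitly
rather than relying on per-node defaults.
"""
import subprocess

def submit_job(script, partition="batch", cpus=4, mem="8G", walltime="01:00:00"):
    """Run sbatch with explicit resource requests and return the job ID."""
    cmd = [
        "sbatch",
        "--partition", partition,
        "--cpus-per-task", str(cpus),
        "--mem", mem,
        "--time", walltime,
        script,
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # sbatch prints a line like "Submitted batch job 12345"
    return out.stdout.strip().split()[-1]

if __name__ == "__main__":
    print("Submitted job", submit_job("my_job.sh"))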
>>
>> My first question for this list is basically: do I belong here? I
>> feel there are a lot of HPC concepts it would be good for me to learn
>> so that I can improve the various research groups' computing
>> environments, but I'm not sure whether this list is meant for much
>> larger, "true" HPC environments, or would be a good fit for an HPC
>> n00b like me...
>>
>> Thanks for reading, and let me know your opinions :)
>>
>> Best,
>>
>> Will
>>
>>