[Beowulf] [OT] MPI-haters
Brian Dobbins
bdobbins at gmail.com
Thu Mar 10 16:27:42 PST 2016
I like to think that RGB can be 'summoned' by mentioning his name a few
times in a thread... and then magically he appears, waxing poetic about
some interesting area of Beowulfry / HPC, and then vanishes in a puff of
equations.
So that I'm actually contributing something meaningful and not wistfully
remembering the past, I'll add that I think the low traffic is simply
because *building* systems has become much easier - there are plenty of
open-source and proprietary tools if you're inclined to do it yourself, and
plenty of vendors who'll ensure you don't need to. Clearly there's been a
large increase in HPC usage over the years, but the vast majority of those
systems (>98%?) operate at a scale where not *much* needs to be 'figured
out' - e.g., a flat network topology so you don't need to ensure hop-aware
node selection for jobs, parallel file systems that just 'work' and give a
real improvement without requiring you to recompile a kernel, rip your hair
out, etc.
As a corollary to this, years ago most places were still 'experimenting'
with clusters - at universities, they were often run by a research group or
a department, tasked to a narrow area, and serving a small handful of
users. That meant that tinkering with them was very doable - you want to
take the 12-node cluster down for two hours to try a new network driver
that might help your QCD code via better latency? Go for it! Now,
clusters are no longer an 'engineering project' run by a handful of grad
students or Linux geeks; they're a fundamental, central resource for
research communities, and they're larger, serving many more users, and
often managed by dedicated teams of IT staff. When you tried to tinker
with that network driver six years ago, it wasn't a problem. But now you
want the IT department that's running a production cluster 'appliance' to
give you root access to try some beta driver to get results a few percent
faster on their 500-node cluster? Well, I'm going to go out on a limb and
label that as 'unlikely'. ;)
In short, I think the environment we operate under has changed
considerably, leading to less traffic about the nuts and bolts of
clusters. If you no longer need to wrestle with your PXE boot
configuration files because some distribution or tool handles all of that
for you, you no longer need to post your frustrations and questions to the
list for help, right? (I say that because I think I did exactly that
once... a sample of the sort of file I mean is below, for the nostalgic.)
At the same time, the *usage* landscape has diversified quite a bit - so
fewer people know as much about the whole field, and thus certain topics
garner fewer comments.
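For anyone who joined after the tooling matured, a minimal PXELINUX stanza
looked roughly like this - purely a sketch, with every path, filename, and
address invented for illustration:

    # /tftpboot/pxelinux.cfg/default -- fetched over TFTP by each booting node
    # (all paths and addresses below are illustrative)
    DEFAULT compute
    PROMPT 0
    TIMEOUT 50

    LABEL compute
        # diskless NFS-root boot: kernel and initrd served from the head node
        KERNEL vmlinuz
        APPEND initrd=initrd.img ip=dhcp root=/dev/nfs nfsroot=10.0.0.1:/srv/nodeimage ro

Multiply that by per-node variants named after MAC addresses
(pxelinux.cfg/01-<mac-with-hyphens>) and it's easy to see why provisioning
tools that generate all of this automatically removed a whole genre of
questions from the list.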
All in all, though, it's a list with some incredibly experienced people --
maybe it's worth thinking about a better way to use this list as a
resource? For example, instead of it just being a 'How do I do <X>?'
thing, perhaps once a month someone (*cough*Chris Samuel*cough*) gets a
volunteer to write a post about their recent challenges/experiences/etc.?
Just an idea; I know I rarely post questions here, yet when I hear a talk
about something, I always have a bunch of thoughts about it. Thoughts?
Cheers,
- Brian
On Thu, Mar 10, 2016 at 11:48 AM, Prentice Bisbal <
prentice.bisbal at rutgers.edu> wrote:
> On 03/10/2016 01:34 PM, Jeff Becker wrote:
>
>> On 03/10/2016 10:32 AM, Prentice Bisbal wrote:
>>
>>> This list used to get A LOT more traffic. Not sure what happened over
>>> the past few years. I miss the witty banter and information I used to get
>>> from all that traffic, but I definitely don't miss Vincent.
>>>
>>
>> :-)
>>
>
> It just occurred to me that if you know who Vincent or RGB is, you're
> probably an old-timer on this list now.
>