[Beowulf] [External] RIP CentOS 8
Lux, Jim (US 7140)
james.p.lux at jpl.nasa.gov
Fri Dec 11 16:31:28 UTC 2020
One interesting take on this is my experience as a HPC user at JPL.
At JPL we use both external and in-house clusters (most recently Halo and Aurora, just replaced by Gattaca). They have a fairly consistent user environment, with maybe some changes in which packages are preinstalled (MATLAB, MKL, BLAS, in various versions) and some small differences in mass storage. So Gattaca was a "forklift upgrade": a whole brand-new cluster, physically distributed between JPL and that datacenter near Las Vegas, whose name eludes me as I write this.
However, we also use clusters at SDSC and TACC, and both of those have radically different environments from the ones at JPL: SLURM and Launcher vs. PBS, for instance.
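To make that concrete, here's a minimal sketch of what the difference looks like from a user's chair: the same four-node, one-hour job written out for each scheduler and handed to the site's submit command. The sbatch/srun/qsub commands and #SBATCH/#PBS directives are standard SLURM and PBS, but the job name, binary, and exact resource syntax are placeholders only and vary by site and PBS flavor.

# Hypothetical sketch: one job, two schedulers. Nothing here is
# site-specific; the job name, binary, and resource lines are
# illustrative only.
import subprocess

SLURM_SCRIPT = """#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --nodes=4
#SBATCH --time=01:00:00
srun ./my_code
"""

PBS_SCRIPT = """#!/bin/bash
#PBS -N myjob
#PBS -l nodes=4
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
mpiexec ./my_code
"""

def submit(script_text, scheduler):
    """Write the job script out and pass it to the site's batch system."""
    cmd = {"slurm": "sbatch", "pbs": "qsub"}[scheduler]
    path = f"job_{scheduler}.sh"
    with open(path, "w") as f:
        f.write(script_text)
    # Both sbatch and qsub print a job identifier on success.
    result = subprocess.run([cmd, path], capture_output=True, text=True)
    return result.stdout.strip()

The code itself is trivial; the point is that the batch directives and the submit command are what change when you move between sites, even when the code underneath doesn't.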
I don't recall which OS version either of them runs (my codes aren't particularly OS-specific, and as long as I can compile and run Fortran and Python, I'm happy). As a user, I don't know that I'd even notice if it changed. The computer in my office that runs Linux happens to be Ubuntu, although I don't use it for developing my HPC codes; I'm writing this on a Mac, and I also run the same codes on a Windows PC. So I've evolved a fairly "platform insensitive" work stream. However, I'm also a tiny customer. The folks who run Entry, Descent, and Landing simulations or trajectory calculations for flybys might well be a lot pickier.
I am sure the sysadmins for those clusters DO care, very much, because they're having to keep things up to date, install new packages, etc.
That's likely who would be affected by CentOS, RHEL, etc.
On 12/10/20, 9:41 AM, "Beowulf on behalf of Prentice Bisbal via Beowulf" <beowulf-bounces at beowulf.org on behalf of beowulf at beowulf.org> wrote:
> I've added some comments on LWN - but it may be a tough day for HPC. That
> is the last market segment I can see that is tied to RPM as a "thing".
Actually, I think the opposite is true. HPC clusters are usually
walled-off computing environments, where *most* of the software being
run on them is developed in-house or otherwise compiled from source
code. (I work in academia, where just about 100% of the applications
used are open source.)
Large cluster upgrades are usually "forklift" upgrades, where a new
cluster means a completely new, separate computing environment from the
previous one.
I think these factors make HPC clusters an *easier* place to change
course than other computing environments.
--
Prentice
On 12/8/20 6:47 PM, Andrew M.A. Cater wrote:
> On Tue, Dec 08, 2020 at 09:50:13PM +0000, Jörg Saßmannshausen wrote:
>> Dear all,
>>
>> What I never understood is: why are people not using Debian?
>>
> I don't know - I suggested it 20 years ago when rgb launched his Extreme Linux
> and I use it daily - but not on HPC.
>
> I've added some comments on LWN - but it may be a tough day for HPC. That
> is the last market segment I can see that is tied to RPM as a "thing".
>
> Andy
>
>> I have done some cluster installations (up to 100 or so nodes) with Debian,
>> more or less out of the box, and I did not have any issues with it. I admit I
>> might have missed something I don't know about, the famous unknown unknowns,
>> but by and large the clusters ran rock solid with no unusual problems.
>> I did not use Lustre or GPFS etc. on them; I only played around a bit with
>> BeeGFS and some GlusterFS at a small scale.
>>
>> Just wondering, as people mentioned Ubuntu.
>>
>> All the best from a dark London
>>
>> Jörg
>>
>> On Tuesday, 8 December 2020 at 21:12:02 GMT, Christopher Samuel wrote:
>>> On 12/8/20 1:06 pm, Prentice Bisbal via Beowulf wrote:
>>>> I wouldn't be surprised if this causes Scientific Linux to come back
>>>> into existence.
>>> It sounds like Greg K is already talking about CentOS-NG (via the ACM
>>> SIGHPC syspro Slack):
>>>
>>> https://www.linkedin.com/posts/gmkurtzer_centos-project-shifts-focus-to-centos-stream-activity-6742165208107761664-Ng4C
>>>
>>> All the best,
>>> Chris
>>
>>
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf