[Beowulf] Register article on Epyc

Scott Atchley e.scott.atchley at gmail.com
Wed Jun 21 13:00:18 PDT 2017

In addition to storage, if you use GPUs for compute, the single socket is
compelling. If you rely on the GPUs for the parallel processing, then the
CPUs only handle the serial portions of the code and the I/O. A single
socket with 32 cores and 128 lanes of PCIe can handle up to eight GPUs at
x16 each, with four CPU cores per GPU. This would be a very dense solution
and could be attractive for data centers as well as HPC.
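The lane and core arithmetic above can be sketched as a quick back-of-the-envelope check; the x16-per-GPU link width is an assumption (the usual full-bandwidth configuration), not something fixed by the platform:

```python
# Back-of-the-envelope sizing for a hypothetical single-socket Epyc GPU node.
# Assumes each GPU is given a full x16 PCIe link (full-bandwidth case).
CPU_CORES = 32
PCIE_LANES = 128
LANES_PER_GPU = 16  # assumed x16 link per GPU

max_gpus = PCIE_LANES // LANES_PER_GPU   # 128 / 16 = 8 GPUs
cores_per_gpu = CPU_CORES // max_gpus    # 32 / 8  = 4 CPU cores per GPU

print(f"{max_gpus} GPUs, {cores_per_gpu} CPU cores per GPU")
```

Running GPUs on narrower links (x8) would double the GPU count on paper, but at the cost of host-device bandwidth per GPU.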

On Wed, Jun 21, 2017 at 12:39 PM, Kilian Cavalotti <
kilian.cavalotti.work at gmail.com> wrote:

> On Wed, Jun 21, 2017 at 5:39 AM, John Hearns <hearnsj at googlemail.com>
> wrote:
> > For a long time the 'sweet spot' for HPC has been the dual socket Xeons.
> True, but why? I guess because there weren't many other options, and in
> the first days of multicore CPUs, it was the only way to have decent
> local parallelism, even with QPI (and its ancestors) being a
> bottleneck. And also to have enough PCIe lanes (40 lanes ought to be
> enough for anyone, right?)
> But now, with 20+ core CPUs, does it still really make sense to have
> dual socket systems everywhere, with NUMA effects all over the place
> that typical users are blissfully unaware of?
> Seems to me like this is a smart design move from AMD, and that
> single-socket systems, with 20+ core CPUs and 128 PCIe lanes could
> make a very cool base for many HPC systems. Of course, that's just on
> paper for now; proper benchmarking will be required.
> Cheers,
> --
> Kilian
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
