[Beowulf] HPC with CUDA

Adam DeConinck ajdecon at ajdecon.org
Fri Jun 13 11:45:20 PDT 2014


On Fri, Jun 13, 2014 at 10:34:05PM +0700, "C. Bergström" wrote:
> On 06/13/14 09:16 PM, Greg Keller wrote:
> >Supermicro has a board that provides Eight x16 slots, but I understand
> >it's wired so that 2 slots are effectively sharing 16 lanes.  Let me know
> >if you find anything more awesome...
> Newer Intel CPUs can support 1x 32 lanes or 2x 32 lanes? At some point
> you'll hit a bottleneck where you can't actually use all the lanes. In
> some of the supercomputer designs they are doing a 1 GPU to 1 CPU
> ratio. I think getting the workload balanced across this design would
> be challenging, and that's hoping all the driver issues have been
> ironed out.

An IVB Xeon CPU like the E5-2690v2 has 40 PCIe lanes, which I've
typically seen divided into two x16 links and one x8 link. So if you
want full bandwidth between the CPU and each GPU, you're limited to two
GPUs per socket. Most motherboards that provide more than two x16 links
per socket use a PCIe switch to "split" a single x16 link. These
switches can generally provide full bandwidth between any pair of
devices on the switch, but all the devices share a single "uplink" to
the CPU.
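To put rough numbers on that, here's a back-of-the-envelope sketch,
assuming PCIe 3.0 (8 GT/s per lane with 128b/130b encoding) and a
four-GPU switch; the figures are theoretical one-direction peaks, not
measured bandwidth:

```python
# Theoretical one-direction PCIe 3.0 bandwidth per lane:
# 8 GT/s * 128/130 encoding efficiency / 8 bits per byte ~= 0.985 GB/s.
GBPS_PER_LANE = 8 * 128 / 130 / 8

def link_bandwidth(lanes):
    """Theoretical one-direction bandwidth of a PCIe 3.0 link, in GB/s."""
    return lanes * GBPS_PER_LANE

# A 40-lane Xeon split as x16 + x16 + x8:
for lanes in (16, 16, 8):
    print(f"x{lanes}: {link_bandwidth(lanes):.2f} GB/s")

# Four GPUs behind a PCIe switch still funnel host traffic through one
# x16 uplink, so each GPU's sustained share of host bandwidth is ~1/4:
print(f"per-GPU host share: {link_bandwidth(16) / 4:.2f} GB/s")
```

An x16 link works out to roughly 15.75 GB/s each way on paper; in
practice you'll see somewhat less after protocol overhead.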

Choosing the right GPU:CPU ratio depends a lot on what applications you
expect to be running. If your app's performance depends heavily on
transfers between GPU and host memory, you probably don't want more than
two GPUs per CPU socket. But if your app makes heavy use of GPU-to-GPU
peer-to-peer transfers, you might care more about getting as many GPUs
on a socket as you can.
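The trade-off above can be sketched with a toy model. It assumes PCIe
3.0 and a switch with a single x16 uplink, and it simplifies by treating
peer-to-peer bandwidth as a full x16 link for one pair at a time; the
function names and GPU counts are illustrative, not from any particular
board:

```python
# Theoretical one-direction bandwidth of a PCIe 3.0 x16 link (GB/s):
# 8 GT/s * 16 lanes * 128/130 encoding / 8 bits per byte.
X16_GBPS = 8 * 16 * 128 / 130 / 8

def host_bw_per_gpu(n_gpus_on_switch):
    """Host<->GPU bandwidth per GPU when all GPUs on the switch are
    transferring at once: they share the single x16 uplink to the CPU."""
    return X16_GBPS / n_gpus_on_switch

def p2p_bw(n_gpus_on_switch):
    """GPU<->GPU bandwidth for one pair through the switch: a full x16
    path, regardless of how many GPUs hang off the switch."""
    return X16_GBPS

for n in (1, 2, 4):
    print(f"{n} GPUs/switch: host {host_bw_per_gpu(n):.1f} GB/s each, "
          f"p2p {p2p_bw(n):.1f} GB/s per pair")
```

So a host-transfer-bound app sees its per-GPU bandwidth shrink as you
pack GPUs onto one switch, while a peer-to-peer-bound app keeps a full
x16 path between GPUs on the same switch, which is why the workload's
transfer pattern drives the ratio.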

(Very small self-plug: I gave a talk on managing GPU-enabled HPC
clusters at the GPU Technology Conference in March, where I talked a bit
about managing compute node topology as well as other tools. Slides are


-- 
Adam DeConinck
Email: ajdecon at ajdecon.org | Twitter: @ajdecon
Web: https://www.ajdecon.org/
PGP key: https://www.ajdecon.org/ajdecon-key.txt

