[Beowulf] HPC with CUDA
"C. Bergström"
cbergstrom at pathscale.com
Fri Jun 13 08:34:05 PDT 2014
On 06/13/14 09:16 PM, Greg Keller wrote:
>
> From: "Raphael Verdugo P." <raphael.verdugo at gmail.com>
> To: beowulf at beowulf.org
> Subject: [Beowulf] HPC with CUDA
>
> I need to install 5 GPUs (GeForce GTX 780s) in one server and 1 Tesla
> Kepler K40 in another.
>
> Do you have any recommendations for an HP or Dell server?
> Processor? RAM?
>
>
> We have considered the easily overlooked Dell T620 for these types of
> projects needing a lot of slots and power supplied...
>
> Slots
> 7 PCIe slots:
> Four x16 slots with x16 bandwidth, full-length, full-height
> Two x8 slots with x8 bandwidth, full-length, full-height
> One x8 slot with x4 bandwidth, full-length, full-height
> (http://www.dell.com/us/business/p/poweredge-t620/pd?~ck=anav)
>
> There are rack mount kits available, and you can cram 32 disks in it if
> you end up with a craving for local I/O at some point. It's marketed
> as a "Tower" server, so we overlooked it for a year.
>
> Supermicro has a board that provides eight x16 slots, but I understand
> it's wired so that two slots are effectively sharing 16 lanes. Let me
> know if you find anything more awesome...
Newer Intel CPUs can only supply so many PCIe lanes per socket (32 per
CPU, or 2x 32 in a dual-socket box?), so at some point you'll hit a
bottleneck where you can't actually drive all the slots at full
bandwidth. Some of the supercomputer designs use a 1 GPU to 1 CPU
ratio. I think keeping the workload balanced across that design would be
challenging, and that's hoping all the driver issues have been ironed out.
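To put rough numbers on the lane bottleneck: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, which works out to just under 1 GB/s per lane per direction. A quick back-of-the-envelope sketch (slot widths taken from the T620 list quoted above; these are theoretical peaks, and real host-to-device transfers land noticeably lower):

```python
# Back-of-the-envelope PCIe 3.0 bandwidth arithmetic.
# 8 GT/s per lane, 128b/130b encoding, 8 bits per byte:
GBPS_PER_LANE = 8 * 128 / 130 / 8  # ~0.985 GB/s per lane, per direction

def slot_bandwidth(lanes):
    """Theoretical one-way bandwidth in GB/s for a slot running at `lanes` lanes."""
    return lanes * GBPS_PER_LANE

# T620 electrical widths quoted above: four x16, two x8, one x8 slot wired x4.
slots = [16, 16, 16, 16, 8, 8, 4]
total = sum(slot_bandwidth(n) for n in slots)
print(f"x16 slot: {slot_bandwidth(16):.1f} GB/s each way")      # ~15.8 GB/s
print(f"all slots combined: {total:.1f} GB/s each way")         # ~82.7 GB/s
```

That combined figure is well beyond what the CPUs' own lanes can feed, which is the bottleneck above: the slots can exist physically without the sockets having the lanes to drive them all at once.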