[Beowulf] NVIDIA GPUs, CUDA, MD5, and "hobbyists"

Kilian CAVALOTTI kilian at stanford.edu
Thu Jun 19 10:05:15 PDT 2008

On Thursday 19 June 2008 12:17:07 am John Hearns wrote:
> Actually, I should imagine Kilian is referring to something else,
> not the inbuilt timeout which is in the documentation. But I can't
> speak for him.

I don't know about this timeout. As I said, we didn't really have the 
time or the resources to investigate the crashes. But I've definitely 
seen a machine freeze when launching a binary containing CUDA code.

> I guess the art
> here is finding a motherboard with the right number and type of
> PCI-express slots to take both the companion box and a decent
> graphics card for X use.

AFAIK, the multi-GPU Tesla boxes contain up to 4 Tesla processors, but 
are hooked to the controlling server with only 1 PCIe link, right? Does 
that spell "bottleneck" to anyone?

Sure, moving data between host memory and GPU memory is not what you do 
most often, but still.
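For what it's worth, here's a quick back-of-envelope sketch of the 
concern. All the figures are my own round-number assumptions (PCIe 1.1 
x16 at ~4 GB/s theoretical, 4 GPUs sharing the link, G80-class device 
memory at ~86 GB/s), not vendor specs:

```python
# Back-of-envelope: how much host<->GPU bandwidth does each GPU get
# if 4 of them share a single PCIe link? (All numbers are assumed
# circa-2008 round figures, not measured or official specs.)

pcie_x16_gbps = 4.0     # GB/s, PCIe 1.1 x16, theoretical peak
gpus_sharing = 4        # GPUs in the external box on that one link
device_mem_gbps = 86.0  # GB/s, approx. G80-class on-board memory

per_gpu_host_bw = pcie_x16_gbps / gpus_sharing
ratio = device_mem_gbps / per_gpu_host_bw

print(f"Host link per GPU: {per_gpu_host_bw:.1f} GB/s")
print(f"Device memory is ~{ratio:.0f}x faster than the shared host link")
```

So even if host transfers are rare, each GPU would see on the order of 
1 GB/s to the host under these assumptions, roughly two orders of 
magnitude below its own memory bandwidth. Whether that matters depends 
entirely on how often your code crosses the PCIe link.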
