[Beowulf] any gp-gpu clusters?

Jeffrey B. Layton laytonjb at charter.net
Fri Jun 22 08:34:46 PDT 2007


I have no idea if this will help anyone, but here is an article
that might help you get started or at least provide some links:

http://www.linux-mag.com/launchpad/business-class-hpc/main/3533

WARNING: You have to register to read the article (sorry
about that).

From what I understand, CTM is really just the low-level definition
of the interface to AMD's Stream processors. CUDA, on the other hand,
is a real compiler with added features to make coding for GPUs
easier. It also ships with BLAS and FFT libraries (CUBLAS and CUFFT).
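
To give a flavor of what I mean by "a real compiler with libraries",
here is a rough host-side sketch of a CUBLAS matrix multiply. I'm
writing this from memory against the CUDA 1.x BLAS interface, so
treat the exact names and arguments as approximate rather than
gospel:

/* sgemm_sketch.c - hypothetical CUBLAS example (CUDA 1.x legacy API).
 * Build with something like: gcc sgemm_sketch.c -lcublas -o sgemm_sketch
 */
#include <stdio.h>
#include <stdlib.h>
#include <cublas.h>

int main(void)
{
    const int n = 512;                       /* n x n matrices */
    size_t bytes = (size_t)n * n * sizeof(float);
    float *A = malloc(bytes), *B = malloc(bytes), *C = malloc(bytes);
    float *dA, *dB, *dC;
    int i;

    for (i = 0; i < n * n; i++) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

    cublasInit();                            /* attach to the GPU */
    cublasAlloc(n * n, sizeof(float), (void **)&dA);
    cublasAlloc(n * n, sizeof(float), (void **)&dB);
    cublasAlloc(n * n, sizeof(float), (void **)&dC);

    /* push the operands into device memory */
    cublasSetVector(n * n, sizeof(float), A, 1, dA, 1);
    cublasSetVector(n * n, sizeof(float), B, 1, dB, 1);

    /* C = 1.0 * A * B + 0.0 * C, column-major, no transposes */
    cublasSgemm('N', 'N', n, n, n, 1.0f, dA, n, dB, n, 0.0f, dC, n);

    /* pull the result back and spot-check one entry */
    cublasGetVector(n * n, sizeof(float), dC, 1, C, 1);
    printf("C[0] = %f (expect %f)\n", C[0], 2.0f * n);

    cublasFree(dA); cublasFree(dB); cublasFree(dC);
    cublasShutdown();
    free(A); free(B); free(C);
    return 0;
}

The point is that none of this looks like graphics code - the GPU is
just sitting behind an ordinary BLAS-style call.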

I think NVIDIA is ahead in the tools department, but I don't
expect AMD to stay behind for long.

Jeff


> Hi all,
> is anyone messing with GPU-oriented clusters yet?
>
> I'm working on a pilot which I hope will be something like 8x 
> workstations, each with 2x recent-gen gpu cards.
> the goal would be to host cuda/rapidmind/ctm-type gp-gpu development.
>
> part of the motive here is just to create a gpu-friendly 
> infrastructure into which commodity cards can be added and refreshed 
> every 8-12 months.  as opposed to "investing" in quadro-level cards 
> which are too expensive to simply toss when obsoleted.
>
> nvidia's 1U tesla (with two g80 chips) looks potentially attractive,
> though I'm guessing it'll be premium/quadro-priced - not really in 
> keeping with the hyper-moore's-law mantra...
>
> if anyone has experience with clustered gp-gpu stuff, I'm interested 
> in comments on particular tools, experiences, configuration of the host
> machines and networks, etc.  for instance, is it naive to think that 
> gp-gpu is most suited to flops-heavy-IO-light apps, and therefore doesn't
> necessarily need a hefty (IB, 10Geth) network?
>
> thanks, mark hahn.
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf
>
