<div dir="ltr"><br><br><div class="gmail_quote">2008/10/3 Vincent Diepeveen <span dir="ltr"><<a href="mailto:diep@xs4all.nl">diep@xs4all.nl</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
On Oct 3, 2008, at 5:45 PM, Joe Landman wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
GPU as a compression engine? Interesting ...<br>
<br>
Joe<br>
<br>
</blockquote>
<br>
> For great compression, it's rather hard to get that to work.
> With a lot of RAM some clever guys manage.
>
> A GPU has a lot of stream processors, yet little RAM per stream processor.

A Tesla has 4 GB of RAM for 240 stream processors. That gives ~17 MB per
stream processor. But the SIMD arrays (streaming multiprocessors, in CUDA
terms) need a lot of threads in flight to tolerate memory latency and not
go idle.
Something on the order of 512 threads per multiprocessor (a Tesla has 30),
so we have 4 GB divided among 15360 threads, which gives ~273 KB per thread.
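
Spelling that arithmetic out (a back-of-the-envelope sketch; the 4 GB,
240 SPs, 30 multiprocessors and 512 threads are the GT200-class Tesla
figures assumed above, not a general rule):

#include <stdio.h>

int main(void)
{
    /* GT200-class Tesla figures, as assumed above */
    long long ram  = 4LL << 30;  /* 4 GB of device RAM             */
    long long sps  = 240;        /* stream processors              */
    long long sms  = 30;         /* streaming multiprocessors      */
    long long tpsm = 512;        /* threads per SM to hide latency */

    printf("per SP:     %lld MB\n", ram / sps / (1 << 20));      /* ~17 MB  */
    printf("per thread: %lld KB\n", ram / (sms * tpsm) / 1024);  /* ~273 KB */
    return 0;
}

A quarter of a megabyte per thread is not much room for a compression
context, which is exactly why the lack of RAM hurts, as Vincent says below.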

> Additionally they all have to execute the same code at a bunch of SP's
> at the same time.
>
> So there is a big need for some real clever new algorithm there,
> as the lack of RAM is gonna hurt really bigtime.

You need to fetch data from the machine's main memory, compress it, and
send the results back, many times over. The bandwidth of PCI Express today
is 8 GB/s, so that is the maximum rate at which a GPU can compress.
You can use some tricks, like overlapping the computation with the I/O to
main memory, but you will be constrained to 8 GB/s either way.
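
A minimal sketch of that overlap trick with CUDA streams; compress_kernel,
the chunk size, and the grid shape are hypothetical, and h_in/h_out must be
pinned (allocated with cudaHostAlloc) for the async copies to really be
asynchronous:

#include <cuda_runtime.h>

#define NSTREAMS 2
#define CHUNK (4 << 20)  /* 4 MB chunks (hypothetical) */

/* hypothetical kernel: each block compresses its slice of in[] into out[] */
__global__ void compress_kernel(const unsigned char *in,
                                unsigned char *out, size_t n);

void compress_all(const unsigned char *h_in, unsigned char *h_out,
                  size_t total)
{
    cudaStream_t stream[NSTREAMS];
    unsigned char *d_in[NSTREAMS], *d_out[NSTREAMS];

    for (int i = 0; i < NSTREAMS; i++) {
        cudaStreamCreate(&stream[i]);
        cudaMalloc((void **)&d_in[i], CHUNK);
        cudaMalloc((void **)&d_out[i], CHUNK);
    }

    for (size_t off = 0; off < total; off += CHUNK) {
        int s = (int)((off / CHUNK) % NSTREAMS);
        size_t n = (total - off < CHUNK) ? total - off : CHUNK;

        /* queued asynchronously: while one stream's chunk crosses the
           PCIe bus, the other stream's chunk is being compressed */
        cudaMemcpyAsync(d_in[s], h_in + off, n,
                        cudaMemcpyHostToDevice, stream[s]);
        compress_kernel<<<(n + 255) / 256, 256, 0, stream[s]>>>
                       (d_in[s], d_out[s], n);
        cudaMemcpyAsync(h_out + off, d_out[s], n,
                        cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaThreadSynchronize();  /* drain both streams */

    for (int i = 0; i < NSTREAMS; i++) {
        cudaStreamDestroy(stream[i]);
        cudaFree(d_in[i]);
        cudaFree(d_out[i]);
    }
}

Even with perfect overlap the pipeline cannot run faster than the bus, so
the 8 GB/s ceiling stands; the trick only keeps the SPs from sitting idle
while data is in flight.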

> Would be a mighty interesting study to get something to work there.
> It allows real complicated mathematical functions for PPM functionality.
>
> What's in that GPU soon will be in a CPU anyway, so a thing like that
> benefits the entire planet.

That will be interesting. With PCI Express removed from the path, the GPU
will become a real parallel coprocessor.

With the pressure for a more flexible programming model coming from
Larrabee, the coprocessor could become as programmable as the main
processor, and we would end up with a few big cores for serial workloads
and many small cores for parallel workloads.

> Where can I ask for funding?
>
> Vincent
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Cheers<br>
Carsten<br>
</blockquote>
<br>
-- <br>
Joseph Landman, Ph.D<br>
Founder and CEO<br>
Scalable Informatics LLC,<br>
email: <a href="mailto:landman@scalableinformatics.com" target="_blank">landman@scalableinformatics.com</a><br>
web : <a href="http://www.scalableinformatics.com" target="_blank">http://www.scalableinformatics.com</a><br>
<a href="http://jackrabbit.scalableinformatics.com" target="_blank">http://jackrabbit.scalableinformatics.com</a><br>
phone: +1 734 786 8423 x121<br>
fax : +1 866 888 3112<br>
cell : +1 734 612 4615<br>
<br>
</blockquote>