[Beowulf] Broadcast - not for HPC - or is it?

Kevin Van Workum vanw+beowulf at sabalcore.com
Mon Oct 4 11:27:34 PDT 2010


On Fri, Sep 24, 2010 at 6:21 AM, Matt Hurd <matthurd at acm.org> wrote:
> I'm associated with a somewhat stealthy start-up.  Only teaser product
> with some details out so far is a type of packet replicator.
>
> We designed 24-port ones, but settled on 16- and 48-port 1RU designs, as
> this seemed to reflect the users' needs better.
>
> This was not designed for HPC but for low-latency trading as it beats
> a switch in terms of speed.  Primarily focused on low-latency
> distribution of market data to multiple users, as the port-to-port
> latency is in the range of 5-7 nanoseconds and it is a pretty passive
> device with optical foo at the core.  No rocket science here, just
> convenient opto-electrical foo.
>
> One user has suggested using them for their cluster but, as they are
> secretive about what they do, I don't understand their use case.  They
> suggested interest in bigger port counts and mentioned >1000 ports.
>
> Hmmm, we could build such a thing at about 8-9 ns latency, but I don't
> quite get the point, being used to embarrassingly parallel stuff
> myself.  I would have thought this opticast thing doesn't replace an
> existing switch framework and would just be an additional cost rather
> than helping much.  If it has a use, maybe we should build one with
> a lot of ports, though 1024 ports seems a bit too big.
>
> Any ideas on the list about use of low latency broadcast for specific
> applications in HPC?  Are there codes that would benefit?
>
> Regards,
>
> Matt.

Maybe they're doing a Monte Carlo forecast based on real-time market
data: broadcasting the data to 1000+ processes, with each process
using a different random seed to generate independent points in
phase-space. Of course they would then have to send the updated
phase-space somewhere in order to update their likelihoods and issue a
reaction. I suppose if communication were the primary bottleneck, a
doubling of performance would be the upper limit on the gain.
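To make that concrete, here is a minimal sketch of the pattern using
mpi4py (assuming MPI is the transport; the tick contents, the toy price
model, and names like local_estimate are made up for illustration):
rank 0 broadcasts each market update, every rank runs its own
independently seeded Monte Carlo step, and a reduce collects the
estimates back. A replicator like Matt's would only speed up the bcast
half of the loop; the return path still crosses the normal switch
fabric.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process gets its own seed, so the samples are independent.
rng = np.random.default_rng(seed=rank)

for step in range(1000):
    # Rank 0 holds the latest market data (a single toy price here).
    tick = {"price": 100.0 + 0.01 * step} if rank == 0 else None
    # The broadcast is the step a hardware replicator would accelerate.
    tick = comm.bcast(tick, root=0)

    # Each rank samples its own region of "phase-space" (toy random walk).
    paths = tick["price"] * np.exp(0.0001 * rng.standard_normal(10000).cumsum())
    local_estimate = paths.mean()

    # Send the updated estimates back so rank 0 can update likelihoods
    # and react; this part still uses the ordinary interconnect.
    total = comm.reduce(local_estimate, op=MPI.SUM, root=0)
    if rank == 0:
        global_estimate = total / comm.Get_size()

Something like "mpirun -np 1000 python mc_bcast.py" would exercise the
1000+ process case they mentioned.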


-Kevin


> _________________
> www.zeptonics.com
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
>



-- 
Kevin Van Workum, PhD
Sabalcore Computing Inc.
Run your code on 500 processors.
Sign up for a free trial account.
www.sabalcore.com
877-492-8027 ext. 11



