dealing with lots of sockets (was Re: [Beowulf] automount on high ports)

Bruno Coutinho coutinho at dcc.ufmg.br
Wed Jul 2 12:34:48 PDT 2008


2008/7/2 Perry E. Metzger <perry at piermont.com>:

>
> "Robert G. Brown" <rgb at phy.duke.edu> writes:
> >> Well, it actually kind of is. Typically, a box in an HPC cluster is
> >> running stuff that's compute bound and whose primary job isn't serving
> >> vast numbers of teeny high latency requests. That's much more what a
> >> web server does. However...
> >
> > I'd have to disagree.  On some clusters, that is quite true.  On others,
> > it is very much not true, and whole markets of specialized network
> > hardware that can manage vast numbers of teeny communications requests
> > with acceptably low latency have come into being.  And in between, there
> > is, well, between, and TCP/IP at gigabit speeds is at least a contender
> > for ways to fill it.
>
> I have to admit my experience here is limited. I'll take your word for
> it that there are systems where huge numbers of small, high latency
> requests are processed. (I thought that teeny stuff in HPC land was
> almost always where you brought in the low latency fabric and used
> specialized protocols, but...)
>
> >> Myself, I'm a believer in event driven code. One thread, one core. All
> >> other concurrency management should be handled by events, not by
> >> multiple threads.[....]
>

libevent can be used for event-based servers.
http://www.monkey.org/~provos/libevent/
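
To make the "one thread, one core, everything else is events" idea concrete,
here is a minimal sketch of a single-threaded echo server using the classic
libevent 1.x API (event_init/event_set/event_add/event_dispatch). It is only
an illustration, not production code: port 8000 and the buffer size are
arbitrary and most error handling is omitted.

/*
 * Minimal event-driven echo server sketch (classic libevent 1.x API).
 * Compile with something like: gcc echo.c -levent
 * Error handling is mostly omitted to keep the sketch short.
 */
#include <event.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>

static void on_read(int fd, short which, void *arg)
{
    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n <= 0) {                     /* client closed or error: clean up */
        struct event *ev = arg;
        event_del(ev);
        free(ev);
        close(fd);
        return;
    }
    write(fd, buf, n);                /* echo the data back */
}

static void on_accept(int listen_fd, short which, void *arg)
{
    int fd = accept(listen_fd, NULL, NULL);
    if (fd < 0)
        return;
    struct event *ev = malloc(sizeof(*ev));
    event_set(ev, fd, EV_READ | EV_PERSIST, on_read, ev);
    event_add(ev, NULL);              /* no timeout */
}

int main(void)
{
    struct sockaddr_in sin;
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(8000);
    bind(listen_fd, (struct sockaddr *)&sin, sizeof(sin));
    listen(listen_fd, 128);

    event_init();                     /* one event loop, one thread */
    struct event accept_ev;
    event_set(&accept_ev, listen_fd, EV_READ | EV_PERSIST, on_accept, NULL);
    event_add(&accept_ev, NULL);
    event_dispatch();                 /* loop forever dispatching callbacks */
    return 0;
}

Every connection is just a small struct event plus a callback, so handling
another client is one more procedure call per readable socket rather than
another process or thread.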



>
> > Interesting.  Makes sense, but a lot of boilerplate code for daemons has
> > always used the fork approach.  Of course, things were "smaller" back
> > when the approach was dominant.  The forking approach is easy to program
> > and reminiscent of pipe code and so on.


This site describes several approaches to solving this problem:
http://www.kegel.com/c10k.html

Look for Chromium's X15. It can handle thousands of simultaneous connections
and can saturate gigabit networks even with lots of slow clients.
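
For contrast, the fork-per-connection style the quoted text is talking about
looks roughly like the sketch below. serve_client() is a hypothetical handler
for one connection, and SIGCHLD handling / child reaping and error checks are
left out, so treat it as an outline rather than working boilerplate.

#include <sys/socket.h>
#include <unistd.h>

/* hypothetical per-connection handler, assumed to exist elsewhere */
void serve_client(int fd);

/* Classic fork-per-connection accept loop: easy to write, but one whole
 * process (stack, data segment, context switches) per client. */
void accept_loop(int listen_fd)
{
    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;
        if (fork() == 0) {            /* child: one process per client */
            close(listen_fd);
            serve_client(fd);         /* hypothetical per-connection work */
            close(fd);
            _exit(0);
        }
        close(fd);                    /* parent: hand off, keep accepting */
    }
}

It really is as simple as pipe-style code, which is why so much daemon
boilerplate still looks like this.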


>
> Sure, but it is way inefficient. Every single process you fork means
> another data segment, another stack segment, which means lots of
> memory. Every process you fork also means that concurrency is achieved
> only by context switching, which means loads of expense on changing
> MMU state and more. Even thread switching is orders of magnitude worse
> than a procedure call. Invoking an event is essentially just a
> procedure call, so that wins big time.


As far as I know, process creation can take up to 1,000,000 cycles.
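
For a ballpark on your own box: at a few GHz, 1,000,000 cycles is a few
hundred microseconds, and a quick fork+wait microbenchmark like the sketch
below reports a number in comparable units. It is a rough figure only; it
also counts exit and wait, and copy-on-write makes the cost depend heavily
on the parent's memory footprint.

#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Rough fork+wait microbenchmark; results vary a lot with kernel, CPU
 * and copy-on-write behavior, so treat the output as a ballpark only. */
int main(void)
{
    const int N = 10000;
    struct timeval t0, t1;
    double us;
    int i;

    gettimeofday(&t0, NULL);
    for (i = 0; i < N; i++) {
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);                 /* child does nothing */
        waitpid(pid, NULL, 0);
    }
    gettimeofday(&t1, NULL);

    us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.1f microseconds per fork+wait\n", us / N);
    return 0;
}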


>
>
> Event driven systems can also avoid locking if you keep global data
> structures to a minimum, in a way you really can't manage well with
> threaded systems. That makes it easier to write correct code.
>
> The price you pay is that you have to think in terms of events, and
> few programmers have been trained that way.
>
> Perry
> --
> Perry E. Metzger                perry at piermont.com
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit
> http://www.beowulf.org/mailman/listinfo/beowulf
>