newbie: 16-node 500Mbps design

Laurent Itti itti at cco.caltech.edu
Fri Aug 18 21:24:32 PDT 2000


Thanks for your reply!

> I think 5 ethernet cards per node working is pushing it.  Will you even be
> able to stuff that many onto your motherboards?  If you can, getting linux to
> work properly with that many cards, generating lots and lots of interrupts, is
> going to be a challenge, if not impossible.  Even with dual fast ethernet,
> getting maximal performance out of linux TCP is not easy.  Check the list
> archives about TCP stalls to see the problems people are having.

great, exactly the kind of info I was looking for (the MBs have all
peripherals on-board and 6 open slots). I'll check it out!

> With 16 singles you have 16 CPUs each connected to the others via a 100+mbs
> ethernet link.  With 8 duals you have 16 CPUs, with each CPU connected to one
[...]

yes, that's the way I had understood the dilemma so far.  I will have to
think about what type of communication flow we will mostly have.
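The bandwidth side of that trade-off is simple arithmetic; a quick sketch, assuming 100 Mbps per NIC (the 3-NICs-per-node figure is illustrative, not the final design):

```python
# Per-CPU network bandwidth for the two layouts, assuming 100 Mbps per NIC.
# ASSUMPTION: 3 NICs per node is an illustrative count, not the final design.
nic_mbps = 100

# 16 single-CPU nodes: every CPU owns all of its node's links.
singles_per_cpu = 3 * nic_mbps

# 8 dual-CPU nodes with the same 3 NICs: two CPUs share one node's links,
# but CPU-to-CPU traffic inside a node never touches the network at all.
duals_per_cpu = 3 * nic_mbps / 2

print(f"singles: {singles_per_cpu} Mbps/CPU, duals: {duals_per_cpu} Mbps/CPU")
```

Which layout wins depends on whether the communication pattern is dominated by nearest-neighbour exchanges (where duals get the free in-node path) or all-to-all traffic (where singles' dedicated links matter more).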

> > Qty. 2, 128Mb PC133 168-pin SDRAM			$270	local
> 
> No ECC?  With over 2GB of total RAM, the probability that you will get a single

excellent suggestion! thanks!
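To put a rough number on that warning, here is a back-of-envelope estimate of how likely a single-bit error becomes across the cluster's RAM; the per-bit soft-error rate is an assumed, illustrative figure, not a datasheet value:

```python
import math

# Back-of-envelope sketch: chance of at least one single-bit soft error
# somewhere in the cluster's RAM over a month of uptime.
# ASSUMPTION: p_bit_per_hour is an illustrative guess, not a measured rate.
total_bytes = 2 * 2**30            # "over 2GB" of RAM across the cluster
bits = total_bytes * 8
p_bit_per_hour = 1e-12             # assumed per-bit soft-error probability per hour
hours = 24 * 30                    # one month of uptime

expected_errors = bits * hours * p_bit_per_hour
p_at_least_one = 1 - math.exp(-expected_errors)   # Poisson approximation

print(f"expected bit errors per month ~ {expected_errors:.1f}")
print(f"P(at least one error in a month) ~ {p_at_least_one:.6f}")
```

Even with a much smaller assumed rate, the sheer number of bit-hours makes an undetected flip likely over the machine's lifetime, which is the case for ECC.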

> > Mandrake Linux 7.1 deluxe				$56	linuxmall
> 
> No need to buy a copy of linux for each machine.  It's those people building
> NT clusters who have to spend half their money on software.

well, as pointed out by the previous email, it's only going to be $800 in
total, i.e., less than a single Matlab license with the signal & image
toolboxes, and less than 5% of total cost.  The idea was to say "thank
you" for the great OS.  If there is a better way to spend those 5% on
helping the Linux community, I am open to it. But it must be something
that I can put on a P.O. without the purchasing dept thinking that I am
wasting money on donations.

> You can't have a channel-bonded machine on the same network with a non-channel
> bonded machine, like you have here.  Your control node is connected to one
> switch, while each node is connected to 5.  This means that 4 out of 5 packets

excellent point! so I guess 5 ethers to the cluster, and one more to the
outside for that node ;-)   but given your first comment, the interconnect
will probably scale down anyway (maybe 2 or 3 NICs instead of 5).
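For reference, bringing up a channel-bonded interface on Linux of that era goes through the `bonding` kernel module and `ifenslave`; a minimal sketch, assuming three NICs on the cluster network (the interface names and IP address are placeholders):

```shell
# Load the bonding driver (its default mode round-robins packets across slaves).
modprobe bonding

# Bring up the bond interface with this node's cluster address (placeholder IP).
ifconfig bond0 192.168.1.1 netmask 255.255.255.0 up

# Enslave the physical NICs; they all answer on bond0's address afterwards.
ifenslave bond0 eth0 eth1 eth2
```

The node facing the outside world would keep a separate, un-bonded interface for external traffic, so that every host on the bonded network sees the same number of channels.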

thanks again for the great feedback!

  -- laurent

More information about the Beowulf mailing list