[Beowulf] Re: Intel quad core nodes?

Robert G. Brown rgb at phy.duke.edu
Wed Oct 10 21:34:15 PDT 2007


On Thu, 11 Oct 2007, Alan Louis Scheinine wrote:

> When dual-core came out we had a debate here
> about the cost of dual-core versus single-core.
> There was not a cost savings if one considered
> just the CPU, but there was a cost savings if
> one considered the entire box.  On the other
> hand, using benchmark figures we saw that
> the performance scaled a little less than the
> simple frequency multiplier would suggest.  In
> the end, at the time dual-core was introduced,
> the choice with regard to "cost-effectiveness"
> was a toss-up (either choice was equally
> cost-effective).

This is a very old cost-benefit computation that goes all the way back
to SMP Pentium Pros.

Over the years it has USUALLY been the case that people with CPU-bound
code would realize a cost-benefit advantage on SMP systems (a category
I would argue still includes modern multiprocessor multicores) compared
to UP systems, because they don't have to buy so many chassis and
needlessly replicate and pay for case, power supply, disk, and network
interface.
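
To make the chassis amortization concrete, here is a back-of-the-envelope
sketch in C.  Every price in it is a made-up placeholder, not a quote for
any real hardware:

    #include <stdio.h>

    /* Back-of-the-envelope cost per core, UP vs dual-socket dual-core.
     * All prices are invented placeholders, not real quotes. */
    int main(void)
    {
        double chassis = 400.0;                       /* case + PSU + disk + NIC */
        double up_box  = chassis + 150.0 + 250.0;     /* board + 1 CPU = 1 core  */
        double smp_box = chassis + 300.0 + 2 * 300.0; /* board + 2 dual-cores    */

        printf("UP  cost/core: $%.2f\n", up_box / 1.0);
        printf("SMP cost/core: $%.2f\n", smp_box / 4.0);
        return 0;
    }

The point is simply that the chassis and its kin are amortized over four
cores instead of one.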

In the earliest days (2.0.x kernels) there was only a single kernel
lock for interrupts, so all interrupt processing was effectively
single-threaded, which caused systems-bound code to scale poorly.  That
hasn't been the case for many kernel generations now, though.

For code that was memory bound, things have been less consistent.  The
ability of memory to keep up with multiple cores has varied quite a lot.
In some years -- for certain motherboards, processors, chipsets, memory
bus speeds, cache sizes -- two processors could run straight to memory
with little to no binding; in others you'd drop to maybe 1.4x UP speed
as they collided on the bus.
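
If you want to know where a given box falls, a minimal STREAM-style triad
(a sketch, not the real benchmark) measures it directly.  Compile with
gcc -O2 -fopenmp, run with OMP_NUM_THREADS=1 and then with one thread per
core, and compare; the array size and repetition count below are arbitrary
as long as the arrays are much larger than cache:

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N    (16 * 1024 * 1024)   /* 128 MB per array, well past cache */
    #define REPS 10

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        if (!a || !b || !c) return 1;
        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        double t0 = omp_get_wtime();
        for (int r = 0; r < REPS; r++) {
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];   /* STREAM-style triad */
        }
        double t = omp_get_wtime() - t0;

        /* three doubles (24 bytes) move per iteration, REPS passes */
        double gb = 3.0 * N * sizeof(double) * REPS / 1e9;
        printf("%d thread(s): %.2f GB/s\n", omp_get_max_threads(), gb / t);
        free(a); free(b); free(c);
        return 0;
    }

If the N-thread number isn't close to N times the single-thread number,
you're colliding on the memory bus.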

For code that was network bound (or bound in several dimensions) things
got even more complicated.  Two processors sharing a single network
channel could easily be bound -- or not.  Adding a second NIC could
unbind the processes -- or not.
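
The arithmetic on the shared channel is simple enough to do in your head,
but for the record (125 MB/s is the theoretical gigabit payload ceiling;
real throughput is lower):

    #include <stdio.h>

    /* Naive per-process share of 1-2 shared gigabit channels.
     * 125 MB/s per channel is the optimistic wire-speed figure. */
    int main(void)
    {
        double chan = 125.0;   /* MB/s per gigabit channel, best case */
        for (int nics = 1; nics <= 2; nics++)
            for (int procs = 2; procs <= 8; procs *= 2)
                printf("%d NIC(s), %d procs: ~%.1f MB/s each\n",
                       nics, procs, nics * chan / procs);
        return 0;
    }

Whether ~15 MB/s per process (8 processes, one NIC) binds you depends, of
course, entirely on your communication pattern.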

In other words, it has always been YMMV, but well worth considering,
especially for mostly-CPU-bound code.  I think that this is still likely
to be the case.  For my personal applications, four cores on each of two
processors in one box might run 8x as fast as UP (per box) with little
to no binding.  I doubt that parallel STREAM will, though, and forcing
IPCs for 8 processes through just 1-2 gigabit channels is likely to
produce problems as well.
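
One crude way to put numbers on the IPC concern: assume each core computes
for a fixed time per step and then has to push a fixed message volume
through the shared channel(s), serialized.  Every constant below is
invented for illustration:

    #include <stdio.h>

    /* Crude speedup model: cores compute in parallel, but their IPC
     * traffic serializes through the shared NIC(s).  All constants
     * are invented for illustration only. */
    int main(void)
    {
        double t_comp = 1.0;    /* s of computation per core per step */
        double msg_MB = 10.0;   /* MB of IPC per core per step */
        double chan   = 125.0;  /* MB/s per gigabit channel, optimistic */

        for (int nics = 1; nics <= 2; nics++) {
            for (int cores = 1; cores <= 8; cores *= 2) {
                double t_comm  = cores * msg_MB / (nics * chan);
                double speedup = cores * t_comp / (t_comp + t_comm);
                printf("%d NIC(s), %d cores: speedup %.2f\n",
                       nics, cores, speedup);
            }
        }
        return 0;
    }

With these made-up numbers, eight cores on one channel deliver well under
8x; adding a second channel helps but doesn't cure it.  YMMV, as always.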

    rgb

>
> We can assume that AMD and Intel have marketing
> specialists that are almost as intelligent as the
> HPC staff at CRS4.  So it is not surprising that at
> the moment quad-core is introduced, its actual
> cost-effectiveness, taking everything into account, is a
> function that matches the dual-core cost-effectiveness.
> A kind of C1 continuity.
>
> best regards,
> Alan
>
>
> Centro di Ricerca, Sviluppo e Studi Superiori in Sardegna
> Center for Advanced Studies, Research, and Development in Sardinia
>
> Postal Address:               |  Physical Address for FedEx, UPS, DHL:
> ---------------               |  -------------------------------------
> Alan Scheinine                |  Alan Scheinine
> c/o CRS4                      |  c/o CRS4
> C.P. n. 25                    |  Loc. Pixina Manna Edificio 1
> 09010 Pula (Cagliari), Italy  |  09010 Pula (Cagliari), Italy
>
> Email: scheinin at crs4.it
>
> Phone: 070 9250 238  [+39 070 9250 238]
> Fax:   070 9250 216 or 220  [+39 070 9250 216 or +39 070 9250 220]
> Operator at reception: 070 9250 1  [+39 070 9250 1]
> Mobile phone: 347 7990472  [+39 347 7990472]
>
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf
>

-- 
Robert G. Brown
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone(cell): 1-919-280-8443
Web: http://www.phy.duke.edu/~rgb
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977


