This is a very interesting discussion to me. I have started to purchase components for an 8-core microWulf based on the Calvin College microWulf built by Prof. Joel Adams and his student, except that I will use slightly faster cores: an AMD X2 5400+ in the master node and three AMD X2 4000+ dual-core processors in inexpensive boxes. The master node has an MSI K9N SLI Platinum motherboard with two Gigabit ports, so perhaps the initial configuration of three satellite dual-core CPUs can be extended to a second set of boxes later.

All these AM2-socket CPUs are dual core, and apparently Prof. Adams was able to address them in the microWulf as individual cores, but I believe there is some hyperthreading between the two cores. So what is the story: how can the two cores be addressed individually and still have hyperthreading between them? I am an experienced programmer for von Neumann architecture and a total novice on parallel systems, but as I build the microWulf I wonder whether MPI will decouple the hyperthreading, or whether it is simply not there. From what little I have learned so far, the microWulf switch depends on the relatively slow Gigabit Ethernet, so there is probably time within each dual-core CPU for hyperthreading to occur, if indeed provision is made for hyperthreading in the AMD X2 dual cores. Sorry to ask such a dumb question, but I am trying to learn.
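
From what I have read so far, my understanding (please correct me if I am wrong) is that MPI simply starts one independent process per core, so each of the eight cores shows up as its own rank and nothing is shared between the processes unless messages are passed explicitly. A minimal test I plan to try first, assuming an MPI Fortran compiler wrapper such as mpif90 is available (file and host names here are just placeholders):

   program hello_ranks
      use mpi
      implicit none
      integer :: ierr, rank, nprocs

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      ! Each core on each box appears as its own rank (0 .. nprocs-1);
      ! the processes share nothing unless messages are passed explicitly.
      print *, 'Hello from rank', rank, 'of', nprocs

      call MPI_Finalize(ierr)
   end program hello_ranks

Launched with something like "mpirun -np 8 --hostfile machines ./hello_ranks" (OpenMPI syntax; "machines" is a hypothetical file listing the four boxes), all eight cores should report in as ranks 0 through 7.
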
Don Shillady
Emeritus Professor of Chemistry, VCU
Ashland, VA (working at home)

----------------------------------------
From: richard.walsh@comcast.net
To: toon.knapen@gmail.com; beowulf@beowulf.org
Subject: Re: [Beowulf] multi-threading vs. MPI
Date: Fri, 7 Dec 2007 22:15:25 +0000

-------------- Original message --------------
From: "Toon Knapen" <toon.knapen@gmail.com>

How come there is almost unanimous agreement in the Beowulf community, while the rest of the world is almost unanimously convinced of the opposite? Are we just patting ourselves on the back, or is MPI not sufficiently disseminated, or ... ?

Mmm ... I think the answer to this is that the rest of the world (the non-HPC world) is in a time warp. HPC went through its SMP-threads phase in the early-to-mid 1990s with OpenMP, and then we needed a more scalable approach (MPI). Now that multi-core and multi-socket have brought parallelism to the rest of the Universe, SMP-based parallelism has had a resurgence ... this has also naturally caused some in HPC to revisit the question as nodes have fattened.

The allure of a programming model that is intuitive, expressive, symbolically light-weight, and provides a way to manage the latency variance across memory partitions is irresistible.

I kind of like the CAF extension to Fortran and the concept of co-arrays. A co-array is an array of identical normal arrays, one per active image/process. They are defined as follows:

   real, dimension (N) [*] :: X, Y

If the program is run on 8 cores/processors/images, the * becomes 8: eight 1-D arrays of size N are created, one on each processor. In any reference to the local component of the co-array (the image on the processor referencing it), you can drop the []s; all other (remote) references must include it. This is symbolically light, but it reminds the programmer of every costly non-local reference through the presence of the []s in the assignment or operation. There is much more to it than that, of course, but as the performance gap between carefully constructed MPI applications and CAF-compiled code shrinks, I can see the latter gaining some traction purely for reasons of programming elegance. If you accept the notion that most MPI programs are written at a B- level in terms of efficiency, then the idea of the gap closing may not be so far-fetched. CAF is supposed to be included in the Fortran 2008 standard.

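As a concrete illustration of that convention, here is a minimal, untested sketch (assuming a coarray-capable compiler and the declaration form above, with F2008-style this_image() and sync all):

   program caf_sketch
      implicit none
      integer, parameter :: N = 1024
      real, dimension (N) [*] :: X, Y
      integer :: me

      me = this_image()       ! which image (1 .. num_images()) am I?

      X(:) = real(me)         ! local reference -- no []s needed

      sync all                ! wait until every image has written its X

      if (me == 1) then
         Y(:) = X(:)[2]       ! remote reference -- the []s flag the
      end if                  ! costly fetch from image 2's copy of X
   end program caf_sketch

Run on 8 images, each image fills its own X locally; the single bracketed assignment is the only (potentially remote) communication.
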
rbw

-- 

"Making predictions is hard, especially about the future."

Niels Bohr

-- 

Richard Walsh
Thrashing River Consulting--
5605 Alameda St.
Shoreview, MN 55126