[Beowulf] Re: MS Cray

Robert G. Brown rgb at phy.duke.edu
Wed Sep 17 20:36:40 PDT 2008


On Wed, 17 Sep 2008, Eric Thibodeau wrote:

> David Mathog wrote:
>> Getting back to the original subject, what would this Cray box "look
>> like" when it is running windows?  Does it show up as one desktop for
>> everything (basically an SMP machine), one desktop per blade, one per
>> processor(or core), or even virtualized, with more than one desktop per
>> core?  In terms of administering the box the first of these would be by
>> far the easiest to deal with, since there would only be the one copy of
>> Windows present.
>> 
> I seriously doubt that MS is presenting the entire system as one huge SMP.
> If that's the case, I'd stay away from it, since it implies either that you
> have to use a proprietary API to get performance (inter-core communication
> à la MPI) or that the model is OpenMosix-ish... which IMHO is a nice theory
> but a horrible model in practice. My impression/technical view is that the
> system most probably runs off a "master" board with slave boards that boot
> from a network image (pretty much like NFS roots). That is the most logical
> approach, again IMHO (single point of management and all).

I thought it had already been said that it runs MPI, so it isn't "real
SMP" (whatever that means).  I would put it a different way -- it runs N
kernels, not one N-way kernel, and it would have to simulate shared memory,
as there is (probably) no hardware-supported NUMA or the like across the
processors.  Or rather, it probably doesn't try to support anything like
NUMA at all and just uses message passing on top of a very normal-looking
beowulf/cluster architecture.  Didn't it say that in the original
article?  Or was it in one of the reposts of the article?
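
Just to make the distinction concrete, here is a toy sketch (assuming
nothing beyond a stock MPI implementation -- nothing Cray- or MS-specific
is implied): on a real shared-memory box both processes could simply read
the same variable, but with N separate kernels rank 0 has to ship the
bytes over the interconnect explicitly:

    /* toy sketch: explicit message passing, NOT shared memory */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* rank 0 must SEND the data -- no other kernel can just read it */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d over the interconnect\n", value);
        }

        MPI_Finalize();
        return 0;
    }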

I think the view of the universe with MS "clustering" is that all the
systems boot up a "headless Windows" of some sort, and one can
access/configure the node OSes via rdesktop or some such.  But I'm
guessing they have it preconfigured so that an MPI task on the "head
node" automagically distributes across the worker nodes.

    rgb

>> Regards,
>> 
>> David Mathog
>> mathog at caltech.edu
>> Manager, Sequence Analysis Facility, Biology Division, Caltech
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org
>> To change your subscription (digest mode or unsubscribe) visit 
>> http://www.beowulf.org/mailman/listinfo/beowulf
>> 
>
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit 
> http://www.beowulf.org/mailman/listinfo/beowulf
>

-- 
Robert G. Brown                            Phone(cell): 1-919-280-8443
Duke University Physics Dept, Box 90305
Durham, N.C. 27708-0305
Web: http://www.phy.duke.edu/~rgb
Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
