[Beowulf] Building new cluster - estimate
Ivan Oleynik
iioleynik at gmail.com
Tue Jul 29 20:16:02 PDT 2008
>
> vendors have at least list prices available on their websites.
>>
>>
>> I saw only one vendor, siliconmechanics.com, that has an online
>> configurator. Others require direct contact with a salesperson.
>>
>
> This isn't usually a problem if you have good specs that they can work
> from.
>
Yes, I do have good specs; see my original posting, although I might consider
AMD as well. Joe, can you provide a quote?
> You will pay (significantly) more per rack to have this. You seemed to
> indicate that bells and whistles are not wanted (e.g. "cost is king").
>
The air conditioning problem has been solved; I will put my new cluster in a
proper room with enough power and cooling capacity (BTUs) to dissipate the heat.
> The hallmarks of good design for management of
> power/heat/performance/systems *all* will add (fairly non-trivial) premiums
> over your pricing. IPMI will make your life easier on management, though
> there is a cross-over where serial consoles/addressable and switchable PDUs
> make more sense. Of course grad students are "free", though the latency to
> get one into a server room at 2am may be higher than that of the IPMI and
> other solutions.
>
Yes, I will consider IPMI, as people advise.
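For concreteness, here is a minimal sketch (not from this thread) of the kind
of remote control IPMI buys you. It assumes ipmitool is installed on the head
node, each node's BMC sits on a reachable management LAN, and the hostname and
credentials below are placeholders:

#!/usr/bin/env python3
"""Sketch: remote node management over IPMI via ipmitool."""
import subprocess

BMC_USER = "admin"       # placeholder BMC credentials
BMC_PASS = "changeme"

def ipmi(bmc_host, *args):
    """Run one ipmitool command against a node's BMC over the LAN."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout

if __name__ == "__main__":
    node = "node01-bmc"  # hypothetical BMC hostname
    # Query power state, then hard power-cycle a wedged node --
    # no 2am walk to the machine room required.
    print(ipmi(node, "chassis", "power", "status"))
    ipmi(node, "chassis", "power", "cycle")
    # For console access, "ipmitool ... sol activate" attaches the
    # serial-over-LAN console (interactive, so not captured here).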
>
>
> Some vendors here can deliver the San Clemente based boards in compute
> nodes (DDR2). DDR3 can be delivered on non-Xeon platforms, though you lose
> other things by going that route.
>
Would the 5100 chipset work with the 5400-series Harpertown Xeons?
> If cost is king, then you don't want IPMI, switchable PDUs, serial
> consoles/kvm over IP, fast storage units, ...
>
Yes, except for IPMI, as people advised.
>
> Listening to the words of wisdom coming from the folks on this list, I
> suggest that revising this plan, to incorporate at least some elements that
> make your life easier, is definitely in your interest.
Yes, this is what I am doing after getting this excellent feedback from all
of you.
>
> We agree with those voices. We are often asked to help solve our customers'
> problems remotely. Having the ability to take complete control (power,
> console, ...) of a node via a connection enables us to provide our customers
> with better support, especially when they are a long car/plane ride away.
>
We usually manage the clusters ourselves because we don't have the resources
in academia for expensive support contracts beyond the standard 3-year
hardware warranty.
>
> I might suggest polling the people who build clusters for their research,
> offline, and asking them what things they have done, or wish they had done.
> You can always buy all the parts from Newegg and build it yourself if you
> wish. Newegg won't likely help you with subtle booting/OS load/BIOS
> versioning problems. Or help you identify performance bottlenecks under
> load. If this is important to you, ask yourself (and the folks on the list)
> what knowledgeable support and good design is worth.
Yes, agreed. I would like to get this feedback from people in academia: what
things they wish they had done, looking back.
Thanks, Joe
Ivan