[Beowulf] Interesting google server design

Greg Byshenk greg.byshenk at aoes.com
Fri Apr 3 01:19:52 PDT 2009

On Thu, Apr 02, 2009 at 03:16:22PM +0200, Simon Hogg wrote:
> Robert G. Brown wrote:

> >IIRC Google doesn't use "server grade" anything.  They use OTC parts and
> >do a running computation on failure rates and optimize price performance
> >dynamically.  They are truly industrial scale production here.  For them
> >servicing/replacing a system is cheap:  Box dies.  Employee notes this,
> >grabs box from Big Stack of Boxes, carries it to dead box, removes dead
> >box, replaces it with the new box, presses the power switch, walks away.
> >Problem solved.
> This may have changed, but I remember being told, way back, that Google
> *don't* replace dead nodes; they just turn them off.  Supposedly it wasn't
> cost-effective to repair them or cannibalize them for other nodes.

> As I say, this was a good few years ago now, so the economics now may be
> different (or my original info might have been based on hearsay).

Someone from Google did a presentation at a conference back around 2000
(either LISA or OSCon, I think) where they described their systems.

What they said at that time was that they indeed did not replace nodes
that failed -- at least not as they failed.  If memory serves, they
reported that they simply turned off failed nodes, then went through at
some scheduled time and replaced all the failed nodes at once.
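The "running computation on failure rates" that Brown describes above
might look, in spirit, something like the toy model below.  To be clear,
this is my own illustration, not anything Google published: all the part
names, prices, performance numbers, and failure rates are invented, and
the model just amortizes purchase plus expected replacement cost over
delivered compute.

```python
# Toy model of failure-rate-aware price/performance bookkeeping.
# Every number here is illustrative, not a real measurement.

def cost_per_compute(price, perf, annual_failure_rate,
                     replace_cost=50.0, years=3.0):
    """Expected dollars per unit of delivered compute over the horizon.

    price               -- purchase price of the box ($)
    perf                -- relative performance (arbitrary units)
    annual_failure_rate -- observed failures per box per year
    replace_cost        -- labor/parts cost per swap ($, assumed)
    years               -- planning horizon in years
    """
    expected_swaps = annual_failure_rate * years
    total_cost = price + expected_swaps * replace_cost
    total_compute = perf * years
    return total_cost / total_compute

# Cheap commodity box vs. pricier "server grade" box: the running
# failure statistics decide which actually wins on $/compute.
commodity = cost_per_compute(price=800.0, perf=1.0, annual_failure_rate=0.15)
server    = cost_per_compute(price=2500.0, perf=1.1, annual_failure_rate=0.03)
print(commodity < server)  # prints True: commodity wins despite failing more
```

With these made-up numbers the commodity box wins comfortably even at a
5x higher failure rate, which is the point of the argument: at industrial
scale, swap labor is cheap enough that reliability premiums rarely pay.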

They also described an earlier iteration of their server design that
sounds a bit like the one currently under discussion: four motherboards
screwed onto a flat plate, with drives underneath and an external power
supply.

Greg Byshenk                             
Technical Supervisor/Team Leader ICT     Telephone: +31 (0)71 579 5539
AOES B.V.                                Telefax  : +31 (0)71 572 1277
Huygensstraat 34                         Mobile   : +31 (0)61 809 8713
2201 DK Noordwijk (ZH)                   email:  greg.byshenk at aoes.com
AOES web - <http://www.aoes.com>
