[Beowulf] 'dual' Quad solution from Tyan

Vincent Diepeveen diep at xs4all.nl
Wed Mar 1 18:05:52 PST 2006


Oh Ricardo,

Before I forget to mention it: if power is a concern with SATA disks,
consider that Maxtor disk drives generally draw a LOT more power than
those of several other manufacturers.

At least, that's what I see when I read the power consumption printed on the
disk itself and compare it with other drives.
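
If you want to put a rough number on it, a toy sum like the sketch below shows
how a few extra watts per drive add up across a small cluster. The wattages in
it are made-up placeholders, not measured data; read the real figures off your
own drive labels.

#include <stdio.h>

int main(void)
{
    /* All figures below are hypothetical placeholders, not measured data;
     * substitute the power ratings printed on your own drive labels. */
    const double watts_brand_a = 12.0;   /* hungrier drive, per label    */
    const double watts_brand_b = 7.5;    /* more frugal drive, per label */
    const int drives_per_node  = 4;
    const int nodes            = 8;

    double extra = (watts_brand_a - watts_brand_b) * drives_per_node * nodes;
    printf("extra draw for brand A: %.0f W across the cluster\n", extra);
    return 0;
}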

Note that this didn't stop me from buying 4 Maxtor SATA drives: I expect
those drives to fail soon, and when one does I only want to cycle one street
to a computer shop here that sells nothing but Maxtors (of course, since
those disks bring him the most profit). Returning disks to a computer shop
100 km away is always such a hassle.

YMMV

Still, I'm very interested in hearing from others here what they make of these
huge power differences between hard drives.

Vincent

----- Original Message ----- 
From: "Joe Landman" <landman at scalableinformatics.com>
To: "Ricardo Reis" <rreis at aero.ist.utl.pt>; <beowulf at beowulf.org>
Sent: Wednesday, March 01, 2006 6:21 PM
Subject: Re: [Beowulf] 'dual' Quad solution from Tyan


> On Tue, 28 Feb 2006 22:38:31 +0000 (WET), Ricardo Reis wrote
>> Thank you all for your replies.
>>
>>   1. The system will be used for CFD-intensive calculation, using
>> commercial and in-house codes, MPI flavour;
>
> You want smaller systems then.
>
>>   2. The cluster I've thought to build initially would be:
>>      * 8 nodes (including master), with dual motherboards (2 Opteron CPUs, single core);
>>      * 16 Opterons at 2.4 GHz;
>>      * 4 GB per node (32 GB total);
>>      * 1 x 80 GB disk (SATA II) per node for system and scratch space;
>>      * 2 x 80 GB disks (SATA II) for the system on the master, in RAID 1;
>>      * 3 x 500 GB disks (SATA II) for storage, home;
>>      * 2 Gigabit switches, one for MPI, another for system and NFS;
>>      * Motherboard is the Tyan S2882G3NR-D;
>
> Not the best choice of motherboard.  It uses Broadcom NICs, and we have
> seen higher failure rates than we would like with Tyan motherboards at our
> customers' sites.
>
>>    3. I thought that the latency in this VX50 would be far lower
>> than over the Gigabit network;
>
> Possibly, but at a much higher cost.  If latency is your issue, go with
> Infinipath or Infiniband (for the moment).  I have been hearing interesting
> things about 10 GbE, but haven't had a chance to look into it in great
> depth yet.
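
A side note from me: if you want hard numbers for that latency comparison, a
tiny MPI ping-pong like the sketch below is the usual way to get them. It is
my own toy code, nothing to do with the commercial or in-house codes; it
assumes mpicc is available and that you run exactly two ranks, one on each of
the nodes you want to compare.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char byte = 0;
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {            /* rank 0 sends first, then waits */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {     /* rank 1 echoes the byte back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)                  /* each round trip is two one-way hops */
        printf("approx. one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}

Expect something on the order of tens of microseconds over plain GigE versus
single-digit microseconds over Infiniband or Infinipath, give or take the
stack and the switch.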
>
>>    4. Comparing the cluster against the VX50, the VX50 comes in at
>> around 3500 euro less;
>
> Interesting.  You could get a bunch of single-CPU boards, load them with
> dual-core units, and come in at a lower price point.
>
>>    5. I also thought that the HVAC requirements would be lower for
>> the VX50;
>
> Fewer power supplies, more fans, more noise, and a single point of failure
> (the last one is bad).
>
>>    6. I'm aware that this technology is new and, compared with the
>> cluster option, can be a single point of failure;
>
> Yes.
>
>>    7. Why are 2 single-core CPUs better than one dual-core? Because of
>> shared resources?
>
> Actually, for CFD it depends upon the code and the memory access patterns.
> If you fill up the memory channel with one core, the second core will have
> to wait to access memory.
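
Another note from me: that memory-channel contention is easy to see for
yourself with a crude streaming loop like the sketch below. It is my own toy,
not from any of the codes discussed, and the array sizes are arbitrary. Run
one copy, then run one copy pinned to each core at the same time (e.g. with
taskset on Linux); if the per-copy bandwidth roughly halves, the two cores are
fighting over the same saturated memory channel, which is exactly the effect
Joe describes.

/* build with e.g. "cc -O2 triad_toy.c" (hypothetical filename) */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    (16 * 1024 * 1024)   /* 128 MB per array: well past any cache */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    size_t i;
    int rep;
    clock_t t0;
    double secs, mbytes;

    if (!a || !b || !c)
        return 1;

    for (i = 0; i < N; i++) { a[i] = 1.0; b[i] = 2.0; c[i] = 0.0; }

    t0 = clock();
    for (rep = 0; rep < REPS; rep++)
        for (i = 0; i < N; i++)
            c[i] = a[i] + (rep + 3.0) * b[i];  /* triad-like: streams 3 arrays */
    secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* 3 arrays of N doubles are touched per repetition */
    mbytes = (double)REPS * 3.0 * N * sizeof(double) / 1e6;
    printf("approx. memory bandwidth: %.0f MB/s\n", mbytes / secs);

    free(a); free(b); free(c);
    return 0;
}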
>
>>
>>    thanks for your knowledge sharing,
>>
>>   Ricardo Reis
>>
>>   "Non Serviam"
>>
>>   n.p.: http://radio.ist.utl.pt
>>   n.r.: http://atumtenorio.blogspot.com
>>                      <- Send with Pine Linux/Unix/Win/Mac OS->
>
>
> --
> Scalable Informatics LLC
> http://www.scalableinformatics.com
> phone: +1 734 786 8423
>



