[Beowulf] visualization machine
Andrew Robbie (GMail)
andrew.robbie at gmail.com
Sun Mar 30 07:17:41 PDT 2008
On Thu, Mar 27, 2008 at 9:41 PM, Ricardo Reis <rreis at aero.ist.utl.pt> wrote:
>
> Hi all
>
> I beg to take advantage of your experience although the topic isn't
> completly cluster thing. I got some money to buy a new machine, at least
> 8Gb and I'm thinking between a 2 x dual core or a 1 x quad (or even 2x
> quads). It must be one machine because it must (urgh) be able to use it's
> 8Gb in serial codes (don't ask).
Just be aware that most of the machines designed to be number crunchers
have shortcomings in board layout or bus design that make them suck for
visualization.
For starters, not many boards will be happy with 8 GB. So few machines are
ever actually populated with big DIMMs that you almost always hit issues.
So you end up going for machines with lots of RAM slots, ECC support, etc.,
which is all good. These are almost always at least dual socket. But many of
those motherboards aren't designed to take an x16 PCIe graphics card and only
have x8 PCIe slots. Also, graphics cards have an extra retaining lug which
extends further than the PCIe slot; on server motherboards this is commonly
blocked by some capacitor.
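If you want to sanity-check a candidate board, the negotiated link width
shows up in lspci. Here's a rough Python wrapper (a hypothetical helper, not
part of any package; it just parses `lspci -vv`, and reading the capability
blocks usually needs root):

    #!/usr/bin/env python
    # Sketch: report the negotiated PCIe link width for each VGA-class
    # device by parsing `lspci -vv`. Run as root so LnkSta is visible.
    import re
    import subprocess

    def vga_link_widths():
        out = subprocess.check_output(["lspci", "-vv"]).decode()
        widths = {}
        device = None
        for line in out.splitlines():
            if line and not line[0].isspace():
                # Lines starting in column 0 begin a new device record.
                device = line if "VGA compatible controller" in line else None
            elif device and "LnkSta:" in line:
                m = re.search(r"Width (x\d+)", line)
                if m:
                    widths[device] = m.group(1)
        return widths

    if __name__ == "__main__":
        for device, width in sorted(vga_link_widths().items()):
            print(width, device)

An x16 card sitting in an x8 slot will report "Width x8" here, which is
exactly the kind of thing the spec sheet won't tell you.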
High-end graphics cards always take up two slots and require additional
power; on the Quadro 5600 and some other cards the power connector enters
from the top rather than the end, which makes it impossible to fit them in a
3U case. Oh yeah -- don't think about putting one of these under your desk
unless you want to wear earmuffs in the office.
> Anyway, I've been experimenting with ParaView for parallel visualization
> and was wondering about your opinion on... buying an ultra-duper-cool
> state-of-the-art graphics card (Nvidia) or 2 graphics cards?
Depends -- is performance critical *now*? If so, buy the fastest Quadro. If
you want to maximize performance over time, just upgrade the graphics card
every six months to whatever sits at the sweet spot on the price/performance
curve. Quadros are the first low-yield parts out of the fab; the same chips,
with a slightly slower/cheaper memory hierarchy, become mass-market parts
later.
Don't bother with SLI; you won't notice any speedup unless you invest a lot
of tuning time, and since your viz app is third-party, probably no speedup
at all.
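For what it's worth, driving ParaView in client/server mode looks roughly
like this (just a sketch, not your setup: it assumes a pvserver is already
listening on the default port 11111, e.g. started with "mpirun -np 4
pvserver", and that the paraview.simple module is importable):

    # Minimal ParaView client/server sketch.
    from paraview.simple import Connect, Sphere, Show, Render

    Connect("localhost", 11111)   # attach this script to the remote pvserver
    src = Sphere(ThetaResolution=64, PhiResolution=64)
    Show(src)                     # geometry lives on the server side
    Render()

The point being: the rendering path is whatever ParaView decides, so exotic
multi-GPU setups on the client buy you very little.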
ATI vs nVidia: ATI drivers really, really suck. nVidia drivers are generally
stable unless you are on the bleeding edge (e.g. a brand-new part) or a
corner case (e.g. quad-buffered stereo on a 2.2 kernel but with recent
hardware and drivers). nVidia developer support sucks too unless you are a
major game author or, say, Industrial Light & Magic. ATI developer support
is non-existent under Linux; under Windows I'm told they can be OK about
fixing Windows bugs.
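Whatever card you end up with, it's worth checking that the accelerated
driver is actually the one in use. Something like this rough sketch, which
just parses glxinfo output, does the job:

    #!/usr/bin/env python
    # Summarise the OpenGL driver actually in use by parsing `glxinfo`.
    import subprocess

    WANTED = ("direct rendering:", "OpenGL vendor string:",
              "OpenGL renderer string:", "OpenGL version string:")

    out = subprocess.check_output(["glxinfo"]).decode()
    for line in out.splitlines():
        if line.strip().startswith(WANTED):
            print(line.strip())
    # "direct rendering: No", or a Mesa/software renderer string, usually
    # means the vendor's binary driver isn't actually loaded.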
Regards,
Andrew
(flight simulation geek)
>
> thanks for your time,
>
> Ricardo Reis
>
> 'Non Serviam'
>
> PhD student @ Lasef
> Computational Fluid Dynamics, High Performance Computing, Turbulence
> http://www.lasef.ist.utl.pt
>
> &
>
> Cultural Instigator @ Rádio Zero
> http://www.radiozero.pt
>
> http://www.flickr.com/photos/rreis/