[Beowulf] 64bit comparisons
andrewxwang at yahoo.com.tw
Fri Oct 15 20:17:41 PDT 2004
I believe you can get more info from the following mailing lists:
"hpc", "scitech", "xgrid-users"
Also, people on those lists (if I remember correctly)
use LAM-MPI, GridEngine, and also the IBM xlc/xlf
compilers (the XL compilers generate faster code for the G5).
--- "Hujsak, Jonathan T (US SSA)"
<jonathan.hujsak at baesystems.com> wrote:
> Have you gained any new 'lessons learned' since the
> below? Can you recommend a good version of MPI to
> use for these?
> We've been looking at MPICH, MPIPro and also the
> Apple xgrid...
> Jonathan Hujsak
> BAE Systems
> San Diego
> Bill Broadley bill at cse.ucdavis.edu
> Fri May 14 11:48:21 PDT 2004
> On Fri, May 14, 2004 at 09:44:01AM -0700, Robert B Heckendorn wrote:
> > One of the options we are strongly considering for our next cluster
> > is going with Apple Xserves. Their performance is purported to be good
> Careful to benchmark both processors at the same time if that is your
> intended usage pattern. Are the dual-G5s shipping yet? Last I heard,
> yield problems were resulting in uniprocessor-only shipments. My main
> concern is that despite the marketing blurb of two 10 GB/sec CPU
> interfaces (or similar), there is a shared 6.4 GB/sec memory bus.
> > and their power consumption is small.
> Has anyone measured a dual-G5 Xserve with a Kill A Watt or similar?
> > Can people comment on any comparisons between Apple and (Athlon64
> > or Opteron)?
> Personally I've had problems. I need to spend more time resolving
> them; things like:
> * Need to tweak /etc/rc to allow MPICH to use shared memory.
> * Latency between two MPICH processes on the same node is 10-20
>   times the Linux latency. I've yet to try LAM.
> * Differences in semaphores require a rewrite for some of my Linux
>   code.
> * Differences in the IBM Fortran compiler required a rewrite compared
>   to code that ran on Intel's, Portland Group's, and GNU's Fortran
>   compilers.
> Given all that, I'm still interested to see what the G5 is good at
> and on what workloads the G5 wins on perf/price or perf/watt.
> Bill Broadley
> Computational Science and Engineering
> UC Davis
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org