With 8 processes, AMD-GNU is better than the others.

Parallel 8-core job results:

AMD-GNU        26.880 sec
AMD-Pathscale  33.746 sec
AMD-Intel10    27.979 sec
Intel-Intel10  30.371 sec

Thank you,
Sangamesh
Consultant, HPC

On Thu, Sep 18, 2008 at 2:08 PM, Bill Broadley <bill@cse.ucdavis.edu> wrote:
> Sangamesh B wrote:
>> Hi Bill,
>>
>> I'm sorry. I composed the mail in a proper format, but it isn't showing
>> up the way I wrote it.
>>
>> Note that I tested three compilers only on the AMD machine; on the
>> Intel machine, only Intel ifort.
>
> Ah, so with 8 threads, what was the Intel time? And the AMD-GNU,
> AMD-Pathscale, and AMD-Intel times?
>
>> Also, there are two results for a single run (not for all; I missed
>> taking results with the time command).
>>
>> I hope this helps,
>>
>> Thanks,
>> Sangamesh
>>
>> On Thu, Sep 18, 2008 at 11:59 AM, Bill Broadley <bill@cse.ucdavis.edu> wrote:
>>
>>> I tried to understand your post, but failed. Can you post a link,
>>> publish a Google spreadsheet, or format it differently?
>>>
>>> You tried 3 compilers on both machines? Which times are for which
>>> CPU/compiler combos? I tried to match up the columns and rows, but
>>> sometimes there were 3 columns and sometimes 4. None of them lines up
>>> nicely under CPU or compiler headings.
>>>
>>> I (like many other folks) read email in ASCII/text, so a table should
>>> look like:
>>>
>>> Serial run:
>>>                 Compiler A   Compiler B   Compiler C
>>> =====================================================
>>> Intel 2.3 GHz       30           29           31
>>> AMD 2.3 GHz         28           32           32
>>>
>>> Note that I used spaces and not tabs so that it appears clear to
>>> everyone regardless of their mail client, ASCII/text vs. HTML, tab
>>> settings, etc.
>>>
>>> I've been testing these machines quite a bit lately and have been quite
>>> impressed with the Barcelona memory system; for instance:
>>>
>>> http://cse.ucdavis.edu/bill/fat-node-numa3.png
>>>
>>> Sangamesh B wrote:
>>>
>>>> The scientific application used is DL-Poly 2.17.
>>>>
>>>> I tested with the GNU, Pathscale, and Intel compilers on the AMD
>>>> Opteron quad-core, and with Intel ifort on the Intel machine. The time
>>>> figures below were taken from the DL-Poly OUTPUT file; where available,
>>>> I also used the time command. Here are the results ("-" marks runs
>>>> where I missed recording the time-command figure):
>>>>
>>>>                  AMD 2.3 GHz (32 GB RAM)                   Intel 2.33 GHz (32 GB RAM)
>>>>                  GNU gfortran  Pathscale    Intel 10 ifort  Intel 10 ifort
>>>> =========================================================================
>>>> 1. Serial
>>>>    OUTPUT file   147.719 sec   158.158 sec  135.729 sec     73.952 sec
>>>>    time command  2m27.791s     2m38.268s    -               1m13.972s
>>>>
>>>> 2. Parallel, 4 cores
>>>>    OUTPUT file   39.798 sec    44.717 sec   36.962 sec      32.317 sec
>>>>    time command  0m41.527s     0m46.571s    -               0m36.218s
>>>>
>>>> 3. Parallel, 8 cores
>>>>    OUTPUT file   26.880 sec    33.746 sec   27.979 sec      30.371 sec
>>>>    time command  0m30.171s (a single run; I didn't note which build)
>>>>
>>>> The optimization flags used:
>>>>
>>>> Intel ifort 10: -O3 -axW -funroll-loops (I don't remember the exact
>>>>                 flag; it was similar to loop unrolling)
>>>> Pathscale:      -O3 -OPT:Ofast -ffast-math -fno-math-errno
>>>> GNU gfortran:   -O3 -ffast-math -funroll-all-loops -ftree-vectorize
>>>>
>>>> I'll also try to use GNU time for the remaining runs:
>>>> http://directory.fsf.org/project/time/
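>>>>
>>>> For the parallel runs that would look roughly like the line below (the
>>>> launcher and executable name are placeholders, not my exact command):
>>>>
>>>>     /usr/bin/time -v mpirun -np 8 ./DLPOLY.X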
>>>>
>>>> Thanks,
>>>> Sangamesh
>>>>
>>>> On Thu, Sep 18, 2008 at 6:07 AM, Vincent Diepeveen <diep@xs4all.nl> wrote:
>>>>
>>>>> How does all this change when you use a PGO-optimized executable on
>>>>> both sides?
>>>>>
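>>>>> (By PGO I mean the usual two-pass build; with the compilers discussed
>>>>> here that is roughly:
>>>>>
>>>>>     icc -prof-gen ...           # instrumented build, run a training input
>>>>>     icc -prof-use ...           # rebuild using the collected profile
>>>>>
>>>>>     gcc -fprofile-generate ...  # the gcc equivalent
>>>>>     gcc -fprofile-use ...
>>>>>
>>>>> The exact flags may differ per compiler version, so check the docs.)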
>>>>>
>>>>> Vincent
>>>>>
>>>>> On Sep 18, 2008, at 2:34 AM, Eric Thibodeau wrote:
>>>>>
>>>>>> Vincent Diepeveen wrote:
>>>>>>
>>>>>>> Nah,
>>>>>>>
>>>>>>> I guess he's referring to the compiler sometimes using single
>>>>>>> precision floating point instead of double precision to get
>>>>>>> something done, and to it sometimes keeping stuff in registers.
>>>>>>>
>>>>>>> That isn't necessarily a problem, but if I remember well, the
>>>>>>> floating point state could get wiped out when switching to SSE2;
>>>>>>> sometimes you lose your FPU register set in that case.
>>>>>>>
>>>>>>> The main problem is that so many dangerous optimizations are
>>>>>>> possible to speed up test sets, because floating point is, from the
>>>>>>> hardware's viewpoint, genuinely slow to do.
>>>>>>>
>>>>>>> Yet in general, the last generations of Intel compilers have
>>>>>>> improved really a lot in this respect.
>>>>>>
>>>>>> Well, running the same code, here is the result discrepancy I got:
>>>>>>
>>>>>> FLOPS: my code has to do 7,975,847,125,000 floating point operations
>>>>>> (~8 Tflop). It takes 15 minutes on an 8 x 2-core Opteron with 32 gigs
>>>>>> of RAM (thank you, OpenMP ;).
>>>>>>
>>>>>> The running times (I ran it a _few_ times, but not the statistical
>>>>>> minimum of 30):
>>>>>>
>>>>>> ICC -> runtime == 689.249  ; summed error == 1651.78
>>>>>> GCC -> runtime == 1134.404 ; summed error == 0.883501
>>>>>>
>>>>>> Compiler flags:
>>>>>>
>>>>>> icc -xW -openmp -O3 vqOpenMP.c -o vqOpenMP
>>>>>> gcc -lm -fopenmp -O3 -march=native vqOpenMP.c -o vqOpenMP_GCC
>>>>>>
>>>>>> No trickery, no smoke and mirrors ;) Just a _huge_ kick-ass k-means,
>>>>>> parallelized with OpenMP (thank gawd, otherwise it takes hours to
>>>>>> run), and a rather big database of 1.4 gigs.
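>>>>>>
>>>>>> The hot loop is essentially of this shape (a stripped-down sketch,
>>>>>> not the actual vqOpenMP.c; the names here are made up):
>>>>>>
>>>>>>     #include <math.h>
>>>>>>
>>>>>>     /* Sum, over all vectors, the squared distance to the nearest
>>>>>>        centroid. The OpenMP reduction order is unspecified, and with
>>>>>>        non-IEEE fp math the compiler may also reassociate the inner
>>>>>>        sums, which is where the "summed error" of the two builds can
>>>>>>        drift apart. */
>>>>>>     double total_error(const float *v, int n, int dim,
>>>>>>                        const float *cent, int k)
>>>>>>     {
>>>>>>         double err = 0.0;
>>>>>>     #pragma omp parallel for reduction(+:err)
>>>>>>         for (int i = 0; i < n; i++) {
>>>>>>             double best = INFINITY;
>>>>>>             for (int c = 0; c < k; c++) {
>>>>>>                 double d = 0.0;
>>>>>>                 for (int j = 0; j < dim; j++) {
>>>>>>                     double t = v[i * dim + j] - cent[c * dim + j];
>>>>>>                     d += t * t;
>>>>>>                 }
>>>>>>                 if (d < best)
>>>>>>                     best = d;
>>>>>>             }
>>>>>>             err += best;
>>>>>>         }
>>>>>>         return err;
>>>>>>     }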
>>>>>>
>>>>>> So this is what I meant by floating point errors. Yes, the runtime
>>>>>> was almost halved by ICC (and this is on an *Opteron*-based system, a
>>>>>> Tyan VX50). But the running time wasn't actually what I was looking
>>>>>> at; it was the precision skew, and that's where I fell off my chair.
>>>>>>
>>>>>> For those itching for a few more specs:
>>>>>>
>>>>>> eric@einstein ~ $ icc -V
>>>>>> Intel(R) C Compiler for applications running on Intel(R) 64,
>>>>>> Version 10.1 Build 20080602
>>>>>> Copyright (C) 1985-2008 Intel Corporation. All rights reserved.
>>>>>> FOR NON-COMMERCIAL USE ONLY
>>>>>>
>>>>>> eric@einstein ~ $ gcc -v
>>>>>> Using built-in specs.
>>>>>> Target: x86_64-pc-linux-gnu
>>>>>> Configured with:
>>>>>> /dev/shm/portage/sys-devel/gcc-4.3.1-r1/work/gcc-4.3.1/configure
>>>>>> --prefix=/usr --bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/4.3.1
>>>>>> --includedir=/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.1/include
>>>>>> --datadir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.3.1
>>>>>> --mandir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.3.1/man
>>>>>> --infodir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.3.1/info
>>>>>> --with-gxx-include-dir=/usr/lib/gcc/x86_64-pc-linux-gnu/4.3.1/include/g++-v4
>>>>>> --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu --disable-altivec
>>>>>> --enable-nls --without-included-gettext --with-system-zlib
>>>>>> --disable-checking --disable-werror --enable-secureplt --enable-multilib
>>>>>> --enable-libmudflap --disable-libssp --enable-cld --disable-libgcj
>>>>>> --enable-languages=c,c++,treelang,fortran --enable-shared
>>>>>> --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu
>>>>>> --with-bugurl=http://bugs.gentoo.org/ --with-pkgversion='Gentoo 4.3.1-r1 p1.1'
>>>>>> Thread model: posix
>>>>>> gcc version 4.3.1 (Gentoo 4.3.1-r1 p1.1)
>>>>>>>
>>>>>>> Vincent
>>>>>>>
>>>>>>> On Sep 17, 2008, at 10:25 PM, Greg Lindahl wrote:
>>>>>>>
>>>>>>>> On Wed, Sep 17, 2008 at 03:43:36PM -0400, Eric Thibodeau wrote:
>>>>>>>>
>>>>>>>>> Also, note that I've had issues with icc generating really fast
>>>>>>>>> but inaccurate code (the fp model is not IEEE *by default*; I'm
>>>>>>>>> sure _everyone_ knows this and I'm stating the obvious here).
>>>>>>>>
>>>>>>>> All modern, high-performance compilers default that way. It's
>>>>>>>> certainly the case that sometimes it goes more horribly wrong than
>>>>>>>> necessary, but I wouldn't ding icc for this default. Compare
>>>>>>>> results with IEEE mode.
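>>>>>>>>
>>>>>>>> (With icc that's something like -fp-model precise, or the older
>>>>>>>> -mp switch; with gcc, just build without -ffast-math. Check your
>>>>>>>> compiler version's docs for the exact spelling.)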
>>>>>>>>
>>>>>>>> -- greg

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf