[Beowulf] Python libraries slow to load across Scyld cluster
Don Kirkby
dkirkby at cfenet.ubc.ca
Fri Jan 16 16:38:41 PST 2015
Thanks for the suggestions, everyone. I've used them to find more information, but I haven't found a solution yet.
It looks like the time is spent opening the Python libraries, but my attempts to change the Beowulf configuration files have not made it run any faster.
Skylar asked:
> Do any of your search paths (PATH, PYTHONPATH, LD_LIBRARY_PATH, etc.)
> include a remote filesystem (i.e. NFS)? This sounds a lot like you're
> blocked on metadata lookups on NFS. Using "strace -c" will give you a
> histogram of system calls by count and latency, which can be helpful in
> tracking down the problem.
Yes: the compute nodes boot into a local RAM disk and mount /usr/local/lib over NFS from the master node. Looking at the mounted file systems, I can see that the Python libraries, which live at /usr/local/lib/python2.7, are on a network mount.
$ bpsh 5 df
Filesystem                 1K-blocks      Used Available Use% Mounted on
[...others deleted...]
192.168.1.1:/usr/local/lib
                           926067424 797367296  80899808  91% /usr/local/lib
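To answer Skylar's question about search paths directly, I also dumped sys.path on a compute node (bpsh 5 runs the command on node 5); essentially every entry points into that NFS-mounted /usr/local/lib/python2.7 tree:
$ bpsh 5 python2.7 -c "import sys; print '\n'.join(sys.path)"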
I used strace as suggested and found that most of the time is spent in open().
$ bpsh 5 strace -c python2.7 cached_imports_decimal.py
started at 2015-01-16 14:29:45.543066
imported decimal at 0:00:21.719083
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.95    0.040600          44       932       822 open
[...others deleted...]
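For anyone who wants to reproduce this, cached_imports_decimal.py is essentially just a timing wrapper around the import (the real file differs only in detail):
import datetime

start = datetime.datetime.now()
print 'started at %s' % start

import decimal  # the slow line: triggers hundreds of open() attempts

print 'imported decimal at %s' % (datetime.datetime.now() - start)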
I also looked at the timing of the individual system calls to see which files were slow to open:
$ bpsh 5 strace -r -o strace.txt python2.7 cached_imports_decimal.py
$ more strace.txt
[...]
0.000063 open("/usr/local/lib/python2.7/lib-dynload/usercustomize.so", O_RDONLY) = -1 ENOENT (No such file or directory)
0.000701 open("/usr/local/lib/python2.7/lib-dynload/usercustomizemodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
0.127012 open("/usr/local/lib/python2.7/lib-dynload/usercustomize.py", O_RDONLY) = -1 ENOENT (No such file or directory)
0.126985 open("/usr/local/lib/python2.7/lib-dynload/usercustomize.pyc", O_RDONLY) = -1 ENOENT (No such file or directory)
0.127037 stat("/usr/local/lib/python2.7/site-packages/usercustomize", 0x7fff28a973f0) = -1 ENOENT (No such file or directory)
0.000086 open("/usr/local/lib/python2.7/site-packages/usercustomize.so", O_RDONLY) = -1 ENOENT (No such file or directory)
0.126963 open("/usr/local/lib/python2.7/site-packages/usercustomizemodule.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[...]
There were many, many entries like this, all under /usr/local/lib/python2.7 or /usr/java/jre1.6.0_19/lib/amd64.
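A quick way to pick the slow calls out of a big trace is to filter on the relative timestamp in the first column (the 0.01s threshold is arbitrary):
$ awk '$1 > 0.01' strace.txt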
Jeff White suggested:
> If you are using these libraries often and they exist on a remote server
> (NFS or whatever) you may want to use the "libraries" or "prestage"
> directives in Scyld's config to put them on compute nodes instead.
Both of the directories being read were already listed in /etc/beowulf/config:
libraries /usr/local/lib
libraries /usr/java/jre1.6.0_19/lib/amd64/
I'm guessing that's why importing the collections module speeds up when I run it several times. Is there a configuration setting that will let a compute node cache more libraries?
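The speed-up is easy to reproduce by timing just the import on a node; the first run takes seconds, and after a few runs it drops to almost nothing:
$ bpsh 5 python2.7 -c "import time; t = time.time(); import collections; print time.time() - t"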
Based on Jeff White's suggestion, I tried adding a prestage directive for my Python libraries:
prestage /usr/local/lib/python2.7
I then tried reloading the service and rerunning the test.
# /sbin/service beowulf reload
There was no change in the time taken to run the test. When I looked at the documentation more closely, I found that the library paths have to end with a / if I want their subdirectories to be cached, and that I should use restart instead of reload. I changed the config to this:
libraries /usr/local/lib/
libraries /usr/java/jre1.6.0_19/lib/amd64/
I restarted the service and reran the test. This time the compute nodes rebooted.
# /sbin/service beowulf restart
Still no change.
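One more data point I can collect is to time a single open() of a representative library file directly on the node. If the caching were working, I'd expect this to drop well below the ~0.127s per call seen in the strace output above:
$ bpsh 5 python2.7 -c "import time; t = time.time(); open('/usr/local/lib/python2.7/decimal.py').close(); print time.time() - t"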
John Hearns suggested:
> If this is not an MPI code maybe you would be better running it using some sort of parallel shell, eg. pdsh
My trivial example doesn't use MPI, but my real script does. I trimmed away everything I could to find the smallest script that was still slow to run: it just imports the decimal module, yet takes around 25s. If I trim a bit further and only import the collections module, it initially takes around 10s, but speeds up to under 0.01s after several runs. (The decimal module imports the collections module.)
In addition, I see the same slowdown whether I launch the script with mpirun or bpsh.
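For concreteness, the comparison was just the obvious one (the mpirun flags will vary with the MPI stack):
$ time bpsh 5 python2.7 cached_imports_decimal.py
$ time mpirun -np 1 python2.7 cached_imports_decimal.py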
Any other suggestions?
Don