[Beowulf] stace_analyzer.pl can't work.
Jeff Layton
laytonjb at att.net
Tue Sep 16 05:22:05 PDT 2008
I've never used Mercurial or any other "real" revision-control tool for tracking changes, but go for it! It will force me to learn one.
I like the idea of a DB, but I'm a bit worried that this will get out of hand. It's a simple tool for doing a quick analysis (although I have bigger plans in mind). I haven't looked at SQLite in a few years. Is it still an in-memory DB, or does it let you dump the DB to a file (or two)?
BTW - thanks for the patch. I like the second option of ignoring any return codes that are negative. Easy change.
Thanks!
Jeff
----- Original Message ----
From: Joe Landman <landman at scalableinformatics.com>
To: Jeff Layton <laytonjb at att.net>
Cc: Mark Hahn <hahn at mcmaster.ca>; Eric.L <eric.l.2046 at gmail.com>; beowulf at beowulf.org
Sent: Tuesday, September 16, 2008 8:13:10 AM
Subject: Re: [Beowulf] stace_analyzer.pl can't work.
A quick patchy-patchy for line 310:
--- strace_analyzer.pl 2008-09-16 07:57:34.000000000 -0400
+++ strace_analyzer_new.pl 2008-09-16 08:01:45.000000000 -0400
@@ -307,7 +307,7 @@
$junk =~ s/[^0-9]//g;
# Keep track of total number bytes read
- $ReadBytesTotal += $junk;
+ $ReadBytesTotal += $junk if ($junk != -1);
# Clean up write unit
($junk1, $junk2)=split(/\,/,$cmd_unit);
There may be other error return codes which are negative, so if you want
to filter those as well, use "unless ($junk < 0)" (equivalently, "if ($junk >= 0)") rather than the above.
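As a concrete sketch of that variant (same line of the script, just a broader filter on the return code):

    # Skip any negative return code, not just -1
    $ReadBytesTotal += $junk if ($junk >= 0);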
As for the rest of the code structure, writing this parser isn't all
that hard, and for those with smaller memories but bigger disks (and a
desire to analyze large straces), we could use the DBIx::SimplePerl
module. Jeff is already putting his arrays together as hashes, and that
module makes it really easy to dump a hash data structure directly into a
database, say a SQLite3 database. That, curiously, could make a bit of
the code easier to deal with/write/debug.
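To make that concrete, here is a minimal sketch of pushing one of the per-file-descriptor hashes into an on-disk SQLite3 database. It uses plain DBI with DBD::SQLite rather than DBIx::SimplePerl (whose helper methods would shorten this further), and the database, table, and column names are made up for illustration:

    use strict;
    use warnings;
    use DBI;

    # %ReadBytes is assumed to map a file descriptor to total bytes read on it
    my %ReadBytes = ( 3 => 1048576, 4 => 8192 );

    # An on-disk database file; use dbname=:memory: for an in-memory one
    my $dbh = DBI->connect("dbi:SQLite:dbname=strace.db", "", "",
                           { RaiseError => 1, AutoCommit => 0 });

    $dbh->do("CREATE TABLE IF NOT EXISTS read_bytes (fd INTEGER, bytes INTEGER)");

    my $sth = $dbh->prepare("INSERT INTO read_bytes (fd, bytes) VALUES (?, ?)");
    while ( my ($fd, $bytes) = each %ReadBytes ) {
        $sth->execute($fd, $bytes);
    }

    $dbh->commit;
    $dbh->disconnect;

A nice side effect is that the analysis (totals, per-fd breakdowns, and so on) can then be plain SQL queries instead of hand-rolled loops.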
The issue you have to worry about in dealing with huge streams of data
is running out of RAM. This happens. Many "common" techniques fail when
the data gets very large compared to available RAM. We had to solve a
large upload/download problem for a customer who decided to use a web
form for multi-gigabyte file uploads and downloads in an application.
The common solution was to pull everything into RAM and massage it from
there. That failed rather quickly.
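The same principle applies here: read the strace output a line at a time and accumulate (or flush to the database) as you go, so memory use stays flat no matter how big the trace is. A minimal sketch, assuming a hypothetical input file name and a simplified view of strace's read() lines:

    use strict;
    use warnings;

    my $trace = "app.strace";    # hypothetical input file
    open my $fh, '<', $trace or die "cannot open $trace: $!";

    my $ReadBytesTotal = 0;
    while (my $line = <$fh>) {
        # e.g.  read(3, "..."..., 4096) = 4096   -- take the return value
        if ($line =~ /^read\(.*\)\s*=\s*(-?\d+)/) {
            my $rc = $1;
            $ReadBytesTotal += $rc if $rc >= 0;   # skip error returns
        }
    }
    close $fh;

    print "Total bytes read: $ReadBytesTotal\n";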
I don't personally have large amounts of "free" time, but I could likely
help out a bit with this. Jeff, do you want me to create something on
our Mercurial server for this? Or do you have it in SVN/CVS somewhere?
Joe
--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web : http://www.scalableinformatics.com
http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615