[Beowulf] Jupyter and EP HPC
Lux, Jim (337K)
james.p.lux at jpl.nasa.gov
Mon Jul 30 15:38:58 PDT 2018
Job Queue? At home, my experimental cluster is a pack of 4 BeagleBones running a pretty vanilla Debian - not exactly a mindbender in performance, but easy to fool with for experiments.
At work, yeah, all the usual stuff.
From: Gavin W. Burris [mailto:bug at wharton.upenn.edu]
Sent: Monday, July 30, 2018 10:17 AM
To: Lux, Jim (337K) <james.p.lux at jpl.nasa.gov>
Cc: Fred Youhanaie <fly at anydata.co.uk>; beowulf at beowulf.org
Subject: Re: [Beowulf] Jupyter and EP HPC
Since this is Beowulf, I assume you have a job queue. Check out the batch spawner, too.
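For reference, batchspawner hooks JupyterHub up to a scheduler so each user's notebook server runs as a batch job. A minimal jupyterhub_config.py sketch, assuming the batchspawner package and a Slurm queue (the partition name and resource limits here are hypothetical, not from this thread):

```python
# Hypothetical jupyterhub_config.py fragment: launch each user's
# single-user Jupyter server as a Slurm batch job via batchspawner.
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'
c.SlurmSpawner.req_partition = 'interactive'  # hypothetical partition name
c.SlurmSpawner.req_runtime = '02:00:00'       # per-session walltime
c.SlurmSpawner.req_memory = '4G'
```

The `req_*` traits are substituted into the batch script template batchspawner submits, so they can be adjusted per site without changing the spawner code.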
On Sat 07/28/18 10:21AM EDT, Lux, Jim (337K) wrote:
> That might be exactly it..
> On 7/27/18, 2:17 PM, "Beowulf on behalf of Fred Youhanaie" <beowulf-bounces at beowulf.org on behalf of fly at anydata.co.uk> wrote:
> I'm not a Jupyter user yet; however, out of curiosity I just googled for what I think you're looking for. Is this any good?
> I have now bookmarked it for my own future use!
> On 27/07/18 21:56, Lux, Jim (337K) wrote:
> > -----Original Message-----
> > From: Beowulf [mailto:beowulf-bounces at beowulf.org] On Behalf Of Joe Landman
> > Sent: Friday, July 27, 2018 11:54 AM
> > To: beowulf at beowulf.org
> > Subject: Re: [Beowulf] Jupyter and EP HPC
> > On 07/27/2018 02:47 PM, Lux, Jim (337K) wrote:
> >> I’ve just started using Jupyter to organize my Pythonic ramblings..
> >> What would be kind of cool is to have a high level way to do some
> >> embarrassingly parallel python stuff, and I’m sure it’s been done, but
> >> my google skills appear to be lacking (for all I know there’s someone
> >> at JPL who is doing this, among the 6000 people doing stuff here).
> >> What I’m thinking is this:
> >> I have a high level python script that iterates through a set of data
> >> values for some model parameter, and farms out running the model to
> >> nodes on a cluster, but then gathers the results back.
> >> So, I’d have N copies of the python model script on the nodes.
> >> Almost like a pythonic version of pdsh.
> >> Yeah, I’m sure I could use lots of subprocess() and execute() stuff
> >> (heck, I could shell pdsh), but like with all things python, someone
> >> has probably already done it before and has all the nice hooks into
> >> the IPython kernel.
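A pdsh-like fan-out really can be done with just the stdlib, along the lines Jim describes. A sketch (the commands and host layout are placeholders; on a real cluster each argv would be an `ssh` invocation of the remote model script):

```python
# pdsh-style fan-out sketch: run a batch of commands concurrently
# and gather their stdout, in order.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_command(argv):
    """Run one command and return its stripped stdout."""
    return subprocess.run(argv, capture_output=True, text=True).stdout.strip()

def fan_out(commands, max_workers=8):
    """Run all commands concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_command, commands))

# On a cluster: argv = ["ssh", host, "python3", "model.py", str(param)].
# Demonstrated locally with echo so the sketch is self-contained:
outputs = fan_out([["echo", f"param={p}"] for p in (1, 2, 3)])
print(outputs)  # ['param=1', 'param=2', 'param=3']
```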
> > I didn't do this with IPython or Python ... but this was effectively the way I parallelized NCBI BLAST in 1998-1999 or so. Wrote a Perl script to parse args, construct jobs, move data, submit/manage jobs, recover results, and reassemble output. SGI turned that into a product.
> > -- yes.. but I was hoping someone had done that for Jupyter..
> >     for parametervalue in parametervaluelist:
> >         result = simulation(parametervalue)
> >         results.append(result)
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
> > To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
Gavin W. Burris
Senior Project Leader for Research Computing
The Wharton School, University of Pennsylvania
Search our documentation: http://research-it.wharton.upenn.edu/about/
Subscribe to the Newsletter: http://whr.tn/ResearchNewsletterSubscribe