[Beowulf] Digital Image Processing via HPC/Cluster/Beowulf - Basics
Douglas Eadline
deadline at eadline.org
Mon Nov 5 09:07:46 PST 2012
>
>
> From: CJ O'Reilly <supaiku at gmail.com>
> Date: Saturday, November 3, 2012 3:47 PM
> To: Mark Hahn <hahn at mcmaster.ca>
> Cc: "beowulf at beowulf.org" <beowulf at beowulf.org>
> Subject: Re: [Beowulf] Digital Image Processing via HPC/Cluster/Beowulf -
> Basics
>
>
> Thanks, informative :p
> I'll consider your advice.
>
> If I read correctly, it seems the answer to the question about programming
> was: yes, a program must be written to accommodate a cluster. Did I get
> you right?
>
>
> You got that right. But bear in mind that for your task (whatever it
> is), someone might have written most of the pieces you need already.
> If you're using some computationally intensive utility (finite element
> modeling or raytraced graphics, for instance) as the underpinnings of
> your problem, it may already be cluster-aware.
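To give a flavor of what "a program must be written to accommodate a
cluster" means: nearly every MPI program starts from the same skeleton,
where each process learns its rank and the total process count, then
uses those to pick its share of the work. A minimal C sketch, nothing
application-specific about it:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many in total?    */

        printf("rank %d of %d reporting in\n", rank, size);

        MPI_Finalize();                        /* shut down cleanly     */
        return 0;
    }

Compile with mpicc and launch with something like "mpirun -np 4 ./a.out";
the launcher starts a copy on every node, which is exactly the part a
single-machine program never has to think about.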
>
> But Mark's comments are very true. In general, there is NO turnkey
> solution and whatever is out there will be fine for some parts of your
> problem and a pain for others. So spending a bit of time figuring out
> what it is you are trying to do, and what the parallelization/HPC parts
> are is worth it. No point in a flexible multi-user resource allocation
> system with fancy schedulers and job pre-emption if you're the only user
> of the box, for instance.
>
> It might be worth building a "toy" cluster with, say, 4 nodes working
> against a file server, and fooling around a bit with workloads like the
> one you are planning, to get a feel for it. Don't go for performance, but
> try to understand how your workload can be divided up and what the
> information flows are (lots of node-to-node traffic, or very little?
> Does the shared disk get hit all the time?)
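To make that concrete for image processing: if each node can pull its
own images straight off the file server, the flow is almost all
node-to-disk rather than node-to-node. A rough MPI sketch of that
round-robin split (process_image() and the image count here are made-up
placeholders, not anything from a real package):

    #include <mpi.h>
    #include <stdio.h>

    /* placeholder for the real per-image work: read the file from
       the server, filter/convert it, write the result back */
    static void process_image(int i) { printf("image %d\n", i); }

    int main(int argc, char **argv)
    {
        int rank, size;
        const int nimages = 1000;   /* placeholder image count */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* rank r takes images r, r+size, r+2*size, ...
           no node-to-node traffic, but every rank hits the shared disk */
        for (int i = rank; i < nimages; i += size)
            process_image(i);

        MPI_Finalize();
        return 0;
    }

The moment the per-image results have to be combined is when real
node-to-node traffic (MPI_Gather and friends) shows up, and that is
exactly what the toy cluster will teach you about.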
<shameless plug>
http://limulus.basement-supercomputing.com/
Four motherboards, GigE, one power supply and case, 200+ CPU GFLOPS.
The final software, due out next week, will be a complete open source
stack built on Scientific Linux 6.x with Warewulf, Grid Scheduler,
MPI, popular libraries, and Julia. All integrated and ready to run.
See below about ClusterMonkey.net as well -- a good resource (I did
say shameless).
</shameless plug>
--
Doug
>
> There are a variety of cluster-in-a-box things out there to get started (I
> hesitate to suggest any, because they may not exist any more; back when,
> I tried ClusterMatic and Rocks). It really doesn't matter what you use,
> because as Mark points out, it probably is pretty clunky in some ways, but
> by experiencing the clunkiness, you'll instantly become more expert. And
> worst case, you've spent a week of your life doing it.
>
> Really, a week's playing around can be invaluable. (I wonder if people
> offer short courses on this. It might be useful for people where the
> manager comes in and says, "My boss said we should look at putting X on a
> cluster; can you write up a white paper in a month to lay it all out?")
>
>
> http://www.clustermonkey.net/ might be a decent resource on putting
> together a low-end cluster.
>
> Check out their projects and getting started sections.
>
>
>
--
Doug