[Beowulf] C-Sharifi: Next generation of HPC
Ehsan Mousavi
mousavi.ehsan at gmail.com
Mon Dec 3 21:47:36 PST 2007
C-Sharifi Cluster Engine: The Second Success Story of the "Kernel-Level
Paradigm" for Distributed Computing Support
Contrary to the two schools of thought on providing system software support
for distributed computation, which advocate either developing a whole new
distributed operating system (like Mach) or developing library-based or
patch-based middleware on top of existing operating systems (like MPI,
Kerrighed, and Mosix), Dr. Mohsen Sharifi hypothesized a third school of
thought in his 1986 thesis: that all distributed systems software
requirements and support can be, and must be, built at the Kernel Level of
existing operating systems; requirements like Ease of Programming,
Simplicity, Efficiency, and Accessibility, which may collectively be coined
Usability. Although this belief was hard to realize, a sample byproduct
called DIPC was built purely on this thesis and openly announced to the
Linux community worldwide in 1993. DIPC was admired for providing the
necessary support for distributed communication at the Kernel Level of
Linux for the first time, and for the Ease of Programming that came as a
consequence of being realized at the Kernel Level. At the same time,
however, it was criticized as inefficient. This criticism did not force the
school to trade Ease of Programming for Efficiency; instead, the team
worked hard to achieve Efficiency alongside Ease of Programming and
Simplicity, without abandoning the school that advocates providing all
needs at the kernel level. The result of this effort is now manifested in
the C-Sharifi Cluster Engine.
C-Sharifi is a cost-effective distributed system software engine that
supports high-performance computing on clusters of off-the-shelf computers.
It is implemented wholly in the kernel and, as a consequence of following
this school, it offers Ease of Programming, Ease of Clustering, and
Simplicity, and it can be configured to fit the efficiency requirements of
high-performance applications as closely as possible. It supports both the
distributed shared memory and the message passing styles, it is built into
Linux, and in some scientific applications (such as meteorology and
cryptanalysis) its cost/performance ratio has been shown to be far better
than that of non-kernel-based solutions and engines (like MPI, Kerrighed,
and Mosix).
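As an illustration of the Ease of Programming that kernel-level support
gives, here is a minimal C sketch in the style of DIPC, where an ordinary
System V shared memory segment is made cluster-wide by a single flag. The
IPC_DIPC flag and its numeric value here are assumptions based on the
published DIPC design; C-Sharifi's actual interface may differ.

/* Hypothetical DIPC-style distributed shared memory in plain C.
 * The only cluster-specific element is the IPC_DIPC flag; everything
 * else is the standard System V shared memory API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef IPC_DIPC
#define IPC_DIPC 00010000  /* assumed flag: "make this segment distributed" */
#endif

int main(void)
{
    /* All nodes that use the same key see the same segment. */
    int shmid = shmget((key_t)0xC5AF, 4096,
                       IPC_CREAT | IPC_DIPC | 0666);
    if (shmid == -1) {
        perror("shmget");
        return EXIT_FAILURE;
    }

    char *mem = (char *)shmat(shmid, NULL, 0);
    if (mem == (char *)-1) {
        perror("shmat");
        return EXIT_FAILURE;
    }

    /* A write here becomes visible to the same program running on other
     * cluster nodes; the kernel, not a user-level library, distributes it. */
    strcpy(mem, "hello from one node");
    printf("segment contains: %s\n", mem);

    shmdt(mem);
    return EXIT_SUCCESS;
}

The point of this school is visible in the sketch: except for one flag, the
code is identical to an ordinary single-machine SysV IPC program, so it
needs no rewriting to run across a cluster.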
Best Regards,
~Ehsan Mousavi
C-Sharifi Development Team
-----Original Message-----
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On
Behalf Of Toon Knapen
Sent: Sunday, December 02, 2007 6:22 PM
To: Mark Hahn
Cc: Beowulf Mailing List
Subject: Re: [Beowulf] Using Autoparallel compilers or Multi-Threaded
libraries with MPI
Mark Hahn wrote:
>> IMHO the hybrid approach (MPI+threads) is interesting when every
>> MPI-process has lots of local data.
>
> yes. but does this happen a lot? the appealing case would be threads
> that make lots of heavy use of some large data, _but_
> without needing synchronization/locking. once you need locking
> among the threads, message passing starts to catch up.
Direct solvers (for Finite Elements, for instance) need a lot of data.
Additionally, distributing the matrix generates interfaces (between the
different submatrices) which are hard to solve. In such situations, one
tries to minimize the number of interfaces (by having one submatrix per
MPI-process) and to speed up the solving of each submatrix using threads.
Finance is another example. Financial applications need to evaluate a
large number of open positions against simulated, current, or past market
data. There are many dependencies between all the different data, which
makes it hard to decompose the data into largely independent chunks.
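Here is a minimal sketch of the hybrid pattern described in the two
examples above, assuming one MPI process per node with OpenMP threads
sharing that process's data (the array, its size, and the computation are
illustrative placeholders, not taken from any real solver):

/* Hybrid MPI+OpenMP sketch: one rank per node holds one large chunk
 * of data; threads inside the rank share it without message passing.
 * Compile e.g.: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* illustrative per-rank problem size */

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each rank owns one "submatrix" (here just a vector). */
    double *chunk = malloc(N * sizeof *chunk);
    for (int i = 0; i < N; i++)
        chunk[i] = (double)(rank + i);

    /* Threads share the rank-local data; no locking is needed because
     * each thread works on disjoint elements. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++)
        local_sum += chunk[i] * chunk[i];

    /* Message passing happens only between ranks, i.e. between nodes. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum of squares: %f (%d ranks, %d threads/rank)\n",
               global_sum, nranks, omp_get_max_threads());

    free(chunk);
    MPI_Finalize();
    return 0;
}

The design point is that the large chunk lives once per node and the
threads read it in place; only the small reduced result crosses node
boundaries via MPI.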
>
>> The latter is simpler because it only requires MPI-parallelism, but if
>> the code is memory-bound and every MPI-process has much of the same
>> data, it will be better to share this common data among all processes
>> on the same CPU and thus use threads intra-node.
>
> what kind of applications behave like that? I agree that if your MPI
> app is keeping huge amounts of (static) data replicated in each rank,
> you should rethink your design.
>
See above.