From b40sup at prg.cpe.ku.ac.th Mon Oct 1 22:50:06 2001
From: b40sup at prg.cpe.ku.ac.th (Sugree Phatanapherom)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] SCE 1.2 Release Announcement
Message-ID: <00ae01c14a2f$0ebd7460$15226c9e@cpe.ku.ac.th>

============================================
Parallel Research Group
Department of Computer Engineering
Faculty of Engineering, Kasetsart University
Bangkok, Thailand.
============================================

    Proud to Announce the Release of SCE V1.2
        (Scalable Computing Environment)
  A truly integrated cluster software environment

What is SCE?
============

One of the obstacles to the wide adoption of clusters for mainstream
high-performance computing is the difficulty of building and managing
the system. Many efforts address this problem by building fully
automated, integrated software stacks from several well-known open
source packages. The problem is that these packages come from many
sources and were never designed to work together as a truly integrated
system.

Drawing on the experience and tools gained from building many clusters
at our site, we decided to build an integrated software tool that is
easy for the cluster user community to use. This software, called SCE
(Scalable Computing Environment), consists of a cluster builder tool,
a cluster system management tool (SCMS), scalable real-time monitoring,
web-based monitoring software (KCAP), parallel Unix commands, and a
batch scheduler. These tools run on top of our cluster middleware,
which provides cluster-wide process control and many other services.
MPICH is also included.

All tools in SCE are designed to be truly integrated, since all of them
except MPI and PVM are built by our group. SCE also provides more than
30 APIs for accessing system resource information, controlling remote
process execution, ensemble management, and more. These APIs and the
interaction among the software components allow users to extend and
enhance SCE in many ways. SCE is also designed to be very easy to use:
a complete GUI and Web interface automate most of the installation and
configuration.

What is new in SCE 1.2?
=======================

* Many bug fixes; more stable.
* Partial support for Globus: a job manager interface for Globus has
  been added, so KSIX can be used to start MPICH-G2 tasks across a grid.
* New component: SQMS Portal version 1.0. System administrators can set
  up a small web portal for cluster users.
* New enhanced KSIX: faster and more reliable.
* Major bug fix for the KSTAT module. It now works.
* KCAP is now fully integrated into the installation process.
  Integrated 3D navigation now works right out of the box (requires a
  VRML plug-in in the web browser).
* Smarter installation wizard with RPM dependency checks.
* Many bug fixes for SQMS, with better output management. Fixed to
  support Globus (still not complete).

SCE Features
============

* Fully automated installation, up to the point that users can run an
  MPI program on the system (a minimal example follows this list).
* A single configuration point for all software, not just a collection
  of separate packages.
* No kernel modification; compatible with all Beowulf application
  software.
* The distribution includes the powerful SCMS management tools for
  monitoring and managing the cluster.
* KCAP web and VRML interfaces for cluster monitoring over the
  Internet.
* SQMS, a simple batch scheduler that works right out of the box.
* Beowulf Builder, an easy and powerful tool that helps you build a
  diskless cluster.
* KSIX cluster middleware that provides a global process space at user
  level.
* MPICH, fully configured.
* Support for the popular Red Hat 7.1 distribution.
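Since MPICH comes fully configured with the distribution, a user should
be able to compile and launch a standard MPI program right after
installation. Below is a minimal sketch using only the standard MPI API
and the usual MPICH wrappers; the machine file name and process count
are illustrative assumptions, not part of SCE itself.

    /* hello_mpi.c -- minimal MPI test program (standard MPI API only) */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                 /* start the MPI runtime  */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank    */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks  */
        MPI_Get_processor_name(name, &len);     /* node this rank runs on */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Compile and launch with the MPICH wrappers (SCE's own scheduler can
also submit the job for you):

    mpicc -o hello_mpi hello_mpi.c
    mpirun -np 4 -machinefile machines hello_mpi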
How to Download
===============

You may download directly from our website:

  http://prg.cpe.ku.ac.th/research/sce/
or
  http://sourceforge.net/project/sce
or
  http://sce.sourceforge.net/

Sugree Phatanapherom
g4465027@ku.ac.th

From mark at northforknet.com Mon Oct 15 22:30:26 2001
From: mark at northforknet.com (Mark Hayden)
Date: Tue Nov 9 01:14:19 2010
Subject: [Beowulf-announce] North Fork Networks SANi.q. 1.00b7
Message-ID: <3BCB8CA1.E0D97ED4@northforknet.com>

I would like to briefly announce the availability of a new release
(1.00beta7) of the North Fork Networks SANi.q. storage management
product for Linux. We are currently looking for additional users to
test our software. Additional information, including a user manual and
a freely downloadable binary distribution, can be found on our web site
(www.northforknet.com).

Regards,
Mark Hayden

SANi.q. is the first fully distributed volume management software. It
allows the construction of high-performance, highly available storage
area networks with commodity hardware. Features include:

* Storage pooling across any number of storage servers.
* Configurable cross-box volume striping and replication (see the
  conceptual sketch after this list).
* Copy-on-write volume snapshots.
* On-the-fly data migration and hot-swapping of servers.
* Incremental volume resynchronization (only changes are resynchronized
  after a server restart).
* Support for shared volume access (e.g., Sistina's GFS).
* Fully replicated management configuration.
* A Java-based GUI.
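To give a rough idea of what cross-box striping with replication means,
the sketch below maps each logical block of a volume to a primary
storage server in round-robin fashion and places a replica on the next
server in the ring. This is a conceptual illustration of the general
technique only, not SANi.q.'s actual on-disk layout or API; the server
count and function names are illustrative assumptions.

    /* stripe.c -- conceptual round-robin striping with one replica.
     * Illustrative only; not SANi.q.'s actual layout or API.        */
    #include <stdio.h>

    #define NUM_SERVERS 4   /* storage boxes in the pool (assumption) */

    /* Map a logical block to its primary server and the block offset
     * on that server; the replica goes to the next server in the ring. */
    static void locate(long logical_block, int *primary, int *replica,
                       long *local_block)
    {
        *primary     = (int)(logical_block % NUM_SERVERS);
        *replica     = (*primary + 1) % NUM_SERVERS;
        *local_block = logical_block / NUM_SERVERS;
    }

    int main(void)
    {
        for (long lb = 0; lb < 8; lb++) {
            int primary, replica;
            long local;
            locate(lb, &primary, &replica, &local);
            printf("logical block %ld -> server %d (replica on %d), "
                   "local block %ld\n", lb, primary, replica, local);
        }
        return 0;
    }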