[Beowulf] VMWARE GSX Cluster
Uwe Paucker
uwe.paucker at web.de
Wed Jun 2 12:11:45 PDT 2004
Hi to all,
I am new to Beowulf (and to number crunching in general).
I have read plenty of news postings and READMEs, but I did not really understand everything.
What is PVM?
It seems that an application has to be compiled against the PVM libraries,
but then the application scales better?!
What is MPI?
Is it like PVM, and is it only for single-CPU machines?!
What is openMosix?
I think openMosix can run any application without recompiling,
but it cannot scale an application as well as PVM or MPI?!
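From the examples I have found so far, both PVM and MPI seem to require that the
program itself calls the library, so the parallelism has to be written into the
source code. Here is a minimal MPI sketch, just my rough understanding of the
standard API (compiled with something like mpicc):

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int rank, size;

      MPI_Init(&argc, &argv);                /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
      MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

      /* each process would work on its own part of the problem here */
      printf("process %d of %d\n", rank, size);

      MPI_Finalize();                        /* shut the runtime down */
      return 0;
  }

If that is correct, it would explain why a program that is only shipped as a
binary cannot simply be linked against PVM or MPI.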
The goal of my cluster solution is to run VMware GSX 3.
7x standard PC
(Pentium 4 2.0 GHz, 1500 MB RAM, 1x 100 Mbit NIC, 1x 1000 Mbit NIC, 1x 40 GB IDE HDD)
1x 100 Mbit Ethernet switch to the users
1x 1000 Mbit Ethernet switch to the cluster
1.
Do I need shared storage for the VMware virtual disks?
Can shared storage be created with some kind of online replication between
these 7 host IDE disks? (It may be slow, but for testing that is OK.)
2.
I will only run 1 VMware guest OS (Windows 2000 Server).
Can Beowulf/PVM/MPI/openMosix scale 1 VMware instance over 7 nodes?
The problem is that the application in the Windows 2000 guest cannot run
on SMP machines, so CPU scaling is the key.
As far as I know, openMosix can't scale 1 process over more than 1 node?!
VMware is not Beowulf/PVM/MPI ready, and the source code is not shipped,
so Beowulf/PVM/MPI would not work?!
3.
What happens if 1 node dies?
4.
If it is possible to scale VMware over these nodes, what needs to be done,
i.e. what does the setup look like?
Has somebody already realized such a scenario?
I would be happy about an answer. Thanks and kind regards.
--
----------------------------------------------------------------------
Uwe Paucker
Germany
ICQ# 341938015
FAXBOX# 01212-5-108-41-016
VOICEBOX# 01212-5-108-41-016
----------------------------------------------------------------------