[Beowulf] stateless compute nodes
Roland Fehrenbacher
rf at q-leap.de
Thu May 28 10:09:50 PDT 2015
>>>>> "Joe" == Joe Landman <landman at scalableinformatics.com> writes:
Joe> On 05/27/2015 09:22 PM, Trevor Gale wrote:
>> Hello all,
>>
>> I was wondering how stateless nodes fare with very
>> memory-intensive applications. Does it simply require you to have
>> a large amount of RAM to house your file system and program data,
>> or are there other limitations?
Joe> Warewulf has been out the longest of the stateless
Joe> distributions. We had rolled our own a while before using it,
Joe> and kept adding capability to ours.
Not quite true. Qlustar (coming from its predecessor Q-Leap BeoBox) has
never booted anything but stateless, and that has been the case since
2001. I don't think Warewulf was around at that time.
Not to mention that our images are modular, with the modules packaged
as debs => absolutely clean, reproducible, and easily upgradeable images!
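For the curious, here is a rough sketch of how such a module-based image
can be assembled with standard Debian tooling (hypothetical example, not
our actual build system; paths and package names like
lustre-client-modules are just placeholders):

  set -e
  TARGET=/srv/images/compute-core                # hypothetical build directory
  MODULES="lustre-client-modules openmpi-bin"    # placeholder module packages

  # Bootstrap a minimal base system (the "core" module).
  debootstrap --variant=minbase stable "$TARGET" http://deb.debian.org/debian

  # Layer further modules on top as ordinary debs
  # (assumes working DNS/resolv.conf inside the chroot).
  chroot "$TARGET" apt-get update
  chroot "$TARGET" apt-get install -y $MODULES

  # Pack the tree into a compressed cpio archive that nodes load into RAM at boot.
  ( cd "$TARGET" && find . | cpio -o -H newc ) | gzip -9 > compute-core.img.gz

Two examples of such images on a running cluster: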
-- Lustre OST --
Included image modules: core lustre-2.6 ofed
sn-1:~# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/root                      277M  234M   44M  85% /
devtmpfs                       5.9G     0  5.9G   0% /dev
none                           5.9G  3.5M  5.9G   1% /run
none                           5.9G     0  5.9G   0% /run/shm
none                           4.0K     0  4.0K   0% /sys/fs/cgroup
192.168.52.254:/srv/ql-common  277M  234M   44M  85% /etc/qlustar/common
ost1                           719G  128K  719G   1% /zfs/ost1
ost1/var                       4.0G  2.9M  4.0G   1% /var
/scratch                       4.0G   12M  4.0G   1% /tmp
ost1/ost1                      5.3T  4.6T  676G  88% /l/ost1
-- Compute Node --
Included image modules: core lustre-client ofed torque
beo-32:~# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/root                      275M  232M   44M  85% /
devtmpfs                        32G     0   32G   0% /dev
none                            32G  3.0M   32G   1% /run
none                            32G     0   32G   0% /run/shm
none                           4.0K     0  4.0K   0% /sys/fs/cgroup
192.168.52.254:/srv/ql-common  275M  232M   44M  85% /etc/qlustar/common
system                         6.4G     0  6.4G   0% /zfs/system
system/var                     2.0G  2.2M  2.0G   1% /var
/scratch                       7.4G  991M  6.4G  14% /tmp
192.168.51.254@o2ib:/l          11T  9.3T  1.3T  89% /l
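Back to Trevor's question: with a RAM-based root like the above, what
counts against application memory is essentially just the unpacked image
itself (~275M out of 32G on this compute node). The tmpfs mounts under
/run are sized as a fraction of RAM but only consume what is actually
written to them. Easy to verify on any node with standard tools, nothing
Qlustar-specific:

  df -h / /run                   # size/usage of the RAM-held root and tmpfs mounts
  free -h                        # RAM actually left for jobs
  grep -i shmem /proc/meminfo    # tmpfs/shared pages currently resident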
--
Roland
-------
http://www.q-leap.com / http://qlustar.com
--- HPC / Storage / Cloud Linux Cluster OS ---