[Beowulf] size of swap partition

Eric Thibodeau kyron at neuralbs.com
Mon Jun 9 18:28:28 PDT 2008


Mikhail,

    Somewhat like Gerry said, the ballpark figure has always been an 
arbitrary 1.5*RAM. That rule is hard to justify nowadays; the right size 
depends entirely on the applications you run. On your 16 GB nodes it would 
call for 24 GB of swap that, if a job ever actually used it, would slow 
that job to a crawl. Typically, you should never let a running 
application's memory be swapped out.
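
    If you want to check whether a node is actually paging, a minimal 
sketch along these lines (plain Python over the standard Linux 
/proc/vmstat counters; nothing here is specific to your cluster) is 
enough:

    #!/usr/bin/env python
    # Print the cumulative swap-in/swap-out page counts from /proc/vmstat
    # (Linux 2.6). If these grow while jobs are running, the node is paging.
    counters = {}
    for line in open('/proc/vmstat'):
        name, value = line.split()
        counters[name] = int(value)
    print('pages swapped in:  %d' % counters['pswpin'])
    print('pages swapped out: %d' % counters['pswpout'])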

    I recommend you perform some metrics collection; it doesn't have to 
be perfect or super fine-grained. Something like Ganglia should be 
sufficient to give you an idea of how much swap you need, if you ever 
actually hit it at all... but don't!
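
    If deploying Ganglia feels like overkill, even a trivial per-node 
poller will do. As a rough sketch (again plain Python, this time over 
/proc/meminfo; the one-minute interval is an arbitrary choice):

    #!/usr/bin/env python
    # Log swap usage once a minute from /proc/meminfo (values are in kB).
    import time

    def meminfo():
        info = {}
        for line in open('/proc/meminfo'):
            key, rest = line.split(':', 1)
            info[key] = int(rest.split()[0])  # drop the 'kB' unit
        return info

    while True:
        m = meminfo()
        used = m['SwapTotal'] - m['SwapFree']
        print('%s swap used: %d of %d kB'
              % (time.strftime('%Y-%m-%d %H:%M:%S'), used, m['SwapTotal']))
        time.sleep(60)

If the 'swap used' figure stays at zero across your whole workload, a 
few GB of swap as a safety net is plenty; if it doesn't, fix the jobs 
rather than the partition size.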

Eric
PS: this is a recurring topic on the list... do a little searching of 
the archives and you'll hit it ;)

Gerry Creager wrote:
> Misha,
>
> We occasionally need to swap whole jobs out of memory on an entire 
> node.  As a result, I recommend 1.5-2.0 times memory in swap if this 
> is a consideration for you.  I expect this to draw a bit of 
> discussion, as it varies widely from site to site and with local 
> requirements.
>
> gerry
>
> Mikhail Kuzminsky wrote:
>> Long ago, a simple rule was formulated for swap partition size: make 
>> it equal to main memory size.
>>
>> Currently we all have relatively large RAM on the nodes (typically, I 
>> believe, 2 or more GB per core; we have 16 GB per dual-socket 
>> quad-core Opteron node). What is a typical swap size today?
>>
>> I understand that it depends on the applications ;-) We, in 
>> particular, have practically no jobs that run out of RAM. For 
>> single-core dual-socket Opteron nodes with 4 GB RAM per node and a 
>> "molecular modelling" workload we used a 4 GB swap partition.
>>
>> But what does modern practice recommend?
>>
>> Mikhail Kuzminsky
>> Computer Assistance to Chemical Research Center
>> Zelinsky Inst. of Organic Chemistry
>> Moscow
>



