newbie requests advice.

Zao Yang zyang at
Mon Jun 18 13:00:13 PDT 2001

we built a linux farm to do verilog simulations at the company I'm working
for. as far as I know, all HDL simulators are single threaded, so I don't
see how beowulf would help you. we are using a tool called LSF, from
Platform Computing I think, which monitors all the machines in our server
farm and automatically dispatches simulation jobs to one of them when the
job queue, server load, memory and other constraints are met. for nightly
regressions, we just use cron to kick off a script that submits all the
jobs to LSF queues.
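a minimal sketch of that kind of cron-driven submit script, in case it
helps. the "regress" queue name, the log paths, the test names and the
run_sim wrapper are all made up for illustration; only bsub itself and its
-q (queue) and -o (output log) flags are standard LSF. RUN defaults to
"echo" so the sketch can be dry-run on a box without LSF installed:

```shell
#!/bin/sh
# Sketch of a nightly-regression submitter, kicked off from cron.
# RUN defaults to "echo" so the script can be dry-run without LSF;
# set RUN="" in the cron environment to actually submit.
RUN=${RUN:-echo}

submit_nightly() {
    # each test in the argument list becomes one LSF batch job
    for t in "$@"; do
        $RUN bsub -q regress -o "logs/$t.log" "run_sim $t"
    done
}

# hypothetical test names, just to show the loop
submit_nightly alu_smoke cache_rand
```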

also, LSF allows you to configure certain servers to accept jobs only
during specific time windows. this way, we can use our desktop machines
for normal development work during the day, and at night they are placed
back into the LSF server pool to run the nightly regression.
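if I remember the config right, those time windows live in the lsb.hosts
file. a rough sketch (the host names, job slots and hours are made up, and
the exact column layout may differ in your LSF version, so check the docs):

```
Begin Host
HOST_NAME    MXJ   DISPATCH_WINDOW    # Keywords
desktop01    2     (19:00-07:30)      # accept jobs only overnight
desktop02    2     (19:00-07:30)
End Host
```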

our experience shows that the 3GB memory limitation is only a problem if
you are running a gate level simulation of a very large asic. but in that
case, you would have the same problem with any 32bit processor.

// zao

On Sat, 16 Jun 2001, Adam Shand wrote:

> Hi.  I've read through the FAQ, the HOWTO and browsed the list archives
> but there's a lot of information there and I'm having a hard time turning
> it into concrete answers :-)
> I've just started work at a fabless semiconductor company, up until now
> we've run all of our simulations/regression tests on single Solaris boxes,
> or directly on the developers workstation.  Recently we've done some tests
> and it appears that high end Intel cpu's are not only much cheaper but
> that they significantly outperform high end Sparc cpu's.  So I've been
> asked to build and evaluate a Linux cluster of some sort to try and take
> advantage of this.  Unfortunately the applications that we have to run are
> all commercial so we don't have the ability to tune the source of them.
> So, questions ...
> * Some of our jobs can use upwards of 4Gb of RAM, from my understanding
>   3Gb is the maximum that a single process can address with 2.4 kernel.
>   Is this limitation something that Network Virtual Memory can help with?
>   If so, how much of a performance hit does it impose?  I assume it's
>   better than swapping to disk?
> * Without the ability to optimize the code of the apps we run is it even
>   worth pursuing a beowulf cluster?
> * And off-topic, if it's not can you suggest any other open source
>   solutions that might help?  Currently all of our designers have dual
>   1.7Ghz boxes as their desktops.  Perhaps some form of scheduling
>   software to harness all this to run jobs over night would be useful?
> * Any other suggestions of what to read, buy, look into?
> Thanks for your time,
> Adam.

More information about the Beowulf mailing list