# [Beowulf] Audio simulation as an example of demanding computational tasks

Robin Whittle rw at firstpr.com.au
Mon Nov 5 19:24:36 PST 2012

In the "Mark Hahn's Beowulf/Cluster/HPC mini-FAQ . . ." thread I gave
some examples of why acoustic modelling (simulating vibration in air
and/or the bodies and strings of musical instruments) can be
computationally extremely demanding:

http://www.beowulf.org/pipermail/beowulf/2012-November/030379.html

John Hearns replied:

> Robin, thankyou for a really, really interesting explanation.
>
> So at last - we find what Exascale computers will be used for.
> Modelling concert halls!
>
> But fiftieth order reflections? Are sounds really that strong that we
> could hear something at that level?  Maybe, for all I know.

50th or 100th order reflections was just a guess.  Here is a more
careful estimate.

Let's say we have a nice quiet concert hall and want to model the
reverberation of a single crack on a snare drum.  (Scottish pipe bands
have large snare drums which emit a painfully loud crack even when hit
with only a moderate stroke.)   If the hall's dimensions are about 34.3
x 34.3 x 34.3 metres, then it takes 100msec for the reflection from one
wall to reach the opposite wall.

With good hearing, in an empty hall, we can probably detect the
reverberation of the snare drum 5 seconds later, so we can hear the sum
of these roughly 50th order reflections.

With 6 walls, I think each order multiplies the number of paths by 5
(the sound can reflect to any wall except the one it just left), so the
total number of paths is about 5^50 ~= 10^35.

If we limited the number of paths to 10^10, for instance, this would
mean we could only simulate about 14 reflection orders (5^14 ~= 10^10),
or roughly 1.4 seconds of reverb, which would not match the real
experience.  There are no doubt various approximations which could
greatly reduce the workload with little effect on the audible result.
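
The arithmetic above can be sketched in a few lines of Python.  The
constants are the ones assumed in this estimate (6 walls, ~5 new paths
per order, ~100 ms per order in a 34.3 m hall); this is back-of-envelope
counting, not a simulator:

```python
import math

BRANCH = 5                # with 6 walls, each order multiplies paths by ~5
SECONDS_PER_ORDER = 0.1   # 34.3 m / 343 m/s per wall-to-wall traversal

def paths(order):
    """Approximate number of reflection paths at a given order."""
    return BRANCH ** order

def reverb_seconds(path_budget):
    """Longest reverb tail coverable with a given number of paths."""
    order = math.log(path_budget, BRANCH)
    return order * SECONDS_PER_ORDER

print(f"50th order: ~10^{math.log10(paths(50)):.0f} paths")
print(f"10^10 paths cover ~{reverb_seconds(1e10):.1f} s of reverb")
```

Running this reproduces the figures above: about 10^35 paths for a
5-second tail, and only about 1.4 seconds within a 10^10-path budget.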

Slavishly modelling the path of reflections is a difficult approach to
acoustic modelling, but because each such reflection would have a
direction, it would be possible to do realistic binaural simulation, for
a single head, with a single sound source.  Real rooms with interesting
acoustics would need to be modelled as dozens or hundreds of reflecting
surfaces, and the whole problem quickly goes beyond any reasonable
practical bounds unless short-cuts can be applied.

The simple way to model reverb is to measure, or create, an impulse
response.  With a binaural head in a real room, or with a computer
simulation of the same, it would be possible to generate an impulse
response for each ear, for a sound source in a given location.  Then it
would be a simple matter to convolve the audio signal we want to model
from that location with these impulse responses to generate a good
binaural simulation.  However, getting such an impulse response in a
real room would be tricky, because repeated impulse signals or special
techniques would be required to minimise the impact of microphone and
general background noise.
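
One such technique is simply averaging repeated impulse measurements:
the room's response is the same each time, while uncorrelated microphone
and background noise averages down by roughly sqrt(N).  A toy sketch,
with a made-up 4-tap "room response" standing in for a real measurement:

```python
import math
import random

random.seed(1)

TRUE_IR = [1.0, 0.5, 0.25, 0.125]   # toy impulse response (illustrative)
NOISE_STD = 0.2                     # background + microphone noise level

def measure_once():
    # one noisy measurement of the impulse response
    return [h + random.gauss(0, NOISE_STD) for h in TRUE_IR]

def measure_averaged(n):
    # average n repeated measurements, tap by tap
    runs = [measure_once() for _ in range(n)]
    return [sum(col) / n for col in zip(*runs)]

def rms_error(est):
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(est, TRUE_IR))
                     / len(TRUE_IR))

single = rms_error(measure_once())
averaged = rms_error(measure_averaged(100))
print(single, averaged)   # averaging 100 runs should cut the error ~10x
```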

Having an impulse response is OK for a single fixed sound source and a
single head location.  To model the reverberated sound of an orchestra,
the same procedure could be used with a separate impulse response for
each instrument.
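
The convolution step itself is the easy part.  A minimal sketch with
NumPy, using tiny hand-made impulse responses (a direct tap plus one
delayed, attenuated reflection per ear) as placeholders for measured or
simulated ones:

```python
import numpy as np

fs = 48000                               # assumed sample rate
dry = np.zeros(fs // 100)
dry[0] = 1.0                             # 10 ms stand-in for the dry signal

# toy per-ear impulse responses: direct sound plus one reflection each
ir_left = np.zeros(200)
ir_left[0], ir_left[150] = 1.0, 0.3
ir_right = np.zeros(200)
ir_right[0], ir_right[180] = 0.9, 0.35

left = np.convolve(dry, ir_left)         # what the left ear hears
right = np.convolve(dry, ir_right)       # what the right ear hears
binaural = np.stack([left, right])       # 2 x N, ready to write to a file
```

For real-length impulse responses (several seconds at 48 kHz) one would
use FFT-based convolution rather than direct convolution, but the idea
is the same.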

However, this is no use if you want to simulate moving sound sources or
a moving or rotating head.  Nor would it help if there was a need to
simulate non-linear systems, such as wall panels which rattle or distort
in the presence of high signal levels.

Even without reverb, acoustic simulation can be challenging.  When we
stand on a beach we hear the sound of the waves breaking into water and
onto sand, with the sand grains themselves making sounds as they move
along with the water, banging against other sand grains.  Each little
droplet hitting water or sand makes its own sound, which goes pretty
much directly to your two ears, by different paths, with different
frequency responses and phase behaviour.  There is also a longer path as
the same sound is reflected from the water's surface, unless of course
the water between your ears and the breaking wave is covered with foam,
which would attenuate higher frequencies more than lower frequencies,
depending on the angle of incidence.  Then there are wind movements
causing further changes in propagation.

The behaviour of the body of a double-bass, guitar or similar probably
calls for finite element analysis, since the stresses change with the
sum of the acoustic signals at each instant, and this changes the
elasticity of the wood, thereby changing its response to these signals.

I think that with even a minimal grasp of physics, we can easily find
commonplace pleasant and unpleasant audible experiences in which the
physical processes would be prohibitively difficult to simulate on
pretty much any computational system we can imagine.

One day I will attempt to do some of the above.  As far as I know, the
best approach will be to initially use a single server with as many
CPU-cores as I can get, with as much RAM as possible.  I would split the
workload up with threads or MPI.  I guess that threads would be more
efficient and easier to debug, but MPI means the software could work on
a cluster of such machines - and so be applicable to much larger tasks.
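
Since each reflection path can be traced independently, the split is
embarrassingly parallel either way.  A structural sketch using a Python
thread pool (with MPI, each rank would instead take its own slice of
path indices); path_energy is a hypothetical stand-in for tracing one
path, not anything from a real simulator:

```python
from concurrent.futures import ThreadPoolExecutor

def path_energy(path_id):
    # placeholder: a real version would trace one reflection path and
    # return its delayed, attenuated contribution at the listener
    return 1.0 / (1 + path_id)

def simulate(n_paths, n_workers=8):
    # fan path indices out over workers and sum the partial results
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(path_energy, range(n_paths)))

print(simulate(10_000))
```

(Note that in CPython specifically, the GIL means real CPU-bound work
would want processes or MPI ranks rather than threads; the partitioning
pattern is the same.)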

- Robin
