Experimenting with Logos’ Internal Voice

The external sound is low frequency, so I thought that a relatively high-frequency voice, like that of a female, would contrast and stand out acoustically from the background sound, particularly when the latter increased in volume. An example of this is the ‘everlasting’ in the Symposium 2 video.


Why a single female voice?

I feel it would be in keeping to record this as a female voice, but using my own male voice, transformed by digital means, plays with an ambivalence of timbre. On a more technical level, I want the soft female voice to cut through the noise coming from the subwoofers.

Another alternative would be to give Logos manifold voices; the thing is to experiment with different combinations. It could be argued that a single voice implies an individuality that fosters a direct connection. On the other hand, many voices could signify a wider community or society, humanity even, though losing some of that intimacy along the way. At least I have the option of either.

Scripting Decisions

The scripting is another point of decision-making. For now, I have opted for short phrases or words to beckon the person closer. These are perhaps more distinguishable from the surrounding environment and convey meaning more clearly than a long monologue the visitor would have to stand and listen to. This is an area that offers a great deal of flexibility. Again, I lean towards a female voice, but it is not a deal-breaker. I also think that the higher frequencies contained in sibilance or whispering are directional, pointing directly to the sculpture as their source, and cut through the omnidirectional low-frequency sound from the subwoofers.

The origins of the sounds

All the sounds in Logos are made using my voice. Firstly, using my voice allows for a broad field of experimentation in altering sound, which can then be applied to other field and synthesised recordings. Secondly, it is consistent with my idea of transformation. (Each step is like a mutation: I work heuristically until something works. This, however, differs from an evolutionary system in that it is wholly purpose-led. In the future, it would be interesting to look at generating sounds stochastically which then fall into niches, predetermined or not, that allow them to be saved and used. The criteria for choice could be many: aesthetic, content, fitness for a purpose to which the process is blind. This reminds me of Latham’s work with computer-generated ‘life’.)

Altering sounds

The following samples were made by recording my voice, which is a baritone, and applying a number of filters in Audacity to achieve the sought-after effect. These were:

  • Change Pitch
  • Reverb
  • Echo
  • Distortion (hard limiter)
  • Normalise
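As a rough illustration of what such a chain does (not my actual Audacity settings, which vary from sample to sample), comparable pitch, echo and normalise steps could be sketched in Python with NumPy; the sample rate, semitone shift, delay and decay values here are arbitrary stand-ins:

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def change_pitch(x: np.ndarray, semitones: float) -> np.ndarray:
    """Crude pitch shift by resampling; it also changes duration."""
    factor = 2 ** (semitones / 12)
    idx = np.arange(0, len(x) - 1, factor)
    return np.interp(idx, np.arange(len(x)), x)

def echo(x: np.ndarray, delay_s: float, decay: float) -> np.ndarray:
    """Single-tap echo: mix in a delayed, attenuated copy."""
    d = int(delay_s * SR)
    out = np.concatenate([x, np.zeros(d)])
    out[d:] += decay * x
    return out

def normalise(x: np.ndarray, peak: float = 0.9) -> np.ndarray:
    """Scale so the loudest sample sits at `peak`."""
    m = np.max(np.abs(x))
    return x * (peak / m) if m > 0 else x

# a stand-in for a recorded voice: one second of a 220 Hz tone
voice = np.sin(2 * np.pi * 220 * np.arange(SR) / SR)
processed = normalise(echo(change_pitch(voice, 7), 0.3, 0.5))
```

The real effects are of course far more refined; the sketch only shows why the order matters — normalising last keeps the echoed sum within range.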

Having applied the echo, some phonemes repeat in an unnatural fashion, a little like a stutter. I delete these, making the cuts at points of zero amplitude so that no clicks are introduced.
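Cutting at zero amplitude amounts to finding the sign change nearest to where you want to cut. A minimal sketch of that idea (a hypothetical helper, not a tool I actually use — in practice I do this by eye in the waveform view):

```python
import numpy as np

def nearest_zero_crossing(x: np.ndarray, idx: int) -> int:
    """Index of the sign change closest to idx, so a cut there makes no click."""
    crossings = np.where(np.diff(np.sign(x)) != 0)[0]
    if len(crossings) == 0:
        return idx
    return int(crossings[np.argmin(np.abs(crossings - idx))])

def delete_span(x: np.ndarray, start: int, end: int) -> np.ndarray:
    """Remove the samples between the zero crossings nearest to start and end."""
    s = nearest_zero_crossing(x, start)
    e = nearest_zero_crossing(x, end)
    return np.concatenate([x[:s], x[e:]])
```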

Sample voices and words

The permutations are almost endless. The results can change meaning, context and response.

Limitations of these experiments and what to do next

Without the completed sculpture, I am unable to experiment and test how its internal chamber might affect sound. These samples therefore act only as an indication of how I imagine the sounds will be.

The words ‘come nearer, come near, don’t touch me’ are deliberately instructive. They are chosen for their relevance to still sculpture. They could change, evolve. With the use of various sensors, voices could be activated in response to different viewer actions or positions. For example, code could be applied so that one voice was activated by the viewer stepping away, with words such as ‘come back, don’t go, don’t leave’.
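The decision logic behind such triggering could be very simple. The sensor and its readings here are entirely hypothetical; the sketch only shows how two successive distance readings might be mapped to a cue:

```python
from typing import Optional

def choose_cue(prev_distance: float, distance: float,
               step: float = 0.2, touch_zone: float = 0.5) -> Optional[str]:
    """Map two successive (hypothetical) proximity readings, in metres,
    to a spoken cue. None means the sculpture stays silent."""
    delta = distance - prev_distance
    if delta <= -step:            # viewer is approaching
        return "come nearer"
    if delta >= step:             # viewer is stepping away
        return "come back, don't go"
    if distance < touch_zone:     # viewer is very close and still
        return "don't touch me"
    return None
```

In an installation this would run in a loop against the sensor feed, playing the corresponding recorded sample whenever a cue is returned.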

Thinking this through, this acoustic device could be applied to any still sculpture. Therein lies an interesting piece of research. From monolith to complex installation to screen, the conceptual device gives the work an agency that still, silent or ‘scripted narrative’ works often do not have. It addresses the viewer directly and might give the semblance of a limited intersubjectivity, even if the visitor knows it for what it is. This is an interesting exploration in itself, which also leads to the use of AI. Now that could be an interesting collaboration with someone.


I have composed a sample simulation with the low sound destined for the subwoofer and the sibilant voice for the sculpture’s interior. It is a short, programmatic piece in which the viewer is beckoned to approach, then move away a little before leaving altogether. The subwoofers can be heard reacting to her proximity while the voice from within the sculpture speaks.

Installation simulation

To hear the full effect, please listen on earphones or headphones. This last sample, if played on a laptop or tablet speaker, demonstrates the difference between the low and higher frequencies: the speakers on such devices are unable to deliver the low vibrations. It is this separation of frequency bands that makes it possible to combine the two soundtrack types.
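The band separation behind this observation can be illustrated with a simple first-order filter split — a sketch, not the actual mix, and the 120 Hz crossover is an assumed value:

```python
import numpy as np

SR = 44100  # assumed sample rate in Hz

def lowpass(x: np.ndarray, cutoff_hz: float) -> np.ndarray:
    """One-pole low-pass filter: roughly the band a subwoofer reproduces."""
    dt = 1.0 / SR
    alpha = dt / (1.0 / (2 * np.pi * cutoff_hz) + dt)
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

def split_bands(x: np.ndarray, cutoff_hz: float = 120.0):
    """Return (subwoofer band, sculpture-voice band)."""
    low = lowpass(x, cutoff_hz)
    return low, x - low
```

A laptop speaker behaves roughly like keeping only the second band, which is why the low rumble disappears on such devices while the sibilant voice survives.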

Why use Audacity?

There are many DAWs and programmes for altering sound recordings and generating them from scratch: Cakewalk, Audition, Reaper, Ableton, etc. Each one offers a particular advantage. Audacity is the first programme I go to for two reasons: familiarity and directness. It was one of the first programmes I used; it is free, open-source and very flexible. What you can do with most other sound software you can do with Audacity. However, its disadvantages are various. It works with a destructive process, that is to say, you cannot go back to previous states once a file is saved, so you have to save versions as you go along. This makes for an untidy and complex file system. Another disadvantage is that the plugins available are not always very sophisticated, which means you have to work harder to achieve a given aim. This is not altogether bad in itself: it makes me think about what I am doing. However, the fineness of parameters is not always there. Although other software offers things that Audacity cannot, it is my first port of call whenever I work with a sound recording.


Although I have explored working binaurally and used it at one point in the symposium video, there is not really a space for it in this installation: it could prove distracting. However, I look forward to employing it in purely acoustic work.