Trans:space

Archive for March, 2014

Some Thoughts Part 2 (Some More Thoughts)

Saturday, March 29th, 2014

How do we perceive our environment through sound & how can we interpret that for an audience?


What is an audience?

As I had alluded to in my previous post, I was getting rather sidetracked with the concept of an audience being a group of people sitting in a large darkened room on one side of a proscenium arch while a group of performers do their thing on the other. This need not be the case: an audience can be a one-on-one experience, as in an audience with the Pope. So, in terms of light, do I need something to light, as in a performance? Do we need performers? Do we need a show with a beginning and an end? Do we even need someone selling ice creams in the interval???


I had also tied myself up with the idea of representing the whole of the audio-scape in one linear, real-time experience, but this is not what the vOICe does. The vOICe takes a snapshot of just a small part of our hemisphere of perception, turns it into sound over the course of a second or two, and then repeats the process.
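For anyone curious about the mechanics, here is a minimal sketch (in Python) of the kind of image-to-sound scan the vOICe performs: the snapshot is swept from left to right over roughly a second, with vertical position mapped to pitch and pixel brightness to loudness. The sample rate, scan time and frequency range below are illustrative choices of mine, not the device's actual settings.

```python
# Minimal sketch of a vOICe-style image-to-sound scan (illustrative values,
# not the actual device settings): sweep the image left to right over ~1 s,
# mapping row height to pitch and pixel brightness to loudness.
import numpy as np

SAMPLE_RATE = 22050        # Hz (assumed)
SCAN_TIME = 1.0            # seconds per snapshot (assumed)
F_LOW, F_HIGH = 500, 5000  # pitch range in Hz (assumed)

def image_to_sound(image):
    """image: 2D array of brightness values 0..1, row 0 at the top.
    Returns a mono array of samples covering one scan."""
    rows, cols = image.shape
    samples_per_col = int(SAMPLE_RATE * SCAN_TIME / cols)
    # Rows nearer the top of the image get higher pitches.
    freqs = np.linspace(F_HIGH, F_LOW, rows)
    out = []
    for c in range(cols):
        t = np.arange(samples_per_col) / SAMPLE_RATE
        # Each bright pixel in this column contributes a sine at its row's pitch.
        column = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                     for r in range(rows))
        out.append(column / rows)  # keep the amplitude in a sane range
    return np.concatenate(out)

# Example: a bright diagonal line produces a falling pitch sweep.
audio = image_to_sound(np.eye(16))
```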


As I have mentioned before, the old red, green and blue disco lights that flash in time to the bass, mid and treble of an audio signal present us, in their way, with a similar experience to the vOICe. However, these units tell us little about our environment. Why? The problem is that if we plug an omnidirectional mic into one of these units we get the sounds from the whole of the environment represented, unlike the vOICe, which presents us with just a small part of our surroundings to concentrate on. So we must use a uni-directional microphone in order to pick up sound from only the direction that we are interested in.

If we now filter that signal into three separate signals, bass, mid and treble, we can feed those signals to a single RGB LED: red for the bass signal, green for the mid-range signal and blue for the high-frequency signal. What we now have is a light source that will change colour and intensity depending on the frequency and amplitude of the signal it receives.

Although this is representative, it does not immerse the audience in the same way that the vOICe does. In fact it cannot: when using the vOICe one is usually blindfolded or blind, and as we use our ears to receive the audio from the vOICe we are deprived of yet another sense. Looking at a colour-changing LED with the whole of the audio-scape we are trying to experience through that LED in the background can only distract us. The LED should be mounted on the inside of a mask, such as a painted-out face-protecting mask, that will obscure the view.

While this gives us the experience from a specific direction, that is not how we see. When we look at a scene our eye will scan that scene; we may then focus in on a particular part that takes our interest, but all the while we are still seeing the rest of the scene with our peripheral vision. So we should add more audio inputs and more LEDs. I would suggest creating a horizontal array of about twelve mics arranged at 30 degrees to each other, mounted in or on some nice piece of headwear. The LEDs should be mounted parallel to the mask so that the light shines out onto the mask rather than pointing into the eyes of the viewer. The LED attached to the fore-most-pointing mic should be positioned directly in front of the eyes of the viewer, with the rest arranged in an array that matches the layout of the mics and the rear-most LED pointing down to the viewer's chin.
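To make the bass/mid/treble-to-RGB idea concrete, here is a minimal sketch of how one block of samples from the directional mic could be split into three bands and mapped to an LED colour. The crossover frequencies (250 Hz and 2 kHz), the filter order and the gain are my own illustrative assumptions, not values from the post.

```python
# Minimal sketch: split a block of mono mic samples into bass / mid / treble
# bands and map the band energies to an RGB value for a single LED.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100  # Hz (assumed)

def band_filter(low, high):
    """Return a low-pass, band-pass or high-pass filter as second-order sections."""
    if low is None:
        return butter(4, high, btype="lowpass", fs=SAMPLE_RATE, output="sos")
    if high is None:
        return butter(4, low, btype="highpass", fs=SAMPLE_RATE, output="sos")
    return butter(4, [low, high], btype="bandpass", fs=SAMPLE_RATE, output="sos")

# Bass -> red, mid -> green, treble -> blue (crossovers are illustrative).
FILTERS = {
    "red": band_filter(None, 250),    # bass: below ~250 Hz
    "green": band_filter(250, 2000),  # mid: ~250 Hz to 2 kHz
    "blue": band_filter(2000, None),  # treble: above ~2 kHz
}

def signal_to_rgb(block):
    """Convert one block of mic samples (floats in -1..1) to an RGB triple (0-255)."""
    rgb = []
    for sos in FILTERS.values():
        band = sosfilt(sos, block)
        level = np.sqrt(np.mean(band ** 2))            # RMS energy of the band
        rgb.append(int(np.clip(level * 4 * 255, 0, 255)))  # crude gain, then clamp
    return tuple(rgb)

# Example: a 100 Hz tone lights mostly the red (bass) channel.
t = np.arange(SAMPLE_RATE // 10) / SAMPLE_RATE
print(signal_to_rgb(0.5 * np.sin(2 * np.pi * 100 * t)))
```

In practice the same `signal_to_rgb` mapping would simply be repeated per microphone, one RGB LED per directional channel, for the twelve-mic headwear arrangement described above.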


Gesture in relation to sound

Friday, March 21st, 2014

Circle, diagonal, square

Circle, diagonal, square

Circle, diagonal, square

Circle

Diagonal

All responding to same sound

Each responding to same sound

All responding to same sound 2

I asked people to listen to three sounds made by the vOICe, to imagine a shape for each sound, and then to make a gesture from the shape they imagined. I filmed people making the gestures.
The vOICe sounds were of the shapes made by the technology from a circle, a diagonal line and a square.
I found that people who didn't know the technology tended to make gestures that were linear, moving from one side to the other or from top to bottom, for both the circle sound and the diagonal sound. They tended to make a more dotted or punctuated yet rounded shape for the square sound. Yet none of them made a shape matching the sound.

When we recorded ourselves (those of us working on the project), we had been working with the technology all day and had a greater understanding of it. The gestural shapes we made tended to be very similar for each shape, apart from the square.

This is something I found fascinating and would like to explore further, ideally creating a choreography for film based on the resulting gestures in correlation with the sounds.

Talking with Michael, an idea came up to work with visually impaired people on the choreography. This would present many challenges, yet it could be stunningly powerful. It may also give visually impaired people a chance to explore movement as an expressive tool beyond keeping safe (of course safety would need to be paramount as a starting point). I would be very interested to see how movement arises from an internal perspective, unrelated to ideas of external perception by others, input from the media and so on.

To include such a choreographic piece of work or film in an interactive installation would be very exciting.

We played with creating patterns while wearing the vOICe technology. The sounds this produced as we looked at the pattern in different ways while drawing meant that an instant, simple composition was created through drawing. This would be a great thing for the public to be involved with in an interactive installation. It was very exciting to do and gave a great understanding of the technology and how it works.

Revealed: the images behind the sounds of the outside world.

Tuesday, March 4th, 2014

In a previous blog I presented two sound files created from images of the outside world. Jane carried out a live version of the blog experiment and presented the recording in another blog, too. Each was transformed into sound using The vOICe, a sensory substitution device invented by Dr Peter Meijer to turn images into sound so that we can see with our ears. I wanted to find out what sort of immediate response a person has to the sounds, first with very little information and then with a hint of the underlying images, to explore any correspondence one might have between what is heard and what it represents. The experiment is not over! Previously I asked which sound was preferred. Now I would like sighted readers of the blog to view each series of images, shown in the videos here, and in the comments please state which one you prefer visually: OA or ON?

The OA video and sound stand for "Outside Artificial". These three images were taken as I walked up a set of stairs to the library at the University of Bath. The steps are painted white at the edges to aid sighted pedestrians. The final image has the first glimpse of the library appearing at the top of the image.

The ON video stands for "Outside Natural". These images are all very similar and were taken as I walked by a field on one of the hills surrounding Bath city centre. In each there is a bright section of sky at the very top of the image; below that are dark trees, and mostly grass fills the rest of the image.

Look back at your comments on the earlier blog, and see how your first impressions compare with what you know now: http://www.janepitt.co.uk/trans-space/2013/11/14/glimpsing-sound/

Don’t forget to comment below!

vOICeOA = Outside Artificial

vOICeON = Outside Natural