Trans:space

Some Thoughts Part 2 (Some More Thoughts)

March 29th, 2014 by Mike Snarr

How do we perceive our environment through sound & how can we interpret that for an audience?

 

What is an audience?

As I alluded to in my previous post, I was getting rather sidetracked by the concept of an audience as a group of people sitting in a large darkened room on one side of a proscenium arch while a group of performers do their thing on the other. This need not be the case: an audience can be a one-on-one experience, as in an audience with the Pope. So in terms of light, do I need something to light, as in a performance? Do we need performers? Do we need a show with a beginning and an end? Do we even need someone selling ice creams in the interval?

 

I had also tied myself up with the idea of representing the whole of the audio-scape in one linear real-time experience, but this is not what the vOICe does. The vOICe takes a snapshot of just a small part of our hemisphere of perception, turns it into sound over the course of a second or two, and then repeats the process.
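The snapshot-and-scan scheme described above can be sketched in code. This is a minimal sketch of a vOICe-style left-to-right scan, with placeholder frequencies and scan time of my own choosing, not the real vOICe parameters:

```python
import numpy as np

def image_to_sound(image, duration=1.0, rate=8000, f_lo=200.0, f_hi=4000.0):
    """Sketch of a vOICe-style left-to-right scan of a greyscale image.

    `image` is a 2-D array with values in 0..1, row 0 at the top.
    Each column becomes a slice of time; each row a sine whose pitch
    rises towards the top of the image and whose loudness follows
    pixel brightness.
    """
    rows, cols = image.shape
    # Exponentially spaced pitches, highest frequency at the top row.
    freqs = f_lo * (f_hi / f_lo) ** np.linspace(1.0, 0.0, rows)
    samples_per_col = int(duration * rate / cols)
    t = np.arange(samples_per_col) / rate
    out = []
    for c in range(cols):
        col = image[:, c]
        # Sum one sinusoid per row, weighted by pixel brightness.
        slice_ = sum(b * np.sin(2 * np.pi * f * t) for b, f in zip(col, freqs))
        out.append(slice_)
    return np.concatenate(out)

# A bright diagonal line produces a one-second pitch sweep.
img = np.eye(8)
audio = image_to_sound(img)
```

A brighter pixel simply means a louder sine at that row's pitch; the brain does the rest.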

 

As I have mentioned before, the old red, green and blue disco lights that flash in time to the bass, mid and treble of an audio signal present us, in their way, with an experience similar to the vOICe. However, these units tell us little about our environment. Why? If we plug an omnidirectional mic into one of these units we get the sounds of the whole environment represented, unlike the vOICe, which presents us with just a small part of our surroundings to concentrate on. So we must use a unidirectional microphone in order to pick up sound from only the direction we are interested in.

If we now filter that signal into three separate bands, bass, mid and treble, we can feed those signals to a single RGB LED: red for the bass signal, green for the mid-range signal and blue for the high frequencies. What we now have is a light source that changes colour and intensity depending on the frequency and amplitude of the signal it receives. Although this is representative, it does not immerse the audience in the way that the vOICe does. In fact it cannot: when using the vOICe one is usually blindfolded or blind. Also, as we use our ears to receive the audio from the vOICe, we are deprived of yet another sense. Looking at a colour-changing LED with the whole of the audio-scape we are trying to experience through that LED in the background can only distract us. The LED should be mounted on the inside of a mask, such as a painted-out face-protecting mask, that obscures the view.

While this gives us the experience from a specific direction, that is not how we see. When we look at a scene our eye scans it; we may then focus in on a particular part that takes our interest, but all the while we are still seeing the rest of the scene with our peripheral vision. So we should add more audio inputs and more LEDs. I would suggest creating a horizontal array of about twelve mics arranged at 30 degrees to each other, mounted in or on some nice piece of headwear. The LEDs should be mounted parallel to the mask so that the light shines out onto the mask rather than pointing into the eyes of the viewer. The LED attached to the forward-pointing mic should be positioned directly in front of the eyes of the viewer, with the rest arranged in an array matching the layout of the mics, the rearmost LED pointing down to the viewer's chin.
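The bass/mid/treble-to-RGB idea can be roughed out as follows. The band edges (250 Hz and 2 kHz) and the brightness scaling here are placeholder values of my own, not taken from any actual disco-light unit:

```python
import numpy as np

def signal_to_rgb(samples, rate, bass_cut=250.0, mid_cut=2000.0):
    """Map one short window of audio to an (R, G, B) triple in 0..255.

    Red follows bass energy, green mid-range, blue treble; overall
    brightness follows the amplitude of the whole window.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

    # Sum spectral energy in each of the three bands.
    bass = spectrum[freqs < bass_cut].sum()
    mid = spectrum[(freqs >= bass_cut) & (freqs < mid_cut)].sum()
    treble = spectrum[freqs >= mid_cut].sum()

    total = bass + mid + treble
    if total == 0:
        return (0, 0, 0)
    # Crude loudness estimate: clipped, scaled RMS of the window.
    level = min(1.0, np.sqrt(np.mean(samples ** 2)) * 4)
    return tuple(int(255 * level * band / total) for band in (bass, mid, treble))

# A pure 100 Hz tone should light only the red channel.
rate = 8000
t = np.arange(rate) / rate
r, g, b = signal_to_rgb(np.sin(2 * np.pi * 100 * t), rate)
```

In hardware the same split would be done with analogue filters driving the three LED channels; the sketch just shows the mapping.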

 

Gesture in relation to sound

March 21st, 2014 by Sian Thomas

Circle, diagonal, square

Circle, diagonal, square

Circle, diagonal, square

Circle

Diagonal

All responding to same sound

Each responding to same sound

All responding to same sound 2

I asked people to listen to three sounds made by the vOICe, to imagine a shape for each sound, and then to make a gesture from the shape they imagined. I filmed people making the gestures.
The vOICe sounds were of the shapes the technology made from a circle, a diagonal line and a square.
I found that people who didn’t know the technology tended to make gestures that were linear, moving from one side to the other or from top to bottom, for both the circle sound and the diagonal sound. They tended to make a more dotted or punctuated yet rounded shape for the square sound. Yet none of them made a shape matching the sound.

When we recorded ourselves, those of us working on the project, we had been working with the technology all day and had a greater understanding of it. The gestural shapes that we made tended to be very similar for each shape, apart from the square.

This is something I found fascinating and would like to explore further, ideally creating a choreography for film based on the resulting gestures in correlation with the sounds.

Talking with Michael, an idea came up to work with visually impaired people on the choreography. This would present many challenges yet could be stunningly powerful. It may also give visually impaired people a chance to explore movement as an expressive tool, beyond keeping safe (of course, safety would need to be paramount from the start). I would be very interested to see how movement arises from an internal perspective, not relating to ideas of external perception by others, input from the media, etc.

To include such a choreographic piece of work or film in an interactive installation would be very exciting.

We played with creating patterns whilst wearing the vOICe technology. The sounds this produced whilst looking at the pattern in different ways while drawing meant that an instant, simple composition was created through drawing. This would be a great thing for the public to be involved with in an interactive installation. It was very exciting to do and gave a great understanding of the technology and how it works.

Revealed: the images behind the sounds of the outside world.

March 4th, 2014 by Michael Proulx

In a previous blog I presented two sound files created from images of the outside world. Jane carried out a live version of the blog experiment and presented the recording in another blog, too. Each was transformed into sound using The vOICe — a sensory substitution device invented by Dr Peter Meijer to turn images into sound so that we can see with our ears. I wanted to find out what sort of immediate response a person has to the sounds, first with very little information and then with a hint of the underlying images, to explore any correspondence one might have between what is heard and what it represents. The experiment is not over! Previously I asked which sound was preferred. Now I would like sighted readers of the blog to view each series of images, shown in the videos here, and in the comments state which one you prefer visually: OA or ON?

The OA video and sound stands for “Outside Artificial”. These three images were taken as I walked up a set of stairs to the library at the University of Bath. The steps are painted white at the edges to aid sighted pedestrians. The final image has the first glimpse of the library appearing at the top of the image.

The ON video stands for “Outside Natural”. These images are all very similar and were taken as I walked by a field on a hill surrounding the Bath city centre. In each there is a bright section of sky at the very top of each image. Below that are dark trees and mostly grass that fills the rest of the image below the sky.

Look back at your comments on the earlier blog, and see how your first impressions compare with what you know now: http://www.janepitt.co.uk/trans-space/2013/11/14/glimpsing-sound/

Don’t forget to comment below!

vOICeOA = Outside Artificial

vOICeON = Outside Natural

Some Thoughts (Part 1)

February 26th, 2014 by Mike Snarr

How do we perceive our environment through sound & how can we interpret that for an audience?

The more I think about this the more difficult it becomes. Despite many years of designing and operating lighting systems to interpret music for bands, somehow the prospect of representing the auditory presence of a place seems overwhelming, maybe because it is not a specific place but a concept of process. The vOICe has a very simple algorithm that it applies equally to all that it sees. Initially I was intrigued by this idea and wanted to come up with a way of turning the auditory image of a place into some sort of projected representation of that place, but the more I thought about it the less satisfactory this approach appeared in terms of conveying the essence and feeling of a place. I found myself wanting to stand in a darkened room and throw buckets of coloured light around to represent the specific place.

In order to fully understand the problem we need to look at the qualities of light. The two main qualities are Intensity and colour.

INTENSITY: how bright is our light? Traditionally, louder sounds are equated with brighter lighting, as with the vOICe but in reverse, since it transforms areas of higher brightness into louder sounds.

COLOUR: Light, like sound, is made up of different frequencies. Low-frequency light (with a wavelength of around 630 nanometres), which appears to us as red, could represent low-frequency sounds, and high-frequency light (around 430 nanometres), which appears as violet, could represent high-pitched sounds.
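As a toy illustration of this pitch-to-colour mapping (the 100 Hz–10 kHz audio range and the logarithmic scale are assumptions of mine, not part of the proposal above):

```python
import math

def pitch_to_wavelength(freq_hz, lo=100.0, hi=10000.0):
    """Map an audible frequency to a light wavelength in nanometres.

    Low pitches map to long (red, ~630 nm) wavelengths and high
    pitches to short (violet, ~430 nm) ones, on a log scale, so
    each octave of sound covers an equal slice of the spectrum.
    """
    f = min(max(freq_hz, lo), hi)
    frac = math.log(f / lo) / math.log(hi / lo)  # 0 at lo, 1 at hi
    return 630.0 - frac * (630.0 - 430.0)

low = pitch_to_wavelength(100)     # red end of the spectrum
high = pitch_to_wavelength(10000)  # violet end
```

Turning the wavelength into an actual RGB value would need a further mapping, since LEDs cannot reproduce spectral colours exactly.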

Thus with a combination of intensity and colour we could represent the sound-scape of a place but would it really tell us anything about the place?

There are many other qualities to light which affect our perception of place such as:

DIRECTION: lighting from above or below can have an immediate effect on our perception of a scene. Light directly from above we associate with ethereal, heavenly bodies, and light from below with hellish, demonic underworlds. Is this a learned cultural reference, or does it have an even deeper perceptual meaning? Either way, the association goes back thousands of years. Horizontal lighting helps our perception of objects in space, hence its use in ballet and dance.

SIZE: we rarely think of light as having size, but it is very relevant to what we see. A small light source, such as the sun in the middle of a cloudless summer's day, will produce sharply defined shadows which emphasise texture in surfaces. An overcast sky, on the other hand, will produce a soft diffuse light that envelops shadows and hides texture. But this is not what we think of when we imagine these scenarios. Asked to think of a cloudless summer's day we will usually conjure images of soft, warm, flowing golden light, while an overcast sky will be represented by cold, sharp, steely images. How do these representations come about? Well, for hundreds of years artists and photographers have depicted them this way; from Constable to the Impressionists, summer days were represented as soft, warm and lazy. Ask a modern photographer to take a portrait on a summer's day and the first thing they will grab (after the camera) is a warming filter. They will then probably stick a large light diffuser above the subject to soften the shadows, probably adding a golden reflector to soften and warm the shadows even more, before adding a light behind the subject to provide a hazy highlight to the hair and shoulders. Job done.

SHAPE: Does our light act as one large round beam or is it many small shards, such as sunlight through woodland leaves?

MOVEMENT: is our light static or does it shimmer? Does it flicker and jump like a fire, or sweep like a searchlight?

Not surprisingly, I have found myself thinking in rather theatrical terms. I also find myself thinking about the visual representation of the environment rather than the auditory environment, and with the other project members being a musician and a dancer I find myself thinking in terms of performance: a structured performance within a space, with a lighting design and cues and rehearsals and lots of lighting equipment. But this is not what the vOICe does. It does not care for size or movement or direction or even colour; it simply takes one static monochrome image of one small part of our surroundings, converts it to noise, then does it again. Somehow our brains can take that noise and from it form an impression of our surroundings.

Duet for vOICE and Tabla

February 23rd, 2014 by Sam Randhawa

Duet for vOICE and Tabla with electronic drone.

“How do we perceive our environment through sound & how can we interpret that for an audience?”

Intrigued by Jane’s short film ‘Cuxton Subway’, I began creating a piece of music that I thought would best describe elements of music and sound perception I had never experienced prior to Trans:space.
The first of these, as demonstrated in ‘Cuxton Subway’, was the effect of sound on the listener as they move through a space. The reverberations, or resonating frequencies, varied drastically as the space was entered, the source of the sound approached and passed, and the space exited.
The second new element was the experience of the vOICe itself as an audio description of an environment. The rhythms (or meter) and melodic nuances created by the vOICe are fascinating. Although my experience of it was not enough for me to rely on it as a visual aid, it undoubtedly described the visual aspects and formations of the environments or spaces we visited.

So ‘Duet for vOICe and Tabla’ was created. New tabla recordings were made and layered with short tabla samples from each of the five locations. Using reverb, compression and EQ I simulated the acoustic properties of each space as best I could, to interpret each space for the listener. New recordings were made simply because the intention was to create one piece of music that moved through each of the spaces, rather like Jane walking through the tunnel. I therefore wanted to work at a particular BPM based on the pulse rate of certain vOICe recordings. Extracts from the vOICe recordings were then layered with the tabla recordings, each vOICe recording matched with the appropriate tabla recording according to location. Please note that a subtle electronic drone has also been added to glue the two main elements together.

SUBWAY
CHAPEL
HALL
STUDIO
PINE WOODS
HALL
CHAPEL
PINE WOODS
STUDIO
SUBWAY

For descriptions of the environments listed please see the post ‘Medway Tabla’ November 18th, 2013.

To listen:

River Voice ‘Sounding’ St George’s

February 21st, 2014 by transspace_jane

River Voice Choir Sounding St George’s 15.02.14


Listening when singing is essential for choirs and involves variable levels of listening: knowing you’re in tune with your vocal group (i.e. soprano, alto, tenor, bass), making sure you’re in tempo, and occasionally (once you know the piece well enough!) listening to the whole sound. River Voice are a choir with a particular awareness of listening; many of their members are blind or have visual impairments, which has engendered a heightened awareness of listening in the sighted choir members too.

I met River Voice during the Tania Holland/Jon Hering Lachrymose project at Turner Contemporary and had the pleasure of singing with them at xmas. I’d already had some interesting conversations with various members in rehearsals about sound & perception, discovering that Carla had already tried vOICe a little on her old Nokia phone.

I invited them to workshop the process of ‘Sounding’ that’s evolving through Trans:space. In my previous blog ‘Filtering Perception’ I described a workshop with Artlink Central where we attempted to develop a language to sonify objects. The aim of the River Voice workshop was to find sounds that might interpret or translate the architecture as well as the ‘sense of the place’: its surfaces, its volume and resonance, and our embodiment of it as a whole. I was also curious to see how different it would be working with a group completely new to ‘sounding’, and whether any common sonic vocabulary would emerge.

We were situated in the centre of a partially carpeted, large, brick-built, high wood-raftered building with an arched roof space. To get us focused we began by singing tones and tuning in to each other. We followed this with a 360º/deep listening exercise and some vocalising of the immediate soundscape within and without the building, to limber up our improvising voices. The choir’s normal focus is rehearsing their next concert repertoire, so some lively discussion followed about how to move your focus around when listening, the meditative sensation of deep listening, and the complexity of multi-sensory perception and synaesthesia; this moved on to how we might sonify a sensory experience as well as actual physical matter. This required us to re-tune: out of the soundscape, into the resonance of the space, and to devise sounds that could express our sensory experience of being there. Some people described a sense of stillness, being a still small point in a vast space, perhaps like being on a beach with the tide far out, and felt that this stillness was a kind of silence. For others, listening and embodying made them want to make a sound that would travel up and out into the volume of space, meeting its largeness and transcending the physical architecture, akin to a swelling organ. We discussed how preconceptions might affect our perception. The first track below includes some of that discussion and ends with us vocalising a description of both the sense of stillness and the vastness opening out, exploring pitch, intonation and harmonics. It begins with one small hum that swells along the line as each person picks it up, and throngs and alters along the way. We had a tea break and then made a second sounding, evolved from the first, heard in the second track.

http://

http://

St George’s Centre is the regular rehearsal space for River Voice, so they have a degree of familiarity with it. We extended that familiarity in the second part of the workshop by vocalising in two different areas of the building, in order to explore how this might alter our vocal interpretations and to play vocally with the resonance of each new space: the original gated and domed ‘choir’ or altar area, and the side ‘corridors’, long, narrow, wooden-floored, stone-walled walkways adjoined to the main space by a series of arches. The following clip is the DoHng’ng’ng’ng’ng sound devised for the ‘altar’ space.

http://

Finally we devised a partially gated, almost compressed sound, B – aaaaahr, that travelled in an upwards curve, for the space under the arches.. finally triggering the choir back into default song mode and a rendition of the Beach Boys’ ‘Barbara Ann’. The filter of memory in perception is seemingly never far away, as our brains work to process and categorise our immediate experience of a place..

http://

By the way, in addition to the ambient sounds of the building plus the wind and traffic outside, you’ll hear the odd scratch, snuffle, rattle & chink of the 3 working guide dogs.

Thanks River Voice..

Filtering perception

February 12th, 2014 by transspace_jane

The process of answering the trans-space R&D enquiry “How do we perceive our environment through sound & how can we interpret that for an audience?” has led me to explore evolving the vocal ‘sounding’ methodology I use to create sonic responses to places: developing it from its starting point, flexible direct vocal improvisation drawn from 360º deep listening interpreting an immediate soundscape, towards a non-lexical vocabulary of phonemes that can be uttered to creatively interpret and express the strata of sensory input we receive as we experience a place. This new phase is going to take a while to unfold!

We perceive our environments, and the objects within them, through simultaneous filters, with one filter often more heightened than the others depending on the individual’s physical condition and experience: sonic, tactile, smell, visual, memory, emotion etc. Each of these clamours for attention when we’re consciously analysing an object or a place in order to interpret it in a specific medium, and it becomes hard to choose which filter to pay attention to. This became very clear in the TRANS-SPACE workshop I led with musician Claire Docherty and 5 of the Artlink Central participant artists in Stirling. We’ve all worked together before, making vocal soundings in buildings and outdoor spaces such as Forth Valley Royal Hospital and Stirling Castle; we have a complicity and fluidity with each other, so it was natural to work together on the next step. I’d been shopping for sounds, which was a first for me, looking for objects that would be likely to present a challenge but also stimulate us to produce a variety of sound. It wasn’t important what the objects were, so their purposes weren’t revealed or discussed. Our focus for the day was to embody raw input and process it into vocal sounds that expressed something about it: investigating 4 objects over the course of the day, becoming aware of each person’s default filters, tuning in and out, one drawn to express rhythm, others focused on tactility or texture, materiality, aesthetic response, associated memory and, inevitably, imagination; sharing our responses to form the beginnings of a common sonic vocabulary for sonifying objects. You can hear some clips of the resulting soundings and discussions here:

http://

http://

http://

Taking these sounds and thoughts back down to Kent with me, I made some solo attempts to respond to some of the same indoor spaces Sam played tabla in (the weather has kept me from the outdoors; I’m dying to get back to the pine woods and subway to ‘sound’!). Digging down to the essence of the place was surprisingly harder without the group. I found myself defaulting to improvising the immediate soundscape rather than interpreting it, as here in the stairwell with a dominant extractor sound;

http://

until I began to think about the language of vOICe and how it describes a shape or surface; then I stopped fighting the filters, or thinking so hard, and began interpreting the geometry and surfaces of the space, as here with a distinctive oval skylight, followed by an interpretation of a right-angled wall and some steps leading up to a wide open doorway.

http://

http://

There’s so much further to take this..

Live version of Michael’s ‘Glimpsing Sound’ blog experiment

February 11th, 2014 by transspace_jane

Michael, I put some photography students from UCA Rochester, Kent in at the deep end with a brief vOICe demo and a live version of your experiment. Obviously, with such short experience of vOICe, it took a while for them to grasp the concept of the sounds being a direct translation of images of 2 diverse places, initially drawing on their memory and imagination to process and respond. Here’s a recording of their responses:

http://

 

 

 

 

Cuxton Subway – short film from Medway Tabla experiment

January 14th, 2014 by transspace_jane

Hear a sound-only field recording of the above here:

 

 

 

 

In trying to understand some of the questions from ‘Medway Tabla’, further reading has led me here… I think it helps to answer some of them.

November 18th, 2013 by Sam Randhawa

Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in intensity, spectral and timing cues to allow us to localize sound sources.[7] Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the zenith or vertical angle, and the distance (for static sounds) or velocity (for moving sounds).[8] Localization is based on the slight differences in loudness, tone and timing between the two ears. Humans, like most four-legged animals, are adept at detecting direction in the horizontal plane, but less so in the vertical, because the ears are placed symmetrically. Some species of owl have their ears placed asymmetrically and can detect sound in all three planes, an adaptation for hunting small mammals in the dark.[9]
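The timing cue mentioned here can be approximated with a simple far-field model: the extra path to the far ear is roughly the ear spacing times the sine of the source's azimuth. This is a sketch only; real heads diffract sound, so measured delays differ somewhat:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)
EAR_DISTANCE = 0.215    # m, the 21.5 cm spacing used in the excerpt below

def interaural_time_difference(azimuth_deg):
    """Approximate interaural time difference for a distant source.

    A source dead ahead (0 degrees) gives zero delay; one at 90
    degrees, directly to one side, gives the maximum delay.
    """
    return EAR_DISTANCE * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

itd_front = interaural_time_difference(0)   # zero: sound arrives together
itd_side = interaural_time_difference(90)   # maximum delay, ~625 microseconds
```

Note the 90-degree value lands on the 625 µs figure quoted in the excerpt that follows.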

http://en.wikipedia.org/wiki/Psychoacoustics

Evaluation for low frequencies
For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 625 µs) are smaller than the half wavelength of the sound waves, so the auditory system can determine phase delays between the ears without confusion. Interaural level differences are very low in this frequency range, especially below about 200 Hz, so a precise evaluation of the input direction is nearly impossible on the basis of level differences alone. As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound’s lateral source, because the phase difference between the ears becomes too small for a directional evaluation.
Evaluation for high frequencies
For frequencies above 1600 Hz the dimensions of the head are greater than the wavelength of the sound waves, so an unambiguous determination of the input direction based on interaural phase alone is not possible at these frequencies. However, the interaural level differences become larger, and these are evaluated by the auditory system. Group delays between the ears can also be evaluated, and are more pronounced at higher frequencies: if there is a sound onset, the delay of that onset between the ears can be used to determine the input direction of the corresponding sound source. This mechanism becomes especially important in reverberant environments. After a sound onset there is a short time frame during which the direct sound reaches the ears but the reflected sound does not. The auditory system uses this short time frame to evaluate the sound source direction, and keeps this detected direction as long as reflections and reverberation prevent an unambiguous direction estimate.[7]
The mechanisms described above cannot be used to differentiate between a sound source ahead of the hearer or behind the hearer; therefore additional cues have to be evaluated.[8]
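The crossover figures quoted above (below 800 Hz, above 1600 Hz) follow directly from the 21.5 cm ear spacing. A quick check, assuming a speed of sound of 343 m/s:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)
EAR_DISTANCE = 0.215    # m, the 21.5 cm spacing given in the excerpt

# Phase cues stay unambiguous while the half wavelength exceeds the
# ear spacing: wavelength / 2 > d, i.e. f < c / (2 * d).
phase_limit = SPEED_OF_SOUND / (2 * EAR_DISTANCE)  # ~798 Hz: "below 800 Hz"

# Level and onset cues dominate once the wavelength is shorter than
# the head: wavelength < d, i.e. f > c / d.
level_limit = SPEED_OF_SOUND / EAR_DISTANCE        # ~1595 Hz: "above 1600 Hz"
```

Between the two limits neither cue works cleanly, which is why localization is poorest around 1 kHz.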

http://en.wikipedia.org/wiki/Sound_localization