Upcoming: The Feed

This Thursday I’ll be collaborating with David Gunn from Incidental, having my poems remixed and cut up by his ‘Feed’ app. To quote from David’s website,

Feed is a music app for iPad that enables complex, rich manipulation of a “live feed” of sound, drawn from the iPad microphone or an audio file. Users can record, playback, loop and modulate to spontaneously create their own compositions from musical sources, spoken word or ambient sound.

I’ll be reading new material from the Vox Lab residency whilst David mixes, rends and treats it. Documentation of the experiment will be posted here, or come along and see it live:

Thursday 14 June, 6.30-7.30pm
55DSL Store
2-4 Bethnal Green Road
London E1 6GY
Free entry.

In the Anechoic Chamber

On Friday I spent an hour in UCL’s anechoic chamber, a small room lined with noise-absorbing wedges that create an almost noiseless environment, where sounds do not echo back to the speaker but disappear into the walls.

I spent 15 minutes sitting silently and in the dark, and then Nadine Lavan, who very kindly supervised me, turned on the lights and the microphone.

First of all I tried to describe as closely as possible what I’d heard and seen, and then I spent around 20 minutes writing, followed by an improvised spoken reflection on the experience, which lasted around 17 minutes.

You can listen to my first audio description – featuring phasing sand, birdsong, silent pressure and Neil Young – on the SoundCloud clip below, and below that read the piece I wrote in the chamber, lightly edited.

Anechoic Chamber Noise by jwilkes


starting with silence
which is not silence
as pressure and birdsong
emerges from a small room, imagine
being trapped
scrabble at the doorless door
feedback dampened
this, typing, still noises
my stomach still noises
gentle belch
the system wobbles in its noise
and cloth-eared hiss to myself

possible to create distractions
actually quite comfortable
was what I thought
locked away from the world
took several tries to speak
into the dark there seemed
no need almost
the lights went on it
vanished, recollected
the old chair still crunches
silence chased
by key tapping, leg shifting

it had real presence, real pressure
well let’s call it ‘silence’
seeing as it really scattered sand
dawn chorus, deep pressure, wet crackling
spatialised, a volume for living in
miner for a heart of gold
punctuated by belly creak
the voice from the speaker
shakes the metal frame of the floor
all is motion

I hadn’t expected the volume
another belly creak
that forms around me
or into which I dissolve
as to say birdsong top right
or phasing sand mid right
or pressure all across the left
is to explain how inner and outer
make little sense
in the dark, in the silence

turning L and R
proprioception gave me that
but the eyes were still deep blue of noise
and no-sound moved straight
through the skull

Upcoming talk for ‘At the Centre of Listening’

I’ll be presenting work from the residency and discussing voice and writing with Daniela Cascella and music critic Nick Coleman at this event for ‘At the Centre of Listening’.

This is the third in a series of events organised by Ultra-red and students from the Goldsmiths MA in Aural & Visual Cultures for the Serpentine Gallery’s Edgware Road project. Full details are here.

Where and when:

Weds 20th June 2012, 6-9pm.

Centre for Possible Studies
21 Gloucester Place
London W1U 8HR

Free entry.

Playing with databases

Below is a recording of a poem-sketch made using the MRC Psycholinguistic Database. You can go ahead and listen to it, or read on for more about the process and background.

vox lab poem sketch 1 by jwilkes

The poem consists of sets of words produced from the database by varying the level of “imageability”, or how easy a word is to visualise. Joanette, Goulet and Hannequin illustrate this by asking us to think about the difference between the words “anger” and “antitoxin”: the former is abstract but easy to visualise, whereas the latter is concrete but hard to visualise.

I set the imageability threshold to the highest level that still produced just one word. This was 660, on a scale running from 100 to 700, and it yielded the word ‘BEACH’, which I repeated three times. I then lowered the threshold in steps of 5, from 655 down to 630, giving seven sets in all. With each step the set of available words grew, and in this work-in-progress I simply read them out in alphabetical order.
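The sweep itself can be sketched in a few lines of Python. The word list and ratings below are invented for illustration (the real database is queried through its web interface), but the threshold logic is the same: each lowering of the cut-off admits more words, read out A–Z.

```python
# Toy stand-in for the database: word -> imageability rating.
# Every rating here is invented for illustration; real MRC
# imageability scores run from 100 to 700.
WORDS = {
    "BEACH": 662,
    "APPLE": 658,
    "RIVER": 652,
    "CANDLE": 646,
    "THUNDER": 641,
    "VELVET": 636,
    "HARBOUR": 631,
}

def words_at_or_above(threshold):
    """All words whose imageability meets the threshold, alphabetical."""
    return sorted(w for w, img in WORDS.items() if img >= threshold)

# Sweep the threshold down from 660 to 630 in steps of 5,
# as in the poem-sketch: seven thresholds, seven growing sets.
for threshold in range(660, 625, -5):
    print(threshold, words_at_or_above(threshold))
```

At 660 only one word survives the cut; by 630 the whole toy vocabulary is in play.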

Psycholinguistics Geekout

Yesterday was an exciting day at the ICN for all sorts of reasons: I visited an MRI scanning suite for the first time, I tried reading some more poems under speech-jamming conditions, and the BBC came and did some recordings and interviews with us (more on which anon…).

But even more exciting than all that was a comment Sophie made in passing about something called the MRC Psycholinguistic Database. This, I’ve discovered, is an absolute goldmine for someone with a fetish for words and is an even greater example of the internet adding to the sum of human knowledge than Lolcats.

Essentially, it’s an online dictionary for researchers who want to create lists of word-stimuli for experiments. It allows you to select from a database of more than 150,000 words, narrowing them down by criteria that are second nature for a psycholinguist but for a writer (or for this one, at least) are excitingly new ways of choosing vocabulary.

You can select words by their standardised scores on familiarity, concreteness, meaningfulness, age of acquisition and a host of other measures. You can filter the word sets by part of speech, irregular pluralisation or contextual status (Specialised, Archaic, Dialect, Nonsense, Rhetorical, Erroneous, Obsolete, Colloquial…) And once you’ve chosen your criteria, the output is presented to you in a beautifully stripped back aesthetic:

This delightful list (ETHER – GAUNTLET – LANCER – LICHEN – LYRE – RAMROD – WHALEBONE – WICKET, if you can’t read it above) was produced by specifying words of between 2 and 5 syllables, with a concreteness rating greater than 500 out of 700 and a meaningfulness rating (Colorado norms) less than 300 out of 700, and then filtering for nouns.
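As a sketch of how such a compound query works – with words and ratings invented for illustration, since real MRC records carry many more fields – the whole thing amounts to chaining four filters:

```python
# Miniature stand-in for MRC-style records. All values invented.
RECORDS = [
    # (word, syllables, concreteness, meaningfulness, part of speech)
    ("LICHEN",  2, 570, 270, "noun"),
    ("RAMROD",  2, 580, 260, "noun"),
    ("JUSTICE", 2, 380, 450, "noun"),  # too abstract, too meaningful
    ("SHIMMER", 2, 560, 270, "verb"),  # wrong part of speech
    ("INK",     1, 600, 250, "noun"),  # too few syllables
]

def query(records):
    """Nouns of 2-5 syllables, concreteness > 500, meaningfulness < 300."""
    return sorted(
        word for word, syl, conc, mean, pos in records
        if pos == "noun" and 2 <= syl <= 5 and conc > 500 and mean < 300
    )

print(query(RECORDS))  # only the words that pass all four filters
```

Only the first two toy words clear every criterion; the comments mark why the rest fall out.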

So how could sets like this be useful for poetry? The raw output could be used to create list poems, but the database could also be used to create differentiated vocabulary pools. Limiting your choice of words like this would, for a start, allow you to create a series of variations on a particular theme, in the manner of Raymond Queneau’s Exercises in Style. I’m sure it could be taken further than this though – constraint-based writing is more satisfying, for me, when it’s a means to get somewhere rather than an end.

“Don’t you understand trying to stammer?”

The phenomenon of DAF – Delayed Auditory Feedback – has been provoking interest lately, due to its use in a prototype “speech-jamming” gun invented by Japanese researchers. As Sophie has pointed out, the potential for using DAF to shut people up against their will needs to be strongly qualified, but I was interested to try it out on myself and see the effect.

So last week Zarinah Agnew kindly set me up with a pair of headphones in front of her computer, and loaded a program which made everything I said repeat back in my ears at a variable delay. I’d brought a few different texts with me, to see if some material was harder to read than others: some Gerard Manley Hopkins – poems full of internal and end-rhymes, consonance and assonance – a short play by Gertrude Stein, wonderfully telegraphic (or should that be telephonic?), called ‘I like it to be a play’, and Maggie O’Sullivan’s Palace of Reptiles, with its poems rich in sound-play but not rhythmically constrained in the same way as Hopkins’ lines.
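For the curious, the core of such a program is just a buffer: what goes in comes back out a fixed number of samples later. Here is an offline sketch in Python with NumPy – the function and parameter names are my own, and a real DAF rig does this live through headphones rather than on a recorded array:

```python
import numpy as np

def delayed_feedback(signal, sample_rate, delay_ms=200):
    """Mix a signal with a copy of itself delayed by delay_ms.

    Offline stand-in for delayed auditory feedback: in a live rig
    the delayed copy is fed into the speaker's headphones as they
    talk; here we just shift a recorded array and sum.
    """
    delay_samples = int(sample_rate * delay_ms / 1000)
    delayed = np.zeros_like(signal)
    delayed[delay_samples:] = signal[:len(signal) - delay_samples]
    return signal + delayed

# At 44.1 kHz, a 200 ms delay shifts the copy by 8820 samples.
sr = 44100
t = np.arange(sr) / sr                # one second
voice = np.sin(2 * np.pi * 440 * t)   # stand-in for a recorded voice
out = delayed_feedback(voice, sr, delay_ms=200)
```

The first 200 ms of the output are untouched; after that, each moment is overlaid with what was said 200 ms earlier, which is exactly the echo that trips the reader up.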

Zarinah started me off on a delay of 200ms, which usually causes maximum interference. I opened the Hopkins, and started reading. I wasn’t stopped dead in my tracks, but listening back to myself, I do sound a bit out of my head. The vowels are drawn out and slurred, the rhythm is all over the place, and when I read “pride and crared for crown” for “pride and cared for crown”, I make a mistake similar to the typical ‘perseverative error’, with the ‘r’ getting carried over into the next word (Hashimoto and Sakai, 2003, give the example of “hypodermic nerdle”).

Here’s a snippet – low quality because recorded on my dictaphone, but you get the picture.

[Audio clip]

Next we tried the Stein, at the same level of delay, and I found it easier, though my reading isn’t particularly fluent and I do produce a little stutter as I read “she expected a distress”, which sets me off slurring the next few lines.

[Audio clip]

And again, here, one mistake causes ripples and wobbles which pass through the next few lines (which end, “Don’t you understand trying? / Don’t you understand trying to stammer? / No indeed I do not.”) I like this (unintentional) effect of a distorted sense of timing, as if I’m being played back on a faulty record player.

[Audio clip]

Overall, though, I seemed to manage the Stein better, getting through its short, clipped exchanges, with their well-marked pause points, more easily than the enjambed Hopkins.

The O’Sullivan poem (‘Now to the Ears’) was relatively easy as well, even with the 200ms delay: perhaps the well-spaced syllables of this particular poem are less pressurised and compressed than those of ‘The Sea and the Skylark’, even if equally sonically complex – though listening to this MP3 of her reading, my version is much too slow.

I’m not sure why I found it easier: perhaps I was just getting accustomed to the reverb, listening instead to my undelayed voice coming in through my cheekbones. The artist Charles Stankievech has written a very interesting article about the history of headphones, identifying them with a “bracketing of the world”, and tracing their genealogy from 19th-century stethoscopes. He cites Jonathan Sterne on how the stethoscope created new relations between doctor and patient, and turned the voice from a carrier of meaning, bearing patients’ self-descriptions of their illnesses, to a potential symptom in itself, a “kind of sound effect – a container of timbre and an index of the states that shaped it”. According to Stankievech, the invention of headphones which followed created a new, impossible space filled with floating “sound masses”, an “in-head” experience of sound “between the ears”.

Listening to your own voice replayed with delay imparts a dislocating twist to this perception of headspace, if that is what it can be called. Two forms of proprioception are set against each other, as your ears and your flesh return contrary signals about what you’ve just said. Zarinah told me that she has sound recordings of people who are completely knocked sideways by this: some are reduced to making single sounds, while others try to shout over their own voices, which only increases the feedback, in an escalating loop of interference.

Further reading:
Hashimoto, Y., & Sakai, K. (2003). Brain activations during conscious self-monitoring of speech production with delayed auditory feedback: an fMRI study. Human Brain Mapping, 20, 22-28.
Stankievech, Charles. (2007). From stethoscopes to headphones: an acoustic spatialization of subjectivity. Leonardo Music Journal, 17, 55-59.
Sterne, Jonathan. (2003). The Audible Past: Cultural Origins of Sound Reproduction. Durham: Duke University Press.
Takaso, H., Eisner, F., Wise, R., & Scott, S. (2010). The effect of delayed auditory feedback on activity in the temporal lobe while speaking: a positron emission tomography study. Journal of Speech, Language, and Hearing Research, 53, 226-236.

Tip-of-the-Tongue Phenomena

“For the above example, subjects tended to guess /s/ as the initial phoneme and two as the number of syllables, and sound-related words like secant and sextet had come to mind (meaning-related words, e.g. compass, also occurred). Apparently there is much lexical-form information available in the TOT state.”
- Willem Levelt, Speaking: From Intention to Articulation (Cambridge, MA: MIT Press, 1989), p. 320.
Response 1:
it was a /r/
it was a /r/
it was a roastbeef butty
a rustbelt
some kind of berry
uhh… fruit sorbet
kind of pink fruit
on her head
a rose
buried beneath
a king
the kindness you find
in a strange land
Target phrase 1: Listen here
Response 2:
it’s /n/
it’s /n/
it’s numerous, a number of, of
Rhenish ballads gone bad, like
a nightly mind — reading tool
a rubberised cloud or
something about an airline?
it’s /n/ and /n/, it’s a noise anointed
rebel tune
Target phrase 2: Listen here