Wednesday 6 November 2013

Redundancies and Meaning in Music

One of the exciting things about playing with Ableton Live is exploring the effects of generating redundancy in music. I found that the repetitious sound generated by the computer stimulated me to make more sound. But the striking thing with Ableton was that it wasn't simply the repetition of things like a rhythm or a melody (which is perhaps the idea most closely associated with 'redundant' information), but the layering of different ideas on top of one another at different time periodicities. More than anything else, it was this layering, particularly polytonal layering, that was most richly stimulating. So, given that my experience of this sound is one of being stimulated to create, and that I associate that kind of stimulation with redundancies, I'm wondering whether this polytonal sound is also in some way an aspect of redundancy. But how?

It is important to remember that in information theory, redundancy is a constraint on entropy, the uncertainty in predicting the next message. Redundancies shape the 'grammar' of what's happening. But this works strangely in music: a repeated chord as accompaniment approaches zero entropy (we expect the chord to be repeated with near-certainty), yet as it does so, we might expect absolutely anything else to occur on top of it (so the entropy at another level increases as the chord is repeated, because there is high uncertainty about what might come next above it). That means there appear to be multi-layered entropies and redundancies.
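
As a rough sketch of how these two quantities behave (nothing in the post implies any code; this is just an illustration): if we treat an accompaniment as a stream of chord symbols, Shannon entropy can be estimated from symbol frequencies, and redundancy as the shortfall from the maximum entropy of the alphabet actually used. The chord names and sequences below are purely hypothetical.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits per symbol), estimated from relative frequencies."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def redundancy(symbols):
    """Redundancy = 1 - H/H_max, where H_max assumes the observed symbols are equiprobable."""
    distinct = len(set(symbols))
    if distinct < 2:
        return 1.0  # a single repeated symbol is maximally redundant
    return 1 - entropy(symbols) / math.log2(distinct)

# A repeated accompanimental chord: entropy approaches zero, redundancy approaches one.
accompaniment = ["C"] * 16
# A more varied progression: higher entropy, lower redundancy.
progression = ["C", "F", "G", "C", "Am", "F", "G", "C"]

print(entropy(accompaniment), redundancy(accompaniment))  # 0.0, 1.0
print(entropy(progression), redundancy(progression))      # ~1.9 bits, redundancy near 0.05
```

The calculation only describes the chord layer itself; the point of the paragraph above is that this near-zero entropy says nothing about what may happen on top of it.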

Harmonies themselves may have entropy values. A major triad has lowish entropy because it frames the possible things that might harmonise with it. There's not a lot that's redundant in a major triad. A polytonal chord (say, C and F# triads superimposed) produces high uncertainty about what might follow it, in a similar way to a repeated accompanimental chord. That suggests that there may be high redundancy in a polytonal chord (like the repeated pattern) which opens the door to any other possibility.
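
To make the contrast concrete, here is a toy calculation with invented continuation probabilities (they are not drawn from any corpus or from the post itself): a skewed distribution of plausible continuations after a C major triad against a near-uniform distribution after a C/F# polychord.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution over continuations."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical distributions over what might follow each sonority.
# A major triad strongly constrains its continuations (skewed distribution);
# a polytonal chord leaves the field far more open (near-uniform distribution).
after_c_major = {"F": 0.35, "G": 0.35, "Am": 0.15, "Dm": 0.10, "other": 0.05}
after_c_fsharp = {root: 1 / 12 for root in
                  ["C", "Db", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]}

print(entropy(after_c_major))   # ~2.0 bits: the triad constrains what follows
print(entropy(after_c_fsharp))  # ~3.6 bits: close to the 12-tone maximum
```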

What might we generalise here? A layer that produces zero entropy through high redundancy stimulates high entropy at a new layer. Emergent patterns of redundancy limit the entropies of the new layer until it too might approach zero, and so the cycle continues. What emerges is a stratified information model, not a linear one. So the question then is how the different strata move over one another. In other words, there is an information model with both diachronic and synchronic aspects.

One way of considering this is to characterise it as a multi-dimensional communication situation with mutual redundancies between layers. An idea fertilises another idea when the constraints behind the production of one idea's messages are identified with the constraints behind the production of another's. With mutual constraint there is transfer between the two levels. At this point it may be that a new level is born.
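
One way of making 'mutual redundancy' between layers concrete is as mutual information: how much knowing the state of one layer reduces uncertainty about the other. This is only a sketch of that reading; the two aligned streams below (a chord layer and a melodic-figure layer) are invented purely for illustration.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information I(X;Y) in bits between two aligned symbol streams."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Two layers sounding together: a chord stream and a melodic-figure stream.
# Here the figure is partly determined by the chord beneath it, so the layers
# share constraint and the mutual information comes out well above zero.
chords  = ["C", "C", "F", "F", "G", "G", "C", "C"]
figures = ["arp", "arp", "scale", "arp", "scale", "scale", "arp", "arp"]

print(mutual_information(chords, figures))  # ~0.7 bits: the layers mutually constrain one another
```

If the two streams were statistically independent, the value would approach zero; the 'transfer between the two levels' described above corresponds to this shared constraint being non-zero.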

But Shannon's theory is about machine communication. In human communication, what appears to happen is that expectations arise above a sea of information and redundancy. When we listen to a tonal piece, there is an emerging expectation that messages will fall within a particular 'key' (this is not, I think, simply socially constructed - although its ontology may implicate some kind of deviation from, or compliance with, a social norm). It is in this domain of expectation that Asafiev's 'intonation' may occur (see http://dailyimprovisation.blogspot.co.uk/2013/08/four-climaxes-and-theory-musical.html). An intonation is a selection: an expectation is selected from a set of possibilities. It is a way of filtering out 'semantic noise'. What are the criteria for selecting an expectation? I think the most likely cause will be the mutual redundancies that exist at different levels of experience. A new idea is a new expectation.