Sunday 4 September 2016

Levels of Constraint in Music and Recursive Distinguishing

A while ago I wrote a computer program which analysed the MIDI signals coming from my piano as I improvised, and made a range of entropy calculations on-the-fly using a sliding-window technique (so the entropy was relative to a window of recent events): see http://dailyimprovisation.blogspot.co.uk/2015/09/entropy-and-aesthetics-some-musical.html. I was fascinated to watch the numbers shift as I played, and to observe how my emotions correlated with them. I have some reservations about the sliding-window idea, but it was pragmatic and certainly interesting. I ought to write it up.
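
For anyone curious about the mechanics, here's a minimal sketch of the sliding-window calculation in Python. It's a toy reconstruction rather than the original program - the window size, the event encoding and the names are all my own choices for illustration:

```python
import math
from collections import Counter, deque

def shannon_entropy(symbols):
    """Shannon entropy (in bits) of a sequence of symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

class SlidingEntropy:
    """Entropy of the most recent `window` events, recalculated as each event arrives."""
    def __init__(self, window=16):
        self.events = deque(maxlen=window)

    def update(self, event):
        self.events.append(event)
        return shannon_entropy(self.events)

# Example: feed in MIDI note numbers as they are played.
tracker = SlidingEntropy(window=8)
for note in [60, 64, 67, 72, 67, 64, 60, 64, 67, 72]:
    print(note, round(tracker.update(note), 3))
```

Each new note pushes the oldest one out of the window, so the entropy is always relative to a short horizon of recent events.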

In my last post I wrote about information and Shannon entropy in the Schenker graphs. As one moves up the levels from foreground to background, I think there is decreasing uncertainty and increasing constraint. However, what I didn't say was that I think prolongation - which is the fundamental concept in Schenker - exists because of the inter-relationship between levels: it is the dynamics between different kinds of constraint which produce the levels in the first place. Each level only exists because of the constraint relations it has with harmony, rhythm, etc. There is no melody without rhythm, no harmony without melody, no music without the whole thing. I hinted at this in a rather weird article I wrote for Kybernetes a few years ago using Beer's VSM as a tool to think with: see http://www.emeraldinsight.com/doi/abs/10.1108/03684921111160304. And the "whole thing" is always an out-of-reach thing-in-itself. The analyst (Schenker in this case) brings constraints to bear on the music.



Since Bach's first prelude is Schenker's most famous (and simplest) example, what is its entropic structure? Well, using the technique in my software, there is little rhythmic information: basically it's all semiquavers. That means there is high constraint. There's also little information in the notes that are used: the broken chords repeat themselves each bar. If we just looked at the chords, then of course there is difference, as the harmonic scheme revealed by the chords unfolds: that carries more information. If we look for motifs, then the accompaniment breaks down into the first rising broken chord, and then the repetition of the last three notes of the rising chord (and this pattern repeats). If we look at intervals, there is something interesting that happens when the accompaniment uses 2nds rather than 3rds and 4ths: that is a difference.

If I were simply to look for notes that are different, then the chromatic notes appearing later on are striking - they move the music on. We could also look at the entropy of register: the bass goes lower (as in so many Bach preludes and fugues). We know we are coming to the end when the patterns are broken (when the left hand only plays once a bar and the right extends its idea over the whole bar).
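
To make the comparison between these dimensions concrete, here's a rough sketch of the kind of counting involved. The note list is a toy stand-in loosely modelled on the broken-chord figuration of the opening bars (not a transcription), and the four dimensions are just the ones discussed above:

```python
import math
from collections import Counter

def entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# Toy stand-in for the opening figuration: (midi_pitch, duration_in_semiquavers).
# Illustrative only - loosely modelled on the broken chords, not a transcription.
notes = [(60, 1), (64, 1), (67, 1), (72, 1), (76, 1), (67, 1), (72, 1), (76, 1),
         (60, 1), (62, 1), (69, 1), (74, 1), (77, 1), (69, 1), (74, 1), (77, 1)]

pitches   = [p for p, d in notes]
durations = [d for p, d in notes]
intervals = [b - a for a, b in zip(pitches, pitches[1:])]
registers = [p // 12 for p in pitches]          # crude octave bands

for name, stream in [("rhythm", durations), ("pitch", pitches),
                     ("interval", intervals), ("register", registers)]:
    print(f"{name:9s} entropy: {entropy(stream):.3f} bits")
```

With every duration the same, the rhythm entropy comes out as zero - maximal constraint - while the pitch, interval and register streams each carry a different amount of information.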

What I notice most about this kind of approach is that counterpoint is fundamentally an overlapping constraint: something is kept the same whilst something else is changed. What happens at the end? The constraints in a variety of different dimensions all come together.

Information theory is powerful, but basically it is simply about counting. My program counts things it has been told about: rhythm, harmony, intervals, notes, registers, etc. As the music unfolds, there are new things to count which might not be immediately obvious when the music starts: motifs, articulations of form and so on. As a musician, I can use my ears (what's that?) to guide the selection of these things that might be counted. But I'm curious as to whether the selection of things-to-be-counted might be arbitrary: the point that matters is not the distinctions that are made and the counting which occurs; it is the relations between the entropies of the different counted distinctions.
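
One way of putting a number on 'relations between entropies' is mutual information: how much does counting one distinction reduce the uncertainty in another, across the same events? This is a sketch only, and the feature streams are hypothetical, made up for illustration:

```python
import math
from collections import Counter

def entropy(xs):
    n = len(xs)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): how far one counted distinction
    constrains another over the same events."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Two (hypothetical) distinctions counted over the same twelve notes.
pitch_classes = [0, 4, 7, 0, 4, 7, 0, 2, 9, 2, 5, 9]
durations     = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]   # all semiquavers
registers     = [4, 4, 4, 5, 5, 4, 4, 4, 4, 5, 5, 4]

print("I(pitch; register) =", round(mutual_information(pitch_classes, registers), 3))
print("I(pitch; rhythm)   =", round(mutual_information(pitch_classes, durations), 3))
```

A dimension that is completely constrained (the all-semiquaver rhythm) can tell us nothing about any other dimension; whatever structure there is lies in the dimensions whose constraints overlap.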

A machine-learning algorithm, for example, makes distinctions about features which might be counted to identify a class of object. The important point is that the algorithm can be consistent in identifying new forms of regularity. Some of these distinctions will be insignificant in their relations to others; other distinctions will be highly significant in their relations to others.

There's something going on at the moment in cybernetics called 'recursive distinguishing' developed by Louis Kauffman and Joel Isaacson (see http://homepages.math.uic.edu/~kauffman/RD.html; this is particularly interesting: https://dl.dropboxusercontent.com/u/11067256/JSPSpr2016.pdf). Shannon's information equations are a crude instrument for doing this kind of stuff, and I'm increasingly aware of the need for some form of recursive measurement. It helps me to blog this - this is all very speculative! The important thing in information theory is counting, and identifying things to count. Shannon measures 'surprise', but surprise arises over time. Time unfolds like music: it refers to itself (actually that's tricky - but that's another post on 'what is self-reference?')... what that means practically is that there are continually emerging new categories of things-to-count. But the specific categories of things to count are not important; what matters are the relations between the constraints identified in counting whatever it is we count (in fact, 'to count' is itself a relation).
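
To give the speculation a little shape, here is a toy sketch of what a recursive measurement might look like: draw distinctions between neighbouring events, then draw distinctions between those distinctions, and so on. I should stress this is my own crude reading of the idea, not Kauffman and Isaacson's actual procedure:

```python
def distinguish(seq):
    """One pass of a toy 'recursive distinguishing' step: each element is
    replaced by a mark recording whether it differs from its left and right
    neighbours.  My own reading, not Kauffman and Isaacson's procedure."""
    marks = []
    for i, x in enumerate(seq):
        left  = seq[i - 1] if i > 0 else x
        right = seq[i + 1] if i < len(seq) - 1 else x
        marks.append((x != left, x != right))   # the distinction becomes the new symbol
    return marks

# Start from a stream of counted things (here, hypothetical pitch classes)
# and re-distinguish the distinctions themselves, recursively.
level = [0, 4, 7, 0, 4, 7, 0, 2, 9, 2, 5, 9]
for depth in range(1, 4):
    level = distinguish(level)
    print(f"level {depth}: {len(set(level))} distinct marks over {len(level)} positions")
```

Each pass takes the previous level's distinctions as the new things-to-count - a small gesture towards counting which refers to itself.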

It's late. I need to think about this.
