Tuesday, 16 January 2018

Learning Analytics, Surveillance and Conversation

In the noisy discourse that surrounds learning analytics, there are some basic points which are worth stating clearly:
  1. Learning Analytics, like any “data analysis”, is basically counting: however complex the equations which promise profound insights, in the end they are doing nothing other than counting. 
  2. Human beings determine what is to be counted and what isn’t, and within what boundaries one thing is said to be the same (and counted as the same) as another thing. 
  3. Learning analytics takes a log of records – usually records of user transactions – and re-represents it in different ways.
  4. The computer automates the process of producing multiple representations of the same thing: these can be visual (graphs) or tabular. 
  5. Decisions are facilitated when one or more of the representations automatically generated by the computer coincide with some human’s expectation. 
  6. If this doesn’t happen, then doubt is cast over the quality of the analysis or the data.
  7. Learning analytic services typically examine logs for multiple users from a position of privilege not available to any individual user. 
  8. Human expectations of the behaviour of these users are based on biases surrounding those aspects of individual experience which a person in a position of privilege will have: typically this will be knowledge of the staff ("the students have had a miserable experience because teacher x is crap").
  9. Often such high-level services exist on a server into which data from all users is aggregated, with little understanding on the part of users as to what might be gleaned from it. 
  10. The essential relationship in learning analytics is between automatically generated descriptions and human understanding.  
  11. Data analytic tools like Tableau, R, Python, etc. all provide functionality for programmatically manipulating data in rows and columns and performing functions on those rows and columns. Behind the complexity of the code, this is basically spreadsheet manipulation. It is the principal means whereby different descriptions are created. 
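The counting and re-representation described in the points above can be sketched in a few lines of Python. The log, its fields, and its values here are hypothetical, but the shape is typical of a VLE export: the same log of user transactions yields two quite different "descriptions" simply by choosing what to count.

```python
import csv
import io
from collections import Counter

# A hypothetical log of user transactions, as it might appear in a VLE export.
log = """user,date,action
alice,2018-01-08,view
alice,2018-01-08,post
bob,2018-01-09,view
alice,2018-01-09,view
bob,2018-01-10,post
"""

rows = list(csv.DictReader(io.StringIO(log)))

# Two different descriptions of the same log, each produced by counting:
by_user = Counter(r["user"] for r in rows)  # activity per person
by_date = Counter(r["date"] for r in rows)  # activity per day

print(dict(by_user))  # {'alice': 3, 'bob': 2}
print(dict(by_date))  # {'2018-01-08': 2, '2018-01-09': 2, '2018-01-10': 1}
```

Nothing in either representation is more than a tally; the human choice of grouping column is what makes one description rather than another.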

So the real question about learning analytics is a question about automatically-generated multiple descriptions of the data, and how those multiple descriptions influence decision-making. 

Of course, decisions made from good data will not necessarily be good decisions, nor are decisions made with bad data necessarily bad. What matters is the relationship between the expectations of the human being and the variety of description they are presented with. 

In teaching, communication, art, biology or poetry, multiple descriptions of things contribute to the making of meaning. Poets assemble various descriptions to convey ideas which don't have concrete words. Composers create counterpoint in sound. When we discuss things, we express different understandings of the same thing. And teaching is the art of expressing a concept in many different ways. What if some of these ways are generated by machines?

AI tools like automatic translators or adaptive web pages are rich and powerful objects for humans to talk about. As such tools adapt in response to user input, people talking about those tools understand more about each other. Each transformation reveals something new about the people having a discussion. 

This is important when we consider analytic tools. The richness of the ability to generate multiple descriptions means that there is variety in the different descriptions that might be created by different people. The value of such tools lies in the conversations that might be had around them. 

With the emphasis on conversation, there is no reason why analytic tools should be cloud-based, and no reason why surveillance is necessary. They could instead be personal, locally-installed tools, whose simple job is to process log files relating to one user or another. Through using them in conversation, individuals can come to understand each other's understanding better. They should be used intersubjectively.
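A personal, locally-run tool of this kind needs nothing beyond the standard library. The sketch below is a minimal illustration, with a hypothetical single-user activity log: it re-represents one person's week as a text histogram, a form made to be looked at and talked about, with no server involved.

```python
from collections import Counter

# A hypothetical single-user activity log: (date, action) pairs that might
# come from a personal spreadsheet export rather than a central server.
my_log = [
    ("2018-01-08", "read"),
    ("2018-01-08", "write"),
    ("2018-01-09", "read"),
    ("2018-01-10", "read"),
    ("2018-01-10", "write"),
    ("2018-01-10", "read"),
]

# Re-represent the same log as a text histogram: one alternative description
# of the week, generated locally, for discussion rather than surveillance.
per_day = Counter(date for date, _ in my_log)
histogram = "\n".join(f"{day} {'#' * n}" for day, n in sorted(per_day.items()))
print(histogram)
# 2018-01-08 ##
# 2018-01-09 #
# 2018-01-10 ###
```

Two people running such a tool over their own logs, and comparing the pictures, would be using the analytics intersubjectively in exactly the sense meant above.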

Recently I've been doing some experiments with personally-oriented analytical tools which transform spreadsheet logs of activity into different forms. The value in the exercise is the conversation. 

Whatever we do with technology, it is always the conversation that counts!

Saturday, 13 January 2018

Learning as an Explanatory Principle - a response to Seb Fiedler

Seb Fiedler (University of Hamburg) wrote this (http://seblogging.cognitivearchitects.de/2018/01/11/on-learning-as-an-explanatory-principle/) last week in response to my post about a "logic of learning" (see http://dailyimprovisation.blogspot.co.uk/2017/12/a-logic-of-learning.html).

My original post was about the impossibility of saying anything sensible about learning. Bateson's idea of "explanatory principles", which Seb uses, was his way of pointing out the essentially relative nature of anything we say about anything. Gravity? It's an explanatory principle!

Seb highlights Jünger's view that "learning is an explanatory model for the explanation of change".

The effect of any explanatory principle is to allay uncertainty about the environment. We are generally uncomfortable with uncertainty, and seek to explain it away. If it's not God, it's the Government, or "human nature"... Because we attribute learning to so many aspects of change in the world about which we are uncertain, we have established institutions of learning to do an industrial-scale mopping-up of this uncertainty!

Explanatory principles - particularly when they are institutionalised - wash over the details of different people's interpretations of them. When the institution defines what learning is, individuals - learners and teachers - can find themselves alienated from their own personal explanatory principles. A common experience in education is for a learner to be told that they've learnt something when they feel just as confused (or more so) about the world as they did before they started.

At the heart of Bateson's argument about explanatory principles was the epistemological error which he feared would lead us to ecological catastrophe. He believed, as many in cybernetics believe, that one has to correct the epistemology. Bateson's attempt to articulate the logic upon which the epistemological error was based revolved around his work on the "double-bind". Double-bind logic is a dialectical logic of levels of contradiction and resolution at a higher level. This is the logic which I think we should be looking at when we look at education and the discussion about learning. 

The use of the explanatory principle of "learning" is a bit like a move in a strategic game. When x says "this is learning" they are maintaining a distinction through a process of transducing all the different descriptions of their world and what they observe into a category. They then seek to defend their distinction against those who might have other distinctions to make. It's not the distinction that matters. It's the logic of the process whereby the distinction comes to be made and maintained. 

The logic behind the double-bind which produces the distinction is not Aristotelian. Bateson did not fully explore the more formal properties of the double-bind logic. Lupasco did, and Joseph Brenner is able to tell us about it. I think Nigel Howard's theory of Metagames can also articulate a very similar kind of logic in a formal way using game theory.

Tuesday, 2 January 2018

Partial Notation of Improvisation and Creative Processes

I experimented with creating an instrumental voice (a flute) using some music notation software (Staffpad) and then improvising some kind of accompaniment to it on the piano. The notation process was interesting because it was effectively a process of creating space in the score. The gaps between the instrumental sections were more important than what occurred in those sections. I improvised into the gaps.

This worked quite well. It struck me that the process is a bit like doing a drawing where you demarcate the background and work towards the figure. The instrumental sections were pretty random - but it was just a frame. The colour was filled in with the improvisation.

I listened to the ensemble and started to add another voice which reinforced some of the features of the piano. Eventually I imagine I could dispense with the improvised bit completely.

When we sing along, or improvise with existing music, what is happening is the making of an alternative description of it. It's rather like taking Picasso's bare skeleton of a bull, and gradually filling in the bits which are missing. The bare bull is still a bull. What we add are alternative redundant descriptions.
This is what my improvisation is in relation to the fragments of notated melody on the computer. Gradually more and more description is added, and more and more redundancy is created.

One further point: thinking about my interest in Ehrenzweig's work on psychotherapy and the creative process (see http://dailyimprovisation.blogspot.co.uk/2017/11/ehrenzweig-on-objects-and-creativity.html), the notated score with its bare bones and large gaps is a means of creating what Ehrenzweig calls "dedifferentiation" in the psyche. It breaks things up and creates a framework for the drawing up of new forms and ideas from the oceanic primary process. Ehrenzweig talked about serialism doing this. This is the first time I have had the feeling that technology might actually be able to do it too. My experience with technology and musical creativity generally has been that it gets in the way because it reinforces the superego's "anal retentive" demand that things must be done in such and such a way.

I have not felt this with this particular exercise. Of course, it's not great music. But the process promises something...