Sunday 28 February 2016

Learning Analytics and Social Relations: A reorientation

Learning Analytics has become very popular in recent years. With the increasing use of online courses and resources in education, it was perhaps inevitable that clicks on resources and words in submissions would be counted and aggregated in the hope of revealing some mysterious intentional forces lurking behind learner behaviour. Such counting might even help prevent learners from dropping out of courses, or help teachers design better ones.

Clearly, with online content, clicks and words do become countable. And with this countability comes a range of statistical resources which can detect average trends, probabilities, and so on (although an average learner is not easy to come across in reality!). In practice, however, such analysis often amounts to little more than "students who never log in fail", just as students who never turn up for class tend not to do very well either. Most teachers are likely to spot the trend quicker than any algorithm.
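
To see how little this kind of counting typically reveals, here is a toy sketch in Python (the records are entirely invented): bucketing hypothetical students by login activity simply rediscovers what any teacher already knows.

```python
# Hypothetical records, invented for illustration: (login count, outcome).
students = [(0, "fail"), (1, "fail"), (12, "pass"), (30, "pass"),
            (2, "fail"), (25, "pass"), (0, "fail"), (18, "pass")]

# The kind of "average trend" such analysis detects: pass rate by activity band.
for label, in_band in [("rarely or never logs in", lambda n: n < 3),
                       ("logs in regularly", lambda n: n >= 3)]:
    outcomes = [outcome for logins, outcome in students if in_band(logins)]
    rate = outcomes.count("pass") / len(outcomes)
    print(f"{label}: pass rate {rate:.0%}")
```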

Part of the ambition of Learning Analytics is to identify causal relations for patterns of engagement. For example, by categorising particular pedagogical practices, those practices which lose most of the students can be identified and eliminated. Once again, however, this is neither rocket science nor terribly reliable. Lectures can be brilliant or terrible, learning activities hit and miss. We might say that, with a good teacher, the learners learn most from the hits, and the teacher learns most from the misses.

Part of the problem here is that it is unclear exactly what Learning Analytics is analysing. More fundamentally, Learning Analytics cannot analyse learning, because nobody can see learning: we do not possess the capacity to look into each other's heads. We can only speculate about learning processes. To claim any more is to fall victim to an ideology.

If Learning Analytics does not analyse learning, what does it analyse? I believe the answer to this question can sharpen our use of the analytical tools at our disposal.

Learning Analytics analyses the constraints within which humans organise their learning. 

What does that mean? It means that clicks on a web page are indicators of constraint. Now, of course, constraint has to be defined. In Information Theory, constraint, or redundancy, is the background of information. Information, for its part, is a measurement of the surprisingness of a sequence of events. Those events may be words in a sentence, letters in a word, notes in a piece of music or patterns on a carpet. Words in a sentence have the degree of surprisingness they have because of the structures into which the words have to fit: the grammar. Grammar is a significant constraint of language. Clicks on a web page have a pattern of surprisingness too: that pattern will partly be determined by the design of the web page, which may itself be thought of as a kind of grammar. However, the design isn't separable from the content of the page, and other constraints regarding content also affect the patterns of clicks.
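
To make this concrete, here is a minimal sketch in Python (the click log and page names are invented, and this illustrates Shannon's standard measures rather than any established Learning Analytics tool). Information is measured as entropy; redundancy, Shannon's measure of constraint, is the gap between the entropy a sequence actually exhibits and the maximum it could exhibit.

```python
from collections import Counter
from math import log2

def entropy_and_redundancy(events):
    """Shannon entropy of an event sequence (bits per event), and its
    redundancy: the proportion of maximum entropy 'used up' by constraint."""
    counts = Counter(events)
    total = len(events)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    h_max = log2(len(counts))  # entropy if all observed events were equiprobable
    redundancy = 1 - h / h_max if h_max > 0 else 1.0
    return h, redundancy

# Invented click log: page identifiers in the order a learner visited them.
clicks = ["home", "forum", "forum", "video", "forum",
          "forum", "home", "forum", "video", "forum"]
h, redundancy = entropy_and_redundancy(clicks)
print(f"entropy = {h:.2f} bits/click; redundancy (constraint) = {redundancy:.0%}")
```

The more patterned the log, the higher the redundancy: a sign that design, content or social constraints, rather than the learner's free choices, are doing most of the work.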

Constraints are difficult to separate. The design of a tool is hard to separate from the social context within which the tool is to be used (although crafty tech companies use clever design to trick people into ignoring other constraints - for example in pernicious legal agreements which we ought to spend more time reading before we click 'agree'). In an online course, the dominant constraints are social, not design-oriented. So counting clicks or counting words is an indication of social constraint.

In education, social constraints may be managed. Indeed, this is fundamentally what teachers do when they organise activities, or find different ways of managing conversations. It is unfortunate that with much current Learning Analytics, the search for causal relations has led to an over-focus on where there is data, and a general ignorance of where there is little data. The problem is that where there is little data, there is the most constraint! Whilst in the face-to-face setting teachers will always work hard with the taciturn learner to work out where the blockages are, online it is much easier to ignore such learners and to judge the 'success' of a pedagogy by looking only at where the data is. This is probably the single biggest mistake of online learning.
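
Mechanically, turning the analytics around is trivial. A toy sketch (learner names and event counts invented): instead of ranking learners by how much data they produce, start at the bottom of the list, where the absence of data signals the most constraint.

```python
# Toy sketch, names and counts invented: per-learner event totals from a course log.
activity = {"ana": 142, "ben": 97, "carla": 3, "dev": 0, "ema": 55}

# Conventional analytics celebrates the top of this list; a constraint-oriented
# view starts at the bottom, where the absence of data matters most.
for learner in sorted(activity, key=activity.get):
    print(learner, activity[learner])
```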

If analytics were turned to analysing constraint, and questions were asked about the constraints which learners encounter, then I'm sure we would have far better online education than we do. There are ways of coordinating conversations online where constraints can be identified and moved if necessary. In order to do this, we need a much more precise and technical focus on what analytics does and does not mean. George Siemens recently asserted that the biggest problem with the current state of MOOCs is the inability to understand emotion.

I think the question about emotion wouldn't appear so surprising if our analytics were constraint-oriented rather than cause-oriented. Emotions are always the biggest constraints of all.
