Wednesday 6 July 2011

MOOCs, Methodology and Nuremberg Funnels

One of the curious things about the recent MOOC debate between Stephen Downes, George Siemens and David Wiley is that it has erupted around an issue of epistemology: whether there is or isn't knowledge 'transfer'. I don't get the impression that any of the parties is advocating a Nuremberg Funnel - that oldest emblem of crude 'knowledge transfer' - so I am left wondering what people are actually arguing about.

The deeper issue seems to be a debate about whether MOOCs, with their implicit constructivist foundations, 'work'. But that is a discussion we could have about anything in e-learning: nothing really 'works' in e-learning... or rather, everything 'sort-of' works (or doesn't, depending on your point of view). The battle lines appear to have been drawn not around the sort-of working (or not) of MOOCs, but around the epistemological foundations of their ideology.

That in itself seems strange, and somewhat illogical. But I think it points to a deeper methodological problem in e-learning which makes it very difficult to grasp the nettle of things 'sort-of' working. That may be partly because the methodologies currently used for evaluation are generally poor. But I think it has more to do with the fact that evaluation is, inevitably, at some level political. A MOOC is a political proposition as well as a technological and educational configuration. Behind it sit a whole load of values, mostly those of the people who've invested time and effort in the idea. How are they to defend the values they believe in? They have to prove the causal efficacy of their underlying theories. That's how we get to "there is no knowledge transfer".

But saying "there is no knowledge transfer" is not a proof of causal efficacy! Nor are the various statements and extra distinctions (neural whatevers!) that are brought in to back it up. None of this is defensible (and neither, indeed, is the opposing position).

Standing back from this, I think we really need to consider what 'sort-of' working means, and what we can learn from all the 'sort-of' working interventions (with their associated values) in learning technology. This requires a different type of methodology and a different philosophical underpinning. It's probably not the only approach, but my own work has used Pawson and Tilley's 'Realistic Evaluation' (see http://www.evidence-basedmanagement.com/research_practice/articles/nick_tilley.pdf), with its underpinning in Critical Realist philosophy. This approach asks a simple question of 'sort-of' working things: "given that we see all of these things happening (it works here, it doesn't work there, some like it, some don't, and so on), what must the world be like?". That is an appeal to what philosophers call a "transcendental argument". Transcendental arguments in Realistic Evaluation take the form of descriptions of 'causal mechanisms', and the idea is to think of lots of possible mechanisms and test them against what we actually observe.
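
To make the mechanism-testing idea a little more concrete, here is a minimal sketch in Python. It is not Realistic Evaluation itself, just an illustration of its logic: each candidate mechanism is treated as a rule that predicts an outcome from a context, and we ask how much of the messy 'sort-of' working pattern each one can account for. Everything in it is invented for the example - the Observation and Mechanism classes, the 'self_directed' and 'peer_network' context features, and the toy observations themselves.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Observation:
        """One intervention in one context, and the outcome we actually saw."""
        context: Dict[str, bool]
        outcome: str

    @dataclass
    class Mechanism:
        """A candidate causal mechanism: a rule predicting an outcome from a context."""
        name: str
        predict: Callable[[Dict[str, bool]], str]

    def fit(mechanism: Mechanism, observations: List[Observation]) -> float:
        """Fraction of observed context/outcome pairs the mechanism accounts for."""
        hits = sum(1 for o in observations if mechanism.predict(o.context) == o.outcome)
        return hits / len(observations)

    # Invented observations: the same intervention 'sort-of' works in some contexts.
    observations = [
        Observation({"self_directed": True,  "peer_network": True},  "works"),
        Observation({"self_directed": True,  "peer_network": False}, "sort-of works"),
        Observation({"self_directed": False, "peer_network": True},  "sort-of works"),
        Observation({"self_directed": False, "peer_network": False}, "doesn't work"),
    ]

    # Two rival (deliberately crude) candidate mechanisms.
    mechanisms = [
        Mechanism("learner autonomy alone",
                  lambda c: "works" if c["self_directed"] else "doesn't work"),
        Mechanism("autonomy plus peer network",
                  lambda c: "works" if c["self_directed"] and c["peer_network"]
                  else "doesn't work" if not (c["self_directed"] or c["peer_network"])
                  else "sort-of works"),
    ]

    for m in mechanisms:
        print(f"{m.name}: accounts for {fit(m, observations):.0%} of the observations")

The mechanism that accounts for more of the pattern is then the one worth probing in contexts we haven't tried yet - which is where predictive power, below, comes in.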

I like this because it allows me to consider lots of possible theories, some of which fit better than others. The value lies in the fact that good theories not only have explanatory power but also show predictive power. And predictive power gives us better control... which may be the best we can ever hope for in our efforts to grapple with learning technology!
