Sunday 30 July 2017

Ecology

A bit more work to do here (I'm using Yumpu.com, which is great for hosting documents and keeping their versions up to date. It means that this chapter will magically improve over time!)

Thursday 27 July 2017

Beer and Illich on Institutional Change: Uncertainty at the heart of the system

Stafford Beer's "Platform for Change" is an extraordinary book which sets out diagrammatically to document the processes by which the world might move from pathological institutions, markets, exploitation and environmental destruction, to a viable world which lives within its means. The diagrams get more complex as the book goes on, culminating in a final diagram which is a bit daunting. However, there are things to notice. Within each of the boxes there is a smaller box at the bottom with a "U" in it: this is "Undecidability". I think it could equally be called "Uncertainty". It is also worth noting that around every heavy-type box (in bold) there is a lighter-type box which is connected to the "U" box, and which is labelled with things like "Metasystem", "conscience", "reform", and so on.

Beer's point in Platform for Change is that the way society manages its "undecidability", or uncertainty, causes pathology. This is clearest in his diagram of the difference between old institutions and new institutions.
What manages uncertainty in the pathological "old" institutions at the top of that diagram? The "Metalanguage of Reform": the drive for "restructuring", "privatisation", "outsourcing" and so on. And what does "structure" mean in the first place? It sits in the middle of the box: the hierarchical organisation of most institutions.

Feeding into the whole thing in the pathological institution is "Homo Faber" - the maker of increasingly powerful tools which dictate how people should live and drive people into increasing technocratisation. On the other side, we clearly see that this comes at the cost of the "Exploitable earth", with exploitable people and cost-benefit analysis. On the right-hand side at the top, Beer sees the "conservation" movement as the management of uncertainty about the exploitable earth with a metalanguage of "conscience", managed through the conservationist's discourse. Of course, this is a reaction to the pathology, but it also appears as part of the overall system of the problem.

What to do about it?

Uncertainty (or undecidability) has to be managed in a different way. In the lower part of the diagram, Beer imagines a different kind of institution which facilitates the coordination of uncertainty among the different people who engage with it. The Undecidability box is connected to a "Metalanguage of Metasystem" - a way of having a conversation about the way we have conversations. 

Technology works with this not as the continual pathological product of Homo Faber, who produces ever more powerful tools, but as an appropriate response to establishing synergy in the system. Feeding and monitoring it is "Homo Gubernator", whose actions are dedicated to maintaining viability, providing safeguards and watching over the eudemony in the system.

Of course, it all raises questions - but they're good questions. I've been struck by the similarity between Beer's thought and that of Ivan Illich in his Tools for Conviviality.

For Illich, the problem of the pathological institution (the top of the diagram) is the declaration of "regimes of scarcity": the need to maintain institutional structures in the face of environmental uncertainty, which often takes the form of increasing specialisation, educational certification, division between people in society, and the ever increasing power of tools. This is a positive feedback mechanism whereby increasingly powerful tools generate more uncertainty in the environment which entails a need for more institutional defence, more scarcity declarations, and so on. It is this pathological way of dealing with uncertainty which is the underlying mechanism of the appalling inequality which we are now experiencing. 

For Illich, education lies at the heart of the means to transform this into what he calls a "convivial society". The education system we have produces scarcity declarations about knowledge, and supports professionalisation which alienates people and creates division (we've seen this with populism). 

The solution to this is to invert education - to make knowledge and learning abundant rather than scarce, and to create the conditions for conviviality. Conviviality is an alternative way of managing uncertainty. Its diagrammatic representation is in the "New Institution" box at the bottom of Beer's diagram. Quite simply, conviviality is where each person manages their uncertainty by engaging directly with each other person. Intersubjectivity is the most powerful mechanism for dealing with uncertainty that we have. We do not have to create institutions to manage uncertainty, nor do we need to create ever more powerful tools.

Illich closes the system loop because he sees the limiting of tools as the critical factor in the establishment of a viable convivial society. This limiting is a politicising of technology: it is where a convivial society determines through dialogue what tools are needed, what should be limited, and how it should manage its resources. In effect it is a communitarian approach to managing the commons of education, environment, tools and people - very similar to that studied by Elinor Ostrom.

Educational technology is a critical component in doing this. We need an abundance of information and skill. We need open education and open resources for learning.

But the most important thing is to see that the route to viability (and the root of our current pathology) is understanding uncertainty. 

Thursday 20 July 2017

Risky Transactions


Wednesday 12 July 2017

Winograd and Flores on Computers and conversation

Winograd and Flores wrote this in 1984. Have things changed much?
Computers do not exist, in the sense of things possessing objective features and functions, outside of language. They are created in the conversations human beings engage in when they cope with and anticipate breakdown. Our central claim in this book is that the current theoretical discourse about computers is based on a misinterpretation of the nature of human cognition and language. Computers designed on the basis of this misconception provide only impoverished possibilities for modelling and enlarging the scope of human understanding. They are restricted to representing knowledge as the acquisition and manipulation of facts, and communication as the transferring of information. As a result, we are now witnessing a major breakdown in the design of computer technology - a breakdown that reveals the rationalistically oriented background of discourse in which our current understanding is embedded. 

[...] Computers are not only designed in language but are themselves equipment for language. They will not just reflect our understanding of language, but will at the same time create new possibilities for the speaking and listening that we do - for creating ourselves in language. (Understanding Computers and Cognition, p. 78)

Later on, Winograd and Flores defend their argument that computers are tools for keeping track of the commitments people make to each other by recording speech acts. They argue:

New computer-based communication technology can help anticipate and avoid breakdowns. It is impossible to completely avoid breakdowns by design, since it is in the nature of any design process that it must select a finite set of anticipations from the situation. But we can partially anticipate situations where breakdowns are likely to occur (by noting their recurrence) and we can provide people with the tools and procedures they need to cope with them. Moreover, new conversational networks can be designed that give the organisation the ability to recognise and realise new possibilities.   (p158)

I'm curious about this because it resonates with many of the aims of big data today. Winograd and Flores were anti-AI, but clearly the mass storage of speech acts does serve to reveal patterns of recurrence and breakdown which do provide anticipatory intelligence (which is what Google Now does).
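This anticipation-by-recurrence is simple to sketch. The example below is a minimal illustration of the idea rather than anything Winograd and Flores propose: the log entries and category names are invented, and the point is only that tallying recorded breakdowns already yields a crude anticipatory signal.

```python
# A minimal sketch of "anticipating breakdowns by noting their recurrence":
# tally logged breakdown categories and flag the ones that keep coming back.
# The log entries and category names here are invented for illustration.
from collections import Counter

breakdown_log = [
    {"category": "login_failure", "system": "e-portfolio"},
    {"category": "missing_grade", "system": "VLE"},
    {"category": "login_failure", "system": "e-portfolio"},
    {"category": "login_failure", "system": "VLE"},
    {"category": "missing_grade", "system": "VLE"},
]

counts = Counter(entry["category"] for entry in breakdown_log)

# Anything that recurs is a candidate for anticipation: design tools and
# procedures around it before it breaks down again.
for category, n in counts.most_common():
    if n > 1:
        print(f"Recurring breakdown: {category} ({n} occurrences)")
```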

I think the real issue concerns a deeper understanding of language and conversation, and particularly the inter-subjective nature of conversation - that is, the con-versare nature of it (dancing). 

Tuesday 11 July 2017

Communicating Uncertainty


Saturday 8 July 2017

Interoperability and the Attenuation of Technological Possibility: Towards Socially Responsible Hacking?

I owe at least 10 years of my career directly or indirectly to generous funding from JISC in the UK and the EU Commission. The underpinning rationale which attracted this research money was interoperability in educational technology. It was the presence of the Centre for Educational Technology and Interoperability Standards (CETIS) at the University of Bolton which created the conditions for engagement in a wide range of projects. The University of Bolton, of all places, had the greatest concentration of technical experts on e-learning in the world (something continually reinforced to me as I meet colleagues from overseas: Bolton? You were a world-leader!).

Most of the project funding opportunities have now gone: JISC survives in a very different form, on a mission to keep itself going on a commercial footing which has become problematic; the EU closed its Technology Enhanced Learning strand a couple of years ago (hardly surprising, since there were rather too many very expensive projects which delivered little - even for the EU!); and CETIS survives as an independent Limited Liability Partnership (LLP), albeit in a role of general IT consultancy for education rather than with a focused mission to foster interoperability. The international agency for interoperability standards in education, IMS, seems largely to have ceded the debate to the big commercial players like Blackboard, who talk the language of interoperability as a sales pitch but have little interest in making it happen.

Now that I am settled elsewhere (and, I'm pleased to say, soon to be joined by a former CETIS colleague), it seems like a good time to think about interoperability again. In my current role, interoperability is a huge issue. It is because of interoperability problems that my faculty (just the faculty!) runs four different e-portfolio systems. It is because of a lack of interoperability that the aggregation and analysis of data from all our e-learning platforms is practically impossible (unless you do something clever with web automation and scraping, which is my current obsession). It is because of interoperability problems that individual parts of the faculty will seek new software solutions to problems which ought to merely require front-end adjustments to existing systems. And interoperability problems, coupled with pathological data-security worries, create barriers to systems innovation and integration. Eventually, this becomes unsustainable.
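For what it's worth, the web-automation workaround mentioned above amounts to little more than the following - a minimal sketch, assuming a platform with no usable API; the URL, form field names and CSS selector are hypothetical placeholders, and every system needs its own variant:

```python
# A minimal sketch of scraping data out of a VLE that offers no usable API:
# log in with a session, fetch a report page, and parse the HTML table.
# The URL, form field names and CSS selector are hypothetical placeholders.
import csv
import requests
from bs4 import BeautifulSoup

BASE = "https://vle.example.ac.uk"            # hypothetical platform address

session = requests.Session()
session.post(f"{BASE}/login", data={          # field names vary by platform
    "username": "staff_user",
    "password": "secret",
})

html = session.get(f"{BASE}/reports/grades").text
soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.select("table.gradereport tr")[1:]:    # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(cells)

# Dump the aggregated data somewhere the rest of the analysis can reach it.
with open("grades.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```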

So given all the effort that went into interoperability (my first JISC project was an investigation of interoperable web services in E-portfolio in 2004 - the project concluded that the available interoperability models didn't work and that something should be done about it), how have we got here?

Any new technology creates new possibilities for action. The ways of acting with a new tool may be very different from the ways of acting with existing tools. This means that if there is overlap in the functionality of one tool with another, users can be left with a bewildering choice: do I use X to do a, b and c, or do I use Y to do a, c and z? The effect of new technologies is always to increase the amount of uncertainty. The question is how institutions should manage this uncertainty.

CETIS was a government-funded institutional attempt to manage the uncertainty caused by technology. It served as an expert service for JISC, identifying areas for innovation and recommending where calls for funding should be focused. CETIS is no longer funded by government because government believes the uncertainties created by technology in education can be managed within institutions - so my university ends up with four e-portfolio systems in one faculty (we are not alone). This is clearly bad for institutions, but not bad in terms of a libertarian philosophy of supporting competition between multiple providers of systems. Having said this, the interoperability battle was lost even when CETIS was flourishing. The dream of creating an educational equivalent of MIDI (which remains the golden child of systems interoperability) quickly disappeared as committees set about developing complex specifications for e-portfolios (LEAP, LEAP2, LEAP2a - see http://cetis.org.uk/leap2/terms/index), the packaging of e-learning content (SCORM, IMS Content Packaging), the sequencing of learning activities (IMS Learning Design, IMS Simple Sequencing) and, more recently, learning analytics (xAPI).

All of this activity is bureaucratic. Like all bureaucratic processes, the ultimate result is a slowing down of innovation (importantly, this is NOT what happened with MIDI). Whilst technology creates new possibilities, it also creates new uncertainties, and bureaucratic processes act as a kind of weir to stem the flow of uncertainties. Institutions hate uncertainty. In the standards world, this is achieved by agreeing different-shaped boxes into which different things can be placed. Sometimes the boxes are useful: we can ask a vendor of an e-portfolio system, for example, whether it supports LEAP2a. They might say "yes", meaning that there is an import routine which will suck in data from another system. However, much more important is the question "Does it have an API?" - i.e. can we interact with the data without going through the interface and do new things which you haven't thought about yet? The answer to this is almost always "No!". The API problem has become apparent with social media services too: APIs have become increasingly difficult to engage with, and less forthcoming in the data they provide. This is for a simple reason: each of the clever things you might want to do with the data is something the company wants to provide as a new "premium service".
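The import-routine scenario is worth seeing concretely. LEAP2a is specified as Atom-based XML, so even without a vendor API it is sometimes possible to read an export directly - a minimal sketch, assuming a conformant export file (the file name is a hypothetical placeholder, and real exports differ between systems):

```python
# A minimal sketch of reading a LEAP2a export directly, on the assumption that
# the export is Atom-based XML as the LEAP2a specification describes.
# The file name is a hypothetical placeholder; real exports vary by system.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"               # standard Atom namespace

feed = ET.parse("portfolio_export.xml").getroot()    # hypothetical export file

for entry in feed.findall(f"{ATOM}entry"):
    title = entry.findtext(f"{ATOM}title", default="(untitled)")
    updated = entry.findtext(f"{ATOM}updated", default="")
    print(updated, title)
```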

An alternative to the institutional bureaucratic approach to the interoperability problem would manage the uncertainties created by technology in a different way: embracing new uncertainties rather than attenuating them, and creating situations within institutions where processes of technical exploration and play are supported by a wide range of stakeholders. One of the problems with current institutionally attenuative approaches to technology is that the potential of technology is under-explored. This is partly because we are bad at quantifying the new possibilities of any new tool. However, in working with most institutional tools, we quickly hit barriers which dictate "We can't do that", and that's the end of the story. Yet there are usually ways of overcoming most technical problems. This is what might be called the "Socially responsible hacking" approach to interoperability. With the failure of bureaucratic interoperability approaches, this may be the most productive way forwards.

Socially Responsible Hacking addresses the uncertainty of new technology in dialogue among the various stakeholders in education: programmers who see new ways of dealing with new and existing tools, teachers who seek new ways of organising learning, managers who seek new opportunities for institutional development, learners who seek new ways of overcoming the traditional constraints of institutions, and society within which educational institutions increasingly operate as something apart, rather than as an integral component. 

Wednesday 5 July 2017

Science


Sunday 2 July 2017

Saturday 1 July 2017

Ivory Towers and the Grenfell Tower: The problem with Evidence

The Grenfell Tower fire represents a collapse of trust in expertise and evidence, and will bring about a reawakening of scepticism. Newsnight's report on "How flammable cladding gets approved" (http://www.bbc.co.uk/news/uk-40465399) raises questions about the role of evidence well beyond fire safety. In health, education, welfare, economics and housing policy, evidence is the principal aid to decision-making. What Enid Mumford calls "dangerous decisions" are supported by studies which demonstrate x or y to be the best course of action. The effect of these studies is to attenuate the range of options available to be decided between. Of course, in that attenuation, many of the competing descriptions of a phenomenon or subject are simplified: many descriptions are left out, and some voices are silenced. Usually, the voices that are silenced are those "on the edge": the poor, immigrants and the occasional "mad professor". From Galileo to Linus Pauling, history tells us that these people are often right.

Understanding "evidence" as "attenuation" helps us to see how easily "evidence-based policy" can become "policy-based evidence". Evidence can be bent to support the will of the powerful. The manifestations of this exist at all levels - from the use of econometrics to produce evidence to support austerity to the abuse of educational theory in support of educational interventions (which so many educational researchers, including me, are guilty of). But it helps academics to get published, to raise our status in the crazy academic game - and, once established in the sphere of the University, the habit sticks. Effective decision-making is intrinsic to effective organisation. If organisational pathology creeps in, decision-making within a pathological organisation will be constrained in ways which obscure real existent problems.

The deeper problems concern academia's and society's allergy to uncertainty. We hold to an enlightenment model of scientific inquiry, with closed-system experiments and the identification of causal relations through the production of event regularities. Too often we pretend that the open systems with which we engage are closed systems whose event regularities are no longer physical events but statistical patterns. Stafford Beer's joke that "70% of car accidents are caused by people who are sober" - as if it followed that we should all drink and drive - highlights the dangers of any statistical measure: it is an attenuation of descriptions, and often an arbitrary one at that.
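Beer's joke is a base-rate problem, and a couple of lines of arithmetic show why the headline figure misleads. The numbers below are invented purely for illustration:

```python
# Invented numbers, purely to illustrate the base-rate point behind Beer's joke:
# if most drivers are sober, sober drivers can account for most accidents even
# though the accident *rate* among drunk drivers is far higher.
sober_drivers, drunk_drivers = 95_000, 5_000
sober_accidents, drunk_accidents = 700, 300        # "70% caused by the sober"

share_sober = sober_accidents / (sober_accidents + drunk_accidents)
rate_sober = sober_accidents / sober_drivers
rate_drunk = drunk_accidents / drunk_drivers

print(f"Share of accidents involving sober drivers: {share_sober:.0%}")  # 70%
print(f"Accident rate among sober drivers: {rate_sober:.2%}")            # 0.74%
print(f"Accident rate among drunk drivers: {rate_drunk:.2%}")            # 6.00%
```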

The computer has changed the way we do science, and in almost all areas of inquiry from the humanities to physics, probabilities are what we look at. These are maps of uncertainty, not pointers to a likely successful outcome, or a statistically proven relation between an independent variable and a probability distribution. What is an independent variable, after all? It is a single description chosen out of many. But its very existence is shaped by the many other descriptions which are excluded by its isolation. And we don't seem to care about it! I review endless depressing papers on statistical approaches to education and technology, and I see these assertions being made without the slightest whiff of doubt - simply because that is how so many other papers which are published do it. I reject them all (although always gently - I hate horrible reviews - but always inviting authors to think harder about what they are doing).

Uncertainty is very difficult (probably impossible) to communicate through the medium of the academic journal article. The journal article format was devised in 1665 for an enlightenment science which is radically different from our own. Of course, in its time, the journal was radical. Printing was only just opening up new ways of conducting and communicating science; it was doing to the academic establishment what it had done to the Catholic church a century before. Enlightenment scholars embraced the latest technology to harness their radical new practices.

We should be doing the same. The experiments on building cladding are easily demonstrable on YouTube. Equally, uncertainties about scientific findings can be expressed in rich ways using new media which are practically impossible in the journal. The scientists should learn from the artists. Furthermore, technology provides the means to democratise the making of descriptions of events. No longer is the description of an event the preserve of those with the linguistic skill to convey a compelling account in print. The smartphone levels the playing field of testimony.

Our decisions would be better if we became accustomed to living with uncertainty, and more comfortable living with a plurality of descriptions. The idea of "evidence" cuts against this. We - whether in government or academia - do not need to attenuate descriptions. Uncertainties find their own equilibrium. Our new media provide the space where this can occur. Universities, as the home of scholarly practice in science, should be working to facilitate this.