Saturday 31 October 2015

What is Empiricism? Back to Hume...

The distinction between a-priori and a-posteriori is one of those polar distinctions which philosophy teachers love as they lead students into the murky depths of thought. Despite the terrible brain-aching tangles that philosophy presents, at least we can rely on the difference between propositions that derive from experience and propositions whose truth might be determined analytically or intuitively. A-posteriori justifications of truth are empirical. But what, exactly, does that mean?

Where is the boundary between empirical experience and intuitive understanding? This is why the philosophy of science occupies such a critical place in philosophy more generally - because unless we can grasp the relationship between experience and knowledge, the grounds for further philosophical judgement are fragile. David Hume's work, which focused directly on this issue, occupies the central turning point in modern Enlightenment thought. It is through the critique of Hume that modern-day realists have established their own new philosophies of science; it is through Hume's insights that Kant revolutionised philosophy by being half as sceptical as Hume, arguing for a 'natural necessity' - the supposition that there are causal regularities in nature - which Hume rejected; it is Hume who inadvertently codified a scientific empirical practice which gradually withdrew from his social-constructivist, reality-sceptical stance and moved towards objectivism and positivism; it is still Hume to whom others, including the 'speculative realists', turn and wonder if he was right all along; and it is Hume who ought to be more widely known among cyberneticians in their efforts to establish a coherent sceptical scientific approach.

As far as Hume was concerned, an experiment was a practice of acquiring knowledge by establishing the material conditions for the production of regular successions of events, which could then be discussed amongst scientists. Scientific knowledge was discursively produced, but it required some material grounding so as to coordinate the discussion. Hume did not consider the material grounding of scientists' bodies, or the factors bearing upon their material practices inherent in their pre-established ideas, or custom and practice in the construction of apparatus. All of these questions have been raised more recently by science studies scholars like Karen Barad. But what Hume did believe was that the actual nature of the world was essentially unknowable; that there was no reason why there were actual regularities in nature (his famous scepticism about the movement of billiard balls), whatever scientists might agree on through their experiments. What mattered was the discussion, and in order for the discussion to give rise to scientific judgement, it had to be coherently grounded.

The engagement with the problem of bodies, matter and ideas now throws open this fundamental problem of the coherence of discourse. Modern science risks intoxication with the results of its instruments, and those instrumental results increasingly frame society. Technologies change society, and if there is a lack of critical engagement with the computer technologies that perform our analyses, we will enslave ourselves to technology. This is a warning to the neuroscientists believing the coloured lights of brain-maps as much as to the big-data scientists staring at node-clusters, or the genetic engineers putting too much faith in the genome. Too much science has become instrumental rather than intelligent. There is of course a critical discourse which points these things out - but crucially, there is no connection between the scientists working with their instruments and the critics warning of the dangers. There urgently needs to be such a connection if knowledge is not to be lost.

The problem is that we have not yet found a way of coordinating a critical debate about science, instruments, technology and experiments alongside a sociological and psychological debate. The discourse gets fragmented, people talk at cross-purposes, identities are challenged and egos do their worst. This is where Hume's attempts to ground scientific discourse in the 18th century may help us as we try to ground the scientific discourse which our science needs today. We need to find a new way of grounding the way we talk about knowledge. In fact, in the world of neuro-imaging, big-data and genetics, there is an opportunity to bring a deeper coherence. At the root of it is the study of information.

The astonishing thing is that Hume was almost there. He would have recognised today's talk of information as the 18th-century talk of probability. Hume's critique of probability remains important, and later critics like Keynes who engaged with it can help us make sense of today's information science.

Perhaps more importantly, the fundamental empirical domain where all these issues collide is in education.

Friday 30 October 2015

Three Information Theoretical Approaches and their contribution to a Reflexive Science

In my previous posts, I have argued that different second-order cybernetics approaches vary according to fundamental theoretical orientations, whilst superficially appearing to be allied in their adherence to notions of reflexivity and observation [this is really a paper which I'm gradually chewing my way through!]. The differences between these approaches create conditions where coordinating discussions among second-order cyberneticians is deeply challenging, and it is not uncommon for proponents of one theoretical perspective to accuse proponents of another of objectivism (which both ostensibly oppose) or universalism. The root of this problem, I have argued, lies partly in a failure to articulate the fundamental differences between approaches, but most importantly in the failure to critically articulate those principles which all second-order theories have in common, amongst which the principle of induction plays a central role. In further dissecting induction with regard to second-order cybernetics, I have argued, following Hume and Keynes, that the issue of analogies - things being determined to be the same, thus providing the conditions for adaptation - is one which can be used to differentiate approaches to second-order cybernetics. Without a grounded account of what counts as the 'same', second-order cybernetic discourse throws open the possibility of misunderstanding and incoherence.

Information Theory presents a quantifiable model which relates perceived events to an idealised inductive process. The application of information theoretic approaches demands that the issue of analogy is explicitly addressed: the 'sameness' of events must be determined in order for one event to be compared to another and for its probability (and consequently its entropy) to be calculated. Whilst the use of information theory may (and frequently does) slip into objectivism, the opportunity it presents is for a coordinated critical engagement with those deeper issues which underlie second-order cybernetic theory. At the heart of this critical engagement is a debate regarding the distinguishing of events which are 'counted' (for example, letters in a message, the occurrence of subject keywords in a discourse, or the physical measurements of respiration in biological organisms). Reflection concerning the identification and agreement of analogies entails a process of participation with the phenomena under investigation, as well as reflection on and analysis of the discourse through which agreement about the analogies established in that phenomenon is produced. At a deeper level, the extent to which any information theoretical analysis could be interpreted in an objectivist way, the problems inherent in Shannon's information theory, and other aspects of self-criticism present a further level of reflection as to how research results may be presented, interpreted, misinterpreted, and so on. But at the root of it all is the identification of analogies.
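To make this concrete, here is a minimal sketch in Python (the message is invented purely for illustration): counting the letters of the same message under two different judgements of 'sameness' yields two different entropies.

  from collections import Counter
  from math import log2

  def entropy(events):
      # Shannon's H = -sum p(x) log2 p(x), computed over whatever
      # we have decided to count as 'the same' event
      counts = Counter(events)
      total = sum(counts.values())
      return -sum((n / total) * log2(n / total) for n in counts.values())

  message = "Second-order Cybernetics"
  print(entropy(message))          # 'S' and 's' counted as different events
  print(entropy(message.lower()))  # 'S' and 's' judged to be analogous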

Among the empirical applications of information theory which are explicitly aware of the cybernetic and reflexive nature of information theory's application, the statistical ecology of Ulanowicz, the discursive evolutionary economics of Leydesdorff and the synergetics of Haken present three examples whose contribution to a more coherent and stable second-order cybernetic discourse can be established.

Ulanowicz's statistical ecology uses information theory to study the relations between organisms as components of ecologies. The information component in this work concerns measurements of respiration and consumption in ecological systems. Drawing on established work on food chains, and cognisant of economic models such as Leontief's 'input-output' models, Ulanowicz has established ways in which the health of ecosystems may be characterised by studying the 'mutual information' between their components, finding the measurement of 'average mutual information' particularly useful. Calculations produced through these statistical techniques have been compared to the course of actual events, and a good deal of evidence suggests the effectiveness of the calculations.
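For illustration, here is a rough sketch of the calculation in Python. The flow matrix is invented for the purpose; Ulanowicz's own analyses work from measured flows of energy and material.

  import numpy as np

  # Invented flow matrix T[i][j]: material passing from ecosystem
  # compartment i to compartment j
  T = np.array([[0.0, 8.0, 2.0],
                [1.0, 0.0, 6.0],
                [4.0, 1.0, 0.0]])

  p = T / T.sum()   # probability that a unit of flow runs from i to j
  outer = p.sum(axis=1, keepdims=True) @ p.sum(axis=0, keepdims=True)
  nz = p > 0

  # Average mutual information: how much the source of a unit of flow
  # tells us about its destination
  ami = (p[nz] * np.log2(p[nz] / outer[nz])).sum()
  print(ami)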

Ulanowicz has, however, also scrutinised Shannon's equations themselves. In particular, he has engaged with the criticism that Shannon's measure of uncertainty (H) fails to distinguish between the novelty of events (which, by virtue of being low probability, have high uncertainty) and those events which confirm what already exists: in other words, those events which are analogous to existing events. Whilst building on his existing empirical work, Ulanowicz has sought to refine Shannon's equations so as to account for the essentially relational nature of the things that Shannon measures. In this regard, Ulanowicz has distinguished between the mutual information in the system (a measure of analogies of events) and the 'flexibility' of a system: a measure of the extent to which the system proves itself to be adaptable to future shocks. Addressing the contrast between mutual information and novelty, Ulanowicz has suggested alternative ways of measuring novelty by using Shannon's concept of redundancy.

Echoing earlier arguments by von Foerster, Ulanowicz argues that Shannon's redundancy measure may be more significant than his measure of uncertainty. The statistical measurements involving mutual information have provided grounds for articulating the need for a re-evaluation of the information uncertainty measures, and with this have stimulated further debate and development among statistical ecologists. At the heart of Ulanowicz's thinking is the connection between constraint and redundancy, and the possibility that a calculus of constraints presents a way of thinking about complex ecological phenomena which overcomes some of the deep problems of multivariate analysis. If patterns of growth are seen as flexible adaptive responses which emerge within constraints, and if those constraints may be measured using statistical tools, then a different causal orientation between variable factors and likely events may be established.
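A sketch of this decomposition, continuing the invented example above: Shannon's joint uncertainty H splits into the mutual constraint (the average mutual information) plus a residue which, on Ulanowicz's reading, indexes the system's flexibility or reserve.

  import numpy as np

  T = np.array([[0.0, 8.0, 2.0],     # the same invented flow matrix
                [1.0, 0.0, 6.0],
                [4.0, 1.0, 0.0]])
  p = T / T.sum()
  nz = p > 0
  outer = p.sum(axis=1, keepdims=True) @ p.sum(axis=0, keepdims=True)

  ami = (p[nz] * np.log2(p[nz] / outer[nz])).sum()   # constraint
  H = -(p[nz] * np.log2(p[nz])).sum()                # total uncertainty
  print(H - ami)   # the uncommitted capacity: redundancy/flexibility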

Ulanowicz's ecological metaphor explicitly relates itself to Gregory Bateson's epistemology. Ulanowicz argues that his statistical project is a realisation of Bateson's project to articulate an 'ecology of mind': in this way, the statistical evidence of ecosystems can also provide evidence for ecological relationships in observing systems.

Leydesdorff's Triple Helix, like Ulanowicz's statistical ecology, uses Shannon's equations as a way of delving into an empirical domain - in this case, the study of discourse. Following Luhmann's second-order cybernetic theory, Leydesdorff argues for the possibility of a calculus of 'meaning' by studying the observable uncertainty within discourses. As with Ulanowicz, the principal focus of this has been on mutual information between discourses in different domains. Drawing on Luhmann's identification of different discourses, Leydesdorff has layered a quantitative component onto the theory, applying it to innovation activities in the economy. The essential argument is that mutual information dynamics within discourses imply deeper reflexive processes in the communication system. Like Ulanowicz, Leydesdorff suggests two measures for capturing this: on the one hand, the measurement of mutual information shows the coherence between discourses, whilst the measurement of mutual constraint, or redundancy, demonstrates the possible flexibility in the system. Following Ulanowicz, Leydesdorff identifies autocatalytic processes which generate redundancies within the discourse.

As in Ulanowicz's work, measurements of mutual redundancy introduce the possibility that multivariate analysis of complex interactions may be done through a simple additive calculation, avoiding the necessity to calculate (or guess) the causal power of individual variables. Calculations of mutual redundancy and mutual information together produce a rich picture of communication dynamics. Shannon's equations for mutual information produce fluctuating signed values where there are more than two domains of interaction; calculations for mutual redundancy always produce a positive value irrespective of the number of dimensions. The fluctuating sign of mutual information is, Leydesdorff argues, an indicator of the generation of hidden (redundant) options in a discourse.
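This sign behaviour is easy to exhibit numerically. The sketch below, with invented joint distributions standing in for observed co-occurrences of keywords across three 'discourses', computes the three-way mutual information and shows its fluctuating sign; Leydesdorff's mutual redundancy calculation is designed precisely to avoid this instability.

  import numpy as np

  def H(p):
      # Shannon entropy of a (joint) distribution
      p = p[p > 0]
      return -(p * np.log2(p)).sum()

  rng = np.random.default_rng(0)
  for _ in range(5):
      # an invented joint distribution over three binary 'discourses'
      p = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
      Hx, Hy, Hz = H(p.sum((1, 2))), H(p.sum((0, 2))), H(p.sum((0, 1)))
      Hxy, Hxz, Hyz = H(p.sum(2)), H(p.sum(1)), H(p.sum(0))
      # three-way mutual information: its sign fluctuates from case to case
      print(Hx + Hy + Hz - Hxy - Hxz - Hyz + H(p))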

Whilst the calculations of the Triple Helix have gained traction within a section of evolutionary economics, their application has tended to be econometric, with the techniques seen as a new kind of economic measurement. As with statistical ecology, objectivism potentially remains a problem. However, in Leydesdorff's empirical work there is a co-evolution of theory with empirical results. Whilst the Triple Helix is closely tied to economics, and in particular econometrics, the articulation of deep cybernetic theory about communication, and the inspection of that theory in the light of attempts to measure discourses, make the empirical investigative part of the Triple Helix a driver for further development in second-order cybernetic theory. Most impressive about the Triple Helix is the fact that despite the apparent shallowness of measuring the co-occurrence of key terms in different discourses, convincing arguments and comparative analyses can be made about the specific dynamics of discourses, generating hypotheses which can then be tested against particular interventions in policy. The reflexive aspect of this activity concerns the deeper identification of the scope of what is claimed by the Triple Helix. For example, Triple Helix analysis shows Japan to be one of the most innovative economies in terms of the discourse between universities, government and industry. Does this mean that the Japanese economy is one of the most successful? What would that mean?

The Triple Helix deals with the problem of double-analogy in second-order cybernetics by using information theory to examine the talk between scientists, government and industry. Ultimately, analogies are detected between words which are published in scientific journals and which can be inspected in a relatively objective way. However, the analogies that it highlights can be challenged in various ways: as (for example) the product of institutional processes whose discourse is not inspected (the processes of academic publication, for instance), or for making assumptions about the power and status of academic journals as opposed to other forms of discourse. The other side of the double-analogy relies on the social systems theory of Niklas Luhmann. Whilst this is a powerful theory, it remains essentially a metaphysical speculation. In this way, the Triple Helix's attempt to work towards a more coherent second-order cybernetics has to defend itself on the ground of its empirical distinctions, about which assumptions inevitably have to be made.

When the empirical grounding of a theoretical approach relies more on physical phenomena, there is at least some hope of a more effective coordination of discourse. Haken's synergetic theory grounds itself in the physical behaviour of photons in lasers. Using this as a starting point, Haken has sought to metaphorically extend these physical properties into other domains, from biology to social systems. Whilst Haken's work has unfolded in parallel to cybernetic theories (and there has been little contact between the two), he has made extensive use of Shannon's information theory to express mathematically the 'synergetic' processes which he argues underpin all forms of self-organisation.

Haken's physical observations were an important stage in the development of lasers. He realised that photons would coordinate their behaviour with each other if they could be contained within a domain of interaction for a sufficient amount of time. In the laser, photons are maintained within the domain of interaction through the use of mirrors, which only allow the escape of light at a particular energy level. Haken argues that similar dynamics under similar conditions produce self-organising behaviour in other domains. In recent years, he and his colleagues have analysed social phenomena like the construction and development of cities to identify such dynamics.

I think Haken uses the observable analogies of the laser to describe a universalist principle. Whilst this can be a foundation for coordination of discourse about the physics of lasers, the identification of analogies in other domains is more problematic. 

Tuesday 27 October 2015

Information Theory's Necessary Empirics of Second-Order Cybernetics

Second-order cybernetic theory is diverse in its orientation towards its foundations: what it sees as its stance towards objectivity and subjectivity, and what it sees as an orientation towards 'universal' principles. Yet all second-order cybernetic theories are united in their adherence to the foundational role of induction in the process of adaptation. More precisely, inductive process is always in response to the determination of 'similar' events - effectively the determination of 'regularity'. Yet how similarity between events is established (the analogy between events) is poorly inspected, and different second-order cybernetic theories perceive their analogies in different domains.

The combination of universal acceptance of induction as a foundation, together with differences as to where analogies are to be determined within different second-order cybernetic theories, is a recipe for dispute. The situation is further complicated by the fact that in being observer-oriented, second-order cybernetic theory requires the determination of two analogies: on the one hand there are analogies to be determined among events, or perturbations (for example, disturbances from the environment, or regularities in discourse); but there is also the determination of analogies within the perceiving system. Induction occurs through the organisational transformation of the perceiver, and in any given perceptual situation there is both 'sameness' and difference in the perceiver's structure. Any account of induction must also account for the sameness and difference of the perceiver. When am I 'the same'? When am I 'different'?

At its heart, second-order cybernetics aims to escape an objectivist viewpoint and proposes a relational one. However, the link between relationality and objectivism cannot be entirely broken: relations depend on some degree of objectivity, and without consistent grounding of shared objectivity the coherence of the second-order cybernetic discourse is challenged. Information theory, in the broadest sense, presents a model of the relation between events and observations. As such, it provides a measurable counterpart to Ashby's Law of Requisite Variety. Its utilisation can be compared to the work of cyberneticians like Stafford Beer, who counted the variety of different components in an organisation in order to determine the most effective way of organising them. Measuring the entropy in a message is a way of measuring the variety of the producer of the message; measuring the mutual information between sender and receiver is a way of measuring the requisite variety between them.
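As a crude sketch of what such a measurement looks like (the paired 'sender' and 'receiver' observations here are invented):

  from collections import Counter
  from math import log2

  def entropy(xs):
      counts = Counter(xs)
      n = len(xs)
      return -sum((c / n) * log2(c / n) for c in counts.values())

  sent     = list("AABABBBAAB")   # what the sender produced
  received = list("AABABABAAB")   # what the receiver registered

  H_sender = entropy(sent)        # the variety of the message source
  H_receiver = entropy(received)
  H_joint = entropy(list(zip(sent, received)))
  print(H_sender + H_receiver - H_joint)   # mutual information I(X;Y)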

Like all measurements, this is an imperfect representation. Additionally, as some commentators have noted, Shannon's equations confuse issues of analogy with induction. Having said this, Shannon's approach presents an opportunity for developing and refining the techniques. It provides an approach to modelling the relationship between events and observers where the measurement of events in the lifeworld can be defended, and where a calculus of relations between observers and events can be used to generate patterns and theories for further investigation. No approach to second-order cybernetics challenges the principles of Ashby's Law, and so establishing a measure of the variety of a system - whether the system is biological, communicative, or physical - remains defensible across a range of different second-order cybernetic approaches, even if the assertion of the 'objectivity' of such a measurement may invite legitimate criticism.

In a discursive situation where second-order cybernetics is divided between different approaches to objectivism, universalism, and its foundations of induction and analogy, finding an orientation which reintroduces measurement into the study of a shared lifeworld can help coordinate second-order approaches and strengthen the relationship between second-order cybernetics and second-order science. 

Monday 26 October 2015

The Colonisation of the Private Realm and Love's Intersubjective Revolution: Forwards to Technology and Education?

In a striking and somewhat depressing article on the openDemocracy website, Byung-Chul Han argued "Why revolution is no longer possible" (see https://www.opendemocracy.net/transformation/byung-chul-han/why-revolution-is-no-longer-possible - thanks to Oleg for this reference!). Han's basic concern is the capitalist colonisation of the private realm as well as the public realm. Capitalism has transferred its inherent contradictions (from which Marx always believed it would collapse) from an overt class war, with opposing forces pitted in a battle for emancipation, into an inner existential struggle. Each individual has been seduced into the capitalist colonisation of their private life with ego-massaging social media: pro-sumers (like me!) are not merely victims of multinational players like Google, Microsoft and Amazon - they have become the very instruments of a capitalism which many of them (like me) use social media platforms to criticise. Capitalism wins by subsuming the critique against it.

The price is high: this is a Faustian bargain that everyone has been drawn into (and Mephistopheles is raking it in!). Han holds up South Korea as the epitome of the "readjusted" neo-capitalist society which, after its financial crisis, made its population docile: a "vast consensus" alongside depression and burnout, producing the world's highest suicide rate. "People enact violence on themselves instead of seeking to change society. Aggression directed outward, which would entail revolution, has yielded to aggression directed inward, against oneself." Failure in today's neoliberal economy is nobody's fault but our own. There is nobody else to be angry at but ourselves.

I think Han gets it spot-on where Paul Mason, for all his impressive arguments about the "sharing economy", is naively optimistic. The path Mason advocates entails walking further into the capitalist trap: Airbnb and Uber spell trouble, not freedom. Mason does, however, make the point that there are aspects of intimate life which neoliberalism cannot (yet?) touch. At the bottom of it is love. Our understanding of this is very limited, and that may present a glimmer of hope.

The dating industry and the sex industry will try to colonise love: apps like Tinder and Grindr, together with the digitalisation of pornography and prostitution, all amount to this. It's interesting to reflect, when we ask "why is there so much porn on the net?", that love is the holy grail for the total domination of capitalism - the defeat of the one thing that could still keep it in check. But in reality, these developments are thankfully deficient, offering only gratification. The investment in ever-better AI, Siri-like assistants that you might fall in love with, or sex robots all provide further examples of the unending motivation of capitalism's quest. Yet the best that the neoliberal system can hope to achieve is to reframe the common conception and expectation of love from a profound spiritual union to a form of pleasure exchanged for cash or attention. The disruptive effect of these technologies on real relationships is a worrying sign that such a strategy might work. Niklas Luhmann might agree that the technology could potentially reshape the linguistic 'code' of love, just as that code was reformed by feudalism and, later, by 19th-century romantic fiction. But great though Luhmann's book on love is (his best book, I think), he may be wrong.

There is obviously something missing in capitalism's game-plan: some root to its inherent contradiction which runs deeper than the class-oriented social contradiction that Marx identified. The fundamental problem is that capitalism must conceive of subjectivity as specifically individual. The notion of the collective is the notion which threatens it, which can be galvanised into revolution. Whilst Han is right that the social collective has been usurped by capitalism's seduction of the individual, we can still chip away at capitalism's notion of the individual. To paraphrase Thatcher: "There's no such thing as an individual; there is only intersubjectivity".

Wanting, seduction, ego-massaging, economic exchange, and the rest of capitalist logics are all things which exist not within individual minds, but in-between them. Capitalism's operating principle is to present the myth of the individual mind. Yet if the individual mind can be at all described, we must say that it is constrained, and that the constraints that bear upon it are constraints of other minds in their own interactions with the constraints of a shared lifeworld. Han's capitalist colonisation of the private realm is one constraint in the lifeworld, where other constraints also carry fundamental influence. Most important are the mutual constraints which are understood between (at least) two people who love each other. To deeply understand each other's constraints is to see the capitalist constraint as fundamentally unimportant. The revolution is in the mind, and it is powered by love.

Revolutions in the mind belong to the domain of education. Much education has marketised itself into a vast profitable industry: Blake's "Dark Satanic Mills", which in his mind were the universities of the 18th century, are now plate glass. Sometimes this has involved education distancing itself from authentic intersubjective discovery, instead industrialising the 'measurement of learning' by ticking off lists of learning outcomes and avoiding any difficult or inconvenient questions of inquiring minds. Yet there remain things that we want education to do, and people to do, for which tick-lists of learning outcomes and competencies are deficient. Despite the amazing technologies at its disposal, education has ossified its technology in its crudest forms: text-based forums and the mega-VLEs which are MOOCs. Such tools do little to promote intersubjective understanding, offering a pale version of the consumer culture of the web and the ego-stroking habits of social media. There may, however, be educational forces which push for better technology and richer relationships.

Education may need richer experiences and more profound ways in which persons may share their constraints, whether they are a doctor in Bolivia or a philosopher in Shanghai. Richer connections can be made between the technological capture of intersubjective lived experience and more reflective commentary. The needs of training and the development of 'soft skills' require opportunities to empathise deeply with each other and recognise each other's constraints. And with better intersubjective technologies, what happens when a more connected consciousness turns to look at the neoliberal system around it? Will the fundamental contradiction of the myth of the individual be exposed? Might it usher in something more hopeful?



Friday 23 October 2015

Second-order cybernetics and Induction

The relationship between observer and observed within second-order cybernetics is one of organisational adaptation within structurally-determined and organisationally-closed systems. However, the description of what is involved in ‘adaptation’ varies from one second-order cybernetic theory to another. As a foundation for second-order theories, different loci of adaptation may be determined (for example, whether adaptation is biological, discursive, cognitive, atomic and so on) and the constituent processes of adaptation may be further disentangled.

Of particular importance is the constituent role of induction and analogy in adaptation. Induction itself is widely recognised as the root of adaptation within descriptions of second-order cybernetics. For example, in descriptions of the biological adaptation of cells and organisms to environmental 'niches’, Maturana argues:
 “the living system, due to its circular organisation, is an inductive system and functions always in a predictive manner: what occurred once will occur again. Its organisation (both genetic and otherwise) is conservative and repeats only that which works.” (1970)
Additionally, recurrence and regularity of events is characterised elsewhere in autopoietic theory (for example, Varela describes ‘in-formation’ as “coherence or regularity”), and this suggests the need for a more specific conception of 'regularity' and its relation to the adaptive processes which result. Fundamentally, there is a distinguishing of events which cohere with existing structural conditions of the organism, and those which demand organisational transformation. This appears to be the case whether the second-order cybernetic theory concerns biological cells, logical structures emerging from self-reference (what von Foerster identifies as 'eigenvalues'), or coherences and stabilities within a discourse (for example, Luhmann's social systems or Beer's 'infosets').

A revealing example of adaptation is provided by Piagetian 'assimilation' and the 'schema' theory of learning. Von Glasersfeld illustrates Piaget's concept: “if Mr Smith urgently needs a screwdriver to repair the light switch in the kitchen, but does not want to go and look for one in his basement, he may ‘assimilate’ a butter knife to the role of tool in the context of that particular repair schema.” The process von Glasersfeld describes here involves two logical moves:

  1. the identification of analogy between the butter knife and the screwdriver
  2. the confirmation of existing ways of organising according to this analogy
There are some implicit assumptions behind von Glasersfeld's example, and behind the broader identification of 'adaptation' within second-order cybernetics more generally.

Fundamentally, the sameness of repetition depends on the identification of analogy. The first philosopher to identify this problem was Hume, who gave a famous example of how we might acquire an expectation of the taste of eggs. Hume argues that the process partly requires the identification of the 'likeness' between many eggs. With many examples of eggs tasting the same way, an expectation is created concerning the taste of eggs. The process of analogy occurs because of a ‘fit’ between the recognition of analogy in perception and the repetition of that analogy over many instances.

In second-order cybernetics, the observer is considered alongside the observation that is made. All varieties of second-order cybernetics entail a description of the observer as an adaptive mechanism. Given this, where Hume considered the 'likeness' of eggs, second-order cybernetics would infer a likeness in the relationship between an observer and the perception of eggs. In effect this means that second-order cybernetics has to consider two levels of analogy: 
  1. The analogy of the observer's organisation;
  2. The analogy of the perturbation.

The locus of analogy of the observer's organisation varies from one second-order cybernetic theory to another. In Luhmann, for example, the 'observer' is a discursive organisational structure which maintains itself in the light of new discursive performances. The identification of differences in discursive structure forms a fundamental plank in Luhmann's differentiation of social systems. In order for a discourse to adapt (for example, through innovation), the discourse must be able to identify those aspects of linguistic performance which are analogous to existing discursive structure, and then to reformulate its discursive structure such that subsequent discursive events may be anticipated. In Maturana, the observer is the biological entity, whose organisation has its own implicit analogies, together with the analogies of the perturbations which confront it.

The point is that whilst the same principle of mutual coordination between observer and environment underpins second-order cybernetic theory, we should ask how the analogies of perturbation can be determined and compared if the analogies of structure are so varied across different cybernetic theories. In other words, how is it possible to have a coherent and stable second-order cybernetic discourse where quite different interpretations can be created for the same perceived events?

Here, Hume's empirical theory and his separation between analogy and induction is useful. Whilst much second-order cybernetics has tended to eschew empiricism as first-order reasoning, Hume's concept of shared empirical inquiry presents a solution to the mismatch between analogies of observational structure and analogies of perturbation. The question concerns the way reproducible empirical experiences create, at the very least, a foundational context for debate and discussion. Indeed, the shared experience of discourse is itself already empirical in the way that Hume envisaged: it presents a shared 'life-world' for participants to reflect not only on the substance of their discussion, but on the dynamics of the discourse itself. Discourse itself carries its own observable analogies.

This suggests a way of viewing Krippendorff’s ‘reflexive turns’ as identifying specific analogies of reasoning. Krippendorff has argued for four 'reflexive turns' within second-order cybernetics, which can be summarised (broadly) as:
  1. the reflexivity of the observer
  2. the reflexivity of participation in observation and action
  3. the reflexivity of discourse
  4. the reflexivity of ethics and responsibility

Each of these locates a different identification of analogies in different aspects of observation. The first is a distinction between the analogies of an observer and the analogies in the observed event. Participation concerns the shared lifeworld of engagement, and the analogies of perception in comparison to the analogies of action. The reflexivity of discourse concerns the analogies of ways of describing things (a variety of action) as opposed to analogies of expectations of communication. Finally, ethical concerns relate to the analogies of embodied subjectivity in the light of analogies of other forms of reflexivity.

It is into this sea of reflexivity that it is possible to consider the only cybernetic approach which combines analogy and induction with a measurable empirical component: Shannon's information theory. This is not to say that Shannon's theory in its own right has a special status, but rather that it occupies an important (and currently rather lonely) position as a theory which unites coherent articulations about the lifeworld with a model of the observer as an adaptive system. By bridging the gap between analogies of perception and analogies of perturbation, Shannon's theory (and its variants) might at least be able to create the conditions for a coherent second-order cybernetic discourse.


Thursday 8 October 2015

What Second-Order Cybernetics stands against

Second-order cybernetics is a broad church, and there is significant internal tension as a result. Ostensibly defined as the "cybernetics of observing systems", there are a variety of interpretations of what that might mean. For example, Niklas Luhmann and Humberto Maturana are both second-order cyberneticians, and yet each has criticised the other for an inconsistent application of second-order cybernetic principles. This isn't helped by the fact that each wishes to define those principles. Luhmann's borrowing of Maturana's theory of autopoiesis as a way of developing sociological theory (particularly developing Parsons's social systems theory), and its entailed view that communication systems are 'autopoietic' (i.e. organisationally-closed, structurally-determined systems which regenerate their own components), appears to impute some kind of reality to the communication system which subsumes psychological, perceptual and agential issues. Luhmann famously declared he was not interested in people, but in the dynamics of communication systems. Imputing a communication system existing beyond the biological boundary of the organism is the opposite of Maturana's thinking when he conceived of autopoietic theory. He argues:
"a cognitive system is a system whose organisation defines a domain of interactions in which it can act with relevance to the maintenance of itself, and the process of cognition is the actual (inductive) acting or behaving in this domain"
There is no information, only the self-organisation of the organism. The cognitive system organises itself within a domain of interactions. Luhmann's redescription of sociology in terms of autopoiesis has been taken by Maturana and his followers as something of a betrayal and distortion. And yet it has been the most influential social-cybernetic theory, attracting the attention of Habermas (who disagrees with Luhmann, but clearly takes him seriously) and many others, for whom systems thinking would otherwise have been sidelined. Few cybernetic thinkers (apart from possibly Bateson) can claim such extensive influence.

In unpicking the distinctions between different positions regarding second-order cybernetics, two approaches might be used. On the one hand, it is possible, following Harré's example (in his "varieties of relativism"), to identify the differences between positions with regard to what they oppose. Alternatively, it is possible to determine the differences in what the positions support. Here I want to deal with the former.

Harré identifies three major themes which intellectual positions concerning relativism stand against:

  1. Objectivism: the belief that there are objects and concepts in the world independent of any individual observer;
  2. Universalism: the belief that there are beliefs which hold good in all contexts for all people;
  3. Foundationalism: the belief that there are fundamental principles from which all other things can be constructed.
There are differences between different versions of second-order cybernetics with regard to these categories. Objections to Objectivism would appear to be the clearest issue: as the cybernetics of observing systems, second-order cybernetics clearly opposes the assumption of a mind-independent reality. However, on examining different theoretical stances, there are discernible and differentiated traces of objectivism in each variety. For example, Maturana's philosophy derived from biological evidence; a common criticism therefore cites implicit objectivism in its biological foundation. Luhmann, by contrast, escapes this charge.

With regard to Universalism, there is an implicit view within second-order cybernetics which allies itself to philosophical scepticism: that there is no 'natural necessity', no naturally-occurring regularity in nature - von Glasersfeld calls this a 'pious fiction'. However, second-order cybernetics does appear to uphold the law-like nature of its own principles, arguing for these as a foundation for processes of construction of everything else. At the heart of this issue is the nature of causation inherent within universal laws. Second-order cybernetics upholds the view that rather than universal causal laws being in operation, self-organising systems operate with degrees of freedom within constraints. However, in taking this position, different varieties of second-order cybernetics differ in their understanding of what those constraints might be, and how the system might organise itself with regard to them. Maturana's constraints are biological; Luhmann's are discursive.

With regard to Foundationalism, all varieties of second-order cybernetics appear to wish to maintain their principles as foundational. Whatever constraints bear upon the self-organisation of a system in its environment, there is little consideration of the constraints that bear upon the second-order cybernetician who concocts the ideas of systems self-organising within constraints. Perhaps closest to the post-foundational position is von Glasersfeld, who has argued for his 'radical constructivism' as sitting on the fence between an external reality and a human construction. He emphasises the in-betweenness of the intellectual position, albeit with a somewhat strident certainty that all is construction. Although Luhmann's social systems seem foundational, his adoption of Parsons's idea of the 'double contingency' of communication presents an intersubjective flux of being which is closely related to sociomaterial, post-foundational ideas about entanglements between subjectivity and objectivity.

Sunday 4 October 2015

Keynes and Hume on Probability - what would they make of Big Data?

Hume dedicates some attention to the problem of probability in his theory of scientific knowledge. One of the most penetrating commentaries on his approach, and on its relation to the thinking of Hume's contemporaries, was produced by John Maynard Keynes in his "Treatise on Probability" of 1921. Keynes's analysis is not often mentioned today, when probability plays an increasing role in underpinning the statistical approaches of big data and information theory. Keynes himself only had to worry about statistical inference in economic and social theory - what would he have said about Shannon's information theory?

Ernst Ulrich von Weizsäcker argues that Shannon's H measure conflates the two concepts of 'novelty' and 'confirmation' inherent in meaningful information (see http://www.amazon.co.uk/gp/search?index=books&linkCode=qs&keywords=9783319036625 and Robert Ulanowicz's paper http://www.mdpi.com/2078-2489/2/4/624/pdf). However, this tension between novelty and confirmation is something that Keynes had already identified:
“Uninstructed commonsense seems to be specially unreliable in dealing with what are termed 'remarkable occurrences'. Unless a ‘remarkable occurrence’ is simply one which produces on us a particular psychological effect, that of surprise, we can only define it as an event which before its occurrence is very improbable on the available evidence. But it will often occur—whenever, in fact, our data leave open the possibility of a large number of alternatives and show no preference for any of them—that every possibility is exceedingly improbable à priori. It follows, therefore, that what actually occurs does not derive any peculiar significance merely from the fact of its being ‘remarkable’ in the above sense.”
Keynes builds on Hume's thinking about causes, which emphasises the role of confirmation in causal reasoning:
"All kinds of reasoning from causes or effects are founded on two particulars, viz. the constant conjunction of any two objects in all past experience, and the resemblance of a present object to any of them. Without some degree of resemblance, as well as union, ’tis impossible there can be any reasoning"
"When we are accustomed to see two impressions conjoined together, the appearance or idea of the one immediately carries us to the idea of the other.... Thus all probable reasoning is nothing but a species of sensation. ’Tis not solely in poetry and music, we must follow our taste and sentiment, but likewise in philosophy. When I am convinced of any principle, ’tis only an idea, which strikes more strongly upon me. When I give the preference to one set of arguments above another, I do nothing but decide from my feeling concerning the superiority of their influence.”
Unless scientists can produce event regularities, there is no ground for reasoning about causes. However, if all regularities simply confirmed each other, there would be nothing that each repetition of the confirmation would add. The basis of reasoning is repetition which produces some difference, as Keynes notes:
"The object of increasing the number of instances arises out of the fact that we are nearly always aware of some difference between the instances, and that even where the known difference is insignificant we may suspect, especially when our knowledge of the instances is very incomplete, that there may be more. Every new instance may diminish the unessential resemblances between the instances and by introducing a new difference increase the Negative Analogy. For this reason, and for this reason only, new instances are valuable. "
Keynes's starting point is Hume's thinking about the expectation of the taste of eggs. Here again, Hume indicates the need for balance between novelty and confirmation:
"Nothing so like as eggs; yet no one, on account of this apparent similarity, expects the same taste and relish in all of them. ’Tis only after a long course of uniform experiments in any kind, that we attain a firm reliance and security with regard to a particular event. Now where is that process of reasoning, which from one instance draws a conclusion, so different from that which it infers from a hundred instances, that are no way different from that single instance? This question I propose as much for the sake of information, as with any intention of raising difficulties. I cannot find, I cannot imagine any such reasoning. But I keep my mind still open to instruction, if any one will vouchsafe to bestow it on me."
Keynes argues that Hume's argument combines analogy with induction. There is analogy in the identification of the likeness of phenomena (eggs being alike), and there is induction in that, having experienced so many eggs, a supposition about their taste arises: "We argue from Analogy in so far as we depend upon the likeness of the eggs, and from Pure Induction when we trust the number of the experiments."  Keynes also finds echoes of Hume's distinctions in Cournot's theory of probability:
“Cournot, [...] distinguishes between ‘subjective probability’ based on ignorance and ‘objective probability’ based on the calculation of ‘objective possibilities,’ an ‘objective possibility’ being a chance event brought about by the combination or convergence of phenomena belonging to independent series.”
Keynes points out that the balance between analogy and induction is incomplete in Hume's thinking, and that Hume's identification of the contribution of many identical experiments to induction loses sight of the fact that some variation in experiments is a necessary condition for the construction of knowledge:
"His argument could have been improved. His experiments should not have been too uniform, and ought to have differed from one another as much as possible in all respects save that of the likeness of the eggs. He should have tried eggs in the town and in the country, in January and in June. He might then have discovered that eggs could be good or bad, however like they looked. This principle of varying those of the characteristics of the instances, which we regard in the conditions of our generalisation as non-essential, may be termed Negative Analogy. It will be argued later on that an increase in the number of experiments is only valuable in so far as, by increasing, or possibly increasing, the variety found amongst the non-essential characteristics of the instances, it strengthens the Negative Analogy.
If Hume’s experiments had been absolutely uniform, he would have been right to raise doubts about the conclusion. There is no process of reasoning, which from one instance draws a conclusion different from that which it infers from a hundred instances, if the latter are known to be in no way different from the former."
It seems to me that Keynes's 'negative analogy' is a deliberate probing for the constraints of a general principle. The implication is that Hume's regularity theory does not really depend on strict regularities; it requires a certain degree of difference.

So what about probability and information? The striking thing about both Keynes and Hume is that the human psychological aspect of probability is clearly on display: this is not a mathematical abstraction; probability cannot escape the human realm of expectation. Shannon's 'engineering problem' of information based on probability loses sight of this - his 'novelty' and 'confirmation' appear as a single number indicating the degree of 'uncertainty' of a symbol's value. Behind it, however, lies the analogical and inductive reasoning which is deeply human.
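A small illustration of the conflation, with invented probabilities: each event carries its own surprisal, but H folds the surprising and the confirming into a single average.

  from math import log2

  p = [0.7, 0.1, 0.1, 0.1]   # one heavily 'confirming' symbol, three rare ones
  surprisal = [-log2(q) for q in p]              # the 'novelty' of each event
  H = sum(q * s for q, s in zip(p, surprisal))   # Shannon's uncertainty
  print(surprisal)   # rare events carry high surprisal...
  print(H)           # ...but H averages novelty and confirmation together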

Information, however, creates its own reality. It can create its own realm of novelty and confirmation, to the point where what is confirmed to us is an artificial representation of some other reality whose actual nature would not produce the same confirmation. Keynes's point about negative analogy would provide a corrective to this: we should explore our expectations against "a variety of non-essential characteristics" of instances.

Instead, the designers of big data algorithms want to show that they "work". They can exploit the creation of 'false confirmation' and argue their case. And yet regularities of any sort are hard to identify, let alone the varying of non-essential characteristics. How is this scientific? Human expectations on viewing the results of big data analysis are already framed by technologies which are underpinned by the same formulae that produce the analyses. Part of the problem lies in the subsumption of phenomena within Shannon's formulae, which on the one hand are blind to their human hinterland of "species of sensation", whilst on the other create equivalences among phenomena which in reality are not equivalent. Unlike things become alike; everything becomes eggs!

And yet there is something important in Shannon's work - but it lies not in the blind application of the equations. Instead it lies in the negative analogies produced, and in the novelty and confirmation that arise between Shannon's powerful generative ideas and the encounter with the real world. It is in discovering the contours of fit between Shannon's abstractions and the human intersubjective world of expectations and surprises. And this may fit with Hume's own thinking about probabilities.