Saturday 9 March 2024

Music and Breathing

There is an oscillation in my academic work between thinking about things which are practical and important - either in health or education (or both) - and thinking about music. Music, of course, is extremely practical and very important, but few people will support research work into music directly. They should, of course. I've found that techniques for thinking about music become applicable to more practical stuff. Most specifically, developing information-theoretical techniques for the analysis of music is highly valuable across many fields. In addition to education, I'm currently working on the organisational impact of AI, AI and information theory (specifically focusing on my work on diabetic retinopathy diagnosis), and work-based stress.

Why is music so important? Quite simply because it protects us against hubris in our analytical thinking. Whatever social theory one might have, it has to work for music, or it is no good. Or at least, not good enough. Most cybernetic theories fall short because they can't "breathe" - and that is the key. Much as I admire and find very useful the work of Beer, Luhmann, Bateson, von Foerster and Maturana, in each case their theories don't breathe properly. Not in the way that music does. The wisest of them (particularly Beer and von Foerster) knew it.

This is partly why the deep physiological ontology of John Torday, Bill Miller, Frantisek Baluska, Denis Noble and others has attracted me, and music has often been at the centre of discussions with Torday and Miller. By situating consciousness in the smallest unit of biology - the cell - this perspective foregrounds breathing, because breathing is obviously biologically fundamental. This is really what my recent paper for Progress in Biophysics and Molecular Biology was about (see "Music, cells and the dimensionality of nature" on ScienceDirect).

Within this biological perspective, there are two fundamental principles: the maintenance of homeostasis and the endogenisation of the environment through symbiogenesis (i.e. how cells absorb features of their environment, such as the bacteria which became mitochondria). The two principles are deeply related in ways which challenge the conventional cybernetic view of homeostasis.

Endogenisation turns the cell into a history book - a memory of past environmental stresses, whose adaptive strategies can anticipate the recurrence of similar stresses in the future. Cells are anticipatory agents which maintain a deep homeostasis - not only with their immediate environment, but with the entirety of their developmental history. That history is itself a vector which points to some originary state, and through the commonalities of these vectors, a deeper level of biological coordination can be organised. No current AI can reproduce this. If we were to have an AI in the future which could, its architecture would be fundamentally different from what we have at the moment: more like biology.

ChatGPT and the like are clever illusions, behind which lie some deeper truths about nature - not least its recursive structure, and the anticipatory capability that recursion provides. But it is nonetheless a useful illusion. And while it might be able to write great text (although the more I use it, the more I can detect its hand), it remains rather poor at music. It simply cannot breathe.

Current social theories, theories about stress, methods of epidemiological study, etc, all have a breathing problem. You can often tell, because the champions of these theories tend to be a bit breathless in the way they articulate them. They desperately WANT to have the answer, for their pet theorists (Beer, Luhmann, Giddens, Bhaskar, whoever...) to be able to blow away the cobwebs of confusion. But it never works and it's always breathless.

This is not to disregard those theories - they are all great. But the high priests of those theories knew their limitations, whereas the clergy who slavishly follow them do not. This is why I stay close to music. It is to stay close to breathing amid a lot of breathless exhaustion.


Sunday 11 February 2024

Cybernetic Boa Constrictors

Brahms described the symphonies of Bruckner as "symphonic boa constrictors". After going to a performance of Bruckner's 3rd symphony last night in Manchester, I knew what he meant. I needed some music after sitting in a rather constricting online session on consciousness from the American Society for Cybernetics. But I didn't need to have all the life squeezed out of me. That had already been the experience in the meeting.

Damn it - what's wrong? Not with Bruckner - that, unfortunately, is a matter of taste (I just thought I might give the snake a second chance. I'll know better next time). But what's happened with cybernetics?

To put it very simply (and perhaps rudely), cybernetics started as science - Wiener, Ashby, von Foerster, Bateson. But it has ended up as religion. There is no longer cybernetic analysis - no consideration of what "variety" means - or homeostasis, transduction, viability, difference, information (ok, that's tricky), entropy, regulation, recursion, distinction, construction, ontology, epistemology, etc. Evan Thompson - who was the star turn - asked the most intelligent question, "What is a system?" - but then came the pretence that anyone knows the answer to that most basic of questions for the systems sciences.

There is a reasonable definition that says "systems are constructed by observers" - but that doesn't say very much. It doesn't say what a system is, but merely says that a process of observation is involved in their coming to be. Ok. But can we say more about this process?

Systems, like words, are selected. There are any number of possible selections that might be made, and out of that set of possibilities, something is chosen as "system". And of course, we are remarkably inconsistent in choosing what is selected: at one moment we choose system x, and at another system y, often forgetting that the operating principles of system x are completely incompatible with those of system y. The cybernetic boa constrictor sets to work when the inconsistency between what is professed and how people actually behave is at its most acute.

It's a mechanism well-known to cyberneticians - the double-bind. It's well-deployed by boa constrictors... "oooh warm and cosy... shit I can't breathe.... oooh so cosy... arghh!" So how do we get out of it? Bateson tells us - we need to step outside the double-bind and describe what is happening.

Yes - systems are selections made by an observer. But what constructs the mechanism that performs the selection? That question was often posed by Loet Leydesdorff, whose approach to constructivism has been most useful to me; he pointed back to the origins of phenomenology to defend it.

What is constructed is not "knowledge", or "system", or even "reality". What is constructed is a mechanism that selects "things that we know", "patterns of operation within an environment", or "beliefs and conjectures". How is the mechanism constructed? Well, Leydesdorff had a powerful insight that an effective selection mechanism would have to be anticipatory. It would have to be a "good regulator" - to have a model of its environment. How could a system which has a model of an ambiguous environment be constructed? 

One sub-question here is whether such a "good regulator" could be constructed all at once out of thin air, or whether it would have to emerge, or evolve, over time. I cannot see how it could be anything but the latter. So the construction of a selection mechanism is evolutionary - from the smallest units to the emanations of modern consciousness.

At each stage of evolution in the construction of a selection mechanism, there must be selection taking place. So a selection mechanism selects its ongoing evolution. Rather like music improvisation. But where does this process start?

Does it start in physics? The problem here is that we cannot conceive of a physical world beyond our own biology. We know (at least we select!) that our cells are made from molecules, some of which, like cholesterol, appear to be astrobiological fossils. The behaviour of those molecules must have something to do with physics, and physics does have a selection mechanism of sorts - the geometry of the four forces, Pauli exclusion, the spins of electrons, etc. But only through biology do we have that knowledge. There is no physics without biology. There is no observation without biology.

Biology brings observation and with observation there is increasing sophistication in the selection mechanisms that are constructed. Why would the universe create biology? Does it need it? If so, how?

There is a clue to this question in how biology works. Biological selection mechanisms work by endogenising their environment. The cell becomes a fractal of environmental history, where the capacity to anticipate revolves around the fact that what is to come rhymes with what has gone before. This includes the "what has gone before" in terms of the fundamental laws of physics. But deep down, the fundamental laws of physics and the anticipatory selection mechanisms of biology have one thing in common: they both operate to maintain homeostasis: that is, the balance between some locality in the universe (an atom, cell, star, planet or a plant), and the non-local context. 

Selection shifts the balance of the whole. Constructing selection mechanisms is about maintaining stability in the balance of future selections, and to do that, increasingly sophisticated phenotypic mechanisms are required to convey information about an increasingly complex environment. The universe needs life because it needs to maintain homeostasis between the local and nonlocal. 

Was there a point in the evolution of the universe where life wasn't inevitable? I suspect not. Any more than there was a point in Bruckner's 3rd symphony where a catatonic state of boredom wasn't inevitable.

Thursday 8 February 2024

Agency from the Zygote Up

I've never understood what "agency" is. We do stuff. Is to say that "doing stuff" or maybe "selecting what stuff to do (and then doing it)" is "agency" to say anything at all? It's agency to say what agency is, after all. Not sure that gets us anywhere. Agency doesn't explain anything. 

Can we rob people of agency? People talk about giving person x agency, by which they mean person x has the option of doing things that (perhaps) they might not have otherwise had. But even in cases where people have very limited options for acting, they still do stuff. It's generally a good idea to increase the options for people to act, and sometimes people act in ways which reduce the options of other people to act. Agency doesn't explain this though.

But I want to know what it's all about, and "agency" doesn't help. So how about looking at this differently...

The problem may be with Darwin: we act to survive, because acting is selection... to reduce the options for acting is to reduce the chances for survival. But do we act to survive? Or is survival a byproduct of something else? Disastrous actions which lead to a swift demise perhaps amuse us in jokes, or in myths and allegories which give warnings like "don't do this". Those myths and stories are important for the survival of the species. But that is about information.

So this is the perspective I am interested in: "Phenotype as Agent for Epigenetic Inheritance" (PubMed, nih.gov).

Paraphrasing this argument, acting gives rise to "information" - differences that make a difference. At a fundamental level, that information must be biological - the differences that make a difference are in the physiology of every cell. What are its dynamics?

The hormonal responses to "differences that make a difference" make a difference to cellular machinery. Specifically, there are epigenetic responses to stress and other factors in the environment, which will either be exposed through acting, or which will cause subsequent actions. Those epigenetic changes are carried back to the core of reproductive physiology - to the gametes. Why might this happen? Well, it's quicker than natural selection...

The zygote that is the result of future interaction between male and female gametes therefore carries some blueprint of whatever environmental conditions imprinted themselves epigenetically on the agent's gametes at some point in their earlier existence. In other words, the information is carried forwards as a pre-programming of the next generation. 

Now is it too far-fetched to suggest that the point of "doing stuff" is that it is all about this "pre-programming"? After all, it is the survival of the species which must be the abiding concern of evolution. And in considering this, a species is not a collection of phenotypes - people, birds, insects, bacteria, etc. It is a process involving a collection of information-gathering entities which collectively perform information-harvesting in an ambiguous environment in which future generations will need to adapt and perform the same function. Fundamentally, the whole thing is a homeostatic process.

I like this because it suggests that the practice of science and art is deeply related: both are about discovering information, and that this process is driven by the physiological imperative which feeds information discovery back to successive generations. Beethoven and Einstein were phenotypic agents performing this function, and - in their case - because of particular conditions, their information harvesting operation was particularly profound. 

I also like it though because it means that there is no life that is not profound. There is no life which does not contribute to the future possibility of human flourishing. No life is wasted. Yet there are questions here, which I need to think about, concerning those who are truly evil or who inflict suffering. The uncomfortable answer is that information about evil is necessary. I suspect Shakespeare might agree.

Thursday 11 January 2024

Self-Provisioning of "Tools for Knowing" using AI

In my own teaching practice, I have become increasingly aware that preparation for sessions I have led has involved not the curation/creation of content (for example, in the form of Powerpoint slides), but the construction of tools to support activities driven by AI. The value of this is that the technology can now do something that only complex classroom organisation could achieve, namely the support of personalised and meaningful inquiry. I have been able to create a wide variety of activities ranging from drama-based exercises, to simulated personal relationships (usually around health). I am aware that the potential scope for doing new kinds of activities appears at this stage enormous: powerful organisational simulations (for example, businesses or even hospitals) with language-based AI agents are all possible, allowing students to play roles and observe the organisational dynamics. 
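To make this concrete, here is a minimal sketch of the kind of tool I mean: a simulated "patient" that students can interview. It assumes the OpenAI Python client; the model name, persona and function names are illustrative placeholders, not something I am claiming to run in class exactly as written.

```python
# A minimal sketch of a simulated role-play "patient" for a classroom activity.
# Assumes the OpenAI Python client; the model name and persona are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are playing a patient recently diagnosed with type 2 diabetes. "
    "Answer the student's questions in character, with realistic worries "
    "and gaps in understanding, and never break character."
)

def simulated_patient(history):
    """Return the patient's next reply, given the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": PERSONA}] + history,
    )
    return response.choices[0].message.content

# One turn of a student-led consultation
history = [{"role": "user", "content": "How have you been feeling since the diagnosis?"}]
print(simulated_patient(history))
```

The same pattern scales up: give several such agents different personas and a shared scenario, and you have the beginnings of the organisational simulations mentioned above.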

Of course, a lot of this involves coding or other technical acts, which I quite enjoy, even if I'm not that good at it. At some point the need for coding may reduce and we will have platforms for making our own tools for learning (actually, we kind-of already have this with OpenAI's GPT Editor). But the real trick will be to allow teachers and students to create their own tools which support different kinds of learning activity, provide different kinds of assessment, and maybe even map personal learning activities to professional standards.

A lot of focus at the moment is falling on how teachers might use chatGPT for producing learning content - basically amplifying existing practices with the new tech (e.g. "write your MCQs with AI!"). But why shouldn't learners do the same thing? Indeed, what may be happening is the establishment of a common set of practices of "learning tool creation", which may be modelled by teachers, and then adopted and developed by learners. Everyone creates their own tools. Everyone moves towards becoming a teacher empowered by tools they develop. 

Why does that matter? Because it addresses the two fundamental variety management problems of education. Firstly, it addresses the problem that teachers and learners are caught between the ever-increasing complexity of the world, and the constraints of the institution. My paper "Comparative judgement and the visualisation of construct formation in a personal learning environment" (Interactive Learning Environments, Vol. 31, No. 2, tandfonline.com) - long-winded title, I know, but the paper is interesting me more now than when I wrote it - argued that the basic structure of the pathology of education is this (drawing on Stafford Beer's work):


The institution wants to control technology, but personal tool creation means that it is individuals who could create and control their own tools. This is to shift much of the "metasystem" function (the big blue arrow) away from institutional management to the individuals in the system. This was always the fundamental argument of the Personal Learning Environment: it's just that we never had tools which could generate sufficient variety to meet the expectations of individuals. Now we do.

The second problem is the problem of too many students and too few teachers. That is a problem of how the practice of "knowing things" can be modelled in such a way that a wide variety of different people can relate to the "knowledge" that is presented to them. This problem, however, may be addressed if we see knowledge not as the result of a "selection mechanism that chooses words", but instead as the result of a "selection mechanism that chooses practices" - particularly practices with AI tools which then perform the business of "selecting words". If teachers model a "selection mechanism that chooses practices" which can produce a high variety of word choices, then a wide variety of students with different interests and abilities can develop those same practices, leading to selections of words which are meaningful to them in different ways. In fact, this is basically what is happening with chatGPT.

Teaching is always modelling. It is the teacher's job to model what it is to know something - to the point of modelling what they know and what they don't know. Really, they are revealing their own selection mechanism for words, but this selection mechanism includes their own practices for inquiry. Good teachers will say things like "I can't remember the details of this, but this is what I do to find out". Students who model themselves on those teachers will acquire a related selection mechanism.

The key is "This is what I do to find out". Many academics are likely to say "I would explore this in chatGPT". That is a technical selection made by a new kind of selection mechanism in teachers which can be reproduced in students. Teachers might also say "I would get the AI to test me", or "I would get the AI to pretend to be someone who is an expert in this area that I can talk to", or "I would get the AI to generate some fake references to see if anything interesting (and true) comes up", or "I would ask it to generate some interesting research questions". The list goes on.

Is "Knowing How" becoming more important than "Knowing That"? To ask that is to ask what we mean by "knowing" in the first place. Increasingly it seems that "knowing how" and "knowing that" are both selections. ChatGPT is an artificial mechanism for selecting words. It begs the question as to the ways in which we humans are not also selection mechanisms for words - albeit ones which have a deep connection to the universe which AI doesn't have. 

We are moving away from an understanding of knowledge as the result of selection towards an understanding of knowledge as the construction of a selection mechanism itself. This may be the most important thing about the current phase of AI development we are in. 

Tuesday 31 October 2023

Iconicity and Epidemiology: Lessons for AI and Education

The essence of cybernetics is iconicity. It is partly, but not only, about thinking pictorially. More deeply it is about playing with representations which open up a dance between mind and nature. This is distinct from approaches to thought which are essentially "symbolic". Mathematics is the obvious example, but actually, most of the concepts one learns in school are symbols that stand in relation to one another, and whose relation to the world outside has to be "learnt". This process can be difficult because the symbols are shrouded in rules which are often obscure and sometimes contradictory.

Iconic approaches make the symbols as simple as possible: a distinction, a game, a process - onto which we are invited to project our experience of a particular subject or problem. This was first considered by C.S. Peirce, who developed his own approaches to iconic logic (see, for example, Peirce.pdf (uic.edu)). Cybernetics followed in Peirce's footsteps, and the iconicity of its diagrams and its technical creativity make its subject matter transdisciplinary. It also makes cybernetics a difficult thing for education to deal with, because education organises itself around subjects and their symbols, not icons and games.

But thinking iconically changes things.

I am currently teaching epidemiology, which has been quite fun. But I'm struck by how the symbols of epidemiology - not just the equations, but the classifications of study types, the problematisation of things like bias and confounding, etc. - all put barriers in the way of understanding something that is basically about counting. So I have been thinking about ways of doing this more iconically.
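To see what "basically about counting" means, here is a toy worked example (the numbers are invented): the standard measures of association drop straight out of a 2x2 table of exposure against outcome.

```python
# Epidemiology as counting: a 2x2 table of exposure against outcome.
# The counts are invented purely for illustration.
exposed_cases, exposed_noncases = 30, 70        # exposed group
unexposed_cases, unexposed_noncases = 10, 90    # unexposed group

risk_exposed = exposed_cases / (exposed_cases + exposed_noncases)          # 0.30
risk_unexposed = unexposed_cases / (unexposed_cases + unexposed_noncases)  # 0.10

risk_ratio = risk_exposed / risk_unexposed                                  # 3.0
odds_ratio = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)  # ~3.86

print(f"Risk ratio: {risk_ratio:.2f}, odds ratio: {odds_ratio:.2f}")
```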

To do this is to invite people into the dance between mind and nature, and to do that, we need new kinds of invitations. I'm grateful to Lou Kauffman who recommended Lancelot Hogben's famous "Mathematics for the Million" as a starting point. 

Hogben's book teaches the context and history of mathematical inquiry first, and then delves into the specifics of its symbolism. That is a good approach, and one that needs updating for today (I don't know of anything quite like it). Having said that, there are some great online tools for doing iconic things: the "Seeing Theory" project from Brown University is wonderful (and open source): https://seeing-theory.brown.edu/ (again, thanks to Lou for that).

Then of course, we have games and simulations - and now we have AI. Here's a combination of those things I've been playing with, inspired by Mary Flanagan's "Grow a Game".

My AI version http://13.40.150.219:9995/



Basically, enter a topic, select a game and chatGPT will produce prompts suggesting rule changes to the game to reflect the topic. Of course, whatever the AI comes up with can be tweaked by humans - but it's a powerful way of stimulating new ideas and thought in epidemiology.
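Behind the page there is very little machinery. Something like the following sketch captures the idea - assuming the OpenAI Python client, with the model name and prompt wording purely illustrative:

```python
# "Grow a Game", roughly: given a topic and a familiar game, ask the model
# for rule changes that make playing the game teach the topic.
from openai import OpenAI

client = OpenAI()

def grow_a_game(topic: str, game: str) -> str:
    prompt = (
        f"Suggest three changes to the rules of {game} so that playing it "
        f"illustrates key ideas in {topic}. Explain what each change teaches."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(grow_a_game("confounding in epidemiological studies", "Snakes and Ladders"))
```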

There's more to do here.

Friday 27 October 2023

Computer metaphors and Human Understanding

One of the most serious accusations levelled against cognitivism is that it imposed a computer metaphor over natural processes of consciousness. At the heart of the approach is the concept of information as conceived by engineers of electronic systems in the 1950s (particularly Shannon). The problem with this is that there is no coherent definition of information that applies to all the different domains in which one might speak of information: from electronics to biology, psychology, philosophy, theology and physics.

Shannon information is a particularly special case, unique in the sense that it provides a method of quantification. Shannon himself, however, made no pretence of applying it to phenomena other than the engineering situation he focused on. But the quantified definition contains concepts other than information - most notably, redundancy (which Shannon, following cyberneticians including Ashby, identified as constraint on transmission) and noise. Noise is the reason why the redundancy is there - Shannon's whole engineering problem concerned distinguishing signal from noise on a communication channel (i.e. a wire).
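For readers who want the quantities pinned down, here is a toy calculation of entropy and redundancy. It uses the simple first-order (symbol-frequency) definitions, which ignore sequential structure; the message is invented.

```python
# Shannon entropy H, maximum entropy H_max, and redundancy R = 1 - H/H_max
# for a toy message, using first-order symbol frequencies only.
from collections import Counter
from math import log2

def entropy(message: str) -> float:
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

message = "aaaaaaab"                 # a highly skewed, repetitive message
h = entropy(message)                 # ~0.544 bits per symbol
h_max = log2(len(set(message)))      # 1.0 bit for a two-symbol alphabet
redundancy = 1 - h / h_max           # ~0.46: most of the capacity is "spare"

print(f"H = {h:.3f} bits/symbol, redundancy = {redundancy:.3f}")
```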

Shannon was involved with the establishment of cybernetics as a science. He was one of the participants at the later Macy conferences, where the term "cybernetics" was defined by Norbert Wiener (actually, it may have been the young Heinz von Foerster who is really responsible for this). Shannon would have been aware that other cyberneticians saw redundancy, rather than information, as the key concept of natural systems: most notably, Gregory Bateson saw redundancy as an index of "meaning" - something which was also alluded to by Shannon's co-author Warren Weaver.

But in the years that followed the cybernetic revolution, it was information that was the key concept. Underpinned by the technical architecture that was first established by John von Neumann (another attendee of the Macy conferences), computers were constructed from a principle that separated processing from storage. This gave rise to the cognitivist separation of "memory" from "intelligence". 

There were of course many critiques and revisions: Ulric Neisser, for example, among early cognitivists, came to challenge the cognitivist orthodoxy. Karl Pribram wrote a wonderful paper on the importance of redundancy in cognition and memory ("The Four Rs of Remembering", see karlpribram.com/wp-content/uploads/pdf/theory/T-039.pdf). But the information processing model prevailed, inspiring the wave of Artificial Intelligence and expert systems from the late 80s to the early 90s.

So what have we got now with our AI? 

What is really important is that our current AI is NOT "information" technology. It produces information in the form of predictions, but the means by which those predictions are formed is the analysis and processing of redundancy. This is unlike early AI. The other thing to say is that the technology is inherently noisy. Probabilities are generated for multiple options, and somehow a selection must be made between those probabilities: statistical analysis becomes really important in this selection process. Indeed, within my own involvement with AI development in medical diagnostics, the development of models (for making predictions about images) was far less important than the statistical post-processing that cleaned the noise from the data, and increased the sensitivity and specificity of the AI judgement. It will be the same with chatGPT: there the statistics must ensure that the chatbot doesn't say anything that will upset OpenAI's investors!
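The kind of post-processing I mean can be sketched very simply: take noisy model probabilities and choose a decision threshold that trades sensitivity against specificity. The scores and labels below are simulated for illustration, not real diagnostic data.

```python
# Threshold selection over noisy model scores: the trade-off between
# sensitivity and specificity. Data are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                                   # 1 = disease present
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)  # noisy model output

def sens_spec(threshold: float):
    predicted = scores >= threshold
    sensitivity = (predicted & (labels == 1)).sum() / (labels == 1).sum()
    specificity = (~predicted & (labels == 0)).sum() / (labels == 0).sum()
    return sensitivity, specificity

for t in (0.3, 0.5, 0.7):
    se, sp = sens_spec(t)
    print(f"threshold {t:.1f}: sensitivity {se:.2f}, specificity {sp:.2f}")
```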

Information and redundancy are two sides of the same coin. But redundancy is much more powerful and important in natural systems, as has been obvious to researchers in ecology and the life sciences for many years (notably the statistical ecologist Robert Ulanowicz, Loet Leydesdorff, Bateson, Terry Deacon, etc.). It is also fundamental to education - but few educationalists recognise this.

The best example is in the Vygotskian Zone of Proximal Development. I described a year or so ago how the ZPD was basically a zone of "mutual redundancy" (here: "Reconceiving the Digital Network: From Cells to Selves", researchgate.net), drawing on Leydesdorff's description. ChatGPT emphasises this: Leydesdorff's work is of seminal importance in understanding where we really are in our current phase of socio-technical development.

Nature computes with redundancy, not information - and this is computation unlike how we think of computation with information. This is not to leave Shannon behind though: in Shannon, what happens is selection. Symbols are selected by a sender, and interpretations are selected by a receiver. The key to the ability to communicate is that the complexity of the sending machine is equivalent to the complexity of the receiving machine (which is a restatement of Ashby's Law of Requisite Variety - see "Variety (cybernetics)" on Wikipedia). If the receiver doesn't have the complexity of the sender, there will be challenges in communication. With such challenges - either because of noise on the channel, or because of insufficient complexity on the part of the receiver - it is necessary for the sender to create more redundancy in the communication: sufficient redundancy can overcome a deficiency in the receiver's capacity to interpret the message.
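A toy illustration of that last point - redundancy overcoming noise - is the repetition code: repeat each bit several times and let the receiver take a majority vote. This is a generic illustration, not anything from our papers.

```python
# Redundancy overcoming noise: repeat each bit three times over a noisy
# channel, and recover the message by majority vote at the receiver.
import random

random.seed(1)

def send(bits, repeat=3, flip_prob=0.1):
    """Transmit each bit `repeat` times over a channel that randomly flips bits."""
    return [bit ^ (random.random() < flip_prob) for bit in bits for _ in range(repeat)]

def receive(signal, repeat=3):
    """Majority-vote each group of `repeat` received bits."""
    groups = [signal[i:i + repeat] for i in range(0, len(signal), repeat)]
    return [int(sum(g) > repeat / 2) for g in groups]

message = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = receive(send(message))
print("sent:   ", message)
print("decoded:", decoded)   # with enough repetition the receiver usually recovers the message
```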

One of the most remarkable features of AI generally is that it is both created with redundancy, and it is capable of generating large amounts of redundancy. If it didn't, its capacity to appear meaningful would be diminished. 

For many years (with Leydesdorff) the nature of redundancy in the construction of meaning and communication has fascinated me. Music provides a classic example of redundancy in communication - there is so much repetition - which we analysed here: onlinelibrary.wiley.com/doi/full/10.1002/sres.2738. I've just written a new paper on music and biology, to be published soon, which develops these ideas, drawing on the importance of what might be called a "topology of information" with reference to evolutionary biology.

It's not just that the computer metaphor doesn't work. The metaphor that does work is probably musical.

Monday 4 September 2023

Wittgenstein on AI

Struck by what appears to be a very high degree of conceptual confusion about AI, I've been drawn back to the basic premise of Wittgenstein that the problems of philosophy (or here, "making sense of AI") stem from lack of clarity in the way language is used. Wittgenstein's thoughts on aesthetics come closest to articulating something that might be adapted to the way people react to AI:

"When we make an aesthetic judgement about a thing, we do not just gape at it and say: "Oh! How marvellous!" We distinguish between a person who knows what he is talking about and a person who doesn't. If a person is to admire English poetry, he must know English. Suppose that a Russian who doesn't know English is overwhelmed by a sonnet admitted to be good. We would say that he does not know what is in it. In music this is more pronounced. Suppose there is a person who admires and enjoys what is admitted to be good but can't remember the simplest tunes, doesn't know when the bass comes in, etc. We say he hasn't seen what's in it. We use the phrase 'A man is musical' not so as to call a man musical if he says "Ah!" when a piece of music is played, any more than we call a dog musical if it wags its tail when music is played."

Wittgenstein says that expressions of aesthetic appreciation have their origins as interjections in response to aesthetic phenomena. The same is true of our judgements of writing produced by AI: we said (perhaps when we first saw it) "Wow!" or "that's amazing". Even after more experience with it, we can laugh at an AI-generated poem or say "Ah!" to a picture. But these interjections are not indicators of understanding. They are more like expressions of surprise at what appears to be "understanding" by a machine.

In reality, such interjections are a response to what might be described as "noise that appears to make sense". But there is a difference between the judgement of someone who interjects after an AI has returned a result and who has a deeper understanding of what is going on behind the scenes, and the judgement of someone who doesn't. One of the problems of our efforts to establish conceptual clarity is that it is very difficult to distinguish the signal "Wow!" from its provenance in the understanding, or lack of it, of the person making the signal.

Aesthetic judgement is not simply about saying "lovely" to a particular piece of art. It is about understanding the repertoire of interjections that are possible in response to a vast range of different stimuli. Moreover, it is about having an understanding of the constraints of reaction alongside an understanding of the mechanisms for production of the stimuli in the first place. It is about appreciating a performance of Beethoven when we also have some appreciation of what it is like to try to play Beethoven.

Finally, whatever repertoire you have for making judgements, you can find others in the social world with whom you can communicate the structure of your repertoire of reactions to AI. This is about sharing the selection mechanism for your utterances, and in so doing articulating a deeper comprehension of the technology between you.

I'm doing some work at the moment on the dimensionality of these different positions. It seems that this may hold the key to a more rational understanding of the technology and help us to carve a coherent path towards adapting our institutions to it. But in appreciating the dimensionality of these positions, the problem is that the interconnections between the different dimensions break.

It is easy to fake expertise in AI because few understand it deeply. That means it is possible to learn a repertoire of communications about AI without the utterances being grounded in the actual "noise" of the real technology. 

It is also easy to construct new kinds of language game about AI which are divorced from practice, but which manage to co-opt existing discourses so as to give those discourses some veneer of "relevance". "AI ethics" is probably the worst offender here, but there are also a lot of words spent discussing the sociology of "meaning" in AI.

Equally it is possible to be deeply grounded in the noise of the technology but to discover that the concepts arising from this engagement find no resonance with people who have no contact with the technics, or indeed, are in some cases almost impossible to express as signals.

It is in understanding the dynamics of these problems that the dimensionality can help. It is also where experiments to probe the relationship between human communications about the technology and the technology itself can be situated.