Is the universe conscious? It seems impossible until you do the maths
The question of how the brain gives rise to subjective experience is the hardest of all. Mathematicians think they can help, but their first attempts have thrown up some eye-popping conclusions
THEY call it the “unreasonable effectiveness of mathematics”. Physicist Eugene Wigner coined the phrase in the 1960s to encapsulate the curious fact that merely by manipulating numbers we can describe and predict all manner of natural phenomena with astonishing clarity, from the movements of planets and the strange behaviour of fundamental particles to the consequences of a collision between two black holes billions of light years away. Now, some are wondering if maths can succeed where all else has failed, unravelling whatever it is that allows us to contemplate the laws of nature in the first place.
It is a big ask. The question of how matter gives rise to felt experience is one of the most vexing problems we know of. And sure enough, the first fleshed-out mathematical model of consciousness has generated huge debate about whether it can tell us anything sensible. But as mathematicians work to hone and extend their tools for peering deep inside ourselves, they are confronting some eye-popping conclusions.
Not least, what they are uncovering seems to suggest that if we are to achieve a precise description of consciousness, we may have to ditch our intuitions and accept that all kinds of inanimate matter could be conscious – maybe even the universe as a whole. “This could be the beginning of a scientific revolution,” says Johannes Kleiner, a mathematician at the Munich Centre for Mathematical Philosophy in Germany.
If so, it has been a long time coming. Philosophers have pondered the nature of consciousness for a couple of thousand years, largely to no avail. Then, half a century ago, biologists got involved. They have discovered correlations between the activity of brain cells and individual instances of experience, known as qualia. But the harsh truth is that neuroscience has brought us no closer to answering the question of how neurons give rise to joy or anger, or to the smell of coffee.
This is what philosopher David Chalmers termed the “hard problem” of consciousness. Its unique difficulty stems from the inherently subjective nature of felt experience. Whatever it is, it isn’t something you can prod and measure. One philosopher called consciousness the “ghost in the machine”, and some people think we may never exorcise it.
But, as Wigner pointed out, maths has a track record with hard problems. That is down to its ability to translate concepts into formal, logical statements that can draw out insights that wouldn’t be exposed from just talking about things in messy human language. “This might help us to quantify experiences like the smell of coffee in ways that we can’t if we rely on plain English,” says Kleiner.
This is why he and Sean Tull, a mathematician at the University of Oxford, have begun formalising the mathematics behind the first and arguably only theory of consciousness with a halfway-thought-through mathematical underpinning (see “Models of experience”). Integrated information theory, or IIT, was conceived more than a decade ago by Giulio Tononi, a neuroscientist at the University of Wisconsin. His basic idea was that a system’s consciousness arises from the way information moves between its subsystems.
One way to think of these subsystems is as islands, each with their own population of neurons. The islands are connected by traffic flows of information. For consciousness to appear, Tononi argued, this information flow must be complex enough to make the islands interdependent. Changing the flow of information from one island should affect the state and output of another. In principle, this lets you put a number on the degree of consciousness: you could quantify it by measuring how much an island’s output relies on information flowing from other islands. This gives a sense of how well a system integrates information, a value called “phi”.
If there is no dependence on a traffic flow between the islands, phi is zero and there is no consciousness. But if strangling or cutting off the connection makes a difference to the amount of information the system as a whole integrates and outputs, then its phi is above zero. The higher the phi, the more consciousness a system will display.
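The flavour of this dependence measure can be sketched in a few lines of code. This is emphatically not Tononi’s actual phi algorithm — just a toy illustration, with two one-unit “islands” that copy each other’s state, of how you might quantify one island’s reliance on information flowing from the other using mutual information:

```python
from itertools import product
from math import log2

# Toy network: two "islands" A and B, each a single binary unit.
# A's next state copies B's current state, and vice versa, so each
# island's output depends entirely on the other island's input.
def step(a, b):
    return b, a

# How much does A's output depend on B's input? Measure the mutual
# information I(B_in ; A_out), assuming all joint inputs are equally
# likely. Cutting the A-B connection would drive this to zero.
def dependence():
    states = list(product([0, 1], repeat=2))
    p = 1 / len(states)
    joint = {}  # distribution over (b_in, a_out) pairs
    for a, b in states:
        a_out, _ = step(a, b)
        joint[(b, a_out)] = joint.get((b, a_out), 0) + p
    pb = {b: sum(v for (bi, _), v in joint.items() if bi == b) for b in (0, 1)}
    pa = {a: sum(v for (_, ao), v in joint.items() if ao == a) for a in (0, 1)}
    return sum(v * log2(v / (pb[b] * pa[a])) for (b, a), v in joint.items())

print(dependence())  # 1.0: A's output carries one full bit about B's input
```

Here the dependence comes out at a full bit, because A’s output is completely determined by B’s input; sever the link and the value drops to zero, which in IIT’s terms would mean no integration and no consciousness.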
Another key feature of IIT, known as the exclusion postulate, says that a group will explicitly display consciousness only when its phi is “maximal”. That is to say, its own degree of consciousness has to be bigger than the degree of consciousness you can ascribe to any of its individual parts, and simultaneously bigger than the degree of consciousness of any system of which it is a part. Any and all parts of the human brain might have a micro-consciousness, for example. But when one part has an increase in consciousness, such as when a person is brought out of anaesthesia, the micro-consciousnesses are lost. In IIT, only the system with the largest phi displays the consciousness we register as experience.
The idea has won adherents since Tononi first proposed it. “Theoretically, it’s quite appealing,” says Daniel Bor at the University of Cambridge. “We have this association between consciousness and intelligence: creatures able to recognise themselves in the mirror also seem to be the most intelligent. So some connection between consciousness and intelligence seems reasonable.” And intelligence has a link to gathering and processing information. “That means you may as well make the related connection that in some way consciousness is related to information processing and integration,” Bor says.
It also seems to make sense given some of what we know about consciousness in the human brain. It is compromised, for example, if there is damage to the cerebral cortex. This region has a relatively small number of highly interconnected neurons, and would have a large phi in IIT. The cerebellum, on the other hand, has a much higher number of neurons, but they are relatively unconnected. IIT would predict that damage to the cerebellum might have little effect on conscious experience, which is exactly what studies show.
IIT is less convincing when it comes to some details, though. Phi should decrease when you go to sleep or are sedated via a general anaesthetic, for instance, but work in Bor’s lab has shown that it doesn’t. “It either goes up or stays the same,” he says. And explaining why information flow gives rise to an experience such as the smell of coffee is problematic. IIT frames conscious experience as the result of “conceptual structures” that are shaped by the arrangement of parts of the relevant network, but many find the explanation convoluted and unsatisfying.
Philosopher John Searle is one of IIT’s detractors. He has argued that it ignores the question of why and how consciousness arises in favour of making the questionable assumption that it is simply a by-product of the existence of information. For that reason, he says, IIT “does not seem to be a serious scientific proposal”.
Perhaps the most troubling critiques of IIT as a mathematical theory concern a lack of clarity about the underlying numbers. When it comes to actually calculating a value for phi for the entirety of a system as complex as a brain, IIT gives a recipe that is almost impossible to follow – something even Tononi admits.
“As it’s currently given, phi is very difficult to calculate for a whole brain,” Tull says. That might be a bit of an understatement. Researchers have worked out that using the current method, calculating phi for the 86 billion neurons of the human brain would take longer than the age of the universe. Bor has worked out that just calculating it for the 302-neuron brain of a nematode worm would take 5 × 10⁷⁹ years on a standard PC.
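Part of the reason for those staggering numbers is combinatorial: the IIT recipe involves searching over ways of partitioning the network. The following sketch counts only the simplest case — two-way splits, of which an n-element system has 2ⁿ⁻¹ − 1 — so it understates the full calculation, but it already shows the explosion:

```python
from math import log10

# Even restricting the search to two-way splits, an n-element system
# has 2**(n-1) - 1 distinct bipartitions to evaluate. Return the
# base-10 exponent of that count rather than the (astronomical) count.
def log10_bipartitions(n):
    return (n - 1) * log10(2)

for n in (10, 302, 86_000_000_000):
    print(f"{n} elements: roughly 10^{log10_bipartitions(n):.0f} bipartitions")
```

For the nematode’s 302 neurons that is already around 10⁹¹ splits to consider; for a human brain’s 86 billion neurons the exponent itself runs into the tens of billions.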
And when you calculate phi for things you wouldn’t expect to be conscious, you get all sorts of strange results. Scott Aaronson, a theoretical physicist at the University of Texas at Austin, for example, was initially excited by the theory, which he describes as “a serious, honourable attempt” to figure out how to get common-sense answers to the question of which physical systems are conscious. But then he put it to the test.
Aaronson took the principles of IIT and used them to compute phi for a mathematical object called a Vandermonde matrix. This is a grid of numbers whose values are interrelated, and can be used to build a grid-like circuit, known as a Reed-Solomon decoding circuit, to correct errors in the information that is read off CDs and DVDs. What he found was that a sufficiently large Reed-Solomon circuit would have an enormous phi. Scaled to a large enough size, one of these circuits would end up being far more conscious than a human.
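The mathematical object at the heart of Aaronson’s counterexample is easy to construct. A Vandermonde matrix has one row per input value, with each row running through that value’s successive powers — the structure that (over finite fields) lets Reed-Solomon codes correct errors. A quick sketch using NumPy’s built-in constructor:

```python
import numpy as np

# A Vandermonde matrix: row i holds successive powers of x[i].
# Reed-Solomon error correction uses matrices of this form over
# finite fields; np.vander builds the ordinary real-valued analogue.
x = np.array([1, 2, 3, 4])
V = np.vander(x, increasing=True)  # columns are x**0, x**1, x**2, x**3
print(V)
```

The interrelated entries are what make the corresponding decoding circuit so densely interdependent — and, by IIT’s accounting, what drive its phi so high.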
The same problem exists in other arrangements of information processing routines, Aaronson pointed out: you can have integrated information, with a high phi value, that doesn’t lead to anything we would recognise as consciousness. He concluded that IIT “unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly ‘conscious’ at all”.
Aaronson walked away, but not everyone sees highly conscious grid-shaped circuits as a deal-breaker. For Kleiner, it is simply a consequence of the nature of the beast: we lack information because any analysis of consciousness relies on self-reporting and intuition. “We can’t get reports from grids,” he says. “This is the problem.”
Rather than abandoning a promising model, he thinks we need to clarify and simplify the mathematics underlying it. That is why he and Tull set about trying to identify the necessary mathematical ingredients of IIT, splitting them into three parts. First is the set of physical systems that encode the information. Next is the various manifestations or “spaces” of conscious experience. Finally, there are basic building blocks that relate these two: the “repertoires” of cause and effect.
In February, they posted a preprint paper demonstrating how these ingredients can be joined in a way that provides a logically consistent way of applying the IIT algorithm for finding phi. “Now the fundamental idea is well-defined enough to make the technical problems go away,” says Kleiner.
Their aspiration is that mathematicians will now be able to create improved models of consciousness based on the premises of IIT – or, even better, competitor theories. “We would be glad to contribute to the further development of IIT, but we also hope to help improve and unite various existing models,” Kleiner says. “Eventually, we may come to propose new ones.”
One consequence of this stimulus might be a reckoning for the notion, raised by IIT’s application to grid-shaped circuits, that inanimate matter can be conscious. Such a claim is typically dismissed out of hand, because it appears to be tantamount to “panpsychism”, a philosophical viewpoint that suggests consciousness is a fundamental property of all matter. But what if there is something in it?
To be clear, no one is saying that fundamental particles have feelings. But panpsychists do argue that they may have some semblance of consciousness, however fragmentary, that could combine to generate the various levels of consciousness experienced by birds or chimpanzees or us. “Particles or other basic physical entities might have simple forms of consciousness that are fundamental, but complex human and animal consciousness would be constituted by or emergent from this,” says Hedda Hassel Mørch at Inland Norway University of Applied Sciences in Elverum.
The idea that electrons could have some form of consciousness might be hard to swallow, but panpsychists argue that it provides the only plausible approach to solving the hard problem. They reason that, rather than trying to account for consciousness in terms of non-conscious elements, we should instead ask how rudimentary forms of consciousness might come together to give rise to the complex experiences we have.
With that in mind, Mørch thinks IIT is at least a good place to start. Its general approach, analysing our first-person perspective in terms of what we perceive when certain brain regions become active and using that to develop constraints on what its physical correlate could be, is “probably correct”, she says. And although IIT as currently formulated doesn’t strictly say everything is conscious – because consciousness arises in networks rather than individual components – it is entirely possible that a refined version could. “I think that the core ideas underlying IIT are fully compatible with panpsychism,” says Kleiner.
That might also fit in with indications from elsewhere that the relationship between our consciousness and the universe might not be as straightforward as we imagine. Take the quantum measurement problem. Quantum theory, our description of the basic interactions of matter, says that before we measure a quantum object, it can have many different values, encapsulated in a mathematical entity called the wave function. So what collapses the many possibilities into something definite and “real”? One viewpoint is that our consciousness does it, which would mean we live in what physicist John Wheeler called a “participatory universe”.
There are many problems with this idea, not least the question of what did the collapsing before conscious minds evolved. A viable mathematical model of consciousness that allows for it to be a property of matter would at least provide a solution for that.
Then there’s University of Oxford mathematician Roger Penrose’s suggestion that our consciousness is actually “the reason the universe is here”. It is based on a hunch about quantum theory’s shortcomings. But if there is any substance to this idea, the framework of IIT – and its exclusion postulate in particular – suggests that information flow between the various scales of the universe’s contents could create different kinds of consciousness that ebb and flow depending on what exists at any particular time. The evolution of our consciousness might have, in IIT’s terms, “excluded” the consciousness of the universe.
Or perhaps not. There are good reasons to remain sceptical about the power of maths to explain consciousness, never mind the knock-on effects for our understanding of physics. We seem to be dealing with something so involved that calculations may not even be possible, according to Phil Maguire, a computer scientist at Maynooth University in Ireland. “Breaking down cognitive processes is so complex that it is not feasible,” he says.
Others express related doubts as to whether maths is up to the job, even in principle. “I think mathematics can help us understand the neural basis of consciousness in the brain, and perhaps even machine consciousness, but it will inevitably leave something out: the felt inner quality of experience,” says Susan Schneider, a philosopher and cognitive scientist at the University of Connecticut.
Philip Goff, a philosopher at Durham University, UK, has a similar view. Consciousness deals with physical phenomena in terms of their perceived qualities, he points out – the smell of coffee or the taste of mint, for example – which aren’t conveyable in a purely quantitative objective framework. “In dealing with consciousness, we need more than the standard scientific tools of public observation and mathematics,” Goff says.
But Kleiner isn’t put off. He is developing a mathematical model that can incorporate ineffable, private experiences. It is currently undergoing peer review. And even if it doesn’t work, he says, something else will: “I’m fully convinced that in combination with experiments and philosophy, maths can help us proceed much further in uncovering the mystery of consciousness.”