Are Machines Self-Aware or Are They Actually Sentient?


By David K. Johnson, Ph.D., King’s College

Even if machines acted self-aware, we’d probably need a separate reason for thinking they are conscious. Why? If machine brains couldn’t produce consciousness, they couldn’t be aware of their own consciousness; consequently, they couldn’t be genuinely self-aware. Therefore, they wouldn’t be sentient.

Robot working with digital tablet in a factory.
The problem with sentience in robots is that we don’t understand it yet, so determining what is sentient becomes a headache. (Image: Phonlamai Photo/Shutterstock)

The Hard Problem of Consciousness

Such a reason might eventually be provided by something called integrated information theory. Many theories try to explain the mind by explaining how the brain produces it; integrated information theory attempts to explain the brain by describing the mind’s properties.

Other theories, like identity theory and property dualism, are trying to solve what David Chalmers called ‘the hard problem of consciousness’, which asks how the brain (a physical system with only objective material properties) can produce a mind (a mental system with only subjective phenomenal properties). 

But the success of these theories has been limited, causing some—like Paul and Patricia Churchland—even to deny the existence of the mind. Integrated information theory takes the opposite approach.

This is a transcript from the video series Sci-Phi: Science Fiction as Philosophy. Watch it now, on Wondrium.

Integrated Information Theory Might Be the Answer

First, inspired by Descartes, it takes the existence of the mind as an undeniable given. I may not know whether, for example, my experiences are accurate—I could be dreaming or whatever—but it’s undeniable that I am having them. Second, it takes seriously the properties the mind has and suggests that they can tell us about what properties a being’s brain must have if we are to say that it is minded.

So in 2004, Giulio Tononi, the theory’s founder, started by identifying the kinds of properties that conscious mental states have and then proposed that any physical system that is conscious must have corresponding properties. The former he called axioms; the latter, postulates.

The axioms are that conscious states exist, have a certain structure, contain integrated information, and have certain exclusions, like how fast they occur. The same things must be true of any physical system that is said to be conscious. It must exist, have the same structure, contain integrated information, and have certain exclusions.


Meaning of Integrated Information

Now, obviously, given the name of the theory, we’re forced to wonder what it means for a system to have integrated information. According to Tononi, the information in, for example, a visual experience is integrated because it cannot be separated out. 

He asks us to consider the experience of seeing a blue book. We don’t see a book with no color and then a color with no book. We just see a blue book. There aren’t two experiences, just one. It’s impossible to divide it up.

The information in the corresponding brain structure, he argues, is integrated in the same way. Given the way that its parts are causally related, if you separate them out, the information disappears. 

This makes the causal powers of the system irreducible, and the information in it integrated: you can’t explain them by explaining the causal powers of the system’s individual parts. As Tononi puts it, “Every part of the system must be able to both affect and be affected by the rest of the system.”


Measure of a Sentient Being and One Who Acts Self-Aware

Robot arm touches the tip of a human finger
If robots start acting like they are sentient because we program them to, is that enough to be considered sentient? How can we measure such a thing? (Image: Gorodenkoff/Shutterstock)

Integrated information can, in principle, be mathematically measured. Hypothetically, you should be able to tell whether a system contains it and to what degree. Integrated information theorists suggest measuring it with what they call the phi metric.
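IIT’s actual phi calculation is far more involved than anything shown here, but the basic idea behind it—that the information shared between a system’s parts can be quantified, and vanishes when the parts are independent—can be illustrated with a toy mutual-information calculation. This is a simplified sketch for intuition only, not the real phi metric:

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) between two binary variables,
    given their joint distribution joint[x][y]."""
    px = [sum(row) for row in joint]            # marginal of variable X
    py = [sum(col) for col in zip(*joint)]      # marginal of variable Y
    mi = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two coupled units: each always mirrors the other's state.
coupled = [[0.5, 0.0], [0.0, 0.5]]
# Two independent units: their states are uncorrelated.
independent = [[0.25, 0.25], [0.25, 0.25]]

print(mutual_information(coupled))      # 1.0 bit: maximally shared information
print(mutual_information(independent))  # 0.0 bits: nothing shared
```

On this crude analogy, the coupled pair is “integrated” in a way the independent pair is not; real phi generalizes this kind of measure to the irreducible cause–effect structure of a whole network, which is why it is so hard to compute for anything as large as a brain.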

Therefore, if the theory is right and mindedness just is integrated information, we could, in principle, determine whether a machine is sentient. All we’d have to do is look at its ‘positronic brain’ and measure its phi. If its phi rating is as high as ours, it’s sentient; if not, then not. 

And there would be no wondering whether it could have integrated information but just be acting like it is intelligent, or conscious, or self-aware. Again, on this theory, the integrated information is mindedness—no two ways about it.

The problem, of course, like before, is that this is just a theory, and not a widely accepted one at that. To boot, it’s in its preliminary stages. It’s not like we have the ability at this point to actually measure a human brain’s phi, much less a machine’s—it’s just hypothetically measurable. So we may never have a way of determining directly whether a machine is sentient.

Empathy Leads Us to Believe Machines Have Minds

Robotic hand covered in translucent human skin-like material
The fact that we feel empathy toward a being, even if it isn’t actually sentient, makes us feel like it is. (Image: maxuser/Shutterstock)

It wouldn’t mean that we couldn’t still come to a rational conclusion about whether a machine can be minded. In the movie A.I., a character called David is a clear example of what we could call an android—a mechanical being that looks and acts just like a human. 

It’s impossible to watch the movie without reacting to David as if he were sentient. You feel his loneliness and share his joy. If David existed in the real world and you didn’t know he was artificial, you would conclude that he is minded. And it seems you’d be rational to do so.

Common Questions about Whether Machines Are Self-Aware or Sentient

Q: According to David Chalmers, what is ‘the hard problem of consciousness’?

It asks how the brain, a physical system with only objective material properties, produces the mind, a mental system with subjective phenomenal properties. If we solved the problem, we would know whether future machines merely acted self-aware or really had minds.

Q: What is the difference between integrated information theory and other theories that try to explain the mind?

The main difference is in their approach to explaining whether a being has a mind or is just acting self-aware. Most theories try to figure out how the brain produces the mind in the first place; integrated information theory instead starts from the properties of the mind and works back to what properties the brain must have.

Q: How do integrated information theorists think we should measure sentience in a being?

We can’t do it yet, but on paper it seems possible to use the phi metric: measure a machine’s phi rating and see whether it’s as high as ours. If it is, the machine isn’t merely acting self-aware; it actually is.
