Could Artificial Intelligence Actually Think Like Humans?


By David K. Johnson, Ph.D., King’s College

The question isn’t merely whether machines could be artificially intelligent. It’s whether they could have minds in the same way humans do. What’s the difference? Intelligence (our ability to use and understand language, make plans and decisions, solve problems, and strategize) is only part of human mindedness.

Someday, robots might be able to solve problems, but could they become genuinely self-aware? (Image: maxuser/Shutterstock)

Sentient Beings

The human mind also includes subjective experiences: emotions, memories, sensory perceptions (like vision and hearing)—what we might call ‘consciousness’. Humans are also self-aware: they have a kind of meta-consciousness, an awareness of their own awareness, of themselves, and of their own egos. We can say that humans are sentient: they are conscious, intelligent, and self-aware. And so, the question of artificial intelligence is whether machines could, one day, be sentient.

Stanley Kubrick was arguably the first director to take seriously the idea that machines could be sentient. A close look at his 1968 classic 2001: A Space Odyssey, and its corresponding novel, clearly indicates that Kubrick intended the HAL 9000—the computer that serves as the brain for a spaceship called ‘Discovery One’—to be intelligent, conscious, and self-aware. Indeed, HAL’s circuitry is intended to mimic the configuration of the human brain.

HAL’s activation heralded the dawning of his ‘consciousness’. HAL’s circuitry includes circuits for cognitive feedback, ego-reinforcement, and auto-intellection. He can reason, understand and use language, appreciate art, make plans and decisions, and beat a human at chess—something thought forever impossible for a computer in the 1960s. HAL even feels threatened and fears his deactivation. And his notion that he should be the one to complete Discovery One’s mission clearly indicates that he is self-aware. HAL is sentient.

This is a transcript from the video series Sci-Phi: Science Fiction as Philosophy. Watch it now, on Wondrium.

Our Fictitious Predictions of Artificial Intelligence

HAL isn’t the only machine in sci-fi treated as being sentient. There’s K9 from Doctor Who, Bishop from Alien, Johnny 5 from Short Circuit, Data from Star Trek, the terminators from The Terminator, Andrew from Bicentennial Man, Marvin from Hitchhiker’s Guide to the Galaxy, Sonny from I, Robot, Ava from Ex Machina, Samantha from Her … The list goes on and on. But sci-fi has not always been so friendly to the idea of mechanical sentience.

The laser-eyed Gort from The Day the Earth Stood Still and Robby the Robot from Forbidden Planet are clearly not thought to be sentient. They’re not even conscious. They are just what Descartes would have called ‘automatons’—machines that mindlessly approximate human behavior.

The droids from Star Wars, like R2-D2 and C-3PO, exist in a kind of gray area. They clearly behave like they’re sentient—they use language, make plans, show fear and concern—but they are treated as if they aren’t. And this raises the question: Should we consider machines that behave like us to be sentient like us?


Souls of the Machines

Using soul as an argument for robot sentience doesn’t work because it cannot be proven. (Image: Mykola Holyutyak/Shutterstock)

We can’t simply declare that machines can’t be sentient because they don’t have souls. First, the notion that humans have souls is itself deeply problematic; if ensoulment is required for sentience, then, for all we know, humans aren’t sentient either.

Second, if ensoulment is required, then machine sentience is just the question of machine ensoulment. Indeed, those who say that machines can’t have souls usually just mean that they can’t have minds—they can’t have conscious experiences. But whether they can is the issue at hand. You can’t just declare that they don’t and think you have established anything. You’d need an argument.


What Is Considered Sentient?

One way of presenting such an argument would be to demonstrate what is sufficient and/or necessary for producing the elements of sentience—intelligence, consciousness, and self-awareness—and then show that machines necessarily lack them. Needless to say, this would be a difficult task. But there have been some suggestions.

A real theory that might help answer our question about machine sentience is psychologist Julian Jaynes’s theory of bicameralism. Jaynes argues that, as recently as 3,000 years ago, the two hemispheres of the human brain, while connected, were not unified. Instead of acting as one unit, the dominant, verbal left hemisphere experienced the commands and decisions of the right as auditory hallucinations.

When faced with a novel situation, the person would not reason out what to do; the person would hear what they took to be the voice of a god coming from their right hemisphere and obey it unquestioningly. This kept people from making decisions, explaining why they did what they did, or reflecting at all on their own mental states. Indeed, they may not even have been aware of their own egos.


The Evidence and Problems With Julian Jaynes’s Theory

The evidence Jaynes cites includes: a) literature from more than 3,000 years ago, all of which seems to lack authors with self-awareness; and b) studies of modern schizophrenics, who also hear voices telling them what to do. On Jaynes’s view, the hemispheres eventually integrated, allowing one to reflect on the processes of the other, ultimately giving rise to self-awareness.

There would be two problems with using Jaynes’s theory to answer our question about machine sentience. First, it’s just a theory, and a contested one at that, so we don’t know whether it’s right. Second, it’s a theory about how self-awareness arose, but self-awareness is just one aspect of sentience.

But what if machine brains can’t even produce consciousness? Then they couldn’t be aware of their own consciousness, could they? They couldn’t be self-aware. Thus they wouldn’t be sentient. So, even if machines acted self-aware, we’d probably need a separate reason for thinking they are conscious.

Common Questions about Artificial Intelligence

Q: What’s the difference between being intelligent and minded?

One is part of the other: intelligence is the part of mindedness that deals with how we understand and solve problems. Another part deals with emotions and memories. So building artificial intelligence may not, by itself, amount to building mindedness.

Q: What’s Julian Jaynes’s theory of bicameralism?

The theory suggests that the two hemispheres of the human brain became unified only around 3,000 years ago. Before then, although connected, they acted as two separate units. If proved, the theory would bear on the argument surrounding true artificial intelligence.

Q: What evidence does Julian Jaynes cite for his theory of bicameralism?

Julian Jaynes was led to his theory by literature from more than 3,000 years ago that doesn’t suggest self-awareness in its authors, and by studies of modern schizophrenics. If supported by more evidence, the theory could change our attitude toward artificial intelligence in the future.
