By David K. Johnson, Ph.D., King’s College
The Turing test was introduced by one of the earliest computer scientists; his genius for computation even helped crack the Nazi codes, enabling the Allies to win WWII. At the time, the big question was whether computers would ever mentally grasp the meanings of words.

The Turing Test
To answer that question, Alan Turing, the test's namesake, argued that if machines ever gained the ability to use language as humans do, the answer would be yes. To establish this, he imagined a person holding two lengthy conversations, one with a human and one with a computer, without knowing which was which.
Both conversations take place entirely through text. Turing suggested that if people couldn't tell which was which (this is called 'passing the Turing test'), then we should conclude that the machine truly understands the language it's using.
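To make the setup concrete, here is a minimal sketch, in Python, of the kind of blind, text-only exchange Turing imagined. It is purely illustrative: the judge, human, and machine objects, and the ask, respond, and guess_machine functions, are hypothetical stand-ins, not any real implementation of the test.

```python
# A minimal, hypothetical sketch of the imitation game described above.
# The judge exchanges text with two unlabeled participants and must
# guess, from the transcripts alone, which one is the machine.
import random

def run_turing_test(judge, human, machine, num_turns=10):
    # Randomly assign labels so the judge cannot rely on ordering.
    participants = {"A": human, "B": machine}
    if random.random() < 0.5:
        participants = {"A": machine, "B": human}

    transcripts = {"A": [], "B": []}
    for _ in range(num_turns):
        for label, respond in participants.items():
            question = judge.ask(label, transcripts[label])   # text out
            answer = respond(question)                        # text back
            transcripts[label].append((question, answer))

    guess = judge.guess_machine(transcripts)                  # "A" or "B"
    machine_label = "A" if participants["A"] is machine else "B"
    # The machine "passes" if the judge cannot reliably pick it out.
    return guess != machine_label
```

The design choice that matters here is simply that the judge only ever sees text, so any verdict has to rest on linguistic behavior alone.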
Turing was only concerned with whether a machine could understand language, but we could expand the example to include all of human behavior and draw a conclusion about whether a machine is sentient as well. We might call this 'the mega-Turing test'.
If, in a personal interaction with a human and a machine, you can't tell which is which, then you should conclude that the machine is sentient. The basis of the test is found in the solution to another philosophical problem: the problem of other minds.
The Problem of Other Minds
The problem of other minds observes that the only mind one is directly aware of is one's own. So, for example, for all I know, everyone else in the world doesn't have a mind and instead only acts as if they do. Consequently, the argument suggests, I can't know that anyone else is minded.
The solution, however, is simple: I can know others are minded because knowledge doesn’t require certainty. The best explanation for why others behave the way they do is that they are minded. I know my mind drives my behavior. Since others behave pretty much like me, I should conclude that they have minds driving their behavior too. Doubting that others have minds, while possible, is not reasonable. And I can know something if it’s beyond a reasonable doubt.
If the fact that other humans behave like I do is a reason to conclude that they are minded, then the fact that a machine behaves like I do is too. That’s why, if we ever do one day invent androids, we should conclude that they are minded—what we’ve been calling sentient.
Wires and Circuits Don’t Create Life
A number of objections have been raised suggesting that even androids shouldn’t be considered sentient. Some say: “This is all just a result of anthropomorphic bias, the human tendency to ascribe agency to things that display any human-like behavior.” If we merely relied on our emotional reaction to androids, that might be true.
But we are not relying on an emotional reaction. The argument is based on the fact that we could not distinguish an android's behavior from that of a human, and it uses inference to the best explanation to derive the conclusion that the android, along with other such machines, is sentient. So our conclusion is the result of a rational inference, not an instinctual bias.

Another objection might be: “They’re made of the wrong kind of material. Wires and circuits can’t think.” This objection just begs the question. The issue at hand is whether wires and circuits could think; you can’t settle the issue by just declaring they can’t.
Indeed, since we don’t yet know what is necessary for consciousness, we don’t know that being made of organic material is necessary for consciousness.
Can Something That Is Programmed Be Sentient?
There are people who claim: "Androids would be programmed, so they can't be minded." Well, first, they might not be programmed. We may just artificially create infant-like brains and then bring them up like babies.
But even if they are programmed, so what? So are we, by our genes and environment. Being programmed may prevent androids from having free will, but never once, in doubting our own free will, have we been tempted to think that we don't have minds.

A fourth objection might be: "All computers do is shuffle symbols—exchange one symbol for another. And symbol shuffling could never produce linguistic understanding, much less consciousness."
Computers aren’t actually symbol shufflers. We’ve invented symbol shuffling languages to describe how we program them, but there aren’t really symbols floating around in there. And that whole “0 and 1” thing is just a metaphor for circuits being on or off. We could actually do the same thing with the neurons of your brain—describe their firings with a series of 0s and 1s—but that wouldn’t mean that you aren’t conscious.
At the base level, the parts of your brain and an android brain would be doing the same thing: sending complex information to one another by firing electrical impulses at each other. If one such process produces a mind, why wouldn’t the other?
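To illustrate that point with a purely hypothetical sketch (not anything Turing or the lecture proposes), describing a row of on/off states with 0s and 1s works just as well for transistors as for highly simplified neuron firings; the description says nothing about whether the thing described is conscious:

```python
# Illustrative sketch: 0s and 1s are just labels for on/off states.
# The same labeling applies to circuits switching and, in a very
# simplified way, to neurons firing; it describes, it doesn't explain.
def to_bits(states):
    """Describe a sequence of on/off states as a string of 0s and 1s."""
    return "".join("1" if on else "0" for on in states)

circuit_states = [True, False, True, True]   # transistors: on, off, on, on
neuron_firings = [True, False, True, True]   # neurons (simplified): fire, rest, fire, fire

print(to_bits(circuit_states))   # -> 1011
print(to_bits(neuron_firings))   # -> 1011  (same description, different substrate)
```

The two bit strings are identical, yet no one concludes from the second that brains can't be conscious; the "0 and 1" description is a convenience of notation, not a fact about what the underlying system can or cannot do.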
Common Questions about the Turing Test
What is the Turing test?
The Turing test is basically like a blind taste test in which a human holds a text conversation with two parties, one of which is a machine. If the human cannot tell which party is the machine, then the machine passes the test.
What is the problem of other minds?
The problem of other minds states that we can never really know whether other people are minded because the only mind each person is directly aware of is their own. The Turing test is based on the solution to this problem.
Can something that is programmed be sentient?
There is a chance that we could build an artificial intelligence and raise it like an infant, so that it learns on its own, and perhaps one day it would pass the Turing test with what it has learned. On the other hand, if being programmed rules out being sentient, then it rules us out too. We are programmed by nature and nurture, which may lead us to doubt our free will, but it never leads us to doubt that we are sentient.