By David K. Johnson, Ph.D., King’s College
Although they may not appear as soon as some have suggested, the eventual existence of artificially intelligent beings that behave like humans is likely. And when they do appear, conscious machines will raise not only metaphysical and moral questions but also epistemological ones: questions about what people think they know.

Declaring Artificially Intelligent Beings as Conscious
When artificially intelligent beings do appear, people should likely conclude that they are conscious, intelligent, and self-aware; in short, that they are sentient.

Not only would that be the consistent and rational conclusion given their behavior, but morally, that is the conclusion that should be drawn.
Since people can’t know for sure whether androids would be sentient, everyone should err on the side of caution. If androids are treated as disposable when in fact they are sentient, humans would once again be guilty of one of the most heinous moral crimes.
Human-Like Cylons
Consider the reboot of Battlestar Galactica. The last survivors of a human-like civilization are fleeing from the Cylons, a race of mechanical beings they created that then turned on them. Although the early versions of the Cylons were clunky tin-can-looking robots, eventually, there are 13 models of Cylons that look and act just like human beings. They even give off human-like life signs when evaluated with biomedical devices.
In a nod to Blade Runner, the writers of BSG have the humans call these Cylons ‘skin jobs’. And, indeed, it seems that the skin covering their bodies is biological. Although this technically makes them cyborgs (beings that are part biological and part mechanical), many skin jobs were able to live among the humans completely undetected.
In fact, what’s revealed as the series progresses is that numerous characters who think they are human are actually Cylons. They’ve been programmed with memories of a human childhood so that they ignore any evidence that they are Cylons. They are convinced, just as much as anyone else, that they are human.
The greater Cylon plan is to wake up these sleeper units at the appropriate time. Ironically, some of the characters who hate Cylons most of all turn out to be Cylons themselves. But this raises an interesting question. Suppose there was a society in which androids—machines that are indistinguishable from humans—were common. How could a person know that they themselves were not an android?
How Could Someone Know?
Now, someone may remember their childhood and even have pictures of themselves with their mother right after they were born. But in such a world, couldn’t all of that be fabricated? A person might bleed when cut and might occasionally get sick, but all of that could just be part of the deception.

Someone could even open up their own skull and look at their brain in a mirror, but they could just be programmed to see gray matter when, in fact, they’re looking at circuits. The truth is, they could never know for sure.
Now, as with Descartes’s dream problem, this might be solved with an inference to the best explanation. If the vast majority of people in a society are biological, then chances are that a particular person is too.
Probability to the Rescue
Even if all androids think they are persons, in such a society the probability that any particular person is one of them would be low. If, however, the proportion of biological life is low compared to artificial life, then a person would seem to have a legitimate worry. Without a way of telling whether they are an android, all they would have to go on is the probabilities. And probabilistically, it would be more likely that they are an android in such a world.
To illustrate the logic here, imagine a dark room with 1000 hats, of which 999 are blue and one is red. Someone is among 1000 people asked to go into the dark room, pick out a hat, and put it on. They do so. What should they conclude about the color of their hat?
They can’t tell by looking because the room is dark. All they can do is go off the probabilities. And, probabilistically speaking, it’s much more likely that their hat is blue. If they had to bet, that’s what they’d bet. And so it seems that’s what they should conclude, even if they happen to be the one person who picked out the red hat.
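To make the arithmetic explicit (a quick illustrative calculation, not from the original lecture):

Pr(red hat) = 1/1000 = 0.001
Pr(blue hat) = 999/1000 = 0.999

More generally, with no other evidence to go on, the probability of belonging to a given group is just that group’s share of the population: Pr(in group) = (size of group) / (total population).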
In the same way, if society is composed primarily of androids that are, even to themselves, indistinguishable from humans, a person should probably conclude that they are an android, even if they aren’t.
Common Questions about whether We Are Human or Artificially Intelligent Beings
Q: Why should artificially intelligent beings be treated as sentient?

A: Artificially intelligent beings should be treated as conscious, intelligent, and self-aware, that is, sentient, just to be on the safe side. If they are treated poorly, like objects, and it then turns out that they are sentient, it would be one of the most terrible crimes in human history.

Q: In a world of human-like androids, could people know for sure that they themselves are human?

A: Actually, no. If artificially intelligent beings indistinguishable from humans did live in the world, then there would be a chance that any given person is also one of the artificial beings but doesn’t know it, perhaps because of programmed memories.

Q: How could someone judge whether they are an android?

A: The only way to do so logically would be through probability. If biological beings outnumber artificial ones, such as artificially intelligent beings, then there’s a good chance the person is also a biological being. But if it’s the opposite, then they have reason to worry.