Moral Questions on Robots and Sentient Androids

FROM THE LECTURE SERIES: SCI-PHI: SCIENCE FICTION AS PHILOSOPHY

By David K. Johnson, Ph.D., King’s College

The presence of robots in our lives presents us with important moral questions. How should we treat androids if they are one day developed? Would it be acceptable to make them do all our dirty work: clean our toilets, fight our wars? Would they be disposable? Of course, if we mistreat them, they might rebel, but that is a practical concern, not a moral one.

As we move into the future faster than ever, the idea of robots becoming sentient brings with it the responsibility of behaving appropriately toward them. (Image: pinkeyes/Shutterstock)

Is Sentience Relevant to These Moral Questions?

Suppose we could somehow guarantee that robots would never rebel. Should we then be free to treat them however we want? Or would they have rights that we would be morally obligated to respect?

If we knew that androids were sentient, then they would have rights, because our rights derive from our sentience. I am, for example, obligated not to harm someone because they can feel pain. By the same token, if an android can feel pain, I am obligated not to harm it.

But even if we can't settle the issue of machine sentience, we can still answer this moral question. To see how, consider the Star Trek: The Next Generation episode 'The Measure of a Man'.

Saving Lieutenant Commander Data

If robots do eventually enter our day-to-day lives, how should we treat them? (Image: Triff/Shutterstock)

In that episode, the Enterprise is visited by Commander Maddox, a cybernetics expert who wants to study Lieutenant Commander Data, an artificial life form, by taking him apart. Data refuses, but Maddox insists that he can't refuse because Data is the property of Starfleet. A trial ensues in which Commander Riker must argue Maddox's side and Captain Picard Data's. After Riker convincingly argues that Data is a mere machine, Picard turns to his friend Guinan for advice:

GUINAN: Maddox could get lucky and create a whole army of Datas, all very valuable.
PICARD: In what way?
GUINAN: Well, consider that in the history of many worlds, there have always been disposable creatures. They do the dirty work. They do the work that no one else wants to do because it’s too difficult, or too hazardous. And an army of Datas, all disposable. You don’t have to think about their welfare, you don’t think about how they feel. Whole generations of disposable people.
PICARD: You’re talking about slavery.
GUINAN: Oh, I think that’s a little harsh.
PICARD: I don’t think that’s a little harsh. I think that’s the truth. But that’s a truth that we have obscured behind a comfortable, easy euphemism: Property.

Picard takes her argument to the courtroom. Maddox maintains that Data is property because he is not sentient. But after getting Maddox to admit that Data is intelligent and self-aware, Picard argues:

A single Data … is a curiosity, a wonder even. But thousands of Datas. Isn’t that becoming a race? And won’t we be judged by how we treat that race? Now, tell me … what is Data? … what is he? … You see, he’s met two of your three criteria for sentience, so what if he meets the third? Consciousness in even the smallest degree. What is he then? I don’t know. Do you? …

The Final Argument for Data

Your Honour… sooner or later, this man or others like him will succeed in replicating Commander Data. And the decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of people we are, what he is destined to be. It will reach far beyond this courtroom and this one android.

It could significantly redefine the boundaries of personal liberty and freedom, expanding them for some, savagely curtailing them for others. Are you prepared to condemn him and all who come after him to servitude and slavery?

Sentience Might Not Matter to This Argument at All

So, even if we can’t know whether androids are sentient, we should treat them as if they are. After all, if they aren’t, but we respect their rights anyway, what have we lost?

But if they are, and we treat them as disposable, we will once again be guilty of the most heinous of humanity’s moral crimes: slavery.

The invention of AI will force people to face one of the most important moral decisions in human history. (Image: Illya Nosov/Shutterstock)

Let us not forget that “they’re different from us; they don’t really feel pain” was said about African Americans before the Civil War, about the Chinese as they built our railroads, and about the Jews as the Nazis tried to exterminate them.

The invention of AI will force us to face one of the most important moral decisions in human history. And the outcome may not only be relevant to androids.

If futurist predictions are right about anything, it’s that as technology continues to advance, we will become ever more dependent on it, even incorporating it into our biology, including our brains. The question of whether an artificial brain can produce consciousness may one day be relevant to everyone on Earth.

Common Questions about Moral Questions on Robots and Sentient Androids

Q: Why are we obligated not to harm sentient beings?

Our obligation not to harm a being derives from its ability to feel pain. If androids can feel pain, then we are obligated not to harm them.

Q: Why did Maddox in Star Trek: The Next Generation think Data couldn’t refuse to give up his body for study?

Maddox thought that Data, as the property of Starfleet, could not decide for himself. Picard insisted that we should think harder before answering moral questions like these, because Data might be sentient.

Q: Why did Picard argue that Data might be conscious?

Picard claimed that if there is a chance that Data is conscious, it is safer to assume that he is than that he isn’t. If we assume otherwise and later change our minds, we may be ashamed of how we answered the moral questions surrounding androids.
