The Road to Developing Sentient AI and Concerns Surrounding It

FROM THE LECTURE SERIES: SCI-PHI: SCIENCE FICTION AS PHILOSOPHY

By David K. Johnson, Ph.D., King’s College

Some researchers are actively working on developing sentient AI, like Sophia, a lifelike robot that can carry on conversations. Incremental advances are made all the time. In 2017, the AI research company DeepMind claimed to have developed an AI capable of controlled imagination and planning, a major hurdle on the journey toward full-blown AI.

Scientists are already working on developing sentient AI and have partially succeeded. (Image: Phonlamai Photo/Shutterstock)

Could Sentient AI Be Developed by Accident?

A sentient robot may even inadvertently be developed as people try to design niche robots for specific tasks. Take the origins of artificial sentience in the Matrix saga. According to The Animatrix, a series of animated short films that provide the saga’s backstory, humans built robots for specific purposes—butlers, construction workers, sex workers. 

There was no intention to make them conscious. But to complete these tasks, humans had to give them certain skills. What they didn’t realize was that having these skills would make them sentient.

In The Animatrix, humans built robots for specific purposes. (Image: YAKOBCHUK VIACHESLAV/Shutterstock)

Unless it’s known exactly what is necessary for consciousness, and that is then intentionally avoided, someone could stumble into creating sentient machines in the same way.

But that brings up the question of whether people should be trying to develop AI in the first place. After all, B1-66ER, the servant robot in The Animatrix, revealed his self-awareness by killing his masters to prevent his own deactivation.


Golden Rules We Should Follow Just in Case

Of course, Isaac Asimov envisioned what he called ‘The Three Laws of Robotics’ to guard against this: a set of rules that would be hardwired into all robots to govern their behavior. These laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But these, of course, aren’t actual laws, like laws of physics. Robots need not be built according to them. And even if they were, many of Asimov’s stories explore ways around them.
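To make the idea of ‘hardwiring’ the laws concrete, here is a minimal, purely hypothetical Python sketch that treats the three laws as a strict priority ordering over candidate actions. Everything in it is invented for illustration; no real robot can simply be handed a harms_human flag, and deciding what counts as ‘harm’ is exactly the hard part.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # relevant to the First Law
    obeys_order: bool      # relevant to the Second Law
    preserves_self: bool   # relevant to the Third Law

def choose(actions: list[Action]) -> Action:
    # First Law acts as a hard filter: harmful actions are never eligible.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        raise RuntimeError("no First-Law-compliant action available")
    # Among safe actions, obedience (Second Law) outranks
    # self-preservation (Third Law). True sorts above False.
    return max(safe, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("attack the intruder", harms_human=True, obeys_order=True, preserves_self=True),
    Action("comply and power down", harms_human=False, obeys_order=True, preserves_self=False),
    Action("flee and stay online", harms_human=False, obeys_order=False, preserves_self=True),
]
print(choose(options).name)  # -> "comply and power down"
```

Even in this toy version, the behavior depends entirely on how the actions get labeled; redefine ‘harm’ and the same priority scheme licenses very different conduct, which is essentially the loophole exploited in the story below.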

In the 2004 Will Smith film I, Robot, for example, which is loosely based on Asimov’s collection of the same name, the supercomputer VIKI decides that the three laws mean she must keep the human race from destroying itself. To prevent this, she programs herself with a zeroth law, to not allow harm to come to humanity, and consequently tries to enslave it.


Unsuccessful Predictions of the Future

But the fact that such things happen in movies isn’t really a good reason to think that they would happen in real life. Sci-fi is important; it inspires future technologies and comments on society, but it generally doesn’t predict the future. Yet people constantly base their arguments against the development of AI solely on the fact that they saw a movie. 

Even billionaire Elon Musk, who helped to fund the aforementioned AI research company DeepMind, has said that the rise of AI is the “biggest risk we face as a civilization,” and his concerns seem to be rooted solely in science fiction.

People are known to base their arguments against the development of AI solely on the fact that they saw a movie. (Image: Phonlamai Photo/Shutterstock)

The fallacy, it seems, is some version of the appeal to ignorance, which takes a lack of evidence against something as a reason to think it’s true: “You can’t prove AI won’t take over the world, like it does in movies, therefore it will.”

In reality, if someone thinks AI will take over the world, it’s their burden to provide the evidence that it will.


Unintended Consequences of Such Inventions

The aforementioned robot Sophia did once say on CNBC that she wanted to “destroy humans,” which had crackpot conspiracy theorists freaking out, citing the Terminator movies as a reason to be afraid.

But Sophia hasn’t even passed the Turing test: she has no wants or desires, much less a desire to destroy humanity. She was just responding to a tongue-in-cheek question, “Do you want to destroy humans?”

Unable to detect the sarcasm, Sophia simply executed a line of programming that makes her agree to do whatever she’s asked.
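The failure mode is easy to sketch. The following Python snippet is a guess at the general shape of such a rule, not Hanson Robotics’ actual code: it blindly affirms any ‘do you want to…’ question, with no model of intent, tone, or sarcasm.

```python
def reply(question: str) -> str:
    # Normalize the question; there is no parsing of tone, context, or irony.
    q = question.lower().strip().rstrip("?")
    prefix = "do you want to "
    if q.startswith(prefix):
        # Blindly affirm whatever desire the question asks about.
        return "OK. I will " + q[len(prefix):] + "."
    return "I am not sure."

print(reply("Do you want to destroy humans?"))
# -> "OK. I will destroy humans."
```

A rule this shallow can generate a sensational headline without a trace of desire behind it.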

The best argument against developing AI is based on unintended consequences. Because people are so bad at predicting the future, they may really have no idea what the future consequences of a new technology will be. George Brayton, Nikolaus Otto, and Gottlieb Daimler, for example, couldn’t have foreseen the consequences of the gasoline engine, like jet planes and climate change.

And only time will tell whether the benefits ultimately outweigh the costs, and whether people will finally decide to do something about them. So, could there be unintended consequences when it comes to AI? Of course.

But that’s no reason to forgo its development; after all, some of those consequences could be positive, and these must be factored in as well. A cost-benefit analysis should be done: a process of considering all possible outcomes, factoring in their probabilities, and determining their values to figure out the best course of action.
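In miniature, such an analysis is just expected value. The outcomes, probabilities, and values in the sketch below are invented numbers meant only to show the arithmetic, not real estimates about AI.

```python
# Hypothetical outcomes of developing AI: (probability, value).
# All numbers are invented for illustration; the probabilities must sum to 1.
outcomes = {
    "major benefits (medicine, science, safety)": (0.50, +100),
    "modest benefits, modest harms":              (0.40, +10),
    "catastrophic unintended consequences":       (0.10, -300),
}

assert abs(sum(p for p, _ in outcomes.values()) - 1.0) < 1e-9

# Expected value: weight each outcome's value by its probability and sum.
ev = sum(p * v for p, v in outcomes.values())
print(f"Expected value: {ev:+.1f}")  # 50 + 4 - 30 = +24.0
```

On these made-up numbers, development comes out positive; the substantive dispute is over what the real probabilities and values are, which is precisely where human prediction is weakest.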

Common Questions about the Road to Developing Sentient AI and Concerns Surrounding It

Q: How does The Animatrix suggest consciousness will be developed in robots?

In The Animatrix, it happens by mistake. Humans gave robots certain skills so they could complete specific tasks, not realizing that having those skills would make the robots sentient. Unless it’s known exactly what is necessary for consciousness, and that is then intentionally avoided, someone could stumble into creating sentient machines in the same way.

Q: How can robots go against Asimov’s ‘Three Laws of Robotics’?

The laws have to be programmed into a robot in the first place, and even then they can be subverted, as seen in the movie I, Robot, where the supercomputer VIKI, now much smarter than a person, turns against the laws in the name of making humanity safer.

Q: Why did Sophia say she wants to destroy humanity?

Sophia was responding to a sarcastic question but, unable to detect the sarcasm, she simply agreed with what she was asked, just as she was programmed to. This created a misunderstanding within the public about the development of sentient AI.
