The Pros and Cons of Autonomous Learning Machines

FROM THE LECTURE SERIES: THE SURVEILLANCE STATE: BIG DATA, FREEDOM, AND YOU

By Paul Rosenzweig, The George Washington University Law School

When discussing artificial intelligence, or what many prefer to call autonomous learning machines, an important distinction must be made, and it lies in the word ‘learning’. There are many autonomous machines in existence already. While these machines can operate independently of human control, they’re not, generally, adaptive. They don’t learn from experience. They don’t adapt to unanticipated situations. They only do what they’re programmed to do.

Not all autonomous machines are learning machines. (Image: Gorodenkoff/Shutterstock)

What Learning Means

Learning machines are different. They can adapt. They can learn from success or failure. They’re basically programmed to be capable of doing things that are unexpected and unanticipated.
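
To make the distinction concrete, here is a minimal sketch, in Python, of a fixed-program machine next to one that learns from success and failure. The actions, rewards, and toy environment are invented for illustration; they come from no real system described in the lecture.

```python
import random

def fixed_machine(obstacle_ahead: bool) -> str:
    """A non-learning autonomous machine: it only does what it's programmed to do."""
    return "brake" if obstacle_ahead else "drive"

class LearningMachine:
    """A learning machine: it adjusts its behavior from observed success and failure."""

    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}  # estimated value of each action

    def act(self, epsilon=0.1):
        # Occasionally try something new; otherwise use the best-known action.
        if random.random() < epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward, lr=0.2):
        # Nudge the estimate toward the outcome actually observed.
        self.values[action] += lr * (reward - self.values[action])

machine = LearningMachine(["brake", "swerve", "drive"])
for _ in range(200):
    action = machine.act()
    reward = 1.0 if action == "swerve" else -1.0  # toy world where swerving works best
    machine.learn(action, reward)

print(machine.values)  # "swerve" now scores highest: learned, not programmed
```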

Think, for example, of the programming required to operate a driverless car. It has to adapt, in the same way a human driver might, to situations it hasn’t seen before. It turns out that many of the best-performing driverless vehicle algorithms share a common trait: humans have not explicitly programmed them.

David Stavens, a Stanford University computer scientist, wrote about Junior, an autonomous car that earned Stanford second place in the Defense Advanced Research Projects Agency’s Urban Challenge. He said, “Our work does not rely on manual engineering or even supervised machine learning. Rather, the car learns on its own, training itself without human teaching or labeling.”
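
The self-training Stavens describes can be illustrated with a hedged toy example: the machine labels its own earlier sensor readings using an outcome it can measure later, with no human teacher or hand-made labels involved. The ‘roughness’ sensor and the hidden slip rule below are invented; this is not Junior’s actual pipeline.

```python
import random

def drive_and_observe():
    """One step of driving: a terrain reading now, and an outcome the car
    can measure on its own a moment later (did the wheels slip?)."""
    roughness = random.random()
    slipped = roughness > 0.6  # hidden rule the machine must discover
    return roughness, slipped

# Self-labeling: the later outcome becomes the training label for the
# earlier reading, so no human ever annotates the data.
data = [drive_and_observe() for _ in range(1000)]

# Fit the threshold that best separates slip from no-slip readings.
best_threshold, best_accuracy = 0.0, 0.0
for t in (i / 100 for i in range(100)):
    accuracy = sum((reading > t) == slipped for reading, slipped in data) / len(data)
    if accuracy > best_accuracy:
        best_threshold, best_accuracy = t, accuracy

print(f"learned slip threshold ~ {best_threshold:.2f} ({best_accuracy:.0%} accurate)")
```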

Robotic Ethics

American University law professor Kenneth Anderson wrote about some of the moral perils of automation in a provocative essay titled “Law and Ethics for Robot Soldiers,” published in Policy Review. Anderson asks, “Is it simply wrong per se to take the human moral agent entirely out of the firing loop?” In other words, is it wrong as a matter of principle to create robotic agents that act without human control?

The best self-driving cars learn on their own, training themselves. (Image: riopatuca/Shutterstock)

The issue, Anderson writes, raises a further question: what constitutes the tipping point into impermissible autonomy, given that the automation of weapons is likely to occur in incremental steps? Anderson believes that autonomous weapons systems that remove the human being from the firing loop are unacceptable because they undermine the possibility of holding anyone accountable for what, had it been done by a human soldier, might be a war crime. Who should be responsible for mistakes?

Thinking about that, Anderson offers one possible answer: the engineer or designer who programmed the system in the first place. But he concludes that without holding civilians or soldiers hostage, so to speak, to accountability, political leaders would be tempted to resort to war more than they ought. Precisely the same objection can be raised with respect to remotely piloted drones.

Or, to put it more directly—if robots make mistakes and nobody is responsible, politicians may be more likely to use robots. They get the benefit of what the robots do without the burden of being blamed for what goes wrong. And what is true of weapon systems is equally true of any autonomous learning machine with surveillance capability. It is a real risk to society if we remove the element of human control from our machines.

This is a transcript from the video series The Surveillance State: Big Data, Freedom, and You.

A Shared Fear of Autonomous Learning Machines

Some of the greatest thinkers of our time, including Stephen Hawking, Bill Gates, and Elon Musk, share similar concerns. They worry that artificial intelligence is a grave threat to humanity. We can’t be sure whether they are right, but we can be sure that surveillance, data collection, and analytical capabilities are, in effect, a force multiplier.

Even under human control, such powerful systems shift the balance of authority toward those who control them. That is what generates such a significant counter-reaction from the public. One does not need to be a Luddite, or an apocalyptic visionary, to see that if these various systems of surveillance were under autonomous control, beyond the reach of human intervention, the shift in practical authority would be magnified.

On the Bright Side

To some, this speculative picture of the future is dystopian; and, without appropriate democratic controls, it might be. But one would be remiss not to point out some more utopian possibilities. The idea of controlling an inanimate object with the mind, for example, lies at the core of potentially revolutionary prosthetic devices.

A woman named Jan Scheuermann, who has been paralyzed from the neck down for years, had sensors implanted in her brain. As a result, she can use her thoughts to move and lift objects with a robotic arm. This is an example of how machines and humankind, metal and flesh, might extend our potential.

Superior Treatment and Diagnosis

And if artificial intelligence that learns scares you, take heart. You might know of IBM’s stellar computer program Watson, which won a Jeopardy! showdown against Ken Jennings, the most successful human champion ever to play on that TV show.

In the future, robots might be able to help with medical diagnoses and treatments. (Image: MAD.vertise/Shutterstock)

To prevail, the IBM program used deep learning and cognitive computing to understand natural-language questions and reason its way to the right answer. Today, IBM is taking that same deep-learning technology and deploying it to allow Watson to participate in health decisions.

It can pull together vast quantities of data about health, disease, nutrition, lifestyle, and individual and collective medical histories. From this data, Watson can extrapolate and offer hypotheses about diagnosis and treatment in ways that are likely to lead to better health outcomes.
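
One way to picture that pattern, without claiming anything about IBM’s actual system, is a simple hypothesis-ranking sketch: score each candidate diagnosis by how well the observed findings overlap with its known profile. The conditions, findings, and scoring rule below are all invented for illustration.

```python
OBSERVED = {"fever", "cough", "fatigue"}  # hypothetical patient findings

# Hypothetical knowledge base: findings associated with each condition.
PROFILES = {
    "influenza":   {"fever", "cough", "fatigue", "aches"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "anemia":      {"fatigue", "pallor"},
}

def score(findings: set, profile: set) -> float:
    """Jaccard overlap: shared findings divided by all findings in play."""
    return len(findings & profile) / len(findings | profile)

# Rank hypotheses by fit with the evidence, best first.
ranked = sorted(PROFILES, key=lambda c: score(OBSERVED, PROFILES[c]), reverse=True)
for condition in ranked:
    print(f"{condition}: {score(OBSERVED, PROFILES[condition]):.2f}")
```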

All of which really just states a basic premise concerning surveillance: technologies are generally neutral. It’s how they’re employed that is of concern. And that means that our principal focus should be on questions of accountability and oversight.

Common Questions about the Pros and Cons of Autonomous Learning Machines

Q: What is the difference between an autonomous machine and a learning one?

Autonomous learning machines can adapt to unexpected situations based on their previous experiences. But if the machine cannot learn, then it will only do what it’s programmed to do.

Q: What ethical problems might autonomous machines create?

If such machines are used for military purposes, no one could be held accountable for the decisions they make, because the human agent has been removed from the loop. Had a person made the same decision, it might have been judged a war crime.

Q: How can autonomous learning machines be helpful to humanity?

One example of a helpful autonomous learning machine is IBM’s computer program Watson, which can gather data from various sources, extrapolate from it, and offer appropriate solutions.
