Genuine Free Will, Compatibilism, and the Question of Choice

FROM THE LECTURE SERIES: Sci-Phi: Science Fiction as Philosophy

By David K. Johnson, Ph.D., King’s College

The Matrix franchise grapples with one of philosophy’s most challenging problems: if everything is predetermined and we live in a deterministic universe, then it seems humans have no genuine free will to choose their actions. To solve this problem, some philosophers have put forward the theory of compatibilism, but this theory suffers from logical inconsistencies.

Our ultimate desires result from our brain structure and how our neurons wire and fire. (Image: adike/Shutterstock)

What is Compatibilism?

The essence of compatibilism is that an agent freely performs an action as long as that action flows or follows from some part of the agent. If the agent thinks about what to do, and then the outcome of that process causes the agent’s action, then the agent has acted freely. But this argument has some logical gaps and inconsistencies.

To see the problem, consider a scene from The Matrix Reloaded where the Merovingian gives a sexy blonde girl a piece of cake programmed to elicit a sexual response. Although it happens off-screen, the events that follow indicate that the Merovingian followed her to the ladies’ room to receive a sexual favor.

This is a transcript from the video series Sci-Phi: Science Fiction as Philosophy. Watch it now, on Wondrium.

The Problem of Compatibilism

Now, obviously, we think this is morally wrong—something very similar to using a date rape drug. But what if the program the Merovingian wrote reprogrammed her brain to rationally conclude that she should perform a sexual favor for the Merovingian?

Based on compatibilism, we would have to say that she chose to do what she did “of her own free will.” But clearly, this is not the case; what she did was not up to her—it was forced on her from the outside. Thus, her action was not free. The Merovingian is, indeed, guilty of rape. He is, after all, an analog for the devil.

This causes a problem for genuine free will because the outcome of our rational deliberations is not up to us either, but instead is forced onto us from the outside. Our ultimate desires are a result of our brain structure—how our neurons wire and fire—which is ultimately a result of our environment and DNA.

The Architect in The Matrix

The threats to free will get even worse in The Matrix franchise once Neo rescues the Key Maker, makes his way to the Source and meets the Architect—the program that designed the Matrix.

The machines created Zion to give people a choice to reject the Matrix. (Image: Shutter_M/Shutterstock)

The Architect is the ultimate intellectual, probably the smartest program there is. He speaks in a highly analytic fashion with an immense vocabulary, which makes him incredibly difficult to understand. But if we study the dialogue carefully, what he reveals is mind-blowing.

The first Matrix was a perfect world, without suffering or evil, that failed because its human subjects were unable to accept it. The Architect thus redesigned it, “based on … the varying grotesqueries of [human] nature,” to include evil. But it still failed.


The Necessity of Choice

Unable to understand why, the Architect consulted an “intuitive program, initially created to investigate … the human psyche.” This is the Oracle.

She realized that the Matrix couldn’t work unless those plugged into it, and humanity itself, had a genuine choice to accept or reject the Matrix. To create this choice, the machines built Zion, a city that gave people the option of rejecting the Matrix and a place to live if they did.

The Problem of Choice

This solution worked; 99.9 percent accepted the program. But this created a new problem. Over time, Zion would grow, freeing more and more people, and eventually, the Matrix would be empty. To deal with this, the machines decided that—when things started to get out of hand—they would just reset the entire system.

So the machines decided to select one exceptional individual and give him special powers, essentially making him a messiah and thus a spokesperson for the humans. They would then trick him into going to the Source with a prophecy that he could end the war.

Once there, however, they would reveal the deception and force him to instead choose between cooperating with the machines’ plan or allowing “the extinction of the entire human race.”

Neo is just the latest in a line of people chosen to be manipulated by the machines. The previous “Ones” weren’t in love with Trinity; Neo is. As Neo is making his decision, Trinity is about to be killed by an agent. So Neo rejects cooperation and instead chooses to save her.


Genuine Free Will and Emotion

This is what the Architect says to Neo as he is making his choice:

Some people argue that we don’t make decisions based on a conscious deliberative process; instead, our decisions are based on emotions. (Image: Somjai Jathieng/Shutterstock)

We already know what you’re going to do, don’t we? Already I can see the chain reaction, the chemical precursors that signal the onset of emotion, designed specifically to overwhelm logic and reason. An emotion that is already blinding you from the simple and obvious truth: she is going to die, and there is nothing that you can do to stop it.

This is worse than the Architect simply predicting Neo’s choice by looking at the deterministic mechanisms in his brain. It appears Neo’s decision isn’t even arising from a conscious deliberative process; it’s just coming from his emotions. And emotions are not deliberative and rational.

If our actions are the result not only of a predictable, deterministic process in our brains but also of unconscious processes, it would appear that even on a compatibilist understanding of free will, we are not free.

Common Questions about Free Will, Compatibilism, and Choice

Q: How can the compatibilist view of free will be refuted?

The compatibilist view fails to secure genuine free will because the outcome of our rational deliberations is not up to us but is forced upon us from the outside. Our ultimate desires are a consequence of our brain structure—how our neurons wire and fire—which is ultimately caused by our environment, DNA, etc.

Q: Why did the machines create Zion in The Matrix?

The Architect had designed several previous versions of the Matrix, but all of them failed to function properly. The problem was that the humans plugged in lacked a genuine choice to accept or reject the Matrix; to create this choice, the machines created Zion.

Q: Why can’t emotion provide a solid and logical basis for free will and free choice?

Because emotions are not deliberative and rational, they cannot provide a solid ground for genuine free will.
