Percentages are often misleading, even in the simplest of situations, like flipping a coin, and definitely in more complex situations, like poll figures before an election. But why does this happen?
Judging the Fairness of a Coin
Let’s start with a situation that’s pretty easy. Let’s flip a coin. The first question we want to ask is: Is the coin fair? Will you get heads 50% of the time?
Say you flipped the coin 10 times and came up with 3 heads. So, is the coin fair? Well, the conclusion is “maybe”. And that’s because random stuff happens.
How often can you expect to get heads if you flip a fair coin 10 times? You expect to get 0 heads about 0.1% of the time, 1 head about 1% of the time, 2 heads about 4% of the time, 3 heads about 12% of the time, 4 heads about 21% of the time, and finally, 5 heads about 25% of the time. For 6 through 10 heads, the probabilities mirror those for 4 heads down to 0.
These probabilities show that when you flip a fair coin 10 times, you can get pretty much any outcome with reasonable probability. So, if you do flip a coin 10 times and see 3 heads, that's a fairly common outcome, and you can't conclude that the coin is unfair.
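The percentages for 10 flips come from the binomial distribution: the chance of exactly k heads is the number of ways to choose which k flips are heads, divided by the 2^10 equally likely sequences. A minimal Python sketch, using only the standard library, reproduces them:

```python
from math import comb

# Probability of exactly k heads in n flips of a fair coin: C(n, k) / 2^n
n = 10
for k in range(n + 1):
    p = comb(n, k) / 2**n
    print(f"{k} heads: {p:.1%}")
```

Running this prints about 0.1% for 0 heads, 12% for 3 heads, and 25% for 5 heads, matching the figures above.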
What if the same experiment is done by flipping the coin 1000 times? If you flip a coin 1000 times, it's most likely that you'll get heads somewhere between 47 and 53 percent of the time.
So what’s the message here? The message is that statistics doesn’t easily answer a question. If a coin is flipped 10 times, the outcome can be anything from 0 to 10 heads. Even if the coin is flipped 1000 times, you can still expect a result anywhere in the 47 to 53 percent range. That is still not a precise enough measurement to detect a coin that is biased by, say, 51 to 49 percent.
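Where does the 47-to-53-percent range come from? For 1000 flips of a fair coin, the number of heads has a standard deviation of the square root of n/4, and roughly 95% of experiments land within two standard deviations of the expected 500 heads. A quick sketch of that back-of-the-envelope calculation:

```python
from math import sqrt

n = 1000
# Standard deviation of the head count for a fair coin: sqrt(n * p * (1 - p))
sd = sqrt(n * 0.5 * 0.5)  # about 15.8 heads
# Roughly 95% of experiments fall within two standard deviations of 500 heads
low, high = 500 - 2 * sd, 500 + 2 * sd
print(f"about 95% of runs: {low / n:.1%} to {high / n:.1%} heads")
```

This prints a range of roughly 46.8% to 53.2%, which is where the "47 to 53 percent" figure comes from.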
This is a transcript from the video series Understanding the Misconceptions of Science. Watch it now, on Wondrium.
Election Predictions and Polling Statistics
The bottom line is that if you want to determine a probability, you need to gather a lot of statistics. One place where statistics is used a lot is in elections. Prior to an election, not every person in the country can be asked who they’ll vote for. So pollsters call a few people to see how they’re going to vote.
On the other hand, there are hundreds of millions of possible voters. If only a couple of thousand people are asked, that’s not good enough to get an accurate estimate of how the vote will turn out. That’s why good newspapers give a margin of error. That’s an attempt to estimate the range of possible outcomes.
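The margin of error a newspaper quotes shrinks only with the square root of the sample size. A common rule of thumb for a roughly 50-50 question (an assumption here, not a figure from the text) is that the 95% margin of error is about one over the square root of the number of people polled:

```python
from math import sqrt

# Rule-of-thumb 95% margin of error for a roughly 50-50 question: 1 / sqrt(n)
for n in (100, 1000, 2000):
    print(f"sample of {n}: margin of error about {1 / sqrt(n):.1%}")
```

Quadrupling the sample only halves the margin of error, which is why polls of a thousand or two people still carry a few percentage points of uncertainty.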
Sample Selection for a Fair Poll
However, the process is much trickier. Suppose, as is often the case in a US presidential election, the actual vote will be very close, very nearly 50-50. It's easy to imagine asking 1000 voters and finding, for example, that the people you polled came out for the Republicans by an 80-20 split.
Now, that might sound impossible, but suppose that the people that were asked were attending a National Rifle Association convention. You can never be sure, but it seems reasonable that NRA members might preferentially vote Republican. If you did the same exercise at a Greenpeace convention, you might find the opposite, with a majority of people being polled voting Democratic.
The point here is that it is extremely important when you do a statistical survey to pick an unbiased sample. In the case of polling for elections, you must select a small group of people who are similar, on average, to the typical voters.
Historically, older, white, rural voters are more likely to go to the polls than younger, urban, ethnically diverse voters. And pollsters have to take all of that into account in order to get an accurate prediction.
The thing to keep in mind when you read a poll is to find out whether there were problems with how the poll was conducted. If the number of people asked was small, you have reason to be suspicious of the conclusion. Similarly, if the poll was conducted by an organization that might select a group of people who differ from the likely voters, suspicions should arise there, too.
Thus, probabilities and polling results are statistically complex and require a lot of analysis and a lot of data to get right.
Common Questions about Percentages and Predictions
Q: What is the chance of getting two heads if you flip a fair coin twice?

If you flip a fair coin twice, the possible outcomes are heads-heads, heads-tails, tails-heads, and tails-tails. So, the chance of getting two heads is one in four, or 25%.
Q: What are the chances of getting 5 heads if you flip a fair coin 10 times?

If you flip a fair coin 10 times, you can get 0 heads about 0.1% of the time, 1 head about 1% of the time, 2 heads about 4% of the time, 3 heads about 12% of the time, 4 heads about 21% of the time, and 5 heads about 25% of the time. Thus, the chance of getting 5 heads is about 1 in 4.
Q: What can you expect if you flip a coin 1000 times?

If you flip a coin 1000 times, it's most likely that you'll get heads somewhere between 47 and 53% of the time. That is still not a precise enough measurement to detect a coin that is biased by 51 to 49 percent.
Q: What should you watch out for in polls predicting election results?

Polls predicting election results need to be conducted carefully. If the number of people asked was small, you have reason to be suspicious of the conclusion. Similarly, if the poll was conducted by an organization that might select a group of people who differ from the likely voters, you should be suspicious there, too.