# The 3% margin of error, and how it can change a political debate


As I’m writing this post, we’ve known for sure for several hours: with 55% of “no” votes, Scotland is staying in the UK. A quick look at my Twitter feed gives me a mixed bag of relief, celebration, introspection, ‘what next’ concern and that rant from Trainspotting (nsfw).

The one comment that caught my eye, however, came from Sussex Uni fellow Ben Stanley.

We all surely remember that YouGov poll:


The next thing we knew, the leaders of the major political parties were urgently heading to Scotland; talk of further devolution, constitutional reform and granting Scotland more powers and a bigger share of the budget in the event of a “No” vote was on everybody’s lips; banks threatened to move to England in the event of a “Yes” vote; the pound sterling fell by almost one cent against the US dollar; and, with a heartfelt ad featuring Gordon Brown, the Better Together campaign finally started to look like it was taking its task seriously.


The irony is that that one poll, in itself, may not have meant anything, statistically speaking. If you look at the poll tracker on the Telegraph website, you can see that since the first weeks of September all polls have shown the “Yes” and “No” campaigns neck and neck. For example, here’s what the polls on the eve of the referendum showed:

• Ipsos Mori: Yes 49%, No 51%

• YouGov: Yes 48%, No 52%

• Survation: Yes 47%, No 53%

Now…

Any poll of 1,000 people has a margin of error of about 3%, and increasing your sample size can only take you so far: doubling your sample size makes your poll not twice, but only $\sqrt{2}$ times more accurate. To go from a 3% margin of error to a 1% margin, you would need to increase the sample size by a factor of 9, which doesn’t sound exactly feasible. (The YouGov poll in question had a sample size of 1,048. The ICM/Sunday Telegraph poll that also showed a lead for the “Yes” campaign had a sample size of 705.)
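If you want to check these numbers yourself, here is a small sketch of the standard margin-of-error formula for a proportion at 95% confidence, assuming the worst case of an evenly split electorate (p ≈ 0.5) and simple random sampling. The sample sizes are the ones mentioned above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sampled proportion p
    with sample size n (simple random sampling assumed)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (705, 1000, 1048, 2000, 9000):
    print(f"n = {n:5d}: \u00b1{margin_of_error(n):.1%}")
```

This reproduces the claims in the text: roughly ±3% at n = 1,000 (and ±3.7% for the 705-person ICM poll), only a √2 improvement at n = 2,000, and you need n ≈ 9,000 to get down to ±1%.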

Note also that the margin of error only tells us how much similar polls, run under the same conditions, would be expected to vary: a 3% margin of error means that 19 out of 20 such polls would give figures within 3 percentage points of the true value.

Therefore, it’s not only possible but statistically expected that one poll in 20 gives results outside that margin, not because of bad methodology, but because of how sampling works. (Which was not even the case with the YouGov and ICM polls, since they did not differ by more than 3 points from most other polls taken around the same time.)
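The “one poll in 20” intuition is easy to check with a quick simulation. The sketch below assumes a hypothetical true “No” share of 51% and repeatedly draws polls of 1,000 respondents, counting how often a poll lands more than 3 points away from the truth; the numbers here are illustrative, not the referendum data:

```python
import random

random.seed(42)
TRUE_NO = 0.51      # hypothetical true "No" share
N = 1000            # respondents per simulated poll
POLLS = 2000        # number of simulated polls

def run_poll():
    # Each respondent independently says "No" with probability TRUE_NO
    return sum(random.random() < TRUE_NO for _ in range(N)) / N

outside = sum(abs(run_poll() - TRUE_NO) > 0.03 for _ in range(POLLS))
print(f"{outside / POLLS:.1%} of simulated polls missed by more than 3 points")
```

With these settings, somewhere in the neighbourhood of 1 in 20 simulated polls falls outside the 3-point margin, purely through sampling noise.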

Statistically, the fact that a poll yesterday showed yes 49%, no 51% and a poll today shows yes 51%, no 49% may well mean nothing. But as far as public consciousness is concerned, it can mean everything. If we were asked, on the basis of either poll, whether to expect the “Yes” side or the “No” side to prevail, the honest answer would be the same: “We don’t know.”

Nonetheless, in the autumn of 2013 support for Scottish independence was barely above the 25% mark. By February 2014 it was at around 40%, and it changed little until August. Given these figures, Scotland becoming an independent country was not something anyone expected to happen.

Publishing a poll result announcing “yes 51%, no 49%” did something that a result of “yes 49%, no 51%”, equally possible within the same margin of error, would not have done: it changed our default expectations. A “yes 49%, no 51%” result could simply have been read in context as “we don’t know, but probably not”, whereas “yes 51%, no 49%” made the possibility of Scotland becoming a separate country real in the public consciousness; which, in turn, had very real consequences for how the debate was carried out and for what will happen after the “No” vote in the referendum.