Earlier this week, The Sun commissioned a poll about public attitudes to a referendum on changing the UK electoral system from the fusty old first past the post to its mildly controversial next of kin, alternative vote (AV). Watching the Murdochites writhe as the results come out the wrong way proves that, if you can’t prove anything with statistics, you can at least try. One thing you can prove with statistics, however, is that there’s a rather more radical idea when it comes to reforming democracy: replacing referenda with opinion polls.

The Sun found that 69% of people are in favour of a referendum on AV, but that figure drops sharply to 46% once you explain that a referendum is a very expensive dipstick of popular opinion. Indeed, a nationwide vote seems to cost about £80 million—enough to bail out the UK’s beleaguered physics research council STFC, or save nearly 200,000 lives if donated to the most cost-effective charities. It’s an expensive way to ask a question!

However, the question a referendum usually seeks to answer is very simple: does the nation want `x`, yes or no? Usually, again, we go with a simple majority: if 50% + 1 of the population think one thing or the other, that's what we'll do. It's pretty easy to establish whether the percentage support is above or below fifty without bothering to sample everyone, unless the level of support for yes and no is very close to 50%. To understand why, we'll need a few paragraphs of simple statistics.

Statistical uncertainty is a measure of how difficult it is to be sure of a value given a limited number of measurements. If you want to know exactly what the entire population thinks of something, the only way to be 100% sure is to ask them all—to hold a referendum. If you ask fewer people than everyone, your uncertainty rises as the number of people you ask falls. This is one of the phenomena which make voting intention polls unreliable. The others are primarily how you choose participants (which is best done totally randomly, but pollsters often distort this in a perhaps-misguided attempt to improve accuracy because their samples are so tiny), and the fact that a general election, in which over 600 MPs are chosen in as many constituencies whose voters may swing in any direction for all kinds of reasons, is somewhat more complex than a single yes/no decision with a 50% threshold.
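To make the 'uncertainty falls as the sample grows' point concrete, here's a minimal sketch (mine, not from the article) of the standard error of an estimated proportion, which shrinks as 1/√n:

```python
import math

def standard_error(p, n):
    """Standard error of a proportion p estimated from a sample of size n."""
    return math.sqrt(p * (1 - p) / n)

# Uncertainty falls as 1/sqrt(n): quadrupling the sample only halves the error.
for n in (100, 1_000, 10_000, 1_000_000):
    print(f"n = {n:>9,}: standard error = {100 * standard_error(0.5, n):.2f}%")
```

Note the diminishing returns: going from 10,000 respondents to a million buys you only a tenth of the remaining error.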

So, time for some numbers. Say 70% of the population support AV. If you ask only eleven people (picked at random), about one time in thirteen you'll get the referendum result wrong: fewer than half of your sample, i.e. five or fewer of them, will say they're in favour of AV. That's quite surprising—you've asked just eleven people, out of nearly fifty million voters, and you'll be wrong less than 10% of the time. Mad!
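You can check that one-in-thirteen figure exactly with the binomial distribution. This little sketch (my own calculation, not the article's) adds up the probability that a simple majority of an eleven-person sample contradicts 70% true support:

```python
from math import comb

def prob_wrong(p_support, n):
    """Probability that a majority of an n-person random sample says 'no'
    when true support is p_support > 0.5 (binomial tail)."""
    # Wrong result: at most floor(n/2) of the n respondents are in favour.
    return sum(comb(n, k) * p_support**k * (1 - p_support)**(n - k)
               for k in range(n // 2 + 1))

p = prob_wrong(0.70, 11)
print(f"P(wrong) = {p:.4f}  (about 1 in {1 / p:.0f})")
# → P(wrong) = 0.0782  (about 1 in 13)
```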

Still, you might want to get decisions of national importance right more than twelve times out of thirteen, and it does get a bit trickier if the proportion of support is closer to 50:50. Imagine instead that 49% of people support AV…how many people do we need to ask to be 95% sure we won’t do the wrong thing? Well, now our sample of eleven voters will be wrong about 47% of the time—we may as well toss a coin! If we sample more people though, we can do better—in fact, to be 95% sure that we will discern the 1% difference, we’ll need a sample of 10,000 voters—a pretty big survey, but nearly 5,000 times smaller (and correspondingly cheaper) than a nationwide vote. And we only need a survey this big if we’re within 1% of breaking even; for the previous case where 70% of the population support AV, a sample of 10,000 is so unlikely to get it wrong that I actually can’t calculate how unlikely that is. And I have a scientific calculator.
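Assuming the same binomial set-up, the 49% case checks out too. For n = 10,000 the exact binomial terms underflow ordinary floats, so this sketch (mine, not the article's working) falls back on the usual normal approximation for the big sample:

```python
import math
from math import comb

def prob_wrong_exact(p_support, n):
    """Exact probability that a majority of n respondents backs the minority view."""
    # With 49% true support, the 'wrong' result is a majority saying yes.
    return sum(comb(n, k) * p_support**k * (1 - p_support)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def prob_wrong_normal(p_support, n):
    """Normal approximation: P(sample proportion crosses 50%)."""
    se = math.sqrt(p_support * (1 - p_support) / n)
    z = (0.5 - p_support) / se
    return 0.5 * math.erfc(z / math.sqrt(2))

print(f"n = 11:     wrong {prob_wrong_exact(0.49, 11):.0%} of the time")
print(f"n = 10,000: wrong {prob_wrong_normal(0.49, 10_000):.1%} of the time")
```

The eleven-person sample errs about 47% of the time (the coin toss of the text), while 10,000 respondents get it wrong only about 2.3% of the time—comfortably inside the 95% confidence target.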

So, a sample of 10,000 is easily enough to discern popular support for any issue not balanced on a public-opinion knife-edge, and would be significantly cheaper than holding a full-blown referendum. If the result turned out to be very close-run, we could set a statistical uncertainty threshold above which we send out another 10,000 ballots, and/or a point at which we decide an issue is so close to a 50:50 split that we should choose an option on merits other than public adoration.
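That resample-if-close procedure could be written down as a decision rule. This is purely a hypothetical sketch of one way to do it (the article doesn't specify thresholds): declare a result only when the 95% confidence interval for support excludes 50%.

```python
import math

def mini_referendum_verdict(yes, total, z=1.96):
    """Hypothetical decision rule: act only when the 95% confidence
    interval for support lies entirely above or below 50%."""
    p = yes / total
    margin = z * math.sqrt(p * (1 - p) / total)
    if p - margin > 0.5:
        return "yes"
    if p + margin < 0.5:
        return "no"
    return "too close: sample another batch (or decide on other merits)"

print(mini_referendum_verdict(5_300, 10_000))  # clear majority for yes
print(mini_referendum_verdict(5_030, 10_000))  # within the ~1% margin
```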

So, these mini-referenda are significantly cheaper than their full-population counterparts, and very rarely wrong. Should we implement them? This barganacious method of consultation could be used to sample the person on the street's whim far more regularly than nationwide referenda. Currently, most people's interaction with national government is to send it a very confused and almost inaudible message every five years: a cross on a ballot paper next to a candidate could be a vote for, amongst other things, that candidate, the party, the party leader or some of their policies; or against the other candidate, party or policies; or for your favourite colour, or whatever. Either way, it will probably be lost in the noise of the other 50,000 people in your constituency voting. A referendum can get a targeted answer to a specific policy question. Another, more tenuous, possible pro is that being 'specially selected' (at random) to participate, plus the fact that your vote counts rather more than it would in a national referendum, might actually increase turnout.

This technique, just like holding a referendum, obviously isn't appropriate for all policy questions. It's not fair to expect us proles to be up on every nuance of complex issues of the day, and that's why we have professional politicians, economists, generals, scientists and so forth. The prospect of more referenda would necessitate a debate about what kinds of questions it's useful to ask the public. It might also be worth considering whether we should have thresholds for action other than a simple majority. For example, enormous constitutional upheaval might require 66% of people to be behind it, whilst a smaller-scale reform which doesn't affect everyone might be appropriate to pass with 40% support.

This would be a comparatively cheap, statistically sound way to enact some direct democracy. Keen? Mini-referendum, anyone?

### For maths nerds

You may enjoy reading some more about margins of error. An easy-to-remember rule of thumb for small samples of a large population is that the 95% confidence limits are (100/√`n`)%—hence for a sample where `n` = 10,000, √`n` = 100 and the 95% CLs are 1%.
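The rule of thumb comes from the exact 95% margin, 1.96·√(p(1−p)/n), evaluated at the worst case p = 0.5, where 1.96·√0.25 ≈ 0.98 ≈ 1. A quick sketch (my own check) comparing the two:

```python
import math

def margin_rule_of_thumb(n):
    """The (100/sqrt(n))% rule of thumb, in per cent."""
    return 100 / math.sqrt(n)

def margin_exact(n, p=0.5, z=1.96):
    """Exact 95% margin of error for a proportion, in per cent."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (1_000, 10_000):
    print(f"n = {n:>6,}: rule of thumb {margin_rule_of_thumb(n):.2f}%, "
          f"exact {margin_exact(n):.2f}%")
```

For n = 10,000 the rule gives 1.00% against an exact 0.98%—close enough for back-of-envelope work, and conservative for any p away from 0.5.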

Also, there is actually a way of working out how unlikely it is that a sample of 10,000 in a population of 70% AV-lovers would give a result below 50%. The standard error on the mean with a sample that big is 0.46%, so 70 − 50 = 20% is 43.6 standard deviations. This will happen basically never—far too infrequently for a calculator or even some l33t Python to evaluate directly. However, me mate Dan pointed out that you can rephrase the question in terms of the complementary error function and plug it into Wolfram Alpha, which reveals that the probability of an error this large is around 10^−416: one of the few numbers in the Universe smaller than the chance of you finding a molecule of active ingredient in a homeopathic sugar pill.
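You don't strictly need Wolfram Alpha, though: for z this large the Gaussian tail is well approximated by φ(z)/z, and working in log10 keeps the numbers finite. A sketch of that arithmetic (mine, reproducing the figure above under the same normal approximation):

```python
import math

p, n = 0.70, 10_000
se = math.sqrt(p * (1 - p) / n)        # standard error, about 0.46%
z = (p - 0.5) / se                     # about 43.6 standard deviations
# Asymptotic Gaussian tail: P(Z > z) ~ exp(-z^2/2) / (z * sqrt(2*pi)),
# so take log10 of each factor instead of evaluating the underflowing product.
log10_tail = -z * z / (2 * math.log(10)) - math.log10(z * math.sqrt(2 * math.pi))
print(f"z = {z:.1f}, P(wrong) ~ 10^{log10_tail:.0f}")
```

This lands on roughly 10^−416, agreeing with the complementary-error-function route.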

Hey,

I think you have overlooked a key issue surrounding voting: voting trends and regional differences. If, for example, the sample for AV was taken from a Lib Dem constituency, then there is a higher probability that the sample will have a YES outcome even if the overall majority of people wanted NO.

Your equations work if people spread themselves evenly, or you could proportionally select your sample by geographic location.

But then there are the issues of age representation, gender, ethnicity etc., to name but a few. These have to be considered in your sampling.

The best way to test my reasoning is to hang around the Conservative Party Conference and sample 10k people on AV in and around that vicinity, and see whether it matches the referendum in May.

Peace

Craig

Hi Craig! Perhaps I brushed over this slightly above—but I did mention that choosing participants is ‘best done totally randomly’. The idea would be to randomise the sample geographically too, otherwise, as you say, there would be a massive possibility for bias. Basically, you’d want to pick names from the nationwide electoral register out of a hat. This is the beauty of randomisation—if done properly, and with a big enough sample, selection bias is eliminated, leaving only the random sampling error we can calculate.