
Is participatory democracy a lottery? A few thoughts on stratified random sampling

Dr Crispin Butteriss

Crispin is a founding director of Bang the Table and the Chief Practice Officer.

Participatory democracy has been the philosophical driver of community engagement practice since the 1960s. But where does random sampling fit in?

This post is a long time coming. Ever since we got into the business of providing open access online forums for community engagement, I have been confronted with the comment that the results are not “representative” and by inference, not quite valid. There is a growing mood within the world of community engagement to adopt Stratified Random Sampling (SRS) as a component (sometimes the dominant component) of a community engagement strategy.

This post is about why I believe that movement is itself not necessarily valid.

First, I want to knock on the head any thoughts that I am arguing against SRS because I see it as a threat to our business. Stratified random sampling and online community engagement are not necessarily methodologically inconsistent. While our forums are open access, there is no reason why you couldn’t set up a password protected discussion forum for your “stratified random sample” of people from the affected community. My motivation is simply that I am a believer in participatory democracy and open and transparent government. And I’m not sure that SRS always lends itself to meeting the objectives of these twin ambitions.

Second, I want to make it clear that I am not arguing against SRS in all circumstances; quite the opposite, in fact. SRS has its place as part of a research methodology and can be a very useful component of a broader community engagement strategy. I was chatting recently with a colleague who facilitated a session with residents invited to a public meeting through an SRS process. She reported that the residents were so pleased to be personally invited by their Council that they felt a responsibility to take the process seriously, and they left with a greater sense of inclusion in their community. Both excellent outcomes.

BUT, and it is a deliberately capitalised BUT, I will argue here that SRS has its own methodological weaknesses and these need to be considered before it is adopted by the industry as a new standard.

My principal objection to the notion of “representativeness” when it comes to community engagement is that I believe it misunderstands the principal role of the community engagement practitioner – which is to work within a socio-political context to construct sound social policy. This is fundamentally different from the principal role of a social or market researcher, which is to work within a semi-rational-empirical context. Both models are useful in developing social policy. The latter sees the conversations that take place with the community as part of a research task, whereas the former sees the conversations with the community as a more complex blend of small “p” politics, research, public relations, community advocacy, and much, much more.

Why is SRS suddenly so popular?

Let’s start by looking at the reason that is usually given for needing the views of a more “representative” sample of the population to ensure the validity of a consultation.

The most common argument I have come across is that SRS ensures that we hear from a broad range of voices rather than all of the usual suspects.

There are at least two ways of looking at this argument depending whether one is an optimist or a pessimist:

  1. It is about finding a methodology that helps democracy to reach beyond the inner-sanctum of government and the various lobby groups to the broader community; or
  2. It is about finding a methodology that makes life easier for governments and bureaucracies by circumventing all of the people they find it difficult to get along with.

At first glance the first argument seems pretty reasonable. It isn’t. I’ll explain why in a moment.

At first glance the second argument seems pretty cynical. It is. But it is also pretty realistic. The most common question I am asked when talking to organisations about using online tools is: “Isn’t this just another way for the people we always hear from to have their say?” Followed by, “I already know what they’re going to say. I want to hear from other people.”

Keep in mind that the “people we always hear from” in this discussion are both the heroes of democracy and the metaphorical splinter under the fingernail of closed organisations. They are the regular letter writers – to the local paper and the organisation in question. So much so that they occasionally find themselves on the list of correspondents never to be corresponded with again. They are willing to attend every public meeting to keep an eye on what is going on and report back to their networks. They are the people who take up endless hours of staff time at the customer service counter. They understand the Freedom of Information rules and public sector governance arrangements better than the vast majority of government employees would ever want to.

They can sometimes be irritating, rude, confronting, antagonising and generally hard to get on with. BUT, and this is a very big BUT indeed, they are the bastions of democracy. We all owe them a great debt. Why? Because the rest of us, for the most part, can’t really be bothered to keep a particularly close eye on the workings of our public institutions. They continually challenge organisational transparency. They ask for evidence and demand well-reasoned justifications for decisions. So although they can be confronting and tiresome, they keep the people who run our institutions honest. They should probably be on the public payroll – a bit like mystery shoppers for the public sector.

So the question we need to consider is about motivation. Do we want to hear from new people OR do we NOT want to hear from these people who we find irritating? I put it to you that in the vast majority of cases, the latter is a more accurate reading of the motivational forces at play in most organisations.

By this reading, SRS, rather than being used as a tool of participatory democracy is actually being used as a cynical tool to stifle the voices of the people who make our democracy vital – in the full sense of the word.

I return now to the idea that SRS helps democracy reach beyond the inner-sanctum to a broader community.

As I said, at first glance this seems like a pretty reasonable argument. But is it really? I don’t think so.

First, it assumes that there is necessarily something wrong with people and organisations directly lobbying their government organisations. I think this is profoundly wrong in a free democracy. We have the right as individuals and organisations to present our case to our political representatives and public sector organisations. It should be an essential element of organisational transparency. Those organisations would be vastly poorer if we did not have this right.

Second, it assumes that SRS is the best way to open up the conversation. And here we come to the crux of the issue. What is SRS, and how is it used – and how should it be?

The Australian Bureau of Statistics has the following to say about SRS:

A general problem with random sampling is that you could, by chance, miss out a particular group in the sample. However, if you form the population into groups, and sample from each group, you can make sure the sample is representative.

In stratified sampling, the population is divided into groups called strata. A sample is then drawn from within these strata. Some examples of strata commonly used by the ABS are States, Age and Sex. Other strata may be religion, academic ability or marital status.

The ABS goes on to note that “Stratification is most useful when the stratifying variables are simple to work with, easy to observe and closely related to the topic of the survey.”
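The mechanics the ABS describes can be sketched in a few lines. This is a minimal illustration only – the population, strata and sample sizes are invented for demonstration and have nothing to do with any ABS tooling:

```python
import random
from collections import defaultdict

def stratified_sample(population, strata_key, n_per_stratum, seed=None):
    """Group the population into strata, then draw a random sample
    from within each stratum so that no group is missed by chance."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in population:
        strata[strata_key(person)].append(person)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample

# Invented population: 50 people in each of six age-band/sex strata.
population = [
    {"id": i, "age_band": band, "sex": sex}
    for i, (band, sex) in enumerate(
        (b, s)
        for b in ("18-34", "35-54", "55+")
        for s in ("F", "M")
        for _ in range(50)
    )
]

# Five people from each stratum: every group is guaranteed a voice.
sample = stratified_sample(population, lambda p: (p["age_band"], p["sex"]), 5, seed=1)
```

The guarantee is purely demographic: every stratum contributes, but nothing in the draw says anything about the spread of values within it – which is exactly the weakness discussed below.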

This, in my view, is a key weakness of SRS in a community engagement context. The most usual strata for community engagement purposes are sex and age, though they can occasionally include education and/or income. My question is, are these the most useful strata for conversations about public policy?

My view is that a person’s age or sex is about as useful as their height or hair colour as an indicator of their values – and it is values, after all, not age, that underlie all public policy. (Note: I differentiate public policy development from service delivery planning – where age and sex are clearly important considerations.)

It would be more useful to stratify the population according to a mixture of religion, ethnicity, education, political affiliations and television viewing habits. You would be more likely to get a good spread of value positions. In this context SRS could produce some really interesting ideas. I would particularly like to see these variables used in a market research context, rather than a more deliberative context.

That’ll do for now. Interested to hear what any/everyone has to say. Happy to be challenged or proven wrong. Always up for a dialobate! (dialogue-debate)

Thanks for getting all the way to bottom! Subscribe to our monthly digest newsletter if you’d like to be kept up to date about community engagement practice globally. Take a look at our two product websites: EngagementHQ if you need a complete online engagement solution, and BudgetAllocator if you need a participatory budgeting solution. Or get in touch if you have a story idea you think is worth sharing.

Published Date: 8 July 2009 Last modified on May 31, 2017




  1. rlubensky says:

    Hi Crispin,

    You've written a provocative post and I agree that it is worthwhile to explore the merits of SRS. You may know that I wrote the software to do the SRS for the Citizens' Parliament. We stratified on gender, age, education, locality (1 per federal HoR electorate) and aboriginality.

    My interest is in public deliberative processes that generate policy recommendations that are acceptable to the whole community, bureaucracy and elected government. Of course, there is a place and a need for stakeholder processes for which SRS would be silly. But I'd argue that some difficult issues are better addressed with the former than the latter. A process like a Citizens' Jury overcomes the duality by bringing stakeholder interests in to persuade the minipublic.

    Confidence in public deliberation processes is a social construction, and one way to build that confidence is through SRS. Of course, it would be technically better with conscription and belief-survey (as you suggest), but we can't have those without upsetting people and defeating our purpose! So we go with what we can do and benefit by improved deliberative potential and in public perception of the process.

    I agree that in absolute terms age and gender mean little. But if we can't reliably survey belief systems, then we can use age and gender as second-order selectors, because beliefs tend to cluster with them. So stratification helps spread the diversity.

    Yes, SRS tends to reduce the influx of "usual suspects". Certainly the forum would benefit by their hard questions. But invariably they come with strongly-held prescriptions and an unwillingness to behave civilly in a deliberative forum.

    There is an additional thing that we ask of deliberating minipublics, especially in a small jury: we ask them to look beyond just their private views but to the diversity of views in the community that will be impacted by their recommendations. If they are randomly-selected, they identify themselves as representatives, and are more inclined to be generous and exploratory in their consideration of different perspectives.

    I agree with you that the usual suspects are well-meaning and should be able to make a contribution. The question is how? The solution we tried at the Citizens' Parliament was to have a public-access online discussion forum in parallel and informing the deliberating minipublic. (It needed promotion, though…)


  2. Hi Ron,

    thanks for getting in touch. And thanks for making such a thoughtful reply to my post. I'll start with an admission, I've noticed that unless posts are a little provocative they tend to be ignored in much the same way as our community forums are ignored when the issues are benign. And there is nothing worse than being ignored! The post was framed as a beginning rather than an ending of what I hoped would be a conversation about this issue.

    From reading your reply I think we are pretty much in furious agreement about just about everything. I've always been a big fan of Citizens' Jury processes precisely because of their ability to overcome what can be built up as a false dichotomy between "open-access" and "by-invitation" processes. The deliberative nature of this kind of process is critical to the integrity of the policy recommendations. My major concern is more with the use of SRS in non-deliberative polling where there is no opportunity for learning. There seems too high a risk of "push-polling" being wrapped up in the guise of "good science".

    Your comment that we can't ask people questions about their values to help inform the sample without upsetting them has my curiosity piqued. Has there been much research around this issue? Australians aren't too keen to discuss religion and closely held personal beliefs, so I imagine that this must be something of a conundrum.

    An issue I didn't raise in the post is the question of how random a random sample can ever be given the ability of people to "opt-out" of the process at the outset. Has there been much research around the potential for bias in the sample as a result of the willingness or otherwise of individuals to even begin a conversation with the researcher?

    I note your comment regarding the promotion of online discussion forums. This is something we confront every day as part of our business. It can be hard, particularly when the issue being discussed doesn't have an obvious and immediate relevance to an individual. We find it much easier to attract attention to forums about infrastructure projects, for example, than Annual Management Plans – which potentially have a much greater impact. We're experimenting with social media and tend to recommend mainstream media as the main driver of traffic.

    BTW. I'm in Melbourne – Brunswick East. Would love to catch up personally for a metaphorical coffee to chat about this and the Citizens Parliament. I'm the local IAP2 contact and think our membership would be very interested in some sort of forum.

  3. rlubensky says:

    Hi Crispin,

    I don't think a deliberative event should be presented as an experiment, research exercise or a newfangled focus group. While the organiser (which may include academics, okay) is there to design and facilitate a conversation amongst participants, its purpose must be to deliver a recommendation to an authority like a govt dept. Any conversation with a researcher is a distraction!

    SRS works in two stages. First, a lot of random invitations are sent out, and a short survey is included. They tick off gender, age, etc. Then the second stage is the stratified random selection from those who accept the invitation, based on their survey submissions.
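The two-stage process described above can be sketched roughly as follows. This is only an illustration under invented assumptions – the field names (`accepts`, `sex`, `age_band`), the roll and the quotas are hypothetical, and this is not the Citizens' Parliament software itself:

```python
import random

def two_stage_srs(electoral_roll, n_invites, quotas, seed=None):
    """Stage 1: post a large random batch of invitations, each with a
    short demographic survey. Stage 2: stratified random selection from
    those who accept, based on their survey answers, until each
    stratum's quota is filled."""
    rng = random.Random(seed)
    invited = rng.sample(electoral_roll, n_invites)       # stage 1: random invitations
    acceptors = [p for p in invited if p["accepts"]]      # opt-in bias enters here
    rng.shuffle(acceptors)                                # stage 2: random order...
    filled = {stratum: 0 for stratum in quotas}
    selected = []
    for person in acceptors:                              # ...filtered by strata quotas
        stratum = (person["sex"], person["age_band"])
        if filled.get(stratum, 0) < quotas.get(stratum, 0):
            selected.append(person)
            filled[stratum] += 1
    return selected

# Hypothetical 200-person roll in which every even-numbered person accepts.
roll = [
    {"id": i,
     "accepts": i % 2 == 0,
     "sex": "F" if i % 4 < 2 else "M",
     "age_band": "18-34" if i < 100 else "35+"}
    for i in range(200)
]
quotas = {("F", "18-34"): 2, ("M", "18-34"): 2, ("F", "35+"): 2, ("M", "35+"): 2}
panel = two_stage_srs(roll, n_invites=100, quotas=quotas, seed=7)
```

Note that the final panel is only ever drawn from acceptors – which is the self-selection question raised earlier about how random the sample can really be.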

    The problem of asking them questions about their beliefs (eg. who they voted for, social capital links, priority between individual or community pursuits) is that cynics would not believe that the selection is random!! I can reveal that many at the Citizens' Parliament thought that SRS is a fancy way of rigging the draw, even though that is completely untrue.

    The side effect of asking citizens to question the authority of experts and politicians is that they also question the designers of deliberative events! Mind you, in some cases (eg. Gordon Brown's "citizens' juries") that would be a good thing.

    You raise the question of attrition after selection. It's a challenge for accurate stratification, especially for last minute drop-outs. Go to this page to find the articles about how we did it at the CP:

    I'm an IAP2 member. Feel free to call me anytime.

  4. Hi Ron,

    Has there been any research surveying participants after an event to test the hypothesis that beliefs and values tend to cluster within age/sex cohorts, and/or that the sample demographics reflected other values-based strata like religious affiliations, ethnicity, education and voting preferences? It would be interesting to know whether there is any sample skewing, in terms of the willingness to participate, brought about by the socio-cultural background of the participants.


  5. rlubensky says:

    Sorry for delay in response. I'd be interested in such research findings too! Certainly, opinion polls show voting patterns (an indirect measure of values) varying on age, education, etc. But I'm sure you're after something a bit more psychologically rigorous.

    I think your second point is so crucial, that is what we miss in those who don't participate. When Fishkin ran his Tomorrow's Europe Deliberative Poll, I wrote a scathing post just on that point. However, preliminary work by John Dryzek on our CP is perhaps surprisingly not showing a strong communitarian skew to the participants who accepted the invitation. And there were still plenty of cynics amongst participants, which is probably a good thing.

  6. Anonymous says:

    I think we are all missing the point.
    Bang the Table is a cost-effective tool that is useful for informing the community on issues and gathering public opinion from those who opt in to have their say.
    It cannot replace serious survey research, and I am concerned at times that the voting mechanism can paint a false picture.

  7. Matt Crozier says:

    Hi Simon

    I think the point Crispin was trying to make is that 'serious survey research' is also completely beholden to those who opt to have their say. There is no way of making people take part and in fact market research seriously annoys many of the people it is done to. I am not sure how representative those who agree to take part could ever be.

    The ability to show that the sample comprises people with different ages, incomes etc is false comfort as these factors fail to recognise the different values people hold.

    In my view market research and community engagement can complement each other, but the danger is when people try to pass one off as the other.

    Just for the record the voting mechanism on Bang the Table sites is optional and can be switched off by clients according to their wishes and needs. However, I agree that care always needs to be taken when interpreting results.