Is participatory democracy a lottery? A few thoughts on stratified random sampling


Participatory democracy has been the philosophical driver of community engagement practice since the 1960s. But where does random sampling fit in?

This post is a long time coming. Ever since we got into the business of providing open access online forums for community engagement, I have been confronted with the comment that the results are not “representative” and by inference, not quite valid. There is a growing mood within the world of community engagement to adopt Stratified Random Sampling (SRS) as a component (sometimes the dominant component) of a community engagement strategy.

This post is about why I believe it is this movement, rather than open access engagement, that is not necessarily valid.

First, I want to knock on the head any thoughts that I am arguing against SRS because I see it as a threat to our business. Stratified random sampling and online community engagement are not necessarily methodologically inconsistent. While our forums are open access, there is no reason why you couldn’t set up a password protected discussion forum for your “stratified random sample” of people from the affected community. My motivation is simply that I am a believer in participatory democracy and open and transparent government. And I’m not sure that SRS always lends itself to meeting the objectives of these twin ambitions.

Second, I want to make it clear that I am not arguing against SRS in all circumstances; quite the opposite, in fact. SRS has its place as part of a research methodology and can be a very useful component of a broader community engagement strategy. I was chatting recently with a colleague who facilitated a session for residents invited to a public meeting through an SRS process. She reported that the residents were so happy to be personally invited by their Council that they felt a responsibility to take the process seriously, and they left with a greater sense of inclusion in their community. Both excellent outcomes.

BUT, and it is a deliberately capitalised BUT, I will argue here that SRS has its own methodological weaknesses and these need to be considered before it is adopted by the industry as a new standard.

My principal objection to the notion of “representativeness” when it comes to community engagement is that I believe it misunderstands the principal role of the community engagement practitioner – which is to work within a socio-political context to construct sound social policy. This is fundamentally different from the principal role of a social or market researcher, which is to work within a semi rational-empirical context. Both models are useful in developing social policy. The latter sees the conversations that take place with the community as part of a research task, whereas the former sees the conversations with the community as a more complex blend of small “p” politics, research, public relations, community advocacy, and much, much more.

Why is SRS suddenly so popular?

Let’s start by looking at the reason that is usually given for needing the views of a more “representative” sample of the population to ensure the validity of a consultation.

The most common argument I have come across is that SRS ensures that we hear from a broad range of voices rather than all of the usual suspects.

There are at least two ways of looking at this argument, depending on whether one is an optimist or a pessimist:

  1. It is about finding a methodology that helps democracy to reach beyond the inner-sanctum of government and the various lobby groups to the broader community; or
  2. It is about finding a methodology that makes life easier for governments and bureaucracies by circumventing all of the people they find it difficult to get along with.

On first glance the first argument seems pretty reasonable. It isn’t. I’ll explain why in a moment.

On first glance the second argument seems pretty cynical. It is. But it is also pretty realistic. The most common question I am asked when talking to organisations about using online tools is… “Isn’t this just another way for the people we always hear from to have their say?” Followed by, “I already know what they’re going to say. I want to hear from other people.”

Keep in mind that the “people we always hear from” in this discussion are both the heroes of democracy and the metaphorical splinter under the fingernail of closed organisations. They are the regular letter writers – to the local paper and the organisation in question. So much so that they occasionally find themselves on the list of correspondents never to be corresponded with again. They are willing to attend every public meeting to keep an eye on what is going on and report back to their networks. They are the people who take up endless hours of staff time at the customer service counter. They understand the Freedom of Information rules and public sector governance arrangements better than the vast majority of government employees would ever want to.

They can sometimes be irritating, rude, confronting, antagonising and generally hard to get on with. BUT, and this is a very big BUT indeed, they are the bastions of democracy. We all owe them a great debt. Why? Because the rest of us, for the most part, can’t really be bothered to keep a particularly close eye on the workings of our public institutions. They continually challenge organisational transparency. They ask for evidence and demand well-reasoned justifications for decisions. So although they can be confronting and tiresome, they keep the people who run our institutions honest. They should probably be on the public payroll – a bit like mystery shoppers for the public sector.

So the question we need to consider is about motivation. Do we want to hear from new people OR do we NOT want to hear from these people who we find irritating? I put it to you that in the vast majority of cases, the latter is a more accurate reading of the motivational forces at play in most organisations.

By this reading, SRS, rather than being used as a tool of participatory democracy, is actually being used as a cynical tool to stifle the voices of the people who make our democracy vital – in the full sense of the word.

I return now to the idea that SRS helps democracy reach beyond the inner-sanctum to a broader community.

As I said, on first glance, this seems like a pretty reasonable argument. But is it really? I don’t think so.

First, it assumes that there is necessarily something wrong with people and organisations directly lobbying their government organisations. I think this is profoundly wrong in a free democracy. We have the right as individuals and organisations to present our case to our political representatives and public sector organisations. It should be an essential element of organisational transparency. Those organisations would be vastly poorer if we did not have this right.

Second, it assumes that SRS is the best way to open up the conversation. And here we come to the crux of the issue. What is SRS, how is it used, and how should it be used?

The Australian Bureau of Statistics has the following to say about SRS:

A general problem with random sampling is that you could, by chance, miss out a particular group in the sample. However, if you form the population into groups, and sample from each group, you can make sure the sample is representative.

In stratified sampling, the population is divided into groups called strata. A sample is then drawn from within these strata. Some examples of strata commonly used by the ABS are States, Age and Sex. Other strata may be religion, academic ability or marital status.

The ABS goes on to note that “Stratification is most useful when the stratifying variables are simple to work with, easy to observe and closely related to the topic of the survey.”
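For readers who like to see the mechanics, here is a minimal sketch of what that description amounts to in practice, written in Python with pandas. The toy data, column names, strata and sampling fraction are purely illustrative, not a prescription for how any particular agency does it.

```python
import pandas as pd

def stratified_sample(population: pd.DataFrame, strata: list,
                      fraction: float, seed: int = 0) -> pd.DataFrame:
    """Draw the same fraction at random from within every stratum."""
    # groupby() forms the strata; .sample() then draws within each one,
    # so no group can be missed by chance.
    return population.groupby(strata).sample(frac=fraction, random_state=seed)

# A toy population frame, purely for illustration.
population = pd.DataFrame({
    "person_id": range(1000),
    "age_group": (["18-34", "35-54", "55+"] * 334)[:1000],
    "sex": ["female", "male"] * 500,
})

# e.g. a 2% sample stratified by the ABS-style variables quoted above
sample = stratified_sample(population, ["age_group", "sex"], fraction=0.02)
print(sample.groupby(["age_group", "sex"]).size())  # every stratum is present
```

Nothing in the mechanics is specific to age and sex; the strata are simply whichever grouping variables you choose to pass in, which is exactly where the argument below comes in.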

This, in my view, is a key weakness of SRS in a community engagement context. The most usual strata for community engagement purposes are sex and age, though they can occasionally include education and/or income. My question is, are these the most useful strata for conversations about public policy?

My view is that a person’s age or sex is about as useful as their height or hair colour as an indicator of their values – and it is values, after all, not age, which underlie all public policy. (Note: I differentiate public policy development from service delivery planning, where age and sex are clearly important considerations.)

It would be more useful to stratify the population according to a mixture of religion, ethnicity, education, political affiliations and television viewing habits. You would be more likely to get a good spread of value positions. In this context SRS could produce some really interesting ideas. I would particularly like to see these variables used in a market research context, rather than a more deliberative context.
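Purely to illustrate that point, the same sampling step could be run over a composite, value-related stratum instead of age and sex. The frame below is hypothetical: the column names and categories are stand-ins for whatever value-related variables were actually collected, and in practice sparse combinations would need to be merged or weighted.

```python
import pandas as pd

# A hypothetical frame of value-related variables (names and categories
# are illustrative only; real ones would come from the actual survey frame).
value_frame = pd.DataFrame({
    "person_id": range(600),
    "religion": ["none", "christian", "other"] * 200,
    "education": ["secondary", "tertiary"] * 300,
    "political_affiliation": ["left", "centre", "right"] * 200,
})

# Cross-classifying the variables gives one stratum per combination of values,
# so the sample spreads across value positions rather than demographics.
value_strata = ["religion", "education", "political_affiliation"]
values_sample = value_frame.groupby(value_strata).sample(frac=0.05, random_state=0)
```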

That’ll do for now. Interested to hear what any/everyone has to say. Happy to be challenged or proven wrong. Always up for a dialobate! (dialogue-debate)

Thanks for getting all the way to the bottom! Subscribe to our monthly digest newsletter if you’d like to be kept up to date about community engagement practice globally. Take a look at our two product websites: EngagementHQ if you need a complete online engagement solution, and BudgetAllocator if you need a participatory budgeting solution. Or get in touch if you have a story idea you think is worth sharing.

Published Date: 8 July 2009 Last modified on October 9, 2018
