The argument about quality versus quantity in online participation is a perennial one.
What exactly defines quality?
And how can one person’s participation be measurably more valuable, and presumably of higher quality, than another’s?
It’s a question that practitioners and academics, particularly those with an interest in “deeper” deliberative dialogue, have been pondering for well over a decade.
Over the last few weeks I have facilitated four Round Table discussions with some 50 online engagement practitioners in Sydney and Melbourne. The conversations covered a breadth of issues, but something that comes up again and again is the desire to lower the barriers to participation in order to drive up participation rates.
A noble goal and something we certainly strive for at Bang the Table. We see the introduction of online engagement practice to the methodological toolbox of community engagement practitioners as the most significant single change in the industry EVER to drive up potential citizen participation rates.
But, that is not to say that all participation is equal.
I use this graph to spell out the issue.
If we map the number of participants on the horizontal axis and the time it takes to participate on the vertical axis (using log scales), we get a curve that is theoretically asymptotic to both the horizontal and vertical axes.
In other words, as the time taken to participate approaches zero, the potential number of participants approaches infinity (or 100% of your target stakeholder group).
And, as the time taken to participate approaches infinity, the potential number of participants approaches zero.
This theoretical model seemed to make sense to the Round Table participants.
If you ask people to commit 10 minutes of their time to joining the consultative process, you will get a lot more people participating than if you ask them to commit 30 minutes, one hour etc.
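That intuition can be sketched as a simple inverse model. To be clear, everything here is illustrative: the functional form (participants proportional to 1/time) and the constant `k` are assumptions made for the sake of the sketch, not figures drawn from the Round Tables. On log-log axes, this model is a straight line of slope −1, asymptotic to both axes as described above.

```python
def expected_participants(minutes: float, k: float = 10_000) -> int:
    """Hypothetical time/participation (T/P) model: the expected number
    of participants falls inversely as the time commitment rises.
    k is an arbitrary scaling constant chosen purely for illustration."""
    return round(k / minutes)

# As the time asked of participants grows, expected numbers shrink:
for t in (10, 30, 60):
    print(f"{t} min commitment -> ~{expected_participants(t)} participants")
# 10 min commitment -> ~1000 participants
# 30 min commitment -> ~333 participants
# 60 min commitment -> ~167 participants
```

The exact numbers are meaningless; the shape of the relationship is the point.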
Now, it seems pretty clear to me that if someone takes five or ten minutes to learn and think about an issue, you will get a lesser quality of input than if they take thirty minutes or an hour, much less a week, month or year.
Input that takes minutes tends to be “top of head” – unless the person is already a studied expert on the subject matter. Quick polls, for example, give you a sense of where the community stands on an issue, but absolutely no sense of why or whether they are open to a conversation.
Input that takes a little more time, by requiring some learning along the way, tends to be better thought through, displaying a deeper appreciation of the complexity of the issues and tradeoffs at play. This is a generalisation, but I believe it is a fair one.
I hasten to point out that there is no natural polarity between two extremes. There is no dichotomy between “deep” and “shallow” engagement: there is a continuum along the curve.
And everything along the curve has a legitimate place in the pantheon of community engagement objectives and methodologies. The important thing is to know what your engagement objectives are and therefore where you see your community input resting along the curve. Because when you know this, you can devise your engagement methodology and choose your online tools.
Different online tools and methodologies require more, or less, commitment on the part of the individual to participate in, or influence, the creation or modification of public policy.
This graph presents a VERY rough idea of where different online feedback (or engagement) mechanisms might sit on the T/P curve. I should emphasise at this point, that this is a working model. I welcome discussion in the comments section below.
I have located “Quick Polls” on the farthest reaches of the horizontal axis. These are the sorts of polls you see a lot on media websites, because the results become a story in their own right! Very low barrier to entry: often no need to register for the site, and a very low time commitment.
On the other hand, I have located “social networks” on the farthest reaches of the vertical axis. Think Facebook, LinkedIn, and dedicated “community of practice” social networks (using services like Ning). For these to be effective, they require a LOT of commitment by members; otherwise they all too often deteriorate into desert-like spaces populated by tumbleweeds rather than people.
In between these two extremes sits a plethora of “ideation” platforms, surveys, forums, and story-gathering tools.
I’ve put surveys somewhere in the middle, but the reality is that participation rates completely depend on the length of the survey. We all know that the longer the survey, the lower the completion rate.
I’ve located forums next to social networks, but this assumes that participants return to the forum a number of times to debate and dialogue with other forum members. Many don’t. Many leave a single comment, or maybe a couple, and never return. This would push forums well to the right along the curve.
Finally, the location of a particular tool along the curve is not fixed and not absolute. You can move tools along the curve by making participation more or less attractive. For example, by offering incentives to participate, or by presenting a particularly compelling argument to participate.
Thanks for getting all the way to the bottom! Subscribe to our monthly digest newsletter if you’d like to be kept up to date about community engagement practice globally. Take a look at our two product websites: EngagementHQ if you need a complete online engagement solution, and BudgetAllocator if you need a participatory budgeting solution. Or get in touch if you have a story idea you think is worth sharing.