Are all consultations equal?
I feel on safe ground in claiming that one of the main purposes of a consultation is to generate a flow of information from the consultees to the sponsor. Certainly, there can be other reasons for consulting. But it would be a very odd consultation that didn’t seek to gain information about the topic in question. Yet, do all consultations do an equally good job? Are they all equally efficient?
It seems reasonable to define the efficiency of a consultation process in terms of the size of that information flow. The question is how to measure a flow of information. I can offer two routes to deriving the answer: the scenic and the direct. The scenic route uses the analogy of ecological diversity, while the direct route uses a formula from information theory. Both routes lead to the same destination, namely a precise, standardised and intuitive measure of the efficiency of consultation processes.
A scientific study of consultation: scenic or direct route?
The analogy of ecological diversity is important because it opens the way for scientific study of consultation. This, in turn, should lead to consultations which are better for the consultees, because we will make more efficient use of their time and energy; better for the sponsors, because we will be able to demonstrate that we have delivered value for money; and better for us as practitioners, because we will know how efficient we have been.
Taking the scenic route first, imagine that the consultation topic is represented by an inaccessible mainland, with a series of islands offshore to which we do have access. These islands represent different consultations. Now imagine that the mainland is populated by a community of different species, some of which have managed to migrate to the offshore islands. For the purposes of this analogy, the species are the issues associated with the topic. For example, if the topic is policing in a city, the issues could include response times, use of firearms, visible policing, location of police stations and so on.
The mainland can’t be accessed directly so we have to infer the composition of its community from what we find on the islands. Because of the vagaries of the local environment some island communities will have more species than others. Similarly, on a given topic, some consultation processes produce more issues than others. It seems natural to say that the islands with more species have more diverse communities than those with fewer species. Likewise, it seems natural to say that consultation processes that produce more issues are more efficient than those with fewer issues.
But hold on!
When ecologists study communities of animals and plants they often ask the question: ‘do a few species predominate, while others are rare?’ (The answer is usually ‘yes’.) Similarly, we can ask the question of communities of issues: ‘do a few issues predominate, while others are rare?’ (Same answer.) In real life there is bunching – a few species/issues predominate, while others are rare. A simple count of the number of different species (the ‘species richness’ in ecological terms) will ignore bunching and give a misleading picture of diversity. A simple count of the number of issues raised (the ‘issue richness’) will give a misleading picture of efficiency.
Consider two different consultations – A and B – on the same topic, each with 100 responses. In the case of consultation A, let’s say that 91 people raised issue 1, and the other nine people each raised a different issue, making 10 in all. Let’s say that B also produced 10 issues, but each of them was raised by 10 people. The issue-richness is the same in both cases. But can we really say that both consultation processes are equally informative and thus efficient? The richness is the same in both cases, but in one case the issues are distributed evenly, while in the other they are not. We need to take both richness and evenness into account.
The ‘effective number of species’
Although ecologists have more than 50 tools to measure diversity there is one in particular which suits our purpose, namely the ‘effective number of species’, which gives equal weight to both richness and evenness. Advantages of this measure include taking bunching into account and providing an intuitive, standardised way to compare diversities.
Adapting this tool to our own needs, we can use the ‘effective number of issues’ (ENI) to measure the diversity of the issues – which is the amount of the information flow, which is the efficiency of the consultation. The ENI works with both quantitative and qualitative processes, open and closed questions. All we need is a list of the issues produced and the number of times each one was raised or responded to.
Actually, that should read ‘the valid number of times/responses’, and the following definition of ‘valid’ might make you feel uncomfortable at first.
I define ‘valid’ as meaning ‘expressing a definite view’. Conversely, noncommittal responses such as ‘No answer’, ‘Don’t know’, ‘Not sure’, ‘Not enough information’ and ‘Neither/Nor’ are invalid. Whether the views are positive or negative, supportive or hostile, very strong, fairly strong, fairly weak or very weak – none of this is relevant to measuring the efficiency of the consultation process. (This might make you feel uncomfortable, because we are ignoring both the strength and the direction of the views expressed.) But all that matters for the purpose of measuring efficiency is the amount of information, not its content.
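To illustrate, here is a minimal sketch of tallying only the valid responses. It assumes each raw response is labelled either with the issue it raises or with one of the noncommittal labels above; the function name `valid_counts` is mine, chosen for illustration:

```python
from collections import Counter

# Responses that express no definite view; these are excluded from the tally.
NONCOMMITTAL = {"No answer", "Don't know", "Not sure",
                "Not enough information", "Neither/Nor"}

def valid_counts(responses):
    """Count the number of valid times each issue was raised."""
    return Counter(r for r in responses if r not in NONCOMMITTAL)

# Strength and direction are deliberately ignored: a hostile mention of
# "Response times" counts just as much as a supportive one.
print(valid_counts(["Response times", "Don't know",
                    "Response times", "No answer", "Use of firearms"]))
```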
Of course, by going for the amount we are not abandoning the content. The strength and direction of what people say about a topic is valuable intelligence and will, we hope, be used to guide all sorts of important decisions.
But what I am advocating is a trade-off in which we put content on one side in exchange for a formidable tool for quantifying efficiency in a precise and intuitive way. That is what the ENI, focusing on amount, provides. Maybe one day someone will come up with a standardised way to measure meaning and content, at which point I will happily retire the ENI, but that day has not yet come.
Information theory and efficient consultation processes
Unlike the scenic, the direct route to measuring the flow of information in a consultation can be expressed very succinctly, if not transparently. Borrowing from information theory, I define the information flow, and therefore the efficiency, of a consultation process in terms of the standard measure of information which is the Shannon entropy.
More precisely, the Effective Number of Issues is the exponential of the Shannon entropy of the issues. The formula is ENI = exp(H′), where H′ = −∑ pₓ ln(pₓ) and pₓ is the number of valid times issue x was raised, expressed as a proportion of the total. Ecologists use the self-same formula to measure the effective number of species, which is why the two routes – scenic and direct – end up in the same place.
Doing the calculations is quite straightforward. (I have published a step-by-step guide to the ENI calculations elsewhere.)
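As a sketch of those calculations, the formula can be written out in a few lines of Python and applied to the hypothetical consultations A and B from earlier (the function name `eni` is mine, chosen for illustration):

```python
import math

def eni(counts):
    """Effective Number of Issues: the exponential of the Shannon
    entropy of the valid response counts for each issue."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in proportions)
    return math.exp(entropy)

# Consultation A: 91 people raised issue 1; nine other issues raised once each.
consultation_a = [91] + [1] * 9
# Consultation B: 10 issues, each raised by 10 people.
consultation_b = [10] * 10

print(round(eni(consultation_a), 1))  # → 1.6
print(round(eni(consultation_b), 1))  # → 10.0
```

Note that a single-issue referendum, where every valid response concerns the same issue, gives the minimum value of 1.0, as discussed below.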
So what can the ENI do?
Looking more closely, the formula for the ENI gives guidance on how to increase efficiency – increase issue richness or issue evenness (or both) and, in all likelihood, increase the effective number of issues.
It also provides the minimum and maximum possible values for the ENI. The minimum is 1.0, which is the value from a single issue referendum, making this the least efficient of all the ways to consult the public. (Brexit? I make no comment!) The ENI has no theoretical maximum, but in practice the highest ENI I have seen is 251.8, while the median value from 83 public consultations for which I have data is 19.3.
Also, the ENI is a standardised measure. This has the powerful consequence that all consultations with the same ENI are equally efficient – whether quantitative or qualitative, conducted in the southern hemisphere or the northern, using open or closed questions, and no matter what the subject matter might be. This means that we can directly compare very different consultations. For example, we can now confirm our earlier instinct about the two hypothetical consultations, A and B. Their respective ENIs are 1.6 and 10.0. Can we, though, say that B is in fact 6¼ times as efficient as A (i.e. 10.0 divided by 1.6)? Yes, we can. The ENI behaves intuitively: an ENI of 20, for instance, is twice as efficient as one of 10, and four times as efficient as one of 5.
Now that we have a tool to measure efficiency we can use it to increase our scientific understanding of consultation and improve our practice. And that is the justification for my claim that measuring efficiency can lead to benefits for consultees, sponsors and practitioners alike.
Dr John May (MA, PhD, DMS, MCMI, CMRS) is a social and market researcher and consultant in the voluntary, public and private sectors.
Header photo: Chuttersnap/unsplash/cc