August 23, 2016
How to get a better bang for the taxpayers’ buck in all sectors, not only Indigenous programs
Peter Siminski, University of Wollongong
A report released today by the Centre for Independent Studies (CIS) has drawn attention to the lack of quality evaluations being conducted on Indigenous programs.
The report identified 1082 Indigenous-specific programs delivered by government agencies, Indigenous organisations, not-for-profit NGOs and for-profit contractors. It found 92% have never been evaluated to see if they are achieving their objectives.
While it oversteps in some regards, this report raises a very important point: we don’t really know what works if we don’t check. That’s a lesson that applies to all areas of public policy spending, not just Indigenous affairs.
A bit of perspective
The report asserts:
Indigenous-specific funding is being wasted on programs that do not achieve results because they are not subject to rigorous evaluation.
This is a contradiction. With no rigorous evaluation, how could we know if it’s a waste or not? The point should be that we mostly don’t really know if those programs are improving outcomes. But a lack of evaluation is indeed a major problem, and we can do better.
The report addresses only Indigenous programs, but the issues it raises are not confined to them. I was not entirely surprised by these findings, because I have seen similar patterns in other sectors, such as education spending.
A recent review published in the United States examined the evidence from randomised evaluations of the impact of education programs (not confined to Indigenous programs) in developed countries. Of the 196 experiments it identified, only two were conducted in Australia.
If we were to withdraw funding from all programs conducted by Australian governments whose impact has not been verified through rigorous evaluation, then I don’t think we’d have many programs left.
That said, it may be that rigorous evaluation of Indigenous programs in Australia is of extra importance. In other areas (take education, or the design of the income support system), it is perhaps easier to piggy-back on rigorous evaluations conducted in other countries, taking evidence “off the shelf” from overseas.
The CIS report is correct to draw attention to the paucity of rigorous evaluations. It feels good to spend money on Indigenous programs, just as it feels good to spend money on all worthy causes. But greater investment in evaluating those programs would almost certainly be money well spent, as long as the evaluations are of high quality.
Not all evaluations are created equal
We need to be very aware that not all evaluations are equally compelling. There can be a temptation for government departments to conduct tokenistic, low-quality evaluations that simply tick the box of a program having been evaluated.
Many evaluations rely only on asking program participants or workers if they believe that a program has had a favourable impact. While such work has merit, it doesn’t actually measure impact. We don’t rely only on such evidence in medicine. Nor should we for social policy.
Such evaluations are usually inconclusive, which has the added benefit of not risking embarrassment to the minister championing the program.
We have made tentative steps toward fixing this problem. The Productivity Commission convened a roundtable of experts in 2009 on the topic of evidence-based policy.
In his contribution to the roundtable, Andrew Leigh – then a professor of economics at the Australian National University, now the shadow assistant treasurer – outlined what he called a “hierarchy of evidence” to help policymakers better understand which social programs are actually worth the money and effort.
Leigh’s proposed hierarchy itself may need more scrutiny, debate and refinement. My view is that studies relying only on non-experimental comparisons are a lower grade of evidence than genuine randomised trials.
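To see why the grade of evidence matters, consider a stylised simulation (an illustrative sketch with invented numbers, not drawn from the CIS report or from Leigh’s hierarchy). If outcomes are improving for everyone anyway, a before-and-after comparison among participants will credit the program with that improvement, while a randomised comparison will not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative assumption: outcomes improve over time for everyone,
# regardless of the program.
baseline = rng.normal(50, 10, n)
trend = 5            # background improvement, not caused by the program
true_effect = 0      # in this sketch, the program does nothing

# Randomly assign half of the population to the program.
treated = rng.random(n) < 0.5
followup = baseline + trend + true_effect * treated + rng.normal(0, 10, n)

# A before-and-after comparison among participants looks impressive...
before_after = followup[treated].mean() - baseline[treated].mean()

# ...but a randomised comparison of treated vs untreated reveals no effect.
randomised = followup[treated].mean() - followup[~treated].mean()

print(f"Before-and-after estimate: {before_after:+.1f}")  # roughly +5
print(f"Randomised estimate:       {randomised:+.1f}")    # roughly  0
```

The before-and-after estimate simply picks up the background trend; the randomised comparison isolates the program’s (here, zero) effect.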
The CIS report recommends:
All programs receiving taxpayer funding should be subject to independent evaluations. At the same time, governments and organisations should cease collecting data that does not make a valuable contribution towards improving the level of knowledge about the effectiveness of programs.
I think we need to go further and ensure that we conduct the best possible evaluations. This includes conducting randomised trials as part of the mix.
A quantitative social scientist at the Australian National University has asked whether the challenges facing programs targeting Indigenous people in remote Australia may have similarities to those targeting poverty in developing countries.
If so, then we should consider drawing on the considerable experience of the leaders in such evaluations, including international networks of professors who argue for policy informed by scientific evidence. Importantly, the Indigenous community must be involved in every step.
The CIS plans to follow up their report with a detailed review of the evaluations that have been conducted of Indigenous programs.
Whatever it finds, it is clear that more prominence should be given to understanding the variation in the quality of evidence.
Peter Siminski, Associate Professor of Economics, University of Wollongong
This article was originally published on The Conversation. Read the original article.
UOW academics exercise academic freedom by providing expert commentary, opinion and analysis on a range of ongoing social issues and current affairs. This expert commentary reflects the views of those individual academics and does not necessarily reflect the views or policy positions of the University of Wollongong.