Evidence-Based Practice


Last week the University of Edinburgh Business School hosted a well-attended full-day workshop on evidence-based practice organised by Céline Rojon.

The basic premise of evidence-based practice is simple: organisations should base their practices on the best available evidence about what works and what doesn’t. As a rational approach to managing organisations, you would expect evidence-based practice to be an important strand in operations management. It is not necessarily so, and why it isn’t provides interesting insights into the practice of operations management.

Evidence-based practice has its roots in healthcare. There is an enormous body of research in healthcare about the efficacy of treatments, so it is unsurprising that techniques have emerged to extract the maximum value from these data. Central to these methods are systematic reviews of evidence, and meta-analysis to combine that evidence and gain new insights. The value of systematic reviews of evidence led in 1993 to the founding of the not-for-profit Cochrane Collaboration as “a global independent network of health practitioners, researchers, patient advocates and others, responding to the challenge of making the vast amounts of evidence generated through research useful for informing decisions about health”. In England NICE (currently the National Institute for Health and Care Excellence) provides “independent, authoritative and evidence-based guidance on the most effective ways to prevent, diagnose and treat disease and ill health”.
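As a rough sketch (not drawn from the workshop, and with entirely made-up numbers), the simplest form of meta-analysis combines study results by weighting each estimate by the inverse of its variance, so that more precise studies count for more:

```python
# A minimal fixed-effect (inverse-variance) meta-analysis: each study's
# effect estimate is weighted by 1 / SE^2, so more precise studies count more.
from math import sqrt

# Hypothetical study results: (effect estimate, standard error)
studies = [(0.30, 0.10), (0.25, 0.15), (0.45, 0.20)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))  # standard error of the pooled estimate

print(f"Pooled effect: {pooled:.2f}, 95% CI: "
      f"({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")
```

Real systematic reviews go well beyond this, of course, accounting for between-study heterogeneity and study quality, but the principle of pooling weighted evidence is the same.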

Professor Amanda Burls of City University gave the workshop an overview of evidence-based practice in healthcare. You might think that, with medicine’s grounding in science, clinicians, when presented with the evidence, would follow evidence-based practice. The key word in the NICE mission statement is “guidance”: clinicians can ignore the guidance. Professor Burls referred to her research on Patterns of Leakage, showing that even if clinicians are aware of the guidance, a proportion will not believe it, a further proportion will not adopt it and a further proportion will not adhere to it. This leakage means that for many situations where the evidence is very strong, a large proportion of patients will not receive the intervention supported by evidence, because clinicians exercise their personal judgement. Operationally this is the main difference between clinicians and Kwik-Fit fitters: Kwik-Fit fitters have not spent years at university learning the theory of what they are doing, so they follow the set best-practice methods, whereas clinicians spent some of their time at university learning the theory of what they would be doing, which paradoxically becomes a barrier to adhering to evidence-based practices. This is another example of the conflict between autonomy and compliance that cleaves through most quality assurance systems.

Despite this recognition that evidence-based practice is not always followed, the identification of evidence-based practice is seen as a “good thing” in healthcare, saving many lives. This success has led to attempts to transfer the approach to other areas of organisational practice. The Campbell Collaboration uses the Cochrane Collaboration model for “preparing, maintaining and disseminating systematic reviews in education, crime and justice, social welfare and international development”. Dr Eamonn Noonan, director of the Campbell Collaboration, described their activities and the difficulty of getting evidence-based practice to have an impact on public policy. One problem is that these areas of policy are more context-specific than most areas of healthcare: if there is evidence that a practice is effective in Japan, how strongly does that indicate that it will work in the UK? However, even where the evidence assesses specific programmes, the controllers of those programmes will not necessarily respond to it. To illustrate this Dr Noonan used the example of Scared Straight, a US penal programme based on the assumption that putting young offenders in contact with seasoned recidivists will “scare them straight”. As a policy it has the attraction of narrative plausibility, but a meta-analysis of published research has shown that it is probably harmful, yet this evidence has not killed off these programmes. Similar resistance to research evidence is seen in policy debates in the UK around school class sizes, putting police officers out on the street, or hospital sizes. One explanation for this disregard of research is that policy-makers are not technocratically improving efficiency but managing their exposure to risk. To do something is better than to do nothing, and when it comes to a conflict between evidence and plausible narratives of policy, narratives often win. It is easier for public officials to communicate personalised narratives of a policy “working” than dry statistical evidence that, overall, it doesn’t work.

A third barrier to the adoption of evidence-based practice, particularly in private-sector management, is the lack of evidence. The third presentation was given by Eric Barends from the Center for Evidence-based Management and Professor Rob Briner from Bath University. Their presentation was a rich overview of why evidence-based practice is not attractive to managers, and barely more attractive to academics. A particular difference between management and health (and to some extent social policy) as academic disciplines is that health research has a culture of carrying out controlled studies that aim to replicate or build on existing research. This generates the raw material for others to review critically in systematic reviews. The world is not suffering a drought of management research, but, as Barends and Briner note, only a small part of it involves controlled studies and very little seeks to replicate existing studies. The research culture in management privileges novel theory over rigorous evaluation, with top-ranked journals publishing papers that contribute to theory rather than generate evidence to inform practice. (Rob Kitchin, a geographer, describes these pressures eloquently in an online post to help junior academics publish successfully.) As I argued in a paper years ago, managers are not big consumers of academic research anyway.

The workshop was balanced between extolling the benefits of evidence-based practice and identifying the legion of barriers to applying it, particularly in management. But practising operations management is supposed to be a rational activity, and teaching operations management is supposed to provide students with the skills and knowledge to make positive changes, so I want to make sure that in future my own courses cover evidence-based practice.
