The Turn To Large Data in Operations Management Research

As larger operational datasets become available, we are seeing a movement in operations management research away from the use of surveys and single-organisation cases towards large-scale cross-organisational analysis. This was driven home by a session at the Production & Operations Management conference chaired by Kenneth Boyer entitled “Knowledge Creation in Healthcare”. The title was a bit misleading, as the three excellent presentations all addressed evaluations of IT implementations in US hospitals: Hospital Information Technology (HIT) strategies, Computerized Physician Order Entry (CPOE) and Electronic Medical Records (EMR). I dragged myself along at 8 am to hear Carrie Queenan, from the University of South Carolina, present her paper on CPOE and “safety culture”.

This was just one of several sessions addressing the evaluation of information technology in healthcare. The HITECH Act (Health Information Technology for Economic and Clinical Health Act) of 2009 has created two impetuses for operations management research into healthcare IT in the US. First, there is the need to demonstrate “meaningful use”, essentially that the federal government is getting some bang for its $26 billion; second, the introduction of these systems is generating the data that allows researchers to do detailed quantitative analysis of the relationship between system functions and organisational arrangements on the one hand, and the outcomes for patients on the other. Thirty years ago I briefly worked as a researcher in Scotland on the operations analysis of hospital data. Even then there was plenty of data on hospital treatment, but both the computing resources and the motivation to analyse them were lacking.

Over the last thirty years, when operations management researchers have wanted to research some innovative topic, such as quality circles, enterprise planning or lean operations, they have split into two camps: those who would mail out surveys to thousands of managers to find out how many were using X and what the effects had been, and those who descended on a small number of users of X to get a more detailed understanding. They would then present their papers at conferences and rip into each other with well-rehearsed criticisms. The surveyors would claim the cases might not be typical, and the case study people would argue the surveys were missing too much detail. This methodological divide then led to a deeper epistemic divide between positivist surveys, where, if the sample was large enough, statistical analysis would uncover the truth, and interpretivist case studies, where, if there was a truth, it was being negotiated socially. It was also a geographic divide, between quantitative operations management research in the United States, coming out of management science, and qualitative operations management in the UK, influenced by sociology.

The advent of large-scale datasets is finally bringing these approaches together and taking each of them away from their respective comfort zones of surveys and small case studies. Each of the three papers in this session used large-scale data from both public and private sector sources, namely HIMSS (Healthcare Information and Management Systems Society), Treo Solutions and CMS (Centers for Medicare and Medicaid Services), to assess the operational impacts of healthcare information technology. So far, so positivist; the analysis of “big data” makes it possible to identify actual relationships in the operational data and, importantly, track them through time. However, in trying to explain the relationships, presenters made reference to theoretical conceptualisations drawn from organisation studies, so theoretically informed case research may still have a role in operations management, and it is not just going to be a future of operational data cuisineartists.
