Randomisation and chance-based designs in social care research

Woods B, Russell I

Abstract

When there is little evidence about the relative effectiveness of two alternative approaches in social care, simple comparisons of the participants who happen to receive one approach with those who happen to receive the other are prone to bias arising from whatever social mechanism was responsible for these allocations. Chance-based designs (CBDs) address this intrinsic problem by allocating participants between approaches at random, thus maximising internal validity. This has led many commentators to describe CBDs (or ‘randomised trials’, as they are called in medicine) as the ‘gold standard’ of health care research, and has made them the dominant design in intervention research and in developing ‘evidence-based medicine’ and practice.
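To make the mechanism concrete, the sketch below (illustrative only, and not from the article) shows one common way random allocation is implemented in practice: blocked randomisation, which keeps the two arms balanced in size while leaving each individual allocation to chance rather than to any social mechanism. The arm labels, block size and seed are assumptions made for the example.

    import random

    def block_randomise(ids, block_size=4, seed=2024):
        """Allocate participant IDs to two arms in balanced blocks."""
        rng = random.Random(seed)     # fixed seed keeps the allocation list auditable
        allocation = {}
        for start in range(0, len(ids), block_size):
            block = ids[start:start + block_size]
            # Equal numbers of each arm within the block, then shuffled:
            # chance, not a social mechanism, decides who gets what.
            arms = (["new approach", "usual care"] * block_size)[:len(block)]
            rng.shuffle(arms)
            allocation.update(zip(block, arms))
        return allocation

    print(block_randomise(["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]))

A fixed seed makes the allocation list reproducible for audit; in a real trial the list would be generated independently and concealed from those recruiting participants.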

However, CBDs are less widely used in social care research in the UK. Indeed, they are generally viewed with suspicion, especially by critics who equate them with drug trials. More specifically, Illsley (1982), a social scientist who chaired the Scottish Health Services Research Committee, argued that CBDs were difficult to conduct outside drug trials and that, even when they were feasible, the constraints of the experiment limited their generalisability. Over the past 25 years, health care research has addressed and overcome both of Illsley’s objections. First, there are now more than 750,000 CBDs listed in the Cochrane Library. Second, researchers have distinguished between two distinct types of CBD. ‘Fastidious trials’ test whether new approaches, typically new drugs, are more efficacious than the status quo under laboratory conditions. Such drug trials are typically ‘double blind’, with neither participant nor doctor knowing whether each participant’s drug is active or an inactive ‘placebo’; it was this feature in particular that led Illsley and others to question their external validity.

In contrast, ‘pragmatic designs’ estimate, in normal practice, whether the benefits of new approaches, typically more complex interventions, outweigh their costs. Because blinding participants and practitioners to such interventions is rarely feasible, it is more realistic for only those assessing outcomes to be unaware which intervention each participant is receiving. Pragmatic designs assess the differential effects of interventions on the lives of participants and thus inform future decisions about where resources can be allocated most effectively and cost-effectively. Such designs require clarity about desired outcomes and the development of valid and reliable measures of those outcomes. Furthermore, though CBDs are well suited to evaluating the average effects of a standard intervention on participants, they increasingly assess interventions tailored to individual needs and use participant-centred outcome measures that focus on individual goals and concerns.
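As a rough illustration of what estimating ‘average effects’ involves at analysis, the sketch below (again not from the article) computes the difference in mean outcome between the two arms with an approximate 95% confidence interval, using a normal approximation; the outcome scores are invented solely for the example, and a real trial would use a pre-specified analysis on a properly powered sample.

    from math import sqrt
    from statistics import mean, stdev

    def average_effect(new_scores, usual_scores):
        """Difference in mean outcome between arms, with an approximate 95% CI."""
        diff = mean(new_scores) - mean(usual_scores)
        se = sqrt(stdev(new_scores) ** 2 / len(new_scores) +
                  stdev(usual_scores) ** 2 / len(usual_scores))
        return diff, (diff - 1.96 * se, diff + 1.96 * se)

    # Hypothetical well-being scores for each arm.
    new_arm   = [14, 11, 16, 12, 15, 13, 17, 12]
    usual_arm = [10, 12,  9, 11, 13, 10, 12,  9]
    print(average_effect(new_arm, usual_arm))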
