Bob Woods (completed 2009)
Social care is complex. We want to ensure that those requiring, or in receipt of, social care have the best lives possible, but it can be difficult for the individual practitioner or service to be sure how much difference their involvement has made. If the situation improves, was it just luck or some other change independent of the care package? If there is a change for the worse, does that mean the care offered was inappropriate or ineffective, or again, was it the result of factors and circumstances beyond the control of the practitioner or the service? From the service-user’s perspective, how can assurance be gained that the most beneficial care package is being offered?
Budgets for social care – as for all public services – are limited, and making the best use of the resources we have will always be a priority. Investment is needed in those services that, for a given level of resource, make the most difference and achieve the outcomes that are most valued by the community. Conversely, identifying those services and interventions that are ineffective allows the potential of re-directing resource to where it can have greater impact.
It is, therefore, in the interests of practitioners, service-users, services and society as a whole to be able to ascertain the connections between services provided and outcomes achieved. Is it possible to design research in the social care context that can help to evaluate the effectiveness of services and interventions and can plausibly establish the likelihood that any changes seen are indeed related to the service rather than to extraneous factors?
When there is little evidence about the relative effectiveness of two alternative approaches in social care, simple comparisons of the participants who happen to receive one approach with those who happen to receive the other are prone to bias arising from whatever social mechanism was responsible for these allocations. Chance-based designs (CBDs) address this intrinsic problem by allocating participants between approaches at random, thus maximising internal validity. This has led many commentators to describe CBDs (or ‘randomised trials’ as they are called in medicine) as the ‘gold standard’ of health care research, and has made them the dominant research design in intervention research and in developing ‘evidence-based medicine’ and practice. However, CBDs are less widely used in social care research in the UK. Indeed, they are generally viewed with suspicion, especially by critics who equate them with drug trials.
The aim of this review was to examine study designs involving the allocation by chance of participants or participating services to receive, or not receive, the service or intervention of interest.
The review highlighted that: