Social care is complex. We want to ensure those requiring, or in receipt of, social care have the best lives possible, but it can be difficult for the individual practitioner or service to be sure how much difference their involvement has made. If the situation improves, was it just luck or some other change independent of the care package? If there is a change for the worse, does that mean the care offered was inappropriate or ineffective, or again, was it the result of factors and circumstances beyond the control of the practitioner or the service? From the service-user’s perspective, how can assurance be gained that the most beneficial care package is being offered?

Budgets for social care – as for all public services – are limited, and making the best use of the resources we have will always be a priority. Investment is needed in those services that, for a given level of resource, make the most difference and achieve the outcomes that are most valued by the community. Conversely, identifying those services and interventions that are ineffective allows the potential of re-directing resource to where it can have greater impact.

It is, therefore, in the interests of practitioners, service-users, services and society as a whole to be able to ascertain the connections between services provided and outcomes achieved. Is it possible to design research in the social care context that can help to evaluate the effectiveness of services and interventions and can plausibly establish the likelihood that any changes seen are indeed related to the service rather than to extraneous factors?

When there is little evidence about the relative effectiveness of two alternative approaches in social care, simple comparisons of the participants who happen to receive one approach with those who happen to receive the other are prone to bias arising from whatever social mechanism was responsible for these allocations. Chance-based designs (CBDs) address this intrinsic problem by allocating participants between approaches at random, thus maximising internal validity. This has led many commentators to describe CBDs (or ‘randomised trials’ as they are called in medicine) as the ‘gold standard’ of health care research and made them the dominant research design in interventions research and in developing ‘evidence-based medicine’ and practice. However, CBDs are less widely used in social care research in the UK. Indeed, they are generally viewed with suspicion, especially by critics who equate them with drug trials.


The aim of this review was to examine study designs involving the allocation by chance of participants or participating services to receive, or not receive, the service or intervention of interest.


The review highlighted that:

  • Many of the valid concerns raised about the use of CBDs in social care have been addressed over several decades, and more consideration should be given to using CBDs in social care.
  • CBDs are challenging to set up and run, and require a team approach, including involvement of service users in research design and management. Careful consideration needs to be given to how best to use CBDs appropriately. However, properly conducted and reported, they may well be cost-effective in terms of the value of the knowledge and information gained.
  • Open and transparent reporting of CBDs is essential to maintaining and raising their quality, and ensuring that the results and recommendations do justice to the contribution of all those taking part.
  • There is a need for further development and testing of outcome measures in many cases before proceeding with a full CBD evaluation.
  • It is recommended that the processes of the intervention and its effects are evaluated in parallel within the study and that, wherever possible, an evaluation of cost-effectiveness is carried out as part of the CBD.
Randomisation and chance-based designs in social care research