What conclusions has Clinical Evidence drawn about what works and what doesn't, based on randomised controlled trial evidence?
At Clinical Evidence (CE) we aim to help people make informed decisions about what treatments to use. So it would be nice to be able to give a clear, confident answer to every clinical question, based on an abundance of high-quality evidence. Of course, we can’t – but we can often highlight where more research is needed.
We want to identify treatments that work and for which the benefits outweigh the harms, especially treatments that may be underused. We also wish to highlight treatments that do not work or for which harms outweigh benefits. For the research community, our intention is to highlight gaps in the evidence – where there are no good RCTs or no RCTs that look at groups of people or at important patient outcomes.
Clinical Evidence selects for analysis around 3000 treatments that have been evaluated in research, and divides their effectiveness for specific indications into categories. Dividing treatments into categories is never easy, and we spend a lot of time on it, calling on the knowledge of our information specialists, editors, peer reviewers, and expert authors, and revisiting our categorisations at each update of a review. In addition, categorisation always involves a degree of subjective judgement, and it’s sometimes controversial.
So if it’s so problematic, why categorise?
Because our users tell us it’s helpful. But, like all tools, it has both benefits and limitations: for example, an intervention may have multiple indications, and may be categorised as 'Unknown effectiveness' for one condition but 'Beneficial' for another.
‘Unknown effectiveness’ is perhaps a hard categorisation to explain. Included within it are many treatments that come under the description of complementary medicine (e.g., acupuncture for low back pain and echinacea for the common cold), but also many psychological, surgical, and medical interventions, such as CBT for depression in children, thermal balloon ablation for fibroids, and corticosteroids for wheezing in infants.
‘Unknown effectiveness’ may also simply reflect difficulties in conducting RCTs of an intervention, or be applied to treatments for which the evidence base is still evolving. As such, these data reflect how treatments stand up in the light of evidence-based medicine, and are not an audit of the extent to which treatments are used in practice.
We make use of what is ‘unknown’ in Clinical Evidence by feeding back to the UK NHS Health Technology Assessment Programme (HTA) with a view to helping inform the commissioning of primary research. Every 6 months we assess CE interventions categorised as ‘Unknown effectiveness’ and submit those fitting the appropriate criteria to the HTA via their website: http://www.ncchta.org/.
And what do our categorisations mean in relation to clinical practice?
We would like to emphasise that our categorisation of the effectiveness of treatments does not identify how often evidence-based and non-evidence-based treatments are used in practice. We highlight only the extent to which treatments are evidence based for certain indications, on the basis of randomised controlled trials. As such, these data reflect how different treatments stand up in the light of evidence-based medicine, and are not an audit of the extent to which treatments are used in practice or for other indications not assessed in Clinical Evidence.