In praise of failure: Failed meta-analysis

It is better to have meta-analysed and lost than never to have meta-analysed at all: was it Shakespeare or Oscar Wilde who said this?

Meta-analysis is an effective means of correcting for both bias and lack of power in individual randomised controlled trials. It is also a very important means of helping decision makers cope with the rapid growth of controlled trials. But not all meta-analyses will give clear, conclusive results.
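The article does not spell out how trials are aggregated, but the standard approach is inverse-variance weighting: each trial's effect estimate is weighted by the inverse of its variance, so larger, more precise trials count more. The sketch below (not from the article; the function name and the trial numbers are illustrative assumptions) shows a minimal fixed-effect pooling of this kind.

```python
import math

def pool_fixed_effect(estimates, standard_errors):
    """Inverse-variance (fixed-effect) pooling of trial effect estimates.

    Each estimate (e.g. a log odds ratio) is weighted by 1 / SE^2,
    so precise trials dominate. Returns the pooled estimate and its
    standard error.
    """
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Made-up log odds ratios from three small trials, none conclusive alone:
estimates = [-0.5, -0.2, -0.8]
ses = [0.4, 0.3, 0.5]
pooled, se = pool_fixed_effect(estimates, ses)
# 95% confidence interval for the pooled effect:
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

Note that this fixed-effect model assumes the trials estimate one common effect; the failure modes Naylor lists (inconsistent designs, variable populations or interventions) are precisely the situations in which such pooling is not justified.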

Failed meta-analysis


David Naylor has written an interesting paper [1] in which he argues "the case for failed meta-analysis". He defines a failed meta-analysis not as one that fails to meet accepted standards, nor even one that may have been misleading (as was the case with the magnesium sulphate meta-analysis described in Bandolier 7). He defines a failed meta-analysis as a systematic review that "for various reasons other than poor methodology on the part of the analysts does not allow data aggregation that permits a definitive quantitative conclusion about the merits or demerits of a particular health care intervention".

Why meta-analysis can fail

He lists a number of ways that meta-analysis of good methodological design can fail to yield definitive quantitative results:
  1. Aggregation feasible, but results inconclusive.
  2. Aggregation not feasible owing to inconsistencies in design, study quality, endpoint reportage, or data availability.
  3. Aggregation not feasible owing to variability in populations.
  4. Aggregation not feasible owing to variability in interventions.


He cites the landmark publication of Effective Care in Pregnancy and Childbirth as one of the first steps in promoting failed meta-analysis. That wonderful publication, which led to the Cochrane Collaboration, was the first comprehensive collection of systematic reviews, of which "about one third" were "unapologetically inconclusive".

Benefits from failure


Perhaps Naylor is being over-dramatic in calling these meta-analyses "failed meta-analyses"; but who are we in Bandolier to criticise the use of racy titles?

The point is that any such review has systematically collected the known work on a particular topic - a worthwhile end in itself. It will have pointed out deficiencies in study design or quality which cause the meta-analysis to fail. It will have pointed out any confusion over outcomes or end-points which needs to be addressed before more studies are conducted. And it will (hopefully) prevent further pointless studies from being conducted - studies whose ethical status would be compromised unless those design or outcome problems were first addressed.

So long live negative meta-analyses - but only until they become positive - and long live editors with the intestinal fortitude to publish them.

Reference:

  1. D Naylor. The case for failed meta-analyses. J Eval Clin Pract 1995; 1: 127-30.
