What proportion of healthcare is evidence based? Resource Guide.

Pre-history | General Medicine | General Practice | Anaesthesia | Dermatology | Haematology, Clinical | Haematology, Malignant | Medicare procedures | Oncology | Paediatrics | Paediatric Surgery | Psychiatry | Surgery | Surgery, Endoscopic | Surgery, Laparoscopic | Surgery, Paediatric | Surgery, Thoracic | Postscript

Compiled by Andrew Booth, with contributions from Benjamin Djulbegovic (bmdjul01@homer.louisville.edu), Bruce Guthrie (bg@srv1.med.ed.ac.uk), Matthias Perleth (perleth@epix.epi.mh-hannover.de), David Sackett (david.sackett@ndm.ox.ac.uk), Scott Endersly (UVSENDSL@ihc.com), Dean Jenkins, Scott Richardson, Chris Taylor, Tom Dent and Murray Enkin.



 
Study | Setting | RCT-based (Type I) | Non-experimental evidence (Type II) | Not supported | No. of interventions/patients
Baraldini et al (1998) | Tertiary referral paediatric surgical unit | 26% | 71% | 3% | 70/49
Djulbegovic et al (1999) | Cancer Centre (US) | 24% | 21% | 55% | 154/Not known
Ellis et al (1995) | General medicine, District General Hospital (UK) | 53% | 29% | 18% | 108/108
Galloway et al (1997) | Haematology, General Hospital (UK) | 70% (Type I or Type II combined) | — | 30% | Not known/83
Geddes et al (1996) | Psychiatry, acute adult general psychiatric ward (UK) | 65% | — | — | 40/40
Gill et al (1996) | General practice, suburban training practice (UK) | 30% | 51% | 19% | 101/122
Howes et al (1997) | Surgery, general surgical/vascular unit in urban teaching hospital (UK) | 24% | 71% | 5% | 100/100
Jemec et al (1998) | Dermatology, outpatient clinic, University Hospital (Denmark) | 38% | 33% | 23% (6% not accounted for) | Not known/115
Kenny et al (1997) | Regional Paediatric Surgical Unit (UK) | 11% | 66% | 23% | 281/281
Lee et al (2000) | Surgery, tertiary care cancer centre and community general hospital (US) | 14% | 64% | 22% | 50/Not known
Michaud et al (1998) | Internal medicine, General Hospital (Canada) | 20.9% placebo-controlled; 43.9% head-to-head | — | — | 150/150
Myles et al (1999) | Anaesthesia (Australia) | 32% | 64.7% | 3.3% other | Not known
Nordin-Johansson et al (2000) | Internal medicine, Department of Medicine, teaching hospital (Sweden) | 50% | 34% by consensus | 12% other | 369/197
Rudolf et al (1999) | 12 community paediatricians (UK) | 39.9% | 7% | — | 1149/247
Slim et al (1998) | 11 hospitals (one university and 10 district hospitals) (France) | 50% | 28% | — | 428/Not known
Suarez-Varela et al (2000) | General practice, 34 primary health care centres (Spain) | 38% | 4% | 58% other | 2341/1990
Summers et al (1996) | Psychiatry, General Hospital (UK) | 53% | 10% | 37% | 160/158
Tsuruoka et al (1996) | General practice (Japan) | 21% | 60% | 19% other | 53/49

The Pre-History (Courtesy of David Sackett, supplemented by Matthias Perleth and Kerr White)

"The "20-25% of medical decisions are evidence-based" comes from a series of conjectures, many of them humorous, starting back in the 70's. For example, in an exchange between two giants of epidemiology, Kerr White (who related the incident to Iain Chalmers) and Archie Cochrane (after whom we named the collaboration) in Wellington, NZ, Kerr had just suggested that "only about 15-20% of physicians' interventions were supported by objective evidence that they did more good than harm" when Archie interrupted him with: "Kerr, you're a damned liar! you know it isn't more than 10%".

Shortly thereafter [1978], the US Congress's Office of Technology Assessment reported that "only 10% to 20% of all procedures currently used in medical practice have been shown to be efficacious by controlled trial", and repeated the charge in 1983".

Matthias Perleth adds: "In the book Assessing Medical Technologies (National Academy of Sciences, Washington DC, 1985, p. 72 f.), the distribution of the technologies reviewed by the Office of Health Technology Assessment (OHTA) for the HCFA in 1982-1984 was analysed according to several categories. It turned out that for 69% of the reviewed technologies, insufficient data were available to assess their efficacy. (I assume that most of these were covered anyway...) He then refers to Dubinsky & Ferguson, 1990 (below).

However, these figures should not be confused with patient-based data, as David Sackett has pointed out. To put it systematically: any benefit package (in this case that of Medicare) is composed of a number of technologies (say several thousand) that could be used for patients. Adopting an epidemiological view, few of them (say one hundred) are used with high frequency, so that the number of patients treated on an evidence base may well exceed the number of technologies that are of proven efficacy".

In a book for the Institute of Medicine (Field MJ & Lohr KN, eds. Guidelines for Clinical Practice: From Development to Use. Committee on Clinical Practice Guidelines. Washington, DC: National Academy Press, 1992, p. 34; available online from the National Academy Press Reading Room), a table is presented entitled "Hypothetical distribution of evidence and consensus for all health services and patient management strategies", containing the following figures:

                          Strong consensus   Modest consensus   Very weak/no consensus
Strong evidence                 2%                  2%                  0%
Modest evidence                20%                 25%                  0%
Very weak/no evidence          20%                 25%                  6%

David Sackett continues: "These gloomy figures were more recently repeated on this side of the Atlantic [UK] by Richard Smith (BMJ editor and a star supporter of ebm) as "Where is the wisdom...the poverty of medical evidence" [BMJ 1991;303:798-9] and "The ethics of ignorance" [J Med Ethics 1992;18:117-8]."

Kerr White followed up the Ellis et al, 1995 study [See below] with the following letter to the Lancet: Evidence-based medicine [Letter to the Editor]. White, Kerr L. 2401 Old Ivy Road, 1410, Charlottesville, VA 22903-4858, USA

"Sir--Sackett (Ellis) and co-workers (Aug 12, p 407) accurately recount Archie Cochrane's indignation in Wellington, New Zealand in 1976. To avoid unduly startling the Wellington Hospital's clinical staff I deliberately increased the lower boundary of the estimate that "only 10-20 percent of all procedures currently used in medical practice have been shown to be efficacious by controlled trial" from 10 percent to 15 percent. However, the estimate was "evidenced-based", albeit with soft data by today's standards. A 1963 paper in Medical Care reported a two-week survey by 19 general practitioners "representing almost every partnership and practice in a northern [British] industrial town". They recorded, among other items, the "intent" of each prescription written. In only 9.3 percent of the prescriptions for proprietary drugs was the intent specific for the condition for which it was prescribed. Another 22.8 percent were of "probable" benefit; 27.2 percent were of "possible benefit"; 28.2 percent were "hopeful", and 8.9 percent were regarded as a "placebo"; 3.6 percent "not stated". Distributions for non-proprietary drugs were similar. (1)

Sackett may recall attending one of the annual seminars on the application of epidemiological methods to the evaluation of health services that John Williamson and I ran at the Johns Hopkins during the mid-1960s for the Association of American Medical Colleges. I still have a slide ("The content of patient care") stating that specific measures accounted for 10-20 percent of all benefits; that the combined "placebo and Hawthorne effects" accounted for another 20-40 percent; and the rest (which we referred to usually as a "mystery") accounted for 70-40 percent. Some 20 years ago, as a member of the original Health Advisory Panel to the US Congressional Office of Technology Assessment I ventured the 10-20 percent figure again and invited anyone to provide more timely data. No-one could. The figure was immortalised in OTA circles and publications for almost a decade. In countless addresses and conferences I often challenged others to provide better evidence but none was forthcoming. So the northern industrial town "armchair" assessment persisted.

In determining what proportion of interventions (apart from the all-powerful placebo and Hawthorne effects) do more good than harm, however, the site, appropriateness, and volume of all interventions need assessing. Ellis et al have started us on the right path and it is gratifying to learn that 82 percent of interventions at Oxford's main teaching hospital were evidence based. Much work remains to be done, however, by the Oxford-based Cochrane Collaboration and others as we seek to determine the extent of evidence-based care (including the placebo and Hawthorne effects) in all the health establishment's ministrations. I suspect that it is now better than 20 percent but I doubt if, overall, it is 82 percent."

REFERENCES AND NOTES

1. Forsyth G. An enquiry into the drug bill. Med Care 1963; 1: 10-16.

Footnote: Dr Robert Califf, Director of the Duke University Clinical Research Institute, as reported in the October 12th 1998 issue of TIME magazine, estimates that less than 15% of US health care is evidence based: "Only 15% of the decisions a doctor makes every day are based on evidence," he recites.


General Medicine (Ellis et al, 1995)

Ellis J. Mulligan I. Rowe J. Sackett DL. Inpatient general medicine is evidence based. A-Team, Nuffield Department of Clinical Medicine. Lancet. 346(8972):407-10, 1995 Aug 12.

Nuffield Department of Clinical Medicine, Oxford-Radcliffe NHS Trust, Headington, UK.

Comments: Comment in: Lancet 1995 Sep 23;346(8978): 837-8 (discussion 840); 838; 838-9; 839 (discussion 840); 839-40; 840.

Abstract: For many years clinicians have had to cope with the accusation that only 10-20% of the treatments they provide have any scientific foundation. Their interventions, in other words, are seldom "evidence based". Is the profession guilty as charged? In April, 1995, a general medical team at a university-affiliated district hospital in Oxford, UK, studied the treatments given to all 109 patients managed during that month on whom a diagnosis had been reached.

Medical sources (including databases) were then searched for randomised controlled trial (RCT) evidence that the treatments were effective. The 109 primary treatments were then classified: 82% were evidence based (ie, there was RCT support [53%] or unanimity on the team about the existence of convincing non-experimental evidence [29%]). This study, which needs to be repeated in other clinical settings and for other disciplines, suggests that earlier pessimism over the extent to which evidence-based medicine is already practised is misplaced.

The Oxford group found that 82% of the patient management interventions they studied in 109 consecutive patients over a short period on a single general medical ward were based on high quality scientific evidence (Lancet 1995;346:407-410).

Contributor's Note: When I (David Sackett) moved to Oxford and started working on the general medicine wards here, these "armchair" pronouncements [see Pre-history above] were raised by one of the bright young house officers (Jon Ellis) and we decided to test them. Since we treat patients, not manoeuvres, we decided to determine the proportion of patients whose most important intervention for their most important diagnosis was based on systematic reviews/RCTs, on convincing non-experimental evidence (you don't need an RCT to tell you that it's good to shock a VF-arrest), or without convincing evidence.

Our study was followed by a series of others of about the same design (consensus on the primary diagnosis, consensus on the primary intervention, tracking the intervention into the evidence, and asking one or more outsiders to independently review our interventions and their linkages to the evidence). We found that a service that ran like ours and worked hard to find the best evidence to guide its interventions could treat 53% of its patients on the basis of SRs and RCTs, another 29% on the basis of convincing non-experimental evidence, and just 19% on the basis of guessing and hope [Lancet 1995;346:407-10].


Comments: When I read this, one of the things that struck me was that there were three different treatments identified for the 10 people with heart failure: 6 got ACE inhibitors, 1 who was in sinus rhythm got digoxin (both interventions being rated as level 1/RCT based), and 3 got diuretics (rated as being based on convincing non-experimental evidence). The article itself didn't really have the space to go into detail about individuals, but I would be interested to know what the basis for choosing different interventions for these 10 people was. I think the question the study asked was "was the intervention applied supportable by evidence?". I think a more relevant question might be "was the intervention applied the best available?" (for example, should those who got diuretics alone also have got an ACE inhibitor?). Without more detail, I don't think that I can answer the second question. - Bruce Guthrie [1998]

Sackett responds: "Because most Congestive Heart Failure patients are already on one or more medications for Congestive Heart Failure (to which we'd add another), and/or cannot take or tolerate others (so we wouldn't give them), the Congestive Heart Failure treatment we added/gave to each patient was, as it should have been, individualised (by carrying out the 4th step of practising EBM)." - David Sackett [1998]


Fowler, P B S Evidence-based medicine [Letter to the Editor]. Lancet 1995; 346 (8978): 838
"Sir--Ellis and colleagues' proof that inpatient general medicine "is evidence based" at last puts to rest some extraordinary assertions to the contrary. 82 percent of treatments given were evidence based and no clinician could quarrel with the treatment of the 18 percent of patients falling outside the very strict criteria set in this study. The stimulus for this work was the widely disseminated belief (not evidence based) that no more than 10 percent of physicians' interventions were supported by objective evidence that they did more good than harm. "Editor's choice" in the BMJ of April 29, 1995, under the title "Celebrating evidence based everything", stated "six months ago in Britain the phrase `evidence based medicine' produced blank looks...now.. .its everywhere". "Evidence based medicine" is a neologism for informed decision making, and this example of newspeak would have delighted George Orwell. The presumption is made that the practice of medicine was previously based on a direct communication with God or by tossing a coin.

Much of what seemed certain as a result of double-blind trials later turns out to be wrong. Forty years ago, there was the routine use of anticoagulants for all patients with a myocardial infarction, as the result of a flawed double-blind trial. A blind eye was turned on the double-blind trials of corticosteroid therapy where the control groups clearly failed to show the obvious changes in appearance seen in the treatment groups.

Ellis and colleagues show that, once diagnosed, patients usually get the correct treatment. The real problem in clinical medicine is the diagnosis on which all treatment is based. The use of lectures has been downgraded and much teaching takes place in the form of discussion groups with "facilitators" and "motivators", so that if skills in clinical medicine have been deteriorating that is hardly surprising. Some have attempted to rewrite the history of early post-war medical education and denigrate the great men of that era. This study shows the typical work of an efficient unit and the rational approach to treatment, practised by similar units stretching back over the years. The division should not be between academic and non-academic medicine but between good and bad doctors".
 


Bradley, Fiona; Field, Jenny. Evidence-based medicine [Letter to the Editor]. Lancet 1995; 346 (8978): 838-839

"Sir--Evidence-based care has the potential to rescue us from sinking in a sea of papers. It also, and especially for generalists, keeps us up to date with rapidly changing management strategies and identifies areas where further research is needed. However, proponents of the movement threaten to swamp us in a tidal wave of enthusiasm, and the report from Ellis and colleagues raises two concerns for us.

We agree that in evaluating the scientific basis for medical care the patient, rather than the intervention, is the right denominator but this study used an alternative denominator ("primary diagnosis"), a choice which reduces complexity but drifts from the reality of many patients presenting with more than one problem and excludes the undiagnosed. This approach may alter the percentage of interventions judged to be evidence based, by reducing the "grey zones of clinical practice". (1) Also, might the assignment of diagnosis have been influenced by both the choice of treatment and the available evidence? In our specialty (primary care) the diagnostic label may be determined by treatment given rather than the reverse. (2) The decision to assign a single primary diagnosis of poisoning to 15 patients admitted after overdoses could reflect better randomised trial evidence available for management of poisoning than for psychiatric illness. In answering the question--"What percentage of interventions used with a given number of patients are evidence based?"--the three steps of assigning diagnoses, deciding on treatments (which may be multiple for an individual patient), and evaluating evidence need to be separated.

Our second reservation is more fundamental. Categorising interventions by evidence makes an implicit value judgment. It is a short step from "without substantial evidence" to "without substantial value". Also, interventions for which outcomes are less easy to measure (eg, in emotional stress) may be devalued. In Ellis's study there was a qualitative difference in the primary diagnoses assigned to groups I and II with "convincing evidence for interventions" (eg, angina, myocardial infarction, deep-vein thrombosis, transient ischaemic attack, oesophagitis) compared with those in group III "without substantial evidence" (eg, inoperable cervical myelopathy, terminal motor-neurone disease, non-cardiac chest pain, confusion). It is difficult to see how some of the group III therapies ("specific symptomatic and supportive care") could be submitted to a randomised trial, the gold standard of evidence-based medicine enthusiasts. Not all that is measured is of value and not all that is of value can be measured. To respond to the Evidence-based Medicine Working Group's question--"Were all clinically important outcomes considered?" (3)--qualitative research methods may have to be used. By acknowledging the limitations of epidemiological evidence we will avoid the spectre of the evidence-based-medicine philosophy being used to devalue the unquantifiable.

References And Notes
1. Naylor CD. Grey zones of clinical practice: some limits to evidence-based medicine. Lancet 1995; 345: 840-42.

2. Howie JGR. Diagnosis: the Achilles heel? J R Coll Gen Pract 1972; 22: 310-15.

3. Guyatt G, Sackett DL, Cook DJ. Users' guides to the medical literature II. JAMA 1993; 270: 2598-601.


Postscript: This sort of study has now been replicated (with similar results) in two other evidence-based-oriented internal medicine inpatient groups, both here [UK] and in Canada [Sackett, 1998]. For example:
 

Michaud G, McGowan JL, van der Jagt R, Wells G, Tugwell P. Are therapeutic decisions supported by evidence from health care research? Archives of Internal Medicine, 1998, 158 (15):1665-1668

Univ Ottawa, Dept Med, 501 Smyth Rd, Ottawa, ON K1H 8L6, Canada; Univ Ottawa, Ottawa Gen Hosp, Dept Internal Med, Ottawa, ON, Canada

Background: One of the most common decisions physicians face is deciding which therapeutic intervention is the most appropriate for their patients. In recent years much emphasis has been placed on making clinical decisions that are based on evidence from the medical literature. Despite the emphasis on incorporation of evidence-based medicine into the undergraduate curriculum and postgraduate medical training programs, there has been controversy regarding the proportion of interventions that are supported by health care research.

Objective: To investigate the proportion of major therapeutic interventions at our institution that are justified by published evidence.

Methods: One hundred fifty charts from the internal medicine department were reviewed retrospectively. The main diagnosis, therapy provided, and patient profile were identified and a literature search using MEDLINE was performed. A standardized search strategy was developed with high sensitivity and specificity for identifying publication quality. The level of evidence to support each clinical decision was ranked according to a predetermined classification. In this system there were 6 distinct levels, which are explained in the study.

Results: Of the decisions studied, 20.9% could be supported by placebo-controlled randomized trials and 43.9% by head-to-head trials. Half of these were shown to be significantly superior to the treatments against which they were being compared. For 10 of the 150 clinical decisions, evidence was found demonstrating alternative therapies as being more effective than the therapy selected.

Conclusions: Most primary therapeutic clinical decisions in 3 general medicine services are supported by evidence from randomized controlled trials. This should be reassuring to those who are concerned about the extent to which clinical medicine is based on empirical evidence. This finding has potential for quality assurance, as exemplified by the discovery that a literature search could have potentially improved these decisions in some cases.



 

Nordin-Johansson A, Asplund K. Randomized controlled trials and consensus as a basis for interventions in internal medicine. Journal of Internal Medicine 2000; 247(1): 94-104.

Asplund K, Umea Univ Hosp, Dept Med, SE-90185 Umea, Sweden.

Objectives. To estimate the proportion of routine clinical interventions in internal medicine that are supported by the results of randomized controlled trials or by consensus amongst experienced internists.

Design. Retrospective review of case records allowed one or more major diagnosis-intervention combination(s) to be identified for each patient. The scientific literature was searched in electronic databases for meta-analyses and randomized controlled trials that supported the specific intervention used. When support from randomized trials was lacking, possible consensus on management was sought by asking national expert panels of experienced clinicians.

Setting. Department of Medicine at a Swedish teaching hospital.

Subjects. A total of 197 consecutively admitted medical inpatients.

Results. Fifty per cent of the diagnosis-intervention combinations (186/369) were supported by results from randomized controlled trial evidence and 34% (125/369) were supported by consensus amongst experienced clinicians. The proportion of interventions based on randomized controlled trials was highest in patients with cardiac (64%) and other circulatory diagnoses (73%). There were no important differences between sexes or between age groups.

Conclusions. Half of the interventions used in routine clinical practice amongst medical inpatients are supported by results from randomized controlled trials. These results refute popular claims that only a small proportion of medical interventions are supported by scientific evidence.


General Practice (Gill et al, 1996)

Gill P. Dowell AC. Neal RD. Smith N. Heywood P. Wilson AE. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ. 312(7034):819-21, 1996 Mar 30.

Centre for Research in Primary Care, Leeds University.

Comments: Comment in: BMJ 1996 Jul 13;313(7049):114; discussion 114-5

Abstract: OBJECTIVES--To estimate the proportion of interventions in general practice that are based on evidence from clinical trials and to assess the appropriateness of such an evaluation.

DESIGN--Retrospective review of case notes.

SETTING--One suburban training general practice.

SUBJECTS--122 consecutive doctor-patient consultations over two days.

MAIN OUTCOME MEASURES--Proportions of interventions based on randomised controlled trials (from literature search with Medline, pharmaceutical databases, and standard textbooks), on convincing non-experimental evidence, and without substantial evidence.

RESULTS--21 of the 122 consultations recorded were excluded because of insufficient data; of the remaining 101 interventions, 31 were based on randomised controlled trial evidence and 51 on convincing non-experimental evidence. Hence 82/101 (81%) of interventions were based on evidence meeting our criteria.

CONCLUSIONS--Most interventions within general practice are based on evidence from clinical trials, but the methods used in such trials may not be the most appropriate to apply to this setting.


Comments and follow-up

Gill and colleagues reviewed 122 consecutive doctor-patient consultations over a two-day period in one general practice training practice and concluded that as much as 81% of the interventions were evidence based (BMJ 1996;312:819-21). ("Evidence based" in both this study and that of Ellis et al refers to interventions supported either by RCTs or by what the authors called "convincing non-experimental evidence": 53% of the interventions considered were RCT-based in the Lancet study, and 31% in the BMJ study.) Specifically, Dr Gill and his colleagues reported on the interventions they applied in a consecutive series of consultations in their general practice in Leeds and found 31% based on RCTs and 51% based on convincing non-experimental evidence [BMJ 1996;312:819-21].
 


Tsuruoka, Koki; Tsuruoka, Yuko; Yoshimura, Manabu; Imai, Koyu; Sekiguchi, Satoko; Mise, Junichi; Asai, Yasuhiro; Nago, Naoki; Igarashi, Masahiro. Evidence based general practice: Drug treatment in general practice in Japan is evidence based [Letter]. British Medical Journal 313(7049) July 13, 1996: 114.

Department of Community and Family Medicine, Jichi Medical School, Minamikawachi, Kawachi, Tochigi, Japan.

"Editor,--P Gill and colleagues report their study of the proportion of interventions in general practice that is evidence based. (1) We performed a similar study to evaluate the basis of such interventions in Japan and found that most (81 percent) are evidence based.

We estimated the proportion of drug treatments given to outpatients in general practice that was based on evidence from randomised controlled trials. The design was a retrospective review of case notes of patients treated between June and December 1995. Forty nine outpatients received 53 drugs prescribed by seven residents for 63 chronic diseases; 28 patients had hypertension. The setting was a training centre for general practice in Japan. New drug treatments, changes to treatment, and the addition of drugs to treatment were classed as subjective interventions. We classified levels of evidence supporting drugs as Ellis et al did (2): (i) evidence from randomised controlled trials, (ii) convincing non-experimental evidence, and (iii) interventions without substantial evidence.

We classified groups (i) and (ii) as the "evidence group" and group (iii) as the "non-evidence group." Each drug was evaluated by discussion with senior doctors. In discussion we used literature retrieved from Medline and personal files of the senior doctors. As a result the evidence group comprised 43 (81 percent) of the drug treatments. Thirty two of the 53 drugs were antihypertensive agents (calcium channel antagonists, angiotensin converting enzyme inhibitors, and alpha adrenergic antagonists) and oral hypoglycaemic drugs. For these drugs there are no randomised controlled trials with a true end point. These drugs were classified as belonging to group (ii) on the basis of certain guidelines. If these drugs had been classified as belonging to group (iii) the evidence group would have comprised 11 (21 percent) of the drug treatments.

Our finding is similar to Gill and colleagues': in about 80 percent of cases we select drugs for chronic diseases in general practice on the basis of evidence from randomised controlled trials and guidelines. It was a problem that this evidence was not in Japanese."

References

1. Gill P, Dowell AC, Neal RD, Smith N, Heywood P, Wilson AE. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ 1996;312:819-21. (30 March.)

2. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is evidence based. Lancet 1995;346:407-10.
 


Chikwe, Joanna. Evidence based general practice: Findings of study should prompt debate [Letter]. British Medical Journal 1996; 313(7049): 114.

Oxford University, St Peter's College, Oxford OX1 2DL.

"Editor,--P Gill and colleagues' adaptation to a general practice setting (1) of a study originally designed to assess interventions in an acute hospital medical firm (2) encouraged me to apply their methodology to acute admissions (n=50) over four weeks in the paediatric department of a district general hospital. My finding that, by Gill and colleagues' criteria, two thirds of primary interventions in this setting were evidence based is perhaps less interesting than the flaws in their study that were highlighted by my attempt to emulate it.

Firstly, Gill and colleagues cite individual randomised controlled trials and state that they did not attempt to assess the methodological quality of the trials identified. In my study at least four diagnosis-intervention pairs could be supported or contraindicated depending on which of two conflicting randomised controlled trials one chose to quote. Differences in the date of publication were not great enough to dictate the choice; an accurate assessment of trial strength is vital in such cases. Ellis et al's solution to this problem was to use overviews in addition to randomised controlled trials. (2)

Secondly, the treatments that fell into Gill and colleagues' category (ii)--"intervention based on convincing non-experimental evidence"--were decided by a consensus of practitioners. Because of the nature of interventions in the paediatric department that I studied, this was the criterion that I adopted.

The inclusion criteria for this category were therefore vastly different from those of Ellis et al, whose category (ii) interventions, such as cardiopulmonary resuscitation, were those "whose face validity is so great that randomised trials were unanimously judged by the team to be both unnecessary and, if a placebo would have been involved, unethical." (2) The general practice study, like mine, therefore included within the authors' definition of evidence based interventions a large number of treatments that proponents of evidence based medicine would call non-evidence based. Such studies are useful for assessing the scientific basis of treatment. When, however, randomised controlled trials are not examined for power and a consensus of practitioners is substituted for such trials in some cases, the finding that two thirds or more of interventions are evidence based is less a cause of satisfaction than a source of debate."

References

1. Gill P, Dowell AC, Neal RD, Smith N, Heywood P, Wilson AK. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ 1996;312:819-21. (30 March.)

2. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is evidence based. Lancet 1995;346:407-10.


Meakin, Richard; Lloyd, Margaret; Ward, Sue. Evidence based general practice: Studies using more sophisticated methods are needed [Letter] British Medical Journal 1996; 313 (7049): 114.

Royal Free Hospital School of Medicine, Department of Primary Care and Population Sciences, London NW3 2PF.

"Editor,--P Gill and colleagues (1) respond to the challenge posed by Ellis et al (2) to assess the extent to which evidence forms the basis of practice in settings other than acute hospitals. They comment on the challenges of identifying the evidence and express concerns about its generalisability and applicability. It is not clear from their methodology, however, whether they assessed the quality of the evidence they identified, though they comment generally on issues related to quality.

We think that several methodological issues are worth highlighting. As a result of the retrospective design of the study the authors assume that the diagnostic label recorded first in the patient's medical record was the primary reason for the patient's presentation. Is this a safe assumption? Many general practitioners have had the experience of patients expressing their main concern as they leave the consulting room. Also, the authors excluded 11 patients from their sample, for whom the "attempt to cure, alleviate, or care for the patient in respect of the primary diagnosis" was referral or investigation. Their reasons for this are not clear as these are valid interventions for which evidence of efficacy might be sought. The inclusion of follow up interventions in the sample may result in the inclusion of patients whose intervention is the result of decisions taken outside general practice.

Two points arise from the results. Firstly, the fact that 76 percent of the interventions were drug interventions compared with the 66 percent reported by Fry (3) casts further doubt on the representativeness of this sample. Also, although the authors report a similar proportion of evidence based interventions to that reported by Ellis et al, (2) a higher proportion of these (50 percent compared with 29 percent) were substantiated by convincing non-experimental evidence. This may reflect the fact that the interventions used in general practice are of a "low tech" nature and were often introduced before randomised controlled trials became commonly used. It means, however, that this evidence is qualitatively different from that in Ellis et al's study and calls into question the appropriateness of using this paradigm in this setting. In our view, the place of evidence based practice in primary care is an important issue and needs further investigation with more sophisticated methodologies."

References

1. Gill P, Dowell AC, Neal RD, Smith N, Heywood P, Wilson AE. Evidence based general practice: a retrospective study of interventions in one training practice. BMJ 1996;312:819-21. (30 March.)

2. Ellis J, Mulligan I, Rowe J, Sackett DL. Inpatient general medicine is evidence based. Lancet 1995;346:407-10.

3. Fry J. General practice: the facts. Oxford: Radcliffe Medical Press, 1993.


Gill, Paramjit S. Author's reply: Evidence based general practice [Letter] British Medical Journal 1996; 313 (7049): 114-115.

Author's reply:

"Editor,--Joanna Chikwe and Richard Meakin and colleagues share the concerns that my colleagues and I have about the quality of randomised controlled trials. I would draw their attention to two further points. Firstly, few randomised controlled trials have been carried out in general practice. Secondly, owing to difficulties with methodological rigour and interpreting the clinical findings for a primary care context, the results of randomised controlled trials may be of questionable validity. (1) Systematic reviews and meta-analyses may offer ways to address the problems of methodological quality, but such reviews cover only limited topics and may themselves lack rigour. Nor will there ever be evidence from randomised controlled trials, systematic reviews, and meta-analyses to support more than a minority of the many interventions of everyday primary care.

Koki Tsuruoka and colleagues highlight our concerns about the use of appropriate end points of treatment in randomised controlled trials. Like them, we used a pragmatic method of defining end points. All antihypertensive drugs were allocated to group (i) even though a reduction in mortality and morbidity has been established with only relatively few drugs. (2) Furthermore, Tsuruoka and colleagues' study raises issues about generalising results to countries where health systems and clinical practice may be very different.

Meakin and colleagues iterate our concern about the diagnostic label recorded in patients' notes and draw attention to the exclusion of patients who were referred or investigated. We too were concerned about the use of the first recorded problem as a primary diagnosis. Our methodology related interventions to diagnostic labels. We deliberately excluded patients sent for investigation and referral because investigations may modify diagnostic labels and referral may modify either diagnostic labels or the treatment plan. In our paper we insisted that the results should not be generalised. Interestingly, however, if the referral and investigation group is added back into the sample the proportion of drug interventions (72 percent) compares favourably with that reported by Fry, (3) given the small sample size.

The main point of our study was not merely to estimate the proportion of evidence based interventions in general practice but to debate the appropriateness of methods used to assess evidence based practice. We consider it misjudged to compare percentages of evidence based interventions in different disciplines. It is now appropriate, however, to shift the debate to exploring alternative paradigms of evidence based care and consider how we can ensure that the increasing body of research evidence is made accessible to all practitioners.

References

1. Pringle M, Churchill R. Randomised controlled trials in general practice. BMJ 1995;311:382-3.

2. Medical Research Council Working Party. MRC trial of mild hypertension: principal results. BMJ 1985;291:97-104.

3. Fry J. General practice: the facts. Oxford: Radcliffe Medical Press, 1993.


P.S. None of their critics have backed up their critiques by doing their own studies, and I know of no others in primary care - David Sackett [1998] 

Suarez-Varela MM, Llopis-Gonzalez A, Bell J, Tallon-Guerola M, Perez-Benajas A, Carrion-Carrion C. Evidence based general practice. Eur J Epidemiol 1999 Oct;15(9):815-9

Unit of Public Health, Hygiene and Environmental Health, Valencia University, Spain. maria.m.morales@uv.es

OBJECTIVES: To estimate the proportion of interventions in general practice that are based on evidence.
DESIGN: A one-year cross-sectional study involving all consultations by patients over age 15 years seen in 34 national primary health care centers.
SETTING: The rural Castellon provincial district within the Valencian Community in eastern Spain, with a total population of 21,155 inhabitants.
SUBJECTS: Of 1990 case histories registered in the course of one year, 4800 consultations were identified; of these, 2341 (49%) distinct diagnosis-intervention pairs were identified and coded.
MAIN RESULTS: The evidence basis for the diagnosis-intervention pairs in the study was derived from a computerized search of the scientific literature published in 1992-1996. The quality of the evidence was classified according to the method of Ellis et al. Within the 2341 diagnosis-intervention pairs, there was positive evidence in support of the intervention used in 55%. The evidence basis was sound for 42%, with 38% being based on Type I (clinical trials) evidence and 4% on Type II evidence. The most frequently presenting diseases involved the circulatory (18.7%), respiratory (14.9%), nervous (14.2%), musculo-skeletal (12.5%) and nutrition and metabolism and digestive systems, with 12.1% each.
CONCLUSIONS: Clinical practice was clearly supported by positive evidence of all Types (I-III) in a total of 55% of interventions, and by good positive evidence of Type I or II in 42% of interventions. The percentage of evidence-based interventions in general practice serving a substantial population in rural Spain was lower than had been reported by some authors.
PMID: 10608361, UI: 20074040


Anaesthesia

Myles PS, Bain DL, Johnson F, McMahon R. Is anaesthesia evidence-based? A survey of anaesthetic practice.  Br J Anaesth 1999 Apr;82(4):591-5

Department of Anaesthesia and Pain Management, Alfred Hospital, Prahran, Victoria, Australia.

We were interested in measuring the proportion of anaesthetic interventions in routine practice that are supported by evidence in the literature. We surveyed our hospital practice, asking anaesthetists to nominate a primary problem (if any) and their chosen intervention. Each intervention was classified into one of four levels according to the strength of the evidence recovered from the literature. We found that 96.7% were evidence-based (levels I-IV), including 32% supported by randomized, controlled trials (levels I and II). These results are similar to recent studies in other specialties and refute the claim that only 10-20% of treatments have any scientific foundation.

Comments: Comment in: Myles PS, Bain D, Johnson F. Evidence-based anaesthetic practice. Br J Anaesth 1999 Aug;83(2):360-2. Comment in: Barnardo PD. Is anaesthesia evidence-based? Br J Anaesth 1999 Oct;83(4):684-5. Comment in: Myles PS, Bain DL. Is anaesthesia evidence-based? Br J Anaesth 1999 Oct;83(4):685.

PMID: 10472229, UI: 99401449


Dermatology

Jemec GBE, Thorsteinsdottir H, Wulf HC. Evidence-based dermatologic out-patient treatment. International Journal of Dermatology 37: (11) 850-854 NOV 1998

Jemec GBE, Strandvejen 97, 3tv, DK-2900 Hellerup, Denmark. Univ Copenhagen, Bispebjerg Hosp, Dept Dermatol D, Copenhagen, Denmark.

Objective To determine the evidence base for routine therapeutic decisions in dermatologic out-patients.

Design A retrospective review of a random sample of primary therapy and literature.

Setting University hospital, dermatologic out-patient clinic in Copenhagen

Material A random sample of the case notes from 115 out-patients.

Method The evidence base of therapy prescribed when the diagnosis was ascertained was studied in literature searches in MEDLINE(R) and EMBASE(R). Evidence was structured into primary evidence consisting of randomized controlled trials, and secondary evidence consisting of follow-up studies or the application of trial results between diseases with pathogenic or clinical similarities, e.g. atopic and seborrheic dermatitis.

Results Randomized controlled trials could be found describing 38% (95% confidence interval: 30-47) of all treatments. Secondary evidence was found for 33% (24-41), while no evidence was found for 23% (16-31) of the given treatments.

Conclusions Approximately three-quarters of dermatologic out-patient therapy is based on scientific evidence ranging from randomized controlled trials to logical deduction from analogous clinical situations. The proportion of evidence-based medicine in dermatologic therapy therefore appears to be comparable with that of internal medicine and may thus be above expectations.


Haematology, Clinical

Galloway M. Baird G. Lennard A. Haematologists in district general hospitals practise evidence based medicine. Clinical & Laboratory Haematology. 19(4):243-8, 1997 Dec.

Department of Haematology, Bishop Auckland Hospitals NHS Trust, County Durham, UK.

Abstract: A study published by the Centre for Evidence Based Medicine in Oxford demonstrated that 82% of primary interventions offered by a general medical team in a 1 month period were evidence based. This contrasted with the traditional view that only 10-20% of medical interventions offered to patients have any scientific foundation. We have carried out a prospective study to determine if the primary interventions we offer to patients are evidence based. In June 1996 all therapeutic decisions which were made in one clinical haematology practice were studied. We included in the analysis the primary haematological diagnosis and the primary intervention offered. Interventions were classified as evidence based if the intervention was based on either evidence from randomized controlled trials, or evidence from well-designed non-randomized prospective or retrospective controlled studies or other convincing non-experimental evidence. In our study 70% of the primary therapeutic decisions made in the 83 patients studied were evidence based. This study reinforces the view that earlier assessments of the degree to which medicine is evidence based were too pessimistic. It is clear from our study that randomized controlled trials need to be developed in areas which represent relatively common clinical problems.


Haematology, Malignant (Djulbegovic et al, 1997)

We have some data (published in abstract form in the Proceedings ASCO 1997;16:416a; the paper is currently under review): A status of the quality of medical evidence in hematology/oncology. B. Djulbegovic, G. Kloecker and G.H. Goldsmith. University of Louisville, Department of Medicine, Louisville, KY.

The extent to which physician practice directly reflects evidence-based decision making is uncertain for most of clinical medicine. The Office of Technology Assessment has made an aggregate estimate of between ten and twenty per cent. Assessment of evidence-based practice for a clinical problem requires identification of all pertinent clinical decisions that must be made, delineation of the range of management strategies, or interventions, that apply to each decision, and quantitative evaluation of the medical evidence for each of these decision-intervention pairs. Following completion of this process for the range of clinical problems and diseases encountered by a practitioner in a specialty area, a reasonable estimate of the extent to which practice is evidence-based can be made.

We have assessed the quality of medical evidence available for the treatment of malignant blood disorders. The analytic process described above was performed for 14 common disorders. Management of these diseases requires 143 major clinical decision-intervention pairs with varying numbers of potential interventions for each decision step. The quality of medical evidence for these interventions was assessed and ranked: (I) evidence originating from randomized trials; (II) evidence derived from single arm prospective studies; (III) evidence based upon retrospective studies or case reports. This analysis reveals that 29 (20%) of interventions are supported by level (I) evidence, 32 (22%) by level (II) evidence, and 82 (58%) by only level (III) data.

We conclude that most of clinical practice in management of hematologic malignancies is not supported by high quality evidence. This analysis helps identify clinical decisions for which additional randomized trial data are needed.

Djulbegovic writes: The above shows that in the field of malignant hematology only 24% of decisions are supported by data from RCT; 21% of decisions were supported by data from single arm prospective studies and the rest (55%) of decisions/interventions were supported by anecdotal/retrospective evidence. However, when the analysis was applied to 255 consecutive patients actually seen in our institution, 78% of the initial decisions/interventions in the management of newly diagnosed hematologic/oncologic disorders could have been based on data obtained from RCT.

He goes on to state: As we were doing this study we were also surprised to find that most claims regarding the extent to which medical practice is based on high quality data are based on opinions and not on data. We found only 3 reports in the literature studying this issue in empirical fashion [Dubinsky & Ferguson (1990), Ellis et al (1995) & Gill et al (1996)].

Paper subsequently published as:

Djulbegovic B, Loughran TP Jr, Hornung CA, Kloecker G, Efthimiadis EN, Hadley TJ, Englert J, Hoskins M, Goldsmith GH The quality of medical evidence in hematology-oncology. Am J Med 1999 Feb;106(2):198-205

Department of Medicine, James Graham Brown Cancer Center, University of Louisville, Kentucky, USA.

PURPOSE: The purpose of this study was to evaluate the quality of the medical evidence available to the clinician in the practice of hematology/oncology.
METHODS: We selected 14 neoplastic hematologic disorders and identified 154 clinically important patient management decision/interventions, ranging from initial treatment decisions to those made for the treatment of recurrent or refractory disease. We also performed a search of the scientific literature for the years 1966 through 1996 to identify all randomized controlled trials in hematology/oncology.
RESULTS: We identified 783 randomized controlled trials (level 1 evidence) pertaining to 37 (24%) of the decision/interventions. An additional 32 (21%) of the decision/interventions were supported by evidence from single arm prospective studies (level 2 evidence). However, only retrospective or anecdotal evidence (level 3 evidence) was available to support 55% of the identified decision/interventions. In a retrospective review of the decision/interventions made in the management of 255 consecutive patients, 78% of the initial decision/interventions in the management of newly diagnosed hematologic/oncologic disorders could have been based on level 1 evidence. However, more than half (52%) of all the decision/interventions made in the management of these 255 patients were supported only by level 2 or 3 evidence.
CONCLUSIONS: We conclude that level 1 evidence to support the development of practice guidelines is available primarily for initial decision/interventions of newly diagnosed diseases. Level 1 evidence to develop guidelines for the management of relapsed or refractory malignant diseases is currently lacking.

Publication Types: Meta-analysis
Comments:  Comment in: Am J Med 1999 Feb;106(2):263-4

PMID: 10230750, UI: 99245781
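As a quick arithmetic check on the abstract above, the three evidence levels should partition the 154 decision/interventions. A minimal sketch (level 1 and level 2 counts taken from the abstract; the level 3 count is derived by subtraction):

```python
# Counts from the Djulbegovic et al (1999) abstract: 154 decision/interventions.
total = 154
level1 = 37                        # supported by randomized controlled trials
level2 = 32                        # single arm prospective studies
level3 = total - level1 - level2   # retrospective or anecdotal evidence only

for label, n in (("level 1", level1), ("level 2", level2), ("level 3", level3)):
    print(f"{label}: {n}/{total} = {n / total:.0%}")
```

The rounded shares come out at 24%, 21%, and 55%, matching the percentages quoted in the abstract and in Djulbegovic's note above.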


Medicare Coverage (Dubinsky & Ferguson, 1990)

In their article "Analysis of the National Institutes of Health Medicare coverage assessment" (Int J Technol Assess Health Care 1990;6:480-8), Dubinsky and Ferguson reviewed coverage decisions on 126 technologies between 1981 and 1987 (thus a certain overlap with the above-mentioned OHTA analysis could be assumed). They were surprised to find only about 20% of these assessments to be based on "clinical trials, case control studies or cohort studies, and almost 80% relied only on expert opinion" (cited from p. 485) - ie not even RCTs! In trying to empirically assess the quality of medical evidence underlying medical/surgical procedures for the purposes of Medicare coverage, Dubinsky and Ferguson concluded that high quality evidence existed for the use of only 21% of the 126 procedures they evaluated.

Paediatrics

Rudolf MCJ, Lyth N, Bundle A, Rowland G, Kelly A, Bosson S, Garner M, Guest P, Khan M, Thazin R, Bennett T, Damman  D, Cove V, Kaur V. A search for the evidence supporting community paediatric practice. Arch Dis Child 80: (3) 257-261 Mar 1999.

Rudolf MCJ, Leeds Gen Infirm, Belmont House, 3-5 Belmont Grove, Leeds LS2 9NP, W Yorkshire, England.
Univ Leeds, Acad Univ Paediat & Child Hlth, Leeds LS2 9NP, W Yorkshire, England.

Aim-Controversy exists regarding the evidence base of medicine. Estimates range from 20% to 80% in various specialties, but there have been no studies in paediatrics. The aim of this study was to ascertain the evidence base for community paediatrics.

Methods-Twelve community paediatricians working in clinics and schools in Yorkshire, Manchester, Teesside, and Cheshire carried out a prospective review of consecutive clinical contacts. Evidence for diagnostic processes, prescribing, referrals, counselling/advice, and child health promotion was found by searching electronic databases. This information was critically appraised and a consensus was obtained regarding quality and whether it supported actions taken.

Results-Two hundred and forty seven consultations and 1149 clinical actions were performed. Good evidence was found from a randomised controlled trial or other appropriate study for 39.9% of the 629 actions studied; convincing non-experimental evidence for 7%; inconclusive evidence for 25.4%; evidence of ineffectiveness for 0.2%; and no evidence for 27.5%. Prescribing and child health promotion activities had the highest levels of quality evidence, and counselling/advice had the lowest.

Conclusions-An encouraging amount of evidence was found to support much of community paediatric practice. This study improved on previous research in other specialties because actions other than medications and surgery were included.


Paediatric Surgery

Kenny SE, Shankar KR, Rintala R, Lamont GL, Lloyd DA. Evidence-based surgery: interventions in a regional paediatric surgical unit. Arch Dis Child 1997 Jan; 76(1): 50-3

Department of Paediatric Surgery, Alder Hey Children's Hospital, Liverpool.

OBJECTIVES: To determine the proportion of paediatric surgical interventions that are evidence-based and to identify areas where randomised controlled trials (RCTs) or further research are required.
DESIGN: Prospective review of paediatric general surgical inpatients.
SETTING: A regional paediatric surgical unit.
SUBJECTS: All consecutive paediatric general surgical patients admitted in November, 1995.
MAIN OUTCOME MEASURES: Each patient on whom a diagnosis had been made was allocated a primary diagnosis and primary intervention (n = 281). On the basis of expert knowledge, Plusnet Medline, and ISI Science Citation database searches, each intervention was categorised according to the level of supporting evidence: category 1, intervention based on RCT evidence; category 2, intervention with convincing non-experimental evidence such that an RCT would be unethical and unjustified; category 3, intervention without substantial supportive evidence.
RESULTS: Of 281 patient interventions, 31 (11%) were based on controlled trials and 185 (66%) on convincing non-experimental evidence. Only 23% of interventions were category 3.
CONCLUSIONS: In common with other medical specialties, the majority of paediatric surgical interventions are based on sound evidence. However, only 11% of interventions are based on RCT data, perhaps reflecting the nature of surgical practice. Further RCTs or research is indicated in a proportion of category 3 interventions.


Psychiatry

Geddes JR, Game D, Jenkins NE, Peterson LA, Pottinger GR, Sackett DL. What proportion of primary psychiatric interventions are based on evidence from randomised controlled trials? Quality In Health Care, 1996, Vol.5, No.4, pp.215-217

Univ Oxford, Warneford Hosp, Dept Psychiat, Oxford OX3 7JX, England; Oxford Radcliffe NHS Trust, Nuffield Dept Med, Res & Dev Programme, Ctr Evidence Based Med, Oxford, England.

Objectives-To estimate the proportion of psychiatric inpatients receiving primary interventions based on randomised controlled trials or systematic reviews of randomised controlled trials.
Design-Retrospective survey.
Setting-Acute adult general psychiatric ward.
Subjects-All patients admitted to the ward during a 28 day period.
Main outcome measures-Primary interventions were classified according to whether or not they were supported by evidence from randomised controlled trials or systematic reviews.

Results-The primary interventions received by 26/40 (65%; 95% confidence interval (95% CI) 51% to 79%) of patients admitted during the period were based on randomised trials or systematic reviews.

Conclusions-When patients were used as the denominator, most primary interventions given in acute general psychiatry were based on experimental evidence. The evidence was difficult to locate; there is an urgent need for systematic reviews of randomised controlled trials in this area.
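Several studies in this guide report proportions with 95% confidence intervals, as in the 26/40 (65%; 95% CI 51% to 79%) figure above. A minimal sketch of where such an interval comes from, using the normal (Wald) approximation for a binomial proportion; the published limits may reflect a slightly different method or rounding:

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion, normal (Wald) approximation."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# Geddes et al: 26 of 40 patients received RCT/systematic-review-based interventions
p, lo, hi = wald_ci(26, 40)
print(f"{p:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # 65% (95% CI 50% to 80%)
```

The Wald limits of roughly 50% to 80% agree with the published 51% to 79% to within rounding; exact (binomial) methods narrow the interval slightly for small samples like this.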


In E-B Psychiatric Services, both in-patient (67% of admissions treated on the basis of SRs and RCTs at the Oxford centre for e-b psychiatry) [Qual Health Care 1996;5:215-7] and out-patient psychiatry [poster and abstract at a psych meeting] documented results as good as or better than those in medicine. [Sackett, 1998]
 


Summers, A.; Kehoe, R. F. Is psychiatric treatment evidence-based? [Letter to the Editor]. Lancet 1996; 347(8998): 409-410

Airedale General Hospital, Keighley, West Yorkshire BD20 6TD, UK.

Sir--We know of no published study of the extent to which psychiatric interventions are evidence based. We investigated this in 158 individuals over 6 weeks during 1995, and identified decisions to initiate new treatments (pharmacological, psychological, or social) from case notes. We excluded decisions to provide patient education, assessment, or monitoring; continue or adjust treatments; change location or provider of treatment; and treat unrelated physical problems. 160 decisions were identified, 75 in outpatients, 11 in community mental-health centre clients, 18 in day patients, and 56 in inpatients. Randomised controlled trials of treatments were identified from published reviews (1,2) and from those already known to us. Evidence was identified to support 85 (53 percent) interventions. The most frequent were specific drug treatments for depression (n=35) and psychotic symptoms (n=10).

A further 16 (10 percent) interventions were not considered because trials would have been unethical. The most frequent interventions in this group were close observation in hospital for individuals at high immediate risk of suicide (n=6) and treatments for related physical illness in depressed patients (n=5).

The remaining 59 (37 percent) interventions did not fall into either category. The most frequent were supportive practical measures (n=12) and non-specific supportive psychotherapy (n=8). We relied on authoritative reviews and well known evidence for our evidence of treatment effectiveness. If there is other evidence our figures would underestimate the proportion that are evidence based. It is likely that we overestimated the extent to which evidence underpins clinical management for the following reasons: consideration of only a limited range of decisions; assumption that diagnoses were accurate and assessments of severity were appropriate; and differences between our patients or treatments and those in the trials on which the evidence is based. These issues have been discussed elsewhere. (3)

We note that there may be treatments that could have been initiated but were not considered. In pursuing higher proportions of evidence-based treatments we may be introducing a bias towards treatments that are easier to test but not necessarily more effective. We did not establish that our evidence-based decisions represented the most effective, or the most acceptable or cost effective, treatment for each individual.

References and notes

1. Wing JK. Mental illness health care needs assessment no 15. Wessex Institute of Public Health Medicine. Oxford: Radcliffe Medical Press, 1994.

2. School of Public Health University of Leeds, Centre for Health Economics University of York, Research Unit Royal College of Physicians. Effective Health Care. The treatment of depression in primary care. Bulletin of the effectiveness of health service interventions for decision-makers, no 5. University of Leeds, 1993.

3. Grimley Evans J. Evidence based and evidence biased medicine. Age Ageing 1995; 24: 461-63. 


Surgery

Howes N, Chagla L, Thorpe M, McCulloch P. Surgical practice is evidence based. Br J Surg 1997 Sep; 84(9): 1220-3

Aintree Hospitals NHS Trust, UK.

BACKGROUND: The quality of surgical research, and particularly the reluctance of surgeons to perform randomized controlled trials, has been criticized. The proportion of surgical treatments supported by satisfactory scientific evidence has not been evaluated previously.
METHODS: A 1-month prospective audit was performed of 100 surgical inpatients admitted under two consultants in a general surgical/vascular unit at an urban teaching hospital; the main illness and interventions were agreed through group discussions in each case. The literature concerning the efficacy of each treatment was reviewed, and the evidence was categorized as: (1) supported by randomized controlled trial evidence; (2) sufficient other evidence of efficacy to make a placebo-controlled trial unethical; or (3) neither of the above.
RESULTS: Of the 100 patients studied, 95 (95 per cent confidence interval (c.i.) 89-98) received treatment based on satisfactory evidence (categories 1 and 2) and, of these, 24 patients (95 per cent c.i. 17-35) received treatments based on randomized controlled trial evidence and 71 had treatments based on other convincing evidence (95 per cent c.i. 62-80).

CONCLUSION: Inpatient general surgery is 'evidence based', but the proportion of surgical treatments supported by randomized controlled trial evidence is much smaller than that found in general medicine. Some reasons for this are clear, but the extent to which surgical practice needs to be reevaluated is not. Current methods for classifying and describing evidence in therapeutic studies need improvement.


Surgery, Paediatric

Baraldini V. Spitz L. Pierro A. Evidence-based operations in paediatric surgery. Pediatric Surgery International. 13(5-6):331-5, 1998 Jul.

Institute of Child Health and Great Ormond Street Hospital for Children, 30 Guilford Street, London WC1N 1EH, UK.

It has been assumed that only 10% of medical interventions are supported by solid scientific evidence. The aim of this study was to determine the type of research evidence supporting operations in a tertiary referral paediatric surgical unit. All patients admitted over a 4-week period to two surgical firms were enrolled in the study. All major operations carried out on each patient since birth were evaluated. Patients for whom a diagnosis was not reached were excluded. A bibliographic database (MEDLINE) was used to search for the articles published between January 1986 and December 1995 on the analysed operations. The type of evidence supporting the operations was classified as follows: I=evidence from randomised controlled trials (RCTs); II=self-evident intervention (obvious effectiveness not requiring RCTs); III=evidence from prospective and/or comparative studies; IV=evidence from follow-up studies and/or retrospective case series; and V=intervention without substantial evidence for or against results of randomised trials. Seventy operations (32 individual types) were performed on 49 patients (1-5 operations/patient); 18 (26%) were supported by RCTs (type of evidence I). Two patients (3%) received a self-evident intervention (type II); 48 operations (68%) were based on non-randomised prospective or retrospective studies (type III=13%; type IV=55%). Two patients (3%) received an operation not supported by or against convincing scientific evidence (type V). A significant proportion of operations in paediatric surgery is supported by RCTs. However, the vast majority of these trials were conducted on adult patients. Sixty-eight per cent of the operations were based on prospective follow-up studies or retrospective case series, which may not represent solid scientific evidence. More RCTs are needed in paediatric surgery. 


Comment:

I recently heard (last week at the University of Utah) R. Brian Haynes of McMaster state that, depending on specialty, 20% (pediatric surgery) to 53% (general practice - the Oxford study, I think) of clinical interventions (?manoeuvres) have randomized clinical trials as evidence. He also stated that there are ~4,000 RCTs in the literature. If you divide this number by the number of ICD-9 codes (assuming a random distribution of RCTs, which is clearly untrue), then the percentage is ~25-30%. The percentage of applicable RCTs that can be identified obviously depends on whether you are talking about specific diseases, subtypes of diseases, clinical interventions, or specific procedures (in the CPT sense). The point is that there is a substantial lack of evidence by any criteria you use, and hence a continued need to produce quality evidence (i.e. the research industry is not in danger of going out of business). - Scott Endsley [1998]
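Endsley's division can be reproduced as a back-of-envelope check. The ~4,000 RCT figure is his; the ICD-9 code counts below are my rough assumptions (ICD-9-CM has on the order of 13,000-15,000 diagnosis codes), and, as he says himself, the uniform-distribution assumption is clearly untrue:

```python
# Back-of-envelope version of Endsley's arithmetic: ~4,000 RCTs spread
# (unrealistically) evenly over the ICD-9 codes. Code counts are assumed.
def rcts_per_code(n_rcts: int, n_codes: int) -> float:
    """Average number of RCTs per ICD-9 code, read loosely as 'coverage'."""
    return n_rcts / n_codes

for n_codes in (13_000, 15_000):
    print(f"{n_codes:,} codes -> ~{rcts_per_code(4_000, n_codes):.0%} coverage")
# ~31% and ~27%, in the ballpark of the quoted ~25-30%.
```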


Surgery, Endocrine

Thomusch O, Dralle H. Endocrine surgery and evidence-based medicine. CHIRURG 71: (6) 635-645 JUN 2000

[Language: German]

Dralle H, Univ Halle Wittenberg, Klin Allgemeinchirurg, Ernst Grube Str 40, D-06097 Halle, Germany.
Univ Halle Wittenberg, Klin Allgemeinchirurg, D-06097 Halle, Germany.

Introduction: The aim of this literature review is to assess current knowledge on nine questions of current interest in endocrine surgery and to classify it with regard to levels of evidence-based medicine (EBM).
Methods: The literature in Medline and EM-Base was reviewed. Only retrospective or prospective comparative studies with statistical analysis were selected.
Results: (See Table 8.)
Conclusion: With respect to the current literature, only routine identification of the recurrent laryngeal nerve (RLN) and the minimally invasive approach for adrenalectomy can be regarded as evidence-based. To answer the remaining questions, prospective studies are needed.


Surgery, Laparoscopic

Slim K, Lescure G, Voitellier M, Ferrandis P, Le Roux S, Dumas PJ, Lere JM, Patouillard P, Caburet A, Baudet B, Rolet JP, Prat M, Pezet D, Chipponi J. Is laparoscopic surgery really evidence-based in everyday practice? Results of a prospective regional survey in France. Presse Medicale 27: (36) 1829-1833 NOV 21 1998

Slim K, Hotel Dieu, Serv Chirurg Gen & Digest, BP 69, F-63003 Clermont Ferrand, France.CHU Clermont Ferrand, Serv Chirurg Gen & Digest, Clermont Ferrand, France.Hop Thiers, Serv Chirurg Gen & Digest, Thiers, France. Hop Riom, Serv Chirurg Gen & Digest, Riom, France. Hop Issoire, Serv Chirurg Gen & Digest, Issoire, France. Hop Aurillac, Serv Chirurg Gen & Digest, Aurillac, France. Hop Montlucon, Serv Chirurg Gen & Digest, Montlucon, France. Hop Puyen Velay, Serv Chirurg Gen & Digest, Le Puy, France.

OBJECTIVE: Evidence-based medicine is a growing paradigm in health care. We conducted a prospective study to determine whether laparoscopic surgery is truly evidence-based in everyday practice.

METHODS: A prospective regional survey was performed in 11 French hospitals (one university and 10 district hospitals) to ascertain how general laparoscopic surgery was conducted during the last 3 months of 1997. We also searched the electronic databases for original articles on laparoscopic procedures. The methodology of randomized trials was analyzed and procedures were classed by level of evidence. We assumed that an evidence-based procedure was one which had been validated by well-designed randomized controlled or prospective trials giving homogeneous results.

RESULTS: One half of the procedures performed had been evaluated by randomized controlled trials. Among the 428 laparoscopic procedures, 334 (78%) were found to be evidence based (CI: 74.1%-81.9%). Twelve of the 18 indications for laparoscopy (67%) were evidence based (CI: 62.5%-71.5%). There was no difference between university teaching hospitals and general district hospitals.

CONCLUSION: Contrary to initial criticisms, the practice of laparoscopic surgery appears to be truly evidence-based in the majority of cases.
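The interval reported for the 334/428 evidence-based procedures is consistent with a standard normal-approximation confidence interval for a proportion. A minimal sketch of that calculation (my reconstruction, not the authors' code; the helper name is mine, and any small mismatch with the published figures is rounding):

```python
import math

# Normal-approximation 95% CI for a proportion, applied to the Slim et al.
# figure of 334 evidence-based procedures out of 428 (reported CI 74.1%-81.9%).
def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

lo, hi = proportion_ci(334, 428)
print(f"{334/428:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```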


Surgery, Thoracic

Lee JS, Urschel DM, Urschel JD. Is general thoracic surgical practice evidence based? Ann Thorac Surg 2000 Aug;70(2):429-31

Department of Thoracic Surgical Oncology, Roswell Park Cancer Institute, Buffalo, New York, USA.

BACKGROUND: In evidence-based medicine clinical decisions are based on experimental evidence of treatment efficacy. There are no data on the extent to which general thoracic surgical practice is evidence based.
METHODS: A list of 50 thoracic surgical treatments was derived from the operating room log of one surgeon practicing at both a tertiary care cancer center and an affiliated community general hospital. Minor diagnostic procedures and procedures performed as part of experimental protocols were excluded. For each treatment a Medline search was done to obtain the best published evidence supporting the treatment's efficacy. The evidence was then placed in one of three categories developed by the Oxford Centre for Evidence-Based Medicine: (1) evidence from randomized controlled trials (RCTs); (2) convincing non-experimental evidence; and (3) interventions without substantial evidence.
RESULTS: Category 1 evidence supported 7 of 50 thoracic surgical treatments. Category 2 evidence supported 32 treatments, and 11 treatments were without substantial supportive evidence.
CONCLUSIONS: The majority of commonly performed general thoracic surgical procedures are supported by nonexperimental evidence. Although there are many obstacles to the performance of surgical randomized controlled trials, the limitations of nonrandomized studies are such that continued emphasis on randomized controlled trials in general thoracic surgery is warranted. This study could serve as a baseline reference for future assessments of evidence-based medicine in general thoracic surgical practice.

PMID: 10969657, UI: 20424058


Postscript

Pickin, Cornell & Booth (ScHARR, University of Sheffield) are currently analysing results from a retrospective casenote-based study of presentations at 13 A&E departments. In addition to the unique multi-site nature of this investigation, the study has the following characteristics: analysis of all problem-intervention pairs (not just the primary diagnosis/intervention) and answering the question "was this the best available evidence-based intervention?" (not "was this intervention based on evidence?"). - Andrew Booth [1998]


Some debate about the denominator (patients versus interventions)

Dr Dean Jenkins Specialist Registrar Llandough Hospital, Wales wrote to evidence-based-health:

"I am well into an audit project with a colleague from Newport looking at the percentage of main treatments that are evidence based in the elderly. It is based on Sackett's audit of general inpatient care published in the Lancet a few years ago. Basically we have taken 1 month of admissions to a teaching hospital and a district general hospital for patients aged 75 or more (about 160 admissions). The two teams have sat down and decided:

- the main problem

- the main intervention

e.g. pneumonia - Rx with antibiotics; stroke - referred to stroke unit; angina - IV heparin, oral nitrate added.

We will then go through the Cochrane database, Medline etc. to find the evidence and grade it as follows.

1 - treatment based on 1 or more RCTs (e.g. MI - thrombolysis)

2 - treatment based on convincing non-experimental evidence (e.g. meningitis - antibiotics)

3 - treatment not based on evidence (e.g. supportive care in major intracerebral haemorrhage)

Should we include another category such as 1b - treatment based on RCTs but with the elderly excluded from the trial?

Is anyone else doing a similar audit and if so could we combine our figures? That is if they were gathered in the same way."

In reply David Sackett wrote:

"Really great that you are doing this! It will add care of the elderly to surgery (x2), paediatrics, psychiatry (x2) and medicine (x a few).

Thoughts:

a. Am I correct that your denominator is patients (like most of the other audits) rather than treatments (since every patient receives great numbers of the latter)?

b. Your level 1b (RCTs that excluded the elderly) is an interesting one. It would make sense to me in terms of tweaking those of us who do and teach RCTs to pay more attention to this group of individuals.

c. But it doesn't make sense to me in terms of extrapolation of trial results to the elderly unless you or I can come up with a biologic rationale why stuff should stop working on somebody's birthday.

d. What sometimes does change, however, is the PEER (my patient's expected event rate). Thinking out loud, most of the treatments I use in the elderly are for patients whose PEERs are higher than the typical patient's in the trial. As long as I'm happy that the RRR (relative risk reduction) isn't birthday-dependent, NNTs (numbers needed to treat) fall with advancing age and the case for offering Rx goes up, not down, with age. So, unless the risks of side-effects/toxicity are such that my elderly patient would rather not take that latter risk to achieve the former benefit, I offer it.

e. All the patients in our study (Ellis et al, 1995) accepted our advice and took the treatment we offered them. But this wasn't the case in our last month on service (when a quite rational nonagenarian refused laparotomy for his perforated ulcer). Do you want to consider having a special category for level 1 Rx refused?

f. When we're looking for evidence, we search in the order of:

1. our own CATs

2. Best Evidence (the CD of all the back issues of ACP Journal Club and EBM)

3. three different sections of the Cochrane Library (Cochrane reviews, DARE, and their database of RCTs, the last of which gives us just about twice the yield of MEDLINE)

Hope this helps, and I look forward to learning the results of your important study."
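Sackett's point (d) rests on standard NNT arithmetic: NNT = 1/ARR = 1/(PEER x RRR), so at a fixed RRR the NNT falls as PEER rises. A minimal sketch with illustrative numbers (the 25% RRR is an assumption chosen for illustration, not a figure from any trial cited here):

```python
# Number needed to treat from PEER (patient's expected event rate) and
# RRR (relative risk reduction): NNT = 1 / absolute risk reduction
#                                    = 1 / (PEER * RRR).
def nnt(peer: float, rrr: float) -> float:
    return 1 / (peer * rrr)

rrr = 0.25  # illustrative relative risk reduction, assumed constant with age
for peer in (0.10, 0.20, 0.40):
    print(f"PEER {peer:.0%} -> NNT {nnt(peer, rrr):.0f}")
# As PEER rises with age, NNT falls: 40, then 20, then 10 -
# so the case for offering treatment strengthens, as Sackett argues.
```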

Dr Tom Dent, Consultant in public health medicine, North and Mid Hampshire Health Authority contributed:

"There have been several papers of this kind published recently and they have helped dispel some people's belief that clinical decisions are usually taken on the basis of no evidence at all. The problem is that studies can often be found to justify several different courses of action. Finding a study to support what was done proves neither that the decision was taken because of knowledge of the study, nor that there was no alternative decision supported by more or better evidence.

A more interesting question to answer is

"What proportion of clinical decisions taken in [insert your professional setting] are supported by research evidence that shows that decision was the best one for the patient?" This might be harder to answer, but searching for evidential support for alternative management approaches should not be impossible."

David Sackett replied:

"Another thought stimulated by Tom Dent's note:

Although it's nice to justify decisions after the fact, wouldn't it be a good idea to determine which evidence you had on tap when you were actually treating the patients or, barring that, which patients you thought - at the time you made their Rx decision - were receiving evidence-based care?"

Murray Enkin wrote:

"I feel it necessary to comment on the first of DLS's thoughts "am i correct that your denominator is patients (like most of the other audits) rather than treatments (since every patient receives great numbers of the latter)?"

I accept, of course, that most published audits use patients as the denominator, rather than treatments. This would be quite proper if the question is "what proportion of patients receive evidence-based care?"

If, on the other hand, the question is "what proportion of the treatments we use is based on good evidence of effectiveness?", the proper denominator would be treatments, rather than patients.

I think that the gross discrepancy between the reported low proportion of evidence-based therapy (claimed, without specifying the denominator) in the past, and the high proportion of evidence-based treatment reported in the more recent observational studies (which use patients as the denominator) is based on the fact that they were answering different questions.

Both questions ("what proportion of patients receive evidence-based care?" and "what proportion of care is evidence-based?") are important. But they are clearly different. Perhaps for clinicians the first is more important, but for those of us who are involved in the evaluation of the effectiveness of specific care practices the latter question is more relevant.

As this issue has been bothering me for some time now I would very much appreciate learning the opinions of others about this. Am I barking up the wrong tree?"

Chris Taylor, Queen Mary's A&E, Sidcup, Kent, UK added:

"I'm always interested in evidence, but despite reading your mail several times I'm still not entirely clear about the question. As I see it, it is:

"What percentage of elderly patients admitted to hospital receive a 'main treatment' for their 'main condition' that was evidence-based ?"

Assuming I have interpreted you correctly, I would argue that the first step is to compile a list of conditions for which evidence-based treatments exist. That in itself is a mammoth task."