Comm Eye Health Vol. 17 No. 51 2004 pp 40 - 41. Published online 01 October 2004.

Bridging the gap to evidence-based eye care

Richard Wormald

Co-ordinating Editor, Cochrane Eyes and Vision Group (CEVG), International Centre for Eye Health, London School of Hygiene and Tropical Medicine, Keppel Street, London WC1E 7HT.

In the first article in this series, I touched on the enormous challenge of making access to information equal for those who need it, at the time and place they need it. Only if this is achieved can we successfully promote an evidence-based approach to health care. The move towards open access publishing is taking us some way towards this goal. However, there are further gaps to be bridged if we are to turn eye care workers into evidence-based practitioners. We can define an evidence-based practitioner as one who combines individual knowledge and expertise with the best available external clinical evidence from systematic research.

Treatment for retinopathy of prematurity (ROP). Photo: International Centre for Eye Health (www.iceh.org.uk), London School of Hygiene & Tropical Medicine.

The standard approach to evidence-based health care is:

  1. Formulating a question
  2. Looking for evidence, which usually means searching the scientific literature
  3. Appraising the information, i.e. deciding if it is reliable
  4. Applying the evidence
  5. Evaluating the process.

Formulating a question

The first gap we need to bridge relates to the way health care providers think about knowledge and information. Asking a question can be difficult for those who have been taught to practise medicine by rote, memorising lists of causes and treatments without ever being encouraged to ask “why?” or “how do you know?” Creating a thirst for knowledge, and encouraging practitioners to act on it, is the challenge for implementing evidence-based practice in ophthalmology.

An important starting point is to challenge anything new; just because it is new does not mean it is better. It is almost certainly going to be more expensive, so what additional benefit justifies the additional cost? But it is almost as important to challenge established practice: just because we have always done things in a certain way does not mean it is the best way of doing them. How many of our readers continue to treat corneal abrasion with antibiotic ointment and a pad?

The few trials that have been conducted suggest that padding slows healing and increases discomfort. There is no evidence that padding reduces the risk of secondary infection (though this might be more of a concern in poorer countries, with less sanitation or dustier environments, from which no such studies have been reported).

Looking for evidence

Once we have a question, the search for an answer follows, and here lies another gap. As already stated, there are major inequalities in the access to and availability of reliable information sources. A major problem is that the holders of information demand payment for access; the amount is rarely adjusted for ability to pay, and prices are set by richer economies. Looking for evidence can also be a time-consuming process, and time is something a busy clinician in an overwhelmed, under-resourced outpatient clinic will rarely, if ever, have. Those with internet access can run simple searches to find answers; however, the average searcher will find only a fraction of the available evidence, which is why services that synthesise and evaluate research, such as The Cochrane Library, are valuable.
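As an indication of what such a simple search might look like, here is a minimal sketch using Biopython's Entrez interface to query PubMed for randomised trials on corneal abrasion. The search term and e-mail address are illustrative assumptions, not a prescribed search strategy.

```python
from Bio import Entrez  # Biopython's interface to the NCBI Entrez databases

# NCBI asks for a contact address; this one is a placeholder.
Entrez.email = "you@example.org"

# Hypothetical query: trials on corneal abrasion, filtered by
# PubMed's 'randomized controlled trial' publication type.
term = '"corneal abrasion"[tiab] AND "randomized controlled trial"[pt]'

handle = Entrez.esearch(db="pubmed", term=term, retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found; first IDs:", record["IdList"])
```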

Appraising the information

Once we have found information to answer our question, we must appraise it: is it reliable? The challenge for evidence-based health care is to ensure that information is reliable, of good quality, free from bias, and not linked to the personal advantage of any individual or group. It should be available and accessible to users in a form that is interpretable and relevant, and it should be quality controlled. How, then, is evidence graded? What hierarchies are used for judging its quality?

Table 1 gives an example of one such hierarchy, from the Canadian Task Force on Preventive Health Care. Randomised controlled trials (RCTs) provide the best evidence for the effectiveness of interventions: the control group provides the comparator (which allows us to quantify an estimate of the effectiveness of a treatment), and randomisation prevents bias in selecting who is treated and who is a control (illustrated in the sketch after Table 1). Randomisation also deals with baseline dissimilarities between treatment and control groups, apparent or hidden, which might otherwise account for the outcome of the trial rather than the intervention; these are termed confounders. The randomised controlled trial, and especially the systematic review of several randomised controlled trials, is more likely to inform us than to mislead us.

Table 1. Levels of evidence – research design rating

I Evidence from randomized controlled trial(s)
II-1 Evidence from controlled trial(s) without randomization
II-2 Evidence from cohort or case-control analytic studies, preferably from more than one centre or research group
II-3 Evidence from comparisons between times or places with or without the intervention; dramatic results in uncontrolled experiments could be included here
III Opinions of respected authorities, based on clinical experience; descriptive studies or reports of expert committees

Source: Canadian Task Force on Preventive Health Care. Available from: www.ctfphc.org
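To make the idea of unbiased allocation concrete, the following is a minimal sketch (in Python, with hypothetical participant IDs) of simple coin-toss randomisation; real trials typically use more elaborate schemes, such as blocked or stratified randomisation.

```python
import random

def randomise(participants, seed=None):
    """Allocate each participant to 'treatment' or 'control' at random.

    With simple (coin-toss) randomisation every participant has the same
    chance of receiving the intervention, so known and hidden confounders
    are balanced between the two arms on average.
    """
    rng = random.Random(seed)
    return {p: rng.choice(["treatment", "control"]) for p in participants}

# Hypothetical participants, identified only by ID.
allocation = randomise([f"P{i:02d}" for i in range(1, 9)], seed=42)
for pid, arm in allocation.items():
    print(pid, arm)
```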

Human beings are usually rather quick to attribute cause and effect. Traditional methods of healing rely on QED-type (quod erat demonstrandum – “that which was to be demonstrated”) evidence: the patient was ill, the doctor treated, and the patient recovered; therefore the treatment was effective. We forget that the patient might have got better anyway, and that the treatment may have been a sham or placebo. Hence it is only RCTs that can properly attribute cause and effect and, additionally, estimate how powerful the effect of an intervention is: not just whether it works, but by how much the probability of an adverse outcome is reduced or a benefit increased. One intuitively useful measure of effect that can be derived from trials is the NNT – the number needed to treat for one patient to benefit. For ocular hypertension, this may be as high as 40.
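As a worked illustration, the NNT is simply the reciprocal of the absolute risk reduction. The event rates below are invented for the arithmetic, not taken from any particular trial.

```python
def number_needed_to_treat(control_event_rate, treated_event_rate):
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("No absolute risk reduction: NNT is undefined.")
    return 1.0 / arr

# If 10% of untreated ocular hypertensives progressed to glaucoma but
# only 7.5% of treated patients did, ARR = 0.025 and NNT = 1/0.025 = 40:
# forty people must be treated for one to avoid progression.
print(number_needed_to_treat(0.10, 0.075))  # -> 40.0
```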

Other types of study, cohort and case-control (often called observational studies), can quantify effect or risk but are more prone to bias and confounding. The commonest type of medical report in ophthalmology, the case series, has no comparator or control group: it is not only likely to be biased through selection but also cannot provide an estimate of effect size. Yet it is that very basic sort of evidence, the QED type, which surgeons and eye doctors often seem to think sufficient. Much needs to be done to educate clinicians about the nature and quality of the evidence on which we base our practice. It is also necessary to apply quality control to the studies themselves: RCTs vary greatly in quality, and stringent criteria for evaluating individual studies need to be applied.

Applying the evidence

One common difficulty in relating existing evidence to the patient in front of you is that your patient may have little in common with the subjects who participated in the trials; this is the question of a trial's external validity. If the inclusion and exclusion criteria are so tight that only a small sample of the population at risk is included, it can be difficult to interpret and apply the evidence. For example, in chronic glaucoma we only have evidence of the effectiveness of lowering intraocular pressure in ocular hypertension, early manifest glaucoma and normal tension glaucoma, because these are the only patient groups included in trials comparing treatment with none. And apart from a few African Americans, the participants were all white Europeans. How far can we apply those findings to the populations of the rest of the world? Another gap concerns people who are routinely excluded from trials, such as pregnant women and children.

Implementing evidence in practice is difficult, and doctors often use evidence only when it fits their pre-existing beliefs. One strategy is for agencies to develop evidence-based guidelines. There are, however, numerous examples of guidelines that are not evidence-based, since these are much easier to produce: typically they are developed by a select group of ‘experts’ whose dominant opinion becomes the basis of recommended practice. That is not evidence-based.

Evaluating the process

The final step in evidence-based practice is to monitor the effectiveness of interventions in the real world. Trials are clinical experiments conducted under carefully controlled conditions, and it is well known that outcomes in trials tend to be better than those in ordinary clinics; this is another gap between evidence and practice. Regular monitoring of outcomes is therefore an important part of the evidence base.

Personal audit is an excellent means for surgeons to monitor and improve their practice, and research is ongoing to develop a simple database for use in VISION 2020 programmes globally. Large-scale representative studies of outcomes are also important, to establish standards against which audit can be conducted. Several large outcome studies of cataract surgery have been conducted, for example in the USA, the UK and Scandinavia. The Scandinavian study is a large register of cataract surgery in Sweden, which allows monitoring of rare but important adverse events such as endophthalmitis, and also provides information on the effectiveness of prophylactic measures that cannot be detected in trials (trials are never large enough to detect differences in the occurrence of rare events). Registers have likewise been important in establishing the evidence base for corneal transplantation.
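As an indication of what a simple personal audit computation might involve, a few lines of code are enough to track a surgeon's results against a standard. The records below are hypothetical, and the pass mark assumes the commonly cited WHO benchmark for cataract outcomes of presenting acuity 6/18 or better in at least 80% of eyes.

```python
# Hypothetical audit records: post-operative presenting visual acuity,
# one Snellen value per operated eye.
outcomes = ["6/6", "6/9", "6/18", "6/12", "6/60", "6/9", "6/6", "6/36"]

# 'Good' outcome taken here as presenting acuity of 6/18 or better.
GOOD = {"6/6", "6/9", "6/12", "6/18"}

good_rate = sum(va in GOOD for va in outcomes) / len(outcomes)
print(f"Good outcome in {good_rate:.0%} of eyes (target: at least 80%)")
```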

Adverse events can also be monitored through surveillance systems; these are in place in many Western countries for collecting information on the adverse effects of drugs. However, few such systems are yet established for surgery, and this kind of information collection is currently impossible in poorer countries. Global efforts, perhaps through VISION 2020, will in time provide the infrastructure for such surveillance.

Conclusion

Compared with other specialties, ophthalmology has a long way to go in developing its evidence base. I have peer-refereed a systematic review for the Cochrane Tobacco Addiction Group that included more than 90 randomised controlled trials of nicotine replacement therapy; in our review of ivermectin for onchocerciasis, we could find only five relevant studies. This reflects the enormous bias, driven by the availability of research resources, towards evidence for diseases affecting affluent nations.

But there is also a need for ophthalmologists, like practitioners in many other surgically dominated specialties, to recognise the importance of evidence beyond QED in informing their practice.