Why Vision Research Must Include Systematic Reviews

Recently, the value of quality research has become an increasingly important topic of conversation. As the world continues to navigate a global pandemic and the evolving evidence on its spread, its symptoms, and the pathways toward a vaccine, we long for accurate data from which effective solutions can be found.

While the novel coronavirus still has us searching for answers through research, studies on poor vision and its consequences seem almost endless. However, the vast disparity across types of studies, questions asked, quality of studies, and calls to action requires us to undertake yet another kind of research: the systematic review.

To bring greater awareness to the impact of poor vision, and a richer understanding of the existing research on that impact, the VII recently commissioned a number of systematic reviews across multiple affected pillars. Working with researchers and authors from the African Vision Research Institute (AVRI) and the London School of Hygiene and Tropical Medicine, this month we proudly added five new systematic reviews and more than 100 referenced research studies to the VII research database.

We asked Clare Gilbert, MD, VII advisory board member and co-author of our Systematic Review on the Impact of Uncorrected Refractive Error on Children, to define what is meant by a systematic review, shed light on the process, and highlight the importance of such reviews in the overall research and practice landscape.

According to Dr. Gilbert, systematic reviews, which bring together all the available evidence on a particular topic from published studies, are very important for two broad reasons. First, they summarize what is already known about the topic; second, they identify research questions for which there is inadequate or no evidence.

The first point is about ethics: it is unethical to include people in research studies on topics for which there is already very good evidence. The second point is equally important, as it highlights topics where further research is needed. While both matter to researchers, it is the latter that leads to new studies designed to address an evidence gap identified through the systematic review.

So how is a systematic review conducted? First, the research question to be addressed must be carefully delineated. This question should include the following four components:

  1. Population group of interest
  2. Intervention of interest and any anticipated harms
  3. Interventions to be compared
  4. Outcomes of interest

Second, the specific search terms to be applied to databases of published studies must be identified. With multiple databases available, each containing many millions of published studies, working with an information scientist to compile the search terms is recommended to keep the number of studies to be reviewed to a manageable minimum. Once studies are identified through the searches, at least two researchers who know the topic well must select the final studies to be included in the review, applying clear inclusion and exclusion criteria.
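To give a sense of what such a search strategy can look like, here is a purely hypothetical boolean search string, loosely modeled on the topic of uncorrected refractive error in children. It is not the strategy used in the commissioned reviews, and an information scientist would refine and expand it considerably:

  ("refractive error" OR myopia OR hyperopia OR astigmatism)
  AND (child* OR adolescen* OR "school age")
  AND (spectacles OR "vision correction" OR uncorrected)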

Next, relevant data from the included studies must be extracted so further statistical analysis can be undertaken, which may entail combining the data from several studies with similar designs and outcomes in a meta-analysis. Alternatively, where it is not possible to combine the data, summary data and common conclusions are identified from the various validated studies. The final step is to consider the reliability of the evidence, which is influenced by the type of study (for example, randomized clinical trials are given more weight than non-randomized comparisons) and by how well each study was designed and conducted (to assess the risk of bias).
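For readers curious about the arithmetic behind a meta-analysis, below is a minimal sketch in Python of fixed-effect, inverse-variance pooling. The effect estimates and standard errors are invented purely for illustration; they are not drawn from any of the reviews discussed here, and real meta-analyses involve many additional considerations such as heterogeneity, random-effects models, and bias assessment.

  # Minimal sketch of fixed-effect (inverse-variance) pooling.
  # The numbers below are invented for illustration only; they are
  # not data from the VII-commissioned reviews.
  effects = [0.30, 0.45, 0.25]      # hypothetical per-study effect sizes
  std_errors = [0.10, 0.15, 0.08]   # hypothetical standard errors

  # Each study is weighted by the inverse of its variance, so more
  # precise studies contribute more to the pooled estimate.
  weights = [1 / se ** 2 for se in std_errors]
  pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
  pooled_se = (1 / sum(weights)) ** 0.5

  print(f"Pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")

In practice, statisticians and dedicated software handle this step, but the principle is the same: more precise studies carry more weight in the combined result.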

At the end of this process, according to Dr. Gilbert, it is possible to come to a specific conclusion based on the evidence. The reviewers may conclude that the evidence is substantial and of high quality, in which case no further trials or studies are required on this specific topic. Alternatively, they may conclude that there is insufficient or no evidence to answer the question posed, which indicates a need for more research on the topic, with well-designed studies conducted specifically to answer it.

As with all research, systematic reviews can be done well or poorly. Ideally, the methods for a systematic review should be published in advance (there are online registers for this), and the findings written up according to predefined reporting formats. Publication of a systematic review in a medical journal also entails an extensive review and approval process.

Undertaking a systematic review is a lengthy and time-consuming activity, but the result is a valuable contribution to a good research evidence database. Policymakers, researchers, and other agencies increasingly rely on the findings of systematic reviews for decision-making.

At the VII, as we continue advocating to raise the priority of good vision on a global scale, curated and commissioned research will serve as a springboard for our messages and advocacy. Reviews like these ultimately help all of us better understand the overall impact of poor vision and identify the research gaps that must be filled in order to effect positive change in policy, in how services are provided, and in personal behaviors worldwide.
