Background

A systematic review synthesises all empirical evidence that meets pre-specified eligibility criteria in order to answer a specific research question. The methods used are explicit and systematic, and thus aim to be reproducible. Systematic reviews have become an increasingly important tool for evidence-based decision-making, not only in healthcare but also in fields such as criminology, education, environment, international development, and software engineering. Around 12 systematic reviews are published each day, of which 50% include a meta-analysis. Meta-analysis is a statistical technique that combines the results of the studies included in a systematic review to produce a pooled effect size. There is, however, a concern that meta-analysis is often underused; commonly cited reasons include researchers’ over-sensitivity to heterogeneity, lack of knowledge of relevant meta-analytic methods, and lack of relevant skills or software. This could, in turn, introduce bias, as systematic reviews without a formal meta-analysis and single summary estimate may lack the power to detect significant effects, even when the underlying findings are reliable. On the other hand, researchers may sometimes conduct meta-analyses without detailed consideration of whether a meta-analysis is appropriate, or whether it has been conducted in the optimal way.
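To make the pooled effect size concrete, the sketch below implements the standard fixed-effect inverse-variance approach: each study is weighted by the inverse of its variance, and the weighted average gives the pooled estimate. The effect sizes and standard errors are purely illustrative numbers, not data from any real review.

```python
import math

# Hypothetical study results (illustrative only): effect sizes
# (e.g. log odds ratios) and their standard errors for four trials.
effects = [0.30, 0.45, 0.10, 0.25]
ses     = [0.12, 0.20, 0.15, 0.10]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Approximate 95% confidence interval for the pooled effect.
ci_low  = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

Note that this fixed-effect model assumes all studies estimate a single common effect; when that assumption is doubtful, a random-effects model is usually preferred, which is precisely the kind of judgement the proposed guidance concerns.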

While there is a substantial body of guidance on how meta-analyses should be conducted, decisions as to whether they should be conducted at all remain largely matters of judgement. The Cochrane Handbook provides some guidance, such as advising against meta-analysis when studies are clinically heterogeneous, when the studies are at high risk of bias, or in the presence of publication or other reporting biases. Nevertheless, the guidance is often not clear enough to support a decision about whether or not to conduct a meta-analysis. Moreover, researchers in different fields – or even within the same field of interest – may not agree on when meta-analysis is appropriate. The decision may be even more difficult when systematic reviews involve studies evaluating complex interventions, which typically have more sources of variability (whether statistical, conceptual, or practical) both within and between studies than “simpler” reviews. For example, trials of complex interventions in which the degree of tailoring of the intervention varies both within treatment groups and across trials can call into question the appropriateness of combining effects across studies. Even once a decision to meta-analyse complex interventions has been made, questions remain about how to deal with the different dimensions of complexity, such as choosing appropriate modelling techniques, excluding inconsistent cases, or selecting specific outcomes for analysis.

Hence, there is a pressing need for clearer guidance on when and in what contexts meta-analysis should or should not be conducted, guidance that extends beyond issues of statistical and clinical heterogeneity (though these are obviously important) and that specifically addresses the potential sources of complexity in a systematic review. The MRC guidance identifies some of these sources, for example the flexibility in intervention implementation discussed above. From one perspective this flexibility is simply unwanted heterogeneity; from another it represents legitimate variation, and it raises the problem of how much variability in the intervention can exist before meta-analysis becomes inappropriate. Other sources of complexity, such as interactions and synergies between multiple components and multiplicity of outcomes (and outcome measures), also need further guidance. Such guidance needs to be based both on a wide-ranging synthesis of existing methodological knowledge, including consultation with experts in the field, and on empirical work on specific cases to demonstrate the feasibility of the guidance. The proposed project incorporates both of these dimensions, as well as detailed plans for networking and dissemination to ensure wide discussion and uptake of the guidance.
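In practice, the question of "how much variability is too much" is usually informed by statistical heterogeneity measures, most commonly Cochran's Q and the I² statistic of Higgins and Thompson, which expresses the proportion of total variability attributable to between-study heterogeneity rather than chance. The sketch below computes both from illustrative (invented) study results; such statistics inform, but cannot by themselves settle, the judgement this project addresses.

```python
import math

# Hypothetical study results (illustrative only): effect sizes and
# standard errors for five trials of a complex intervention.
effects = [0.10, 0.55, 0.30, -0.05, 0.40]
ses     = [0.15, 0.18, 0.12, 0.20, 0.16]

# Inverse-variance weights and fixed-effect pooled estimate.
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations from the pooled estimate.
q = sum(w * (y - pooled)**2 for w, y in zip(weights, effects))
df = len(effects) - 1

# I^2: the share of total variability attributable to between-study
# heterogeneity rather than chance, floored at 0.
i2 = max(0.0, (q - df) / q) * 100.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%")
```

Conventional rough thresholds (e.g. I² above 50% suggesting substantial heterogeneity) illustrate exactly the limitation discussed above: a single statistic cannot distinguish unwanted heterogeneity from legitimate variation in a complex intervention.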

Several research teams and organisations, such as the Cochrane and Campbell Collaborations and the EPPI-Centre, conduct meta-analyses—and are involved in methods development—across multiple disciplinary areas. However, the expansion of meta-analytic practice has now reached a point where no single organisation can encompass the whole methodological landscape. Thus, valuable contributions to methods development in particular topic areas may not be widely diffused and adopted across disciplines. An important additional dimension of this project is to carry out a wide-ranging methodological review covering a broad range of scientific fields, and to consult extensively with meta-analysts and other researchers in these fields, in order to gain a synoptic overview of methodology, practice, and guidance.
