Methods for investigating and controlling heterogeneity in systematic reviews of diagnostic test accuracy
Jon Deeks with Patrick Bossuyt (Amsterdam)
Systematic reviews of diagnostic accuracy studies are undertaken with the intention of improving the precision of estimates, investigating consistency, and comparing results between studies of different designs and from different settings. In terms of methods for meta-analysis, there is currently a conflict between simple methods that are criticised for not taking adequate account of the complexities of diagnostic data (such as simple pooling of sensitivities and specificities) and more complex mathematical methods that are difficult to apply. The most commonly promoted method of meta-analysis, the Moses method for fitting summary receiver operating characteristic (ROC) curves, provides neither the estimates of test sensitivity and specificity required for building decision models for HTA nor the likelihood ratios desired by clinicians for updating disease probabilities.
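To make the methodological tension concrete, the Moses approach can be sketched in a few lines. The study counts below, the continuity correction of 0.5, and the unweighted least-squares fit are illustrative assumptions for this sketch, not details drawn from the programme itself.

```python
import numpy as np

# Hypothetical 2x2 counts (TP, FP, FN, TN) from five diagnostic accuracy
# studies -- invented for illustration only.
studies = np.array([
    [45,  8,  5, 42],
    [30, 10, 10, 50],
    [60, 12,  8, 40],
    [25,  5, 15, 55],
    [50, 15, 10, 25],
])

tp, fp, fn, tn = studies.T
# Continuity correction of 0.5 per cell, commonly applied to avoid zeros.
sens = (tp + 0.5) / (tp + fn + 1.0)
spec = (tn + 0.5) / (tn + fp + 1.0)

# Moses method: D = logit(sens) + logit(spec) is the log diagnostic odds
# ratio; S = logit(sens) - logit(spec) acts as a proxy for the threshold.
D = np.log(sens / (1 - sens)) + np.log(spec / (1 - spec))
S = np.log(sens / (1 - sens)) - np.log(spec / (1 - spec))

# Fit the line D = a + b*S (np.polyfit returns slope first for degree 1).
b, a = np.polyfit(S, D, 1)

def sroc_tpr(fpr):
    """Expected sensitivity on the summary ROC curve at a given FPR,
    obtained by back-transforming the fitted line into ROC space."""
    logit_fpr = np.log(fpr / (1 - fpr))
    logit_tpr = (a + (1 + b) * logit_fpr) / (1 - b)
    return 1 / (1 + np.exp(-logit_tpr))
```

The sketch illustrates the criticism made above: the output is a curve, not a summary sensitivity and specificity pair, so decision models and likelihood ratios cannot be read off it directly.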
In contrast to meta-analyses of trials, heterogeneity commonly remains uninvestigated, as suitable methods are not widely available. If differences between studies could be controlled for, or if methods were available for studies that compare tests head-to-head (either by randomising individuals to tests or by giving all individuals multiple tests), more robust and useful results might be obtained.
This four year programme of work contains six sub-projects:
1) Identifying which methods are currently used in systematic reviews and health technology assessments for synthesising results, investigating heterogeneity, and making comparisons between tests
2) Identifying which summary statistics should be used for meta-analyses of test accuracy, and in which circumstances
3) Assessing methods for investigating heterogeneity in meta-analyses of test accuracy
4) Assessing the degree to which between-study heterogeneity is explained by (a) spectrum bias and (b) methodological differences between studies
5) Developing methods for undertaking meta-analyses that compare tests: (a) when different tests are evaluated in different studies, (b) when studies directly compare both tests, and (c) when there is a mixture of the two
6) Assessing whether meta-analyses of comparative studies of diagnostic accuracy are more reliable than comparisons of meta-analyses of uncontrolled studies of diagnostic accuracy
These questions will be answered by a combination of simulation and empirical investigations.
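A simulation investigation of the kind envisaged might, for example, generate studies under a known between-study heterogeneity model and examine how a criticised simple method behaves. The model below (logit-sensitivities drawn from a Normal distribution, binomial sampling within studies) and all parameter values are illustrative assumptions, not specifications from the programme.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_studies(k=20, n=100, mu=1.5, tau=0.5):
    """Simulate k diagnostic studies whose true logit-sensitivities vary
    between studies as Normal(mu, tau^2), with n diseased patients each.
    All values are illustrative, not taken from the programme itself."""
    logit_sens = rng.normal(mu, tau, size=k)
    true_sens = 1 / (1 + np.exp(-logit_sens))
    tp = rng.binomial(n, true_sens)  # observed true positives per study
    return tp, n

tp, n = simulate_studies()

# Naive "simple pooling": total TP over total diseased across studies,
# ignoring between-study variation -- one of the simple methods
# criticised in the text above.
pooled_sens = tp.sum() / (len(tp) * n)
```

Repeating such simulations over many replicates, and comparing estimates with the known generating values, is one way the reliability of competing meta-analytic methods could be assessed empirically.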