Evaluating non-randomised studies
Jon Deeks, Roberto D’Amico, Charlotte Sakarovitch, Doug Altman with Amanda Sowden (York), Jacqueline Dinnes (Southampton), Fujian Song (Birmingham), Mark Petticrew (Glasgow) and the collaborating groups of the International Stroke Trial and the European Carotid Surgery Trial
Quality assessment guides for non-randomised studies (NRS) recommend assessing “similarity in all known determinants of outcome”, and whether studies “adjust for such differences in analysis”. These guides are based on sound theoretical reasoning, but have not been evaluated empirically.
NRS were created by resampling from two large randomised trials: the International Stroke Trial (IST) and the European Carotid Surgery Trial (ECST). Historically controlled studies of fixed sample size were generated within each region by comparing trial participants allocated control in the first half of the trial with those allocated treatment in the second half. Concurrently controlled studies were generated by comparing participants allocated treatment in one region with those allocated control in a different region. The resampling process was repeated 1000 times. Distributions of unadjusted and adjusted results for the non-randomised studies were compared with those of the randomised comparisons to estimate bias and residual confounding.
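The two resampling designs described above can be sketched as follows. This is a minimal illustration only: the record layout, field names, and sampling details are assumptions for exposition, not the analysis code used in the study.

```python
import random

def resample_historical(patients, n):
    """Draw one historically controlled study of fixed size n.

    `patients` is a list of dicts with illustrative keys: 'arm'
    ('treat' or 'control') and 'period' ('first' or 'second' half
    of recruitment). Controls come from the first half of the
    trial, treated participants from the second half, mimicking a
    before/after (historically controlled) design.
    """
    controls = [p for p in patients
                if p['arm'] == 'control' and p['period'] == 'first']
    treated = [p for p in patients
               if p['arm'] == 'treat' and p['period'] == 'second']
    return random.sample(controls, n), random.sample(treated, n)

def resample_concurrent(patients, n, region_a, region_b):
    """Draw one concurrently controlled study of fixed size n:
    treated participants from one region are compared with
    controls recruited at the same time in a different region."""
    treated = [p for p in patients
               if p['arm'] == 'treat' and p['region'] == region_a]
    controls = [p for p in patients
                if p['arm'] == 'control' and p['region'] == region_b]
    return random.sample(treated, n), random.sample(controls, n)
```

Repeating either draw 1000 times and computing the treatment effect in each replicate yields the sampling distribution that is then compared with that of the randomised comparisons.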
The bias introduced by non-random allocation had two components. First, it could lead to consistent over- or under-estimation of treatment effects. This occurred for historical controls, with the direction of bias depending on time trends in the case-mix of participants recruited to the study. Second, it increased the variation in results for both historical and concurrent controls, owing to haphazard differences in case-mix between the groups. The biases were large enough to lead studies falsely to conclude significant benefit or harm.
Four strategies for case-mix adjustment were evaluated: none adequately adjusted for bias in historically and concurrently controlled studies. Omission of important confounding factors can partially explain under-adjustment, as can differences between conditional and unconditional odds ratio estimates of treatment effects.
We concluded that results of non-randomised studies sometimes, but not always, differ from results of randomised studies of the same intervention. Standard methods of case-mix adjustment do not guarantee removal of bias.