Program for Comparative Effectiveness Methodology

What works best and for whom?

Patients, physicians and caregivers look to researchers to provide clarity on which treatments work best and for whom. Comparative effectiveness research (CER) aims to give them those answers, pursuing accurate information on the effects of different treatments on patient health. An unprecedented wealth of data, from electronic health records to insurance claims and quality improvement registries, creates opportunities to compare novel treatments, in new settings, while considering the unique characteristics of individual patients.

Photo: CEM meeting. Terrence Jones Photography.

Our Perspective

Big data yields big choices regarding study design, population selection, confounders and modeling strategy. Our shared methodological and practical experience translates into good choices in support of high-quality CER.

We ask the right question. The statistical toolbox for comparative effectiveness is full of options, in part because different techniques target different causal questions. We focus on careful framing of research questions, in light of downstream clinical implications, so that decisions about methods serve the question.

Good CER requires collaboration between statisticians and clinicians. By understanding the scientific context, treatment mechanisms, and observational treatment allocation, we facilitate study design whereby assumptions are proactively addressed rather than retrospectively acknowledged.

Advanced methods matter. Our group has developed and applied advanced methods for CER, accruing a library of examples of what matters and when. Where good randomized trials exist, their results can be replicated by rigorous observational methods, but not by naïve alternatives.

Our combined knowledge is better than the sum of its parts.  New problems and proposed solutions are brought to the group, in order to gain from different perspectives and identify methodological gaps for future research.

Matching Methods to Opportunities

To seize the opportunities offered by emerging data, CER methodology must keep pace with new challenges in order to provide accurate information to patients and physicians. The Program for Comparative Effectiveness Methodology (CEM) promotes and supports collaboration among Duke faculty members and other participating researchers to expand statistical methodology, including the development, application and interpretation of quantitative methods for treatment comparisons. Its members work together to establish best practices for comparative effectiveness studies, particularly in the context of electronic health records and big data. The program also works to advance the use of superior techniques for clinical research.

Photo: CEM meeting. Terrence Jones Photography.

Propensity Score Weighting with Multilevel Data

Fan Li, PhD, and her collaborators at Harvard compare several propensity-score-weighted estimators for clustered data, including marginal, cluster-weighted and doubly robust estimators, and apply them to a study of racial disparities in breast cancer screening among beneficiaries of Medicare health plans.

Strengths and Pitfalls of Randomized vs. Observational Analyses of Treatment Effects

Laine Thomas, PhD

As part of our DCRI Research Forum series, Laine Thomas, PhD, discussed how studies of new users of a treatment avoid the biases that arise in data on prevalent users, and how overlap weighting can allow observational data to approach the equipoise demanded by a randomized clinical trial.
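As a rough illustration of the idea behind overlap weighting (a hypothetical simulation, not data from the talk): each treated unit is weighted by 1 − e(x) and each control by e(x), where e(x) is the propensity score, which concentrates the comparison on patients who could plausibly have received either treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data (illustrative only): a single confounder x
# drives both treatment assignment and the outcome.
n = 5000
x = rng.normal(size=n)
true_ps = 1 / (1 + np.exp(-1.5 * x))        # true propensity score e(x)
z = rng.binomial(1, true_ps)                # treatment indicator
y = 1.0 * z + 2.0 * x + rng.normal(size=n)  # outcome; true effect = 1.0

# Naive comparison of group means is confounded by x.
naive = y[z == 1].mean() - y[z == 0].mean()

# Overlap weights: treated units get 1 - e(x), controls get e(x).
# Here the true propensity score is used for clarity; in practice e(x)
# would be estimated, e.g., by logistic regression.
w = np.where(z == 1, 1 - true_ps, true_ps)

treated = np.average(y[z == 1], weights=w[z == 1])
control = np.average(y[z == 0], weights=w[z == 0])
ato = treated - control  # effect estimate in the overlap population
```

In this sketch the naive difference is inflated by confounding, while the overlap-weighted contrast recovers an estimate close to the true effect of 1.0, because the weights balance the confounder between groups in the region where treated and control patients overlap.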

CEM Activities

Through the program, faculty and other participants:

  • Collaborate on challenging clinical research problems, finding solutions for immediate analyses
  • Collaborate on grant development to support research in comparative effectiveness methodology
  • Recognize clinical research collaborations that advance superior comparative effectiveness methodology in design or analysis
  • Disseminate knowledge through publications and presentations regarding best practices for comparative effectiveness
  • Provide review and feedback on research proposals and manuscripts
  • Participate in a journal club to discuss relevant publications written by Duke faculty and others

Recent Publications

Tools for Evaluating and Improving Causal Inference: Introducing JAMA Cardiology Readers to the Risk of Bias in Nonrandomized Studies of Interventions (ROBINS-I) Tool, JAMA Cardiology. Huffman, MD, and Thomas, LE (2018).

Balancing covariates via propensity score weighting, Journal of the American Statistical Association. Li, F, Morgan, KL, and Zaslavsky, AM (2018).

Addressing extreme propensity scores via the overlap weights, American Journal of Epidemiology. Li, F, Thomas, LE, and Li, F (2018). Epub ahead of print.

Propensity score weighting for causal inference with multi-valued treatments, arXiv:1808.05339. Li, F, and Li, F (2018).

Sample size determination for GEE analyses of stepped wedge cluster randomized trials, Biometrics. Li, F, Turner, EL, and Preisser, JS (2018).

An evaluation of constrained randomization for the design and analysis of group-randomized trials, Statistics in Medicine. Li, F, Lokhnygina, Y, Murray, DM, Heagerty, PJ, and DeLong, ER.

Program Faculty and Participants

The Duke Clinical Research Institute facilitates the program, whose faculty and other participants come from across the university.

Laine Thomas
DCRI’s Associate Director for Biostatistics
Associate Professor of Biostatistics and Bioinformatics
Program Co-Director

Dr. Thomas’ primary interest is causal inference methods for comparative effectiveness studies, particularly using large data sets such as registries, Medicare claims and electronic health records that have long-term longitudinal follow-up. She is a co-investigator on the NIH-supported Statistical Methods for Complex Data in Cardiovascular Disease and principal investigator on the AHRQ-supported Matching Methods for Comparative Effectiveness Studies of Longitudinal Treatments. She uses causal inference methods in collaborations in cardiovascular disease (CHAMP-HF and ORBIT-AF registries; ACTION-NCDR and CRUSADE registries linked to Medicare; ARISTOTLE and NAVIGATOR clinical trials) and uterine fibroids (COMPARE-UF). She teaches the causal inference course in the Biostatistics and Bioinformatics Masters and PhD programs.

More on Dr. Thomas


Fan Li
Associate Professor of Statistical Science
Program Co-Director

Dr. Li’s primary research interest is causal inference. She focuses on developing general, flexible and accessible statistical methods and models within the Rubin Causal Model (RCM) for drawing causal inferences under a wide range of complex situations that render standard methods infeasible or inefficient. Two foci of her research are propensity score methods and principal stratification. She is the principal investigator of the NSF-supported projects "New weighting methods for causal inference" and "Bayesian multivariate analysis for causal inference with intermediate variables." Her secondary research interests include Bayesian analysis, missing data and imaging analysis. She teaches the causal inference course at both the graduate and undergraduate levels in the Department of Statistical Science and occasionally offers short courses and workshops on causal inference methods to researchers in substantive fields such as epidemiology and traffic safety research.

More on Dr. Li


Benjamin A. Goldstein
Assistant Professor of Biostatistics and Bioinformatics

Dr. Goldstein’s research focuses on the meaningful use of electronic health records (EHR) data, with an interest in both deriving inference from EHR data and developing risk prediction models. His interests include understanding the potential and limitations of EHRs for clinical research and adapting methods for the analytic challenges that arise. In regard to causal inference, he studies issues related to informed presence bias, the fact that people only interact with the health system when they are sick. One of his foci is identifying situations in which this bias can arise, characterizing the problems it can engender and ultimately developing solutions for addressing it. In regard to risk prediction, he works closely with members of the Duke University Health System to develop and implement clinical risk prediction tools.

More on Dr. Goldstein


Yuliya Lokhnygina
Assistant Professor of Biostatistics and Bioinformatics

Dr. Lokhnygina's primary research interests are design and analysis of cluster-randomized studies of the comparative effectiveness of hospital-level interventions, development of new metrics of hospital performance with regard to antibiotic use and infection control and causal inference methods for observational studies and secondary analyses of large clinical trials in cardiovascular disease (ROCKET-AF, TRACER, IMPROVE-IT) and diabetes (TECOS). She has taught the master's-level survival analysis course.

More on Dr. Lokhnygina


Roland Matsouaka
Assistant Professor of Biostatistics and Bioinformatics

Dr. Matsouaka’s research focuses on nonparametric, semiparametric, and causal inference methods for comparative effectiveness studies, clinical trials affected by non-compliance, imperfect experiments, and observational studies. His work aims to make the best use of the data collected to answer scientific questions, while applying principled methods to minimize bias and ensure fair assessments. The areas of application of his research include the public health, biomedical, and social sciences. As a DCRI faculty statistician, he collaborates with clinical researchers to better understand and treat cardiovascular diseases. He is actively involved in the analyses of large registry data, including the Society of Thoracic Surgeons (STS) National Database, the STS and American College of Cardiology (ACC) Transcatheter Valve Therapy (TVT) Registry, and the American Heart Association/American Stroke Association Get With The Guidelines (GWTG) registries.

More on Dr. Matsouaka


Sean O’Brien
Associate Professor of Biostatistics and Bioinformatics

Dr. O'Brien studies statistical issues in healthcare provider performance evaluation, comparative effectiveness research, and the analysis of multi-center clinical registries. He is a principal investigator of the NIH-supported Statistical Methods for Complex Data in Cardiovascular Disease and the PCORI-supported Improving Methods for Linking Secondary Data Sources for Comparative Effectiveness Research and Patient-Centered Outcomes Research. He is statistician for several ongoing comparative effectiveness studies at Duke including both randomized trials (ISCHEMIA, ISCHEMIA-CKD, STRESS) and observational studies (Society of Thoracic Surgeons National Database, the STS/ACC Transcatheter Valve Therapy Registry).

More on Dr. O’Brien