
An official publication of the ACR and the ARP serving rheumatologists and rheumatology professionals


Tips for Designing Studies That Actually Reveal Causal Inference

Ruth Jessen Hickman, MD  |  Issue: May 2021  |  May 13, 2021

In a randomized, controlled trial, the risk difference between groups is interpreted as a causal effect of the treatment, according to Seoyoung C. Kim, MD, ScD, MSCE, an associate professor of medicine in the Division of Pharmacoepidemiology and Pharmacoeconomics and the Division of Rheumatology, Inflammation and Immunity at Brigham and Women’s Hospital and Harvard Medical School, and an instructor in epidemiology at the Harvard T.H. Chan School of Public Health, Boston.

But when a randomized, controlled trial can’t be conducted, well-designed and well-executed observational analyses can be useful for causal inference. Dr. Kim says estimation of causal effects in such studies is challenging, but doable with careful methodological consideration.


Dr. Kim presented this and other information on the key concepts of causal inference and mediation analysis in a virtual course sponsored by the VERITY grant (Value and Evidence in Rheumatology Using Bioinformatics and Advanced Analytics) on March 4. Through this and other offerings, VERITY is helping promote highly rigorous research in clinical topics in rheumatology.

Causal Inference

In her presentation, Dr. Kim focused on topics related to causal inference, the process of determining the independent, actual effects of a component within a system. These can be visualized with the help of directed acyclic graphs, which can be used as tools to think through the possible causal ways a variety of factors might interact.


Dr. Kim discussed several common mistakes researchers make in constructing their studies and repeatedly emphasized the importance of correct initial study design. Where appropriate, statistical methods such as multivariable adjustment, stratification and propensity score methods are also important to help minimize confounding. However, Dr. Kim added, “Even if you have all kinds of fancy statistical methods, if your design is wrong, it will not save your study.”
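The role of adjustment can be seen in a small simulation. This is an illustration of the concept, not an example from the presentation; it assumes NumPy and a made-up “severity” confounder. When a confounder drives both treatment and outcome, the crude group difference is biased, while regression adjustment for the measured confounder recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: disease severity influences both the chance
# of treatment and the outcome.
severity = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-severity))).astype(float)

# True causal effect of treatment on the outcome is 0.5.
outcome = 0.5 * treated + 1.0 * severity + rng.normal(size=n)

# Crude estimate: a simple difference in means, biased by confounding.
crude = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Multivariable adjustment: regress outcome on treatment and confounder.
X = np.column_stack([np.ones(n), treated, severity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]

print(f"crude:    {crude:.2f}")     # inflated well above 0.5
print(f"adjusted: {adjusted:.2f}")  # close to the true 0.5
```

Note that adjustment only works here because severity was measured; as Dr. Kim stressed, no statistical method rescues a design in which the key confounder was never recorded.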

Although several kinds of observational studies are available to researchers, Dr. Kim emphasized that to infer a causal effect, the treatment exposure must occur before the assessed outcomes. Thus, cross-sectional studies, case series and case-control studies are not well suited to causal inference.

Dr. Kim made an important distinction between common causes in a network of events (i.e., confounders) and common effects (i.e., colliders in the language of causal inference). Confounders are variables that causally influence both the exposure and the outcome being studied, whereas colliders are factors that may be causally influenced by both the exposure and the studied outcome.

Although it is critical to make statistical adjustments for common causes to remove confounding, adjusting for common effects will introduce selection bias into the results. “The difficult part is that it is not always clear which is a confounder [and which is a collider] unless you set your timeline correctly,” she explained. “Also, you need to have expert knowledge to determine these factors. Not all statistical methods can tell you which is a confounder and which is a collider.”
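The collider problem can be demonstrated with a toy simulation (my illustration, not Dr. Kim's; it assumes NumPy). Treatment and outcome are generated to be truly independent, yet conditioning on their common effect manufactures an association out of thin air:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Treatment and outcome are truly independent here (a null effect).
treatment = rng.normal(size=n)
outcome = rng.normal(size=n)

# Collider: a common effect of both, e.g., hospitalization.
collider = treatment + outcome + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y regressed on x."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

print(slope(treatment, outcome))  # ~0: correctly shows no association

# "Adjusting" for the collider by conditioning on it (here, restricting
# the analysis to high collider values) induces selection bias: a
# spurious negative association appears.
sel = collider > 0
print(slope(treatment[sel], outcome[sel]))  # clearly negative
```

This is the selection bias Dr. Kim described: the statistical machinery cannot tell a confounder from a collider; only subject-matter knowledge and a correct timeline can.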

Dr. Kim also warned against a common study design in which nonusers of a treatment are compared with prevalent users (e.g., current users or ever users). In other words, patients using a drug of interest are compared to those not taking any treatment at all. But in clinical practice, there may be important confounding reasons why a patient might not be prescribed a treatment, such as increased frailty or less severe symptoms.

“If you happen to have a similar drug to use as a reference to the drug of interest, that is, an active comparator design, the unmeasured confounder will be much less,” Dr. Kim explained.

kentoh / shutterstock.com

Dr. Kim also emphasized the importance of clearly defining treatment exposures, so the two comparator groups can be presumed to be identical with respect to their risk of outcomes. This can be trickier than it seems at first. For example, it’s important to consider the concept of positivity, which requires that the probability of receiving a treatment be greater than zero (this would not be true, for example, in the case of a drug contraindication). The interventions themselves must also be well defined with respect to timeline, inclusion and exclusion criteria, and other factors.
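Positivity can be checked directly in data. The sketch below is hypothetical, with made-up strata: if some covariate stratum (say, patients with a contraindication) never receives the treatment, its observed treatment probability is zero, and no causal contrast is defined there.

```python
from collections import defaultdict

# Toy records of (covariate stratum, treated?). The "contraindicated"
# stratum never receives treatment, violating positivity.
records = [
    ("mild", 1), ("mild", 0),
    ("moderate", 1), ("moderate", 0),
    ("contraindicated", 0), ("contraindicated", 0),
]

counts = defaultdict(lambda: [0, 0])  # stratum -> [n treated, n total]
for stratum, treated in records:
    counts[stratum][0] += treated
    counts[stratum][1] += 1

# Flag strata where treatment probability is 0 or 1.
violations = [s for s, (t, tot) in counts.items() if t == 0 or t == tot]
print(violations)  # ['contraindicated']
```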

For people intimidated by some of the details of advanced technical methods in causal inference, Dr. Kim pointed out that not all questions require such methods. “Try to make the study question very simple first,” she advised. “Once you have a very straightforward causal question, start with a simpler method, and you will build up your confidence in applying advanced methods later.”

For those looking to learn more about causal inference, Dr. Kim strongly recommends a free textbook by Miguel A. Hernán, MD, MPH, ScM, DrPH, and James M. Robins, MD, Causal Inference: What If, as a good entry point into the topic.1 These researchers were the first to explicitly propose a “target trial” approach to organizing an observational study, an approach also recommended by Dr. Kim.2

Target Trials

In the second part of the March 4 session, Daniel H. Solomon, MD, MPH, chief of the Section of Clinical Sciences at Brigham and Women’s Hospital and professor of medicine at Harvard Medical School, expanded on Dr. Kim’s recommendation to follow a target trial approach when designing rigorous observational studies. Dr. Solomon also serves as principal investigator on the VERITY grant and as editor in chief of Arthritis & Rheumatology.

Dr. Solomon noted that accepted research methodologies can evolve over time. For example, the Bradford Hill causal criteria have fallen out of favor somewhat as the research community has realized their limitations. In contrast, the target trial approach has been gaining momentum in recent years.

“For each observational analysis for causal inference, we can imagine a hypothetical randomized trial that we would likely prefer to conduct. That is the target trial,” said Dr. Solomon.

He noted that clinicians want their decision making to be informed by causal knowledge about comparative effectiveness and comparative safety. Deferring decisions is not really an option because it just maintains the status quo.

“A relevant randomized trial would, in principle, answer each causal question about comparative effectiveness and safety,” he said. “But we often can’t have randomized trials because they are expensive. They are sometimes unethical, depending on the study. They can be impractical, and they often can’t deliver timely answers.”

Although observational analyses cannot provide the highest level of evidence, the target trial approach provides a way to carefully and thoughtfully design observational studies, so they can provide more accurate information to inform clinical practice.

The idea is to explicitly emulate the envisioned randomized trial with observational data. This requires rigorous analysis and thorough description of various categories as they would occur in a randomized trial: eligibility criteria, treatment strategies, assignment criteria, assignment procedures, follow-up period, outcome, causal contrast of interest and analysis plan. Problems in any of these areas can lead to results that are difficult to interpret and that cannot help guide wise clinical decisions.
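One way to make that rigor operational (a hypothetical sketch, not a VERITY tool) is to treat the protocol components listed above as an explicit checklist, so any unspecified element of an emulation is flagged before the analysis begins:

```python
from dataclasses import dataclass, fields

@dataclass
class TargetTrialProtocol:
    """Components of the hypothetical trial an observational study emulates."""
    eligibility_criteria: str = ""
    treatment_strategies: str = ""
    assignment_procedures: str = ""
    follow_up_period: str = ""
    outcome: str = ""
    causal_contrast: str = ""
    analysis_plan: str = ""

    def missing_components(self):
        """Return the protocol elements left unspecified."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# A partially specified emulation (illustrative values only):
protocol = TargetTrialProtocol(
    eligibility_criteria="adults with early RA, no prior biologic use",
    treatment_strategies="initiate drug A vs. active comparator drug B",
    outcome="remission at 12 months",
)
print(protocol.missing_components())
# ['assignment_procedures', 'follow_up_period', 'causal_contrast', 'analysis_plan']
```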

For example, Dr. Solomon described a systematic review he helped perform that analyzed the comparative effectiveness of different rheumatoid arthritis treatment strategies through the lens of a target trial emulation framework.3 Such work can be very insightful, because randomized, controlled trials have been slow to fill this hole in the literature. But 29 of the 31 studies included in the analysis had one or more design flaws, as determined by a target trial emulation approach, limiting their applicability.

Dr. Solomon sees target trial emulation as a way to strengthen causal inference and thus the strength of our confidence in found associations. “Unfortunately, with observational data, people are often a bit sloppy. That results in a lot of vagaries in the literature. We think that target trial emulation is a way of creating some rigor around how one does these comparative effectiveness studies to bring us closer to causation.”

The March 11 sessions further explored mediation analysis, a highly pertinent concept for interpreting and designing both observational and randomized trials.

About VERITY

VERITY has three main efforts: a Methodology Core, a Bioinformatics Core and an educational enrichment component through its Administrative Core, all funded through a Center Core (P30) Grant from the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS). Through its enrichment component, VERITY offers a variety of educational opportunities, including the recent mini-course in methodology.

Jeffrey A. Sparks, MD, MMSc, is the director of VERITY’s longer, main course and associate director of VERITY’s Administrative Core, as well as an assistant professor of medicine at Brigham and Women’s Hospital and Harvard Medical School. “Attendees [of VERITY’s multi-day event] get quite a bit of feedback about their research project from many different people throughout the course, which hopefully helps them fine tune it along the way,” says Dr. Sparks. “They also obtain career development advice. I see it as trying to help bolster the pipeline of investigators in rheuma­tology for years to come.

“Many of those junior faculty are getting promoted and getting big grants,” he says. “I’ve seen many of the projects they came with being published in prestigious journals.”

One VERITY 2020 participant, Namrata Singh, MD, MSCI, an assistant professor in the Division of Rheumatology at the University of Washington, Seattle, was working on a K23 grant application to NIAMS on associations between rheumatologic immunosuppressive drugs and cancer outcomes, which she was able to discuss in a small-group setting with other participants and VERITY faculty.

Dr. Singh describes this specific feedback as the most helpful part of the program; she learned her planned study outcome would probably be hard to measure, and its validity might not be accepted by reviewers. “I reshaped my grant to reflect that. I think it was a critical insight for me,” she says.

To stay informed via email about offerings and opportunities through VERITY, interested individuals can apply for a free membership, which also provides access to various live-streamed meetings and seminars, the VERITY video library and other features.

Dr. Solomon concludes, “As an editor and the principal investigator of VERITY, my goal is to have more of the rheumatology community—more of the NIAMS research community—appreciate how to perform rigorous clinical research using the best methodology. That way, we can make sure that our journals are filled with good science and our clinicians are practicing good medicine based on solid evidence.”


Ruth Jessen Hickman, MD, is a graduate of the Indiana University School of Medicine. She is a freelance medical and science writer living in Bloomington, Ind.

References

  1. Hernán MA, Robins JM. Causal Inference: What If. Boca Raton: Chapman & Hall/CRC; 2020.
  2. Hernán MA, Robins JM. Using big data to emulate a target trial when a randomized trial is not available. Am J Epidemiol. 2016 Apr 15;183(8):758–764.
  3. Zhao SS, Lyu H, Solomon DH, et al. Improving rheumatoid arthritis comparative effectiveness research through causal inference principles: Systematic review using a target trial emulation framework. Ann Rheum Dis. 2020 Jul;79(7):883–890.



Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved. ISSN 1931-3268 (print). ISSN 1931-3209 (online).