Truth in Advertising: Do Clinical Trial Publications Tell It All?

Authors: Mohammad K. Khan, MD, PhD; Zachary S. Buchwald, MD, PhD; and Waleed F. Mourad, MD

We would like to commend the authors of the article, “Do Published Data in Trials Assessing Cancer Drugs Reflect the Real Picture of Efficacy and Safety?” (in this issue, page 1363), for their thorough analysis of published clinical trial data on cancer drugs. Lv et al asked whether published trials accurately reflect the real picture of efficacy and safety relative to a standard database, ClinicalTrials.gov. Their study is unique and impactful on several levels. First, it is distinct from an earlier study by Hartung et al,1 which was limited to trials conducted prior to 2009 and therefore represented only 2 years' worth of data after enactment of the mandatory results-reporting law, the FDA Amendments Act (FDAAA) section 801. In addition, that earlier study included only approximately 3% of all oncology-related trials, leaving the current quality of published cancer drug trials largely unknown. In contrast, Lv et al provide a longer, more contemporary view of clinical trials published from 2004 to 2014, spanning 10 years of quality and accuracy of reporting in published cancer drug trials after the institution of FDAAA section 801. The authors compared the results reported in one of the largest publicly accessible trial registries, ClinicalTrials.gov, with the results published in leading academic journals; the elements compared included basic trial design, efficacy measurement, serious adverse events (SAEs), and other adverse events (OAEs). Lv et al started with a relatively large pool of approximately 200,000 studies registered in ClinicalTrials.gov and filtered it down to trials with results posted in the registry (18,474 trials), and then to cancer drug trials, mainly phase III or IV (323 trials), with medium to large numbers of patients (n >60) whose results were published online. They then randomly selected 50% of these trials for analysis, resulting in a sample size of 117 trials.
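For readers who want to reproduce this kind of selection funnel on a registry export, the following is a minimal sketch under assumed column names (has_results, condition, phase, enrollment) for a hypothetical flat ClinicalTrials.gov download; it is illustrative only and is not the authors' actual pipeline.

```python
# Illustrative selection funnel; column names are hypothetical assumptions
# for a flat ClinicalTrials.gov export, not the authors' actual dataset.
import pandas as pd

registry = pd.read_csv("clinicaltrials_export.csv")  # ~200,000 registered studies

# Trials with results posted in the registry (~18,474 in the paper).
with_results = registry[registry["has_results"].fillna(False).astype(bool)]

# Cancer drug trials, mainly phase III/IV, with n > 60 (~323 in the paper).
cancer_drug = with_results[
    with_results["condition"].str.contains("cancer", case=False, na=False)
    & with_results["phase"].isin(["Phase 3", "Phase 4"])
    & (with_results["enrollment"] > 60)
]

# Random 50% sample of the eligible trials; in the paper, the final analyzed
# set after all eligibility criteria was 117 trials.
sampled = cancer_drug.sample(frac=0.5, random_state=0)
print(f"{len(registry)} -> {len(with_results)} -> {len(cancer_drug)} -> {len(sampled)}")
```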

The authors used a scoring system (0–2) to rate the completeness of reporting in ClinicalTrials.gov compared with reporting in the actual publication, with the highest score (2) representing both completeness and consistency between the publication and what was reported to ClinicalTrials.gov. Two independent reviewers assessed completeness and consistency, and a third reviewer randomly rechecked 50% of the sample as a quality check. The authors analyzed 14 distinct metrics spanning basic trial design (study design, study arms, number of randomized patients), efficacy measurements (primary outcome measurement, timing of assessment, number of patients involved, specific metrics, secondary end points), SAEs (number affected by ≥1 SAE, risk difference between the arms, number of individuals at risk per group), and OAEs (number affected by ≥1 OAE, risk difference, and number of individuals at risk per group). The goal was to identify any discrepancies between what is reported to ClinicalTrials.gov and what is published in leading academic journals.
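As a rough illustration of how such a rubric might be operationalized, the sketch below totals a 0–2 score across the 14 compared elements; the 0/1 distinctions and the field names are our assumptions for illustration, not the authors' published operational definitions.

```python
# Hypothetical implementation of a 0-2 completeness/consistency rubric across
# the 14 compared elements; the 0/1 breakdown is an assumption for illustration.
from typing import Dict

METRICS = [
    "study_design", "study_arms", "n_randomized",                # trial design
    "primary_outcome", "timing_of_assessment", "n_patients_assessed",
    "specific_metrics", "secondary_endpoints",                   # efficacy
    "sae_n_affected", "sae_risk_difference", "sae_n_at_risk",    # SAEs
    "oae_n_affected", "oae_risk_difference", "oae_n_at_risk",    # OAEs
]


def score_element(registry_value: str, published_value: str) -> int:
    """2 = reported in both and consistent; 1 = reported but incomplete or
    inconsistent; 0 = missing from both (assumed breakdown)."""
    if not registry_value and not published_value:
        return 0
    if registry_value and published_value and registry_value == published_value:
        return 2
    return 1


def score_trial(registry: Dict[str, str], publication: Dict[str, str]) -> int:
    """Total score for one trial; with 14 elements scored 0-2, the maximum is 28."""
    return sum(
        score_element(registry.get(m, ""), publication.get(m, "")) for m in METRICS
    )
```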

In the collective analysis of all 117 trials, the overall completeness and consistency score was reported to be 21 (out of a maximum possible 28 across the 14 metrics scored 0–2), suggesting generally reasonable reporting quality. However, notable discrepancies were present. For example, 18.4% of trials (almost 1 in 5) exhibited some inconsistency in reporting the primary outcome measurement. Of the 18 accessible trials with altered primary outcome measures, 11 favored statistically significant results, suggesting a bias toward altering the primary end point to favor a positive result.

Unfortunately, since the earlier study by Hartung et al,1 not much has improved in the reporting of primary outcomes despite implementation of the FDAAA. We therefore agree with the authors' assertion that regulators need to define additional policies and procedures, beyond FDAAA section 801, to help improve reporting quality. Primary end points drive clinical decision-making and the statistical design of trials, so particular care should be taken to ensure that reporting of the primary end point is of the highest quality. Overestimation or alteration of the primary end point can create false impressions about the efficacy of cancer drugs and potentially lead to wasted healthcare dollars and other resources.

Lv et al also note other important discrepancies regarding SAEs and OAEs. In SAE reporting, 26 of 51 trials (51%) had at least 1 discrepancy, with a tendency to report fewer individuals at risk in the publication. In OAE reporting, 54 of 73 trials (74%) had at least 1 discrepancy, again with a tendency to report fewer individuals at risk. Thus, considering the authors' analysis of discrepancies in primary end points, SAEs, and OAEs, it is safe to say that cancer drug clinical trial publications show a general bias toward overestimating benefits while underreporting adverse events.

On multivariate analysis, the authors found that completeness and consistency were improved in trials with parallel assignment, phase IV trials, trials funded primarily by industry (vs other funding sources), trials completed after 2009, and trials whose results were reported sooner after primary completion. Perhaps these findings can serve as a guide for future improvements in trial design and reporting. Lv et al suggest several ways to mitigate inconsistent reporting, including consulting the raw data for reference, adopting more tailored policies to ensure improved reporting quality by all stakeholders (including journal editors and reviewers), and encouraging trialists to adhere to reporting timelines.

Other mechanisms to mitigate these problems, as proposed by Goldacre,2 include routine audits of completed but unreported trials and the development of performance tables to incentivize trialists to comply with reporting requirements. Mendez et al3 suggest that consistent reporting of prospectively defined outcomes and consistent use of registry data during the peer-review process may also improve the validity of clinical trial publications. Ioannidis et al4 suggest that full clinical trial transparency is also warranted to minimize outcome reporting bias. Furthermore, rather than blaming authors or editors for selective reporting of outcomes, they suggest several preventive actions: first, editors should consider implementing better quality-control procedures; and second, investigators and journals should transparently acknowledge changes made to trial protocols and analyses. Bashir and Dunn5 suggest that identifying links between clinical trial registries and published clinical trial results, and making those links easier to access, may also help.
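To make the registry-linking idea concrete, the following is a hedged sketch of a programmatic cross-check between a registered primary outcome and the end point named in a publication. It assumes the public ClinicalTrials.gov v2 REST endpoint and the JSON field names shown, both of which should be verified against the current API documentation; the string comparison is only a crude screen for manual review, not evidence of outcome switching.

```python
# Hedged sketch: compare a trial's registered primary outcome(s) with the
# primary end point named in its publication. The endpoint URL and JSON field
# names are assumptions based on the public ClinicalTrials.gov v2 API and
# should be verified against current documentation.
from typing import List

import requests


def registered_primary_outcomes(nct_id: str) -> List[str]:
    """Fetch the primary outcome measures registered for a given NCT number."""
    url = f"https://clinicaltrials.gov/api/v2/studies/{nct_id}"
    record = requests.get(url, timeout=30).json()
    outcomes = record["protocolSection"]["outcomesModule"].get("primaryOutcomes", [])
    return [o.get("measure", "") for o in outcomes]


def flag_for_review(nct_id: str, published_primary: str) -> bool:
    """Flag the trial for manual review if the published primary end point does
    not loosely match any registered primary outcome (a crude screen only)."""
    published = published_primary.lower()
    return not any(
        published in reg.lower() or reg.lower() in published
        for reg in registered_primary_outcomes(nct_id)
    )
```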

In conclusion, most publications appear to be of reasonable quality. However, approximately 1 in 5 trials may have significant issues with reporting primary end point measurements. A greater discrepancy is found in SAE and OAE reporting, with a tendency toward overestimating benefits and underreporting SAEs and OAEs in cancer drug trials. Similar findings of discordance between published trials and results databases have also been noted by other authors.6,7 We concur with Lv et al that additional steps are needed to guarantee the accuracy of public clinical trial result reporting to ensure integrity, efficacy, and public safety.

References

1. Hartung DM, Zarin DA, Guise JM, et al. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med 2014;160:477-483.
2. Goldacre B. How to get all trials reported: audit, better data, and individual accountability. PLoS Med 2015;12:e1001821.
3. Mendez MH, Passoni NM, Pow-Sang J, et al. Comparison of outcomes between preoperatively potent men treated with focal versus whole gland cryotherapy in a matched population. J Endourol 2015;29:1193-1198.
4. Ioannidis JP, Caplan AL, Dal-Ré R. Outcome reporting bias in clinical trials: why monitoring matters. BMJ 2017;356:j408.
5. Bashir R, Dunn AG. Systematic review protocol assessing the processes for linking clinical trial registries and their published results. BMJ Open 2016;6:e013048.
6. Mayo-Wilson E, Fusco N, Li T, et al. Multiple outcomes and analyses in clinical trials create challenges for interpretation and research synthesis. J Clin Epidemiol 2017;86:39-50.
7. Dal-Ré R, Ross JS, Marusic A. Compliance with prospective trial registration guidance remained low in high-impact journals and has implications for primary end point reporting. J Clin Epidemiol 2016;75:100-107.

Dr. Khan is Associate Professor and Director of Radiation Immuno-Oncology in the department of Radiation Oncology, Emory University. He is also the co-leader of the Immuno-Oncology program at Winship, and serves on the phase I clinical trials working group.

Dr. Khan is nationally and internationally renowned for his work on radiation abscopal effects and for the management of skin cancers, lymphoma, hematological malignancies, lung cancer, pediatric malignancies, and prostate cancer. He leads several clinical trials as a PI, including 2 current trials focused on a deeper understanding of interactions between radiation and immunotherapy to improve outcomes for patients with melanoma, NSCLC, and myeloma.

Dr. Khan was one of the first physicians to adopt radiation and immunotherapy combinations in the management of patients with cancer.

The ideas and viewpoints expressed in this commentary are those of the author and do not necessarily represent any policy, position, or program of NCCN.

Zachary S. Buchwald, MD, PhD, has training in both radiation oncology and immunology. His PhD research focused on developing immunotherapies that can ameliorate inflammatory, erosive bone diseases. His current interests include modulating T-cell activity for therapeutic benefit in cancer using both immunotherapy and radiation.

Dr. Mourad is the Medical Director of the department of Radiation Oncology at Erlanger Medical Center, University of Tennessee - College of Medicine Chattanooga. He is an expert in head and neck tumors, AIDS-related tumors, virus-induced malignancies (HIV/HPV/EBV), and heterotopic ossification. Dr. Mourad has extensive expertise in using various radiation therapy modalities to deliver highly targeted, precise treatment, including in patients who have received prior radiation.

