We would like to commend the authors of the article, “Do Published Data in Trials Assessing Cancer Drugs Reflect the Real Picture of Efficacy and Safety?” (in this issue, page 1363), for their thorough analysis of published clinical trial data on cancer drugs. Lv et al asked whether published trials accurately reflect the real picture of efficacy and safety relative to a standard database, ClinicalTrials.gov. Their study is unique and impactful on several levels. For one, it is distinct from an earlier study by Hartung et al,1 which was limited to trials conducted before 2009 and therefore represented only 2 years' worth of data after enactment of the mandatory results reporting law, FDA Amendments Act (FDAAA) section 801. In addition, that earlier study included only approximately 3% of all oncology-related trials; thus, the current quality of published cancer drug trials remained unknown. By contrast, Lv et al provide a more longitudinal and contemporary view of clinical trials published from 2004 to 2014, offering a 10-year view of the quality and accuracy of reporting in published cancer drug trials after institution of FDAAA section 801.

The authors compared the results reported in one of the largest publicly accessible trial registries, ClinicalTrials.gov, with the results published in leading academic journals; the elements compared included basic trial design, efficacy measurements, serious adverse events (SAEs), and other adverse events (OAEs). Lv et al started with a relatively large sample of approximately 200,000 studies registered in ClinicalTrials.gov and filtered it down to trials with results posted in the ClinicalTrials.gov registry (18,474 trials), then mainly to phase III or IV cancer drug trials (323 trials) with medium to large numbers of patients (n>60) that were published online. They randomly selected 50% of these trials for analysis, resulting in a final sample of 117 trials.
The authors used a scoring system (0–2) to rate the completeness of reporting in ClinicalTrials.gov compared with reporting in the actual publication, with the highest score (2) representing both completeness and consistency between the publication and what was reported to ClinicalTrials.gov. Two independent reviewers assessed completeness and consistency, with a third reviewer randomly rechecking 50% of the sample for quality assurance. The authors analyzed 14 distinct metrics spanning basic trial design (study design, study arms, number of randomized patients), efficacy measurements (primary outcome measurement, timing of assessment, number of patients involved, specific metrics, secondary end points), SAEs (number affected by ≥1 SAE, risk difference between arms, number of individuals at risk per group), and OAEs (number affected by ≥1 OAE, risk difference, and number of individuals at risk per group). The goal was to identify any discrepancies between what was reported to ClinicalTrials.gov and what was published in leading academic journals.
In the collective analysis of all 117 trials, the overall completeness and consistency score was reported to be 21, suggesting generally reasonable reporting quality. However, notable discrepancies were present. For example, 18.4% of trials (almost 1 in 5) exhibited some inconsistency in reporting the primary outcome measurement. Of the 18 accessible trials with altered primary outcome measures, 11 favored statistically significant results, suggesting a bias toward altering the primary end point to favor a positive result.
Unfortunately, since the earlier study by Hartung et al,1 not much has improved in the reporting of primary outcomes despite implementation of the FDAAA. We therefore agree with the authors' assertion that regulators need to define additional policies and procedures, beyond FDAAA section 801, to help improve reporting quality. Primary end points drive clinical decision-making and the statistical design of trials; thus, additional efforts should be taken to ensure that reporting of the primary end point is of the highest quality. Overestimation or alteration of the primary end point can create false impressions about the efficacy of cancer drugs and potentially lead to wasted healthcare dollars and other resources.
Lv et al also note other important discrepancies regarding SAEs and OAEs. In SAE reporting, 26 of 51 trials (51%) reported at least 1 discrepancy, with a tendency to report fewer individuals at risk in the publication. In OAE reporting, 54 of 73 trials (74%) showed at least 1 discrepancy, again with a tendency to report fewer individuals at risk. Thus, considering the authors' analysis of discrepancies in primary end points, SAEs, and OAEs, it is safe to say that cancer drug trial publications show a general bias toward overestimating benefits while underreporting adverse events.
On multivariate analysis, the authors found that trials with parallel assignment, phase IV trials, trials primarily funded by industry (vs other funding), trials completed after 2009, and trials with results reported earlier after primary completion had improved completeness and consistency. Perhaps these findings can serve as a guide for future improvements in trial design and reporting. Lv et al suggest several ways to mitigate inconsistent reporting, including consulting the raw data for reference, adopting more tailored policies to ensure improved reporting quality by all stakeholders (including journal editors and reviewers), and encouraging trialists to adhere to reporting timelines.
Other mechanisms to mitigate these problems, as proposed by Goldacre,2 include routine audits of completed but unreported trials and the development of performance tables to incentivize trialists to comply with reporting requirements. Mendez et al3 suggest that consistent reporting of prospectively defined outcomes and consistent use of registry data during the peer-review process may also improve the validity of clinical trial publications. Ioannidis et al4 suggest that full clinical trial transparency is warranted to minimize outcome reporting bias. Furthermore, rather than blaming authors or editors for selective reporting of outcomes, they propose several preventive actions: first, editors should consider implementing better quality-control procedures; and second, investigators and journals should be acknowledged transparently for making changes in trial protocols and analyses. Bashir and Dunn5 suggest that identifying links between clinical trial registries and published clinical trial results, and making those links easier to access, may also help.
In conclusion, most publications appear to be of reasonable quality. However, approximately 1 in 5 trials may have significant issues with reporting of primary end point measurements. A greater discrepancy is found in SAE and OAE reporting, with a tendency toward overestimating benefits and underreporting SAEs and OAEs in cancer drug trials. Similar findings of discordance between published trials and results databases have also been noted by other authors.6,7 We concur with Lv et al that additional steps are needed to guarantee the accuracy of publicly reported clinical trial results, thereby ensuring integrity, efficacy, and public safety.
1. Hartung DM, Zarin DA, Guise JM, et al. Reporting discrepancies between the ClinicalTrials.gov results database and peer-reviewed publications. Ann Intern Med 2014;160:477–483.
3. Mendez MH, Passoni NM, Pow-Sang J. Comparison of outcomes between preoperatively potent men treated with focal versus whole gland cryotherapy in a matched population. J Endourol 2015;29:1193–1198.
5. Bashir R, Dunn AG. Systematic review protocol assessing the processes for linking clinical trial registries and their published results. BMJ Open 2016;6:e013048.
6. Mayo-Wilson E, Fusco N, Li T. Multiple outcomes and analyses in clinical trials create challenges for interpretation and research synthesis. J Clin Epidemiol 2017;86:39–50.
7. Dal-Ré R, Ross JS, Marusic A. Compliance with prospective trial registration guidance remained low in high-impact journals and has implications for primary end point reporting. J Clin Epidemiol 2016;75:100–107.