Impact of NCI-Mandated Scientific Review on Protocol Development and Content

Authors: Ning Ning, PharmD; Jingsheng Yan, PhD; Xian-Jin Xie, PhD; and David E. Gerber, MD

From the School of Medicine, Harold C. Simmons Cancer Center, Department of Clinical Sciences, and Department of Internal Medicine, Division of Hematology-Oncology, University of Texas Southwestern Medical Center, Dallas, Texas.

Purpose: The NCI requirement that clinical trials at NCI-designated cancer centers undergo scientific review in addition to Institutional Review Board review is unique among medical specialties. We evaluated the impact of this process on protocol development and content. Methods: We analyzed cancer clinical trials that underwent full board review by the Harold C. Simmons Cancer Center Protocol Review and Monitoring Committee (PRMC) from January 1, 2009, through June 30, 2013. We analyzed associations between trial characteristics, PRMC decisions, and protocol modifications using chi-square tests, Fisher exact tests, and logistic regression. Results: A total of 226 trials were analyzed. Of these, 77% were industry-sponsored and 23% were investigator-initiated. Initial PRMC decisions were approval (40%), provisional approval (52%), deferral (7%), and disapproval (1%). These decisions were associated with study sponsor (P<.001) and phase (P<.001). Ultimately, 97% of industry-sponsored and 90% of investigator-initiated trials were approved (P=.05). Changes were requested for 27% of industry-sponsored trials compared with 54% of investigator-initiated trials (P<.001). Total changes requested (mean, 5.6 vs 2.4; P<.001) and implemented (mean, 4.6 vs 2.1; P=.008) per protocol were significantly greater for investigator-initiated trials. Changes related to study design were more commonly requested (35% vs 13% of trials) and implemented (40% vs 5% of trials) for investigator-initiated trials compared with industry-sponsored trials (P=.03). Conclusions: NCI-mandated scientific protocol review seems to have a substantial impact on investigator-initiated trials but less effect on industry-sponsored trials. These findings may provide guidance on development and prioritization of institutional protocol review policies.

In recent years, the process of cancer clinical trial activation has come under scrutiny. Study activation barriers and delays limit treatment options available to patients, increase research costs, hinder trial accrual, and may result in study objectives becoming obsolete.1 Researchers have voiced concerns about the complexity, length, inconsistencies, excessive demands, and inappropriate conservatism of the review process.2–6 In turn, these factors contribute to delays in, and escalating costs of, developing new therapies.7–9 As a result, efforts are underway to streamline and accelerate these processes.10,11 Studies of the clinical trial activation process have focused on the individual contributions of the process components (eg, ethical review, budget, contract, site visit, supply shipment) and on comparisons among institutions.12–14

As part of this process, the NCI mandates that all clinical research involving patients with cancer at NCI-designated cancer centers undergo formal institutional scientific review. Studies covered by this requirement include interventional clinical trials, noninterventional studies, tissue banks, and medical records reviews. This review occurs in addition to ethical review by an Institutional Review Board (IRB). The separate scientific review requirement seems to be unique among medical fields; clinical trials that do not involve patients with cancer generally require only IRB review. Furthermore, clinical cancer research conducted at non–NCI-designated centers in the United States or at cancer centers in other countries may not be subject to this requirement.

Although the role of IRBs in clinical research conduct has been described extensively,4,15–17 there is a dearth of information on the effect of scientific review committees on protocol design and content. We therefore reviewed the decisions, requested changes and clarifications, and resulting protocol modifications of the Protocol Review and Monitoring Committee (PRMC) at the Harold C. Simmons Cancer Center at the University of Texas (UT) Southwestern Medical Center within a recent 5-year period.

Methods

Data Sources and Collection

The Simmons Cancer Center is a freestanding clinical facility affiliated with the UT Southwestern Medical Center that received NCI designation in 2010. The PRMC has been in operation since 2001. Members include clinicians, investigators, biostatisticians, pharmacists, regulatory experts, data and safety monitoring specialists, and patient advocates. The committee meets twice each month to review new protocol submissions, responses to prior reviews, and accrual of activated trials. All clinical cancer research conducted at UT Southwestern involving human subjects is reviewed by the PRMC. Certain types of protocols may undergo only an administrative review, such as medical records research or studies already deemed to have undergone adequate scientific review (eg, NCI cooperative group trials). Other protocols undergo full committee review. In general, each of these protocols is reviewed by 2 to 3 clinicians/investigators, a biostatistician, a pharmacist, and a data and safety monitoring specialist.

We collected the following documents for each study that underwent full PRMC review from January 1, 2009, through June 30, 2013: PRMC submission form, study protocol and consent form, reviewer evaluations, PRMC decision letter, principal investigator response letter, and any revised study documents. For each study, we recorded the following characteristics: year, disease under study, phase and type (interventional/noninterventional), and sponsor type (institutional/industrial). Institutional trials included investigator-initiated trials with a local study chair or a study chair at another institution. Interventional studies included therapeutic, prevention, supportive care, screening, detection, and diagnostic studies. Noninterventional studies included epidemiologic, observational, and correlative studies. We recorded the initial and final PRMC decision (categorized as approval, provisional approval, deferral, or disapproval). For protocols that had not received PRMC approval by the time of data cutoff, we recorded the most recent decision.

From the PRMC decision letter, we recorded all protocol changes and clarifications requested, which were broadly grouped as protocol-related or non–protocol-related. The non–protocol-related group included points related to the consent form or the PRMC submission form, such as funding adequacy and accrual expectations. Protocol-related points were categorized as study design (which included issues related to blinding, inclusion of placebo, randomization, stratification, selection of treatment arms, end points, assessments, monitoring, and statistical analysis plan), intervention (which generally referred to treatment dose and/or schedule), population (inclusion/exclusion criteria), or evidence/rationale (which included requests to obtain or clarify preclinical and other evidence used to support study design). All data collection was performed by a single investigator (N.N.). Ten percent of studies were randomly selected for extensive data review by an experienced clinical investigator and long-term PRMC member (D.E.G.). Discrepancies were noted in 0.8% of all data cells (116 cells per trial).

Statistical Analysis

We analyzed the associations between trial characteristics, PRMC decisions, and PRMC protocol modifications using chi-square tests, Fisher exact tests, logistic regression, linear regression, and general linear models. All reported P values are 2-sided, and a P value of less than .05 was used as the criterion for statistical significance. No adjustment was made for multiple comparisons. To limit the influence of outlying data, we performed additional analyses in which the numbers of changes/clarifications requested and implemented were binned as follows: 0, 1 to 5, 6 to 10, and greater than 10. All statistical calculations were performed using SAS for Windows 9.3 (SAS Institute Inc., Cary, NC).
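To illustrate the kind of 2x2 comparison used throughout the Results (the analysis itself was run in SAS), the sponsor-by-changes-requested association can be sketched with a Pearson chi-square test. The counts below are reconstructed from the reported percentages (226 studies, 77% industry-sponsored; changes requested for 27% and 54% of industry-sponsored and investigator-initiated trials, respectively) and are therefore approximate. For 1 degree of freedom, the p-value follows from the identity P(χ²₁ > x) = erfc(√(x/2)), so only the standard library is needed.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]], with its df=1 p-value computed via
    P(chi2_1 > x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under the independence hypothesis
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Approximate counts reconstructed from the reported percentages
# (174 industry-sponsored, 52 investigator-initiated; 47 + 28 = 75
# trials with changes requested, matching the reported total):
#                       changes requested   no changes
# industry-sponsored           47               127
# investigator-initiated       28                24
stat, p = chi2_2x2(47, 127, 28, 24)
print(f"chi2 = {stat:.1f}, p = {p:.5f}")  # p < .001, consistent with the paper
```

This is a sketch under reconstructed counts, not a reanalysis; the paper's P<.001 for this comparison is reproduced at these approximate cell values.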

Results

A total of 226 studies that underwent full PRMC review from January 1, 2009, through June 30, 2013, were identified and included in the analysis. Of these, 23% were investigator-initiated; 82% were interventional. Additional study characteristics are listed in Table 1. Trial sponsor was correlated with trial type and trial phase. Among industry-sponsored trials, 87% were interventional compared with 65% among investigator-initiated trials (P<.001). Industry-sponsored trials were more likely to be later-phase (9% no phase; 16% phase I/pilot; 34% phase II; 41% phase III) compared with investigator-initiated trials (27% no phase; 35% phase I/pilot; 31% phase II; 7% phase III; P<.001).

The initial PRMC decision was approval in 90 trials (40%). Of the 136 initial "other" decisions, 118 (87%) were provisional approvals (ie, approval pending an acceptable response to stipulations), 17 (12%) were deferrals (ie, reevaluation at a future committee meeting after receipt of the response to stipulations), and 1 (1%) was a disapproval. At the time of this analysis, the most recent decision was a nonapproval for 10 studies: in addition to the 1 disapproval, there were 3 provisional approvals, 2 deferrals, and 4 withdrawals. Initial and final PRMC decisions according to trial characteristics are shown in Table 2. Initial and final PRMC decisions differed significantly according to study sponsor, with more initial approvals (48% vs 11%; P<.001) and final approvals (97% vs 90%; P=.05) for industry-sponsored than for investigator-initiated trials.

Table 1. Characteristics of Clinical Trials Undergoing Full Scientific Review at the Harold C. Simmons Cancer Center (2009–2013)

Among the 226 studies in the analysis, the PRMC requested changes for 75 trials (33%). A total of 270 changes were requested (mean, 3.6 changes per study for which changes were requested). Of these, 132 (49%) were protocol-related (mean, 0.6 per study) and 138 (51%) were non–protocol-related (mean, 0.6 per study). The proportion of trials with changes requested and implemented is shown in Table 3. Changes were twice as likely to be requested for investigator-initiated trials (54%) as for industry-sponsored trials (27%; P<.001). Requested changes were classified as follows: evidence/rationale (7%), design (26%), intervention (11%), population (5%), consent form (13%), and other (38%). For 149 studies, no changes were requested by the PRMC. The maximum number of requested changes was 17. After PRMC review, a total of 243 changes were made across the 226 studies (mean, 1.1 changes made per study): 109 (45%) were protocol-related (mean, 0.5 per study) and 134 (55%) were non–protocol-related (mean, 0.6 per study).

Table 2. Initial and Final Scientific Review Committee Approval Decisions According to Trial Characteristics
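As a sanity check, the reported means can be reproduced from the underlying counts. Note the differing denominators: the 3.6 figure appears to use the 75 trials with changes requested, whereas the per-category means use all 226 studies.

```python
# Reproducing the reported per-study means from the raw counts in the text.
n_studies = 226       # all studies undergoing full PRMC review
n_with_changes = 75   # trials for which any changes were requested

# Denominator for the headline mean is trials with changes requested...
assert round(270 / n_with_changes, 1) == 3.6   # changes requested
# ...while the per-category means are over all 226 studies.
assert round(132 / n_studies, 1) == 0.6        # protocol-related, requested
assert round(138 / n_studies, 1) == 0.6        # non-protocol-related, requested
assert round(243 / n_studies, 1) == 1.1        # all changes implemented
assert round(109 / n_studies, 1) == 0.5        # protocol-related, implemented
assert round(134 / n_studies, 1) == 0.6        # non-protocol-related, implemented
print("all reported means reproduced")
```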

The average numbers of changes requested and implemented per protocol according to study characteristics are shown in Table 4. Among trials for which changes were requested, the number of changes requested per protocol was significantly associated with study sponsor and phase. The same associations were observed in analyses that used binning of values (data not shown). An average of 5.6 changes per protocol were requested for investigator-initiated trials compared with an average of 2.4 changes per industry-sponsored trial (P<.001). The average number of requested changes was 4.7 for phase I/pilot studies, 3.0 for phase II studies, and 1.9 for phase III studies (P=.02). The average number of changes implemented per protocol was also significantly associated with study sponsor (P=.008) and phase (P=.04). Among the 149 studies for which no change was requested by the PRMC, changes were implemented in 20 (13%). Instances in which more changes were implemented than were requested likely represent one of the following scenarios: (1) the research team chose to make additional changes independent of the scientific review process and documented them in the response letter; (2) one requested change led to more than one implemented change (eg, changing a consent form to reflect a change in study design); or (3) a clarification requested by the PRMC resulted in a change to the study.

Table 3. Proportion of Trials With Changes Requested and Implemented According to Trial Characteristics

Types of changes requested and implemented for investigator-initiated and industry-sponsored trials are shown in Table 5. In all categories, changes were more likely to be requested for investigator-initiated trials than for industry-sponsored trials. These differences were most pronounced for changes related to study design, intervention, and population. Design-related changes were requested for 35% and implemented in 40% of investigator-initiated trials compared with 13% and 5%, respectively, for industry-sponsored trials.

A total of 517 clarifications were requested for 106 studies (47%). These clarifications were protocol-related in 385 instances (74%) and non-protocol-related in 132 instances (26%). In response to these requests, a total of 399 clarifications were addressed in response letters (307 protocol-related; 92 non-protocol-related). Among the 120 studies for which no clarifications were requested, clarifications were provided at the time of resubmission for 20 studies (17%). The proportion of trials with clarifications requested and addressed is shown in Supplemental Table 1 (available online, in this article, at JNCCN.org). Clarifications were more likely to be requested for investigator-initiated trials (73%) than for industry-sponsored trials (37%; P<.001). There was also an association with trial phase, with clarifications requested for 61% of phase I trials and 32% of phase III trials (P<.001).

Table 4. Number of Changes Requested and Implemented Among Trials for Which Changes Were Requested and Implemented

Among trials for which clarifications were requested, the mean numbers of clarifications requested and addressed are shown in Supplemental Table 2 (available online, in this article, at JNCCN.org).

Discussion

To our knowledge, this is the first study to evaluate the impact of NCI-mandated scientific review on cancer clinical trial development and content. In our analysis of more than 200 cancer clinical trials that underwent full scientific board review at a single NCI-designated cancer center, we found that committee requests and decisions, and the resulting protocol modifications, were highly correlated with study sponsor. In general, compared with industry-sponsored trials, investigator-initiated trials were less likely to be approved initially (12% vs 48%) or ever (90% vs 97%). Among trials for which the PRMC requested changes, the number of requested changes for investigator-initiated trials was more than twice that requested for industry-sponsored trials (mean, 5.6 vs 2.4). Sponsor-related differences were most pronounced for changes related to study design, intervention, and population, for which the likelihood of requested changes was 3 to 5 times greater for investigator-initiated trials. Furthermore, after scientific review, the proportion of trials for which requested changes were implemented was consistently higher for investigator-initiated trials. After review, changes to study design were implemented in 40% of investigator-initiated trials, but in only 5% of industry-sponsored trials. Changes to study intervention were implemented in 19% and 3% of trials, respectively. Changes to study population were implemented in 14% and 1% of trials, respectively.

Table 5. Proportion of Investigator-Initiated and Industry-Sponsored Trials With Specific Changes Requested and Implemented According to Trial Characteristics

Several explanations are possible for the differential impact of institutional scientific review on investigator-initiated and industry-sponsored clinical trials. One is that, presumably having already undergone considerable scientific review before protocol completion, industry-sponsored trials raise fewer scientific concerns than investigator-initiated trials. This observation may also be related to trial phase. Initial scientific review committee approval rates were highest for phase III trials (57%), which were usually industry-sponsored, and lowest for phase I/pilot trials (22%; P<.001), which were more likely to be investigator-initiated. Later phase trials, which may feature more straightforward, less ambitious designs and be perceived as conveying lower risk, may raise fewer scientific questions. Finally, a degree of learned helplessness or nihilism may also impact committee decisions. Although it is relatively straightforward and feasible to modify an investigator-initiated trial protocol, modifying a multicenter industry-sponsored research protocol that has already undergone multiple amendments and may already be activated at other centers represents a far more challenging consideration. With that in mind, scientific review committee members may be less likely to request changes to industry-sponsored trials. In contrast to protocol-related changes, the likelihood of requested nonprotocol changes (such as to the consent form or other materials) being implemented was similar for industry-sponsored and investigator-initiated trials, which may reflect the relative feasibility of altering these documents compared with altering the protocol itself.

Despite the lower rates of requested changes being implemented for industry-sponsored clinical trials, ultimately 97% of these studies were approved by the scientific review committee. In an era when the length and complexity of the clinical trial activation process have come under scrutiny,10 this observation may raise questions about the value added by institutional scientific review of industry-sponsored research. In a Vanderbilt University study, the median time cancer clinical trial protocols spent in scientific review exceeded that spent in IRB review (median, 70 vs 47 days). However, other processes occurring in parallel, such as contract negotiations (median, 78 days), may have been the principal rate-limiting steps.12 A comparison of a US cancer center (Washington University) and a European cancer center (University of Turin, Italy) revealed a lengthier and more complex protocol activation process in the United States. Compared with the European site (which did not have an institutional scientific review committee), the US site had more steps (>110 vs <60) and required twice as long (285 vs 145 days) from protocol submission to first accrual.14 Particularly for multicenter trials, multiple regulatory committee reviews have not consistently demonstrated clear impact on study procedures or human subject protection.18–21 At various institutions, efforts to streamline this process have included implementing a clinical trials "control tower" to track protocol progress electronically from inception to closure, and permitting local review steps to occur in parallel with rather than after FDA review.10,11

Our findings also suggest potential overlap with IRB and administrative reviews. Presumably, the express purview of a cancer center scientific review committee is to evaluate the scientific merits of a protocol: study design, clinical appropriateness, objectives and hypotheses, priority, statistical analysis, outcome and safety monitoring, and the overall risk-benefit ratio. However, 51% of the changes requested in this study were related to the study consent form and other nonprotocol documentation. The issue of committee mission creep or territorial expansion has been the subject of prior work, primarily focused on the converse phenomenon: the raising of scientific concerns by ethical review committees.15 In a British study of 141 Research Ethics Committee letters that were not an initial approval, 74% raised scientific issues.22 The presence of scientific issues was correlated with final approval status: scientific issues were raised in 100% of studies rejected initially, 92% of studies with initial provisional review and subsequent rejection, and 60% of studies with initial provisional review and subsequent acceptance.

This analysis had several limitations. As a single-center study, its findings may not be generalizable to other cancer centers. As has been demonstrated for ethical review boards (ie, IRBs),16 there may be considerable variation in concerns and decisions across institutions. Furthermore, the proportion of clinical trials that are industry-sponsored versus investigator-initiated may differ between institutions, thereby affecting the relative impact of scientific review committees. Our analysis reported requested and implemented protocol changes in aggregate; thus, it was not feasible to determine which specific requested changes were implemented and which were not. We limited our data collection to the PRMC review letters, the investigator response letters, and other PRMC documentation; we did not review full clinical trial protocols. There may be instances in which protocol changes were reported but not actually implemented, and vice versa, although this seems unlikely. Finally, it is not possible from this analysis to determine the value added or the quality of scientific review and the resulting changes to study design.

Conclusions

This study demonstrates that NCI-mandated scientific review seems to have a substantial impact on investigator-initiated cancer clinical trials. Not uncommonly, this process results in modifications to study design, intervention, and population. Recently, the NCI has requested that centers separately and formally review investigator-initiated clinical trials at the concept stage, which may result in an even greater impact of the local peer review process. However, the impact of NCI-mandated scientific review on industry-sponsored trials, which at many centers may comprise most research protocols, seems far more limited. To what extent this difference reflects differences in the initial scientific quality of submitted protocols, differences in the ability to modify submitted protocols, or other factors is not clear. Regardless, cancer centers may benefit from policies permitting expedited scientific review of industry-sponsored research, allowing more rapid activation of these trials and letting scientific reviewers focus their efforts on investigator-initiated trials.

Supplemental Table 1. Proportion of Trials With Clarifications Requested and Addressed According to Trial Characteristics

Supplemental Table 2. Number of Clarifications Requested and Addressed According to Trial Characteristics

The authors have disclosed that they have no financial interests, arrangements, affiliations, or commercial interests with the manufacturers of any products discussed in this article or their competitors.

This work was supported by a National Cancer Institute Cancer Clinical Investigator Team Leadership Award (1P30 CA142543-01 supplement) (to D.E.G.) and by a National Institutes of Health National Institute of Diabetes and Digestive and Kidney Diseases Short-Term Institutional Research Training Grant (5 T35 DK 66141-10) (to N.N.). Biostatistical support was provided by the Biostatistics Shared Resource at the Harold C. Simmons Cancer Center, University of Texas Southwestern Medical Center, which is supported in part by National Cancer Institute Cancer Center Support Grant, 1P30 CA142543-01.

The data presented in this article were presented in an abstract at the 2014 ASCO Annual Meeting; May 31–June 3, 2014; Chicago, IL.

The authors wish to thank Tiffany Levine, Arlene Thomas, Jennifer Davis, and Erin Williams from the Simmons Cancer Center Clinical Research Office for assistance collecting Protocol Review and Monitoring Committee documents. The authors would also like to thank Helen Mayo, MLS, from the UT Southwestern Medical Library for assistance performing literature searches.

References

  • 1. Duley L, Antman K, Arena J, et al. Specific barriers to the conduct of randomized trials. Clin Trials 2008;5:40–48.
  • 2. Sheard L, Tompkins CN, Wright NM, Adams CE. Non-commercial clinical trials of a medicinal product: can they survive the current process of research approvals in the UK? J Med Ethics 2006;32:430–434.
  • 3. Lux AL, Edwards SW, Osborne JP. Responses of local research ethics committees to a study with approval from a multicentre research ethics committee. BMJ 2000;320:1182–1183.
  • 4. Edwards SJ, Ashcroft R, Kirchin S. Research ethics committees: differences and moral judgement. Bioethics 2004;18:408–427.
  • 5. Baer AR, Bridges KD, O'Dwyer M, et al. Clinical research site infrastructure and efficiency. J Oncol Pract 2010;6:249–252.
  • 6. Stewart PM, Stears A, Tomlinson JW, Brown MJ. Regulation—the real threat to clinical research. BMJ 2008;337:a1732.
  • 7. Stewart DJ, Whitney SN, Kurzrock R. Equipoise lost: ethics, costs, and the regulation of cancer clinical research. J Clin Oncol 2010;28:2925–2935.
  • 8. DiMasi JA, Hansen RW, Grabowski HG. The price of innovation: new estimates of drug development costs. J Health Econ 2003;22:151–185.
  • 9. Christie DR, Gabriel GS, Dear K. Adverse effects of a multicentre system for ethics approval on the progress of a prospective multicentre trial of cancer treatment: how many patients die waiting? Intern Med J 2007;37:680–686.
  • 10. Kurzrock R, Pilat S, Bartolazzi M, et al. Project Zero Delay: a process for accelerating the activation of cancer clinical trials. J Clin Oncol 2009;27:4433–4440.
  • 11. Hammond AL, Waller EK, Finkelstein LB. CT2—the clinical trials control tower: overcoming barriers to opening oncology clinical trials. J Clin Oncol 2007;25:1288; author reply 1288.
  • 12. Dilts DM, Sandler AB. Invisible barriers to clinical trials: the impact of structural, infrastructural, and procedural barriers to opening oncology clinical trials. J Clin Oncol 2006;24:4545–4552.
  • 13. Dilts DM, Sandler AB, Cheng SK, et al. Steps and time to process clinical trials at the Cancer Therapy Evaluation Program. J Clin Oncol 2009;27:1761–1766.
  • 14. Wang-Gillam A, Williams K, Novello S, et al. Time to activate lung cancer clinical trials and patient enrollment: a representative comparison study between two academic centers across the Atlantic. J Clin Oncol 2010;28:3803–3807.
  • 15. Dixon-Woods M, Angell E, Tarrant C, Thomas A. What do research ethics committees say about applications to do cancer trials? Lancet Oncol 2008;9:700–701.
  • 16. Mansbach J, Acholonu U, Clark S, Camargo CA Jr. Variation in institutional review board responses to a standard, observational, pediatric research protocol. Acad Emerg Med 2007;14:377–380.
  • 17. Ortega R, Dal-Re R. Clinical trials committees: how long is the protocol review and approval process in Spain? A prospective study. IRB 1995;17:6–9.
  • 18. Humphreys K, Trafton J, Wagner TH. The cost of institutional review board procedures in multicenter observational research. Ann Intern Med 2003;139:77.
  • 19. Byrne MM, Speckman J, Getz K, Sugarman J. Variability in the costs of institutional review board oversight. Acad Med 2006;81:708–712.
  • 20. Sugarman J, Getz K, Speckman JL, et al. The cost of institutional review boards in academic medical centers. N Engl J Med 2005;352:1825–1827.
  • 21. Sobolski GK, Flores L, Emanuel EJ. Institutional review board review of multicenter studies. Ann Intern Med 2007;146:759.
  • 22. Angell EL, Bryman A, Ashcroft RE, Dixon-Woods M. An analysis of decision letters by research ethics committees: the ethics/scientific quality boundary examined. Qual Saf Health Care 2008;17:131–136.

Correspondence: David E. Gerber, MD, Division of Hematology-Oncology, Harold C. Simmons Cancer Center, University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Mail Code 8852, Dallas, TX 75390-8852. E-mail: david.gerber@utsouthwestern.edu