Creating an Effort Tracking Tool to Improve Therapeutic Cancer Clinical Trials Workload Management and Budgeting

Quantifying data management and regulatory workload in clinical research is a difficult task that would benefit from a robust tool to assess and allocate effort. As in most clinical research environments, the University of Michigan Comprehensive Cancer Center (UMCCC) Clinical Trials Office (CTO) struggled to allocate data management and regulatory time effectively, with frequently inaccurate estimates of how much time was required to complete the specific tasks performed by each role. In a dynamic clinical research environment in which the volume and intensity of work ebb and flow, determining the effort required to meet study objectives was challenging. In addition, a data-driven understanding of how much staff time was required to complete a clinical trial was needed to ensure accurate trial budget development and effective cost recovery. Accordingly, the UMCCC CTO developed and implemented a Web-based effort-tracking application with the goal of determining the true costs of data management and regulatory staff effort in clinical trials. This tool was developed, implemented, and refined over a 3-year period. This article describes the process improvement and subsequent workload leveling within the data management and regulatory teams that enhanced the efficiency of UMCCC's clinical trials operation.

Quantifying workload activity is difficult for clinical research data managers and regulatory staff. A robust data collection tool is needed to appropriately assess and allocate workloads. The ideal effort-tracking tool should be objective, widely applicable, highly functional, low maintenance, and user-friendly. In a structured yet dynamic clinical research setting, the best practice is to incorporate tracking time spent per trial into a regular routine (daily or weekly).

Articles on workload management in a clinical research environment have described methods that are either too simplistic (e.g., based on accrual) or too subjective (protocol complexity grids). The NCI established a universal workload formula in which one full-time equivalency equals 40 credits, or 25 to 30 active patients and approximately 50 follow-up patients.1 However, this formula neglects to account for the complexity of trials and the resultant increase in workload burden.2 The NCI is currently working on a protocol complexity model with a representative number of those elements “deemed to involve increased effort at the participating sites.”3 The concept of burden is subjective in nature, however, and therefore a site lacking in research infrastructure may find even the simplest protocol burdensome.4
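To make the cited formula concrete, the short sketch below tallies workload credits and converts them to full-time equivalents under the 1 FTE = 40 credits assumption. The per-patient credit weights are illustrative assumptions chosen only so that roughly 25 to 30 active patients plus about 50 follow-up patients approximate one FTE; they are not values published by the NCI.

    # Minimal sketch of an FTE calculation in the spirit of the NCI workload
    # formula (1 FTE = 40 credits). The per-patient weights are hypothetical.

    CREDITS_PER_FTE = 40

    ACTIVE_CREDIT = 1.0      # assumed credit weight for an active-treatment patient
    FOLLOW_UP_CREDIT = 0.3   # assumed credit weight for a follow-up patient

    def required_fte(active_patients: int, follow_up_patients: int) -> float:
        """Estimate full-time equivalents needed for a given patient load."""
        credits = active_patients * ACTIVE_CREDIT + follow_up_patients * FOLLOW_UP_CREDIT
        return credits / CREDITS_PER_FTE

    # Roughly one FTE: ~27 active patients plus ~45 follow-up patients.
    print(f"{required_fte(27, 45):.2f} FTE")

A formula of this kind counts patients but not protocol complexity, which is exactly the limitation the article identifies.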

Various business models are used to provide clinical research services.1 In some organizations, research nurses and specialized support services, such as data management, regulatory, finance, and informatics, may be organized into a central office. In other organizations, a loose network of research coordinators may perform multiple functions and report to various individuals (investigator, program director, division administrator, or clinic manager). Still others may use a hybrid of centralized and decentralized structures. This is relevant to the issue of workload metrics because a tool designed for one site may not be applicable to another site if its business model is dissimilar.

The most efficient model allows staff to specialize in research activities. Focusing attention on “a small set of linked tasks” at the institution or departmental level correlates with improved operating performance.5 The University of Michigan Comprehensive Cancer Center (UMCCC) Clinical Trials Office (CTO) is a centralized office that uses specialized research personnel in multitiered roles and disease site–specific teams, serving as a shared resource.

Description of the Problem

Before September 2005, trial budgets within the CTO were developed based on the dollar amount offered by the sponsor to complete the trial. The budget negotiation process failed to take into account the effort required by the trial, staff salaries, protocol complexity, and the length of the trial. Although the CTO was unable at that time to quantify the exact use of its personnel across its trial portfolio, it knew that funding was not sufficient to fully cover CTO services, as evidenced by the UMCCC bearing most of the costs of operating the CTO. Workload was also difficult to quantify and allocate because effort was affected by many factors specific to each trial. What seemed theoretically reasonable based on the NCI workload formula could, in actuality, be an unreasonable amount of work for one staff member. The workload formula seemed most appropriate for national cooperative group trials, rather than industry-sponsored or investigator-initiated institutional trials.

Development of the Effort Tracking Tool

Qualitative Aspects

The UMCCC CTO first developed a comprehensive list of all trials using CTO resources. Sources of funding for each trial were carefully detailed. The investigation showed that many of the open trials were underfunded or lacked a comprehensive budget to cover all CTO services. In some instances, staff had worked on trials for years with UMCCC bearing all data management and regulatory costs. The office lacked a systematic approach to developing trial budgets and needed a mechanism for accurate, real-time cost recovery of CTO services.

As opposed to the traditional research coordinator model, in which one person performs all data management and regulatory tasks, the workload of the UMCCC CTO is divided among regulatory coordinators and data managers who are disease site–specific. Disease site–specific teams are believed to be more efficient because education and concentration in one area allow teams and managers to become experts in the clinical research aspects of a particular disease. UMCCC job classifications for data managers and regulatory personnel have 2 tiers, an entry level and an expert level, and workloads are distributed according to complexity, volume, and experience. CTO staff are not responsible for consenting patients, scheduling patients, or performing research blood draws; these tasks are performed by clinical research staff in the clinic setting. Although the UMCCC CTO divides responsibilities among regulatory coordinators and data managers according to disease site, the Research Effort Tracking Application (RETA) can be adapted to other staffing models as needed. The tasks itemized in RETA are separated into data management and regulatory task lists; however, if one person performs both functions, they can readily select the appropriate task from either list for the duties performed. The tasks themselves are not disease-specific, and therefore even staff members who work with multiple disease sites can appropriately document how they spend their time.

Staff were directed to develop a task list of all data management and regulatory activities performed in support of the research trials. The tasks were standardized and grouped according to job role (e.g., regulatory coordinator, data manager). In addition to trial-related tasks, each grouping contained non–trial-specific duties to account for items such as meetings and institutional training requirements. The goal was to develop metrics for each role and its associated tasks.
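A simple way to picture the resulting task inventory is sketched below: each task is tagged with the role it belongs to and whether it is trial-specific, so that role-appropriate lists (including non–trial-specific duties such as meetings and training) can be presented for logging. The task names and field layout are illustrative assumptions, not the actual RETA task lists.

    # Illustrative sketch of role-specific task lists with trial and
    # non-trial-specific duties; the task names are examples only.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Task:
        name: str
        role: str            # e.g., "data_manager" or "regulatory_coordinator"
        trial_specific: bool

    TASK_LIST = [
        Task("Case report form completion", "data_manager", True),
        Task("Patient enrollment", "data_manager", True),
        Task("IRB submission", "regulatory_coordinator", True),
        Task("Protocol amendment processing", "regulatory_coordinator", True),
        # Non-trial-specific duties such as meetings and institutional training.
        Task("Team meeting", "data_manager", False),
        Task("Institutional training", "regulatory_coordinator", False),
    ]

    def tasks_for_role(role: str) -> list[Task]:
        """Return the task list presented to a staff member with the given role."""
        return [t for t in TASK_LIST if t.role == role]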

Quantitative Aspects

The CTO management team (administrative, finance, and information technology supervisors) used this qualitative information to create the RETA tool. RETA was completed and implemented by December 31, 2005, with all staff tracking their effort in the application. Although initially a desktop program, RETA was eventually upgraded to a scalable Web-based system to ensure ease of access and use by the clinical research teams. In its current form, RETA is available as SaaS (software as a service) and is managed by the CTO's information technology group. The application runs on a Windows IIS platform with an Oracle database on the back end, supported by institutional servers. Each night, the application also polls a specified URL for an XML feed of the study list; in this way, the system is interoperable with the CTO's clinical trials management system. As the system was refined and the significance of the information gleaned became apparent, usage expectations were incorporated into staff performance evaluations. Staff activity was initially monitored on a daily basis by CTO managers, and staff were required to account for 95% of their time using the role-related tasks listed in RETA (Table 1).
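The nightly study-list refresh can be summarized with a minimal sketch. The feed URL, XML element names, and table layout below are assumptions made for illustration, not RETA's actual interface, and SQLite stands in for the Oracle back end.

    # Minimal sketch of a nightly study-list refresh: poll a configured URL
    # for an XML feed and upsert the studies into the effort database.

    import sqlite3                     # stand-in for the Oracle back end
    import urllib.request
    import xml.etree.ElementTree as ET

    STUDY_FEED_URL = "https://ctms.example.org/studies.xml"   # hypothetical feed

    def refresh_study_list(db_path: str = "reta.db") -> int:
        """Fetch the study list feed and upsert it; returns the number of studies."""
        with urllib.request.urlopen(STUDY_FEED_URL) as response:
            root = ET.fromstring(response.read())

        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS study (protocol_id TEXT PRIMARY KEY, title TEXT)"
        )
        count = 0
        for study in root.iter("study"):                      # assumed element name
            conn.execute(
                "INSERT OR REPLACE INTO study (protocol_id, title) VALUES (?, ?)",
                (study.findtext("protocol_id"), study.findtext("title")),
            )
            count += 1
        conn.commit()
        conn.close()
        return count

Keeping the study list synchronized with the clinical trials management system means staff can only log effort against protocols the office actually manages.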

Table 1. Distribution of Effort by Task

Challenges in Implementation

Initially, the RETA tool was designed with role-specific task lists, each customized for either data management or regulatory staff. One of the main challenges in implementation, however, was standardizing exactly how each role would categorize its specific tasks when entering information into RETA. For example, when communicating with a sponsor to seek clarification about a patient eligibility requirement, some data managers categorized this communication under a general "other, specify" category rather than a more specific category, such as "patient enrollment." Regular review of the information being entered into RETA was required at the outset to identify disparate interpretations and educate staff on the appropriate allocation of their time.

Additionally, managers often found that staff were allocating as little as 40% to 50% of their time toward trial-related activities when, instinctively, they knew the percentage should be higher. One explanation was that staff found it challenging to allocate their time to a particular study when they were not working consistently on a single project for a specified length of time; examples include program research meetings in which multiple studies were discussed or time spent answering emails involving numerous studies. Ongoing identification of these issues was essential to obtain the most accurate data possible from RETA, and staff and managers were engaged in a continual, iterative process of reeducation and evaluation during the first year of implementation. Ultimately, using the profiles of the staff members who were most accurate in allocating their time, coupled with information about staff use of vacation and sick time, the authors determined that 70% to 75% of any given staff member's time could be allocated directly to trial-related tasks. The remaining 25% to 30% fell into non–trial-related activities, including vacation time, sick time, professional development, and team and office-wide meetings. These early metrics provided a framework for monitoring compliance on an ongoing basis and for easily identifying staff members who might require further training to use RETA correctly to track their time.

Managers held staff accountable for logging their effort accurately and consistently by providing them with reports of their activity and discussing concerns revealed by the data. The requirement to log effort accurately and consistently was incorporated into the general expectations of the staff evaluation process and discussed during mid-year and annual reviews. A task (e.g., "logging effort") was also added to RETA to capture the amount of time staff members needed to log their time, so that this overhead was clearly understood when expectations for completion were incorporated into performance reviews. On average, logging effort into RETA takes each staff member approximately 10 minutes per day, and staff are encouraged to log their effort before leaving work each day.
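As a rough illustration of the kind of compliance check described above, the sketch below computes each staff member's trial-related share of logged hours and flags anyone below the 70% benchmark. The record format and function names are assumptions for the example, not RETA's actual schema.

    # Sketch of a compliance check: fraction of logged hours that are
    # trial-related, with a flag for staff below the expected benchmark.

    from collections import defaultdict

    def trial_related_share(entries):
        """entries: iterable of (staff_name, hours, is_trial_related) tuples."""
        totals = defaultdict(lambda: [0.0, 0.0])   # staff -> [trial hours, all hours]
        for staff, hours, is_trial_related in entries:
            totals[staff][1] += hours
            if is_trial_related:
                totals[staff][0] += hours
        return {staff: trial / total for staff, (trial, total) in totals.items() if total}

    def flag_for_retraining(entries, threshold=0.70):
        """Staff whose trial-related share falls below the expected benchmark."""
        return [s for s, share in trial_related_share(entries).items() if share < threshold]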

Initially, data management and regulatory staff were resistant to the implementation of RETA. They voiced concerns that it would be used for punitive purposes and questioned how its use would benefit them. Managers reinforced that quantifying effort toward trials was essential to ensure appropriate resource allocation and support workload leveling, and emphasized the need to quantify the actual effort required to conduct a trial; this understanding was essential to ensure that sufficient resources were available to perform the work. Over time, as workload leveled out, staff realized that RETA provided valuable data to justify the need for additional staffing resources. Whereas in the past CTO staff had complained about being overloaded with too many assignments, RETA data now provided documentation to support their claims, resulting in a more realistic and fair distribution of work. This process supported greater acceptance of RETA and increased staff enthusiasm for its use. Notably, the UMCCC annual employee engagement survey, which measures employee job satisfaction across several variables, showed a subsequent improvement in scores for CTO staff after the implementation of RETA. The increase in scores was attributed in part to an improved workload distribution, which could be justified by RETA data and used to allocate assignments more fairly. CTO staffing was also incrementally increased once RETA documentation was presented showing the gap between workload and staffing.

Since its implementation, RETA has undergone only one major revision to the task lists that staff use to categorize their time. The primary force driving the revision was accumulated practical experience and feedback from RETA users. As implementation moved forward, it became clear that broader categories were needed wherever possible while still maintaining an appropriate level of specificity. Requiring too much specificity left staff struggling to identify the correct task against which to log their time when several similar tasks were listed. Conversely, categories that were too broad did not yield sufficient information to understand how much time was required before trial activation, during active enrollment, or after the trial was closed to enrollment. Understanding how much time was spent during the various phases of trial activity was essential to the budget development process and for effective workload distribution.

In response to the challenges staff encountered in logging time associated with attending program research meetings or answering e-mail for multiple studies, a function was created in RETA that would allow a staff member to select one task (such as “attended research program meeting”) to indicate that they performed that activity for 1 hour, and then select the multiple studies discussed at that meeting. Their 1 hour of effort would then be automatically divided evenly among the multiple studies. RETA's ability to be modified to add new tasks, change existing tasks, or add new categories is one of its strengths as a tool; it can be adapted and tailored to fit any function in clinical research. It is applicable to any research environment, and its usefulness is not limited to the UMCCC CTO.
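The even-split behavior described above can be pictured with a minimal sketch; the field names and study identifiers are illustrative, not RETA's actual schema.

    # Sketch of splitting one logged block of time evenly across the
    # studies selected for a multi-study activity.

    def split_effort(task: str, hours: float, study_ids: list[str]) -> list[dict]:
        """Return one effort record per study, each with an equal share of the hours."""
        if not study_ids:
            raise ValueError("at least one study must be selected")
        share = hours / len(study_ids)
        return [{"task": task, "study_id": sid, "hours": share} for sid in study_ids]

    # Example: 1 hour in a research program meeting covering three studies
    # yields one-third of an hour logged against each study.
    records = split_effort("Attended research program meeting", 1.0,
                           ["STUDY-A", "STUDY-B", "STUDY-C"])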

Effort Tracking Data in the Early Years

At the outset, the primary goal of developing and implementing RETA was to manage workload more effectively and to bill studies appropriately for the time actually required to perform the essential data management and regulatory functions. After a full year of collecting data in RETA, the group began to grasp its broader usefulness beyond workload leveling and accurate billing. For example, although it allowed the group to bill studies appropriately for data management and regulatory effort, it also provided valuable data for understanding whether the amount of time required to complete these functions was being properly estimated. At the beginning of a trial, managers are required to estimate the effort their team will need to perform the data management or regulatory functions. Before the availability of RETA data, these estimates were based on instinct and practical experience. RETA not only allowed the group to bill appropriately for data management and regulatory effort, it also allowed them to validate whether their effort estimates were accurate. If RETA data showed that the estimates were inaccurate, the group could evaluate which aspects of the trial might be responsible for it requiring more or less effort than projected. As RETA accumulated more data about the effort required to conduct various trials, the group was also able to use the data to guide effort estimates for studies known to be similar. Previously, assessing which studies were similar was also a process guided by instinct and practical experience; RETA data enabled the identification of similarities between studies based on specific trial characteristics, such as inpatient versus outpatient care and oral versus intravenous drug dosing. Within 18 months of implementation, managers had a rich source of information to draw on when creating effort estimates for new studies.
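As an illustration of this estimate-validation step, the brief sketch below compares a manager's up-front effort estimate with the hours actually logged in RETA and reports the variance. The protocol identifiers and data structures are hypothetical.

    # Sketch of comparing estimated versus actual effort per protocol.

    def effort_variance(estimates: dict[str, float], actuals: dict[str, float]) -> dict[str, float]:
        """Percent difference between actual and estimated hours per protocol."""
        return {
            protocol: (actuals.get(protocol, 0.0) - est) / est * 100
            for protocol, est in estimates.items()
            if est > 0
        }

    # Example: a protocol that required 30% more effort than projected.
    print(effort_variance({"PROTOCOL-X": 200.0}, {"PROTOCOL-X": 260.0}))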

Figure 1. Research effort tracking application report: project personnel effort. Abbreviations: DSM, data and safety monitoring; SAE, serious adverse event.

The group also sought to use RETA data to help identify opportunities for process improvement. For instance, RETA allowed them to fully quantify the amount of time data managers and regulatory coordinators spent on a trial before its activation. That information was used to create a better cost-recovery structure that allowed the CTO to adequately recoup expenses for the required effort. For example, RETA data enabled accurate estimates of the costs associated with the work involved in initiating a study, leading to improved cost recovery through a more realistic, nonrefundable study start-up fee.
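One way to picture this cost-recovery calculation is sketched below: average the pre-activation hours logged in RETA across comparable trials and convert them to a start-up fee at a blended hourly rate. The hours and rate shown are hypothetical and are not figures reported by the CTO.

    # Sketch of deriving a study start-up fee from observed pre-activation effort.

    def startup_fee(pre_activation_hours: list[float], hourly_rate: float) -> float:
        """Suggested start-up fee based on average observed pre-activation effort."""
        average_hours = sum(pre_activation_hours) / len(pre_activation_hours)
        return average_hours * hourly_rate

    # Example: comparable trials averaged roughly 60 hours before activation
    # at an assumed blended rate of $55 per hour.
    print(f"${startup_fee([52.0, 61.5, 66.0], 55.0):,.2f}")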

Figure 1 shows the Project Personnel Effort report that can be generated by RETA. This report provides a detailed account of which studies a specific team member worked on during a defined period, which tasks the team member performed for each of those studies, and the amount of time spent on each task. Figure 2 shows the Project Effort by Role report, an example of a longitudinal report generated by combining RETA data with information from the clinical trials management system. This report provides the protocol regulatory approval timeline, patient enrollment information, and a graphic representation by month of the effort expended on the trial by each personnel group (data management, regulatory). Additionally, although not included in the figure, the Project Details report provides an effort breakdown of the regulatory and data management staff who have logged work on the trial and details the tasks performed; these details are similar in nature to those provided in the Project Personnel Effort report. When ongoing trials similar to a new one can be identified, actual data from those trials yield insight for developing more accurate effort estimates than the prior process, which relied on practical experience and instinct. RETA data significantly improved the ability to accurately project requisite trial effort and, concomitantly, to determine whether it was feasible to conduct a trial based on the reimbursement provided by the sponsor or investigator.
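In essence, the Project Personnel Effort report is an aggregation of logged entries by study and task for one person over a reporting period. A minimal sketch under an assumed record format follows; it is not RETA's actual report query.

    # Sketch of aggregating one staff member's logged hours by study and task.

    from collections import defaultdict
    from datetime import date

    def personnel_effort_report(entries, staff, start: date, end: date):
        """entries: iterable of dicts with keys staff, study, task, hours, date."""
        report = defaultdict(lambda: defaultdict(float))   # study -> task -> hours
        for e in entries:
            if e["staff"] == staff and start <= e["date"] <= end:
                report[e["study"]][e["task"]] += e["hours"]
        return {study: dict(tasks) for study, tasks in report.items()}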

Figure 2. Research effort tracking application report: project effort by role. Abbreviations: PRC App, protocol review committee approval; PRC Sub, protocol review committee submission; IRB App, institutional review board approval; IRB Sub, institutional review board submission.

Conclusions

In today's health care environment, in which productivity is vital, quality improvement depends on a systematic approach that provides both quantitative (workload metrics) and qualitative (organization of work) measures. Accordingly, the UMCCC CTO developed a Web-based data collection tool to capture information about trial-related tasks and the effort required to complete them. Data from RETA have yielded insights that extend well beyond its usefulness as a productivity tool. Over a 4-year period, the data obtained from this tool have not only assisted with workload management, trial budget development, and cost recovery, but have also compelled the group to think critically about clinical trials management and enabled them to identify the most complex and time-consuming tasks that affect the bottom line. A subsequent article6 provides an analysis of the data yielded and discusses trial characteristics and their impact on workload and resource allocation.

Dr. Innes has disclosed that he is a co-inventor. All other authors have disclosed that they have no financial interests, arrangements, or affiliations with the manufacturers of any products discussed in this article or their competitors.

The authors would like to thank Janet Tarolli, RN, BSN, CCRC, and Joy Stair, MS, BSN, for serving as editors.

References

1. Gwede C, Daniels S, Johnson D. Organization of clinical research services at investigative sites: implications for workload measurement. Drug Inf J 2001;35:695-705.
2. Pharmaceutical Research and Manufacturers of America. Pharmaceutical Industry Profile 2009. Washington, DC: Pharmaceutical Research and Manufacturers of America; 2009:38.
3. NCI Clinical Trials Working Group. Trial complexity elements and scoring model. Working document. Available at: http://ctep.cancer.gov/protocolDevelopment/docs/trial_complexity_elements_scoring.doc. Accessed April 21, 2009.
4. Stephenson H. Strategic Research: A Practical Handbook for Phase IIIB and Phase IV Clinical Studies. Chapter 8: Optimizing site performance. Journal of Clinical Research Best Practices 2008;4(2).
5. Huckman RS, Zinner DE. Does focus improve operational performance? Lessons from the management of clinical trials. Strategic Management Journal 2008;29:173-219.
6. James P, Bebee P, Beekman L. Effort tracking metrics provide data for optimal budgeting and workload management in therapeutic cancer clinical trials. J Natl Compr Canc Netw, in press.


Correspondence: Marcy Waldinger, MHSA, 1500 East Medical Center Drive, 6316 Cancer Center, Ann Arbor, MI 48109-2800. E-mail: wald@umich.edu