Peter L. Greenberg, Leon E. Cosler, Salvatore A. Ferro and Gary H. Lyman
Guidelines for the management of patients with myelodysplastic syndromes (MDS) have been generated by the National Comprehensive Cancer Network (NCCN) Myelodysplastic Syndromes Panel. Because MDS is a heterogeneous spectrum of disorders, these patients have been categorized into prognostic subgroups, predominantly using the International Prognostic Scoring System (IPSS). Several drugs have been used to treat these patients, and their selection and recommended sequential use by the panel depend on disease characteristics and responses to treatment. Recombinant epoetin alfa and darbepoetin alfa have been the mainstay of therapy for treating anemia associated with MDS. The FDA has recently approved several other drugs for treating MDS, including azacitidine and decitabine for all stages of disease, lenalidomide for low-risk anemic patients with the del(5q) chromosomal abnormality, and deferasirox for treating iron overload. Deferoxamine is also used occasionally for iron chelation. Treatment with immunosuppressive therapy (antithymocyte globulin and cyclosporine) has been therapeutically beneficial for a subset of younger patients with MDS. Because the financial costs of these therapies are substantial and have received only limited attention, this article evaluates the costs of specific drugs and their sequential use in the lower-risk IPSS (low and intermediate-1) subgroups based on the NCCN guidelines. Results estimate an average annual cost for potentially anemia-altering drugs of $63,577 per patient, ranging from $26,000 to $95,000 depending on the specific therapies. In patients for whom these therapies fail, annual costs for iron chelation plus red blood cell transfusions are estimated to average $41,412. The economic impact of drug therapy should be weighed against the patient's potential for improvement in clinical outcomes, quality of life, and transfusion requirements.
Jeffrey Crawford, David C. Dale, Nicole M. Kuderer, Eva Culakova, Marek S. Poniewierski, Debra Wolff and Gary H. Lyman
This study was undertaken to describe the relationship between the occurrence and timing of neutropenic events and chemotherapy treatment in a community-based population of patients with cancer. The study included 2962 patients with breast, lung, colorectal, or ovarian cancer or lymphoma from a prospective U.S. registry of patients initiating a new chemotherapy regimen. Detailed patient-, disease-, and treatment-related data, including toxicities, were captured at baseline, at the beginning of each cycle, and at each midcycle blood draw for up to 4 cycles of treatment. Primary outcomes included febrile neutropenia (FN), severe neutropenia without fever/infection, and relative dose intensity (RDI). Thirty-seven percent of patients were aged 65 years or older, 43.5% had an Eastern Cooperative Oncology Group performance status of 1 or greater, and 27% had 1 or more comorbidities. Reductions in RDI to less than 85% of standard in the first cycle were planned in 23.6% of patients, whereas primary colony-stimulating factor prophylaxis was used in 18.2%. In the first 3 cycles of treatment, 10.7% of patients experienced FN, with most of these events (58.9%) occurring in the first cycle. This first-cycle pattern was consistently observed despite wide variations in event rates by tumor type, disease stage, chemotherapy regimen and dose, and patient characteristics. Despite frequent planned reductions from standard RDI, the incidence of FN remains high in community oncology practice in the United States. Improved methods of pretreatment assessment of patient risk factors for neutropenia are needed.
Henry J. Henk, Lena E. Winestone, Jennifer J. Wilkes, Laura Becker, Pamela Morin, Gary H. Lyman and Eric J. Chow
Background: Chronic myeloid leukemia (CML) treatment improved considerably after the introduction of oral tyrosine kinase inhibitors (TKIs). As a result, the number of patients living with CML may reach 250,000 by 2040. We track changes in TKI treatment adherence since 2001 and provide an early assessment of treatment costs following the availability of second-generation TKIs and generic imatinib. Methods: A retrospective cohort from the OptumLabs Data Warehouse, which includes claims data for privately insured and Medicare Advantage (MA) enrollees in a large private U.S. health plan with medical and pharmacy benefits, was used. Patients with CML initiated TKI treatment between May 2001 and October 2016 and were continuously enrolled in the health plan from 6 months prior through 12 months following TKI start. Adherence was defined by the medication possession ratio (MPR1 = total days' supply of imatinib in the first year divided by 365; 1 = perfect adherence). Total health care costs include medical and prescription medication benefits. MPR1 was modeled using ordinary least squares regression. The association between MPR1 and health care costs was estimated using a generalized linear model specified with a gamma error distribution and a log link. Results: We identified 1,793 eligible patients. First-line TKI choice has changed over time (dasatinib and nilotinib represent 45% of all 2016 starts; imatinib, 55%). From 2001 to 2016, adherence increased (Table 1). MPR1 was higher in men and increased with age until approximately age 62, after which it declined. MPR1 was lower for patients with more comorbid conditions prior to treatment. Overall, MPR1 was inversely associated with total health care costs (medical and pharmacy) among privately insured enrollees (P<.001) but not MA enrollees. The net impact of MPR1 on total health care costs diminished over time (P<.001): a 10-percentage-point decrease in MPR1 was associated with 12% and 4% lower total costs before and after the availability of second-generation TKIs, respectively.
When examining medical costs only, MPR1 was inversely associated with medical costs for both privately insured (P<.001) and MA enrollees (P=.016). Conclusions: We found that adherence to TKI treatment increased over time. Although imatinib is still used more frequently than other TKIs as first-line therapy, second-generation TKIs are increasingly being used as first-line agents. Possible cost offsets are decreasing over time, but it may be too early to formally evaluate the impact of generic imatinib.
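The adherence measure used above is simple arithmetic on pharmacy claims. As a rough illustration only, MPR1 could be computed from a list of dispensed days' supply as sketched below (the function name and input format are hypothetical, not from the study):

```python
def mpr_first_year(days_supply_per_fill):
    """Medication possession ratio for the first year of therapy:
    total days' supply dispensed in year 1 divided by 365,
    capped at 1.0 (1.0 = perfect adherence)."""
    total_supply = sum(days_supply_per_fill)
    return min(total_supply / 365, 1.0)

# Example: twelve 30-day fills in the first year -> 360/365
fills = [30] * 12
print(round(mpr_first_year(fills), 3))
```

A patient with twelve 30-day fills has MPR1 of about 0.986; gaps between fills lower the ratio, and the cap prevents early refills from inflating it above 1.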
Christine G. Kohn, Gary H. Lyman, Jan Beyer-Westendorf, Alex C. Spyropoulos, Thomas J. Bunz, William L. Baker, Daniel Eriksson, Anna-Katharina Meinecke and Craig I. Coleman
Background: Although not designated as guideline-recommended first-line anticoagulation therapy, patients are receiving rivaroxaban for the treatment and secondary prevention of cancer-associated venous thrombosis (CAT). We sought to estimate the cumulative incidence of recurrent venous thromboembolism (VTE), major bleeding, and mortality/hospice care in patients with CAT treated with outpatient rivaroxaban in routine practice. Methods: Using US MarketScan claims data from January 2012 through June 2015, we identified adults with active cancer (using SEER program coding) who had ≥1 primary hospitalization or emergency department discharge diagnosis code for VTE (index event) and received rivaroxaban as their first outpatient anticoagulant within 30 days of the index VTE. Patients were required to have ≥180 days of continuous medical/prescription benefits prior to the index VTE. Patients with a previous claim for VTE, atrial fibrillation, or valvular disease or receiving anticoagulation during the baseline period were excluded. We estimated the cumulative incidence with 95% CIs of recurrent VTE, major bleeding, and mortality or need for hospice care at 180 days, assuming competing risks. Results: A total of 949 patients with active cancer were initiated on rivaroxaban following their index VTE. Time from active cancer diagnosis to index CAT was ≤90 days for 27% of patients, 91 to 180 days for 19%, and >180 days for 54%. The mean [SD] age of patients was 62.5 [12.8] years, 43.6% had pulmonary embolism, and metastatic disease was present in 42.6%. During follow-up, there were 37 cases of recurrent VTE, 22 cases of major bleeding (17 gastrointestinal, 3 intracranial, 1 genitourinary, and 1 other bleed), and 105 deaths/hospice claims. The cumulative incidence estimate was 4.0% (95% CI, 2.8%–5.4%) for recurrent VTE, 2.7% (95% CI, 1.7%–4.0%) for major bleeding, and 11.3% (95% CI, 9.2%–13.6%) for mortality/hospice care. 
Conclusions: Event rates observed in this rivaroxaban-treated cohort were overall consistent with previous studies of patients with rivaroxaban- and warfarin-managed CAT.
Neelima Denduluri, Debra A. Patt, Yunfei Wang, Menaka Bhor, Xiaoyan Li, Anne M. Favret, Phuong Khanh Morrow, Richard L. Barron, Lina Asmar, Shanmugapriya Saravanan, Yanli Li, Jacob Garcia and Gary H. Lyman
Background: A wide variety of myelosuppressive chemotherapy regimens are used for the treatment of cancer in clinical practice. Neutropenic complications, such as febrile neutropenia, are among the most common side effects of chemotherapy, and they often necessitate delays or reductions in doses of myelosuppressive agents. Reduced relative dose intensity (RDI) may lead to poorer disease-free and overall survival. Methods: Using the McKesson Specialty Health/US Oncology iKnowMed electronic health record database, we retrospectively identified the first course of adjuvant or neoadjuvant chemotherapy received by patients without metastases who initiated treatment between January 1, 2007, and March 31, 2011. For each regimen, we estimated the incidences of dose delays (≥7 days in any cycle of the course), dose reductions (≥15% in any cycle of the course), and reduced RDI (<85% over the course) relative to the corresponding standard tumor regimens described in the NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines). Results: This study included 16,233 patients with 6 different tumor types who received 1 of 20 chemotherapy regimens. Chemotherapy dose delays, dose reductions, and reduced RDI were common among patients treated in community oncology practices in the United States, but RDI was highly variable across patients, regimens, and tumor types (0.486–0.935 for standard tumor regimen cohorts). Reduced RDI was more common in older patients, obese patients, and patients whose daily activities were restricted. Conclusions: In this large evaluation of RDI in US clinical practice, physicians frequently administered myelosuppressive agents at dose intensities lower than those of standard regimens.
Derek Weycker, Xiaoyan Li, Rich Barron, Hongsheng Wu, P.K. Morrow, Hairong Xu, Maureen Reiner, Jacob Garcia, Shivani K. Mhatre and Gary H. Lyman
Background: Clinical practice guidelines recommend prophylaxis in patients with cancer receiving a colony-stimulating factor (CSF) when the risk of febrile neutropenia (FN) is high (>20%). For patients receiving chemotherapy regimens not documented as high-risk, the decision regarding CSF prophylaxis use can be challenging, because some patients may be at high risk based on a combination of the regimen and individual risk factors. Methods: A retrospective cohort design and US private health care claims data were used. Study subjects received chemotherapy regimens classified as “low” or “intermediate,” or unclassified, in terms of FN risk, and were stratified by cancer and regimen. For each subject, the first chemotherapy course, and each cycle and FN episode within the course, were identified. FN incidence proportions were estimated by the presence and number of risk factors and chronic comorbidities. Results: Across the 17 tumor/regimen combinations considered (n=160,304 in total), 74% to 98% of patients had 1 or more risk factor for FN and 41% to 89% had 2 or more. Among patients with 1 or more risk factor, FN incidence ranged from 7.2% to 29.0% across regimens, and the relative risk of FN (vs those without risk factors) ranged from 1.1 (95% CI, 0.8–1.3) to 2.2 (95% CI, 1.5–3.0). FN incidence increased in a graded and monotonic fashion with the number of risk factors and comorbidities. Conclusions: In this retrospective evaluation of patients with cancer receiving chemotherapy regimens not classified as high-risk for FN in US clinical practice, most patients had 1 or more FN risk factor and many had 2 or more. FN incidence was found to be elevated in these patients, especially those with multiple risk factors.
Jeffrey Crawford, Jeffrey Allen, James Armitage, Douglas W. Blayney, Spero R. Cataland, Mark L. Heaney, Sally Htoy, Susan Hudock, Dwight D. Kloth, David J. Kuter, Gary H. Lyman, Brandon McMahon, David P. Steensma, Saroj Vadhan-Raj, Peter Westervelt and Michael Westmoreland
Randy C. Miles, Christoph I. Lee, Qin Sun, Aasthaa Bansal, Gary H. Lyman, Jennifer M. Specht, Catherine R. Fedorenko, Mikael Anne Greenwood-Hickman, Scott D. Ramsey and Janie M. Lee
Background: The purpose of this study was to assess advanced imaging (bone scan, CT, or PET/CT) and serum tumor biomarker use in asymptomatic breast cancer survivors during the surveillance period. Patients and Methods: Cancer registry records for 2,923 women diagnosed with primary breast cancer in Washington State between January 1, 2007, and December 31, 2014, were linked with claims data from 2 regional commercial insurance plans. Clinical data, including demographic and tumor characteristics, were collected. Evaluation and management codes from claims data were used to determine advanced imaging and serum tumor biomarker testing during the peridiagnostic and surveillance phases of care. Multivariable logistic regression models were used to identify clinical factors and patterns of peridiagnostic imaging and biomarker testing associated with surveillance advanced imaging. Results: Of 2,923 eligible women, 16.5% (n=480) underwent surveillance advanced imaging and 31.8% (n=930) received surveillance serum tumor biomarker testing. Compared with women diagnosed before the launch of the Choosing Wisely campaign in 2012, later diagnosis was associated with lower use of surveillance advanced imaging (odds ratio [OR], 0.68; 95% CI, 0.52–0.89). Factors significantly associated with use of surveillance advanced imaging included increasing disease stage (stage III: OR, 3.65; 95% CI, 2.48–5.38), peridiagnostic advanced imaging use (OR, 1.76; 95% CI, 1.33–2.31), and peridiagnostic serum tumor biomarker testing (OR, 1.35; 95% CI, 1.01–1.80). Conclusions: Although use of surveillance advanced imaging in asymptomatic breast cancer survivors has declined since the launch of the Choosing Wisely campaign, surveillance serum tumor biomarker testing remains prevalent, representing a potential target for further efforts to reduce low-value practices.
Ang Li, Qian Wu, Suhong Luo, Greg S. Warnick, Neil A. Zakai, Edward N. Libby, Brian F. Gage, David A. Garcia, Gary H. Lyman and Kristen M. Sanfilippo
Background: Although venous thromboembolism (VTE) is a significant complication for patients with multiple myeloma (MM) receiving immunomodulatory drugs (IMiDs), no validated clinical model predicts VTE in this population. This study aimed to derive and validate a new risk assessment model (RAM) for IMiD-associated VTE. Methods: Patients with newly diagnosed MM receiving IMiDs were selected from the SEER-Medicare database (n=2,397) to derive a RAM, and data from the Veterans Health Administration database (n=1,251) were then used to externally validate the model. A multivariable cause-specific Cox regression model was used for model development. Results: The final RAM, named the "SAVED" score, included 5 clinical variables: prior surgery, Asian race, VTE history, age ≥80 years, and dexamethasone dose. The model stratified approximately 30% of patients in both the derivation and the validation cohorts as high-risk. Hazard ratios (HRs) were 1.85 (P<.01) and 1.98 (P<.01) for high- versus low-risk groups in the derivation and validation cohorts, respectively. In contrast, the method of stratification recommended in the current NCCN Guidelines for Cancer-Associated Venous Thromboembolic Disease had HRs of 1.21 (P=.17) and 1.41 (P=.07) for the corresponding risk groups in the 2 datasets. Conclusions: The SAVED score outperformed the current NCCN Guidelines in risk stratification of patients with MM receiving IMiD therapy. This clinical model can help inform providers and patients of VTE risk before IMiD initiation and provides a simplified clinical backbone for further prognostic biomarker development in this population.
Andrew D. Zelenetz, Islah Ahmed, Edward Louis Braud, James D. Cross, Nancy Davenport-Ennis, Barry D. Dickinson, Steven E. Goldberg, Scott Gottlieb, Philip E. Johnson, Gary H. Lyman, Richard Markus, Ursula A. Matulonis, Denise Reinke, Edward C. Li, Jessica DeMartino, Jonathan K. Larsen and James M. Hoffman
Biologics are essential to oncology care. As patents for older biologics begin to expire, the United States is developing an abbreviated regulatory process for the approval of similar biologics (biosimilars), which raises important considerations for the safe and appropriate incorporation of biosimilars into clinical practice for patients with cancer. The potential for biosimilars to reduce the cost of biologics, which are often high-cost components of oncology care, was the impetus behind the Biologics Price Competition and Innovation Act of 2009, a part of the 2010 Affordable Care Act. In March 2011, NCCN assembled a work group consisting of thought leaders from NCCN Member Institutions and other organizations to provide guidance regarding the challenges health care providers and other key stakeholders face in incorporating biosimilars into health care practice. The work group identified challenges surrounding biosimilars, including health care provider knowledge, substitution practices, pharmacovigilance, naming and product tracking, coverage and reimbursement, use in off-label settings, and data requirements for approval.