Artificial Intelligence in Cancer Care: Opportunities, Challenges, and Governance

Presented by: Dennis Chornenky, MBA, MPH, MS; Tufia C. Haddad, MD; and Peter D. Stetson, MD, MA

Moderated by: Clifford S. Goodman, PhD

Experts convened at the NCCN 2025 Annual Conference to discuss the rapidly evolving landscape of artificial intelligence (AI) in cancer care, focusing on governance, opportunities, and challenges. Moderated by Clifford S. Goodman, PhD, the panel explored what makes AI unique in oncology, citing data intensity, multimodal data integration, the rapid pace of drug discovery, and high patient engagement. Current applications highlighted include administrative task reduction through record summarization and ambient listening tools, which are already improving efficiency and reducing clinician burden. Looking ahead, panelists foresee AI playing significant roles in precision medicine, predicting protein folding for drug design, optimizing treatment plans, improving remote patient monitoring for proactive care, enabling cancer interception through early detection, and potentially driving research discovery. However, challenges such as model accuracy, data quality, regulatory lag, ensuring trustworthiness, patient privacy, and ethical considerations remain critical. Robust, multidisciplinary governance frameworks, user engagement from inception, transparency, and a focus on demonstrating value are essential for successful and responsible AI adoption in oncology.

Since the public debut of technologies like ChatGPT in late 2022, interest in and the application of artificial intelligence (AI) in health care have surged. Terms such as machine learning, deep learning, large language models (LLMs), and generative AI are becoming increasingly common as AI permeates various aspects of health care—from reducing administrative burdens and managing clinical trials to enhancing imaging diagnostics and even driving targeted drug design. At the NCCN 2025 Annual Conference, a panel of experts, moderated by Clifford S. Goodman, PhD, an independent consultant in health care technology and policy, explored the current state, future trajectory, and inherent challenges of AI specifically within the complex field of cancer care.

What Makes AI in Cancer Care Unique?

According to the panelists, several factors distinguish the application of AI in oncology from its use in other health care domains. Peter D. Stetson, MD, MA, a hospitalist at Memorial Sloan Kettering Cancer Center, highlighted the “intensity” of AI use in specific areas within cancer centers like Memorial Sloan Kettering. Operationally, managing and summarizing vast amounts of outside records for new patient consultations presents a significant challenge, he said—particularly for destination centers.

“Cancer care is heavily reliant on imaging—including radiology, radiation oncology treatment planning, and digital pathology—making these prime areas for AI development and implementation,” Dr. Stetson continued. “Furthermore, oncology patients are often highly engaged, with high expectations for communication and involvement in their care, necessitating careful consideration when implementing AI tools.”

Dennis Chornenky, MBA, MPH, MS, CEO, Domelabs AI, also underscored the data-intensive nature of oncology, which has “always been more reliant on data than many other disciplines.” He pointed to the interdisciplinary collaboration required in cancer care, which can make deploying simpler AI tools more complex because of the need to integrate across different workflows, data sets, and teams. However, Mr. Chornenky believes that as AI capabilities advance, particularly with generative and agentic AI, these tools will become more adept at navigating complex, collaborative environments, citing the potential for AI to assist in tumor boards by taking notes, summarizing, and even coordinating follow-up actions.

Tufia C. Haddad, MD, Professor of Oncology, and Chair, Platform and Digital Innovation, Mayo Clinic Comprehensive Cancer Center, stressed the unprecedented pace of drug discovery and development in oncology as a driving factor behind the adoption of AI. “With nearly half of new FDA drug approvals and expanded indications occurring in oncology and benign hematology annually, and the rate accelerating from <1 per month a decade ago to approximately 1 per week now, keeping up is overwhelming, especially for community oncologists,” Dr. Haddad said. “This creates a critical need for AI-driven clinical decision support systems that can help to synthesize guidelines, new drug information, and patient data to assist clinicians.”

The complexity of data is another key factor. Dr. Stetson elaborated on the need to integrate multimodal data—genomics (somatic and germline), phenomics, imaging data, pathology reports, and clinical notes—to gain deeper insights. He described “fusion models” that combine analyses from different data types, such as using LLMs on radiology reports alongside image models on the scans themselves, to predict outcomes such as tumor progression. This requires a robust infrastructure to aggregate and process diverse data types effectively.

Current State of AI in Oncology

Although futuristic applications garner attention, AI is already making practical impacts. Dr. Haddad described tools for record summarization using LLMs as “life-changing” for clinicians, acting like a “digital fellow” that quickly synthesizes patient information from disparate sources within the electronic health record. Ambient listening technology, or “AI scribes,” used during clinic visits or in hospital rooms, is also emerging to automate documentation, summarize conversations, and reduce administrative burden. Mr. Chornenky confirmed, citing data from health systems, that these scribe tools are saving physicians time—even a few minutes per encounter adds up significantly—and improving their experience, beginning to fulfill the promise that electronic health records initially failed to deliver.

However, these tools are still evolving. Dr. Haddad noted that, as with early speech recognition, users still need to provide feedback to train and refine the software. Mr. Chornenky added that current LLMs are not inherently designed for continuous learning from feedback in the way a human colleague would learn; instead, they typically require periodic retraining.

Beyond documentation, AI is being applied to workflow challenges. Dr. Haddad mentioned systems using LLMs to help complete prior authorization forms and respond to insurance denials by providing supporting evidence.

The Future of AI in Oncology: Toward Agentic AI?

Panelists painted a picture of accelerating AI capabilities and impact in the coming years. Mr. Chornenky cautioned that human minds struggle to predict exponential change, which is characteristic of AI development. He outlined a progression from current generative AI (taking over tasks) to “agentic AI” (AI agents with more autonomy, coordinating tasks, requiring memory and learning) and eventually to “artificial general intelligence” (AGI)—systems capable of managing multiple AI agents and potentially entire enterprises. Mr. Chornenky anticipates models becoming “orders of magnitude more capable in reasoning and coordination” within the next 5 years.

In the nearer term (2–5 years), Dr. Stetson predicted AI will enhance precise decision support for targeted treatments, aid in treatment de-escalation to avoid toxicity, improve remote patient monitoring (filtering noise to provide smarter signals from home), and assist in complex cancer care navigation with shared decision-making capabilities. AI is already being used in preclinical drug design to identify targets by predicting protein folding, Dr. Stetson noted.

Dr. Haddad expanded on remote monitoring, envisioning AI enabling a shift from episodic to continuous, proactive care. AI models could select appropriate patients for home monitoring programs, predict adverse events earlier, and personalize follow-up schedules, potentially de-escalating care for stable survivors while intervening sooner for others.

Mr. Chornenky suggested that this trajectory could lead toward an “AI doctor,” particularly in telehealth or technology-enabled care settings. However, he emphasized that true integration into the clinical environment (eg, accessing records, scheduling, prescribing) is crucial for such a concept to be realized.

Regarding clinical trials, Dr. Stetson and Dr. Haddad envision AI improving equity by matching patients to trials they might otherwise miss. Empowering patients with AI tools to find trials themselves, coupled with decentralization, may overcome traditional access barriers.

Looking 10 years ahead, Mr. Chornenky predicted AI will drive a significant portion of major research discoveries. Dr. Haddad envisions AI enabling “cancer interception,” using data from wearables and other sources to detect cancers much earlier, leading to NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines) focused on prevention. Dr. Stetson hopes AI will solve the current “garbage in, garbage out faster” problem by enabling high-quality data curation.

Managing Risks and Ensuring Effective Governance

Despite the promise of AI, significant risks and challenges require careful management. Dr. Haddad emphasized the need for evidence generation through research to build trust among clinicians accustomed to evidence-based medicine. Studying AI models in practice rigorously for safety, usability, and feasibility—akin to early-phase trials—is critical before assessing their effectiveness in larger trials. Integrating AI smoothly into complex clinical workflows (“the last mile”) is often overlooked but vital for adoption, according to Dr. Haddad.

A key technical concern with generative AI is “hallucinations,” in which models generate incorrect or fabricated information. Dr. Stetson explained that this can occur when models extrapolate beyond their training data. Mitigation techniques such as retrieval-augmented generation, which supplements the model with current information, can help but are not foolproof. Mr. Chornenky noted a trade-off: although hallucinations are problematic for precision, they can also be viewed as a form of creativity, and overly constraining models might stifle innovation. “Model drift”—in which a model’s performance changes over time as data or conditions change—also requires ongoing monitoring.

Mr. Chornenky stressed that the rapid pace of AI innovation may outstrip regulatory and governance frameworks, creating new risks. “Organizations need robust internal governance addressing multiple dimensions: technical, regulatory compliance, and financial/strategic, demonstrating return on investment to build trust,” he said. “Patient privacy is paramount, especially with ambient listening and remote monitoring.”

The panelists agreed that patient consent is essential, along with transparency about AI use. Mr. Chornenky advocated for “persuasive design” to clearly explain the benefits and protections to patients, empowering informed decisions rather than burying disclosures. “Building trust with clinicians requires integrating AI tools into workflows and ensuring transparency and explainability,” he stated. “Clinicians should expect AI recommendations to come with explanations of their basis and associated uncertainties. Ethical pitfalls must be addressed, ensuring AI reflects human values and maintains human oversight.”

Liability also remains a complex question. Clinicians are typically responsible for decisions made using AI tools (treating AI as informational), but institutional liability could arise if automated systems cause harm as a result of inadequate safeguards. Dr. Stetson added that sound governance must include tracking safety events related to AI use, and speculated that liability could eventually flip, making the nonuse of proven AI tools a potential liability issue.

“Engaging payers early is crucial, especially as AI enables hyperpersonalized care pathways that may deviate from standard guidelines, requiring new reimbursement models,” said Dr. Stetson.

According to Dr. Stetson, effective governance requires a multidisciplinary team (clinicians, patients, quality/safety experts, regulatory/risk personnel, machine-learning operations experts), alignment with organizational strategic priorities, ongoing monitoring of deployed models, and a risk management framework to assess potential harms and determine appropriate oversight levels. User engagement from the design phase through deployment is critical for success.

Disclosures: Mr. Chornenky has disclosed being employed by Domelabs AI. Dr. Goodman has disclosed receiving consulting fees from Bayer HealthCare, Janssen Pharmaceutica Products, LP, Medtronic plc, Merck & Co., Inc., Novartis Pharmaceuticals Corporation, Roche Laboratories, Inc., and Takeda Pharmaceuticals North America, Inc.; and receiving honoraria from Genentech, Inc. The remaining presenters have disclosed no relevant financial relationships.

Correspondence: Dennis Chornenky, MBA, MPH, MS, Domelabs AI, 5021 Vernon Avenue S, #302, Edina, MN 55436. Email: dennis@domelabs.ai;
Tufia C. Haddad, MD, Mayo Clinic Comprehensive Cancer Center, 200 First Street SW, Rochester, MN 55905. Email: haddad.tufia@mayo.edu; and
Peter D. Stetson, MD, MA, Memorial Sloan Kettering Cancer Center, 633 Third Avenue, Suite 220N, New York, NY 10017. Email: stetsonp@mskcc.org; and Clifford S. Goodman, PhD, Clifford Goodman LLC, 5616 Marengo Road, Bethesda, MD 20816. Email: cliff.s.goodman@gmail.com