One of the treasures of oncology practice is a vast and accessible clinical literature to draw on to guide treatment. Truly, oncology is a field dominated by science and evidence. Among the great resources in oncology are comprehensive treatment guidelines, such as those from NCCN, that review, interpret, and distill the literature and infuse it with clinical expertise to create a road map for effective treatment.
Given my thoughts on these, it was sobering to read a recent paper that investigated the levels of evidence underlying NCCN recommendations for 10 common cancers.1 The authors reviewed the NCCN Guidelines' levels of evidence and consensus (1: high-level evidence, uniform consensus; 2A: lower-level evidence, uniform consensus; 2B: lower-level evidence, nonuniform consensus but no major disagreement; 3: any level of evidence, major disagreement), identified all treatment recommendations in the Guidelines, and tallied the level of evidence behind each.
The major findings were that only 6% of recommendations were level 1; the vast majority (83%) were level 2A, reflecting consensus but less evidence. Among recommendations for diagnostic workup and cancer surveillance, 0% were level 1, and in many major cancer types (lung, prostate, colorectal, melanoma, pancreas, lymphoma, bladder, uterus), fewer than 10% of all recommendations were level 1. Level 1 recommendations were most common in initial therapy; beyond that setting, the rate drops substantially.
These observations point to a large gap in the evidence for many oncology practices, and though they are not necessarily new, it is time for all of us in oncology to mind the gap. The gap underscores the need for practical, well-documented, ongoing research. This research includes the study of new drugs and treatments, but it also demands inquiry into the processes of care: staging, surveillance, survivorship. As much research as there is, we need more to fill these many, many gaps.
For all oncologists, whether in community practice, not directly involved in research, or engaged in clinical research, these gaps are a call to action. First, the plain recognition of the deficiencies of the clinical literature should invite everyone to pursue clinical investigation more vigorously. There is so much we have yet to learn, at every level.
For those who care about guideline-based clinical practice, this paper is an alarm that the goals of evidence-based treatment remain elusive. That 5 of 6 recommendations in oncology rest on uniform opinion and only modest evidence is hardly a strong endorsement of rigor in clinical decision-making. Any guideline represents the convergence of the best evidence and thinking from "thought leaders" at a given time. In the long run, however, patients are best served by more evidence and less thinking. When evidence is compelling, little opinion is needed.
Finally, the data in the paper by Poonacha and Go underscore the need to clarify exactly what a level 2A recommendation is. My experience serving on panels tells me that there is "evidence" and then there is "evidence" and then there is "consensus." Teasing these apart to a greater degree will be critical for future guidelines and cancer care. This is particularly true as tumor types are splintered into subsets. Evidence for the care of patients with a subgroup or subtype of cancer is garnered less often from "all-comers" phase III trials and more commonly from smaller studies that more accurately capture the clinical experience. Relying on these narrower, often phase II, investigations will be a major challenge for guideline authors and organizations.
Reference
Poonacha TK, Go RS. Level of scientific evidence underlying recommendations arising from the National Comprehensive Cancer Network clinical practice guidelines. J Clin Oncol 2011;29:186–191.