Increasing the effectiveness and efficiency of cancer drug development
New data released by the American Cancer Society in May 2009 documented that age-adjusted incidence and death rates from cancer in the United States have decreased, resulting in the avoidance of about 650,000 deaths over the past 15 years. Although progress has been made in reducing incidence and mortality rates and improving survival, cancer still accounts for more deaths than heart disease in Americans younger than 85 years of age - and over 562,000 deaths from cancer are projected to occur in 2009 alone. Similar patterns have been observed in most developed countries, where cancer is the second leading cause of death. The global number of cancer deaths is projected to increase by 45% from 2007 to 2030 (from 7.9 million to 11.5 million), in part due to a growing and aging global population.
Emerging breakthroughs in genomics, proteomics, and molecularly targeted cancer therapies hold great promise for improving outcomes for patients with cancer. However, progress has been much slower than hoped, raising important questions about how to improve the effectiveness and efficiency of clinical cancer research. This paper analyzes the productivity of cancer treatment development and offers specific recommendations to improve the effectiveness and efficiency of clinical cancer research.
Between 1990 and 2005, 920 cancer compounds underwent clinical trials, yet only 32 were approved by the Food and Drug Administration in the US. The average time spent in development and approval for those 32 therapies was 9.1 years. Particularly concerning is that half of the cancer therapies that ultimately failed reached expensive and time-consuming late-stage clinical testing before being abandoned. A similar pattern was observed in analyses by Cancer Research UK, which found that of 974 cancer drugs in clinical development during 1995-2007, 18% made it to market and 5% became standard treatments for the disease. Another study compared development attrition (failure) rates across a variety of clinical indications and found remarkable differences. For example, the attrition rate for cardiovascular products was 80% (a success rate of 1 in 5), while that of cancer products was 95% (a success rate of 1 in 20). Though the precise rate of attrition for cancer products varied across these studies depending on the years and products included, the messages are clear: too few cancer therapies are coming to market, and candidates that ultimately fail are taking too long to do so.
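The attrition arithmetic cited above can be checked directly. A trivial sketch (the helper function is mine, not from the cited studies):

```python
# Back-of-envelope check of the attrition figures quoted in the text.

def success_rate(attrition_pct):
    """Convert an attrition (failure) percentage into an
    approximate 'one in N' success rate."""
    success = (100 - attrition_pct) / 100
    return round(1 / success)

print(success_rate(80))   # cardiovascular: 5, i.e. roughly 1 in 5
print(success_rate(95))   # cancer: 20, i.e. roughly 1 in 20

# FDA approvals, 1990-2005: 32 of 920 compounds that entered trials
print(f"{32 / 920:.1%}")  # about 3.5%
```

Note that the roughly 3.5% approval rate for 1990-2005 and the 18% figure from Cancer Research UK differ because they cover different periods, cohorts, and definitions of success.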
The problems of inefficient and uncertain development are neither new nor unique to cancer, but they have become more urgent in an era that should be defined by clinically meaningful advances in cancer therapy based on emerging science. Historically, cancer therapy has consisted of cytotoxic drugs that lack a tumor-specific mechanism of action. The effectiveness of such nonselective treatments is often limited by their tolerability and toxicity to the patient. Molecularly targeted agents, such as the kinase inhibitors, represent a new era of therapies that aim specifically at tumor cells, making them more tolerable and thus more effective.
In fact, the work by Cancer Research UK suggests that the same science that enables kinase inhibitors to target tumor cells may also have improved the efficiency of their development and approval. Compared to the 18% success rate for all cancer therapies, 47% of kinase inhibitors that entered phase I testing were ultimately approved. A key reason for this difference seems to be that kinase inhibitors make the critical transition from phase II to phase III (where most attrition occurs) 69% of the time. Factors underlying this difference are likely to include the targeted nature of kinase inhibitors and improved design of clinical trials, including biomarker-driven patient stratification.
A number of specific opportunities for improving cancer clinical research were discussed at a September 2008 conference convened by the Brookings Institution’s Engelberg Center for Healthcare Reform, and supported by Friends of Cancer Research, the American Society of Clinical Oncology, and the American Association for Cancer Research. Cancer experts from academia, the National Cancer Institute, the Food and Drug Administration, patient advocacy groups, and the medical products industry discussed practical, consensus-based recommendations to make cancer research more effective and efficient. What emerged was a shared vision for a more quantitative and systematic process for predicting and determining the safety and efficacy of cancer therapies, linked to more comprehensive and consistent evidence from collaborative trials and clinical practice. By combining insights from genomics, systems biology, and other emerging fields with consistently collected empirical data, this approach will enable better prediction of which patients will respond to treatments or combinations of treatments, and faster, more certain conclusions about when there is a response, a lack of response, or an important safety problem.
What’s needed now is a collaborative effort among the global cancer research community - including product developers, researchers, patient advocates, and regulators - to develop better molecular targets and more efficient trial designs for evaluating potential therapies.
New and Better Molecular Targets
Much of the success of targeted drug development rests on high-quality basic science. Biomarkers specific to different cancers must be identified and validated as predictors of cancer progression and treatment response. Such biomarkers become the “targets” that new targeted therapies are engineered to hit. An example of a biomarker is bcr-abl, a specific genetic abnormality found in 95% of people with chronic myelogenous leukemia. Imatinib (sold by Novartis as Gleevec in the US and as Glivec in Europe) was developed specifically to inhibit proliferation of bcr-abl expressing blood cells.
The scientific and policy challenge is to develop the capacity to identify such targets systematically and rapidly. Doing so will require collaboration among stakeholders on research infrastructure and consensus methods. Pooling data from failed trials is one way to assemble the large samples of patient-level information required to identify predictors of disease progression and treatment effects, but such information is not typically shared by the companies that own it. The development of large patient registries that collect detailed clinical and genomic data on cancer patients is another mechanism for creating the needed data infrastructure. Organizations in the US and Europe have called for greater collaboration between industry and academia to improve the availability of data from drug development programs, with the aim of enhancing pre-clinical knowledge of various cancers and enabling the development of more targeted therapies.
This year, the Innovative Medicines Initiative (IMI) announced funding for 15 new research projects aimed at developing innovative medicines more efficiently through public-private partnerships in the EU. One of these projects is focused on biomarkers for predicting cancer development. In the US, the National Cancer Institute continues to develop a collaborative cancer research infrastructure linking investigators and data repositories across the world through its cancer Biomedical Informatics Grid (caBIG). To make full use of such infrastructure, incentives must be created for contributing data and for investigators to collaborate in research.
Next, consensus methodological standards are needed for the validation of biomarkers, and for their “qualification” for use in patient targeting or as surrogate endpoints. Because candidate markers may be most efficiently identified through retrospective analyses of large data repositories like those described above, there are questions about whether such data are sufficient for evaluating a candidate therapy. However, since biomarkers associated with response to a drug are often discovered after the drug is already in use, it could be considered unethical to prospectively randomize patients with and without the marker to a treatment that the evidence suggests will be ineffective for one of those groups. Regulators in the US confronted this issue late last year when they considered whether to change the labels for two colorectal cancer products (panitumumab and cetuximab) based on post-hoc analyses of clinical trials suggesting that a mutation in the K-RAS gene of tumors was strongly predictive of treatment effectiveness. The FDA ultimately advised manufacturers that, under specific conditions, such retrospective data would be considered in regulatory decisions.
More Efficient Trial Designs
Another major bottleneck contributing to inefficient cancer drug development is the time required to conduct typical prospective clinical trials. A new era of cancer therapy innovation could arrive much sooner if consensus were reached on ways to increase the efficiency of trials.
Standards for data collection and submission are a good place to start. Because of uncertainty about the data that will ultimately be required to demonstrate safety and efficacy of a cancer therapy, researchers often exercise an “abundance of comprehensiveness,” collecting copious data on a large number of subjects. Moreover, there is wide variability in how measures are collected and analyzed across clinical development programs. While more data might seem to be better, inconsistent data and additional data that have little or no value in reaching conclusions about safety and effectiveness tend to add to the burdens, costs, and time of clinical studies without clear benefits. Perhaps the most straightforward opportunities for addressing this problem involve data collection for additional indications of products already in use. Standards for streamlined data collection for treatments about which much is already known will make trials faster, less costly, and less burdensome for volunteers.
Auxiliary endpoints (outcomes other than overall survival that may be used to learn about the benefits and risks of cancer therapies) are another way to improve the efficiency of trials. These may include progression-free survival (PFS), patient-reported outcomes, and tumor biomarkers. Auxiliary endpoints should not be studied instead of overall survival, but they offer the potential to learn more about therapies - and faster - than if the only outcome is overall survival. However, as described above for biomarkers more generally, there is a need for consensus on how auxiliary measures should be validated and interpreted to achieve consistency across trials.
The challenges associated with validating and interpreting auxiliary endpoints are clearly illustrated in the ongoing debate about appropriate use of PFS, which has been accepted by regulators as a measure of efficacy for some new cancer drug approvals. Because a progression endpoint is usually reached sooner than a survival endpoint, PFS offers hope of more quickly identifying therapies that are unlikely to demonstrate a clinically meaningful benefit. But while employing PFS may result in shorter trials, PFS rates have not correlated consistently with overall survival. It is unclear whether this is because progression and survival are not always causally related, whether potential biases in the measurement of PFS may be confounding the association, or both. This concern has prompted significant discussion and further research on assuring consistently valid measurement of PFS, in particular, whether and to what extent auditing of tumor progression via blinded independent central review (BICR) is needed.
A third opportunity to improve the efficiency of cancer trials is to develop guidelines for the appropriate use of adaptive and Bayesian clinical trials. In the traditional development process, trials are conducted in sequential phases of progressively greater size and duration. One of the primary drivers of extended development times is the difficulty of recruiting sufficient patients with relatively rare cancers. The premise of adaptive and Bayesian clinical trials is to “adapt” the design of the study based on information accrued during the trial. Examples include stopping, slowing, or expanding enrollment; unbalancing randomization to favor better-performing therapies; dropping or adding treatment arms; and changing the trial population to focus on patient subsets that are responding better to the experimental therapies. Analyses of Bayesian trials use available patient-outcome information, including biomarkers that accumulating data indicate might be related to clinical outcome. They also allow for the use of historical information and for synthesizing the results of relevant trials. Despite their potential appeal, adaptive designs represent a radical departure from traditional clinical trial methods. In a March 2007 report, the European Medicines Agency (EMEA) concluded that although Bayesian methods have a place in “hypothesis-generating early phases” and the “assessment of futility,” they have no role in phase III trials, which should continue to provide “stand alone confirmatory data of efficacy and safety.” There is clearly a need for consensus building among researchers and regulators on the appropriate use of adaptive designs.
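To make the adaptive principle concrete, the following is a minimal, hypothetical sketch of Bayesian response-adaptive randomization for a two-arm trial. All names, the Beta(1, 1) prior, and the illustrative response rates are assumptions for this sketch, not details of any trial discussed above. Each arm's response rate gets a Beta posterior, and Thompson sampling weights enrollment toward the arm that appears to be performing better as data accumulate:

```python
import random

def thompson_allocate(successes, failures):
    """Draw one sample from each arm's Beta posterior (Beta(1,1) prior)
    and assign the next patient to the arm with the larger draw."""
    draws = [random.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return draws.index(max(draws))

def simulate_trial(true_rates, n_patients, seed=0):
    """Simulate an adaptively randomized trial: allocate each patient
    via Thompson sampling, observe a response, update the counts."""
    random.seed(seed)
    k = len(true_rates)
    successes, failures = [0] * k, [0] * k
    for _ in range(n_patients):
        arm = thompson_allocate(successes, failures)
        if random.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

# Illustration: arm 1 truly responds twice as often as arm 0, so
# enrollment tends to drift toward arm 1 as evidence accumulates.
s, f = simulate_trial([0.2, 0.4], n_patients=200)
print("patients per arm:", [si + fi for si, fi in zip(s, f)])
```

This captures only one adaptation (unbalanced randomization); real adaptive designs layer on pre-specified rules for early stopping, dropping arms, and enrichment of responding subsets, which is precisely why regulators ask for those rules to be fixed before the trial begins.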
In summary, the growing global burden of cancer demands a more productive and efficient process for developing new therapies. The relatively low attrition rates of molecularly targeted cancer therapies suggest that efforts to accelerate development must start with a strong foundation of data and consensus methods for identifying biomarkers associated with cancer progression and treatment response. In addition, work is needed to improve the efficiency and consistency of trials in identifying therapies that meaningfully improve outcomes. Delivering on the promise of 21st century cancer care will require a collaborative effort that engages all stakeholders in the cancer community - researchers, manufacturers, regulators, and patients.