Stanford Medicine



Fixing Trial Tribulations

Solutions from Stanford

Four Stanford experts weigh in on some of the more troublesome issues relating to clinical trials and offer their prescriptions for solving them.

Unrepresentative patient samples

Keith Humphreys, PhD, is a professor of psychiatry and behavioral sciences at the School of Medicine and director of the Program Evaluation and Resource Center at the Palo Alto Veterans Affairs Health Care System. Humphreys studies the effectiveness of interventions for substance abuse and psychiatric disorders, and is involved in rebuilding Iraq’s mental-health-care system. In addition, he has published several papers highlighting problems with clinical trials, such as how exclusion criteria might unintentionally prevent women, the elderly and many other patient populations from participating.

"Taking a pill prescribed by a physician is a bigger leap of faith than most U.S. patients realize."
Keith Humphreys, PhD

Humphreys’ Perspective: Taking a pill prescribed by a physician is a bigger leap of faith than most U.S. patients recognize. In many instances, there is scant evidence showing that a specific medication or medical procedure will work for someone of your age, gender or race, or with your particular set of health issues. This happens because most treatment trials have extensive exclusion criteria that allow only a small (as low as 5 percent for some diseases) and unrepresentative subsample of all patients to enroll. However, if the trial results are positive, the health-care system will provide the treatment to a much broader range of people.

For example, many medication trials exclude elderly patients because they tend to have more health problems and take more medications that might interact with the drug being evaluated. But the moment the medication is approved based on a study’s findings, elderly patients across the country will begin receiving it from their physicians and, unlike in the original research study, their outcomes and side effects will not be systematically monitored. The same phenomenon occurs in studies of other types of health-care interventions, including surgical procedures and psychosocial treatments.

One might suspect that most treatment researchers are aware of this problem and take pains not to exclude anyone from outcome studies without good reason. But a 2007 review in The Journal of the American Medical Association showed that most published studies provide little or no justification for why certain types of patients were excluded. My own group’s research has shown that, in mental health research, study enrollment procedures have been getting more — rather than less — restrictive in recent years. Even worse, those who are excluded tend to be from vulnerable populations, including African-Americans, people with serious psychiatric problems and the elderly. These segments of the population end up bearing a much greater risk from receiving new medical treatments than do other patients.

Humphreys’ Solution: Those of us involved in medical education and in evaluating medical research (e.g., NIH grant review committees, journal reviewers and editors) should hold exclusion criteria to the same standard used for all other methodological decisions, meaning that a solid justification must be provided rather than relying on habit or convenience. It might help U.S. researchers who think medical research “has to” study narrow patient samples to know that countries such as the United Kingdom view “treatment research participation for all” as a more prominent consideration.

At a policy level, the federal government should create a “post-proof” mechanism to monitor whether interventions that worked for a select few in clinical trials are as effective and safe for the many who will receive them in the real world. The FDA is too overburdened, slow and underfunded to do this well. An alternative model would be to create contracted evaluation units within well-organized health-care systems that have excellent electronic medical record systems (e.g., Kaiser Permanente). These centers would continuously monitor and report on whether newly approved treatments are proving as effective with a broader range of patients as they did in the original studies, with a particular focus on the populations not enrolled in the “definitive” research underlying the intervention.

IRB costs

Todd Wagner, PhD, is a health economist at the Veterans Affairs Palo Alto Health Care System and a consulting assistant professor of health research and policy at the School of Medicine. His research focuses on consumer health information, cost-effectiveness analyses and financing for institutional review boards.

"Focusing on financing might sound radical, and I acknowledge that it would entail risks."
Todd Wagner, PhD

Wagner’s Perspective: For the past 40 years, institutional review boards have been the primary means of protecting the rights and welfare of clinical trial participants. At research organizations such as Stanford, an IRB is a panel of independent experts who review and monitor clinical trials to ensure the research methods are consistent with sound scientific and ethical principles. In the late 1990s, evidence mounted that IRBs were not doing enough to protect study participants. The year 2001 marked a turning point, after the death of a healthy volunteer in an asthma study. Regulators and institutions severely clamped down on all research activities involving human subjects. Undeniably, these changes brought benefits, including a shift in culture and a reinforcement of the principle that we, as researchers, play an integral role in protecting participants. However, they also added numerous forms and administrative requirements, making the IRB review process far more time-consuming and costly.

Multisite clinical trials, those studies responsible for most of the definitive medical research, were hit the hardest by the regulatory changes. Proving that the shingles vaccine was safe and effective, for instance, required recruiting more than 38,000 people at 22 sites, and each site had to receive its own IRB approval before participating in the trial. Researchers conducting multisite trials have reported long and highly unpredictable delays (up to 18 months) in getting their protocols approved by the various sites, and one of our studies found that almost one-fifth of the total research grant was spent on IRB activities. Some researchers have reacted by using fewer sites, but this raises a number of scientific concerns, such as increasing the likelihood that the patient population will not be broad enough to fully demonstrate the therapy’s possible effects.

Wagner’s Solution: National experts diverge in their opinions on how to fix the system. Some advocate for centralized IRBs, and my own team is evaluating the central IRB used by the National Cancer Institute as well as the one under development for the Department of Veterans Affairs. However, I think the more fundamental question is, “How should we pay for IRBs?” At present, local IRBs receive most — or all — of their funding from their parent organization. When IRB administrators want more money, they have to compete with other departments for funds. Consequently, providing more resources to the IRB could indirectly hurt other areas, such as patient care or lab science. More important, this method of funding will never create incentives that reward the IRB for efficiency or innovation, which I believe are important. Therefore, I would build a system in which the IRB is independent of the parent institution. Institutions could establish contracts based on quality and price with IRBs; in turn, researchers would pay the IRB on contract to review their study protocol — perhaps $2,500 the first time a study is reviewed.

Focusing on financing might seem radical, and I acknowledge that it would entail risks — some IRBs might cut quality to save money. There are ways to mitigate these risks, but we need to acknowledge that IRBs do very important work and to abandon the notion that local IRBs are inexpensive or that they can be effectively run by volunteers. It is time that we stop treating them as a back office and, as former GE chairman Jack Welch would say, turn them into a front office.

Conflicts of interest

Mildred Cho, PhD, is the associate director of the Stanford Center for Biomedical Ethics and an associate professor of pediatrics. In addition to her work on the ethical and social issues surrounding genetic, stem cell and bioweapons research, she has published several papers on how academic-industry ties affect biomedical research.

"The Vioxx example is not an isolated case; there are dozens of cases of corporate manipulation."
Mildred Cho, PhD

Cho’s Perspective: Physicians and patients are just a few of the groups who’ve grown increasingly concerned about conflicts of interest in clinical trials conducted by university researchers.

The primary strategy for easing concerns about this has been to disclose the potential financial conflicts. However, it is becoming clear that this approach is inadequate. First, publishers often make the disclosures difficult to find. For example, an April 2008 study published in The Journal of the American Medical Association by Ross et al. reported that in 43 percent of review articles published about the drug rofecoxib (Vioxx) that disclosed industry sponsorship, the disclosures were published in a different part of the printed journal and weren’t linked to the online version of the studies. More problematic is a previous finding published in Science and Engineering Ethics by Krimsky et al. that 34 percent of articles in the most highly cited biomedical journals had failed to disclose the financial ties of the lead author. A 2008 report by the Office of the Inspector General of the Department of Health and Human Services also found that it could not obtain an accurate count of reported conflicts of interest in grantee institutions.

Even more disturbing is the recent evidence from court documents that publications about clinical trials of Vioxx routinely disclosed the manufacturer’s (Merck) sponsorship, but did not reveal that the lead authors, largely from academia, were sometimes recruited as authors after the trials were designed or conducted, and that the papers were drafted by Merck’s employees or a contracted medical writer. JAMA editors now propose that authors report their specific contributions to a study. But merely disclosing this information along with financial interests ignores the extent to which the role of academic biomedical institutions has been undermined. What if even the small proportion of clinical trials that appear to be performed by academic researchers were actually designed, conducted, analyzed and written by corporate sponsors who did not conduct the research? If academic researchers are not performing these functions, what is their role?

Cho’s Solution: The Vioxx example is not an isolated case; there are dozens of cases of corporate manipulation of how and when results are presented. As a result, JAMA editors have asserted that “drastic action is essential” on the part of all involved in medical research to prevent becoming complicit in the manipulation of clinical trials. Disclosure alone or even limitations on financial interests are insufficient to ensure the integrity of the research. Academic institutions must go beyond disclosure and, at the very least, ensure that their investigators are participating significantly in the research (as implied by authorship of publications, the currency of academia) and have full control of their studies, data and publications. Researchers should not participate in, allow their names to be used as authors of or accept sponsorship for clinical trials in which key components of the study — such as design, analysis or writing — are performed by the sponsor. Researchers and university contract offices should ensure in any agreements that researchers, not sponsors, control and perform all aspects of their studies.

Adopting findings into physician practice

Randall Stafford, MD, PhD, is an associate professor of medicine at the Stanford Prevention Research Center and director of the center’s Program on Prevention Outcomes and Practices. Much of his work is focused on advancing the scientific understanding of the forces that influence physician and patient behavior.

"It is unrealistic for physicians to read all of the relevant clinical trials that are published."
Randall Stafford, MD, PhD

Stafford’s Perspective: Physicians face substantial challenges when it comes to interpreting clinical trial results and implementing those results into their practices. These challenges are magnified by the high expectations about the role of clinical trial evidence in their treatment decisions.

Over the past two decades “evidence-based practice” has become a guiding principle both in the practice of medicine and the training of new physicians. “Evidence-based” means that the therapies recommended by physicians are expected to closely reflect the current state of medical evidence. “Evidence” encompasses a broad range of scientific information that runs from laboratory investigations to the collection of population-level data on disease burden.

The Women’s Health Initiative is a good example of how clinical trial evidence can spark change in physician practice. This clinical trial was prompted by the perception that hormone therapy would reduce heart disease, but it found just the opposite. For more than five years, the women in the study were monitored for many outcomes, including heart disease, strokes, fractures, blood clots and cancer. Overall, they experienced more harm than benefit from hormone therapy. As a result of the findings published in 2002, hormone therapy use has now fallen to 50 percent of its 2001 level and continues to decline.

The WHI example, though, is a rarity. My own research has shown that physicians are often slow to change their practices in response to clinical trial evidence. For instance, diuretics for high blood pressure have not been widely adopted despite clinical trial evidence that this class of medications is at least as effective as other classes and is much less expensive.

And clinical trials aren’t perfect. For instance, studies can include too few participants, follow participants for too short a time period, inadequately measure outcomes or be subject to a wide range of biases, including some that favor corporate sponsors. Even the best-designed studies might not be relevant to the types of patients physicians see in their offices. A constant question that physicians face when interpreting clinical trial results is whether the patients studied are comparable to the patients they encounter on a daily basis.

Stafford’s Solution: It is unrealistic for physicians to read all of the relevant clinical trials that are published. Primary care physicians, in particular, often need to rely on other sources of information to help them interpret the results of clinical trials. Selecting the right sources of summary information is crucial. I believe that physicians should avoid information sources with a direct financial interest in their treatment decisions, such as marketing materials from pharmaceutical companies. The best sources are often evidence-based reviews or guidelines compiled by professional organizations or in conjunction with the federal government. Additionally, physicians should monitor how well their practices match up with published evidence and with those of their colleagues. In the future, electronic medical records might provide important help in tracking physician practice patterns.






©2008 Stanford University  |  Terms of Use  |  About Us