
PATIENTS CHARTING THE COURSE
dinary people’s contributions to public health. During screenings before test audiences, the ads were lauded for their “humanity and emotional appeal.” Viewers declared, “These people are heroes in their own way,” “They’ve done something great for all of us,” and “I see the benefit of clinical research to society.”
This “Medical Heroes” public service campaign was market tested by Eli Lilly and Company in 30 sites across 18 U.S. markets. In the first wave of the test, 12 control markets ran their typical recruitment ads, while 6 comparable markets ran concurrent “Medical Heroes” ads alongside recruitment ads; the latter markets showed a 38 percent increase in patient recruitment rates relative to the control group. When the test was repeated, response rates to recruitment ads more than doubled in the markets where the “Medical Heroes” campaign ran. The campaign thus met its ultimate goal of increasing public awareness of research participation and improving perceptions of clinical research volunteers.
Another Center for Information and Study on Clinical Research Participation initiative is a grassroots education and outreach process known as “AWARE for All-Clinical Research Education.” AWARE programs, held in major cities across the United States, bring together disease advocacy groups, hospitals and healthcare organizations, educational institutions, and community organizations to provide AWARE’s message directly to their constituents. In addition, distinguished local politicians and opinion leaders, physicians, healthcare providers, and clinical research professionals serve as keynote speakers and workshop leaders—all volunteering their time to help educate the public.
To date, more than 300,000 people have been impacted by the program. AWARE has put a human face on the people who volunteer for clinical trials while building public understanding of the risks and benefits of participating. The initiative is creating a movement at the local level, and there is a need to bring this form of outreach to many more communities. When asked whether they were more or less likely to participate in a clinical trial after attending AWARE, 75 percent of attendees responded “more likely.”
An additional example of an outreach initiative is post-trial communication with research volunteers. The Center for Information and Study on Clinical Research Participation and Pfizer collaborated to test a new process for routinely communicating clinical trial results to study volunteers after their participation has ended. Between June and December 2009, trial results for Celebrex®/Celecoxib and Sutent®/Sunitinib were translated into lay language by a team of consumer, science, and medical writers and published in print, web, and audio formats. These summaries were then tested
CLINICAL RESEARCH, PATIENT CARE, AND LEARNING
in focus groups among volunteers who had participated in the studies. The pilot study results demonstrate that a process for preparing and disseminating summaries of trial results to patients following their participation in clinical trials is feasible. Moreover, patients reacted very positively to the variety of formats and showed marked improvement in their comprehension of their clinical trial findings. Study personnel are also very receptive to disseminating summaries of trial results to their volunteers.
A final example of a public education and outreach initiative is the development of a traveling exhibit for science museums. Still in the preliminary planning stages, such an exhibit would provide inquiry-based, multimedia learning experiences focused on the how-to and importance of health research as presented by practicing scientists. It would use an innovative mix of video storytelling and digital support technologies to show people how real-world scientists conduct their research, tracing a continuum from basic to translational science to the clinical trials that produce new treatments and solutions. This type of exhibit would highlight what it means to participate in a clinical trial and the impact of participation on science and drug discovery.
Conclusions
Despite low levels of trust and confidence today, there is no evidence to suggest that the public will abandon the clinical research enterprise outright. A foundation of general public support exists on which to rebuild public confidence and trust through education and outreach initiatives. Such initiatives need to focus on improving public awareness and appreciation of the study volunteer and the value of clinical research to the public health; repairing the credibility of research sponsors, study staff, and regulatory and human subject protection professionals; and engaging the public as partners in the development of new medical and health advances. Given how far public support has fallen, however, there is no time to waste in repairing and rebuilding trust and confidence.
To enhance the culture of patient contributions to learning in health care, a portfolio of strategic initiatives is needed, as shown in Figure 2-5. If general education about and awareness of the clinical research process are enhanced and if patients are enabled to participate because of the support network and tools provided to help them become active participants in clinical trials, recruitment and retention in trials will improve. With this solid foundation, volunteers will become a community of participants and ultimately the ambassadors of a process that advances medical science and improves the public health.

[Figure 2-5 graphic: ENHANCE (General Education and Awareness) and ENABLE (Support Network and Tools) lead to RECRUIT AND RETAIN, which leads to SUSTAIN (Community of Participants and Ambassadors).]
FIGURE 2-5 Model for enhancing the culture of patient contributions to learning in health care. A portfolio of strategic initiatives is needed that enhances the public’s general education about and awareness of the clinical research process and enables patients to become active participants in clinical trials. This foundation will lead to improvements in patient recruitment and retention and ultimately to the formation of a community of research volunteers.
SOURCE: Center for Information and Study on Clinical Research Participation model—permission for use in this publication authorized by Diane Simmons, President and CEO.
3
Clinical Data as a Public Good for Discovery
INTRODUCTION
Clinical data have immense potential to drive progress in health care by providing the means to measure and track care processes and outcomes, to develop and refine best practices, and to enable rapid discovery and innovation. Over the past decade, the amount of data captured in electronic form has increased exponentially. Given the federal government’s encouragement of the adoption of electronic health records (EHRs), these data will increasingly be augmented by clinically rich course-of-care data. With the potential to be easily stored, aggregated, and shared, these data can enable rapid learning and continuous improvement in the efficiency and effectiveness of care practices (Blumenthal, 2010; IOM, 2007). Harnessing the power of these data, however, will require efforts to address barriers to their access and use.
The papers in this chapter explore the potential of clinical data to improve research and health care, strategies to enhance access to health data and information, and key challenges to ensure data integrity (e.g., privacy, security, and proprietary concerns). Additionally, this chapter discusses opportunities to better inform and engage patients and the public as advocates.
The first paper, by Farzad Mostashari of the Office of the National Coordinator for Health Information Technology (ONC), argues that as the nation works to develop a unified health information technology (HIT) infrastructure, efforts will be needed to identify a limited set of core data that can meet the basic needs of multiple functions. As ONC continues its efforts to encourage and support data capture and use, it is considering how
to create data requirements that are relevant and not burdensome, how to reward patients and providers for the creation and documentation of structured data, and the merits of distributed versus centralized approaches to information exchange.
Todd Park of the Department of Health and Human Services (HHS) highlights the significant amount of data currently held by the various agencies of HHS and how these data could improve the value, science base, and patient experience of health care. He describes HHS’s efforts to open access to high-value data sets and to encourage public participation in the use of these data for socially beneficial purposes.
The quality and accuracy of research results depend on the availability and integrity of data. Don Detmer of the University of Virginia discusses opportunities to increase the quantity and quality of data for health care and research. To achieve this end, he proposes several options for national policy that address security and privacy concerns while empowering citizens to allow their health data to be used for learning and discovery.
INFORMATION NEEDS FOR A LEARNING HEALTH SYSTEM
Farzad Mostashari, M.D., Sc.M.
Office of the National Coordinator for Health Information Technology
The mission and goal of the Office of the National Coordinator for Health Information Technology is to improve health and health care for all Americans through the appropriate use of HIT. ONC is therefore focused on characterizing key system needs and outcomes in addition to determining how technology can be a means to that end.
Meaningful Use and a Learning Healthcare System
“Meaningful use” is the term coined by Congress to connect technology to desired outcomes, and ONC’s proposed regulations represent our best guess as to how technology should be used to achieve these outcomes. If providers—eligible professionals or hospitals—use HIT systems in a meaningful way, they will be able to qualify for payments from the Centers for Medicare & Medicaid Services (CMS).
It is our goal, however, that meaningful use be applied for more than just qualifying for payments. Rather, it should be used to ensure that measurable improvements are made in health and in the quality, safety, and efficiency of health care. A secondary goal is to move beyond improving care for an individual patient at the point of care to creating a learning healthcare system. This is an outcome to which ONC aspires and a worthy endpoint toward which to build.
The next decade will witness a fundamental transformation of the
healthcare delivery system, including changes that in many ways will be more profound than those that transpired during the previous decades of American medicine. HIT will certainly help providers take better care of patients, but there is also the prospect that, in the aggregate, electronic health systems will contribute to a learning healthcare system and enable providers to understand and influence healthcare improvement—whether in public health, quality improvement, drug discovery, or clinical effectiveness research. The HIT infrastructure of the future should be not just the eyes and ears but also the action arm of population health.
Key Considerations for Developing a National Unified HIT Infrastructure
A number of projects now under way focus on developing HIT infrastructure for many different functions, including clinical effectiveness research, drug discovery, quality measurement, and public health surveillance. Because these projects are taking different approaches and building different architectures, it is tempting to call for a halt to work on these siloed activities. Indeed, if investment in HIT continues to create and support a multitude of data islands, the nation will not achieve a unified HIT infrastructure.
This is the critical challenge faced by ONC in its work to implement a national system, to define meaningful use, and to develop grant programs: How can we work to develop a common, national HIT infrastructure while not creating additional network and system silos? Because projects now under way will not be put on hold, this is a critical time to bring key stakeholders and HHS together to begin a discussion on laying the groundwork for a unified HIT infrastructure.
Clinical Data Needs and the 80/20 Rule
One approach to clarifying key data needs for a learning health system is to agree on a core data set—sufficient, if not perfect, for a number of information needs. Known as the 80/20 rule, this approach advocates starting with something simple that can be achieved now and developing clever approaches over time. The American Recovery and Reinvestment Act (ARRA) presented a significant opportunity for HIT in this respect, as it gave patients a right to their records in electronic format. Even if only a small fraction of the patients in any given system choose to exercise this right, every healthcare provider and EHR vendor must produce a patient summary document in a common format. This could be a key opportunity to use the clinical care summary—which includes the medications list, the problem list, the allergies, the lab values, and the patient encounters—as
the foundational 80 percent, the common data core on which to build a unified HIT infrastructure.
As data users have a wide variety of needs and will require additional information based on these needs, some users are likely to believe that starting with the clinical care summary is inadequate. For quality reporting, for example, detailed information with which to calculate exclusions may be needed; for public health case reporting, information on whether a certain infection was central line associated may be required; for drug safety, the first date of prescribing of the medication may be needed. Although this may be true in each instance, the nation will never achieve a unified HIT infrastructure if the starting point is to identify a data set that is perfect for each user. An alternative approach is to leverage the 80/20 rule and supplement it with targeted additional data collection where needed.
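As a rough illustration of the "80 percent" core described above, the clinical care summary could be modeled as a simple data structure holding the five elements named in the text. This is a sketch only; the field and class names are invented for illustration and are not drawn from ONC regulations or any HL7 standard.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical model of the core clinical care summary: medications,
# problems, allergies, lab values, and encounters. All names here are
# illustrative, not part of any real specification.

@dataclass
class LabValue:
    test_name: str
    value: float
    unit: str

@dataclass
class Encounter:
    date: str    # ISO 8601 date string, e.g., "2010-03-15"
    reason: str

@dataclass
class ClinicalCareSummary:
    patient_id: str
    medications: List[str] = field(default_factory=list)
    problems: List[str] = field(default_factory=list)
    allergies: List[str] = field(default_factory=list)
    lab_values: List[LabValue] = field(default_factory=list)
    encounters: List[Encounter] = field(default_factory=list)

# A downstream user (quality reporting, public health, research) would
# start from this common core and request targeted extras only as needed.
summary = ClinicalCareSummary(
    patient_id="anon-0001",
    medications=["lisinopril"],
    problems=["hypertension"],
    allergies=["penicillin"],
    lab_values=[LabValue("HbA1c", 6.8, "%")],
    encounters=[Encounter("2010-03-15", "routine follow-up")],
)
print(len(summary.medications))  # → 1
```

The point of the sketch is that each specialized user supplements this shared core rather than defining its own complete data set.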
Creating Infrastructure for Targeted Data Collection
The example of clinical trials recruitment at a storefront doctor’s office illustrates this approach. One cannot expect a provider to use an EHR in the routine delivery of care while also collecting all the information needed to determine patient eligibility for a multitude of clinical trials. As one example, an individual’s occupation may be a data element necessary to determine eligibility for a clinical trial. Although those with an interest in this information may think that requiring it as a data element should be elementary, a provider’s front-office staff is not going to determine and select each patient’s occupation from among the hundreds of potential Census Bureau categories as a routine part of delivering care.
A more feasible approach would be to collect a limited amount of information in the routine delivery of care, which could also serve as an opportunity to screen for the need for additional data collection. Instead of asking each provider to record each patient’s occupation and expecting these data to be collected in a structured form, a system could be developed to trigger deeper data collection when appropriate (i.e., manual rather than routine data collection on a small subset of patients). If a patient is diagnosed with hepatitis A, for example, it is entirely appropriate to prompt the clinician to inquire about the patient’s occupation—specifically, whether the patient is a healthcare worker, daycare worker, or food worker. This type of approach makes sense to providers, as the information requested is relevant to the person’s care and is limited in scope. For clinical trials, minimal information can also serve as an initial screen, with follow-up questions about trial participation asked only as appropriate.
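The triggered-collection logic just described can be sketched in a few lines. The trigger table and prompt wording below are invented for illustration and are not a real public health case definition.

```python
# Hypothetical trigger table: diagnoses that warrant deeper, manual
# data collection about a patient's occupation. The mapping is
# illustrative only.
OCCUPATION_TRIGGERS = {
    "hepatitis A": ["healthcare worker", "daycare worker", "food worker"],
}

def occupation_prompt(diagnosis: str):
    """Return a follow-up prompt for the clinician if the diagnosis
    triggers targeted occupation screening; otherwise return None."""
    categories = OCCUPATION_TRIGGERS.get(diagnosis)
    if categories is None:
        return None  # routine care: no extra data collection
    return ("Ask whether the patient is a "
            + ", ".join(categories[:-1])
            + ", or " + categories[-1] + ".")

print(occupation_prompt("hepatitis A"))
print(occupation_prompt("hypertension"))  # None: no trigger fires
```

In this model the EHR collects nothing extra in routine care; the deeper question is asked only for the small subset of patients for whom it is clinically relevant.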
This example clearly illustrates that despite the broad data needs of a learning health system, the nation should start with something simple—perhaps the humble clinical care summary. Additional work could expand
on this data core to create infrastructure that supports additional and targeted data collection only as needed.
Adding Value for Providers and Patients
To ensure data quality, it is also important to consider the value proposition for data capture and use. The provider is doing work that involves both the care of the patient and the creation and documentation of the structured data needed for other purposes. A business case needs to be made to ensure that the latter work is done well. For recruitment of clinical trial participants, for example, it would be beneficial for all if recruiting appropriate patients for trials were not just the right thing to do but also a source of income for the primary care provider. This notion extends to the patient as well, as the owner of the data: perhaps patients should also benefit financially from the use of their health information.
Choosing the Best Approach for Critical Infrastructure
ONC continues to identify the key principles that should underlie the creation of a national HIT infrastructure and develop supporting policies and incentives. A particularly important principle to consider as we work toward achieving our goals is to use a distributed rather than centralized infrastructure. Although a centralized infrastructure works well for some purposes, it is not particularly effective for supporting the participation of individual physicians in advancing a wide range of population health missions.
A centralized infrastructure requires separation of the data producer from the data, centralization of the data, and central analysis. In today’s fragmented and heterogeneous system, this approach typically does not yield the answers people expect in a timely way. The costs are always higher, the points of failure are always greater, the system is more fragile, and the quality and cost of cleaning the data (once separated from the source) become prohibitive. My view is that a decentralized system whereby the questions go out to the data and the answers come back is a much more resilient, feasible, cost-effective, and privacy-protective approach.
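A toy sketch of the distributed model described above, in which a standardized question goes out to each site and only aggregate answers come back, so the data never leave the data producer. The site records and query format are invented for illustration.

```python
# Each site holds its own records; only aggregate counts leave the site.
site_a = [{"dx": "hepatitis A"}, {"dx": "hypertension"}]
site_b = [{"dx": "hepatitis A"}, {"dx": "hepatitis A"}]

def local_count(records, diagnosis):
    """Run the standardized query locally at one site;
    return only an aggregate count, never patient-level data."""
    return sum(1 for r in records if r["dx"] == diagnosis)

def distributed_query(sites, diagnosis):
    """Send the question out to the data; combine the answers centrally."""
    return sum(local_count(records, diagnosis) for records in sites)

total = distributed_query([site_a, site_b], "hepatitis A")
print(total)  # → 3
```

Because each site answers the same standardized question against its own data, there is no need to centralize, clean, and re-analyze a merged data set, which is the source of the cost and fragility described above.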
Part of ONC’s mission is to think about how such distributed queries can be expressed in a standardized way. Again, the 80/20 rule can be applied to these issues—whether to develop a standardized data model across spheres of activity or a standardized way of expressing the question or receiving responses. Formulating a strategy across fields, whether public health surveillance, quality measurement, or effectiveness research, will require simple case definitions tailored to the needs of the field—perhaps