Abstract: Research involving human participants continues to grow dramatically, fueled by advances in medical technology, the globalization of research, and financial and professional incentives. This creates increasing opportunities for ethical errors with devastating effects. The typical professional and policy response to calamities involving human participants in research is to layer on more ethical guidelines or strictures. We used a recent case—the Johns Hopkins University/Kennedy Krieger Institute Lead Paint Study—to examine lessons learned since the Tuskegee Syphilis Study about the role of institutionalized science ethics in the protection of human participants in research. We address the role of the institutional review board as the focal point for policy attention.

THE HISTORY OF EFFORTS TO institutionalize ethical health science is one of disaster response. The script changes little from one episode to the next, a situation that is troubling given the increasing layers of regulation and bureaucracy specifically designed to address the problem. Scientists or medical professionals, through callousness, ignorance, or misguided good intentions, perpetrate a human calamity, and in its aftermath professional associations or policymakers develop additional organizational ethical guidelines or strictures. We do not suggest that these ethical code responses are disingenuous. In most instances of science-induced disaster, the vast majority of scientists are as horrified as other outraged citizens. Nor do we feel that the incidence of insensitive and arrogant individuals in the health sciences is any higher than in business, the legal profession, education, the military, or most other fields. But when scientists and medical professionals err, no matter how infrequently and no matter how pure or impure their motives, the results can be devastating, and the effects can resonate far beyond the immediate circumstances.
Thus, for medical scientists there is little or no tolerance for lapses in judgment or for flawed responses to confounding bureaucracy. For example, one of the many consequences of occasional instances of science-induced catastrophe is a decline in societies' and individuals' trust in institutions of science and in practicing scientists. Studies have shown that distrust is one of the reasons that the poor are less likely to take advantage of available medical care and of economically and socially advantageous technologies.1,2 Distrust is also one of many factors in the low proportions of poor and disadvantaged persons seeking careers in science and research.3,4

It is the predictability of this cycle that is especially troubling. We used the term "institutionalized science ethics" to refer to the statutory, professional, and institution-based ethical standards that guide and constrain scientists' research work, and we focused on the primary institution responsible for implementing institutionalized science ethics in medical centers: the institutional review board (IRB).5 We considered how well institutionalized science ethics prevents the catastrophic outcomes in science and medical research that it is designed to avert. We reviewed two examples, one recent and one that occurred before the contemporary regime of institutionalized science ethics, and explored the fundamental question, "Why can't we build a better IRB?" The first case we examined is the familiar Tuskegee Syphilis Study, the landmark exploitative research project that ultimately gave rise to many of the human participant protections now in force, including IRBs. The second, relatively recent, case is the Johns Hopkins University/Kennedy Krieger Institute study (hereafter designated the KKI study).
The latter study, designed to assess cost-effective methods of household lead paint abatement, was terminated by the Maryland Court of Appeals in a decision whose opinion described the study as "a new Tuskegee."6 We believe that comparing the two cases sheds light on some of the prospects for institutionalized science ethics7 and provides food for thought about the adequacy of the organization and operation of IRBs to protect both researchers and the public from research calamities.