Part 1 of this paper is written under the assumption that any decision concerning the optimal organizational site within the U.S. government for the oversight of human subjects research by the Office for Protection from Research Risks (OPRR) should be informed and guided by the historical origins of OPRR; the legislative mandate under which OPRR currently operates; Department of Health and Human Services (DHHS)/Food and Drug Administration (FDA) regulations for the protection of human subjects; compliance issues and regulatory experience; and the Common Rule and OPRR's interface with regulatory activities of other federal departments and agencies. Treatment of these issues will constitute the background sections of Part 1 of the paper. The final portions of Part 1 will provide findings and recommendations.
Part 2 of the paper addresses similar organizational considerations pertaining to OPRR's responsibilities for assuring the humane care and use of laboratory animals. The organizational location of that responsibility will be considered in the light of the historical background of oversight responsibilities for humane care and use of laboratory animals, with special emphasis on two major noncompliance cases; OPRR's legislative mandate regarding laboratory animals; OPRR's relationship to the U.S. Department of Agriculture (USDA) and the Animal Welfare Act; and animal welfare compliance issues and experiences. As will be seen, the nature of OPRR's responsibility for laboratory animals, although superficially similar to its responsibilities for human subjects protections, is substantively different from them. The oversight functions pertaining to the care and use of laboratory animals strongly suggest that they be separated from oversight of human research subjects and placed in a different organizational context. Optimal organizational location of the responsibility for laboratory animals will be discussed in the final portion of Part 2 of this paper under findings and recommendations. Appendix I of the paper will comment on recommendations raised by John C. Fletcher, Ph.D.
The author of this paper served as Director, OPRR, for 14 years, from 1978 until 1992. Prior to 1978, he collaborated for 8 years with OPRR (and its predecessor office, the Institutional Relations Branch (IRB) of the Division of Research Grants (DRG), National Institutes of Health (NIH)). Consequently, virtually all of his 23 years as a federal employee were spent in dealing with policies, issues, and organizational questions related to the protection of human research subjects and the humane care and use of laboratory animals. Much of the information found in the paper is publicly documented. However, some of the information is derived from the memory of the author. To a considerable extent, this paper manifests his reflections on a public career devoted, in large measure, to providing protections for the rights and well-being of research subjects and promoting the humane care and use of laboratory animals. Reference is made to some of the individuals who made decisions that affected OPRR. No effort has been made to evaluate all the reasons why those decisions were made or to evaluate the overall performance of these individuals. Some of their decisions, in the author's opinion, had negative consequences for OPRR, but no criticism of their overall performance is intended or implied.
OPRR came into existence officially in 1972. However, it had existed in another form since 1964. To understand the relationship of OPRR to the institutions that are subject to the regulations administered by OPRR, it is useful to look at the functions of the office that was the predecessor to OPRR - the IRB of the DRG/NIH.
In the final year of World War II, the NIH annual research budget was less than $80 million. Even by the standards of U.S. government agencies in that period of history, NIH was a small agency. After World War II and throughout the next two decades, NIH budgets increased dramatically. For most of the decade of the 1950s, a major portion of the NIH budget was consumed by its intramural clinical research program. The NIH Clinical Center (subsequently named the Warren Grant Magnuson Clinical Center) opened its doors to research subjects in 1953. At that time it was a 500-bed, state-of-the-art research facility.1
In its early days, the NIH Clinical Center housed the largest and most respected clinical research program in the world. From the time it opened its doors in 1953, the Clinical Center operated under a policy for the protection of normal volunteers involved in research.2 Normal volunteers were recruited for many studies in order to establish baseline data against which to measure data pertaining to disease or to serve as normal controls in clinical trials. Whenever normal volunteers were to be involved in research, the Clinical Center policy required prior review and approval of proposed research designs by a disinterested committee of scientists called a Clinical Research Committee (CRC). The policy required that informed consent be obtained from normal volunteer subjects each time they were invited to serve as subjects of research.
The Clinical Center policy also called for CRC review of research that involved unusual hazards, but few research projects were identified as involving such hazards. For practical purposes, the policy affected only normal volunteers.
Potential research subjects whose disease or condition was under study were referred to the Clinical Center by their personal physicians. Typically such patient/subjects had already exhausted standard treatments for their disease or condition. In many cases their best prospects lay in research. They came to the Clinical Center in hopes of finding in research a cure or amelioration of their disease or condition not found in the standard practice of medicine. These patient/subjects saw little, if any, difference between innovative therapy by a physician (ministrations exceeding the boundaries of the standard practice of medicine, administered with the intent of providing a therapeutic benefit to the patient) and research (a systematic study designed to produce generalizable knowledge about disease or underlying biological functions, primarily intended for the benefit of society). Patient/subjects also came because the Clinical Center enjoyed the reputation of providing better quality care than most hospitals at no financial cost to the patient/subject.
Research investigators at the NIH usually regarded persons referred to NIH by their physicians as patients, rather than research subjects. Research was commonly referred to as "treatment" or "patient therapy."3 Given that environment, it is not surprising that the NIH had no policy of protections for patient/subjects involved in research. The amount of information given to these patients was left to the discretion of research investigators who were viewed, and who viewed themselves, primarily as physicians.
In 1966 Dr. Jack Masur, Director of the Clinical Center, appointed a committee headed by Dr. Nathaniel Berlin to update the Clinical Center policy. Masur was responding, in part, to the U.S. Public Health Service (PHS) policy issued in February of 1966 by Surgeon General Stewart. Although technically not bound by the PHS policy, the revised Clinical Center policy adopted some, but by no means all, of the provisions of the PHS policy. CRCs were created in the clinical units of each categorical Institute within the NIH that conducted intramural research.4 Controversial research projects could be referred to the CRC of the Clinical Center Medical Board (the governing body of the Clinical Center). Patient consent was required only to the extent that the investigator was expected to make a note in each patient's chart that verbal consent had been obtained.
Following World War II, the NIH annual budget increased substantially each year until 1969. After a brief hiatus in 1970-1971, it has continued to grow steadily until the present time. The budget expanded from $80 million in 1944 to more than $1 billion in 1969. Currently the budget has climbed to nearly $14 billion, and the prospects for further growth seem almost limitless. By the year 1964, the expansion of the NIH intramural research program had slowed, but extramural research - that is, research funded by the NIH but conducted in institutions throughout the U.S. and in many other countries - continued to grow at an impressive rate. DRG conducted, on behalf of the Institutes that comprised the NIH, peer review for scientific merit of research proposals submitted to the NIH by institutions outside NIH.
Biomedical research funds are, legally speaking, awarded to research institutions, not to the principal investigators (PIs) who conduct research. Some of these awards raised technical or ethical problems not governed by general policies. They required special attention. A process gradually developed within DRG for handling problems not covered by general policy. Such matters as, for example, research cost overruns, ownership of research equipment when a PI moved from one institution to another, or the provision of supplementary funds for promising research, were handled on a case-by-case basis. The IRB/DRG/NIH was created to deal with and settle such problems on an ad hoc basis. From the outset, IRB/DRG/NIH dealt with extramural research institutions by means of negotiation. Its decisions took into account not only the interests of the taxpayers and the policies of the NIH but the organizational structure, traditions, and policies of the research institutions where the research was conducted. The talents and preferences of the investigators and the rights and welfare of research subjects were also considered, wherever appropriate.
In this way, the IRB/DRG/NIH had already begun to provide some protections for human research subjects before the publication of the first policy for the protection of human subjects. From the time of its inception, negotiation characterized and comprised most of the work of the IRB/DRG/NIH office.
Prior to 1966, the NIH intramural research program lacked a comprehensive policy for the protection of human research subjects, and the NIH extramural research program provided no protections of any kind for research subjects. The events that brought into existence the extramural Policy for the Protection of Human Subjects in 1966 are already well documented elsewhere. They are treated here only in summary fashion.5 The 1966 PHS policy pertaining to subjects of extramural research was occasioned by findings of serious abuses of the rights and well-being of research subjects involved in biomedical research. Hearings conducted by Senator Estes Kefauver in 1958-1959 demonstrated that most drugs were tested on patients who were unaware that they were research subjects.6 The dramatic televised account of the thalidomide tragedy that culminated in the birth of hundreds of deformed infants in Europe and Canada focused public attention on the regulation of investigational drugs. Other widely publicized abuses followed: the experimental transplantation of a sheep's heart into a cardiac patient without independent review and without informed consent;7 whole-body radiation experiments in Ohio and their cover-up by Senator Robert Taft;8 the introduction of live cancer cells into elderly, indigent charity patients without their consent by investigators at the Sloan-Kettering Cancer Foundation and Jewish Chronic Diseases Hospital;9 and the Willowbrook study involving deliberate introduction of hepatitis into severely retarded children.10 These events made NIH officials aware that if research was to continue to enjoy public confidence and if it was to continue to be funded with public dollars, then a policy for the protection of research subjects must be conceived and implemented.11 After several years of deliberation on the part of NIH officials, Dr. James Shannon, Director, NIH, recommended that Surgeon General Stewart issue a comprehensive policy for the protection of human subjects on behalf of the U.S. PHS - the health agencies within the Department of Health, Education, and Welfare (DHEW) - of which NIH is the largest.
On February 8, 1966, Surgeon General Stewart issued Policy and Procedure Order 129,12 the first comprehensive extramural federal policy for the protection of human subjects. Responsibility for implementing the policy was assigned to the IRB/DRG/NIH. That tiny office undertook the task of implementing the policy in a manner consistent with the way it had always done business - that is to say, it negotiated assurances of compliance with the PHS policy with each of the awardee institutions.
The assurance negotiations enabled each institution to create its own internal policy for the protection of human subjects that both complied with the very general terms of the PHS policy and allowed the institution to develop compliance mechanisms and policies consistent with the organizational structure, traditions, and preferences of the institution. The negotiations also enabled federal staff to explain to institutional officials why the requirements for prior review and approval by an institutional committee (later designated an Institutional Review Board) and for eliciting informed consent from subjects were included in the policy. The process also enabled the NIH, acting through the IRB/DRG/NIH, to teach institutions that their obligation to respect the rights and welfare of human subjects is or should be as important as their obligation to conduct sound scientific studies.
From the outset, the IRB/DRG/NIH, unlike most federal regulatory agents, used education as the primary tool of promoting compliance with the new policy. Although that office had authority to withhold awarded funds from an institution found to be noncompliant with the policy, it never actually used that power (though it sometimes threatened to do so).
For more than ten years after the Policy for the Protection of Human Subjects was issued in 1966, the only sanction imposed on any research institution was the discontinuance of the Tuskegee Syphilis Study (housed at that time in the Centers for Disease Control and Prevention (CDC), one of the PHS agencies). That action was taken by the Assistant Secretary for Health outside of ordinary channels of policy oversight.
No doubt the IRB/DRG/NIH is open to criticism for relying solely on education, persuasion, negotiation, and occasional threats to bring about compliance with the 1966 policy. Nevertheless, IRB/DRG/NIH can be applauded for recognizing that biomedical research institutions and investigators subject to the policy are, by profession, dedicated to improving the quality of life of fellow human beings. As a consequence, with rare exceptions, researchers are eager to respect the rights and welfare of research subjects. The IRB/DRG/NIH believed that the best, most efficient, and least costly method of promoting compliance with the policy was to raise the consciousness of investigators and administrators concerning their moral obligations to research subjects. The policy required minimally acceptable ethical standards. Assurance negotiations and education promoted a higher level of compliance than that literally required by the policy. This view has governed compliance efforts from the inception of the policy. It accounts, in part, for the fact that most institutions voluntarily apply federal standards to all research conducted in their institutions, not just to research that is funded by the federal government. Education and persuasion were then and remain today the most effective tools of policy implementation.
The February 1966 PHS Policy for the Protection of Human Subjects underwent minor revisions in the summer of 1966, and it was further clarified in 1967 and 1969. The 1969 clarification made it clear that the policy extended to behavioral and social science research as well as to biomedical research.
In 1971 the policy was extended to all research studies involving human subjects conducted or supported by any agency within the DHEW.13 Consistent with the educational approach described above, the DHEW policy - called the Yellow Book because of the color of the pamphlet in which it was published - set forth policy requirements that included: 1) institutional assurances of compliance; 2) risk-benefit analysis; 3) review by committee; and 4) subjects' informed consent. Of greater importance, it included a running commentary, in a column parallel to the policy requirements, presenting reasons why these requirements were necessary to safeguard the rights and welfare of human research subjects. The commentary, written primarily by Donald S. Chalkley, Ph.D., Director, IRB/DRG/NIH, came to be regarded as a classic defense of subjects' rights and well-being.
In 1972 the news media published accounts of the infamous Tuskegee Syphilis Study conducted by PHS scientists in which approximately 400 syphilitic African-American males were systematically denied treatment for their illness over a period of more than three decades. Details of that tragic and scandalous study are published elsewhere.14 One of the consequences of the Tuskegee episode was a speech delivered at the University of Virginia by Robert Q. Marston, Director, NIH, calling for additional protections for vulnerable research subjects.15 Following that speech in 1972, Marston upgraded the IRB/DRG/NIH. He changed the name from the Institutional Relations Branch to OPRR and incorporated it into the Office of the Director, NIH. He increased OPRR staff and ordered it to report to Dr. Ronald Lamont-Havers, Associate Director for Extramural Research. OPRR Director Donald S. Chalkley was subsequently promoted to the Senior Executive Service. The fact that OPRR reported to the Associate Director for Extramural Research, who was ultimately responsible for all research awards, placed OPRR in a position of potential conflict with its own supervisor. So long as Dr. Lamont-Havers served in that position, the system worked well. As will be seen, conflict arose some four years later.
Dr. Marston also created a task force under the direction of Dr. Lamont-Havers to consider how best to implement the recommendations outlined in his speech at the University of Virginia. The task force was expanded to include representatives of all of the PHS agencies. It incorporated into itself a committee chaired by Dr. Charles Lowe of the National Institute of Child Health and Human Development that was already addressing the ethical questions of fetal research. The task force was organized into subcommittees that developed position papers dealing with research involving human fetuses, research involving children, research involving prisoners, and research involving physically, mentally, and socially handicapped persons.
These position papers, in various stages of completion, were eventually submitted to the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (National Commission or Commission). The Commission incorporated much of the work of the task force into its final reports.
NIH was not the only component of the DHEW that responded to the Tuskegee Study. The Assistant Secretary for Health, Dr. Monty DuVal, created an investigative task force, chaired by Professor Jay Katz of Yale University, to review the Tuskegee Study and to make recommendations for action. The study was terminated within a matter of days. The U.S. Congress, particularly the Senate Health Subcommittee, chaired by newly elected Senator Edward Kennedy (D. MA), held a series of hearings that continued periodically for more than two years. The Senate hearings were among the earliest congressional hearings to be televised. Coming as they did, after the civil rights debates of the 1960s, the hearings evoked public criticism of injustices toward African Americans. As a consequence of the television coverage and the resulting widespread public knowledge of abuses carried out under the Tuskegee Study, the hearings had a substantial impact.
The Kennedy hearings touched on many health issues besides the Tuskegee Study and the rights of human subjects, but they dealt primarily with research ethics and the regulation of research involving human subjects. One of the topics Senator Kennedy scheduled for hearings concerned research involving whole-body radiation conducted on military veterans in Cincinnati. However, Senator Robert Taft (R. OH) accused Kennedy of meddling in the affairs of the State of Ohio. The powerful senior Senator succeeded in quashing the hearings. Nevertheless, Kennedy was able to amend the appropriations of the Department of Defense (which at that time included the Department of Veterans Affairs) to require informed consent for all research conducted by that department.16 Hearings similar to those conducted in the Senate were held in the House of Representatives by the House Health Subcommittee chaired by Representative Paul Rogers (D. FL). Numerous bills and amendments to pending bills were introduced in both the Senate and the House of Representatives. Virtually all of the proposed bills called for promulgation of regulations for the protection of human subjects. However, the proposed legislation in the House of Representatives manifested a very different approach to the regulation of research than did the Senate bills.
Until it became apparent that issuance of regulations was inevitable, NIH had steadfastly opposed the issuance of regulations for the protection of human subjects. Donald Fredrickson, Scientific Director of the National Heart, Lung, and Blood Institute (subsequently the Director, NIH) was fond of repeating in staff meetings, "NIH is not a regulatory agency." By this he meant that, in his judgment, the fewer administrative encumbrances that scientists faced, the better would be their scientific production. Although his view of the utility of regulations changed after he became the NIH director,17 he always referred to the regulations for the protection of human subjects as "the policy." The view that regulations could stifle research was shared by most intramural scientists of the time.
The Senate bill introduced by Senator Kennedy called for creation of a permanent federal regulatory commission for the protection of human subjects that would be patterned after the federal Securities and Exchange Commission that regulates each transaction that takes place in the U.S. stock market. The proposed commission was to be a separate regulatory agency with broad investigative powers. It could bring criminal charges against those who violated its regulations, and it could assess punitive damages on persons and institutions that failed to protect research subjects. It would have authority to regulate research involving human subjects funded by the federal government and research conducted in the private sector, including research funded by foundations, pharmaceutical companies, medical device manufacturers, and private individuals.
The House bill sponsored by Mr. Rogers borrowed concepts from S.J. Res. 75 introduced by Senator Walter Mondale (D. MN). It called for the creation of a National (Advisory) Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to make recommendations to the Secretary, DHEW, concerning the protection of human subjects, particularly vulnerable subjects such as prisoners, children, fetuses, and the cognitively impaired. Much of its mandate derived from the Marston speech at the University of Virginia.
Senator Kennedy made it known to DHEW that if the department were to issue regulations for the protection of human subjects, he would support the House bill proposed by Mr. Rogers. The department, which had steadfastly opposed the issuance of regulations up until that time, quickly formed a drafting committee to produce regulations that would, it was hoped, enlist the support of Senator Kennedy for the Rogers bill.
The PHS Drafting Committee was given only a few weeks to produce a new set of regulations. The committee, inexperienced in writing regulations and pressed for time, elected to transform into regulatory form the provisions in the 1971 Policy for the Protection of Human Subjects (Yellow Book) issued by DHEW. However, the resulting regulations lacked the commentary found in the Yellow Book that instructed Institutional Review Boards on how to interpret the rules. Because of the time pressure imposed by Senator Kennedy, the customary DHEW clearance points for the issuance of regulations were either bypassed or given extremely brief deadlines. The result was a set of flawed regulations that did not extend to intramural research, that lacked requirements for recordkeeping, and that allowed broad exceptions to requirements for informed consent. On May 30, 1974, DHEW promulgated Regulations for the Protection of Human Subjects, at Title 45 Part 46 of the Code of Federal Regulations. Although the new regulations were little different in content from the DHEW Yellow Book, and although they lacked the educational commentary of the Yellow Book, they enjoyed the force of law. Senator Kennedy expressed himself as satisfied that DHEW was serious about protecting human subjects, and he agreed to back the Rogers bill.
Soon after Senator Kennedy lent his support to the Rogers bill, it was passed by both houses of Congress and enacted as PL 93-348, the National Research Act, signed into law on July 12, 1974. Title II of that act created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. By the time the National Commission completed its work in 1978, it had issued 17 major reports that included approximately 125 recommendations to the Secretary, DHEW. Many similar recommendations had been submitted to the Commission by the PHS Task Force and were supported by Donald S. Chalkley, Director, OPRR, and by Dr. Ronald Lamont-Havers.
One of the reasons that the National Commission exercised such a profound effect on regulations for the protection of human research subjects was the so-called forcing clause in the act that required the Secretary, DHEW, to accept the Commission's recommendations or publish in the Federal Register reasons for not accepting them. Rather than go on record as opposing an ethics commission that had studied the issue for four years, DHEW Secretaries (Matthews, Califano, and Harris) accepted all of the Commission's recommendations.
Among the many provisions in the National Research Act was a section that amended the Public Health Service Act. That section has now been updated and is currently incorporated in Sec. 491 of the Health Research Extension Act of 1985. The law requires the Secretary to issue regulations requiring Institutional Review Board review and approval of all research involving human subjects (including intramural research) prior to funding (Sec. 491(a)). Additionally, however, Sec. 491(b) requires that "The Secretary shall establish a program within the Department of Health and Human Services under which requests for clarification and guidance with respect to ethical issues raised in connection with biomedical or behavioral research involving human subjects are responded to promptly and appropriately."
That section is incorporated in the act because Mr. Rogers, its primary sponsor, developed information in hearings before his subcommittee that supported the contention that the PHS policy, in existence since 1966, had been successful in part because of the educational efforts of the IRB (subsequently OPRR). The legislative history makes it abundantly clear that the law intends the department, through the OPRR, to promote a sound understanding of the ethics of research in all institutions that receive DHEW funding.
Section 491(c) of the act calls for prompt action to enforce the regulations. It is interesting to note that the wording presumes that instances of noncompliance will be reported to DHEW. While this has not always been the case, educational efforts have very often emboldened whistleblowers to identify noncompliance with the regulations.
The author knows of no other federal regulatory mandate that includes a requirement for a program of guidance and education to accompany its regulatory effort. Beginning in 1978, OPRR subsidized a series of regional education programs for the protection of human research subjects. They were conducted in every segment of the country. Costs to participants were nominal. The growing number of Ph.D.-level ethicists from universities across the country provided willing faculty leadership. In turn, the program provided visibility for these promising young scholars and high-quality content for the educational programs. Coupled with intensive bioethics programs at the Kennedy Institute of Ethics at Georgetown University and efforts of a rapidly maturing community of bioethics scholars in America, the program enjoyed enormous success. One measure of its success was the number of telephone calls that poured into OPRR seeking guidance on difficult or controversial ethical issues.
At one point in the mid-1980s, the number of incoming calls to professional staff in OPRR, largely from PIs and Institutional Review Board chairpersons, approached 200 per day during the academic year. The negotiation of Assurances of Compliance continued to be a means by which research institutions were periodically required to review and update their internal policies and procedures for the protection of human subjects. The negotiation associated with the assurance process continues to have some educational value for research administrators who are expected to issue policies for their institutions and who are held personally responsible for the rights and well-being of research subjects in their institutions.
Nevertheless, in the opinion of this author, the process of negotiating assurances of compliance has become routinized. Institutions tend to reissue their policies with little reflection and little upgrading, and OPRR no longer travels to each institution in an effort to blend federal laws and regulations with institutional traditions and history. The transactions now take place via mail, telephone, and electronic communication. Thus the assurances of compliance have become a heavy administrative burden for OPRR. Worse, the assurance process has lost much of its original educational purpose. It needs to be replaced with a simple certification and with intensified educational efforts that take a new form.
Although OPRR's regional educational programs have continued to the present time, the federal subsidy began to shrink in the Reagan administration, and it largely disappeared in the Bush administration. It has not been restored by the Clinton administration, even though that administration has put more public effort and money into uncovering radiation research injustices that occurred in the years before regulations existed than into protecting subjects today. Institutions are now required to underwrite the educational efforts initiated by OPRR, which lacks funds to fully support the program. The number of programs has dwindled to about four regional programs per year.
In 1978 Dr. Lamont-Havers was upgraded to Deputy Director and was replaced by Dr. Thomas Malone. Dr. Malone continued to give the same level of support for OPRR and for the protection of research subjects begun four years earlier by Dr. Marston. Dr. Malone headed the search committee that selected Dr. McCarthy to succeed Dr. Chalkley, who retired in 1978.
When Dr. Malone was appointed Deputy Director, NIH, he continued to ask OPRR to report to him. However, when Dr. Malone was replaced by Dr. William Raub as Deputy Director, Raub ordered OPRR to report to the new Associate Director for Extramural Research, Dr. Kathryn Bick.18 The legal advisor to the PHS advised Dr. Raub at the time that to return to the previous arrangement in which OPRR reported directly to the Deputy Director for Extramural Research was to risk a conflict of interest. The reasoning of the Office of General Counsel was clear. Since OPRR was to exercise oversight authority over research projects that bore the stamp of approval of its immediate supervisor - the Deputy Director for Extramural Research - OPRR was placed in a position where it might have to overrule or criticize actions taken by its boss.
Dr. Bick had previously served as Deputy Director of the Neurology Institute (NINDS), which funded several animal studies that were discontinued by OPRR for noncompliance with the PHS Policy on Humane Care and Use of Laboratory Animals.19 The Neurology Institute had been severely criticized in the public media for funding these studies.
Shortly after Dr. Bick was named Deputy Director for Extramural Research, she froze personnel hiring in OPRR, cut its travel budget, and dramatically reduced its education budget. Her deputy was Dr. George Galasso, who succeeded her as Acting Deputy Director for Extramural Research. Dr. Galasso continued Dr. Bick's policies of constraint of OPRR.
Dr. Bick also initiated a policy that required institutions that are subject to the regulations to underwrite the educational efforts initiated by OPRR. Consequently, the OPRR educational effort was overshadowed by the appearance of conflict of interest.
OPRR, a regulatory office, was forced, because it lacked the funds to fulfill its own legislative mandate, to invite regulated institutions to subsidize its programs of education. Hosting such a program can cost the regulated institution upwards of $10,000, yet refusing to host one is perceived as risking the displeasure of a regulatory office with the power to interdict research monies flowing from the government to the awardee institution.
The Deputy Director for Extramural Research is the line supervisor of the Director, OPRR. Turning a deaf ear to OPRR's appeals to the contrary, Dr. Bick ordered OPRR to carry out its educational mandate by asking regulated institutions to provide funds for its programs. Even though OPRR's intentions were benign, the appearance of coercion was present. Dr. Bick also prohibited OPRR personnel from participating in programs operated by two Boston-based nonprofit organizations, Public Responsibility in Medicine and Research (PRIM&R) and the Applied Research Ethics National Association (ARENA). PRIM&R has grown into a national organization whose national meetings on the ethical and regulatory aspects of research involving human subjects are attended by more than 700 people. ARENA members are mostly Institutional Review Board administrators, members, and staff who exchange practical information on efficient methods for protecting human research subjects in institutions throughout the country. PRIM&R and ARENA address issues of interest not only to institutions whose research is funded by federal agencies but also to institutions regulated by the FDA.
The policy of requiring awardee institutions to subsidize education programs was initiated by Dr. Bick and continued by her successors, Dr. Galasso (acting) and Dr. Diggs. The potential conflict of interest has cast a shadow of suspicion on the educational efforts of OPRR, an office whose success demands impartiality and whose legislative mandate requires educational outreach. This situation should be changed.
As part of its educational outreach, OPRR has worked closely with PRIM&R and ARENA. Educational efforts in the private sector, particularly those of PRIM&R and ARENA, have partially compensated for the decline of OPRR-sponsored programs. Nevertheless, because the OPRR programs are regional, low cost, and official, they reach persons who do not attend the national meetings of PRIM&R and ARENA.
When OPRR educational programs were flourishing in the early and mid-1980s, the number of noncompliance cases reported to OPRR dwindled. Conversely, as OPRR educational programs have declined, the number of noncompliance cases has risen dramatically. (The number of backlogged cases was said by an OPRR official to be about 150 about a year ago.) Although a direct correlation between preventive educational efforts and a reduction in cases of alleged noncompliance cannot be demonstrated, it is reasonable to hypothesize that improved education efforts reduce noncompliance. Education efforts are far less costly than compliance investigations. Therefore, in the opinion of this author, the decrease in educational funding has contributed to an increase in compliance costs.
Only about half of the cases of alleged noncompliance actually demonstrate noncompliance. Only a small fraction of those cases where noncompliance is demonstrated involve direct physical harms to subjects, but all noncompliance involves an erosion of the rights of subjects. Education therefore prevents both harms to the welfare of subjects and damage to their rights.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research completed its tasks and was disbanded in 1978. Responsibility for implementing the Commission's recommendations was delegated by the Secretary, DHEW, to the Director, OPRR. OPRR organized a Human Subjects Regulations Drafting Committee that included representatives of all of the relevant agencies within the DHEW, including the Office of the Secretary. Mr. Richard Riseberg, Office of General Counsel, played a key role on the committee. The committee scrapped the 1974 version of Regulations for the Protection of Human Subjects and rewrote them in the light of 1) the recommendations made by the National Commission; 2) public comments on the Commission's reports and on proposed rulemaking; and 3) public hearings on proposed rulemaking.
A major step forward occurred when FDA, with encouragement from the Secretary's office and leadership from Dr. John Petricciani of FDA's Bioresearch Monitoring Program, allowed the Drafting Committee to redraft the FDA regulations for Clinical Investigations and Informed Consent (21 CFR 50 & 56) so that they reflected the recommendations of the National Commission and were, in nearly all respects, congruent with the DHHS regulations. The DHHS regulations differ from those of the FDA in three ways: 1) DHHS regulations allow a waiver of informed consent under certain limited circumstances, whereas FDA regulations allow no such waiver;20 2) FDA regulations do not require regulated institutions to negotiate assurances of compliance prior to IRB review and approval of research involving human subjects, whereas DHHS regulations do require negotiation of assurances (thus placing FDA in the position of having to approve IRBs after they complete their work, rather than before); and 3) FDA regulations require inclusion of a statement in all consent documents informing subjects that FDA personnel may review their records. In all other respects, the DHHS regulations that pertain to federally funded research and the FDA regulations that apply to research carried out in the private sector are identical.
Both the DHHS Regulations for the Protection of Human Subjects and the FDA regulations for Clinical Investigations and Informed Consent were signed by DHHS Secretary Harris on January 19, 1981, one day before the Reagan administration replaced the Carter administration.
The regulatory significance of the melding of the FDA and DHHS regulations is difficult to overstate; it has had a salutary effect on research ethics that far exceeds that of the Common Rule. Hundreds of institutions that had previously been required to follow two sets of regulations could now follow a single set of rules and operate under a single internal policy for the protection of human subjects. This made it both feasible and attractive to extend the same protections to all human research subjects, irrespective of the source of funding.
Furthermore, it was now practical for FDA to join OPRR in educational efforts. Joint OPRR/FDA educational programs could now reach out, not only to universities and clinics that conduct federally supported research, but to research foundations, pharmaceutical houses, device manufacturers, small businesses, and research data banks. Finally, the DHHS/FDA congruent regulations allow the FDA and OPRR to share compliance information and to cooperate in investigations of alleged noncompliance.
Because the FDA budget for education programs was virtually nil, the issuance of congruent regulations and the resulting partnership in education placed further strains on the education budget of OPRR. Nevertheless, the partnership has proved to be a valuable and workable, if financially strapped, arrangement.
An unknown fraction of research activities involving human subjects remains unregulated. Research studies are covered by neither DHHS nor FDA regulations if they are conducted by private sector institutions that 1) enjoy no federal support and 2) involve no drugs, biologics, or medical devices subject to FDA jurisdiction. Failure to regulate such research constitutes a double standard and sends a message that the government has less concern for subjects of research conducted by unconventional sources than it does for other subjects.21 The publication of the DHHS/FDA congruent regulations, updated in the light of the findings and recommendations of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, propelled OPRR into the role of lead agency within the federal government for the protection of human research subjects.
Because responsibility for implementing FDA regulations is spread across the three major FDA Centers (Drugs, Biologics, and Medical Devices), there is no central office within the FDA that exercises direct line authority over research protections, although Dr. Stuart Nightingale has exercised strong leadership in this area for many years. Because FDA has no central authority within its organization, it cannot exercise leadership across other federal agencies. That leadership has been centered in OPRR and has been taken for granted by most federal agencies since the close of the National Commission's deliberations in 1978. Nevertheless, OPRR has never been given the legal authority, personnel, prominence, or funding to play that role properly. The role has always been an add-on responsibility for which no personnel or funding have been provided.
One of the many advances accomplished by the promulgation of the new Regulations for the Protection of Human Subjects (45 CFR 46) in 1981 was a clarification of the responsibilities of institutions and research investigators. Because of ambiguities and lacunae in the 1974 version of the regulations, particularly with respect to reporting and record-keeping requirements, it was often difficult between 1974 and 1981 to demonstrate whether a given research activity fell within or outside the regulations. Without adequate records, it was often impossible to develop clear findings of noncompliance, and consequently it was difficult to evaluate allegations of noncompliance and to impose sanctions on institutions or investigators who were noncompliant. These shortcomings were corrected in the 1981 version of the rules.
One of the difficulties faced by OPRR was the unwillingness of the NIH intramural program to comply with the 1981 version of the regulations for the protection of human subjects. Although the Clinical Center was technically out of compliance during the period 1974-1981, it followed a policy very similar to the DHEW policy. When the DHEW regulations were updated in 1981, OPRR was informed by the Director of the Clinical Center that the intramural program would not negotiate an assurance of compliance with the new regulations but would continue to follow its own internal rules. The Director, OPRR, turned to the Director, NIH, for backing, but was bluntly told to leave the Clinical Center alone. Clearly this was an abuse of authority and an open conflict of interest.
Nevertheless, the Director, OPRR, notified the Director of the Clinical Center, Dr. Mortimer Lipsett, that he would inform the public media that all of the Clinical Center studies, including a National Institute of Mental Health sleep study in which a subject unexpectedly died, were being conducted out of compliance with federal rules. Within 24 hours, the Clinical Center initiated the process of negotiating the required assurance. As it happened, the death of the subject in the sleep study was not caused by the research but by an unreported health condition of the subject herself. Negligence in screening subjects (the young woman who died had a condition that would have excluded her from the study) and negligence in using and monitoring faulty equipment contributed to her death.
With the assistance of the DHHS Secretary's Office of General Counsel, OPRR used the 1981 version of the regulations, coupled with the assurances of compliance signed by senior executives in the research institutions, to determine in most cases whether research was conducted in accordance with the rules. For example, the 1981 regulations required records of all research protocols, records of all decisions made by IRBs, and retention of informed consent documents. These requirements simplified compliance evaluations. No longer could an institution plead that records were unavailable to determine whether a violation had occurred, because the lack of careful record keeping was itself a violation of the regulations. As it turned out, careful record keeping exonerated many studies in which violations were alleged.
The education program of OPRR stressed that violation of the rights of subjects would not be tolerated and that whistle blowers would be protected so far as possible. (No whistle blower was ever publicly identified during the years 1981-1992, except in cases where the whistle blower chose to identify himself/herself.) Furthermore, OPRR taught administrators that if they identified noncompliance in their own institutions and notified OPRR, they would be allowed to correct the situation without automatically triggering a federal investigation. Of course, a full report of the institution's findings and corrective action(s) would be forwarded to OPRR for review. Follow-up reports were also periodically required. If OPRR found that the investigation had been thorough and the institution's corrective action had been appropriate, the case was closed. For many institutions that meant that adverse publicity about the institution was avoided. The system worked remarkably well.
In complex cases, institutions often invited OPRR to join with the institution in carrying out an investigation. This cooperation was fostered by the education programs that made it clear that OPRR and institutions both had a stake in assuring compliance with the regulations.23 A few examples may be illustrative:
The case proceeded as follows: Dr. Gallo developed material in his laboratory that stimulated immune responses in laboratory animals. He forwarded the material to a colleague, Dr. Zagury, in Paris, France. Dr. Zagury modified the material and injected it into terminally ill patient/subjects with advanced AIDS. Evaluation by French officials showed that the treatment hastened the death of several of Zagury's patient/subjects. Dr. Zagury also used the material to develop an AIDS vaccine that was injected into a number of citizens of Zaire. Some of Dr. Zagury's Zairian laboratory workers, and Dr. Zagury himself, were also injected with the material. A brief summary of this research project was published in the journal Nature, with Drs. Zagury and Gallo identified as primary and secondary investigators. Alert NIH employees called the article to the attention of OPRR, indicating that the preparation had been clearly labeled for use in laboratory animals only. OPRR investigated and found the facts to be as described above. Dr. Gallo did not deny the facts, but he pleaded that the regulations did not apply to him because he was a bench scientist who had no direct contact with human subjects. Nevertheless, Dr. Gallo was severely reprimanded for collaborating with a clinician in research involving human subjects that was conducted in violation of the regulations. As a result of this case, records of all shipments out of NIH intramural laboratories are now monitored. The French government administered sanctions to Dr. Zagury. Dr. Zagury was also declared ineligible to compete for future NIH awards.
While the Gallo/Zagury investigation was under way, the newly appointed Director, NIH, Dr. Bernadine Healy, sent a strongly worded memorandum to the Director, OPRR, directing him to give her a full accounting of the status of the Gallo investigation. She sent a similar memorandum to the Office of Research Integrity (ORI), which was examining French claims that Dr. Gallo had stolen credit for the discovery of HIV from French scientists.
The Director, OPRR, responded to Dr. Healy by memorandum stating that briefing her could appear to be a conflict of interest because the investigation concerned alleged misconduct by one of her most prestigious employees. The Director, OPRR, politely declined to provide the briefing. The Director of ORI gave Dr. Healy the requested briefing. Subsequently Dr. Healy was severely criticized in a congressional hearing by Rep. John Dingell (D-MI) for interfering with the investigation carried out by ORI.
In the meantime, Mr. Dingell, Chairman of the House Energy and Commerce Subcommittee on Oversight and Investigations, directed his own investigative staff to interrogate OPRR on the status of its investigation of Dr. Gallo. OPRR provided congressional staff only with information that had already appeared in the public media, although the Office of General Counsel had advised OPRR to surrender all of the relevant information; the legal implications of denying investigative material to a congressional oversight committee were not clear. Mr. Dingell chose not to make an issue of OPRR's failure to provide him with investigative information, but he publicly criticized OPRR for the slowness of its investigation. His own staff began a parallel investigation. The Dingell staff traveled to Paris but were rebuffed by the French government. The French, on the other hand, prompted by interventions from the U.S. Department of State and the NIH Fogarty International Center, provided information to OPRR about Dr. Zagury through their Health Attaché in the French Embassy. Under pressure from the French government, Zagury, accompanied by his assistants, traveled at his own expense to NIH and provided significant information. Because of political turmoil and violence in Zaire and tensions between Zaire and the U.S. government at that time, complete records from that country were impossible to obtain. Nevertheless, OPRR was able to get enough information to complete its report, take action, and close the case.
In an exit interview several years after the Gallo/Zagury case, Dr. Healy acknowledged that she regarded OPRRs failure to brief her as an act of defiance that infuriated her. Only after she was criticized by Mr. Dingell for interfering with the ORI investigation did she come to believe that OPRRs action was in the public interest.
These cases illustrate different kinds of situations that can face OPRR. The Cline case required an astute whistle blower to bring it to the attention of OPRR. No amount of oversight would have enabled OPRR to uncover secret noncompliant activity by a U.S. investigator in Italy and in Israel. It was necessary for a well-informed scientist to recognize the situation and to report it. Once reported, it was necessary for OPRR to have access to competent scientists to evaluate the protocol as proposed and as actually conducted. This case teaches us that OPRR must not only have persons competent in clinical research on its staff, but it must have the ability to consult with experts in order to base regulatory decisions on a clear understanding of the evidence, including the scientific evidence. At the present time, OPRR has no permanent physician with clinical research background on its staff. It relies on two part-time volunteers for assistance in this area.
The Gale case also required a whistle blower. In this case the whistle blower was an alert nurse. No oversight of the situation would have uncovered the fraud without help from inside the institution. OPRR's limited resources were, in this case, greatly enhanced by the full cooperation of the UCLA administration and a disinterested faculty committee determined to learn what actually happened and to take appropriate steps.
The Straus case illustrates how difficult it is for OPRR to function without the assistance of the regulated institution. Future regulations may need to address the obligation of institutions to assist the government in evaluating compliance. Straus was extremely clever. Even with invaluable assistance from the NIH Division of Management Survey and Review (an office usually involved in investigations of fiscal mismanagement or fraud), his case cost OPRR hundreds of hours of precious staff time.
The Gallo/Zagury case illustrates the fact that at times OPRR must have high political backing. The case was resolved only because the Department of State and the NIH Fogarty International Center had relationships of trust with the Health Attaché in the French Embassy. On the surface, neither the Director, NIH, nor Congressman Dingell and his staff actually did anything wrong. Yet OPRR felt that signals as to how the case should be adjudicated were being given by powerful political forces: the Director, NIH, to whom OPRR must turn for personnel, budget, and cooperation, and a powerful chairman of a congressional investigative committee. Part of the challenge of finding the proper organizational locus for OPRR is to give OPRR the political backing it needs to withstand pressure from highly placed leaders in the Congress or other agencies in the executive branch, including the White House itself. OPRR would not survive very long if it were a separate agency. OPRR must also be protected against interference by its own supervisors.
In December 1981, the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research recommended the following: 1) all federal agencies should adopt the regulations of DHHS (45 CFR 46); 2) the Secretary, DHHS, should establish an office to coordinate and monitor government-wide implementation of the regulations; and 3) each federal agency should apply one set of rules consistently to research conducted or supported by the federal government.26 The Secretary, DHHS, through the Assistant Secretary for Health, designated OPRR as the lead office to develop a common set of regulations across the government. However, OPRR was dealing with reduced budgets and severe downsizing restrictions. Requests for personnel and salaries to carry out the task were quickly denied by NIH, which refused to forward the requests to the Office of the Secretary. Since most offices within the department were facing downsizing, and the Office of Management and Budget (OMB) placed each department and agency under personnel and budget ceilings, it is unlikely that OPRR's request would have been approved even if it had gone forward to the Secretary.
OPRR approached each agency in the federal system with a request for compliance with the recommendation of the President's Commission. Most agencies sent an employee to the organizational meeting, but they delivered messages that stated - in effect - that they had no locus for human subjects protections, that they had no budget for such protections, and that they too were downsizing and operating under an Office of Management and Budget directive that no office or function could be added in a federal agency unless an equivalent function was discontinued.
Nevertheless, OPRR was able to obtain some backing from the OMB on grounds that what was being proposed was a simplification of regulatory structure. With prodding from OMB and nagging from OPRR (which had no authority to require action), the agencies finally agreed to review DHHS regulations. The response was disheartening. Each agency agreed to promulgate DHHS regulations, so long as it was able to add clauses of exception or additional protections to the DHHS rules. Literally dozens of exceptions were proposed. Had action been taken at that point, there would have been no Common Rule. The Department of Education (DOE), for example, agreed to follow the Common Rule on condition that it could add an additional subsection dealing with protection for the rights and welfare of handicapped persons. The Department of Agriculture and the Environmental Protection Agency sought exceptions for pesticide research and food testing research. The Department of Justice sought an exception for research conducted in federal prisons. On the other hand, OMB said that no variation of any kind from DHHS rules would be allowed.
OPRR was able to persuade most agencies to drop their requests for modification, but DOE was adamant. Neither DHHS nor any other agency would accept the DOE proposals, and DOE refused to drop its demands. A DOE political appointee, Madeleine Will, who enjoyed the friendship of the President, refused to yield. OMB would not proceed without DOE. After nearly three years of standoff, there was a change in personnel at DOE, and progress toward producing a Common Rule began again. The turnover in leadership that marked the change from the Reagan administration to the Bush administration returned the project to its starting point: no one in the Bush administration felt obligated to honor commitments made during the Reagan administration. OPRR had no authority to force the issue, but it turned for assistance to the Office of Science and Technology Policy (OSTP), headed by the President's Science Advisor.
Armed with support from both OSTP and OMB, where a change in personnel had reduced rigidity, the drafters made progress. However, the legal advisor to the President refused to approve the final draft because, in his opinion, the requirement that each IRB include both men and women constituted a quota, and the Bush administration was on record as opposing all quotas and considered them to be illegal. OPRR then turned to the DHHS Office of General Counsel for assistance. After several months and many meetings, a rewording of the IRB membership clause won approval from the White House. Armed with the new wording and strong support from OSTP, OPRR once again initiated a clearance process in each of the affected departments and agencies. Finally, on June 18, 1991, 16 departments and agencies simultaneously published the Common Rule.
Given the difficulty of getting so many departments and agencies to agree on the rule, serious questions arise concerning any further changes to it. Unless the process is altered, the rule is fixed in perpetuity.
Note that the findings and recommendations below relate not only to the optimal organizational locus of OPRR but also to 1) OPRR's relations to other federal components with ethics responsibilities; 2) staffing; 3) OPRR's responsibilities; and 4) OPRR's functions. The author believes these items cannot be separated.
In the light of all that has been said above, the following recommendations are offered:
In 1963, NIH contracted with the Institute of Laboratory Animal Research27 (ILAR) of the National Academy of Sciences to prepare guidance for awardee institutions concerning the care, housing, and husbandry that should be provided for vertebrate animals involved in research.
NIH had three motivations in issuing its contract to ILAR: a) recognition of a moral obligation to house and care for living, sentient nonhuman animals involved in research in a humane and respectful manner; b) recognition that obtaining reliable scientific results based on research involving animals requires that research animals be maintained in a contented and healthy state; and c) recognition that public support of research involving animal subjects is contingent upon the animals being treated in a humane manner.
ILAR produced the first edition of the Guide for the Care and Use of Laboratory Animals in 1963. This edition was so titled because it emphasized the housing and care that should be provided for laboratory animals. The Guide was updated in 1965, 1968, 1972, 1978, and 1985. The most recent version of the Guide was published in 1996. Although the current version of the Guide provides more information than previous editions concerning the care and housing of laboratory animals, much of the new information included in the Guide deals with so-called performance or outcome standards for treating laboratory animals.
Each edition of the Guide published after 1966 includes recommendations that meet or exceed the standards set forth in the Animal Welfare Act, passed in 1966 and amended in 1970, 1976, and 1985.28 ILAR has attempted to include in the Guide the best information available, from both research studies and hands-on experience, concerning the care and use of laboratory animals.
The Guide for the Care and Use of Laboratory Animals29 has been translated into many languages, and it is recognized throughout the world as providing an excellent foundation on which to erect a laboratory animal care and use program.30 The PHS Policy on Humane Care and Use of Laboratory Animals, issued in 1979, required institutions that receive research awards from any of the PHS agencies to provide assurances to OPRR's Division of Animal Welfare (DAW) that the institution would comply with the recommendations set forth in the Guide. Prior to 1979, awardee institutions were encouraged to follow the Guide, but Assurances of Compliance were not required, and little more than a token effort to require compliance was made.
From 1963 until 1979, the primary influence exerted by OPRR on awardee institutions came by way of education and persuasion of staff veterinarians in the institutions. OPRR encouraged the hiring of Diplomates of the American College of Laboratory Animal Medicine (veterinarians with advanced training and experience who are recognized as experts) to direct programs in the awardee institutions. Furthermore, it encouraged, but did not require, institutions to seek accreditation from AAALAC.31 The 1979 PHS policy was inadequate in many ways. Assurances provided little detail beyond a statement that the institution intended to comply with the recommendations in the Guide, and they did not make clear which senior institutional official would be held responsible for compliance with the policy. (Because no institutional official was designated, compliance was often left to the discretion of department heads or laboratory chiefs; thus, within the same institution, the quality of care for animals often ranged from very poor to excellent.) Furthermore, assurances did not require prior review and approval of protocols, and they required minimal record keeping. As a consequence, although the 1979 assurances probably contributed in a small way to the improvement of the care and use of animals, their impact was small.
It was apparent that the quality of the animal programs in most institutions depended primarily on the institutional veterinarians and their staffs. If the veterinarians were well trained, given adequate resources, and were allowed to exercise authority over the housing, care, and use of the animals, the programs were usually compliant and strong. On the other hand, if institutional veterinarians lacked training, resources, or administrative support, their programs were usually weak.
Many veterinarians complained that they were cast in the role of research cops, with obligations, stemming from their veterinary oath rather than from the PHS policy and the Animal Welfare Act, to see that animals were properly cared for and humanely used in research. Unfortunately, in many cases, veterinarians lacked the authority to insist that research investigators use animals properly. In a typical research institution, there was tension rather than cooperation between research investigators who used animals for their research and veterinarians who recognized an obligation to care for animals and to see that their use in research involved as little pain or distress as possible. In virtually all of the older institutions and many newer ones there was no central vivarium; animals were housed in locations convenient for research investigators. Typically either department heads or individual research investigators were responsible for the animals involved in their research, and in most cases such persons were not trained to care for the animals. Staff veterinarians were available for consultation, but investigators were often loath to consult them because correction of any problem identified was charged against the award money assigned to the researcher.
In the period from 1979, when the PHS policy was revised, until 1981, OPRR was preoccupied with responding to the recommendations of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. OPRR staff energy was devoted primarily to efforts to incorporate the Commission's recommendations into the DHEW Regulations for the Protection of Human Subjects.
The sole veterinarian on the OPRR staff retired, and hiring freezes prevented the recruitment of a replacement. OPRR's program for animal welfare was maintained but not improved during this period. After the Regulations for the Protection of Human Subjects were promulgated in 1981, OPRR began to devote more resources and effort to improving its oversight of awardee programs involving laboratory animals.
As soon as OPRR focused renewed attention on enforcement of the 1979 policy, the policy's shortcomings began to come to light. The policy required Animal Welfare Assurances of Compliance to be negotiated by awardee institutions, but was unclear as to the level of detail required in an assurance document. Consequently, assurance documents were often brief and vague. The policy required animals to be maintained in a manner consistent with the recommendations in the Guide, but failed to require either a plan for accomplishing that goal or an evaluation of whether the goal was achieved. The policy did not require prior review and approval of protocols by an Institutional Animal Care and Use Committee (IACUC). For that reason, some studies involved more animals than necessary to obtain sound scientific data. Others failed to use a sufficient number of animals to achieve reliable scientific results. Inhumane procedures were sometimes carried out in the name of science. The policy was virtually useless in preventing these abuses. It required little recordkeeping, and it made no provision for voluntary reporting of problems associated with the care or use of laboratory animals. OPRR recognized early on that the policy was not only seriously flawed but also extraordinarily difficult to enforce.
In 1982 OPRR began to gather information necessary to revise and upgrade the PHS policy. Until then, the policy had been backed by the authority of the Assistant Secretary for Health, who made compliance with the policy a condition of receiving an award to carry out research involving laboratory animals. Issuance of the policy was not required by law, and Congress paid little attention to laboratory animals and the policies that governed their care and use.
In the late summer of 1982, Mr. Alex Pacheco, then a student at George Washington University and a leader in a newly formed organization called People for the Ethical Treatment of Animals (PETA), took a summer job in the Silver Spring, Maryland, laboratory of Dr. Edward Taub. While Dr. Taub was away from his laboratory on vacation, Mr. Pacheco arranged to have several veterinarians visit the laboratory, which housed approximately 15 deafferented primates (the motor and sensory nerves of one arm of each animal had been severed). Dr. Taub was studying regeneration of damaged nerves. Mr. Pacheco took a series of colored photographs of the laboratory and the condition of the animals. Then he arranged for a state police raid on the facility under the Prevention of Cruelty to Animals law of the state of Maryland. The visiting veterinarians, the colored photographs, and the police report all indicated that the animals were housed in a filthy, fetid environment that constituted cruelty to the animals.
Dr. Taub claimed that his laboratory was clean and well run when he left on vacation. He claimed that Mr. Pacheco had trashed the laboratory, failed to clean cages, neglected the animals, and then filed false reports of animal cruelty. Mr. Pacheco, for his part, claimed that he had merely documented the deplorable state of the laboratory and the condition of the animals. Initially, the matter was handled in the courts of the state of Maryland. Dr. Taub was convicted on six counts of animal cruelty, but a court of appeals set aside the conviction on the grounds that, since the laboratory was subject to the PHS policy, the issue was a federal matter. The court remanded custody of the animals to NIH. OPRR was directed to investigate.
OPRR was never able to determine with a high level of confidence whether Dr. Taub had operated an abominable laboratory, whether Mr. Pacheco had trashed a well-run laboratory in Taub's absence, or whether neglect by Taub and trashing by Pacheco had combined to create a dreadful situation.
Taub claimed that he had been set up by PETA in such a way that he appeared to be in serious noncompliance with the PHS policy. Some of the facts in the case made such a defense plausible. The prosecuting attorney for the state of Maryland subsequently took an administrative position with PETA. Furthermore, the state temporarily housed the animals, in violation of a number of city ordinances, in the basement of a Rockville house owned by Ingrid Newkirk, president of PETA, and the animals were stolen from the Newkirk residence, only to be returned with no questions asked. These facts provided some circumstantial evidence to support Dr. Taub's contention that PETA had indeed set him up.
Dr. Taub acknowledged to OPRR that his records were intact. The records showed that the animals had not received routine veterinary care for a period of years. Because the animals were deafferented, they required more specialized care than most other primates. The absence of veterinary care for a period of years constituted a serious violation of the PHS policy. Taub's defense that he personally had provided care for the animals was considered inadequate.
Dr. Taub's grant was suspended until such time as his laboratory could be brought into compliance and he could demonstrate that he met all the standards set forth in the Guide. Taub appealed the decision but lost. Taub's laboratory was never restored, and the animals remained, by court order, in the custody of NIH (despite a series of lawsuits brought by PETA) for many years until all had died or been euthanized. Custody suits brought by PETA were taken all the way to the Supreme Court, which affirmed the lower courts' rulings that PETA had no legal standing on which to base its claim to custody of the animals. The case of the Silver Spring Monkeys, as it was called in the media, lasted approximately ten years.
In 1983 another case made national headlines. A group that identified itself as the Animal Liberation Front (ALF) broke into the University of Pennsylvania Head Injury Clinic in Philadelphia. Equipment was smashed and files were scattered. Most important, approximately 60 hours of audio/videotapes were stolen. The tapes had been used as a tool by research investigators to capture visual images of the research animals; data concerning heartbeat, blood pressure, and brain wave activity; and the investigators' verbal observations concerning the animals involved in the research study of head injuries.
The protocol called for sedated baboons to be injured in a machine that simulated the whiplash motion that often inflicts damage to the neck and spine of humans involved in rear-end auto crashes. The nature of the injuries to the animals was to be studied, and the animals' unassisted recovery from injury was to be compared with the recovery of animals that received a variety of treatment modalities. The protocol was controversial because it required the infliction of a severe injury on the baboons. Each animal ultimately would be examined in terminal surgery.
The ALF gave the stolen audio/videotapes to PETA. PETA edited the tapes, added a voice-over commentary, and circulated the edited tape, entitled Unnecessary Fuss,32 to schools, newspapers, Congress, television networks, and dozens of television stations. Congress and members of the general public were shocked at the cruelty to and disregard for the research animals presented on the tape. PETA then petitioned the PHS to close the laboratory and to punish the investigators, Drs. Langfitt and Gennarelli, for violation of the PHS policy. OPRR refused to act on the basis of evidence contained in an edited tape. The University of Pennsylvania claimed that Unnecessary Fuss was a caricature of the actual proceedings that had taken place in the laboratory. PETA refused for more than a year to turn over the evidence it held to OPRR. In the spring of 1984, PETA sent the unedited tapes to the USDA, which in turn sent them to OPRR.
OPRR asked 18 veterinarians, mostly Diplomates of the American College of Laboratory Animal Medicine who were, for the most part, employed by various Institutes within NIH, to review the tapes and report their findings concerning violations of the PHS policy or the Animal Welfare Act. In the meantime, OPRR conducted several site visits to the Head Injury Laboratory. On the last of those site visits, Dr. Gennarelli performed a surgical procedure in the presence of the visitors that he claimed was typical of those involved in the study. OPRR was astonished to learn that aseptic technique was sloppy, that smoking was allowed in the operating theater (improper on many grounds, and dangerous where oxygen tanks are stored and used), and that the depth of induced anesthetic coma in the animals was questionable. OPRR also learned that most of the animals were not seen by an attending veterinarian either before or after suffering whiplash.
OPRR discovered that Unnecessary Fuss presented the case history of only 1 of approximately 150 animals that had received whiplash. Through clever editing and inaccurate voice-over comments, the viewer was led to believe that the inhumane treatment depicted on the film was repeated over and over again. In actual fact, one baboon was badly treated, and the film showed that single mistreatment repeatedly while the commentator narrated that the mistreatment had been repeated on a long series of different animals. In all, OPRR identified about 25 errors in the description of what was taking place. Typical was the statement, accompanying film of an accidental water spill, that acid had been carelessly poured on a baboon.
Despite the fact that Unnecessary Fuss grossly overstated the deficiencies in the Head Injury Clinic, OPRR found many extraordinarily serious violations of the Guide for the Care and Use of Laboratory Animals. Veterinary and post-trauma nursing care for the animals was inadequate, survival surgical techniques were not carried out in the required aseptic manner, the operating theater was not properly cleaned, the holding facility lacked the required number of air changes per hour and other required features, and occupational health safeguards were not enforced. Furthermore, OPRR found deficiencies in the procedures for care of animals in many other laboratories operated under the auspices of the university. The university was put on probation by OPRR. The Head Injury Clinic was closed. The chief veterinarian was fired, the administration of animal facilities was consolidated, new training programs for investigators and staff were initiated, and quarterly progress reports to OPRR were required.
Although OPRR dealt with a small number of additional cases of violation of the 1979 PHS Policy for the Humane Care and Use of Laboratory Animals, the case of the Silver Spring Monkeys and the University of Pennsylvania Head Injury case were the two events that caught the attention of the public and Congress, illustrated the serious weaknesses in the 1979 policy, and focused the attention of the Assistant Secretary for Health and the Director, NIH, on the importance of upgrading the PHS policy.
OPRR took three major steps to upgrade the Policy for the Humane Care and Use of Laboratory Animals. First, it convened a committee drawn from across the PHS to provide advice. Second, it persuaded Congress (particularly Congressman Doug Walgren) to postpone legislation long enough for the new policy to be promulgated and tested. Third, it initiated a series of educational workshops that were presented in every region of the country. The proposed policy was discussed and comments elicited at all of these events.
The revised PHS policy was promulgated in May of 1985. Promulgation of the policy was coordinated with the publication of the 1985 version of the Guide for the Care and Use of Laboratory Animals, edited and published by ILAR.
The new policy included many new provisions. The most important new requirements were
OPRR had found that institutions could be in compliance with the technical requirements of the Guide and nevertheless have an unhealthy colony of laboratory animals. It had also found that the converse was sometimes true. The 1985 version of the Guide and the concurrent education program stressed evaluation of the health and comfort of the animals in addition to requirements for good husbandry practices, including caging, housing, and sanitation.
On November 20, 1985, Congress enacted the Health Research Extension Act of 1985 (PL 99-158), which required the Secretary, DHHS, acting through the Director, NIH, to promulgate the very guidelines for the care and use of laboratory animals that had been issued in May of 1985 and tested over a six-month period. The law, in essence, provided congressional sanction for a policy that had already been promulgated, implemented, and evaluated. Most of the provisions in the policy were born of the experience of noncompliance with the 1979 policy and the experience of the NIH intramural animal research programs, which provided ready and immediate feedback to OPRR.
The policy relied almost entirely on hands-on experience rather than on the literature that was beginning to emerge from the bioethics movement in the United States concerning the moral status of animals. The policy represented an act of trust that IACUCs would, over time, develop standards by which to judge prospective protocols involving animal subjects. That act of trust has been fully justified. IACUCs have examined virtually every procedure employed by investigators and have evaluated virtually every system, method, and technique for caring for animals.
The revised policy, assisted no doubt by the strident (though often illegal and inaccurate) criticisms of animal activists, jump-started the improvement of programs for the care and use of laboratory animals from a system that was, at best, mediocre, to one in which Americans may legitimately take pride.
Within a few months after the PHS Policy for the Humane Care and Use of Laboratory Animals was backed by law, OPRR found it necessary to close the facilities of Columbia University's College of Physicians and Surgeons and the animal facilities at the City of Hope in southern California. Neither institution had made an acceptable effort to come into compliance with the new policy.
As a result of their suspensions, the two institutions rebuilt their animal research programs and came into compliance in a matter of a few months. Not only were facilities improved, but staff were increased, training was initiated, and a proper chain of command was established. The drastic action of closing entire programs (at Columbia, an estimated $90 million of research was suspended for more than four months) served as a warning to the entire research community that the policy, which enjoyed the support of the scientific community, would be fairly but rigorously enforced. Although there have been many other minor cases of noncompliance, the history of implementation of the PHS policy has been, since the Columbia case, characterized by a partnership between OPRR's Division of Animal Welfare (DAW) and the research community rather than a regulator/regulatee relationship.
About a month after the PHS policy was bolstered by the enactment of the Health Research Extension Act, Congress incorporated amendments to the Animal Welfare Act in the Food Security Act of 1985 (PL 99-198). The new law was detailed, complex, and specific. Careful interpretation was necessary to make it internally consistent. Among its provisions were controversial requirements for the exercise of dogs and for the psychological well-being of primates. It also called for harmonization with the PHS policy through consultation with the Secretary, DHHS.
Initially, USDA minimized the USDA/DHHS harmonization clause, and it published a proposed rulemaking in 1987. A storm of criticism greeted the proposed rules, which relied exclusively on engineering standards. After a second unpopular proposal of regulations, OMB convened a meeting involving the Acting Secretary of Agriculture and the Acting Director of NIH.
Although both of the senior officials were present, negotiations were carried on by OPRR and the Director of the Animal and Plant Health Inspection Service (APHIS) within the USDA. The historic outcome of that meeting was an agreement to incorporate, in many places in the USDA regulations, performance standards in addition to engineering standards. Although engineering standards would still be used, the seriousness of a violation of such a standard would be judged in terms of whether it negatively affected the health and well-being of the animals.
The USDA regulations produced in 1991 met with instant approval and endorsement from Congress and the research community. They were criticized by animal activists, who claimed they were too vague, unenforceable, and filled with loopholes. The regulations were challenged in court by a group known as the Animal Legal Defense Fund. That group won its case in the lower court, which held that the regulations did not adequately implement the law, but on appeal it was found to have no standing to sue. The matter has recently been referred to the Supreme Court.
From 1970 to 1980, relationships between USDA officials with responsibility for implementing the Animal Welfare Act and OPRR staff were cool and distant. Rivalry, suspicion, and very different approaches to regulation characterized the relationship. Clearly, the USDA approach was established by its own Office of General Counsel, which sought to produce rules that could be enforced in court proceedings. Thus, the USDA rules emphasized items that could be clearly measured, weighed, or documented. In the years from 1980 to 1985, OPRR and APHIS began to cooperate in their efforts to promote sound practices of care and use for laboratory animals. However, until the 1985 amendments to the Animal Welfare Act, the USDA's authority was confined to holding facilities for animals; it had no jurisdiction over the use of laboratory animals in research. USDA inspectors had been trained to check lists of engineering standards, including such items as cage sizes, expiration dates on feed bags, sanitation, air flow, clean water dispensers, thermostats, pest control, lighting, bedding, and cage washing. They had little training or expertise in evaluating the health and comfort of the animals. Because USDA exercised no jurisdiction over rats and mice (about 90 percent of all animals used in research), inspectors never visited laboratories that used no other species.
Because there were so many items on the USDA checklist, virtually every institution failed to meet some USDA standards. On Monday mornings, for example, most cages in most laboratories are littered. Inspectors visiting a holding facility on a Monday almost always found sanitation wanting because the cages had not been cleaned since Friday. If a bulb burned out, a cage washer needed repair, or a crack that could harbor vermin formed in a wall or ceiling, the institution could fail inspection, even if the crack was sprayed weekly with hot water and disinfectant. Under the new regulations all of these items are still evaluated, but the primary evaluation is directed to the health of the animals. If the animals exhibit normal behavior and eating habits, have good coats, are neither too thin nor too fat, have been checked periodically by a veterinarian, and are socialized to other animals and to their human caretakers, then mechanical failures and floor cracks are not judged to be as serious as they would be if the animals were in poor health.
In other words, the engineering standards are viewed in the light of outcome or performance standards and judged accordingly. Performance standards require better trained inspectors who are qualified to evaluate animals. OPRR staff from DAW have worked harmoniously with USDA inspectors to teach them how to evaluate facilities using performance standards. A survey of IACUCs conducted by the Scientists Center for Animal Welfare and a survey of the opinions of USDA inspectors have indicated that performance standards have greatly improved the care and use of animals.
Since 1990, the cooperation between OPRR's DAW and the USDA has been outstanding. Both agencies have profited, and the quality of both the care and the use of animals has, by every measure, risen dramatically.
In testing policy interpretations and in perfecting approaches to reasonable performance standards, DAW works closely and harmoniously with the NIH Office for Animal Care and Use and with the administrators, veterinarians, research investigators, technicians, and caretakers at NIH facilities. Many of these individuals are called upon to assist in the training of USDA inspectors and in OPRR educational programs and site visits. Credit is due to Dr. John Miller, Director of DAW, and to his successor, Dr. Nelson Garnett, for improving relationships with USDA, improving relationships with the NIH intramural program, and, as a consequence, improving the oversight of the care and use of laboratory animals in awardee institutions. Recent meetings in Boston of more than 500 members of IACUCs indicate that these bodies have become highly sophisticated in evaluating the protocols that come before them. These bodies have been remarkably successful in developing procedures for inspecting facilities, maintaining high performance standards, and improving protocols proposing to involve animals in research. IACUCs have had dramatic success in putting into practice the three Rs of animal research: reduction, refinement, and replacement.
DAW should be situated so that performance standards can be tested and perfected. DAW can afford to spend time on these matters because the USDA, although it has changed dramatically, still emphasizes engineering standards, allowing DAW to emphasize performance standards.