Table of Contents
1. Introduction
2. Current and Past Debates regarding Human Research
   2.1. Hippocrates
   2.2. 19th Century
   2.3. 20th Century and vulnerable populations
        2.3.1. Second World War and German abuses
        2.3.2. United States abuses
        2.3.3. Japanese abuses
        2.3.4. Attitude of superiority in experimenters towards their human subjects
        2.3.5. South Africa
3. Codes, Declarations and Guidelines
   3.1. Defining Codes
   3.2. Beaumont's Code
   3.3. 20th Century Codes
        3.3.1. The Nuremberg Code
               3.3.1.1. Criticisms of the Nuremberg Code
        3.3.2. The World Medical Association and the Helsinki Declaration
        3.3.3. WHO/CIOMS Guidelines
4. Research Ethics Committees
5. Key Ethical Theories
   5.1. Respect for Persons
   5.2. Beneficence
   5.3. Justice
6. Conclusion
7. Bibliography
8. Declaration
1. Introduction
In 1988 Carol
Levine, in an article which asked whether AIDS has changed the ethics of human
subjects research, said the following about ethics in human research: “born in
scandal and reared in protectionism” (Levine 1988:167). In this essay I will
critically analyse this statement and support its veracity by showing how human
research abuses continued even while codes and declarations on human research
were being developed. I will argue that there has been a paternalistic
protectionism on the part of scientists and researchers to protect or defend
their perceived right to conduct research on human subjects and that this
protectionism has to an extent been validated by Research Ethics Committees (RECs) and Institutional Review Boards (IRBs) through the make-up of their membership. I will argue for greater representation of participant communities in RECs and IRBs. (From this point on I will use the terms REC and IRB interchangeably, favouring REC as it is the term used in South Africa.) By examining current
and past debates with regard to human experimentation and its history,
particularly in the 20th century, I will show how various codes, declarations
and guidelines have come into being and I will examine various key ethical
theories which ought to guide human research ethics.
I will conclude that ethics in human research was indeed "born in scandal and reared in protectionism", that it is now a growing child that continues to be threatened by scandal, and that the protectionism now required is one directed, more than ever, towards the research participants.
2. Current and Past Debates regarding Human Research
A perusal of the
literature on human research and experimentation could lead one to believe that
Nazi Germany was the first or worst culprit in this area and that once the
German atrocities were dealt with, everything could return to normal. Many
writers on the subject begin with an account of the research/experimentation
carried out on inmates of the concentration camps during World War II (Lott 2005; Schuklenk 2005; Pedroni & Pimple 2001).
Now there is no
doubt that the research conducted in the concentration camps was unethical and
inhumane, but it was neither the first nor the last unethical and inhumane
research carried out on humans.
2.1. Hippocrates
Experimentation in
medicine seems to be as old as medicine itself. McNeill (1993:17-36) begins his account of unethical experimentation with Hippocrates who, while removing splinters of bone from the exposed cortex of a boy, is said to have gently scratched the surface of the cortex with his fingernail and observed the corresponding movements on the opposite side of the boy's body. His most recent examples
occurred just before the publication of his book and include an example in New
Zealand from the 1980s where treatment was withheld from women with carcinoma in situ as part of an experiment. He
also includes as an example of unethical research the mother of the first “test
tube” baby, who had not been told that she was the subject of an experiment. He
concludes from his survey of human experimentation that it is the socially
powerless and disadvantaged who are most likely to be subjected to unethical
research and suggests that this has always been the case. McNeill refers to
Scutt (1988:1-11), who has argued that it is lower-class women who have been
experimented on for the development of new reproductive technologies and new
contraceptive measures because they were less likely to complain if something
went wrong. Yet women at this socio-economic level, having been subjected to the
risks, were unlikely to benefit from new and expensive reproductive
technologies.
2.2. 19th Century
Some 19th-century scandals include purposefully infecting healthy subjects with syphilis, gonococcus and typhoid fever in Dublin. In the same century, in the United States, physicians put slaves into pit ovens to study heatstroke, poured scalding water over them as an experimental cure for typhoid fever, and there are records of the fingers of slaves being amputated, some with anaesthesia and some without, in order to test the effectiveness of anaesthesia (McNeill 1993:19).
McNeill concludes
that there was considerable abuse of the subjects of experiments in the 19th
and early 20th centuries and that “research ethics was not a subject of
widespread concern” (1993:19).
2.3. 20th Century and vulnerable populations
Research ethics has since certainly become a subject of "widespread concern." Lott (2005:31) writes about human research participants who are already disadvantaged or vulnerable to harm independently of their participation in clinical trials. Included in this group are the mentally disabled, abjectly impoverished persons, prisoners, refugees and ethnic minorities. He suggests that these vulnerable populations are "particularly attractive for clinical research precisely because of their vulnerability." Dhai (2005:73) states that
South Africa, being home to a large number of vulnerable groups of poor
populations that have limited or no access to health services and education and
who accept authority without question, is a country that many researchers are
drawn to.
It thus seems that
the "scandal" that Carol Levine refers to continues to be a problem even in the 21st century, particularly for vulnerable populations. Lott (2005:32) states
that “it is no surprise then that the majority of sensationalised research
ethics cases in the past 60 years have involved vulnerable persons - indeed the
entire field of research ethics has, in an important way, been built upon and
refined according to examples of “extreme” clinical research that have dotted
modern history.”
2.3.1. Second World War and German abuses
Certainly one of the largest "dots" in modern history was the Second World War, which represented a turning point in experimentation on human subjects. "War, it seems, justifies all sorts of inhumane treatment of human beings" (McNeill 1993:20). McNeill highlights some research atrocities committed by the United States, Germany and Japan.
2.3.2. United States abuses
In the US there was debate around the fact that soldiers were being drafted into the army to risk their lives; by extension, it was accepted that the institutionalised, retarded, incarcerated and mentally ill should contribute to the war effort and accept some of the risks involved in research designed to alleviate the diseases of soldiers, whether or not they were capable of consent. Because of
the greater good that was served, the research without consent was justified.
And so it was, for example, that research subjects drawn from mental asylums
and state penitentiaries were infected with malaria and given experimental
antidotes to test therapeutic effectiveness, relapse rate and side effects. The
effect of the association between the war and medical research was to undermine
any concern for the welfare of research subjects. Research practices established during the war "profoundly influenced researchers' behaviour in the post-war era" (1993:21), and the public's perception of science
in service of society was successfully employed to argue for increased funding
and to perpetuate an ethic that “subordinated subject’s interests to those of
science on the implicit assumption that this was also in the interest of
society” (1993:21).
Still in the US, the Tuskegee syphilis study, which started in 1932 and ended in 1972, and which Lott (2005:34) refers to as the most well-known example of unethical experimentation after the Nazi concentration camp experiments, is an example of the "scandal" within which research ethics was being born. The study spanned the period during which the earliest ethical codes (see next section) were being debated, developed and applied, but only came to an end following the publication of a
front-page New York Times article
describing the study and its effects.
2.3.3. Japanese abuses
German atrocities
and the debates around them are well-known. Less well-known are the atrocities
committed by the Japanese. Between 1930 and 1945, Japan conducted trials of
biological warfare through the use of various diseases including anthrax,
cholera, typhoid and typhus. The principal testing site, Unit 731, was on mainland China, where there were installations for germ warfare, a prison for the human experimental subjects and a crematorium for the human victims. There was also an airfield and special planes for dropping germ bombs. At least 11 Chinese cities were subjected to biological warfare attacks. McNeill
records that "documents obtained by freedom of information applications revealed that the United States agreed to give the Japanese experimenters immunity from prosecution in exchange for information about biological warfare. One of the American government documents acknowledged that 'because of scruples attached to human experimentation' it was not possible to conduct such experiments in the US. It went on to express the hope that the individuals involved would be 'spared embarrassment'. It was a delicate show of concern considering that they had used humans as guinea pigs in lethal experiments and executed any who survived. Far from being embarrassed many of them have become influential and respected figures in modern Japan"
(1993:24). It would seem that the command of the American occupation forces in Japan decided
that the value of gaining access to the results of the Japanese experimentation
was greater than the value of prosecuting the experimenters. The US therefore
offered immunity from prosecution in exchange for the results of the research
and did this, according to McNeill, "on the grounds of national security."
When one bears in
mind that this was going on at the same time that the Americans were conducting
the Nuremberg trials, which resulted in Germans being hanged for being
criminally culpable and in breach of a universal standard of ethics, one gets a
further example of Carol Levine's “scandal” in which modern research ethics was
birthed.
McNeill goes on to
highlight unethical experiments carried out in countries like the US, Britain and Australia.
The last one he mentions involves the US Food and Drug Administration and
Department of Defence approving the use of investigational drugs and vaccines
on American troops without their consent in late 1990.
2.3.4. Attitude of superiority in experimenters towards their human subjects
The common thread
in all these examples is the attitude of superiority in the experimenters
toward their human subjects. Thieren and Mauron (2007:8) make the following comment regarding the Nazi researchers, but I would argue that it applies to all the examples I have mentioned: "rather
than being the result of a coercive state, Nazi medicine illustrates how
medical researchers and their representative bodies co-operated with and even
manipulated a totalitarian state and political system relying on expert opinion
in order to gain resources for the conduct of research without any moral and
legal regulation. The Nazi doctors followed the intrinsic logic of their
scientific disciplines and used the legally and ethically unrestricted access
to human beings created by the context of the political system.” I would argue
that researchers today are similarly tempted and, where they have sought to
protect their right to experiment (Levine's “protectionism”), it is more than
ever necessary to ensure the protection of the participant.
This is borne out
by the final example of ‘scandal’ that I would like to introduce before moving
on to discuss the various codes, guidelines and declarations that have been
developed at the very same time that these ‘scandals’ have been unfolding.
2.3.5. South Africa
Jason Lott
(2005:34) discusses an HIV-related clinical trial carried out in South Africa
which came under intense criticism in the New
England Journal of Medicine. Without discussing the details of the study, I
wish to draw on Lott's comments which tie in with much of what I have discussed
above. He points out that the study could never have been conducted in developed countries and that it was rightly
accused of being “exploitative, capitalising on the poor of the developing
world who had no alternative treatment to prevent HIV transmission” (2005:35).
The scandal and protectionism continue. A quote attributed to Carl Leopold, although made in a
different context, could be used to describe the situation in human research
before the development of specific codes regarding human research. Simon and Hersh
(2002:43) quote Leopold as describing a particular situation as one of “science
saturation and ethics starvation.” I turn now to the various efforts that have
been made to inject ethical codes into the ‘ethics starved’ science of human
research.
3. Codes, Declarations and Guidelines
3.1. Defining Codes
Simon and Hersh
(2002:41-44) define ethical codes as instruments which formalise the structures
that guide practice, especially within a profession. They point out that codes are subject to multiple interpretations. There may, for example, be a generally accepted social meaning; a specialised official meaning; a distinct meaning within a given working environment; and also unique individual shadings of meaning. The meaning of a code is always context-dependent. The observance of given ethical norms may reflect a consensus. However, when ethical norms and codes are not disseminated, they are not internalised and may end up serving only a symbolic function. They therefore stress (2002:44) that ethical direction must come from outside the profession as well as from its leadership. Together, informal
and formal sources can create an ethically-conscious environment, “as valuable
as any formal code of ethics.”
In the discussion which follows it will become evident how important an "ethically conscious environment" is, and how even good codes of conduct become worthless in an environment which is not ethically conscious.
3.2. Beaumont's Code
The oldest known
American code of ethics is William Beaumont's code of 1833. Ronald Numbers (1979:113-135) points out that Beaumont recognised the importance of experimentation on humans but considered the voluntary consent of the subject to be necessary. His code required abandoning the experiment if it caused the subject distress or if the subject became dissatisfied. While Numbers finds much to criticise in Beaumont's code, he describes it as an important step in the history of bioethical codes
and says: “William Beaumont may not have been a pioneer in the history of
bioethics, but neither was he a villain” (1979:135).
3.3. 20th Century Codes
The 20th century
saw the development of national and international codes of ethics applying to
experimentation on human beings. According to McNeill (1993:40), "Germany
provided both the nursery for the early development of research and its
commercial application. It was the first country to formulate a national code
of ethics. Paradoxically, German atrocities during the Second World War in
contravention of its own codes, led to the first international statement on the
ethics of research: the Nuremberg Code.”
3.3.1. The Nuremberg Code
Udo Schuklenk
(2005:11) regards the Nuremberg Code as the first and historically most important international research ethics guideline. It was the international community's response to the crimes
Nazi scientists committed in German concentration camps. The court drew up the
code as a statement of a universal standard of ethics in research against which
to measure the behaviour of the accused. According to McNeill (1993:42ff) the
court ordered two of its expert witnesses to advise it on universal standards
of ethics in experimentation on human subjects. With the endorsement of the
American Medical Association, they presented three basic principles of human experimentation:
(1) voluntary consent; (2) previous animal experiments to investigate the risks
of each experiment; and (3) responsible, medically qualified management.
The court took
these three points and expanded them into ten principles of the Nuremberg Code.
Emanuel et al. (2003:2-3) point out that although it is now recognised as a landmark document, the Nuremberg Code did not provoke much of a response at the time it was issued. They put this down to the fact that in the United States the German misdeeds
were considered an anomaly attributable to a totalitarian regime of
unquestionable brutality. The unspoken assumption was that researchers working
in democratic countries would never do such things. "Thus the Nuremberg code was viewed as a document that was needed to restrain barbarians but was not applicable to 'the rest of us'" (2003:3).
It needs to be remembered that the Nuremberg trials and Code were taking place at the same time as the Japanese researchers were being spared exposure and punishment - another example of Levine's "scandal and protectionism."
McNeill (1993:43) highlights the fact that the Nuremberg trial itself received little press coverage and that before the 1970s there were few discussions of the Nuremberg Code and fewer citations in medical journals. Furthermore, it had little judicial application. He too puts this down to a tendency to dismiss the atrocities revealed in the Nuremberg trials as the work of morally bankrupt or deranged Nazi doctors. He goes on to
say “this was a political and defensive reaction that blinded the medical
profession and medical bureaucracy to the ethical and humanitarian concerns
inherent in medical research. It took revelations of unethical medical research
in the United States
itself, to shake this complacency” (1993:43).
Writing on the 60th anniversary of the Nuremberg Code, Thieren and Mauron (2007:573) suggest that although the Code evokes a dark time for medicine, it remains a powerful symbol, inspiring the medical profession to stand up for its Hippocratic values and protect individuals from harmful medical experiments. They refer to the fact (which I will discuss later) that the Nuremberg Code was the first of a number of ethical codes designed to control human research, but ask an important question: "whether modern ethics and its binding instruments can always secure full protection to experimental subjects and beyond them, to the recipients of health care." In the context of this essay, they seem to be aware, in 2007, that the "scandal and protectionism" continues.
3.3.1.1. Criticisms of the Nuremberg Code
Before moving on
to later codes, I would like to point out some criticisms of the Code
(summarised from McNeill 1993:42-43). One of the strongest criticisms is that
it was an exercise in moral indignation by the victor over the actions of the
vanquished and did not take into account that breaches of moral codes were condoned
on both sides. This is a valuable criticism, but need not detract from the
overall importance of the Code.
Much contention
surrounding the code arose because of the emphasis given to the principle of
consent. Pedroni and Pimple (2001:2) highlight this in their statement: “the
code begins simply, with one statement set apart from all the rest: 'the voluntary consent of the human subject is absolutely essential' … clearly the principle of informed consent predominates." McNeill points out that before
consent becomes an issue, the scientific validity of the proposed study, the
acceptability of potential risks to subjects and the competence of the investigator to conduct the study need to be established. None of these is adequately dealt with by the Nuremberg Code, but all are addressed in later codes and guidelines.
3.3.2. The World Medical Association and the Helsinki Declaration
Goodyear et al. (2008:1067) refer to the Declaration of Helsinki[1], adopted in 1964, as the "cornerstone of research ethics" and note that its periodic revision provides an opportunity for debate about its purpose and effectiveness.
McNeill (1993:44) suggests that the Helsinki Declaration has been more influential because it is perceived to be a guide to
researchers rather than a legal document and because it is less restrictive of
research.
The Declaration was "birthed" over a long period of time, which was put down to the desire to produce a truly useful and practical document. McNeill suggests, however, that there was disagreement within the Association on the need for a code. "Indeed, resistance to the adoption of guidelines for researchers within an organisation that had previously placed its confidence in the responsibility of the investigator was hardly surprising" (1993:44). This is surely another example of Levine's "protectionism". The Declaration has undergone several amendments, the most recent version being approved in 2008[2]. Some of the important elements of the Declaration of Helsinki include allowing proxy consent by others for research on those who are incapable of giving informed consent, and a distinction between therapeutic and non-therapeutic research (which remains a contentious issue). In 1975 an amendment was introduced requiring that research projects be considered by specially appointed
independent committees. Up to this time all research had relied on the
conscience of the individual investigator and the consent of the research
subject. One senses here a move from the “protectionism” of the profession that
I have referred to earlier, to a “protectionism” more aimed at the research
subject.
Commenting on the latest revision, Puri et al. (2009:131-134) point out that where previous versions of the Declaration emphasised the duty of the physician to promote and safeguard the health of patients, the current version clearly specifies that this duty also applies to the "researcher physician." When a medical professional conducts research, the primary responsibility as a physician must not be in conflict with the scientific curiosity to find research answers; thus the responsibility to safeguard participants' health and well-being is significantly enhanced. The
latest revision also indicates the need to perform research among populations
that otherwise would always enter the “exclusion criteria.” For example,
pregnant or lactating women, children and the elderly are often left out as
they may be vulnerable. The latest version also requires the protocol to have
provisions for treating or compensating participants who are harmed as a
consequence of participation in the research study.
3.3.3. WHO/CIOMS Guidelines
Other international
guidelines have followed the Helsinki Declaration. The World Health Organisation
(WHO) and the Council for International Organisations of Medical Sciences
(CIOMS) published International Ethical
Guidelines for Biomedical Research Involving Human Subjects[3] in 1993. According to McNeill (1993:48)
the issuing of these guidelines was prompted by concern that research might be
conducted in developing countries to avoid restrictions and to minimise
expense. Other factors, such as investigations serving external rather than local interests and the absence of any long-term commitment to participants, are also dealt with. The
guidelines also give a strong endorsement to independent, impartial prospective
review of all protocols by a committee of the investigators’ peers that might also
include lay people qualified to represent community, cultural and moral values.
4. Research Ethics Committees
The idea of peer review of research protocols is, as I have shown, built into both the Helsinki Declaration and the CIOMS guidelines. Although RECs are a product of the very guidelines that are meant to ensure good research ethics, I want to suggest that they have the potential to be a source of the "scandal and protectionism" which this essay has sought to identify. In 1975 Robert Veatch claimed that there was no rationale for ethics review
committees composed of expert and lay members (1975:32). He correctly
identifies that RECs are composed of some members who are there because of
their expertise and other members who are there as representatives of the
community without any special expertise. In his view, this split reflects the
lack of any clear rationale “of what these committees are supposed to do, of
what purposes they are to serve, and what skills their members ought to have.”
He suggests that there is a need for a theory to clarify the ambiguities and
for changes to be made to committees to make them consistent with that theory.
He believes it is impossible for RECs to fulfil their task successfully without
these changes, but he does acknowledge that participants are better protected
by committees than by reliance on a researcher’s judgement alone. His solution
is to have two committees, one of professional members, and one of community
representatives.
Dhai (2005:82), commenting on South African guidelines, points out that RECs should (amongst other things) be independent and have a membership of at least nine, with at least two lay people not affiliated to the institution, preferably from the community. I would suggest that two out of nine does not reflect a fair representation of the community, and, bearing in mind that the other seven (or
most of them) will probably be from the institution, and therefore have a
vested interest in it and its research, one can question whether research
ethics is entirely free of the risk of scandal and protectionism on the part of
scientists and scientific institutions. Commenting on unequal power relations
between committee members, McNeill (1993:188) says “the difference between
professional and lay members on committees is part of a general difference in
the power and status between professionals and non-professionals in society. A
power difference on committees of review means in practice that those with a
vested interest in research are the most influential in decision-making.”
George Annas commented as follows on the decision of the National Commission for the Protection of Human Subjects of Biomedical and Behavioural Research to endorse the status quo, which placed primary reliance on local IRBs for subject protection: "this was predictable because of the commission's
researcher-dominated composition which permitted it to assume that (1) research
is good; (2) experimentation is almost never harmful to subjects and (3)
researcher dominated IRBs can adequately protect the interests of human
subjects. The successor Presidential Commission can learn much by re-examining
these premises” (1980:84).
It seems many fear
ongoing “scandal and protectionism” in research.
Lawrence Baer
(2005:7) fears that when it is made known that funding for research will be
made available subject to IRB approval, two things might happen. First, a committee might believe that a research proposal has overall credibility if an outside agency, especially a government one, has approved the funding. Secondly, he
suggests that with the prospect of funds flowing into the institution for
research, the committee might be biased in favour of the institution.
All the above
indicates how important it is to balance the interests of science with the
interests of participants in research. As I have shown, science has given (and
often continues to give) the public reason to be wary of it. RECs are the ideal place for trust to be restored, renewed and developed, but I suggest they need to be better balanced between scientists and the community in which science wishes to do its work. This would require better representation of communities on RECs, perhaps by NGOs, perhaps by other means. This means that RECs need to take on a more "political" nature. In South Africa this might mean that an organisation like the Treatment Action Campaign, and others like it, needs to take up positions on RECs. George Annas (1994:21) believes that "bioethics,
being applied ethics, has always sought to deal with the real world. As this
world is dominated by politics and as the manner in which health care is
delivered becomes more and more a function of politics, bioethics will have to
learn more about politics to be effective in influencing medical research. The
challenge for bioethics is to influence politics and policy without corrupting
itself by making it seem that ethical principles and practice are the result of
compromise and majority vote rather than reason and virtue.”
Thus the ongoing realisation that research ethics was not only born in scandal and reared in protectionism, but continues to be threatened by scandal and protectionism, should lead to the recognition that human research is still on a journey toward maturity. Stephen Toulmin (1981:38) gives some good advice for this journey. Discussing method in ethics, he mentions a bad example that is so often followed
today, namely that of “assuming that we must withdraw discretion entirely when
it is abused and impose rigid rules in its place, instead of enquiring how we
could adjust matters so that necessary discretion would continue to be
exercised in an equitable and discriminatory manner.” He calls for the
recognition of the truth that a morality based entirely on general rules and
principles and codes is tyrannical and disproportioned, and that only those who
make equitable allowances for subtle individual differences have a proper
feeling for the deeper demands of ethics.
Thieren and Mauron
(2007:573) ask whether modern ethics and its binding instruments can always secure full protection to experimental
subjects and beyond them to the recipients of health care.
I would suggest the answer is a humble "No." However, we can believe that, with the use of key ethical theories and the ongoing development of our various codes, declarations and guidelines, we can move further away from the scandal and protectionism into which research ethics was born and in which it continues to be reared.
5. Key Ethical Theories
Schuklenk (2005:3)
claims two fundamental objectives for ethics: “to tell us how we ought to act
in a given situation and to provide us with strong reasons for doing so.” Dhai
(2005:75) points out that while there are various codes and declarations that
guide ethical research, most of them lack a systematic and coherent framework for evaluating research studies that incorporates "all requisite ethical considerations."
Many commentators
on the key ethical theories which should underlie research ethics begin by
referring to principle-based ethics, with its emphasis on Autonomy,
Beneficence, Non-Maleficence and Justice. Others first look at deontological
ethics (with its emphasis on motive) and utilitarian ethics (with its emphasis
on maximum utility) and conclude that a utilitarian approach is more useful for problem-solving in research ethics. Stephen Toulmin (who served on the National Commission for the Protection of Human Subjects, which led to the drawing up of the Belmont Report) says that in almost every discussion the commissioners came close to
agreement even about quite detailed recommendations. He says (1981:32) “even
when the commission's recommendations were not unanimous, commissioners were
never in any doubt what it was they were not quite unanimous about. When the
eleven individual commissioners asked themselves what principles underlay and
supposedly justified their adhesion to consensus, each of them answered in his
or her own way: the Catholics appealed to Catholic principles, the humanists to
humanist principles, and so on. They could agree; they could agree what they
were agreeing about; but, apparently, they could not agree why they agreed
about it.”
With this in mind, I conclude this essay with some key ethical principles which should permeate,
guide, define and nurture human research ethics, which was born in scandal and
raised in protectionism.
5.1. Respect for Persons
This includes
classical ideas associated with autonomy, but also ethical notions about the
need to protect those with diminished autonomy. Autonomy includes the idea that
an individual is free to choose and to act. Respect
for Persons includes respecting the decisions of individuals and giving
them the information needed to be able to make rational decisions. Respect for
autonomy translates into the requirement for informed consent based on the
provision of all necessary information, ensuring that the information has been
understood, honouring objections and insisting that all consent is voluntary.
5.2. Beneficence
Ryan (unknown year, Research Ethics Binder:524) describes beneficence as an obligation to do no harm, to maximise possible benefits and to minimise possible harms. According to McNeill (1993:146)
this principle includes the duty to consider the proper design of research, its
value and validity, and whether the risks to participants are justified.
5.3. Justice
Powers (1998:147) points out that where in the past the emphasis of justice in research was placed almost exclusively on the need for protection of vulnerable populations, it now reflects an increased concern that members of disadvantaged populations (both
individually and as a group) have greater access to the potential benefits of
medical research and increased opportunities to participate as research
subjects.
6. Conclusion
Research ethics was born in scandal and reared in protectionism, and these two remain threats in modern research. The ongoing
development and application of codes and declarations is essential for the
development of ethical research. RECs have been a very important step in the
regulation of research but more needs to be done to establish a fair balance
between all the parties involved in research. Respect for persons, beneficence
and justice need to undergird all human research.
7. Bibliography
Annas, G. J. 1980. Report on the National Commission: Good As Gold. Bioethics Quarterly. Vol 2(2). Available: http://0-www.springerlink.com.innopac.wits.ac.za/content/x1612ww0848374r2/fulltext.pdf Accessed 13/05/2010
Annas, G. J. 1994. Will the Real Bioethics (Commission) Please Stand up? Hastings Centre Report. Vol 24(1):19-21. Accessed 13/05/2010
Baer, L. J. 2005. Influences on IRB Decisionmaking. IRB: Ethics and Human Research. Vol 27(3):7. Accessed 13/05/2010
Dhai, A. 2005. Implementation of Ethics Review. In: Developing World Bioethics. Vol 5 No.1. Oxford: Blackwell Publishing. [from Research Ethics 7011 Binder]
Emanuel, E.J., Crouch, R.A., Arras, J.D., Moreno, J.D., Grady, C. (Eds). 2003. Ethical and Regulatory Aspects of Clinical Research. Baltimore: Johns Hopkins University Press. [from Research Ethics 7011 Binder]
Goodyear, M.D.E., Eckenwiler, L.A., Ells, C. 2008. Fresh thinking about the Declaration of Helsinki. British Medical Journal. Vol 337:1067-8. Accessed 18/05/2010
Levine, C. 1988. Has AIDS changed the ethics of human subjects research? Law, Medicine and Health Care. Vol 16, no. 3-4, Fall/Winter: 167. [this reference from McNeill 1993:205]
Lott, J. 2005. Vulnerable/Special Participant Populations. In: Developing World Bioethics. Vol 5 No.1. Oxford: Blackwell Publishing. [from Research Ethics 7011 Binder]
McNeill, P.M. 1993. The Ethics and Politics of Human Experimentation. Cambridge: Cambridge University Press.
Numbers, R.L. 1979. William Beaumont and the Ethics of Human Experimentation. Journal of the History of Biology. Vol 12(1):113-135. Accessed 13/05/2010
Pedroni, J.A., Pimple, K.D. 2001. A Brief Introduction to Informed Consent in Research with Human Subjects. [from Research Ethics 7011 Binder]
Powers, M. 1998. Theories of Justice in the Context of Research. In: Beyond Consent. Kahn, J.P., Mastroianni, A.C., Sugarman, J. (Eds). [from Research Ethics 7011 Binder]
Puri, K.S., Suresh, K.R., Gogtay, N.J., Thatte, U.M. 2009. Declaration of Helsinki, 2008: Implications for stakeholders in research. Ethics Forum. Vol 55 Issue 2:131-134. Available: http://www.jpgmonline.com/article.asp?issn=0022-3859;year=2009;volume=55;issue=2;spage=131;epage=134;aulast=Puri Accessed 18/05/2010
Ryan, K.J. Unknown Year. Ethical Principles & Guidelines for Research Involving Human Subjects. [from Research Ethics 7011 Binder]
Schuklenk, U. 2005. Introduction to Research Ethics. In: Developing World Bioethics. Vol 5 No.1. Oxford: Blackwell Publishing. [from Research Ethics 7011 Binder]
Scutt, J.A. 1988. The Baby Machine: Commercialisation of Motherhood. Carlton, Australia: McCulloch Press. [this reference from McNeill 1993:302]
Simon, J., Hersh, M. 2002. An Educational Imperative: The Role of Ethical Codes and Normative Prohibitions in CBW-Applicable Research. In: Minerva. Vol 40:37-55. [from Research Ethics 7011 Binder]
Thieren, M., Mauron, A. 2007. Nuremberg Code turns 60. In: Bulletin of the World Health Organisation. Vol 85:8. [from Research Ethics 7011 Binder]
Toulmin, S. 1981. The Tyranny of Principles. Hastings Centre Report. Vol 11 No 6:31-39. Accessed 18/05/2010
Veatch, R. M. 1975. Human Experimentation Committees: Professional or Representative? The Hastings Report. Vol 5(5):31-40. Accessed 18/05/2010
[1] World Medical Association Declaration of Helsinki, Ethical Principles for Medical Research Involving Human Subjects. Available from: http://www.wma.net/e/policy/b3.htm. Accessed: 18/05/2010 13:43
[2] 7th Revision of the Declaration of Helsinki: Good News for the Transparency of Clinical Trials. Accessed: 18/05/2010 14:03
[3] Council for International Organizations of Medical Sciences [CIOMS]. International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva, Switzerland: CIOMS, 1993. Available: http://www.cioms.ch/frame_1993_texts_of_guidelines.htm.