6-1 Discussion: Bioethics Hot Topics
Please be advised that classmate responses will be added later in the week!!!
In your initial post, respond to one of the prompts below.
· Outline the various approaches to allocating scarce human organs for transplantation. Which do you believe is the fairest? Use factual evidence and/or research to defend your choice.
· Do you believe that people should be compensated for offering their organs for transplantation? Should the families of deceased donors be entitled to compensation, or only living donors? What kind of incentives could be offered to encourage people to donate their organs in lieu of cash?
· Outline the opposing viewpoints related to human stem cell research. Discuss your own personal views related to stem cell research, using factual evidence and/or research to defend your position.
· What would you do if you knew that a patient suffering from cancer was part of a control group of research patients who were not receiving a drug that could benefit them? Use factual evidence and/or research to defend your position.
· Conduct a search of whistleblowing cases in the healthcare field. Choose one of the cases and summarize the facts and the outcome. Do you feel that the person who reported the wrongdoing (the whistleblower) was justified in doing so? Why or why not? Do you agree with the outcome of the case? Why or why not? Use facts and legal support for your position.
This video contains an overview of how donation and transplantation work.
· https://www.youtube.com/watch?v=K4bS7YZjqhY
To complete this assignment, review the Discussion Rubric document.
· Write a post of 1 to 2 paragraphs
· In your responses to two of your peers, choose at least one peer who chose a different prompt than you did.
· Consider content from other parts of the course where appropriate. Use proper citation methods for your discipline when referencing scholarly or popular sources.
False Hopes and Best Data: Consent to Research and the Therapeutic Misconception
Author(s): Paul S. Appelbaum, Loren H. Roth, Charles W. Lidz, Paul Benson and William Winslade
Source: The Hastings Center Report, Vol. 17, No. 2 (Apr. 1987), pp. 20-24
Published by: The Hastings Center
Stable URL: http://www.jstor.com/stable/3562038
Following a suicide attempt, a young man with a long history of tumultuous relationships and difficulty controlling his impulses is admitted to a psychiatric hospital. After a number of days, a psychiatrist approaches the patient, explaining that he is conducting a research project to determine if medications may help in the treatment of the patient's condition. Is the patient interested, the psychiatrist asks? The answer: "Yes, I'm willing to do anything that might help me."
The psychiatrist returns over the next several days to explain the project further. He tells the patient that two medications are being used, along with a placebo; medications and placebo are assigned randomly. The trial is double-blinded; that is, neither physician nor patient will know what the patient is receiving until after the trial has been completed. The patient listens to the explanation and reads and signs the consent form. Since the process of providing information and obtaining consent seems, on the surface, exemplary, there appears to be little reason to question the validity of the consent.

Yet when the patient is asked why he agreed to be in the study, he offers some disquieting information. The medication that he will receive, he believes, will be the one most likely to help him. He ruled out the possibility that he might receive a placebo, because that would not be likely to do him much good. In short, this man, now both a patient and a subject, has interpreted, even distorted, the information he received to maintain the view, obviously based on his wishes, that every aspect of the research project to which he had consented was designed to benefit him directly. This belief, which is far from uncommon, we call the "therapeutic misconception." To maintain a therapeutic misconception is to deny the possibility that there may be major disadvantages to participating in clinical research that stem from the nature of the research process itself.
Research Risks and the Scientific Method
The unique aspects of clinical research include the goal of creating generalizable knowledge; the techniques of randomization; and the use of a study protocol, control groups, and double-blind procedures. Do these elements create a body of risks or disadvantages for research subjects? The answer lies in understanding how the scientific method is often incompatible with one of the first principles of clinical treatment, the value that the legal philosopher Charles Fried calls "personal care."1
According to the principle of personal care, a physician's first obligation is solely to the patient's well-being. A corollary is that the physician will take whatever measures are available to maximize the chances of a successful outcome. A failure to adhere to this principle creates at least a potential disadvantage for the clinical research subject: there is always a chance that the subject's interests may become secondary to other demands on the physician-researcher's loyalties.2 And the methods of science inhibit the application of personal care.
Randomization, an important element of many clinical trials, demonstrates the problem. The argument is often made that comparisons of multiple treatment methods are legitimately undertaken only when the superiority of one over the other is unknown; thus the physician treating a patient in one of these trials does not abandon the patient's personal care, but merely allows chance to determine the assignment of treatments, each of which is likely to meet the patient's needs.3

But as Fried and others have noted, it is very unlikely that two treatments in a clinical trial will be identically desirable for a particular patient. The physician may have reason to suspect, for example, that a given treatment is more likely to be efficacious for a particular patient, even if overall evidence of greater efficacy is lacking. This suspicion may be based on the physician's previous experience with a subgroup of patients, the patient's own past treatment experience, the family history of responsiveness to treatment, or idiosyncratic elements in the patient's case. Subjects may have had previous unsatisfactory responses to one of the medications in a clinical trial, or may display clinical characteristics that suggest that one class of medications is more likely to benefit them than another.

Ordinarily, these factors would guide the therapeutic approach. But in a randomized study physicians cannot allow these factors to influence the treatment decision, and efforts to control for such factors in the selection of subjects, while theoretically possible, are cumbersome, expensive, and may bias the sample. Thus reliance on randomization represents an inevitable compromise of personal care in the service of attaining valid research results. There are at least two reports in the literature of physicians' reluctance to refer patients to randomized trials because of the possible decrement in the level of personal care.4

Paul S. Appelbaum is A. F. Zeleznik Professor of Psychiatry, University of Massachusetts Medical School. Loren H. Roth is professor of psychiatry and Charles W. Lidz is associate professor of sociology and psychiatry, University of Pittsburgh. Paul Benson is assistant professor of sociology, Tulane University. William Winslade is associate professor of medical humanities, University of Texas Medical Branch at Galveston.
The use of a study protocol to regulate the course of treatment, essential to careful clinical research, also impedes the delivery of personal care. Protocols often indicate the pattern and dosages of medication to be administered or the blood levels to be attained. Even if they allow some individualization of medication, changes in time or magnitude may be limited. Thus patients who do not respond initially to a low dose of medication may not receive a higher dose, as they would if they were being treated without a protocol; on the other hand, patients experiencing side effects, which could be controlled by lowering their dosage, yet which are not so severe as to require withdrawal from the study, cannot receive the relief they would get in a therapeutic setting.
Analogously, adjunctive medications or forms of therapy, which may interfere with measurement of the primary treatment effect, are often prohibited. The exclusion of adjunctive medications, such as sleeping medications or decongestants, may increase a patient's discomfort. The requirement for a "wash-out" period, during which subjects are kept drug-free, may place previously stable patients at risk of relapse even before the experimental part of the project begins. And alternating placebo and active treatment periods may mean that a patient who responds well to a medication must be taken off that drug for the purposes of the study; conversely, patients who improve on placebo must be subject to the risks of active medication. In sum, the necessary rigidities of an experimental protocol often lead investigators to forgo individualized treatment decisions.
The need for control groups or placebos and double-blind procedures can produce similar effects. In the therapeutic setting patients will rarely receive medications that are deliberately designed to be pharmacologically ineffective; the ethics of those occasional situations when placebos are employed clinically are hotly disputed.5 Yet placebos are routinely employed in clinical investigations, without the intent of benefiting the individual subject.
Similarly, clinicians in a nonresearch setting will never allow themselves to remain ignorant of the treatment patients are receiving. Double-blind procedures, however, are necessary to ensure the integrity of a research study, even if they delay recognition of side effects or drug interactions, or have other adverse consequences.
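For discussion purposes, the short sketch below (not part of the Appelbaum et al. article) illustrates the mechanics the authors are describing: in a randomized, double-blind, placebo-controlled trial, the arm a subject receives is drawn purely by chance and hidden behind a coded label, so nothing about the individual patient influences or reveals the assignment. The three-arm structure, names, and kit codes are illustrative assumptions only.

```python
# Illustrative sketch only; not taken from the article. It shows why a
# randomized, double-blind assignment is incompatible with "personal care":
# no patient-specific information enters the assignment, and neither the
# clinician nor the subject can see which arm was chosen.
import random

ARMS = ["drug_A", "drug_B", "placebo"]  # hypothetical three-arm trial


def enroll(subject_id: str, rng: random.Random) -> dict:
    """Assign a subject to an arm by chance and return only a blinded kit code."""
    arm = rng.choice(ARMS)                          # pure chance, no clinical judgment
    kit_code = f"KIT-{rng.randrange(1000, 10000)}"  # all that clinician and subject see
    return {"subject": subject_id, "arm": arm, "kit": kit_code}


if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed so the toy example is reproducible
    assignments = [enroll(f"S{i:03d}", rng) for i in range(6)]
    # The key linking kit codes to arms would be held apart from the treatment
    # team until the trial is complete; here we print only the blinded view.
    for a in assignments:
        print(a["subject"], a["kit"])
```

The point for the consent discussion is that nothing in this procedure consults the subject's history, preferences, or prognosis, which is exactly the departure from personal care that many subjects in the studies described below fail to recognize.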
Are these disadvantages so important that they should routinely be called to the attention of research subjects? That issue raises an empirical question: how prevalent is the therapeutic misconception?
Studies on Consent
Our findings suggest that research subjects systematically misinterpret the risk/benefit ratio of participating in research because they fail to understand the underlying scientific methodology.6
This conclusion is based on our observations of consent transactions in four research studies on the treatment of psychiatric illness, and our interviews with the subjects immediately after consent was obtained. The studies varied in the extent of the information they provided to subjects. Two of the studies compared the effects of two medications on a psychiatric disorder (one used, in addition, a placebo control group). A third study examined the relative efficacy of two dosage ranges of the same medication. And a fourth examined two different social interventions in chronic psychiatric illness, compared with a control group.
The populations in these studies ranged from actively psychotic schizophrenic patients to nonpsychotic, and in some cases minimally symptomatic, borderline, and depressed patients. Our questions were based on information included on the consent form with regard to the understanding of randomized or chance assignment and the use of control groups, formal protocols, and double-blind techniques. Eighty-eight patients comprised the final data pool, but because not all of the issues addressed here were relevant to each project, the sample size varied for each question.
We found that fifty-five of eighty subjects (69 percent) had no comprehension of the actual basis for their random assignment to treatment groups, while only twenty-two of eighty (28 percent) had a complete understanding of the randomization process. Thirty-two subjects stated their explicit belief that assignment would be made on the basis of their therapeutic needs. Interestingly, many of these subjects constructed elaborate but entirely fictional means by which an assignment would be made that was in their best interests. This was particularly evident when information about group assignment was limited to the written consent forms and not covered in the oral disclosure; subjects filled vacuums of knowledge with assumptions that decisions would be made in their best interests.

Similar findings were evident concerning other aspects of scientific design. With regard to nontreatment control groups and placebos, fourteen of thirty-three (44 percent) subjects failed to recognize that some patients who desired treatment would not receive it. Concerning use of a double-blind, twenty-six of sixty-seven subjects (39 percent) did not understand that their physician would not know which medication they would receive; an additional sixteen of sixty-seven subjects (24 percent) had only partially understood this. Most striking of all, only six of sixty-eight subjects (9 percent) were able to recognize a single way in which joining a protocol would restrict the treatment they could receive. In the two drug studies in which adjustment of medication dosage was tightly restricted, twenty-two of forty-four subjects (50 percent) said explicitly that they thought their dosage would be adjusted according to their individual needs.
Two cases illustrate how these flaws in understanding affect the patient's ability to assess the benefits of the research. The first demonstrates the effect of a complete failure to recognize that scientific methodology has other than a therapeutic purpose. The second demonstrates a more subtle influence of a therapeutic orientation on a subject who understands the overall methodology but has certain blind spots.
In the first case, a twenty-five-year-old married woman with a high-school education was a subject in a randomized, double-blind study that compared the use of two medications and a placebo in the treatment of a nonpsychotic psychiatric disorder. When interviewed, she was unsure how it would be decided which medication she would receive, but thought that the placebo would be given only to those subjects who "might not need medication." The subject understood that a double-blind procedure would be used, but did not see that the protocol placed any constraints on her treatment. She said that she considered this project not an "experiment," a term that implied using drugs whose effects were unknown. Rather, she considered this to be "research," a process whereby doctors "were trying to find out more about you in depth." She decided to participate because, "I needed help and the doctor said that other people who had been in it had been helped." Her strong conviction that the project would benefit her carried through to the end of the study. Although the investigators rated her a nonresponder, she was convinced that she had improved on the medication. She attributed her improvement in large part to the double-blind procedures, which kept her in the dark as to which medication she was receiving, thereby preventing her from persuading herself that the medication was doing no good. She was quite pleased about having participated in the study.
In the same study, another subject was a twenty-five-year-old woman with three years of college. At the time of the interview, she had minimal psychiatric symptoms and her understanding of the research was generally excellent. She recognized that the purpose of the project was to find out which treatment worked best for her group of patients. She spontaneously described the three groups, including the placebo group, and indicated that assignment would be at random. She understood that dosages would be adjusted according to blood levels and that a double-blind would be used. When asked directly, however, how her medication would be selected, she said she had no idea. She then added, "I hope it isn't by chance," and suggested that each subject would probably receive the medication she needed. Given the discrepancy between her earlier use of the word "random" and her current explanation, she was then asked what her understanding was of "random." Her definition was entirely appropriate: "by lottery, by chance, one patient who comes in gets one thing and the next patient gets the next thing." She then began to wonder out loud if this procedure was being used in the current study. Ultimately, she concluded that it was not.
In this case, despite a cognitive understanding of randomization, and a momentary recognition that random assignment would be used, the subject's conviction that the investigators would be acting in her best interests led to a distortion of an important element of the experimental procedure and therefore of the risk/benefit analysis.
The comments of colleagues and reports by other researchers have persuaded us that this phenomenon extends to all clinical research.
Bradford Gray, for example, found that a number of subjects in a project comparing two drugs for the induction of labor believed, incorrectly, that their needs would determine which drug they would receive.7 A survey of patients in research projects at four Veterans Administration hospitals showed that 75 percent decided to participate because they expected the research to benefit their health.8 Another survey of attitudes toward research in a combined sample of patients and the general public revealed the thinking behind this hope: when asked why people in general should participate in research, 69 percent cited benefit to society at large and only 5 percent cited benefit to the subjects; however, when asked why they might participate in a research project, 52 percent said they would do it to get the best medical care, while only 23 percent responded that they would want to contribute to scientific knowledge.9 Back in the psychiatric setting, Lee Park and Lino Covi found that a substantial percentage of patients who were told they were being given a placebo would not believe that they received inactive medication,10 and Vincenta Leigh reported that the most common fantasy on a psychiatric research ward was that the research was actually designed to benefit the subjects.11
Responding to the Problem
Should we do anything about the therapeutic misconception? It could be argued that as long as the research project has been peer-reviewed for scientific merit and approved for ethical acceptability by an institutional review board (IRB), the problem of the therapeutic misconception is not significant enough to warrant intervention. In this view, some minor distortion of the risk/benefit ratio has to be weighed against the costs of attempting to alter subjects' appreciation of the scientific methods. Such costs include time expended and the delay in completing research that will result when some subjects decide that they would rather not participate.
Whether we accept this view depends on the value that we place on the principle of autonomy that underlies the practice of informed consent. Autonomy can be overvalued when it limits necessary treatment, as it may, for example, in the controversy over the right to refuse psychotropic medications. There, we believe, patients' interests would best be served by giving claims to autonomy lesser weight.12 But when we enter the research setting, limiting subjects' autonomy becomes a tool not for promoting their own interests, but for promoting the interests of others, including the researcher and society as a whole. We are not willing to accept such limitations for the benefit of others, particularly when, as described below, there may exist an effective mechanism for mitigating the problem.
Assuming that one agrees that distortions of the type we have described in subjects' reasoning are troublesome and worthy of correction, is such an effort likely to be effective? One might point to the data just presented to argue that little can be done to ameliorate the problem. The investigator in one of the projects we studied offered his subjects detailed and extensive information in a process that often extended over several days and included one session in which the entire project was reviewed. Despite this, half the subjects failed to grasp that treatment would be assigned on a random basis, four of twenty misunderstood how placebos would be used, five of twenty were not aware of the use of a double-blind, and eight of twenty believed that medications would be adjusted according to their individual needs. Is it not futile, then, to attempt to disabuse subjects of the belief that they will receive personal care?
Various theoretical explanations of our findings could support this view. Most people have been socialized to believe that physicians (at least ethical ones) always provide personal care.13 It may therefore be very difficult, perhaps nearly impossible, to persuade subjects that this encounter is different, particularly if the researcher is also the treating physician, who has previously satisfied the subject's expectations of personal care. Further, insofar as much clinical research involves persons who are acutely ill and in some distress, the well-known tendency of patients to regress and entrust their well-being to an authority figure would undercut any effort to dispel the therapeutic misconception.
In response, more of our data must be explored. In each of the studies we observed, one cell of subjects was the target of an augmented informational process, which supplemented the investigator's disclosures to subjects with a "preconsent discussion." This discussion was led by a member of our research team who was trained to teach potential subjects about such things as the key methodologic aspects of the research project, especially methods that might conflict with the principle of personal care.

By introducing a neutral discloser, distinct from the patient's treatment team, we shifted the emphasis of the disclosure to focus on the ways in which research differs from treatment. Of the subjects who received this special education, eight of sixteen (50 percent) recognized that randomization would be used, as opposed to thirteen of the fifty-one (25 percent) remaining subjects; five of five (100 percent) understood how placebos would be employed in the single study that used them, compared with eleven of the fifteen (73 percent) remaining subjects; nine of sixteen (56 percent) comprehended the use of a double-blind while only fifteen of fifty-one (31 percent) remaining subjects did so; and five of seventeen (29 percent) initially recognized other limits on their treatment as a result of constraints in the protocol, compared with one of the fifty-one (2 percent) other subjects.
Our data suggest that many subjects can be taught that research is markedly different from ordinary treatment. Other efforts to educate subjects about the use of scientific methodology offer comparably encouraging results.14 There is no reason to believe that subjects will refuse to hear clear-cut efforts to dispel the therapeutic misconception.

Novel approaches such as we employed may be one thing, of course, while routine procedures are something else. Perhaps our data derive from an unusually gifted group of patient-subjects. Will the complexity of explaining the principle of the scientific method defy understanding by most research subjects?
Undercutting the therapeutic misconception, thereby laying out some of the major disadvantages of any clinical research project, is probably much simpler than it seems. About the goals of research, subjects could be told: "Because this is a research project, we will be doing some things differently than we would if we were simply treating you for your condition. Not all the things we do are designed to tell us the best way to treat you, but they should help us to understand how people with your condition in general can best be treated." About randomization: "The treatment you receive of the three possibilities will be selected by chance, not because we believe that one or the other will be better for you." About placebos: "Some subjects will be selected at random to receive sugar pills that are not known to help the condition you have; this is so we can tell whether the medications that the other patients get are really effective, or if everyone with your condition would have gotten better anyway."
One can quibble about the wording of specific sections, and complexities can arise with particular projects, but the concepts underlying scientific methodology are in reality quite simple. And as long as subjects understand the key principles of how the study is being conducted, investigators can probably omit some of the detail that currently clogs consent forms and confuses subjects about the minor risks that accompany the experimental procedures, such as blood drawing. Overall, then, we may end up with a much simpler consent process when we focus on the issue of personal care.
Who should have the task of explaining the therapeutic misconception to subjects? Clearly, investigators should be encouraged to discuss such issues with subjects and to include them on consent forms, but several problems arise here. First, it is decidedly not in investigators' self-interest for them to disabuse potential subjects of the therapeutic misconception. Experienced investigators, as we have reported elsewhere,15 view the recruitment of research subjects as an intricate and extended effort to win the potential subject's trust. One of our subjects in this study described the process in these words: "It was almost as if they were courting me.... everything was presented in the best possible light." One could argue that it is unrealistic to expect investigators to raise additional doubts about the benefits that subjects can expect; any effort in that regard will result in resistance by investigators, particularly those who have yet to internalize the justifications for informed consent in general.
Second, even investigators who recognize the desirability of subjects making informed decisions may have great trouble conveying this particular information. When a researcher tells subjects that he or she is not selecting the treatment that will be given or that the medications being used may be no more effective than a placebo, the researcher is confessing uncertainty over the best approach to treatment, as well as the likely outcome. Harold Bursztajn and colleagues have argued that the essential uncertainty of all medical practice is precisely what physicians need to convey in both research and treatment settings.16 Yet, as Jay Katz points out, physicians have been systematically socialized to underplay or ignore uncertainty in their discussions with patients.17 In a recent report of physicians' reluctance to enter patients in a multicenter breast cancer study, 22 percent of the principal investigators cited, as a major obstacle to enrolling subjects, difficulty in telling patients that they did not know which treatment was best.18
Third, few researchers who are also clinicians feel comfortable acknowledging, even to themselves, that the course of treatment may not be optimally therapeutic for the patient. Thus there appear such statements as the following, recently published in The Lancet: "A doctor who contributes to randomized treatment trials should not be thought of as a research worker, but simply as a clinician with an ethical duty to his patients not to go on giving them treatments without doing everything possible to assess their true worth."19 The author concludes that since randomized trials are not really research, there is no need to obtain any informed consent from research subjects. Although this conclusion may be extreme, the example emphasizes the difficulties of getting investigators to admit to themselves, much less to their patient-subjects, the limits they have accepted on the delivery of personal care.
If there is concern with particular protocols, IRBs might consider supplementing the investigators' disclosure and the "courtship" process with a session in which the potential subject reviews risks and benefits with someone who is not a member of the research team. (John Robertson has proposed a similar approach, albeit out of other concerns.20) The neutral explainer would be responsible to the IRB and would be trained to emphasize those aspects of the research situation about which the IRB has the greatest concern. This approach might be especially appropriate when the investigator is also the subject's treating physician and the methodology used is likely to be interpreted as therapeutic in intent. The model we employed of using a trained educator (nurses are natural candidates for the job) worked well. It is certainly more manageable and less disruptive than the oft-heard suggestions that patient advocates or consent monitors sit in on every interaction between subject and investigator.
There may be advantages to using a trained, neutral educator, apart from aiding subjects' decision-making. Subjects' perceptions of the research team as willing to "level with them," even to the point of explaining why it might not be in subjects' interests to participate in the study, may increase their trust and cooperation. On the other hand, failure to deal with the therapeutic misconception during the consent process could increase distrust of researchers and the health care system in general, if subjects later come to feel they were "deceived," as a few did in the studies we observed. Enough experiences of this sort could further heighten public antipathy to medical research, particularly if they are publicized as some have been.21 The scientific method is a powerful tool for advancing knowledge, but like most potent clinical procedures it has side effects that must be attended to, lest the benefits sought be overwhelmed by the disadvantages that accrue. With careful planning, the therapeutic misconception can be dispelled, leaving the subjects with a much clearer picture of the relative risks and benefits of participation in research.
Acknowledgments

The authors acknowledge the invaluable assistance of Paul Soloff, M.D., in the collection of the data described in this paper.
References
1. Charles Fried, Medical Experimentation: Personal Integrity and Social Policy (New York: American Elsevier Publishing Co., 1974).
2. Arthur Schafer, "The Ethics of the Randomized Clinical Trial," New England Journal of Medicine 307 (Sept. 16, 1982), 719-24.
3. "Consent: How Informed?" The Lancet I (June 30, 1984), 1445-47.
4. Kathryn M. Taylor, Richard G. Margolese, and Colin L. Soskolne, "Physicians' Reasons for Not Entering Eligible Patients in a Randomized Clinical Trial of Surgery for Breast Cancer," New England Journal of Medicine 310 (May 24, 1984), 1363-67; Mortimer J. Lacher, "Physicians and Patients as Obstacles to a Randomized Trial," Clinical Research 26 (December 1978), 375-79.
5. Sissela Bok, "The Ethics of Giving Placebos," Scientific American 231:5 (May 1974), 17-23.
6. Paul S. Appelbaum, Loren H. Roth, and Charles W. Lidz, "The Therapeutic Misconception: Informed Consent in Psychiatric Research," International Journal of Law and Psychiatry 5 (1982), 319-29; Paul Benson, Loren H. Roth, and William J. Winslade, "Informed Consent in Psychiatric Research: Preliminary Findings from an Ongoing Investigation," Social Science and Medicine 20 (1985), 1331-41.
7. Bradford H. Gray, Human Subjects in Medical Experimentation: A Sociological Study of the Conduct and Regulation of Clinical Research (New York: John Wiley & Sons, 1975).
8. Henry W. Riecken and Ruth Ravich, "Informed Consent to Biomedical Research in Veterans Administration Hospitals," Journal of the American Medical Association 248 (July 16, 1982), 344-48.
9. Barrie R. Cassileth, Edward J. Lusk, David S. Miller, and Shelley Hurwitz, "Attitudes Toward Clinical Trials Among Patients and Public," Journal of the American Medical Association 248 (August 27, 1982), 968-70.
10. Lee C. Park and Lino Covi, "Nonblind Placebo Trial: An Exploration of Neurotic Patients' Responses to Placebo When Its Inert Content Is Disclosed," Archives of General Psychiatry 12 (April 1965), 336-45.
11. Vincenta Leigh, "Attitudes and Fantasy Themes of Patients on a Psychiatric Research Unit," Archives of General Psychiatry 32 (May 1975), 598-601.
12. Paul S. Appelbaum and Thomas G. Gutheil, "The Right to Refuse Treatment: The Real Issue Is Quality of Care," Bulletin of the American Academy of Psychiatry and the Law 9 (1982), 199-202.
13. Cassileth et al., op. cit.
14. Jan M. Howard, David DeMets, and the BHAT Research Group, "How Informed Is Informed Consent? The BHAT Experience," Controlled Clinical Trials 2 (1981), 287-303.
15. Paul S. Appelbaum and Loren H. Roth, "The Structure of Informed Consent in Psychiatric Research," Behavioral Sciences and the Law 1:3 (Autumn 1983), 9-19.
16. Harold Bursztajn, Richard I. Feinbloom, Robert M. Hamm, and Archie Brodsky, Medical Choices, Medical Chances: How Patients, Families, and Physicians Can Cope with Uncertainty (New York: Free Press, 1984).
17. Jay Katz, The Silent World of Doctor and Patient (New York: Free Press, 1984).
18. Taylor et al., op. cit.
19. Thurston B. Brewin, "Consent to Randomized Treatment," The Lancet II (Oct. 23, 1982), 919-21.
20. John A. Robertson, "Taking Consent Seriously: IRB Intervention in the Consent Process," IRB: A Review of Human Subjects Research 4:5 (September-October 1982), 1-5.
21. Dava Sobel, "Sleep Study Leaves Subject Feeling Angry and Confused," New York Times (July 15, 1980), p. C-1.
