Wendler, David "Dave" (2025)
Transcript
BA: Good afternoon. Today is July 15th, 2025. My name is Brittany Acors. I’m a postdoctoral fellow in the Department of Bioethics in the Clinical Center. Today I have the pleasure of speaking with Dave Wendler. Dr. Wendler has worked at NIH since 1993 and is currently Acting Chief of the Clinical Center’s Department of Bioethics. He has written widely on such topics as assessing risks systematically, respect for autonomy, assent in pediatric research, research with stored biological samples, and moral status. He is also an attending on the Bioethics Consult Service and a member of the NIH Intramural Institutional Review Board (IRB). Thank you for taking the time to talk with me today, Dr. Wendler.
DW: Thank you for having me.
BA: I’d like to start off by talking about your education. Can you tell me about your path through the University of Pennsylvania, where you earned your BA in biology and philosophy in 1984, and the University of Wisconsin–Madison, where you earned your master’s and PhD in philosophy in 1993?
DW: Sure. Maybe it helps to give some context. I sometimes think that there are two types of academic people. There are the types who, from early on in their career, have a vision of who they are and who they want to be and how they want their career to go, in this 30,000-foot view of their lives and their careers. The individual decisions they make and the steps they take are typically informed by that vision. The second type is people who are more focused on the challenges, the demands, the circumstances in front of them at the time, and they just respond to those and end up stringing those all together into a life. I am very much, emphatically, the latter type. I have never had a broad, deep vision of how I want my whole life to go. I rarely step back in that way, and I tend to be focused on what’s in front of me. I think that is useful for understanding the arc—if that’s the right word in this case—or the series of steps that make up my career. About four or five years ago, I was asked to give a talk about my career, and I titled the talk “Right Decisions for the Wrong Reasons.” I titled it that way because I feel that, in a lot of cases, I made decisions to take a next step based on what I was thinking, what I wanted, what was going on in my life at the time. And then, when I got to that next step, I was glad I had taken it, but I liked it for completely different reasons than the reasons I had taken the step in the first place.
I spent a lot of my youth playing sports, and I was into a lot of different sports. When I got towards the end of high school, I wanted to continue playing soccer in college. I got recruited by different places to play soccer, and I ended up going to the University of Pennsylvania. I realized it was a good school, but in large part, I went because I was interested in going there to play soccer. I got there, and after at least the first season, probably earlier, I realized that it wasn’t what I wanted to spend my time doing. Things were fine with the school, I was enjoying myself and I had a lot of good friends, but I realized that I didn’t have the kind of competitive nature, at least competitive athletically, that would make for a happy, flourishing, and successful collegiate athlete.
So, I stayed at Penn, but I stopped playing soccer. I enjoyed lots of different things, but I didn’t know what I wanted to do in my life or in my career. You had to fill out a form every semester for what your major was, and the last box you could fill out, at least at the time, was UND, and that stood for Undecided. I used to joke that I was going to get a doctorate in UND. I stayed with that for a long time, but eventually I had to pick a major. I had taken a bunch of biology classes, not because I was particularly interested in biology over other things, but because I had studied sciences in high school, and I became a biology major. Didn’t enjoy it very much, but that’s what I was used to.
At one point, I was sort of despairing of my collegiate education. I remember one time I was in a biology class studying evolution, and I asked a question about progress. The professor basically said, look, those aren’t the kinds of questions that we discuss here. He pointed to the very large, fat book he had on his desk and said the goal of this class is basically to memorize this book. I didn’t say this, but I thought, that’s why we have computers. Why is it that we’re supposed to be memorizing this? I remember having to memorize the life cycle of blue-green algae. The only two things I remember anymore about it are that it’s really complicated and it’s really, really boring.
I was bemoaning this to one of my friends, and they suggested I take a philosophy class. They had taken a class with this professor they really enjoyed. I took that class, and I loved it. I thought, this is exactly what I wanted to get out of an education. It was too late for me to be a straightforward philosophy major, but I learned from that professor—it was Elizabeth Flowers that I really got attached to—about what was called a philosophy of science major. I could get a major with half as many philosophy classes and use the science classes to fill the other side of the major.
I became a philosophy major initially because it seemed like a fun thing, to sit around with a bunch of other people and just chat about philosophical ideas. And then I got to the end of my four years, and I had to figure out what I was going to do with my life. I had no idea what I wanted to do, but I come from a family of teachers. My mom taught first grade for a long time; my sister taught first and second grade for a long time; my aunt taught fourth grade for a long time; two of my cousins taught elementary school for a long time; my dad had been a chemistry teacher before he became a principal and then a superintendent. And I think I had some of that too. Teaching seemed like something that might be good, I thought. Well, teaching in a college could be an interesting thing to do, teaching philosophy, because I enjoyed those classes. I enjoyed the way the professors just tried to get us to think through problems and issues and come up with our own views. As a result of that I decided, OK, I’m going to go to graduate school, try to get a PhD in philosophy, because I really want to teach philosophy at a college, and in order to do that, you typically need a PhD.
I had been doing all this philosophy of science, and I was telling the same professor that I’d started out with about this plan. She said that there was a relatively new field that was becoming popular called the philosophy of biology, which I hadn’t heard of. But I studied philosophy, and I took a bunch of biology classes, so that sounded like an interesting thing.
Turns out that one of the leading people in the field—who I think is still there, maybe he’s just about to retire—is a guy named Elliott Sober, who is a very prominent philosopher of biology at the University of Wisconsin. I applied there, I got in, I went, and I studied with him. I thought, OK, I’m going to do philosophy of science and philosophy of biology, and then I’m going to be a teacher. I don’t really care about writing the papers. I don’t really care about doing the research, but I was interested in becoming a teacher.
That turned out to be the right decision for again the wrong reason, because as I went through graduate school, I learned that I do enjoy teaching, but what I really like is doing the research. That just completely surprised me. When I got to college and realized I didn’t want to be a collegiate athlete, that was surprising because my whole identity up until then was as an athlete. This was a similar kind of surprise when I realized that, no, I think what I like is actually doing the research.
I studied and started thinking about being a professor and being able to do research. I was doing philosophy of science, and metaphysics and epistemology, which are just two different academic fields within philosophy. While I was there, I supported myself as a teaching assistant (TA), and for two semesters I was a teaching assistant in Philosophy 341, which I’m told still exists at Madison, and it’s called Contemporary Moral Issues. I TA’ed that class twice for a guy named Dan Wikler, who’s a very prominent guy in bioethics. We got to know each other; we got along. Dan was always very keen on trying to identify people to get into bioethics. I remember maybe three or four times over my career, we had this big set of mailboxes in the hallway of the philosophy department, and he would put flyers about bioethics in the mailboxes of a couple of us that he was trying to attract into bioethics. I remember every time I would grab one, read it, and think to myself, that’s just not philosophy. That’s not what I want to do with my life. And I would throw it in the trash, and I told him that the couple times he asked me about it.
So then I got to my last year. At least at this time, it took a long time to get a PhD when you’re TA-ing; it took me 7 years. When I was going out on the market then—and now it’s even worse—like a lot of other academic positions, philosophy jobs are really hard to get, so you apply for everything. Unbeknownst to me, Dan was at a meeting here in Bethesda at the NIH, and they were just thinking about starting a bioethics fellowship program. Dan recommended me, so I got this phone call one day from somebody saying they’d like to interview me for a position in bioethics. Oh, I can say this now—when I used to tell this story early on, my boss at the time would say, “Don’t tell that story; it doesn’t sound good.”
But the fact is that, at the time I interviewed for that, I had never studied bioethics at all. I didn’t really know anything about bioethics. I’d never taken a bioethics class. I had taken one moral philosophy class and I TA’ed this Contemporary Moral Issues class, but really my philosophical training was in philosophy of science and epistemology, which is just a completely different field. But I thought, well, the job market is terrible, so you apply for everything, so I said OK. They flew me out here for an interview, and even then, it just seemed like I wasn’t that interested in bioethics. This is another case of me just completely misjudging myself and what I wanted to do with my life. But I thought at the time, I’m just not interested in doing bioethics. But I’m from New Jersey, I went to undergrad in Philadelphia, I’m very much an East Coast urban person, and being in DC for a while seemed great. I still wanted to be a philosophy researcher, but what I knew from people who preceded me at Madison and had gotten jobs was that there were universities and colleges in the kinds of places that weren’t that congenial to where I wanted to be. I thought being in DC for a couple years would be great and be really exciting.
I came here, and I did [the fellowship] for about two years. I did it for a year and thought, this is interesting, I’ll do it for one more year. After that second year, Zeke Emanuel then came on as the boss and completely transformed the department into basically what it looks like today. And after that I started getting very interested in bioethics, completely against what I had predicted. And it is now 32 years later.
BA: I’d love to hear more about that transition, between what bioethics first looked like when you came in 1993 and then the changes that Zeke brought.
DW: There’ll be a gap from what I can tell you in there. If you’re interested in filling in some of that gap, the two people you might talk to are Evan DeRenzo, who was my officemate when I first got here, and Alison Wichman, who’s a neurologist here and did a fair amount of bioethics here for a while.
When I first got here, that part of the history I can’t tell you very much [about] just because at the time I barely could tell up from down. It was just such a different world from anything I’d ever known or experienced. The guy in charge of the department at the time was named Fred Bonkovsky. There had been some kind of disruption or something—I never even really found out what it was, but a number of things had happened a couple years before I got here, and they decided to basically clean house and start over. They brought Fred in as a consultant on how to modify the department. He stayed on for a while, and he was here when I got here. The message Fred had gotten was that what was really important was clinical responsibilities and clinical work, so for the first two years of my being here, I basically got thrown into that. When I first got here as a fellow, within the first two months, I was going on four sets of clinical rounds every week, I was on four IRBs, and I was taking call. It was an 85% clinical position.
I remember from the very beginning, after my first month, I was going on rounds in the intensive care unit (ICU), which used to be in 10D in the old hospital, on the tenth floor. For truly the first six months when I was in [the ICU], I just had absolutely no idea what anybody was saying. It’s basically numbers; it’s abbreviations; it’s acronyms, and they go by very fast, and it’s hard to even remember them, much less understand their significance. For the first almost year, basically the vow I made was I was going to remember two abbreviations or acronyms, and then I was going to go back to my office, and I was going to look them up and figure out what they meant. The problem was, I was too embarrassed to ask anybody. AST and ALT are these liver function tests with abbreviations, but they would say them so fast. I was like, was it A-C-T or A-Z-T? And so, I’d have to try a bunch of times before I even got it right. And then I thought, OK, those are liver function tests. Those are two of them, I’ve got those. I just spent a lot of my time doing that, which in a way was really strange, but in another way, I found it interesting.
I love traveling, and one of the things I’ve also gotten out of this job over the years is traveling. I love traveling to different places and different cultures. And really, for at least a year or two, I basically felt like that’s what my job was. I was in this very important, significant, distinct culture: medicine, hospitals, ICU in particular. They speak a very specific, specialized, technical language. I had never been in that culture or that setting before. I didn’t speak the language, but I was given this sort of bedside, ringside, front-row seat on all of it. And it was fascinating. I just found it absolutely fascinating.
BA: When you came, it was still called the Clinical Bioethics Program. Were you the first postdoc fellow?
DW: Well, I don’t know. Evan might have been the first. There was also this guy when I got here named John Schumacher. He went on to be a sociologist at Case Western Reserve for a while. I was certainly one of the very first, but Evan might have been the first or there might have been one before. I don’t know that John Schumacher was a fellow, but I’m not sure what his role was either, so you could definitely say I was at least close to one of the first.
BA: Or at least very early on.
DW: Very early, yes.
BA: When Zeke came, what kinds of changes happened?
DW: Zeke came between my second and third year, two-and-a-half years or so in. There was a new head of the Clinical Center, that’s the hospital, a guy named John Gallin, who had been a scientific director of the Infectious Disease Institute, and he was thought of as being a very technical, very scientific guy. When he became head of the hospital, they decided to figure out what to do with the Bioethics Program, which was kind of haphazard and very small, and they weren’t sure what they were going to do with us. A lot of people thought at the time, or a lot of what I heard at the time, was that [Gallin] is a real technical, scientific guy. He’s just going to get rid of it. He’s not interested in having a bunch of ethics people around trying to tell him what to do. He’s just going to get rid of you guys. That’s what we thought initially.
It turns out that John Gallin studied a disease called CGD [Chronic Granulomatous Disease], which is a very rare disease, but it’s studied here a lot. It’s a very bad disease in which people are born with impaired immune function, and as a result they get lots of bad lifelong infections, spend a lot of their life in the hospital, get lots of treatments. It’s a very tough disease. He studied that disease and took care of and saved the lives of a lot of patients who were really sick, but in that process, it turns out that he had had a couple of ethical dilemmas and called some bioethics consults. And as he told me later, he found them really helpful. And so just completely unexpectedly from what everybody thought was going to happen, he decided not only that he didn’t want to get rid of the bioethics program, but that he wanted to have the very best bioethics program he could possibly have.
So what he did was he called up a friend, said find me the six or eight most prominent bioethicists in the country. They did that, and he called a meeting of all of them, had a whole day meeting devoted to it, because he was really fired up about this now, and decided on what the vision would be for the bioethics program. One of the goals was to hire a really strong leader, so there was a national search, and they hired Zeke.
I’m the latter type of the two types of academics I mentioned, where I sort of focus on what’s in front of me. Zeke is much more of the former. He has big visions about what he wants to do and what he wants to do with his life. This department, and myself in particular, dramatically benefited from that. Zeke had a really clear, strong idea of what he wanted this program to look like, and I remember, when he first came, after about the first month—Zeke has this very distinctive voice; whenever I start telling Zeke stories, it starts coming into my head, and then I try to imitate it and I do a terrible job of it, so I’m going to control myself this time. But he took me into his office, and he just sat me down. He’s a very blunt, straightforward guy, and he says, “What do you do? What do you do here? Tell me what you do. Tell me how long you’ve been doing it.” So I basically went through the litany of things I was just describing to you a little bit earlier: well, I go on the ICU rounds, I go on the cancer rounds, I go on the infectious disease rounds, I go on the pediatric rounds, I go on the HIV rounds, and then I told him about the four IRBs I was on. And he said, “Stop, stop, stop, stop. Why do you do that? What are you doing? Why are you doing that stuff? What are you doing?” And he basically started yelling at me [laughs], and I said, “Well, that’s what I was told to do. That’s what I’ve been doing for two years.”
So, what’s interesting is that, as I mentioned before, Fred, my first boss, had gotten the message that the way to have a flourishing bioethics program is to really focus on the clinical things. I think that’s true, and it remains true to this day. But Zeke was a very different kind of person, and he got a very different message. He believed that this is the NIH, this is one of the premier institutions in the world for doing research. And he thought, if we’re going to get respect, if we’re going to do well as a department, we need to be doing research. He basically said that very first day, I’m taking you off at least three of those IRBs, if not all of them, and I’m taking away at least three or four of those clinical rounds. I want to give you more time to do research.
Now for me that was actually great, because as I said, I was still very interested in doing research, and this freed me up to do a lot more. If you look at my CV in the first couple years here, there was like a paper here and a paper there, because I was in IRB or in clinical rounds all the time. Zeke really freed me up to do a lot more research, so that was one of the big changes.
BA: Was that the point when you transitioned from fellow to faculty?
DW: Basically, I think so. Now that I’m the acting chair of the department, I appreciate that there are all these distinctions, and you might not track the details of them if you’re not an HR (human resources) type person. I remember that, for the first couple of years when Zeke was here, we had this tradition then, and we still have this tradition now, that when you give a talk or we have a visitor, everybody in the room first goes around and introduces themselves, says who they are, and gives a sentence or two on what work they do. I remember during those meetings for at least six months, if not a year, we would go around, and I would say, “Hi, my name is Dave Wendler, I’m a philosopher, and I’m a fellow here”, and right as I started to describe what I was doing, Zeke would yell at me and say, “You are not a fellow.” And I would say to him, “OK, Zeke, what am I?” And he’d say, “I don’t know, but you’re not a fellow.” [laughs] I was like, OK. Then I would just say, “OK, my name’s Dave, and I’m not a fellow.” I was in this ambiguous status for a while, and then eventually at some point, I don’t even know when this happened, how this happened, I can’t even remember if I applied for something or if Zeke just did it, but I ended up going from being a fellow to being on the faculty.
BA: You just kept coming back every day, and they kept letting you!
DW: Yeah, when I introduce myself, sometimes I say to people that I came to the department at the end of 1993, didn’t know what I was doing. I enjoyed it, and I have become extraordinarily attached to the NIH. I think it’s just an absolutely wonderful place, and they just haven’t figured out how to get rid of me yet.
BA: Well, they have also given you a lot of responsibilities. So even starting back in 1996, you became head of the Unit on Vulnerable Populations. What did that entail?
DW: Good question. That was preceded by [a conversation with Zeke]. I live in DC. I take the metro back and forth to work. At the time, Zeke was actually living in Chicago. He would fly in on Monday mornings and fly out on Wednesday evenings, and in between, he would stay at a friend’s house in Cleveland Park, a neighborhood near me. Every once in a while we’d run into each other, and we’d take the metro home together in the evenings, and we started having these conversations. This was me with my version of a career and Zeke with his: What do you work on? What do you want to work on? And my initial reaction to that question is always, what am I interested in? I’m interested in the interesting things, and I’m not interested in the uninteresting things. And he would yell at me again, in his inimitable way, and say, “That’s not enough. You can’t be an academic, you can’t make a name for yourself that way. That’s ridiculous. You’ve got to have a field. You’ve got to have an area. You’ve got to have something you work on.”
I said, “Alright, what should I work on?” And he had two ideas—he was much more attuned to bioethics than I was at the time. And he said, “OK, vulnerable populations, that’s big now. You work on research on vulnerable populations, and we’re going to make you head of the Unit on Vulnerable Populations, and you’re going to work on that.” I was like, “OK.” [Zeke continued,] “And the second thing was stored samples—stored samples are becoming a big deal, consent for it is a big deal. We need to figure that out. Work on that.” I said, “OK.” So that’s how the Unit on Vulnerable Populations got created. I mean, initially it was just me, so I don’t know if it was really a unit. It was a person, but at the time I think it was valuable to have a title just to describe what I do.
One of the things I’ve come to love about this job, I realize now, is that what I initially wanted to do when I was in graduate school was research on academic philosophical issues like free will. I was into epistemology, and whether or not you can know, and when you can know things, and when you can claim to know. I was very interested in this stuff. Now I realize that if I had spent my career just working on purely academic philosophical stuff, I probably would have just gone nuts a long, long time ago. What saved me, and what’s wonderful for me about this place and about bioethics, is that there are deep, interesting, fundamental, important ethical issues involved, but it has this practical aspect or significance or implications, and I really like that combination. I started thinking about it in different ways, but at the time and still to this day, some people, when they think about vulnerable populations, they understand that to mean individuals who can’t give informed consent and therefore are at greater risk than other people. For instance, three-year-olds, and people with severe dementia, and people who are unconscious—that was a lot of the talk about research on vulnerable populations. For me, that became a very interesting area to work on because the way I started thinking about it was thinking about, OK, you want to do research on these people who can’t consent. How do you do that ethically? The way I approached it was trying to think, well, why is consent valuable in the first place? What’s morally important about getting people’s consent before you put them in research? Try to figure that out, and then if you can figure that out, try to see, when somebody can’t give consent, what are other ways or protections or requirements or conditions you can put on the research in order to try to realize those goals or conditions as well as you can? 
And so I’ve been thinking about that, I still keep thinking about that, and I started thinking about that a long time ago. It led me to thinking about really rich, fundamental, philosophical, ethical issues, but in a way that has this relevance, for instance, for when you can enroll a two-year-old in research. That practical aspect of it is really gratifying.
I’ll give you one more story about this. I remember being excited about the practical implications of the work, although there is a risk or a challenge to that which I learned distinctly. One of the IRBs I was sitting on initially was the Child Health IRB. They had a deceptive study. Deceptive research is actually fairly common. In order to do certain kinds of trials, you can’t honestly tell people everything about the study. For instance, you want to figure out how people react when you make a loud noise behind them. But if you say to them, “In a minute, I’m going to make a loud noise behind you,” it completely changes how they react, so then you’re not getting an accurate picture of how people react when they hear a loud noise that they weren’t expecting, and that’s what you were trying to study in the first place. You need to either not tell them certain things or describe the study in an inaccurate way. That happens a fair amount in order to collect valid data.
We [the Child Health IRB] were looking at a study like that and some of the people in the IRB thought that’s just unethical, you can’t do deceptive research, you can’t give people false information as part of the informed consent process, you can’t do this study. Other people thought, eh, we deceive each other all the time, that’s sort of what advertising is. You just deceive people about your products. We do that all the time in daily life, what’s different about this? It’s fine. And I thought neither of those views really seemed right to me, there’s got to be some in-between ground where it certainly seems ethically problematic, but it doesn’t seem to necessarily follow that you can never do it. We just need to figure out when it might be acceptable or how you can make it acceptable.
I thought about that for a while. It’s one of the first papers I wrote in bioethics. The typical experience of a philosopher is you write a paper, and maybe 50 people in history read it, and the people who read it, all they really care about is trying to figure out some problem with your argument or coming up with a counter example so they can get something published. And that’s the way the academic game works. In this case, I wrote this paper on my proposal for how to do deceptive research ethically. And I got one phone call, and I got a couple of emails. When I first got those, I thought, oh no, there’s some counterexample these people have figured out. Turned out, it was actually from investigators who did deceptive research and were interested in using my approach, and they had some questions about the details. I remember one time, sitting in my office talking to this investigator on the phone before I left for the day, and as I walked out of the office, I was very excited that there was somebody who’s actually paying attention to what I said, not because they disagreed with it or wanted to publish something on their own against my argument, but because they thought it was interesting, and they wanted to try to implement it in their own work. Maybe I was going to have this tiny, tiny, tiny impact on the world, and that seemed really exciting. And then as I got to the metro, about an eight-minute walk from my office to the metro here at the NIH, I had maybe six minutes of feeling great and excited about it. Then just as I got to the metro, I thought, what if I’m wrong? [laughs] What if I’m wrong, and I tell this guy all this stuff, and he’s going to do it, and it’s just going to mess up his research.
That is the one thing about having the practical significance or implications: there’s the possibility that you’re wrong, and if you’re wrong, you possibly mess things up. I think that’s something that doctors—and this is one of the reasons why I can never be a doctor—just have to learn to live with. Doctors just inadvertently, accidentally hurt people frequently, and every doctor just has to learn to accept that and figure out either how to get on with their life or drop out of the profession. Philosophers don’t have that problem, because basically nobody pays attention to what they do except for other philosophers. So you’re at least insulated from that worry. This was my first encounter with it, and in a very tiny way. It was a little bit scary, and I’ve come to terms with it over the years, but it took a while.
BA: That makes a lot of sense. It seems like that’s also background to your research on decisional capacity and surrogate decision making. Can you talk a little bit about that topic and what your contributions have been?
DW: That’s exactly right. Thinking about deception, thinking about vulnerable populations, thinking about people who can’t consent—what that got me thinking about initially was how do we decide whether to enroll somebody who can’t decide for themselves in research. The famous story that gets told in bioethics on this topic starts with the Nuremberg Code, which was a court ruling after World War II on the Nazi atrocities and the horrific experiments they perpetrated on prisoners. The thought at the time was that the way you avoid those abuses occurring again was to require informed consent. So famously, the Nuremberg Code starts out with the first principle. The whole code is just ten principles. The first one says that informed consent is essential to ethical research. That seems like it means you can’t enroll in research somebody who can’t give informed consent.
Now, that is certainly a very powerful way to protect people who can’t consent from being exploited or taken advantage of in research. But there are two big problems with it: one is that sometimes being in research can be valuable for the person. For instance, if it offers them access to a drug they can’t get outside of research. The second problem is that, if you completely preclude research with individuals who can’t give informed consent, then you cut off the possibility of doing research and therefore learning things to improve medical treatments for those groups or populations. The big example that people talk about here is pediatrics and little kids. If you can’t ever enroll a three-year-old or four-year-old or five-year-old in a research study, then it’s very hard to impossible [to know if a treatment works for them]. Now maybe alternative methods in computers and AI are going to address this challenge, but right now the only way we know how to figure out whether a certain drug works in a three-year-old is to give it to a three-year-old or to give it to a four-year-old and see. In order to make progress, you need to enroll these people in studies.
My thought was OK, how can we do this right? I thought about that for a long time and it’s interesting. I sort of did it backwards. I started thinking about it from the research setting. And then I just, after a while, started thinking about this in the clinical setting. I always tell people if you look at bioethics and you look at philosophy, there’s a lot of emphasis on informed consent for both research and clinical care, on getting informed consent from patients in the clinical setting. It turns out, though, that a surprising number of medical decisions—including important ones, maybe the majority of really important medical decisions, like whether to put somebody on a respirator, whether to take them off a respirator, whether to put them on dialysis—very frequently, maybe in the majority of times, are made by somebody other than the patient, because the patient is so sick that they can’t make a decision for themselves at the time. So practically, it’s really important to try to figure out how to make decisions for these patients. What I started thinking about, based on my thinking in the research setting, was applying to clinical care and trying to think about how we can make these decisions and how we should make these decisions for patients who can’t make decisions for themselves.
BA: One of the newer directions that work is taking is the patient preference predictor (PPP). Can you talk about that project that you’re working on now?
DW: Sure. What we’re talking about here is a patient who needs medical treatment, but they can’t make decisions for themselves. Maybe they’re in a car accident and they’re unconscious. Or maybe they’ve developed severe Alzheimer’s disease. Or maybe they have a really bad infection and that’s given them this enormous temperature and they can’t think straight. Or they just had a lot of steroids and they’re not able to think straight and somebody has to make a medical decision for them. How are we supposed to make medical decisions for them?
Well, the reigning standard—ethically, clinically, and also legally—is what’s called the substituted judgment standard. The basic idea there is that, when somebody’s making a medical decision for somebody else, they are usually called a surrogate. The surrogate is making the decision for the patient. That’s often the next of kin: the wife, one of the children, a sibling. They’re making a decision for their loved one: whether they have surgery, whether they are intubated, whether they have dialysis. The substituted judgment standard says to the surrogate, don’t make the decision that you would want if you were in this situation, and, furthermore, don’t make the decision that you want for your loved one. Instead, you should try to figure out what decision that person would make for themselves. So that’s called the substituted judgment standard. It’s basically standing in the shoes of the person, imagining you’re them for a moment, and trying to figure out what they would decide if they were faced with this decision.
That’s the standard, so one of the first things we started working on was looking at an obvious question: How good are surrogates at doing that? How good are they at predicting or guessing how their loved ones would want to be treated? We didn’t do this research ourselves; a lot of it already existed at the time, but it tended to be very small studies. For instance, a lot of them tended to be studies that nurses did in order to get a master’s in nursing. And what they would do is they would just take five, ten, or fifteen couples or pairs, maybe a husband and a wife who were in the hospital, and then separate them. You put them in different rooms and say, the husband’s the patient, and you say, here are a couple scenarios: Imagine you’re in a car accident. You get hit on the head, and the doctors have to decide whether to give you blood, to give you dialysis, to give you antibiotics. Would you want those things? You get the “patient’s” answers, then you go into the other room, and you ask the spouse the exact same questions, but about how they think that their partner in the other room answered these questions. Then you put all those together [and see how well they did].
So what we did was a systematic review, where we put together the results of many studies that had done that, and basically when you do that, you get an estimate for how good we are at predicting the medical treatments of our significant others, of our next of kin, of our first-degree relatives. And the answer is terrible—we’re really, really, really bad at it. The data suggests that maybe we’re slightly better than guessing, but not much better. That was the first result, so that was what got us started. That, in itself, I thought was interesting. And I thought, OK, so if we’re going to try to work on this field on this issue, we want to try to figure out ways to do better.
How can we do better? Well, one thing that was known at the time was that patients’ treatment preferences tend to be correlated or associated with different aspects of them. Not surprising ones: if you are 20 years old, you are more likely to be willing to undergo really aggressive treatment in an ICU for weeks or even months in order to help you live the rest of your life. If you’re 95, you’re much less willing to undergo a couple weeks or a couple months of aggressive care because you think, well, I don’t have that much time left. So there are lots of associations like that, some of which we knew. So initially we got Liz Garrett-Mayer, who was a statistician at Johns Hopkins at the time, and we worked out some of those correlations. We had her create this very, very simple computer model, then we went back to those studies that I just mentioned where they talked to the spouse and they talked to the patient. What you can do with those is you can get [an idea] of what treatments patients want and don’t want in different scenarios, and then you get certain aspects [demographic factors] of them. We then tried to build this model, a really simple, crude model, because we didn’t have much data to use; we just had what was reported in these studies, which wasn’t much. With just a tiny bit of data, we built this to try to predict the treatment preferences of these patients. And we found something that, to me, was one of the most shocking things I’ve ever found in a study, which was that that simplistic tool we had developed was at least as good at predicting the treatment preferences of patients as their spouses were. At the time, I almost thought we must have cheated or something, because it just seemed unbelievable to me.
Basically, what that means is, if you don’t know somebody, and I tell you one or two things about them, you’re going to be as good at predicting what treatments they would want at the end of life as the person who’s been married to them for 45 years. Now, when I first heard that, I thought that just doesn’t make any sense at all. It just can’t be right.
So, I was presenting that at a works in progress (WIP) session a long time ago. I was doing a WIP on this study, and at the time, one of our wonderful colleagues was a guy named Alan Wertheimer, who’s a philosopher. I presented these data, and I said, I almost don’t believe them, it’s just so surprising. It turns out that he knew a lot of literature that I didn’t know anything about, and he said it’s actually not surprising. He said there’s a lot of data on our ability to predict the preferences of other people, and those data suggest a couple things: one, they suggest that we tend to be really bad at it. The other thing that was interesting: they suggest that the better you know somebody, the longer and more intimate the relationship you have with them, the worse you are, not the better, at predicting their preferences. That’s bad news for surrogate decision-making, because what we tend to do is rely on the people who know the patient the best, their spouse, their next of kin. Now there are lots of reasons and possible explanations for that which we could talk about if you’re interested, but I found that really striking.
That got me into thinking, well, we were able to do just as well as the spouse with only tiny bits of information. If we could develop this computer model more, and get more information about the patient, we should be able to do better. We’re not going to do worse, and we could possibly do better. Another thing to note is that all these studies were at a very calm moment. They were like, imagine your husband’s been in a car accident and your husband is dying, or something like that, what would you think they want? Well, nobody has the data on this because no one’s going to ever do the study, but presumably, in the real setting, our ability to make these decisions is not going to get better. If you’re actually in that setting and you think your spouse is dying, your brain is not going to be operating very well. On the other hand, these computer programs aren’t going to be affected by that in a negative way, so that’s another reason to think that if we got these things and we did it right, they would be more accurate. I thought initially that they might be a little bit more accurate. There are reasons now to believe they’d be a lot more accurate than surrogates. And so I’ve been working on that for a while. I wrote a bunch of papers on it, and it didn’t get a lot of attention for a long time. Recently, which is nice, it’s gotten a lot of attention because now we have things that we didn’t have when I was starting to work on this from 2004–2006. This was before we had ChatGPT and all these AI sorts of systems. Those suggest a lot more promise. With electronic health records and things where we could get information about the specific individual, these algorithms, the PPP or PPPP (P4)—it’s either patient preference predictor or personalized patient preference predictor—could get to the point where they’d be much more accurate, using AI, taking advantage of electronic health records, and other sorts of things. It could do, I think, a lot better.
The problem now is a very prosaic one, which is that we need to build one and test it, and that’s probably going to cost at least $5 million. It might cost $10 million. I can’t possibly do that. That would be our departmental budget for years. Basically, everybody else would have to stop working, and we’d have nothing for four or five years if I tried to do that out of our departmental budget. That’s not going to happen, so what I’ve been doing over the last couple years isn’t so much doing academic work on it. It’s trying to interest somebody in actually developing [the technology]. I spent a number of years writing to lots of foundations and never really got any uptake, so I’m hoping at some point somebody will do it.
I’ll say one more thing about that. The initial goal is to try to increase the extent to which patients who aren’t able to make decisions for themselves nonetheless get treated in the ways that they want and don’t get treated in ways they don’t want. I think that’s an important goal. Whenever we survey patients about this, they really like the idea of a PPP. They think it could be really valuable. The primary goal there is to try to get patients treated in the way they want, even though they can’t make decisions for themselves.
But there’s a possible secondary benefit, which relates to what we call surrogate burden. Along with a previous fellow, now faculty member, Annette Rid, another project I started working on was thinking about what it’s like for family members to make these decisions. When we started giving the predictive accuracy data, that family members are really bad at making these decisions, some people had the reaction of, who cares? We just want these people to make these decisions anyway. That’s their role as the next of kin. That’s what they should do. What I thought was, well, it’d be interesting to look at what the impact is on them. We looked at a lot of data and it turns out that, at least in a lot of cases, the impact is very negative. It’s really, really hard for people to make these decisions. They feel responsible for what are often very sad outcomes, like a person being taken off life support and dying. It can make people feel responsible and haunted by these things for, the data suggests, months or sometimes even years.
And so the thought is that if we could build one of these PPPs, maybe—and this is just speculation—but maybe a side effect would be that it would take that burden of responsibility off of the surrogate. And if it did that, the hope is maybe it wouldn’t be nearly as bad of an experience for the surrogate, so it might have that side benefit too.
BA: Sounds like definitely a new frontier in bioethics. One of the things you mentioned from very early in your time at NIH was that you were on many IRBs, and then you got taken off them when Zeke started. But now you are back on as a member of the NIH Intramural IRB. What do you do in that role?
DW: Zeke was never able to get me off all of them, despite all of his power, but I got reduced from five to four to three to two, and then eventually to one, and now I’ve been on one for a long time. Institutional review boards, or IRBs, are important bodies that basically review human subjects research before it gets started. Their goal is to make sure that the research is consistent with the federal regulations on research with human subjects and that it’s scientifically appropriate and ethically appropriate. It’s a really important job. I sit on the committee once a month, and we review the studies in the intramural program where I work at the NIH.
BA: That’s great. You’re also still going on some weekly clinical service rounds with the National Institute of Mental Health. Can you talk about your involvement with that team?
DW: That is interesting. Most of the people around here are researchers, and they’re doing studies, but there are some consultation services. As you know, we have a bioethics consultation service where people who are taking care of patients or doing research with human participants can call us if they have an ethical question or an ethical dilemma, and we’ll talk to them about it. There’s similarly a psychiatric consultation service. If you’re enrolling somebody in a cancer trial and they start getting very depressed, maybe as a result of their advanced cancer, you can call. If you’re the researcher and you’re worried about the participant, you can call the psychiatric consultation service and have them see the participant. Those are the people that I go on rounds with. They’re consultants to the other researchers. I’ve been doing that for a long time now, and it’s fascinating. Part of why it’s fascinating is because the psychiatrists tend to be very open and receptive to having ethicists around. They’re interested in the things that we discuss. Unlike the ICU, they tend to talk a lot more in normal English terms rather than numbers and figures. They do that too, but not nearly as much as happens in an ICU, so I can understand it more, and when I take the fellows with me on rounds, they can understand it more. They have very sad cases, because the people who come here often tend to be very sick, but they’re also very, very interesting.
BA: Among your many other jobs in this department is serving on the NIH Clinical Center tenure review committee. Can you tell me about what the tenure process is like in the Clinical Center, and what your role is on the committee?
DW: My role on that committee is actually very minimal for one specific reason. When I first came here and started going on the rounds in the ICU, I was telling you about the person who was in charge of the ICU at the time, a guy named Henry Masur. He’s a really wonderful guy. He is still in charge of the ICU at the NIH. He’s also in charge of the tenure committee at the NIH. As you know, we [in bioethics] are a little bit of an outlier around here. Most people are scientists; they’re physicians. We tend to be things like historians and philosophers and lawyers. While I’m on the NIH tenure committee, I don’t review a lot of the packages. Henry basically called me up one time and said, “OK, a lot of this stuff is just not going to be within your wheelhouse, so what I’ll do, if it’s acceptable to you, is reserve you for times when there are people doing the kinds of things where your input might be helpful,” which was a very nice thing for him to do. I tend to get involved only when the people under review are doing things that I could be helpful with. If you have a basic scientist who’s doing something in biochemistry, I’m just completely useless. But it’s basically like at universities: there is a tenure process that involves people being here for a long time, doing a lot of service, and also building up a sufficiently impressive publication record to be accorded tenure.
BA: It’s good that they have you for the softer kinds of tenure cases too, so we don’t get left out. It makes NIH more interdisciplinary. Another service you offer is coordinator of Ethics Grand Rounds. Can you talk about how you choose topics for those presentations, and what they offer to the community?
DW: We deviate from it a little bit sometimes, but the standard model is this: as I mentioned, we have a consultation service, and we get interesting questions, concerns, and ethical challenges that people bring to us. What I do is look at all of those cases that have come in and choose ones that I think might be interesting to a broader community or might be of interest to the hospital, the Clinical Center, or the broader NIH community in general. And then we present those at Ethics Grand Rounds. Grand Rounds are an hour, from noon to one, on Wednesday afternoons, four times a year. I pick one of those cases, and then I’ll typically have the person who called the consult, often a clinician, present the case and the question in about 15 minutes. Rather than giving our view on it or what we said in our consultation report, we bring in an outside expert. We ask them what they think should be done or how they think the situation should be handled.
BA: Great. Now I want to step back, and for my last few questions think about bioethics at NIH throughout your 32 years here. What changes have you observed?
DW: Well, the changes from the very beginning to now are just dramatic. When I first got here, there were sort of three of us. My office literally was in a closet that I shared with Evan DeRenzo, the person I mentioned before. We were in the closet behind the Red Cross desk, which used to be near the old entrance to the hospital, and the program was very informal.
There was none of the training that we have now, none of the works in progress, none of the journal clubs. There was none of the mentoring. Although I was a fellow, I just got treated as a faculty member and got thrown into doing faculty member things from the very beginning. So although it was a fellowship in name, in practice there wasn’t a lot of the pedagogical, didactic, educational activities that we have now, so that has completely changed. The program of the whole department has changed, and also, as you know, the physical space is just dramatically different than it used to be. We were in one closet, and then if you walk down the hallway, took a turn and walked a couple more offices, the head of the department was there, and one other little office. That was basically the department.
Now, because of Zeke, we have our own space. We’re all together. We have a common space. We have a little bit of a kitchen. And it makes things so much better, both just enjoyable, but also in terms of interacting, working together. It’s just lightyears better.
BA: Has anything stayed the same throughout your time here?
DW: While it’s waxed and waned, one thing that has stayed the same is, as I mentioned initially, the huge emphasis on clinical issues: helping researchers, helping with patients. Doing research and publishing has become more important than it was at the beginning, but after the first couple of years, we settled into a kind of equilibrium of trying to make sure that we do both of those effectively.
BA: Looking forward, what do you see as some of the greatest challenges and opportunities confronting bioethics in the next five to ten years?
DW: I have absolutely no idea. This is the kind of thing I don’t know. You’re a historian, maybe some people are good at this. I’m not. Now you’re seeing in practice my focusing on what’s in front of me and the implications of that.
People think that AI stuff is going to be big, but people are already so focused on that; I don’t know how much that’s going to change. There’s a lot of discussion about AI ethics in terms of privacy and confidentiality of information when you’re developing the systems, making sure that there’s not bias in the system, making sure that they’re accurate. All these things are really important. What I think is interesting—and now this is the speculation part—is that we’re starting to get AI that could be transformative in a sense that’s unlike previous technology. We’ve had progress for hundreds of years, going back to the industrial revolution and, before that, the scientific revolution. But I think this could be very different, because I think it could be disruptive in a significant way, in terms of displacing people, as people talk about. It raises serious questions about what’s called—and some philosophers believe in it, some philosophers don’t—the human project, what we’re trying to do as a species.
Just to give you one example of that: one of the things we’ve been struggling with for our fellowship program involves one of the key criteria we use for selecting fellows: they have to write an essay. We evaluate whether the essay shows that this person might be in a position to contribute to bioethics and be somebody who would really benefit from our fellowship program for two years. Well, AI is getting to the point—and in a couple of years it’s going to be there—where you could just go into any AI and ask it to write that essay for you, and it would write something better than just about anybody else could write. The first question is, in terms of our fellowship program, what do we do about that? We have other AI tools that test whether a piece of writing came from AI, but if AI gets really good, it’s going to defeat those too, and you’re not going to be able to tell. It’s just going to seem like another person writing the paper. Some colleagues I’ve been working with from Oxford trained an AI to basically digest all the papers they had written and then write a paper in their voice, and it did a really good job. As people say, AI is never going to be this bad again. It’s just going to get leaps and bounds, orders of magnitude better, and better, and better. For the fellowship program, the question is, how do we then judge fellows? Do we have to make them sit in a room and keep an eye on them and make sure that whatever they’re writing comes out of their pencil? That’s the sort of specific concern.
The bigger concern I have is that if AI is doing better than the fellows, and it’s doing better than the faculty, what’s the point of even caring whether or not people can write essays? What’s the point of me writing essays? A bunch of what I do with my research is I write papers, and if it gets to the point where AI does better than I can do, what’s the point of me writing them? Just have AI do bioethics. When you have a consultation request, don’t call me—call ChatGPT. I’m getting more towards the end of my career, but this could be more of a threat or concern for earlier people. What’s the point of us going into bioethics? Should we be training fellows in bioethics? If ChatGPT is just going to do it a lot better, what do we do with ourselves while ChatGPT is doing all the important stuff?
I think that is going to be a really significant challenge. I mean, some people in technology studies think it’s overblown, and there will still be stuff you need human beings to do. But as technology just gets better and better, I’m really skeptical. I really wonder what the demands are going to be for keeping human beings around.
BA: Well, assuming we don’t get completely replaced by AI, what advice would you have for bioethicists early in their education or careers?
DW: I was going to say my first suggestion was “have a backup plan”, but it’s not as though AI is just going to be disruptive when it comes to bioethics. It’s going to be broadly disruptive, so I don’t know what your backup plan should be. Maybe you learn to surf or something like that. That might be the best backup plan.
I guess I would say that bioethics has changed. When I got into it, as I mentioned at the beginning, I hadn’t done any bioethics. I hadn’t really done much ethics, but I was still able to get a job. The field was not in its infancy anymore; it was in infancy in the 1970s and 1980s. By the time I came around in the mid-1990s, it wasn’t in its infancy, but it was still young, and you could still get into it without having specialized in it during your training. That’s no longer true. Somebody with my background would find it really hard to get into bioethics now, because it’s become so much more specialized. There are now PhDs in bioethics.
My recommendation would be, if you want to get into bioethics, don’t go that route. Try to get into bioethics from some other route. Try to become expert in something that’s related to it. Become a really good philosopher. Become a really good lawyer. Become a really good sociologist. Become a really good historian or something. And then use those skills and bring them to bioethics. That’s my recommendation.
BA: Great. I just have one last question. Having been here for 30 years, you know the Clinical Center better than most, so what’s your favorite secret of the Clinical Center that you’re willing to give away?
DW: That is a great question. My secret used to be that there was a gym on the top floor of the Clinical Center. You know, I’m not even sure it’s there anymore, because I never have time to go up there. But I played a lot of basketball growing up, and I used to get up there for like 15 minutes in the middle of the day and just shoot baskets. That was my secret back in the day. That may or may not be there anymore, but there is a little courtyard down near the Clinical Center, off the first floor, where there is a basketball hoop. We probably should resurrect it, because nobody ever goes there. I’ve been there exactly once in the last ten years. Robert Torrance [a former bioethics postdoctoral fellow] was very good about taking five- or ten-minute breaks and just getting outside when it was nice, to just clear his head. We went out there and played basketball for about five minutes in the spring. That’s there; anybody who’s listening to this, who works at the Clinical Center, it’s a great opportunity for clearing your head.
BA: Sounds perfect. Is there anything else you’d like to add or any questions you wish I’d asked you?
DW: I don’t think so. I thought your questions were really good.
BA: Alright, well, thank you so much again for taking the time to speak with me.
DW: Nice talking to you. Thanks for having me.