Dr. Edward Sondik Oral History 2009
National Cancer Institute
Division of Cancer Prevention Oral History Project Interview with Edward Sondik
Conducted on January 9, 2009, by Philip L. Cantelon
History Associates Inc.
Dr. Edward Sondik currently serves as the Director of the National Center for Health Statistics (NCHS) of the Centers for Disease Control and Prevention. A native of Hartford, Connecticut, Dr. Sondik received his BS and MS in electrical engineering from the University of Connecticut. In 1971 he earned a PhD in electrical engineering from Stanford University. While there, Dr. Sondik assisted with the design of a new hospital for the Stanford Medical School and eventually taught at the university. Dr. Sondik began work at the National Institutes of Health as the Chief of the Program Analysis and Evaluation Branch of the National Heart, Lung, and Blood Institute in 1976. Six years later, he joined the Biometrics and Operations Research Branch of the Division of Cancer Prevention and Control. While working for the DCPC, he advanced to the position of Associate Director of the Surveillance Program and became the Deputy Director in 1989. In 1992 National Cancer Institute (NCI) Director Samuel Broder appointed Dr. Sondik to the position of Acting Deputy Director of NCI. Shortly before taking the position of Director at NCHS, Dr. Sondik also served as Acting Director of NCI. In 1996 Dr. Sondik left NIH for his current position at the Centers for Disease Control and Prevention.
This interview covers Dr. Sondik’s early work with the Division of Cancer Prevention and Control and the growth of epidemiology. He discusses the pioneering aspects of the division and the development of the Cancer Surveillance Program and the Surveillance, Epidemiology, and End Results (SEER) Program. While working for the DCPC and the NCI, Dr. Sondik assumed administrative roles, and his work often concerned policy and coordination among NCI, the NIH, and even Congress. This interview reflects his transition from SEER director to influential administrator.
PC: All right. I'm speaking with --
ES: Had you called, had you called my office?
PC: I did, yes.
ES: Oh, I see. All right.
PC: Well, I called earlier.
ES: Oh, I see.
PC: I'm speaking with Edward Sondik, S-O-N-D-I-K, on January 9th, 2009, and I have permission to record the call?
PC: Thank you very much. An electrical engineer. How in the world do you wind up at the Division of Cancer Prevention and Control?
ES: Well, I did my graduate work on the side of electrical engineering that doesn't deal with resistors, wires, capacitors and so forth, but is really on the control side, control system side, and it's – actually was closer to operations research – or management science, really. And I had an interest in application, and in health. And I – when I was doing my graduate work for my Ph.D., I took a couple of years off and worked on a project with the Stanford Medical School on the design of a new hospital for Stanford. And we were taking different approaches than had been taken in the past. We were working with the architect, but we were doing mathematical modeling of how the hospital would actually work, and that got me interested, even more interested in health applications.
My – one of the applications for the work I did for my thesis had to do with medical decision making, decision making under uncertainty, which is a great part of mathematical control. Mathematical control means, for example, how one guides a rocket from here to the – here to the moon or Mars, keeping on track, or even – since I'm driving during this interview – cruise control: how does cruise control work, and how does the control hold the car at a particular speed? And the principles and, actually, the mathematics of doing that are similar. Anyway, I started teaching at Stanford, and I was doing work with the Stanford Medical School and with a couple of other projects that were all connected with the medical school one way or the other. One was sponsored by NHLBI, the National Heart, Lung and Blood Institute: the first large clinical trial on cholesterol reduction, and the impact of that. And another project had to do with changing behavior using the media and intensive instruction. Both of those were really pioneering projects.
The media project was due to some real innovative thinking going on at Stanford, while the trial was due to ideas that were coming from NIH. So I was working on these, I had the opportunity to come back to NIH and visit, and I started talking with Bob Levy, who was then the director of NHLBI. Then it was NHLI; the B was added around that time.
I thought it would be useful for me, and maybe for NHLBI, to come to NHLBI and spend a couple of years seeing how these ideas could be useful in the design of these very large-scale efforts, because large trials are not research in the usual sense. They're singular activities, and I'm sure, having done all these interviews, you're now familiar with that when you look at the trials that have taken place under Peter. You don’t do these trials and have them repeated many other times across the land. These long studies may be carried out once or maybe twice, but they're more instruments of public health policy than they are fundamental research, I think.
So anyway, I came back and I worked in NHLBI. I became very interested in what I was doing, and NIH decided that I wasn't going to go back to Stanford, at least right away. Around my fifth or sixth year, I heard that NCI was starting a biometrics and operations research branch. And I said, "Well, whoever is talking about operations research at NIH is my kind of person," because there was no activity like that at that point at NIH. There was, of course, focus on the usual analytic sciences, epidemiology, demography and mammography to a degree. But I said, "Operations research is focused on decision making." And I said, "That's really quite crucial to health policy." So it turns out it was Peter Greenwald. And Peter Greenwald's office, it turns out, was literally one floor below where I was located. I was on the fifth floor in Building 31A, and he was on the fourth floor in Building 31A. So I went down and talked to him, and came to be the head of that new branch.
PC: Uh-huh. So this was what year?
ES: That's how I got there.
PC: This was what year?
ES: You know, I knew you'd ask me. I'm just terrible. '82 – probably '82.
PC: '82, '83? Somewhere in there?
ES: Something, yeah, like that. Yes, '82.
PC: And what was the status of DCPC in the early 80s in, well, let’s say in NCI?
ES: Status in what sense?
PC: Well, where –
ES: How it was viewed or –
PC: Yes. Yeah, well, let's start out how it was viewed.
ES: Boy, you know –
PC: People encourage you to –
ES: I think there was encouragement. But I think when the grand cancer plan was created, there was a lot of emphasis on control. There was more lip service to prevention because it wasn't clear how to prevent cancer, but there was a great deal of confidence that, with research, we were going to find the clues to how cancer develops. But I think at that point, research had progressed long enough for people to start to feel that prevention was going to be extremely difficult, that there were some aspects of cancer that could be controlled, but none of those prevention and control areas had the high-powered research associated with them that was going on in treatment. Treatment was a research colossus, with the cooperative groups having been set up with a huge amount of infrastructure. And there was also a lot of infrastructure associated with epidemiology in another division. I think people (other NCI staff) were really encouraging, but I don't know that prevention and control had as much respect as it really should have. But Peter, who came to NCI in ’82, was really a spark plug. And I think he had a great deal of respect from his NCI peers. But DCPC didn’t have the research base, the extensive publication base, that formed the NIH foundation. So I'd say from a research point of view, I don't think DCPC was held in the highest esteem, but I think people saw it as quite important, particularly the control side, the application side.
PC: Uh-huh. And how did that work as opposed to prevention?
ES: How – I don't know what you mean.
PC: Well, when you say the control side, as opposed to –
ES: Well, there were clear – there were clear actions that one could take with respect to control. Drawing from cardiovascular disease, it was clear that screening was important. But on the other hand, the evidence for cancer screening was still not as solid as it really needed to be. For some types of cancer, not every observer felt it was clear that screening was effective. For example, there were some people who felt that we should be screening for lung cancer, but the fact of the matter was that there really wasn't evidence that screening for lung cancer would change the progress of the disease. There was some focus on skin cancer, but I don't recall that focus being as strong. But screening for skin cancer, particularly melanoma, is in exactly the same situation that it's been in for some time. It's very hard to detect, and it's not clear what the benefit is that you get from a population-based screening campaign, wherein everybody gets screened periodically.
Breast cancer screening was in its relative infancy. Some had been going on. But the need for more information about that was becoming very clear.
And at that point, around 1982, there were some early attempts to translate what's known about screening into health policy. How to pay for it, whom to screen, issues like that.
So there was, I think, a ready agenda for screening application and research. Prevention was in a much earlier state, with the most solid results being related to tobacco. It was clear that stopping smoking would have a significant effect, which, of course, has been borne out. There were early ideas about diet. A very influential study on avoidable risks of cancer was published in 1981, the Doll and Peto study. Do you remember the date for that?
PC: No, I don't. I'm sorry.
ES: Doll and Peto were researchers in England who put together a very comprehensive analysis of factors relating to cancer. I'm surprised no one's brought that up, or maybe they have.
PC: No, not at all.
ES: Wow. They put together an analysis that had sort of at its core a table that gave a set of factors, and the percentage of cancer mortality that's associated with those factors.
What I just said may sound straightforward but it's very hard to calculate, and even the way they did it, it was not as precise as it should be, and they had several factors, and I'm recalling this table now. I don't have it in front of me. But one factor had to do with tobacco, and I think they said 30 percent of cancer mortality is related to tobacco.
ES: Don't hold me to the numbers, but it's easily checked.
PC: About a third, anyway.
ES: Yeah, let's say that. Another factor was diet. They put down about a third for that, but the evidence for that was more speculative than that for tobacco. Or one could say the evidence for diet was less certain than that for tobacco. For tobacco, the relationship between smoking and cancer was very, very clear.
PC: Strong correlation.
ES: Well, it was causal. I mean, it was about as close to causal as you could get without completely understanding the causal mechanism, which I think is an important point. But the cause and effect relationship was very clear. In diet, it was more correlation than it was cause and effect. And then another factor was environment, and they gave environment around 3 percent, which was very controversial, because there were people who felt that the environment was a paramount factor. They felt that the reason cancer had risen so much over the last century had to do with changes in environment, pollution and so forth. And there were people who were really quite activist about this. But there were others who said the way Doll and Peto were analyzing cancer was correct; we really didn't have the evidence that environment was a key factor. So there was this little table in the middle of a very large report, and the message was, "We really need to focus on prevention, prevention and control, because it seems that there are things that we can do about these factors.” Certainly, if screening is important, we can get more people screened and do it regularly and do it cost-effectively. For tobacco, what can we do?
Some said, “Let’s ban cigarettes." I'm not saying that from a practical point of view we could ban cigarettes, but you know, this is what people were thinking. Again, not necessarily in DCPC, but in general, people said, "Well, there are things we can do about it, or we can raise the price of cigarettes to a point where cigarettes are not going to be affordable. Tax them." But at that point, that was incredibly controversial, because there were people who continued to question the role of tobacco, even though I think it was really quite clear. Tobacco companies were very, very active in attempting to control public opinion, as is clear from the legal trials that have taken place and the evidence that's been produced.
The most intriguing thing was diet. The point raised was, "Well, you know, maybe there really is something to diet, and if we could improve diet and nutrition, maybe we could make a real dent in this." Oh, another factor was treatment. And I don't know exactly what they had for improved treatment. I can say from the things that I did later at NCI, I was involved with the SEER program and we did annual progress reviews. We started doing annual reviews for the NCI directors and we looked very carefully at treatment trends. We looked at mortality and incidence trends, and we looked at how mortality was changing with respect to incidence. If incidence was constant and treatment was having no effect, then we would expect mortality to just be constant.
ES: Follow me?
ES: If there was an effect of treatment, we would expect that mortality would decline with respect to incidence, and what we saw was very little change in mortality. So it was a time when, while there was a great deal of research going on in treatment, it wasn't having a significant impact on mortality. It's really only in the last few years that we see that cancer mortality overall is turning down. Only in the last few years.
PC: About the last decade or so, I think, is –
PC: The latest pieces came out in November, say.
ES: That's right. And part of that decline is immediately due to prevention. A lot of it's due to smoking, stopping smoking. Some of it is due to screening. But I think there's no question that there's been improvement in treatment, so that at least people are living longer with cancer, and people are now talking about cancer as a chronic disease. You know, some people are talking about it in those terms, you know, meaning that, "Well, you develop cancer, you're treated for it intensively, and then it's something that needs to be watched, managed." So there's been progress there. Back to Doll and Peto: their analysis was one of the major factors, I think, in buttressing ideas that Peter had about prevention. In other words, I think they reinforced what he was already thinking. I recall that his conviction about preventing cancer came from his own view of where the promise was in cancer prevention. So it was around that time that prevention really started in a major way. Our research program for it really started in a major way.
Control continued at a – I don't know, I'd have to look at the budget figures, but I think a somewhat lesser level, but still very significant, very significant level. That's a very long-winded explanation, I guess.
PC: When you did the – you said you were involved in two studies at Stanford.
PC: And one was a media study?
ES: Yeah, that was very interesting.
PC: And how did that apply to what you would be doing in DCPC?
ES: Well, I think it gave me an interest in what it takes to actually make change, to have the public change what it does. This idea of behavior change and what it takes to achieve change and how you estimate this change, I found very interesting from a mathematical modeling point of view. So, I guess, I gained from being involved in that a tremendous respect for how difficult it is to achieve change. There were many people who talked glibly about how easy it was to do it. All we needed to do was put the effort into it. But this study showed me how difficult it was, and the study was a really neat one that looked at three communities in California. In fact, it was called the Three Community Study. One community, Tracy, California, was the control.
Then there were two other communities on the coast, Gilroy and Watsonville. And they were served by the same television stations. Both those communities received information over the media, radio and television and newspapers. One of them, Watsonville, had clinics set up that provided, for a portion of the population, intensive instruction on the risk factors related to heart disease. And the hypothesis was that if the media campaign and intensive instruction were effective, what you'd see is no change in Tracy, some change in Gilroy, and then the combination of intensive instruction and the media in Watsonville leading to more change. But what came out of it was that there wasn't a lot of change, period. And despite the fact that the Department of Communication at Stanford, some tremendous communication experts, and real experts in behavior and so forth put a lot into this, it just showed how difficult it was to actually achieve this change. Part of it was that there was some secular change going on, so that Tracy, the control, actually improved. Comparing Gilroy and Watsonville to Tracy, there wasn't a great deal of change in the other two communities.

So that got me really interested in how population change occurs. When I started working in the branch, the Biometrics and Operations Research Branch, Joe Cullen was becoming instrumental in moving smoking front and center through a program within our division focused on these control activities. I'm blanking on what it was called. But it was headed, at one point, by Joe, I believe, who then became deputy director of DCPC. Over time, there were several other people who headed this, including me as an acting program director. We also built a cancer surveillance program, which expanded – Peter, really, was instrumental in doing this. The SEER program moved from the Division of Cancer Etiology and joined DCPC.
Joining the Biometrics and Operations Research Branch to SEER, we became the Cancer Surveillance Program.
So what I had done at Stanford actually really informed me about the state of the science associated with control and behavior change. And that, coupled with my analytic view of things, I thought, actually put me in a pretty strong position for understanding the promise, the issues and problems, and the research potential. We also developed another branch, headed by a fellow named Larry Kessler, as part of the surveillance program that focused on the more operations research side. Operations research is the application of mathematical methods to a variety of problems in diverse fields.
PC: Were there any special problems working in prevention and control during the Reagan Administration?
ES: No problems related to the administration. Public discussion was changing. Over those eight years, I think attitudes changed, and we went from women not discussing breast cancer to it being a major issue. And I think the same is true for cervical cancer. It was something that wasn't discussed all that much, but I think – I hope, in part due to NCI's efforts – that cervical cancer rose in the public consciousness. In some sense, it was really a no-brainer. Very inexpensive screening could actually save lives. It didn't account for very much mortality, but it was a disease – is still a disease – that is almost entirely preventable. And now we even know more about how to prevent it fundamentally, rather than just through screening and detecting it, detecting it early. But I don't – I don't associate anything between the administration and what I was doing at the time.
PC: Or the directors of NCI?
ES: Yeah. I'm not – I'm not a very astute observer of these relationships. Maybe it's my analytic background. I always ask sort of what, exactly, is the relation – what is the evidence, you know, if someone says something. But, for example, Vince DeVita – he was director during the Reagan Administration, wasn't he?
PC: Uh-huh. Uh-huh.
ES: Yeah. I think he was a real champion of prevention and control, particularly because he understood the data very well, and he understood that in order to really have an impact on the measures, mortality, and on incidence, you had to have an effort, you had to have prevention and control, that treatment alone was not going to drive these rates down. He was very clear on that. And I saw Vince as quite supportive of prevention and control.
He was certainly supportive of what I was doing, although we had some horrendous reviews, I must say [laughter]. He got quite – very, very exercised over the dour statements that I was delivering when I would do this.
PC: For example?
ES: Well, pointing out that we're not seeing any treatment effect. And I remember one meeting, he said, "That can't be. We have to be seeing treatment effect. I can see – I see the research results. You're not looking at this right." So we went back, and we did a report that was really internal. I don't think we ever published it. We looked at all the cancer sites that we typically reported on in terms of mortality and incidence, and we analyzed the advances that had taken place over the last decade or decade and a half.
And we had research staff from around NCI supplying information and produced another report that still had the same results. But I think he felt better that we clearly had taken account of the latest research and trends in survival. But, you know, and I feel very strongly about this, particularly in my current position where I head the National Center for Health Statistics: we need to look at these numbers without passion or bias to guide health and health policy. Those close to research sometimes want to look at data and couple it with wishful thinking. But you can't do that. That's why you need an agency like the one that I'm with now, that's independent and calls it on the basis of the trends that it sees. And that's what the SEER program did. I'm very proud of the SEER program. The people who built it were very far-seeing. I give them a tremendous amount of credit, and the people who have continued it have maintained that level of quality and credibility.
PC: In the 80s, I think Heckler was also a supporter of cancer prevention awareness programs when she was secretary –
PC: — of Health and Human Services. But when the operations research branch got the SEER program, that was in '85.
PC: And that's what you were working with at that point.
PC: Okay. And then through the rest of that decade and into the next, how were the agendas established for DCPC?
ES: How were they established?
PC: Yeah. How – did the branch chiefs get together and talk about this? Did Peter bring people in? I know in – what was it, '85, he came out with the book –
PC: — on cancer 2000 or something.
ES: Yes. That report was very controversial. Regarding agendas: first of all, we had an advisory body. And I recall it as being very good. The overall National Cancer Advisory Board (NCAB), which advised the NCI as a whole, included an element of high-level scientific cancer politics. I thought the people on it were really excellent, but I'm not so sure its guidance overall was exactly what we needed in DCPC. Although I recall –
PC: You mean more political than medical?
ES: Well, I don’t know. Somewhat – I’d say the politics of the different sectors of cancer research, which is to be expected. I recall giving a long presentation related to goals. I was very much a champion of setting cancer goals. The report went from goals to estimating impact, which I think is very important to do. But I do think you've got to be very careful in doing that; there are some people who just won't agree with any kind of estimation.
ES: I remember doing a presentation to the NCAB that was really very well received relating to goals, setting cancer control goals and estimating impact.
PC: Yeah. And the books, well, the two studies, one that you co-authored, and the other one in '92 that you did on the cancer goals, you know, 1985 to 2000 and then the follow-up study on that.
ES: You know, I think that the Cancer Control Objectives for the Nation, 1985-2000 document helped set a framework. I think the more specific agendas were really set in a kind of traditional way for research, through advisors. I became, by being at NIH and NCI, a very, very strong believer in the role of outside organizations in helping to set the agenda for a research organization. When I came to CDC, I wanted to start a board of scientific counselors for my organization, and did that. It took a few years to get it going because of the bureaucracy involved in establishing an official federal advisory committee. But it's been very, very helpful. And now CDC has really turned to using these committees to evaluate the various divisions of CDC. But I feel a lot of the direction was set by Peter's vision, which I think has been quite constant, at least for the period I was there. Ambitious, tremendously ambitious, but understanding at the same time of what it actually takes to achieve change, and to achieve the new knowledge that's needed. So I think his vision and the vision of his key staff were very much a part of the direction. I also think the NCI directors brought vision that was tremendously helpful. Vince DeVita, I thought, had a clear vision, as did Sam Broder. Sam ran into a number of issues, particularly related to breast cancer treatment trials. But he really is a tremendous intellect, and I felt actually very lucky to work with him. He had tremendous respect for epidemiology, for the statistical sciences, and what they had to tell him. And when you told him that things were not going in the right direction based on the statistics, the data we had, he was not one of those people who said, "Well, I just don't believe it 'cause I know it is." I also have great respect for former secretary Donna Shalala, who is also someone who had great respect for what the data had to tell her.
PC: What – how would you describe Peter's management style?
ES: Very collaborative. I was very comfortable with it. It did not have a lot of process associated with it that wasn't important to actually getting things done. You know, I'm not quite sure how to – if you give me some adjectives, I can probably respond better.
PC: Well, collaborative is a good one. Is he a hands-on person, is he heavy oversight? Does he let people operate on their own? You know, is he a control freak?
ES: No, he is not – I would say he has heavy interest. So I think there are different ways that you can be heavy-handed and a micromanager. There are things that you can achieve in that way, but you don't achieve good working relationships. But you can also achieve this kind of direction by showing a strong interest in understanding what's going on. He had a tremendous understanding of what's going on across the division. But I never found that he was a control freak, as you put it, or a micromanager. He had, I think, a reasonable interest in getting in place an appropriate set of tools for managing the program. This was particularly true with the chemoprevention program, which was complex and had a lot of different activities underway. I think that tracking progress in that program was, in part, a point of frustration for him. I think it was sometimes difficult to know exactly where things were, because there were a lot of different small studies that were taking place. But a process had been developed to be able to say where those studies were at any point in time. I do have this vague recollection of issues with individuals and how they were actually managing some aspects of the program that didn't meet his standards, or, as I recall, my standards either. But I think he gave, as far as I could see, an appropriate level of control and oversight without being overbearing.
PC: But a respected leader.
ES: Oh, I don't think there's any question about that. I certainly felt that way about him. I think there were issues in the weight of evidence that he gave to things. I think sometimes there were issues between Peter and the other division directors. I went to enough of these meetings with senior leadership and was able to see these tensions.
ES: The treatment division produced data from hundreds of studies. Peter couldn’t match that, but he saw potential in selenium for cancer prevention, or beta carotene, or whatever the study might be, whatever intervention it might be. But his colleagues would say, "Well, you know, the link really isn't there for you to be able to say that." And there were arguments about this, but I think it was a very strong give and take. And Peter was very good at defending his ground. And I came to believe that it was very important to do the program of large-scale trials that we undertook, particularly the prevention trials through the CCOPs. It was important to do those, to work both ends toward the middle.
If you try interventions, the results will help inform your knowledge of the basic science. The Women’s Health Initiative and its findings on estrogens show the value of coupling basic science with clinical trials.
ES: But there comes a point where it's a judgment call as to how to conduct a trial. But it's a good investment of resources to do that, and DCPC did that, and I think it was pioneering, and I think, given the state of knowledge, I think it was the right thing to do. Of course, Peter was always concerned about safety issues and relied heavily on the IRB structure to inform the trials program.
PC: What – we talked about these two books. What was the impact of the first one, Cancer Control Objectives for the Nation: 1985-2000?
ES: Oh, boy, I don't know. I think the fact that we talked about what it would take to actually reduce mortality by 50 percent was controversial, and it was generally not considered to be a high point for NCI. On the other hand, I thought it was a really very well-done effort. And the 50 percent analysis could've been left out and the report would've been terrific. Unfortunately, critics tended to focus on that. I think it's very important that an organization be very careful in stating its goals. There are people who will criticize aspirational goals. Now what we did was not aspirational, except that we did say that we could reduce mortality by that large amount if, and it was a big if, we had major advances in treatment. And that point just didn't get across very well. I do think, though, that it reinforced the direction in which DCPC and NCI were moving in terms of prevention research and control research. Now, I don't know, there might be contrary views that maybe it was negative, but I don't think so. It certainly had no negative effect. On balance, I think the report was positive on the analytic work and the direction of the analytic studies about control, and what could be achieved by building linked data sets that link SEER to Medicare and other databases.
PC: And the second – the follow-up study, why was that done? The '92 study.
ES: I'm trying to recall what it was.
PC: Oh, it was the follow-up to this, to the Cancer Control Objectives.
ES: Where was it published?
PC: I think – I thought you had done it.
PC: '92 follow-up report on Cancer Control Objectives to the Nation.
ES: You know, I really don't remember much effect from that, other than it was a kind of forerunner for the kinds of things that actually we do here in the NCHS about looking at the progress that's been made. I recall it as that. And if anything, now as – I really would need to go back and look at it, is that it was a sort of a sobering look about progress, which in the cancer – magnitude of cancer, which hadn't changed very much, was certainly not going in the right direction.
PC: Uh-huh. What was your take on the split when cancer control and prevention was split – were split from each other?
ES: Well, I was – I got kind of a battlefield promotion when Sam Broder was director and I became the NCI Acting Deputy Director. The then-current Deputy Director had unexpectedly died. An acting deputy director for NCI was needed. So Broder came down from the eleventh floor to the tenth floor, walked in on me one day and said, "How would you like to be the acting deputy director?" I was completely floored. So as they say, it seemed like a good idea at the time, so I said yes. Shortly thereafter, we encountered a scandal related to breast cancer treatment. Do you know what I'm talking about?
PC: No, I don't.
ES: Well, there were accusations that, in one of the large cooperative groups that conducted cancer treatment trials for breast cancer, one of the many, many investigators had falsified data. And the person who headed the overall group was an icon named Bernie Fisher. And to make a long story short, there was a Congressional investigation and the agency became just almost paralyzed by this. As this was coming to a close, Sam Broder left. I then became the acting director of NCI for a period of about, I don't know, six or eight months. And it was during that time that Rich Klausner was named director and I went back to DCPC. As Klausner was transitioning in, I had several discussions with him. But I had enough to know that he had little respect for the prevention program. I don't know what his opinion was of Peter, per se. That was really not clear to me, but there was no question that he felt that the evidence to support the program was just not there. And as you know, I'm sure, there's probably been a lot of focus on this, there was a real battle and Peter really held his own and held the program. It was about that time that I then left NCI and went to CDC.
PC: Is this part of the reason?
ES: No. Part of the reason, really, was that I'd been at NIH for around twenty years, and I felt that I needed to do something different. And somebody actually had put my name in for this position. But, you know, given my background in dealing with data and analytic interests and so forth, it was kind of a natural thing, and I thought it would be good to make the change, because I didn't see where I could really go in NCI at the time. This was prior to the break, to the split of prevention and control.
PC: From '97 to '99?
ES: No. Yeah, I guess it's that.
PC: The Winter of '98 anyway.
ES: Yeah. Being upstairs in the director's office –
ES: I'd just been through quite a bit, and I must say, part of it was horrendous, but the challenge was tremendous. And I just felt I wanted to go on to another really major challenge. When I left, I had absolutely the greatest respect for Peter and what NCI was doing. I had real concerns about what Rick Klausner wanted to do. I think he came in with preconceived notions of how NCI should look, and how it should be organized. But I think any new director to an organization, a large organization, comes in with these kinds of ideas. I don't think you come in with an absolutely clean slate. I think he was off base, though, in this view of the cancer prevention program. In terms of the split, I think the split actually makes sense. I didn't have any major problem with it. I think the reasons it may have been done, though, were based on other issues, perhaps on how Klausner viewed prevention even after he decided that the program would stay and that Peter would stay.
ES: Yeah. My bias is that – my bias is that – I'm walking in my building. I've been sitting in the car actually.
PC: Oh, my. It's cool today.
ES: No, it actually was quite comfortable. But I realized, I think I have somebody waiting upstairs. So let me just, you know – I'd be happy to talk more, but the –
PC: The bias.
ES: Yeah. Bias about what?
PC: You were talking about Klausner and the split.
ES: Yeah. Oh, my bias is – about organization is that you can arrange organizations in a variety of different ways, you know. And what you want to do, and they're all going to be, you know, pretty much effective. What you really want to do is get the people who are doing the work to be able to collaborate. You don't want to establish silos.
ES: And no matter how you set it up, people are always in boxes or silos to some degree, but you want the ideas to be there that will allow the freedom to be there to collaborate and to work across, to work across areas. And Klausner was probably trying to isolate the prevention research. And given the state of the research of prevention at that point, it probably didn't hurt it to be by itself; in some sense, maybe even helped it. And putting the control together with the analytic sciences and the data sciences, I think also seemed to make sense as far as I could see.
PC: Okay. Well, I know you have to go. What I will do is send you the transcript as soon as we get it transcribed, and if you think you want to add something or we want to talk again, just give me a call.
ES: Okay. Do you think [indiscernible]?
PC: I'm sorry, I didn't hear that.
One second. Communication is a little iffy in the elevator.
One more floor. Just a second.
I can hear you.
[Indiscernible]. I just realized I left something in the car.
I was just going to say, I guess, that – are there other questions that you would want to ask?
Well, you may think – I've pretty much run through mine.
But if you have something you want to add after reading it, let me know.
PC: All right?
ES: All right. Well, you're not going to publish this transcript, I hope.
ES: You're not going to publish the transcript.
PC: No, no, no, no, no. No. I'm going to send it back to you for light editing.
PC: And then it will go into a – the historian's office at NIH.
ES: Okay, great.
ES: Okay, thanks.
PC: All right. Thank you very much. I appreciate it.
ES: Sure, thanks.
PC: And take care of your cold.
ES: Thank you.
PC: Right. Bye-bye.
[End of Interview]