Dr. Patricia Hartge Oral History
National Cancer Institute, Division of Cancer Epidemiology & Genetics, NIH
Oral History Project
Interview with Dr. Patricia Hartge
Conducted on December 5, 2022, by Holly Werner-Thomas for History Associates, Inc., Rockville, MD
HWT: Hello. My name is Holly Werner-Thomas, and I am an oral historian at History Associates Inc. in Rockville, Maryland. Today’s date is Monday, December 5, 2022. I’m speaking with Dr. Patricia Hartge for the National Cancer Institute, the Division of Cancer Epidemiology and Genetics, part of the National Institutes of Health, or NIH. The NIH is undertaking this oral history project as part of an effort to gain an understanding of the National Cancer Institute’s Division of Cancer Epidemiology and Genetics. This is one in a series of interviews that focus on the work of several individuals at the NCI DCEG, including their careers before and during their time with the institute. This is a virtual interview over Zoom. I am at my home in Los Angeles while Dr. Hartge is in, is it Bethesda?
PH: Chevy Chase.
HWT: Chevy Chase, Maryland, outside of Washington, D.C. Before we get started, can you please state your full name and also spell it?
PH: Patricia Hartge. H-A-R-T-G-E.
HWT: Thank you. Dr. Patricia Hartge, appointed scientist emerita in the DCEG after her retirement from the NCI in 2013, is an internationally recognized epidemiologist whose contributions span the fields of molecular, genetic, and clinical epidemiology. With more than 400 publications, Dr. Hartge has advanced understanding of non-Hodgkin lymphoma and of ovarian and bladder cancers. She served as the Deputy Director of the Epidemiology and Biostatistics Program from its inception in 1996 and has been a leader in several modern epidemiologic consortia, most notably the NCI Cohort Consortium, a major NCI-sponsored initiative that included investigators responsible for 36 cohorts that are international in scope and involve about three and a half million people. She has been a member of the governing council of the American Public Health Association and the board of directors of the American College of Epidemiology. She also served as co-chair of the education and planning committees for the American College of Epidemiology, and on several federal interagency committees and task forces, including program evaluation and strategic planning for the National Institute on Aging, the National Institute of Environmental Health Sciences, the National Human Genome Research Institute, and the Centers for Disease Control and Prevention. Additionally, she has been assistant editor of the American Journal of Public Health, has served on the editorial board of Epidemiology, and has lectured and taught at the George Washington University School of Public Health and Health Sciences. In 2012, she received the Department of Health and Human Services Career Achievement Award and the Harvard School of Public Health Alumni Award of Merit. She received her BA from Radcliffe, a master’s in economics from Yale, and a doctor of science degree from the Harvard School of Public Health in 1983.
So, let’s go ahead and jump in. And I know last time we started with a little bit further back with your career path in terms of your personal background. We can get there through a different means, however, so I thought I would start by asking you what drew you to epidemiology and specifically cancer research, or how epidemiology led to cancer research.
PH: That was a winding path. But I would say, pretty much from the time I left high school, I knew I wanted to do something to make the world a better place, and I knew that I wanted to use math and science to do it. I didn’t have a very clear notion of exactly what that would look like. The zigzagging was figuring out what that was.
In college, I majored in multidisciplinary social science. The notion was to train in economics, history, and sociology and then to apply those tools to one research area with a more intense focus. At the end of that experience, I thought I liked social science, and I particularly liked that paradigm: multiple disciplines, one big problem.
For the first few years, I worked in economics. That seemed like a good choice. That was not ultimately satisfying for two reasons. It’s one step removed from the problem, since my motivation for improving economic policy would be to get full employment, fair distribution of income, so that people could have healthier, happier lives. Epidemiology seemed one step closer to the problem, but it [epidemiology] would still use a lot of the tools that I had acquired in training in economics. I wasn’t sure it would be as much fun. The other thing that bothered me about economics is the fundamental toolkit, the neoclassical model, seemed to me to have real problems at that time. Still does seem to me to have real problems. It [economics] is not without value, but, as you know, I’m a person who cares a lot about whether the tools of the trade seem to work.
With those two considerations in mind, I made a little foray and tried epidemiology. Liked it a lot. Then I went to the Harvard School of Public Health, and I hit a bit of a hurdle. I was surprised to encounter some misogyny in the department, and a bit of backward thinking. It stopped me in my tracks but not [for] long. Then I thought, well, I’ll do something that’s intermediate. I’ll explore health services research. I went to the [Boston] Children’s Hospital, which was just down the road, and did a very useful study of protocols for taking care of children in the emergency pediatric setting. That was not fully satisfying, but I got some more tools in my “how-to-do-research” toolkit.
Then I went down to my first foray at NIH, at DCEG, really to test out two things. Was epidemiology what I wanted to do? Do you know the expression, “Jack of all trades, master of none”? I don’t really love that expression. But I do like, and I think my family does, too, “Jill of all trades and a master of one.” I had not decided by then what the “one” would be. Maybe it would be epidemiology.
When I went to the NIH in 1977 and started working on the National Bladder Cancer study, I answered the question: Would epidemiology satisfy that long-felt need to use science to make the world a better place? It also would fit my temperament. So that’s how I wound up in cancer epidemiology.
I actually then went back to the Harvard School of Public Health, and I had a very good experience completing my doctoral training while I was still working at DCEG, at NIH. I think that answered your question.
HWT: What brought you to NIH in 1977, and what were your initial goals?
PH: Some of the initial goals were what we were just talking about. I wanted to know, “Do I belong in cancer epidemiology, and do I belong at the NIH?” What pulled me there specifically [was that] ever since I was a child I knew that the NIH was a national treasure. But, in particular, Joe Fraumeni and Bob Hoover had a terrific reputation. They were building a program, and they were building it fast. After the publication of The Cancer Atlas, there was tremendous congressional and public support to expand the work that the NIH was doing in cancer epidemiology. So I thought, sounds like terrific people. Sounds like a great time to go. I’m going to go and see if I can find out what I want to be the master of, and also whether I want that [NIH] to be my scientific home.
HWT: Can you describe the NIH when you first arrived, and how it has evolved over time? At least, you know, through the time you spent your career there.
PH: I think that its fundamental mission is constant and is clear. I do think it’s a national treasure. The specific organization of the NIH into disease institutes has historical roots, and it makes sense in a lot of ways. Yet anybody who works in any one of the institutes knows how often you have to figure out when to organize your thinking not according to the specific disease—is it heart disease, is it eye disease, is it cancer —but sometimes in other dimensions altogether. That’s something that you do by crossing boundaries. We talked about that a little bit before. We can talk about it some more.
I think that the NIH changed in parallel with science; I will say particularly in epidemiology. This was the time when we moved from relatively smaller studies to larger studies and then to consortia. For a very good reason. For the same reason that other scientists went through that evolution, too. That gives you the power to answer the questions with more precision and with more accuracy, too.
DCEG during the time that I was there went from—I wasn’t there at the beginning. The beginning was Joe [Fraumeni] and a couple of other people, but it was still young in 1977 when I arrived. I’m going to call it “DCEG” even though the precursor was a Program, and before, when I arrived, it was a Branch. I’m struck that over my 40-year time there, there was a kind of constancy of the vision, of Joe’s vision, with tremendous growth and adaptation along the way. You’ll hear me occasionally quote slogans of his, and one of them was, “Cancer is 100 percent genetic, and 100 percent environment.” He managed to balance a concentration on both of those and, in particular, to have an instinct, when there was tremendous support for looking at the environment, to make sure that we would still keep genetic research going, [and] then when there was tremendous support for looking at genetics, to make sure that we would still keep environmental research going. That’s what I think about the research. Were there other parts of the development that you were specifically thinking of?
HWT: Not right now. We can also talk about that in terms of your own work there over time. So that’s fine.
PH: Okay.
HWT: You did mention Children’s Hospital, Boston. Just one follow-up question, because your background is quite varied. You worked at Children's Hospital. You also have a master’s in economics. And in the 1970s, you worked at the National Bureau of Economic Research. How did you bring those experiences to bear in your work at NIH?
PH: In a couple of ways. They were useful right away because some of the tools were pretty much the same whatever field you’re in. When I arrived, I started right away on the National Bladder Cancer Study, and I was thinking about this. I did have two years of training in epidemiology, but in addition, I’d already written questionnaires. I’d written protocols. I had handled two-stage sampling frames. I had done quality control. I had managed a smaller research office. I was a computer programmer. I had most of the tools in my toolkit already that you have to apply in epidemiology. So, that’s one answer.
I think the other one is along the lines of the toolkit. Because I had been interested in tools in general, and because of this focus on an interdisciplinary approach, when we needed to do something else, I instinctively would see whether something that I had learned in one of these other areas could carry over, and a lot of them have. What we came to call a “kin-cohort” is what, in economics, is called a “synthetic cohort.” When we were deciding whether to use random-digit-dialing, and exactly how to do it, it’s stratified sampling. I had done it before, but in a slightly different way. In handling large databases that are not identical, this comes up in consortia work. It also comes up in designing questionnaires that have to cover a lot of territory. I had a lot of experience with merging and matching, in a technical sense, data from different sources in economics.
I think my health policy work helped, more so when I became a Deputy Director. But I came to some of the management and policy questions with an instinctive sense that you should learn the history first. You should figure out who are the different players. Or, more to the point, what are the boundaries between one field and another, and then you should be always prepared to cross the boundaries–my metaphor’s breaking down here—but to step over the fences, but with permission, with respect for boundaries. I think those experiences that I had begun to have outside of epidemiology transferred into the work that I did. From then on, I was doing it in the epidemiology context, but I was bringing [in] training in both history and policy, and a willingness to shop around for tools from other disciplines, to do what needed to be done.
HWT: Last time we spoke, we got into the weeds quite a lot about the various studies you focused on and led, including of course melanoma, bladder cancer, non-Hodgkin lymphoma, and ovarian and breast cancer. I want to do that, but I want to also be methodological about it in the sense of asking the same questions, but I want you to focus on what you think is important from each one. So, you might not want to focus on each one, in other words. I know that you did start with the bladder cancer study. I believe, if I remember correctly, that was even part of your interview process with Bob Hoover. Can you talk about that?
PH: It was. I’ll take a little time on that one because, in retrospect, I can see that an awful lot of the elements of the National Bladder Cancer Study got incorporated into the next seven studies where I was really hands-on as the designer, that is, as the Principal Investigator or co-Principal Investigator, and probably another 20 or 30 studies on which I was an advisor. So, I will dig in a little bit to the National Bladder Cancer Study, and then probably can more quickly zip through some of the others.
The background was that a hospital-based case-control study of bladder cancer in the middle 1970s implicated saccharin. That was at the time the most widely used artificial sweetener. And [the study] was rapidly followed up with an animal feeding study, which also had an equivocal but positive result. That meant that there was tremendous public interest in whether this widely-used additive should be banned. The FDA had to decide, and they had to decide really fast. The study had to be strong, methodologically strong, and it had to be fast. And big. The design elements that pretty much anyone would have said would be optimal were that it had to be population-based, and it had to be case-control, because we were looking at a lot of other exposures, too, that were known to be important to bladder cancer. It had to have an interview component. Because one of the concerns was by-products of chlorination in the drinking [water], we knew we would have to match it also to water supply records across the whole country. For some people, we would need to get their medical records. So, it had to do a lot, but it somehow had to do it well and do it super fast.
I think some of the important pieces –I’m going to toggle a little bit between the organizational part and the strictly scientific part. The strictly scientific part, which caused us then to develop some of these new tools, was to be population-based. We needed to be able to include all cases. There was already the Surveillance, Epidemiology and End Results Network [SEER], which is run by NCI, and it’s how we [in the U.S.] get rates of cancer. But we had to turn it into rapid response. Because even though [SEER centers] already knew how to get all of the cases eventually, sometimes there was such a lag that you couldn’t interview them. They [the patients] were either too sick or they had died. We actually adapted SEER to become a rapid-response national case identification network. That was a big one, and they continued to have that capacity thereafter. You could say that was capacity building.
The population sampling over the age of 65 was novel in that there’s no registry of the whole population in the United States, but CMS, our sister agency, runs Medicare, and they have a file of everyone who would be eligible for Medicare because they’re over 65. We approached them and said, “Can we use this?” They obviously have privacy and confidentiality considerations. We were persuasive, in part because it was such an important national question to respond to. I still remember they couldn’t quite figure out how to make enough paperwork move fast enough to be able to guarantee the privacy in the transfer of the data. One of our colleagues had a truck, and we all drove over to the place where the tapes were and then drove back to the office with the tapes. You do what you have to do, but it was a sign of the speed of the project. We went from my arrival in September of 1977 designing the study to within a year having finished it. Which was extraordinary, considering it had 3,000 cases and 6,000 controls.
For [sampling] the under-65 population, we decided to try random-digit-dialing, which had been used for political polling. Holly, you can see how this is one of those where, when I reach back to my other research experiences, it’s directly relevant. And we were very fortunate that the fellow who had thought about it [random-digit-dialing], Joe Waksberg, was available to help us from our contractor, which was Westat. I’ll tell you that in brief. It’s not that hard.
Telephone numbers are not assigned at random. So that if you truly randomly pick a number out of the hat and it’s a working number and someone answers the phone, it is more likely that other telephone numbers that begin with the same five digits, just varying the last two, will themselves be working numbers with phones. Using that insight, you first sample a phone number and then you decide whether you will sample from the other phone numbers that share those first five digits, and thereby you save enough money that it becomes a practical way to get a sample of the population.
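[Editor's note: what follows is a minimal sketch, in Python, of the two-stage random-digit-dialing idea described above. It is illustrative only; the bank size, the quotas, and the is_residential check (a stand-in for actually dialing a number) are hypothetical rather than part of the original study protocol.]

```python
import random

def waksberg_style_sample(prefixes, is_residential, n_banks=5, k=4, seed=1):
    """Illustrative two-stage random-digit-dialing sample.

    prefixes       -- five-digit local prefixes; each defines a "bank" of
                      100 numbers sharing those first five digits
    is_residential -- function(number) -> bool, standing in for dialing the
                      number and reaching a working residential line
    n_banks        -- banks to retain at stage one
    k              -- extra residential numbers to find per retained bank
    """
    rng = random.Random(seed)
    sample = []
    retained = 0
    while retained < n_banks:
        # Stage one: pick a prefix and one fully random number within it.
        prefix = rng.choice(prefixes)
        first = prefix + f"{rng.randrange(100):02d}"
        if not is_residential(first):
            continue                      # discard this bank, draw again
        retained += 1
        sample.append(first)
        # Stage two: keep dialing inside the same 100-number bank until
        # k more residential numbers turn up (capped so the sketch halts).
        found, attempts = 0, 0
        while found < k and attempts < 1000:
            attempts += 1
            candidate = prefix + f"{rng.randrange(100):02d}"
            if candidate not in sample and is_residential(candidate):
                sample.append(candidate)
                found += 1
    return sample
```

[Because working residential numbers cluster within banks, most stage-two calls succeed; that clustering is where the cost saving comes from.]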
It was extremely successful in that it became widely used by many other people. It is still in use, but it should not be; it shouldn’t have stayed in use once cell phones came in, because it no longer has the desirable property of really giving you a random sample of the population.
I think the other two things that we did that were important (but maybe less marquee than rapid case finding in SEER, getting CMS files to be used for epidemiology, and random digit dialing) were that we did several add-ons to the study. Because we knew there would be so much scrutiny. It’s a big industry. There was a lot of concern. We knew whatever we found, [there would be] lots of scrutiny.
Wherever we could, we added a small study on to test the methodology. We added a small study that was hospital-based so we could see whether our assertion that population is the base, is the way to go, would actually have produced a different answer. We added, wherever we could, nested studies to say, well, we coded occupation this way, [call it] Plan A. We could have used Plan B, so we would take a random sample, and see if Plan A or Plan B worked. Those were all really useful. Again, not just for our study but for future studies. We did a lot of quality control to make it transparent so that if you today wanted to read what we did or know why we did it, you could find it.
We did that for the general scrutiny by the world, but also, we did it because Joe [Fraumeni] understood that our evaluators, the Board of Scientific Counselors, would want to know: when you use a contractor to do so much of the field work (which you have to do to scale up) do the scientists still have control of what’s going on? The only way you can show that is to say: here are the procedures we followed, here’s where we went, this is how we did it. So, we set the standard within the Division for future marquee studies: show that you have assessed and assured the quality as you went along the way.
What we found was that saccharin itself contributed next to nothing to bladder cancer risk, which we published in The Lancet. But, before we went to The Lancet, we reported it to the Food and Drug Administration, to DHHS—I still remember going to the meeting, the great big, long conference room and doing the report. They had co-funded the study. So had the Environmental Protection Agency. We had been able to use SEER because our sister division had let us, encouraged us so to do. Another effect, a long-running effect, of this study was that we really strengthened those relationships with the sister agencies and the sister divisions.
And now, back to the beginning: How did we wind up doing this? And that’s because when the inquiries first started coming in after the positive study, Bob [Hoover] and Joe [Fraumeni] were fielding them and beginning a quiet campaign within the Cancer Institute with food advocates that we [NCI] should be allowed to do this study. Not to the exclusion of putting money in the R01 pool for extramural investigators to do it. Indeed, a very good friend and colleague of ours did an R01-supported study at the same time, but [rather] to say, this is what the DCEG can do. We can be your strike force to respond to a public, high-profile problem and deliver a study that will be very sound, in this case very big, and we can do it reasonably fast. So, I think it was, it was a good study.
It also for me said, “Yeah, this will be fun, let’s keep doing it.” It also introduced me to a management concept with which I think Jennifer’s probably familiar.[1] It is the idea of the medallion study. At any time, an organization like ours has to have some way to say we’re doing a lot of things at the same time. We need to keep doing them. We need to do them well, but along comes a special study, and you need to be able to say, at the moment, this one and this one have the medallion. Which means other people will be all hands on deck [for those studies]. And you can pull in resources. Get some help getting your OMB clearance. Whatever it is. So that was a medallion study. When I became Deputy Director, one of the things I could do was, if someone else was doing such a study, help them. Help them get over the hurdles, pave the way, bring the resources that we need. It was a very good science experience, but it was also a good management experience. That was the National Bladder Cancer Study. I will not say as much all the rest because they’re all a lot like it.
HWT: That’s all right. Fantastic, though. Important to get this down, so I appreciate that. I do have a couple of follow-up questions. I understand, and you’ve been talking about all of these things in terms of the methodologies that you developed. My understanding is that you’ve introduced methods that helped to accelerate the field.
PH: Yes.
HWT: Can you just take a moment to talk about that?
PH: Yeah, I would claim that all five of those novel things that we did in the bladder cancer study moved the field. So, when the NCI SEER network became capable of rapid reporting, it became, really for the first time in its existence, able to perform the mission of epidemiology. The name stands for Surveillance, Epidemiology, and End Results. But if you can’t get out there fast, you can’t do it. With that, SEER became capable [of rapid response], and thereafter they regarded it, indeed they still say, this was a landmark study. And that’s what SEER means by that. As for using the Medicare files, many, many studies since ours have done it because we showed that it could be done safely, respectfully, fast, and not compromise any of the confidentiality. You no longer have to drive there in your truck to get the files to be able to sample people over the age of 65, and many studies have used that. Random digit dialing has probably been used 2,000 times since we showed that it could work on that study.
A little bit fuzzier [is that] we wrote a couple of papers about the methods of the study, emphasizing the quality control and the quality assurance activities that you ought to do when you’re doing a study like this. Now, by “like this,” I mean using a lot of the taxpayers’ dollars, or liable to get scrutiny. Honestly, what I meant is that any epidemiology study ought to do these things. Documenting precisely, so that it’s reproducible. Epidemiology is just a science like any other, and a hallmark of science is that it’s reproducible. You should be able to do what I did and get the same thing. If you cannot, there’s something wrong with what I did. (laughs) Actually, I’ll put a pin in that and come back to it, because it’s back to your question about sizes. The one thing is, if you do it three times and I do it five times, we will get different answers because three and five are very small numbers. Procedurally, if you do what I did, you ought to get the same answer. I believe we raised the bar, particularly for case-control studies published thereafter, for what you need to do as you go. Does that make sense?
HWT: Yes. Thank you for that, and I also wanted to ask you, you arrived at NCI when epidemiology was at an inflection point. You mentioned cancer maps. I always like to ask the people I’m interviewing, in particular in the science world, about technology and how technology influences the field, from basic computing to the Human Genome Project to a lot of other things. My understanding is that it was a time when epidemiologists moved to using technology. You know, starting with the maps, the ecological studies of cancer mortality. Also, the genetic and molecular epidemiology was born and grew during your time in DCEG and that you contributed a lot to both. Can you address those?
PH: Yeah. I’m going to try to be a little bit more succinct because we could talk for a long time about that. Yeah, a couple of observations. Even small changes in technology matter. When I started the bladder cancer study, I said [that] we need to store these data in this [new] way. It was not what was being done then. It was still small scale, I guess. So just using the then-available computer power more effectively was tremendously important. When I arrived, people would still print out cross tabs and then take a handheld calculator to compute the odds ratios and the confidence intervals. I always had partners. Louise Brinton and I trained at the same time, and we said, “This cannot be.” So, we got what became a contract for real computer science support. So, even small things like that.
For bigger changes, I think there are two striking effects when I look back on how technological revolutions come in, and they’re at odds with each other. One is there’s often an immediate belief that, “Oh, yes, this will solve everything, we can do everything.” Genotyping at first had that, without a realization that there could be a lot of errors, and particularly errors due to the sample sizes. There was a tremendous fervor when it became possible to measure a few SNPs [single-nucleotide polymorphisms] in a few [of] what you thought surely would be the important genes, and a lot of papers came out that were just all over the place, in conflict with one another. We now know that it’s because we didn’t know as much as we thought we knew. If you pick your favorite genetic pathway and look at a few SNPs in it, it’s very, very unlikely to succeed more than 10 percent of the time, probably even less than that. So, there’s sometimes a first flush of, “Oh, this will do it for us.” [SNPs], machine learning. Quite a few things.
At the same time, there’s a reluctance. We all do what we do. We know how to do it. We can be a little hidebound. There can be some resistance. Obviously, the trick is to be in the middle, and not in the middle for its own sake, but alert to where the possibilities may actually translate into utility for epidemiology. And now I’ll speak first as a manager and then as a scientist. As a manager, Joe always understood this. It was very important - for the entire time I was in senior leadership, so the last 20 years of my career. You really want somebody in the Division playing with the new stuff, but only playing. You want them exploring it. Nutrition is filled with these things. Liquid assays. Lots of very promising, very new, untested. And you’d like a little bit of the [Division] portfolio exploring the technology in the place where it looks most promising actually to reveal something. You don’t want to turn the ship around and just head that way.
And then, this is the hardest part. Because [the] people who are willing to do it and wanting to do it are wildly enthusiastic, you need to put some milestone stopping points in. At those milestone stopping points, you need to take a good skeptical look and say, “So, how did that go?” Metabolomics was one of those. Oh, it’s just going to tell us right away what we need. So, by then, whoever it is that’s in it, they’re so invested that they really don’t want to hear, “This didn’t move the field.” But you have to be prepared to say that. So, from an overall manager’s point of view, if you’re Joe and you’re leading the direction, you really want to be reading widely. Every technological advance that could bear [fruit], you want to know about it. From some of those, you want—and there will always be people who want to do this—you want a little bit of experimentation and then you want to look. You want to stop and call and stop and look, and then you want to do it again three years later. Because maybe the technology wasn’t right then but maybe it is now.
From an individual scientist’s point of view, you want to do the same thing, but you don’t want to get too sort of mesmerized, we’ll say. I cannot tell you how many times I have seen people – or I myself – have been mesmerized by the latest new technology. Honestly, the less you understand it, probably the more mesmerizing it can be. New technology, you must explore it. You have to be aware of the siren song. You have to constantly be considering, could this be useful to me. Was there another part to your question?
HWT: No. I think that’s a good answer. Let’s move on. I wanted to ask you if there was anything that you wanted to add about some of the other studies that we focused on in the first interview, including non-Hodgkin lymphoma.
PH: Yeah, I do, actually.
HWT: We talked a great deal about the Washington Ashkenazi study. We talked about the design of the studies. Is there anything that you feel is important to talk about right now?
PH: Yeah, let me, if I may. I made some little notes for myself. So, after doing the bladder cancer study, actually while I was doing the bladder cancer study, it was a few months in, I launched an ovarian cancer case-control study in the District, in Washington, D.C., and it was hospital-based. The focus was on ovarian cancer etiology, particularly the protective effect of oral contraceptives, the role of parity, and talc. Talc has since become even more controversial, but it was raised as a hypothesis then. A key concern was that ovarian cancer has a wide variety of histologic types. So, you need a study that has decent histology.
The methods were in some ways straightforward case-control, but hospital-based. And doing that study, I realized I had to figure out the catchment area problem. That is, you can draw a circle where everybody shows up in a hospital. Within that circle, almost for sure, you can draw a larger circle where many people do, but not everybody. And then there’s like the rest of the world and some people show up there. So, you have to figure out what’s the right control group to do that, which I learned to do on that study. And we did show that oral contraceptives were protective. Talcum powder had no effect. And there was a lot of heterogeneity.
Immediately—well, no, a few years later—Alice Whittemore said, “We have a lot of good studies of ovarian cancer. We need to pool them for numbers.” So I joined her collaborative ovarian cancer pooling project, which is part of my training in how to do collaborative studies and pooled analyses. I learned from that the value that a consortium can bring in talking to other people who are working on something you care about. Ovarian cancer, I cared about. This was a theme throughout my career of ovarian cancer.
And I also found that I could make a risk model by using the case control data. And she [Whittemore] and I together had a lot of fun doing that. That became something I did in other studies.
Then I worked on the skin cancer study with Peggy [Tucker]. She was the leader of this. I mostly worked on some design elements. When I pull out [now] what was most important for me, it was really all about dysplastic nevi [atypical moles]. Are they real in the sense that everybody can recognize them? Do they matter outside of the family context? Our study showed that indeed they do. They’re a very important predictor of melanoma. More than that, they’re common enough outside the family context that, if you have a risk model, if you’re trying to predict who ought to be screened, you should take that [dysplastic nevi] into account. Then something that I was able to add was a reliability study to show that experts will agree. If you show them photographs of moles, they will, by and large, agree on the color, the shape, the size. And they will agree on what’s dysplastic.
So that, the reason that was worthy of publication in JAMA is because those were important problems. It didn’t have the urgency of the saccharin study. But it had real impetus because of the clinical implications. So that one [study] showed me again that I could contribute by helping in simple design ways, and also adding on studies that might strengthen the interpretation of a study that would get a lot of scrutiny. In this case, it was clinical scrutiny because not all dermatologists at the time we began the study thought that this was a practical and meaningful thing to do, to look for dysplastic nevi.
The same thing when I look back on the Washington Ashkenazi study, I remember a lot of the anecdotes about it. And it was fun. But really, the three breakthroughs that got us there also give you a sense of what life was like in DCEG in those days. We couldn’t have done it, we couldn’t have done the study, except for Jeff Struewing discovering that there are Ashkenazi founder mutations. That meant we did not have to scan two big genes using expensive technology, which was actually at that point under patent. And then, Larry Brody and he [Jeff] found that it was practical to do the finger stick. And then we showed that the kin-cohort [design], again, a kind of synthetic cohort, could work.
But I realize I haven’t really explained the cleverness of that one. Because the discoverer of the BRCA1 and 2 genes said, “If you carry a mutation, your breast cancer risk is 90 percent,” which I thought could not plausibly be so. I’m not trained as a geneticist. But I thought, you can only inherit your genes from your mother or your father. So half of the time, if you have a mutation, it had to have come from your mother. Therefore, half of the mothers of any man or woman who carries a mutation must have been carriers themselves. Half of them, therefore, by her claim, would have had in excess of a 90 percent risk, and the other half would have been ordinary mothers, and they would have had the background risk, which I’m going to just call 10 percent. If you put all the mothers together, if her claim had been correct, you’d get the average. The average is 10 plus 90 divided by two, which is 50 percent.
If you ask people who have a mutation, man or woman, “Did your mother have breast cancer?” and fewer than half of them say, “My mother did,” you know 90 percent [risk] isn’t right. It’s a pretty simple insight, but we had to prove it. Once we had proven it, it meant that we didn’t have to do a long, expensive prospective study. We could make a synthetic cohort. Again, a concept I borrowed from economics.
It was unusual enough as a concept that we did really need to confirm it right away, which we were able to do by going to Iceland, where they [also] have a founder mutation. It was easy to show that indeed the risk of breast cancer is much closer to 50/50 than to 90%. When I think back on Washington Ashkenazi, that’s part of what sticks in my mind.
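[Editor's note: a worked version of the arithmetic above, using the illustrative figures quoted (90 percent risk for carriers, roughly 10 percent background risk) and the fact that about half of the mothers of carriers are themselves carriers:]

$$ P(\text{carrier's mother had breast cancer}) \approx \tfrac{1}{2}(0.90) + \tfrac{1}{2}(0.10) = 0.50 $$

[So if well under half of carriers report an affected mother, a 90 percent penetrance cannot be right; the kin-cohort design turns that same comparison into an estimate of the true figure.]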
[The] DES [diethylstilbestrol, a synthetic estrogen] study was in five phases. In brief, DES was heavily promoted in the mid-century. Millions of mothers, basically my mother’s generation, were exposed. The research had really five very different phases, and my role came in the later ones. The randomized clinical trials showed it didn’t work. Then there was the cluster of very exotic clear cell cancers [of the vagina or cervix] in young women that showed that it [DES] had this astonishing ability to affect the development of the women in utero. Then there were some very good cohort studies that showed a whole lot of other effects on the reproductive tract and on fertility and infertility. I entered only at the next stage, where the question arises: when women get to middle age, where cancers could show up, would the daughters have an excess risk of breast cancer? The problem that we were solving was essentially one of numbers. Each of the five individual studies that we knew about, that were good, was designed differently, and they were going to be underpowered. It was the same problem that we later encountered in genome-wide association studies, but they [investigators] didn’t automatically want to work with one another. Bob Hoover led this. I was very pleased to be involved in this field for ten or fifteen years, but he [Bob] was really in it for the whole 60 years. He was really a giant, and that New England Journal article on the daughters is, I think, the definitive work on what happens to them. I did some work on the mothers. I think the other thing that for me was important was just promoting the consortium.
HWT: Okay. I’m going to interrupt you there because I just want to clarify for the record, I know we talked about DES before, but can you tell us the full name?
PH: Oh, I should start again. Yes, I’m sorry.
HWT: Also this is specific to mothers, sort of mid-twentieth century, being given by doctors a drug when they were pregnant in order to prevent miscarriage that turned out to be carcinogenic. Is that correct?
PH: Yeah, let me step back. I keep saying “we said” before. Let me just take it from the top, as it were. It was heavily promoted in the middle of the century as a way to prevent miscarriage. Millions of women took it in pregnancy. Then, by the 1950s the randomized clinical trials showed that it didn’t work, and then by 1971, there was an extraordinary discovery of a cluster, meaning (I think it was seven) young women were seen at Mass [Massachusetts] General with this very unusual cancer at a very unusual age. What was in common was that all of their mothers had taken DES, diethylstilbestrol, when they were in utero. That was the shocker.
But that prompted studies, particularly of the daughters, but secondarily of the mothers and the sons, to see what else was going on. It became clear that the daughters had an increased risk of a variety of abnormalities, many of which would prevent them from carrying a pregnancy to term, but it wasn’t clear whether they would have later risks of cancer as time went by. It wasn’t until enough of the women who had been exposed in utero got to an age where they would be expected to have cancer that you could do those studies.
That’s where I entered the field because it was clear to anybody that we needed to combine the strength of the individual studies and make a new study. This is not a consortium. This was making a new study. Each cohort would keep track of the women they had been following, but they had to agree to a common protocol. I wrote another questionnaire. They [principal investigators of the separate cohorts] had to agree we’d do follow-up in a certain way. It was a brand-new study from people who separately had five rosters of mothers, daughters, and sons who had been exposed. Holly, did that answer it?
HWT: Yes, it did. Very well. Thank you. That was very clear. I have a couple of follow-up questions with regard to some of the studies that you’ve talked about.
PH: Sure.
HWT: When you talked about the ovarian cancer study, you mentioned, you said people show up there. Are you talking about the different sort of circles? You know, beginning with the hospital and then moving outwards. Can you just elaborate on that a little bit?
PH: Yes. When you do a hospital-based case control study, you’re making the following assumption. The people who have the disease I’m interested in, whether it is melanoma, ovarian cancer, whatever it is, they come from somewhere. You want to compare them to the population whence they come, and that’s not recorded anywhere. In other countries where there’s a National Health Service, you can say, “Oh, yes, oh, yes, if Holly had had it, she would be sent to the same hospital that Trisha is sent to.” You can’t do that here. You have to understand for any individual hospital, when you are creating a study, where did the people come from? You especially have to do it if you’re interested in things that are themselves related to geography, and a lot of things are. In the melanoma study, it would certainly be related to your income and your sunlight exposure and where you go on vacation and all of those things. In the ovarian cancer study, it certainly would be related to your income and your wealth and how many children you have and are you treated for infertility and all the things you care about.
So, it’s an important concern. Our insight in Washington, which then we were able to use in the melanoma study, was that you can actually do a little digging and figure it out. So, once you look at a hospital and figure out where the patients come from in the service [or ward or clinic] that you’re interested in, then you can say, “I think I’ll get controls who represent those areas.” You don’t have to, but it’s an addition that takes away one of the concerns you otherwise have with clinic-based studies and hospital-based studies. It means that your definition of who’s in [the study] looks a little bit like a target, [with a center and two rings], sort of like that. Does that explain it, Holly? This was one of many fairly minor methodologic additions. I’m proud of them [methodologic additions], and this is somewhere between our questions about my life as a scientist and the [individual] studies. For me, [each methodologic addition] was [also] another tool in the toolkit. So, if I encounter somebody who says, “I need your consultation. I want to do this,” I can say, “Here’s an option you could consider.”
HWT: I wanted to ask you as well, going back to bladder cancer, I thought I heard you say that you realized that saccharin was in fact not the problem. But you did see in the 1970s an increase in bladder cancer, especially among women. Do I have that correct?
PH: No, no. I confused you.
HWT: Can you explain to me?
PH: No. I have confused you. So, the report that caused us to do the bladder cancer study was weak, a poorly designed study that said women are at increased risk of bladder cancer because they had artificial sweeteners in their diet. That’s not true. I mean, that is, we found that not to be so. The animal studies unfortunately were also kind of weak, but I think it was the female rats that got it. So that left it as a viable hypothesis when we started the study. Because of that, we were very careful to say that we cannot say that it’s absolutely zero in women. It’s very hard to say zero. You can’t say zero. What you can say is it is vanishingly small. Because it did still cause cancer in laboratory rats, I think the FDA for a very long time, perhaps still, have to have signs up that say—
JL: [Jennifer Loukissas] I’m jumping in just because I want to add that Bob Hoover, I was going to write it in the chat, but it’s taking me too long. Bob used to say that the rats were given doses that resulted in, like, crystals in their urine, which was a secondary factor, but we didn’t know that at the time, right? We only knew that later.
PH: Right. Right.
JL: You didn’t say that part.
PH: That’s true. That’s good. That’s good. And it gives a flavor, I think, of how it was at the time. You’re always responding in these high-profile situations. You need to do the best you can at the time you can, but you continue to listen out for that. The poor FDA didn’t ban saccharin. [It] eventually got replaced because it had so much bad press and it doesn’t taste that great. But we could answer the question, which was, “In humans, no, this is not causing a lot of bladder cancer.” What I may have confused you with, Holly, was that for non-Hodgkin’s lymphoma, there was, and for a long time continued to be, a trend of increasing risk for men and for women. Did I clear that up?
HWT: Yes. But I do have one other follow-up question before we move on. I just want to make sure we’re clear.
PH: Okay.
HWT: Which is, getting back to the ovarian cancer, so the study was specific to DES.
PH: No.
HWT: It wasn’t more generalized.
PH: No. No, no, sorry, two studies. I’m sorry.
HWT: So, can you explain a little bit more about the ovarian cancer, what the focus was?
PH: Sure. Sure. I did an ovarian cancer case-control study in the District of Columbia starting in 1978. [It had] one question about DES; it was not focused on DES. But it did get me into the world of ovarian cancer, which I continued to study in a variety of ways for the rest of my career. The reason I mention it there was that study, the [Washington,] D.C. study, taught me some methods, and it introduced me to the power of, after a study, joining a consortium, doing a pooled analysis. That introduced me to some other lines of research, which were generally applicable across the board.
DES I got involved in substantially later, in the 1990s. I already knew about it because every cancer epidemiologist does, but my involvement in the DES Combined Cohort study that Bob [Hoover] and I and many other people helped to create, that was in the 1990s. I’m sorry, that may have been in the interest of speed. But they were completely different studies. One was a small 1978 case-control study of ovarian cancer in the District of Columbia, with 300 cases and 300 controls. The DES combined cohort studies were in the 1990s.
HWT: And are there any other findings that you want to mention with regard to your work on ovarian cancer previous to DES, or shall we move on?
PH: I’m going to come back to—well, I’ll just give you the whole thread, then. I worked to help design the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial [“PLCO”]. And that’s because I had, by then, a lot of background in ovarian cancer. And—well, I think let’s pick up that one later. Let’s go on.
HWT: Okay, so you’ve mentioned the importance of pooled studies, collaborative studies, consortia, collaboration, which we also spoke about last time. Let’s talk about your role as the NCI chair of the cohort consortium. First, can you describe that consortium?
PH: Yeah. I made a few notes on this because there’s so much to say. I want to say less. So, the [NCI] Cohort Consortium is a voluntary organization that prospective cohorts can join. They have to [include] maybe 10,000 people or more. They have to be a well-defined cohort, which is a technical term. And they have to be able to ascertain cancers.
Setting up this consortium, this particular one, the NCI Cohort Consortium, was one of the very big goals that Bob and I had when we began the [Epidemiology and Biostatistics] Program. We were motivated because we knew that with genome-wide association studies on the horizon, we would be swamped with false positives. Bob was especially passionate about this, and was on the road giving his tsunami speech, which was correct. So, we wanted to be ready to meet it. It wasn’t happening at that time, but it was part of what we thought the Division of Cancer Epidemiology and Genetics, our brand new division, should do, and that our Program, Epidemiology and Biostatistics, should really support. It was actually part of a wider strategy that we had. But it was critical, to answer your cohort consortium question.
The goals were to be able to do both parallel—you use your study, but you put it in the same format as mine—and pooled—we put our data together—analyses, when it would be possible to do these genome-wide association studies.
We thought that the first need, the driving need, would be in the common cancers, in breast cancer and prostate [cancer]. Because we knew that they were common enough that that would be the first place in which you would begin to see [potentially] one report from the Teachers’ Cohort, one report from the Nurses’ Cohort. Our sister division, the extramural Division of Cancer Control and Population Sciences, could not possibly fund all of this. It would break the bank. It would litter the—you know, we would learn much less than we could. We would have false positives; we would have false negatives. So, we knew that, in breast and prostate [cancer], there was one other possibility, and indeed this happened. Studies that are not as rigorous as [those of] breast cancer occurring in a cohort would get out there and do it first. Indeed, a consortium for breast cancer studies without any particular definitions, or very lax definitions, of where the studies came from did get going. They do a lot of work and important work. I’m not opposed to that, but if you want to know if it’s right, you want to start with the strongest design you can. If you are going to spend the kind of money it takes to do a genome-wide association study, it had better be in a prospective cohort. That was kind of widely known, but how do we get from here to there? That is, it was widely known we would need genome-wide association studies done in good studies. Widely known by epidemiologists. It was widely known by non-epidemiologists that genome-wide association studies would happen, but we were quite focused, beginning by, as I say, 1995, on getting the infrastructure in place so that this could happen. Our notion was the very first round of studies would be what was called BPC3, the Breast Prostate Cancer Cohort Consortium. Bob took the lead on these.
At this very same time, we had agreed, he and I, that in parallel we would be strengthening some case-control consortia in the less common cancers. That included InterLymph (for non-Hodgkin’s lymphoma), and an ovarian cancer consortium and a pancreatic cancer consortium. But the gold ring to get the cohort consortium going would be a [successful] project.
Can I pop sideways for just a minute here? I just wanted to say something that I learned over the years about consortia, like how they work. Because I find that now I’m going back and forth between the consortium itself and its projects. And I want to sort of sharpen that distinction. Would that be okay? Okay. I think that a consortium can be very powerful, but it has to follow best practices. I believe that those are voluntary, that you make clear what the requirements are to join, and that it’s open. You can’t say, “You just have to be a friend of mine to join.” That’s not right.
The projects need to be well-defined. They are distinct from the consortium. In many consortia, you can say yes to one project and no to another. Or you may not be able to be in one project because you don’t have the data. And you could say no to another. I think the project should fit into an overall strategic plan. I think the leadership should always rotate. I think that’s where I have had success, where Bob has had success, where DCEG in general has had success, is when we’ve been able to help early, to show that it can be done, to set up the structure. Honestly, to be generous, to share the credit. Because otherwise it looks like “Let’s build a clubhouse and I’ll be the president.” Right? It’s not right, and it’s not sustainable.
I think I learned that partly from the Oxford pooling project. Valerie Beral was one of my heroes. I mentioned Alice Whittemore - she did that in the Ovarian Cancer Pooling Project. IARC has a lot of experience in this world, bringing people together from other countries. These are, I would say, consortium best practices, and we tried to apply those in general in the consortium. When I talk about the consortium, I mean the 36 cohorts joining together. When I talk about the projects, I’ll try to be distinct.
I tried to work on some but not all, again, as a matter of principle. And Bob did, too. If every single project had to have a DCEG person on it, that’s [not good]. I mean, you could represent [particular DCEG] cohorts. But always to be run by someone in DCEG, that doesn’t feel like building a collaborative structure that is going to last, have buy-in, be creative, and evolve as technology changes. We [Bob Hoover and I] both made a point to be very active in some projects and inactive in others by design.
Okay, now bouncing back, (laughs) you’ll have to put me back to where you want me on the cohort consortium questions. But basically, Bob [led] on BPC3. I did some helping there. I can explain what I did. More important, I took the baton to get the uncommon cancers, starting with pancreas—the uncommon lethal ones, PanScan in particular, and I got deeply involved. That’s one where I needed to co-lead the study, to lead the study in order for the consortium to be able to succeed in its strategic plan. I did not have to do subsequent ones, but I did need to get the first one to go.
Also, the same thing [happened] with Vitamin D. I had to get deeply involved to prove the principle that the cohort consortium could use serum that had been banked to do bioassay studies. And also, the first “dry” projects, BMI, tobacco and smoking. So, kind of [over] the whole map. Why don’t you tell me which parts of the cohort consortium studies you want me to talk about?
HWT: Well actually, you mentioned PanScan, and I think you know, because that was a landmark study, you can begin there.
PH: Okay.
HWT: I know we talked about it before, but I’m not sure. You might want to add about the role more generally of cost-effective genome-typing technologies in cancer research as well.
PH: Okay. Let me back up then to before PanScan, because that was fundamental to the consortium. The way that Bob was able to launch BPC3, the first [Cohort Consortium project], was to partner with David Hunter at Harvard and with Stephen Chanock, who is now our Division Director. And Stephen said, “I can make the genotyping go well enough here.” David said, at Harvard, “I can make the genotyping go well enough here.” We’re going to need some money. And we’re going to need people, we’re going to need five studies. Big studies. Only the biggest ones. We don’t need the small ones to do this. Every time you add a study into a combined analysis, you add [to] the cost. The linchpin was that we can do high-quality, genome-wide association studies because we can do the genotyping. The technology there was transformational. I’m describing the part that’s the epidemiology. I’m describing all the stuff that goes around it, but the transformation was [technologically] we can do this.
Stephen led it, again, with great partnership, lots of sharing. Lots of sharing, but you can’t just share; you have to lead. That was the secret sauce for BPC3. Which, in turn, showed the consortium you’re really valuable, and the Cancer Institute thinks this is really important and a way to go. And then came PanScan. So, having proved with breast and prostate that bringing down the cost of a genome-wide scan means we really can find brand new genetic markers for risk, we could do it for other cancers, too. Is that okay, Holly? I know I jumped out. Unless you know that, PanScan doesn’t make any sense. Okay, so having done that, then the next—
HWT: Just one quick follow-up question. I need to interrupt you. Tell me about the timing of this. What are the years that we’re talking about for PanScan?
PH: Oh, goodness. I think I probably don’t know that I can retrieve them. Oh. I don’t know.
HWT: We can always put that in. I’m just curious.
PH: Let’s put a pin in it.
HWT: Yes.
PH: We’ll put a pin in it.
HWT: So, go ahead. Continue. Sorry to interrupt.
PH: That’s okay. I can quickly look at my papers. At the beginning of one’s career, one is in a blur doing things like the Bladder Cancer Study, and when you’re trying to get the Cohort Consortium up and going, it’s a kind of a blur, too. As soon as it was clear that breast and prostate could go and would be very successful examples of doing a genome-wide association study with really solid epidemiologic studies, the next order of business (again that we had foreseen in 1995). Oh, I guess I can give you the years right. I think we were in about 2005 for breast. We were probably out in about 2008 or ’09 for pancreas.
[A GWAS] for pancreas, pancreatic cancer, was a terrific candidate to answer some science and strengthen the consortium. The science question is “What are the causes of this incredibly lethal tumor?” And there is a familial component, so there was a lot of reason to think that there would be genes at play. But because it’s so lethal, case-control studies really leave you uneasy because so many people have died before you can interview them. Prospective studies have a lot to recommend them if you’re looking at pancreatic cancer, and particularly if you’re going to spend a whole lot of money to do a genome-wide scan. You don’t want to spend that kind of money unless the people in the study on whom you’re doing it are really going to give you confidence in the answers that you want. Three times before, the Cancer Institute had tried to launch a pancreatic cancer study, because the advocates want to know, “So what are you doing about it?” And [the earlier attempts] just hadn’t gone. They just hadn’t flown because there didn’t seem to be enough promise that much would be revealed.
PanScan was different. We said we’re going to use this new technology. We’re going to look across the entire genome. We’re going to do it in the best possible study design. Then the organizational trick was to persuade everybody in the consortium to get onboard. The original price tag was prohibitive.
Now I’m going to toggle again, now not to the science, but to the management. I was spending a lot of time going back and forth between the extramural division, DCCPS, and DCEG, at this time. Again, very mindful of fences, but knowing when to be invited to step over them. This was a shared problem: how will we handle genome-wide epidemiological studies? It was a shared problem for the extramural and the intramural divisions, as was [the problem of] what do we do about pancreatic cancer? And I said—this is taking a page from Bob Hoover’s notebook—I said, “I think we can. I believe we can persuade these people to work together. I believe that if you can put in some money, maybe a tenth of what people think they need, but some money, we can do this. I believe that Stephen can do the genotyping just as for BPC3.” And it was enough. It was enough to have everybody excited about the possibility of following the same model. Although now many more cohorts.
I also am very pleased that I got some other sister divisions to play, too. The Heart, Lung and Blood [National Heart, Lung, and Blood Institute] are the ones who run a big trial that I wanted in. And they said, “We will kick in some money, so that they will be able to find out who’s got pancreatic cancer and send you the specimens out of the freezers. And you’ll be able to do the genotyping.” So, with a lot of people helping, we were able to do—
JL: Was that Framingham, Tricia?
PH: Yes.
JL: Okay. That’s an important study.
PH: Framingham was one. They also ran the, what’s the WHI? The Women’s Health Initiative. Thank you, Jennifer. Heart, Lung, and Blood runs Framingham. I don’t think they joined this study, but they’re in the consortium. But important for this study was the Women’s Health Initiative, which was a trial of vitamins, aspirin, some other things. Anyway, they—
JL: Hormone replacement therapy was notoriously—
PH: And hormone replacement therapy, mostly hormone replacement therapy. Thank you. So, they helped. Anyway, so there was a lot of—(laughs) I’ll digress a second. I know we’re going to have to make a return date, so I’m going to allow myself a digression. BPC3, as I told you, actually stood for Breast [and] Prostate Cancer Cohort Consortium. (laughs) But when I would hear it, I would say to myself, “BPCCC, Bob, please, chair, cheer, cajole.” (laughs) Because that’s kind of what you need: Chair as in lead. Cheer as in, “You can do this!” And cajole. You do have to kind of push people a little bit to get them to move. I think that a lot of what I did managerially for PanScan was chair, cajole, and cheer.
Scientifically, the most exciting thing about PanScan, and Jennifer may remember this, is when the results came in, and lo and behold we had findings in the ABO blood type gene. And Joe Fraumeni knows everything; there’s nothing he does not know in the world of cancer. And he was, I think, the very first person that [I told]. Well, I told Bob. But then I said, “I’ve got to tell Joe.” And I ran down the hall. And I said, “You won’t believe this, but we have found hits in the ABO blood type [genetic region].” And sure enough, he said—
JL: I was standing there. I remember this exact moment.
PH: (laughs) Chime in. It was fun. And of course, he already could say, “Why, I believe there was a paper back in—” And he was right. There had been a couple of papers saying ABO blood types influence your risk of developing pancreatic cancer.
And then, and this was a lot of fun, our Harvard colleagues said, “You know, I think we once asked the nurses, what is your blood type?” Without using any—and they’re nurses, they know—without using any genotyping data whatsoever, I’m going to go back and see what is the risk of pancreatic cancer in nurses who [said] they were B and nurses who were A and nurses who were O, and it completely confirmed the genetic finding. That was very exciting. I do talk a lot.
JL: I’m only going to add one thing.
PH: Oh, please do.
JL: Associated with. We don’t actually know, we still don’t know, 10 years, 15 years later, why ABO—or maybe we do and I just haven’t seen that paper—but I just want to make clear that back then, you would say “associated with elevated risk.” It wasn’t increasing risk. We don’t know how it’s increasing the risk. I’m just clarifying that. Trisha, Bob, all of them were always very careful about causal language and the need to stay away from over-interpreting the finding.
PH: I have to say that since I’ve retired, I’ve gone back to sort of casual speak. (laughs) So you might find that—so, thank you. Yes, I generally mean associated with. But I do mean that the people with that blood type [were at risk]. So, it was always nice to have separate lines of evidence support what you find in the genome-wide association study.
That was PanScan, and because it was so successful, it was really not hard to persuade people that we would be ready to try not just looking at DNA—because they could go back at home to their own studies, extract a little DNA and send it to us and still not be really sharing the crown jewels—that it was time to actually send some serum and do a study of vitamin D, a highly controversial area. That’s how we got into that. I also want to say, hopping out to the managerial [level] for a minute, I thought it was really important that an extramural activist in the Cohort Consortium take the lead on that study. It was very, very important. So, I asked Kathy Helzlsouer, who subsequently became the director of the epidemiology and genetics program on the extramural side. She was at [Johns] Hopkins then, and she said she would do that. And she had been skeptical that the Cohort Consortium would ever become valuable for the smaller cohorts. Not only did PanScan make her a believer, but it made her willing then to take major responsibility for an examination of vitamin D, which is how we came to do a study of vitamin D [in relation to] a handful of cancers at the same time.
HWT: Before we move on from your role as NCI chair of the cohort consortium, is there anything else you want to talk about? I want to move on to your work as deputy director beginning in 1996, as well. So, anything else about—I know there’s a lot more there.
PH: I’ll more or less recap.
HWT: Or more about any of the individual studies that we’ve talked about?
PH: Yeah. I look back on the Cohort Consortium with a great deal of satisfaction, and I think we’ve touched on why. I think structurally it represents best practices. I think some of the proof is, I don’t know what it’s doing now, but I think it exists. I think if you build a consortium, it should live after you. So I think that’s good. That means we did well then if it lasted even a little bit afterwards, and I think the projects fit well in a strategic plan at that time. I do not know whether they do now. I think the Cohort Consortium itself fit in a larger plan that Joe had. I’m kind of sliding into your question about Deputy Director.
Joe [Fraumeni]’s concept when the division was formed was [that] we will still stay very intramural. He famously said, “They have x-rayed me, and I don’t have an extramural bone in my body.” I never found that completely plausible, but that was his view. But he saw that for cancer epidemiology and genetics, we would have a special role to play.
I have to talk about the mission of the Division in order for you to understand the mission of the Program. Is that okay? Okay. After we became a Division, we got a tagline, “Discovering the causes of cancer.” I think I might even have nominated it. But in any case, I subscribed to it, because I think it encapsulates what we did very well. So, “discover” means research, not treatment, not basic biology, not even prevention itself. Discovering means we are doing it. We’re not fostering it; we’re not supporting it; we’re not giving grants. We’re actually doing it. We’re intramural. The “causes”. We say cause because we mostly do etiology. We occasionally do things that we have to do that aren’t etiology in order to support our etiologic studies, but we are mostly etiology. We say causes, plural, because we are a big division. My Program was a big program. It had radiation, infection, nutrition, hormones, occupation, environment, and biostatistics. That was our way of saying there are a lot of causes of cancer, and in order for us to fulfill mission, we need that breadth. That’s part of who we are. That was true across the Division, too, other branches. But now I’m focusing a little bit down on the Epidemiology and Biostatics Program. We said cancer, not cancers, the singular, because that means the entire spectrum. That includes precursors and that touches on survival.
The Epidemiology and Biostatistics Program was a major piece, probably the larger piece at the time, when the Division of Cancer Epidemiology and Genetics was formally created. Within that framework, the goals that Bob [Hoover] and I had for the Program were some of what you expect. [First], we had these good strong branches (most of the time strong; sometimes they’d be weak, but most of the time, strong branches), and we needed to support them. We needed to help the Division achieve these big Divisional initiatives for where we thought we sat in the whole enterprise of cancer, and that included the Cohort Consortium. Then I think the other thing we wanted to do was, just for the individual scientists, do what we could to nurture their careers. Make sure to help them strike the balance between novelty and [being immediately] useful, all of the things that you have to do in managing a scientific organization.
That’s what we had in mind. We weren’t young, (laughs) I have to say, when we started the program. We had already worked together, Bob and I, for 20 years. So that was our sense of what the program was about. We had a lot of strong branches. We had a lot of work to do at the Divisional level together. We had our own philosophy of what the—philosophies, I should say, not necessarily identical—of what the individual investigators need. Jennifer, have I forgotten something? Holly, go ahead.
JL: I was just putting in the chat—
PH: I can’t read the chat and talk. Like I can’t chew gum and walk at the same time. (laughs) I’m sorry.
JL: No, Holly can read the chat. I was going to say Office of Education. So, you touched on training.
HWT: [Reading the Zoom chat] Jennifer, you mentioned the Office of Education and the Fellowship Training Program. Yes.
PH: Okay. I’ll go there. That’s an offshoot of “treat your scientist well.” Okay. So, I could [compress] what I just said about what Bob and I wanted to do. We wanted to do three things. We wanted to help the Director. That would be all of the things like the Cohort Consortium and Deepwater Horizon, and every time we were used as part of senior staff to solve larger problems, either across the division or outside the walls. So, we wanted to do that. We wanted to make our branches strong, and we wanted to make our individual scientists strong. So, I could reduce it down to those three.
In the individual scientists’ domain, we had had a casual approach to recruitment and training. [That] is how I was hired. You know, it worked well. We got a lot of smart people, but it was not consistent, and I think at that time I felt there was a lot of dissatisfaction, unnecessary dissatisfaction, in the fellowship ranks and in the early tenure track. And so, I started a brown bag lunch program, and, I don’t know, it [recruitment and training] wasn’t equitable. Oh, yeah, well, that’s true, too.
JL: By formalizing the training program.
PH: Yeah, exactly. The evolution was to start with a brown bag [lunch]. I think in general when you encounter these problems, you have to go down both a formal and an informal track. The informal track is partly to explore what’s really wrong, and what will work. So that was what I started, very early, in the mid-1990s. I started a brown bag lunch. Just so that the fellows could talk to one another, and I could tell them stuff.
The formal track was to figure out what would we do that might look not quite like the one fancy formal training program that our sister division [Division of Cancer Prevention] ran, but maybe a little bit more like that. It had structured intake. Everybody applied at the same time. Then you got training together. There were routine check-ins. At the same time, the NIH as a whole and NCI in particular had discovered dissatisfaction among the fellows. They were also interested in training.
I like to take on the problem—as you can tell—when it’s ripe, when it’s still early, but I think it’s tractable, and that’s where it was in the mid-1990s. We were growing. There were a lot of fellows. There were a lot of young tenure-track investigators. Problems were arising. There were equity issues, too, but mostly it was unnecessary [dissatisfaction]. We had a lot of strength in the Division that wasn’t necessarily getting funneled to the people earlier in their training. It had been funneled to me both because [the process was] idiosyncratic and also because [the Division] was small. When the Division grew, it had to find some ways to manage these. So, I spearheaded developing something that then became the Office of Education. Joe used to tease me and say, “Do you want to be Dean?” I did not want to be Dean. (laughs) But I did want to see a good solid training program that would make sure that the fellows and the junior faculty—the fellows, in particular—would get that sort of wonderful, rich experience that I had had.
HWT: We’ve talked about the mid-1990s, but how did your goals evolve in terms of being Deputy Director of that program?
PH: You know, they evolved because I was incredibly lucky. And together, Bob and I enjoyed, looking back on it, a lot of successes. So, you know, shadows and failures, too. But just think of the last few things we have said. We did strengthen the branches. We did make the fellowship experience stronger. But we started informal. We got formal. I think we evolved. I evolved, in understanding as I went, also how to interact with Bob in an official way. We had worked together for so long we could finish each other’s sentences, but that wasn’t really the point. The point is when you have a Director and a Deputy Director, you really want them to do different things. [You want] to keep some separation in what they pay attention to, [in order to] strengthen the leadership. I think our evolution was—(laughs) to some extent, as things succeeded, as we launched studies or saw people launch studies that went very well, we thought, “Oh my goodness, we’re going to need some more staff. Oh my goodness, we’re going to need some more funding.”
Then there was another phase of learning how to manage managers, I guess you could say. And that happens in every organization. It certainly happened to us. I think some of our success in the “consortial” world, which did mean going over fences (I like to say invited, not crashing them), actually widened the sphere that we had to work in. And we did find ourselves quite often being asked to help structure a solution to a problem, to go to another place where there was a little conflict on the ground.
Here I would say our goals evolved in the following way. By realizing, by analyzing, what was working for us in DCEG, we were able to identify, again, a toolkit. [We would] identify the management tools, or the policy and management tools, that would be widely applicable. When asked, [we would] take them to other places. That’s how I came to be on Deepwater Horizon. That was just commonly the basis upon which somebody in senior leadership would come to Joe and say, “Could one of your people,” meaning Peggy [Tucker] or Sheila [Zahm] or Bob [Hoover] or me in my day (in my day that was obviously the Branch Chiefs, too). But particularly [they asked] could you send Bob, Trisha, Sheila, or Peggy, because I need somebody to help with something where the science and the policy are rubbing against each other or there’s some conflict going on.
That evolution, I would say, was obviously gradual.
Bob had more experience than I, so this may have happened sooner for him, but I think it was in the last ten years [of my career] when I understood [that] when you’re called into that sort of a situation, there are some things that you can just always do. You can always start with asking, “What do we share? What is our foundational understanding of what we’re trying to do here? What’s the big picture?” That usually is the basis for saying, we are all fundamentally on the same side. This is our broad goal. We’re sort of together. Then I found, and only then, it was useful almost always to say not “Who’s in charge?”, which immediately gets people’s fur up, but, “Who’s in charge of what?” Then, with a little bit of work, you can kind of figure out, okay, collectively people will say person A is handling this, or group A, or organization A, and like that. And then usually, but only then, you can say, “Well, how about just a little bit of structure? Don’t think this is too bureaucratic, but how about if we—” or sometimes, “How about if you have a timeline?” I can’t tell you how often I’ve said to people, “You know, time is actually measured in units. Months and years.” [Make] a timeline. “How about if we say we’ll check in at these appointed times? How about if we say the oversight will be in the form of a routine check, not ‘I’ll call you if I have a problem.’” Maybe every month. How about if we say the oversight will be quantitative? Instead of saying, “I’ll get a lot done,” how about if we can measure that, too: of the 3,000 cases, how many have been interviewed?
So, I think our evolution in management went in all these domains—my evolution in management, I would say, and I think it went in tandem with Bob’s. It’s not identical. He’s got a different temperament and different personality, for sure. But in those ways, I think those were the principles that we came to see our own experience within the Division had taught us would be useful elsewhere. There are shadows. There are things that we never got right, but I don’t like looking back on—
HWT: Sorry. Go ahead?
PH: I don’t like thinking about the shadows so much.
HWT: Ironically, I was about to ask you about those.
PH: Okay. Go ahead.
HWT: Well, in the sense of learning. Because we have focused a lot on accomplishments and on successes, but setbacks are super interesting because people learn from those.
PH: Yes.
HWT: Are there any that come to mind that you want to mention?
PH: Sort of in the, do you mean sort of personally? Or kind of in my science? Or in the sort of management work?
HWT: It could be any of them. But I was really specifically talking about—I mean, it could be personally.
PH: Okay. Let me just think for a moment. Okay. I already told you [about] a setback in my personal training, which was to think, “Oh my gosh, Harvard’s not the place for me.” And I had to leave and do some work and come back, and it was perfect. That setback taught me [to] go sideways, and sometimes you return. You return with a new understanding. I think I had some setbacks in collaboration with some of my very specific colleagues. And one in particular was very talented, but [had] a fundamentally different philosophy about sharing, and I spent too much time trying to persuade the colleague that my philosophy is right, which, you know, mostly doesn’t work. It doesn’t work in your family, and it doesn’t work in the office. And not enough time just saying to myself, “Okay, this one is a take-it-or-leave-it.” If you cannot come to agreement on how you’re going to do this thing, you need to either move on, sort of cede the field, let him take it—it was almost always a him—let him do it. Or decide you’re going to get in and really, really try to make things go the way that you think they ought to go. So, there were some setbacks, which—and here Bob and I have a slightly different philosophy, I would say—there were some setbacks that I put in the category of “scientists collaborating together necessarily bump heads on occasion.” I just don’t think I can; I think my lesson was really just about sort of the work environment. I don’t think it was very particular to epidemiology. I think it was more just growing and learning.
Oh, I will say this. We lived together in DCEG for a very long time. It was a very unusual place for having so many colleagues with whom I worked for 40 years, 20 years, ten years. So that introduces a familial quality to some of the relationships. That actually can be beneficial, but it also can be a little bit confusing. I think some of my setbacks were wasting time trying to change other people. (laughs) Okay, that’s at the sort of personal level.
In my studies, yeah, I had a few setbacks when I made the NHL study a little too complicated. That probably cost us about six months. PanScan was a good example. I had tried to get something going [in pancreatic cancer] a little earlier. But the setback I think ultimately was good because, in fact, it really wasn’t ripe until we could offer the world something really terrific. Yes, I was disappointed when it was proposed, and it didn’t go. I proposed a study right after Washington Ashkenazi, and our internal review panel, which was called the senior advisory group, said, “Uh uh.” And I was disappointed, but in retrospect, [it was the] same thing. It wasn’t time. That’s what comes to mind.
HWT: I just want to also ask you if you could focus on mentoring for a moment and the role of mentoring on science in general, and its importance to you. I know when we previously talked, you talked at some length about Bob Hoover and Joe Fraumeni. But also of course about your cousin who played a role in your career path. But just more generally, and I know that you have a reputation as a mentor at NCI, quote-unquote, “that is simply legendary.” So obviously it’s important to you. You won an award. How can you describe your own role as a mentor and also the role of mentors in general in science?
PH: At the general level, I would say that when I think about mentoring, I always think about the Oxford Cleric in the Canterbury Tales. “Gladly would he learn and gladly teach.” And from the time I was in high school, I thought the thing about mentoring that works is [that] you learn and you teach. That is, you’re a better mentor if you spend some time taking the posture that the person you’re mentoring is going to teach you something. And with the people who were great mentors for me (that was in school and also at the office), I was lucky. I’ve had a lot of good mentors. Part of what made them good mentors is that they would spend, let’s see, most of the time listening to what I was wrestling with. Really, really trying to listen to what I was going to say. Let’s say, three parts that. And then two parts just giving me general information along the lines of, “Well, people who come with a master’s degree do this. Well, if you go here, this happens. Well, five out of seven people who I’ve known have done this. Well, so and so did that.” Just information. “If you submitted again, they might take the paper. If you change to another topic, it might go. If you just do a little study now and a big study later.” Just feed information. So, two parts that.
And I think one part advice. When I have done good mentoring, it has been that sort of ratio. It’s sort of three: listen, two: give back information, and one: advice, because usually people don’t want that [advice]. If they do want advice, if it’s down to, “Okay, tell me what you think,” you still have to bring it back to their foundational principles. So, if you say, “I know that to you, this, this and this is terribly important. Does that help you with your question, your problem, your choice, whatever?” And very often it does. Sometimes it doesn’t. But then sometimes you just say, “Okay. From everything you’ve told me, and knowing how much you value this, this or this, I would advise you to do so and so.” So that’s, I think, the generic template for mentoring. I don’t think it’s always so easy to do, but that’s what my mentors did for me, and it’s what I tried to do for the people that I mentored.
I have to say, okay, along the shadows line. One of the things my very favorite mentee said to me was, “You know, you do actually have a little reputation for choosing the people to mentor who are already smart and know where they’re going.” That was probably true. “So you have a little reputation for favoritism.” She told me that ten years before my career ended, and I listened to her. With the next five people who came in my office, people I thought were not necessarily the ones I would have chosen as my very favorite acolytes, I thought, oh, yay, let’s do this. (laughs) That was actually a great experience. Out of the wisdom of the young. Gladly learn and gladly teach. She taught me. She said, “You’ve got to do this for the ones you don’t automatically like.” She was right. Probably not the note to end on.
HWT: Just one quick follow-up question about that. What made it a great experience? Why was she right?
PH: By listening to people who really do look at the world differently, and maybe are not the absolute sharpest knife in the drawer, you really do learn; you learn more about how the place works, how people work. And you learn to be a little humble. Because if you only talk to people whose sort of values and practices line up with your own, you get kind of narrow. I found it broadening. I found it humbling, and ultimately, I found it satisfying when they would come back and say to me, “You know, I did try that thing that you suggested.” Or sometimes they would say, “I discovered,” (laughs) and then they would discover the thing that I had said “go do this” and it worked. So that was even more gratifying, when it came from the ones that I was not ordinarily going to say, “Oh, let’s figure out how to do this together.” That was even more gratifying. But I really did. I learned from them. I was humbled by them. And in the end, it was really gratifying. I will say that was the last ten—
HWT: So what advice would you give to encourage young scientists today?
PH: Sorry?
HWT: What advice would you give to encourage young scientists continuing to pursue their goals or to seek out necessary resources even in spite of setbacks or barriers?
PH: Oh. [pause] I think I would say basically that—sorry, Holly, I’m trying to think of how to answer it more, sort of more, in a more focused way. Okay. First of all, I would try, if I really were mentoring somebody, I really would try to follow the advice I said and listen to what they are really asking. What’s really on their mind? My advice generically would be to the person, “Listen to yourself.” See if you can write down and figure out what is the barrier, and where you are trying to go. What’s your wish? For information, [it would be] the sort of three cups of listening and learning: what are you trying to do, what’s the problem here. Two cups of “let me give you some information,” and then a cup of advice. So, in the two cups of information, I would say there’s an awful lot of ways to have a satisfying scientific career. If you truly are unhappy and banging your head against something, it may not be a fit. Really know that there are a lot more choices. There’s a wide, wide world of science out there. That’s information.
I would say life is long. My own course of zigzagging is one path, but other people have straight paths. There are a lot of ways to get from here to there. For information, I would say, “I don’t have all the information. Let’s talk about who else has been where you’ve been or has been where you want to go. Let’s see if we can’t find them, collect some data, do a survey. See if a survey has been done. You do some interviews. Let’s be about data collection.”
I would try to say, “I have known people who want to do what you want to do, and this is what they did.” I would try to say, “I have known people who came from where you came from,” or [that] I didn’t, and this is what they did. I would try to think of as many ways [as I could] in that box of “Here’s some information I have.”
Then sparingly would I go to the last part, which is sort of genuine advice. For genuine advice, I think, no, I couldn’t. It would be back to cheerleading. I would say, “If what turns you on, what makes you happy to get up in the morning is the kind of thing I’ve described—‘We discovered that vacuum cleaner bags collect dust, which is the objective way to find out whether you are tracking pesticides into your house and ultimately the best way to find out whether the epidemic of non-Hodgkin’s lymphoma is being caused by us all being bathed in pesticides’—if that’s your idea of a fun day, you probably want to do science.” So that would be [my] advice.
JL: There is one thing you used to say I’ll remind you of.
PH: Okay. Go ahead.
JL: You’re definitely invoking, and I do this, too, Voltaire. “Don’t let the best be the enemy of the good.” You would also say that simple is sometimes better, right? Like don’t make it overly complex. Like go back to the two-by-two table.
PH: Yes.
JL: When things were getting very, very [complicated], like, the rule of three. Or, one thing that Trisha did was she was the primo coach for major talks. One of her gifts is structuring a scientific talk to deeply engage the audience, and she has not fully talked about this. I will say unique, because within DCEG not that many people have this skill and flex this muscle as much as Trisha did. And [part of it] is connecting it back to, like, why does this matter? She calls herself—called herself—“the people’s epidemiologist.” I haven’t heard you say that yet. She inspired that in a generation of epidemiologists that she trained and that have followed her.
The other thing she didn’t talk about is [her] emphasis on women and people from other underrepresented groups. Mostly women. But fostering mothers as scientists and things like that. But the rule of three, go for simple, don’t use a ten-dollar word when a five-cent word will do the job. Clear, efficient writing. I think those are the main things you’ve taught me.
PH: That’s really, that’s very gracious and generous of you. So yeah, in that spirit I would say you’re right, and I can tell Holly those slogans, too. (laughs) You’re right. So yes, very often at that level of mentoring. I think I was off on big picture, where shall I go with my career kind of mentoring. But yeah, mentoring, I have a task to do, absolutely.
I tell people the things that work for me. Look at a two-by-two table. Just start there. Do keep it simple. Most people can only remember three things. (laughs) So if you’re giving a talk, start with three. Remember the science, not yourself. Because a lot of times, younger scientists in particular do get up in their own heads. And they’re fine if you say, “This is not about you. You are the conduit of science. Just tell the science.” What else, when people are giving a talk?
HWT: I just want to add that in our last discussion, you did talk further, much further, both in terms of your own career and in general about women, and if there’s anything you wanted to add. I mean obviously in science, women and people of color are still underrepresented. So, if you want to, but we did speak about that.
PH: I think actually I want to. Jennifer, I can quickly tell you what I said, which is I think the women scientists benefit most from structural changes (let the clock pause when you’re on the tenure track) and less from “How to talk more like a man.” I think I would add to that, Holly, that for women especially, for underrepresented minorities especially, you want to use a mix of exactly these fundamentals, saying to them you can do the basics. You’re on top of the science. You need to present the science if you’re doing the analysis. Like think about a two-by-two table. Be clear. You give them the exact same message. Because the meta-message there is, you’re an equal among equals. You’re a worker among workers. You are as good as anybody. I am giving you the same tools that I’m going to give the next person who walks in. Because these are tools for everybody. This is what we do.
I think the extra part that you give the person who is not, I’m going to say, “mainstream,” is to say: I hear you, and sometimes this is a curse. But believe me, in science, it is also a blessing. You have been given the opportunity to practice independent thinking. Has it always been fun? Not so much. Will it take you longer to get to your next promotion? Probably. Will it take you longer to be immediately the person that people run up to in the hallway when you’re at professional meetings? Probably. But along with that hurdle and those difficulties, you have been given the opportunity to develop that muscle of independent thought probably more than many of your peers. So that’s my take on women and minorities. Do the structural stuff. Don’t waste your time on lots and lots of workshops about how to behave, and then with the individual, sort of stick to the basics but know your hurdle is also something that’s going to give you strength for the rest of your career.
HWT: My final question is broad, which is, as we conclude this interview, what would you like to add? If there’s anything that we haven’t discussed from either discussion, really, that you think is important to add here. And Jennifer as well.
PH: Looking at my little three-by-five cards and listening for Jennifer.
JL: You’re doing great. I’m letting you do it. (laughs)
PH: Okay.
JL: International, occupational, radiation. Do you want to talk about Elaine?
HWT: We did talk at great length about the international work last time.
JL: Oh, good. Okay.
HWT: You talked about radiation. We talked about the future of research at DCEG.
PH: Yeah, I think for the international, the one thing I wanted to add after we had spoken before about the Costa Rica trial and the Chernobyl, I guess. The division does a lot of international work, and I think it’s very important for the science. I also think it’s very important because we are citizens in an international community of scientists. I’ll say for myself I came somewhat later to do international work. Probably, if I could have done the balance of family raising and science a little differently, I might have gotten on a plane and gone to Europe sooner. But I think that when I did, I realized how much NIH really is, it’s not just a national treasure [but also] a very important institution for medical research in the world. And the NCI is an important institution for cancer research in the world as a whole. We are regarded with admiration generally, I would say, and even though I’m now long retired, it’s hard for me to imagine a better home than the one that I had in DCEG, in NCI, in NIH, in the Department of Health and Human Services. If it didn’t let me do everything I imagined when I was 18 years old that I might do, really discover a lot more causes of cancer and save a lot more lives, it let me do some, and it let me be part of an organization that really met my goals. Apply science and apply math, apply your logic, to solving problems that will help a lot of people. So, I feel very good about what I did, and I feel very hopeful. I think the world’s quite different. But I feel quite hopeful for the people who are working in DCEG now. I still think the NIH is a great place to work. I’m so happy that Jennifer and another entire cohort of really smart, really hardworking people are there. I’m gone, but the place is going on and it’s going strong. That’s kind of how it looks from the December 2022 perspective of somebody who started in September 1977. (laughs) And thank you, Holly, for being so patient.
HWT: Well, thank you. Thank you both. Jennifer, do you have anything to add right now?
JL: No. That was the perfect ending. Thank you.
[End Interview.]
[1] Jennifer Loukissas, Director of Communications at DCEG, was on the call.