Dr. William C. "Skip" Eastin Oral History 2004
Dr. William C. (Skip) Eastin
Interview date: January 21, 2004
Interviewer: Sara Shostak
Skip Eastin: [Beginning of interview]…because I think when the idea of using transgenic models developed, we thought it was going to go in one direction, and after looking at the results from the models, we weren’t sure they had as wide an application as we thought they would; whereas with toxicogenomics, I think everybody is just beginning to realize that the results from these studies will have an awful lot of applications in a lot of different fields, from basic biology to the disease area. So although I don’t want to discourage you from following up on the transgenic models -- that’s interesting too -- we in the NTP are not pursuing these models as adjuncts to the 2-year rodent studies. We are doing genetically manipulated models in special cases. But we are beginning to look into including toxicogenomics as a part of the toxicity studies to determine if these data will help explain mechanisms. Anyway, I can answer questions that you have about the NTP transgenic models project.
Sara Shostak: Okay. Well, could you start with telling me just a little bit about when you came to NIEHS and what lab you came to work with?
Eastin: Actually, I was hired to work in the National Toxicology Program, which is headquartered at the National Institute of Environmental Health Sciences (NIEHS) in Research Triangle Park, NC. I came to NIEHS in 1981, and at that time their new building was not completed and staff were spread out all over the Research Triangle Park. I came into the Program as one of the research staff, and although I did some laboratory work on various projects while I was here, most of what I’ve done has been to design, monitor, and report NTP toxicology and carcinogenesis studies. Probably 15 years ago, I took the lead for all the NTP dermal exposure studies, and at that time we had an interest in initiation/promotion, so I designed several mouse model topical studies. Because of this interest I was asked to be the liaison with other agencies for the topical applications. When the studies with transgenic models were started, one model, the Tg.AC, used skin as the route of application and as the target site, and I was asked if I wanted to participate. So I designed the studies that were done at the contract laboratories. The advantage of using that option, of course, is that contract labs have many more animal rooms and many more technicians, so we could do larger studies than could be done in-house. And they had the capabilities to do full pathology evaluations, which were difficult to have done in-house. The NTP involvement with genetically manipulated mice was sort of the next step after Ray [Tennant]’s group presented the basic experimental work to determine what the advantages and disadvantages would be of using the Tg.AC and p53 knockout mice as models to determine carcinogenic potential. Let me back up a little. Ray [Tennant] gave a talk -- I don’t remember the year now -- in which he gave an introduction to the Tg.AC and p53 models that they were looking at. I could see an application for the Tg.AC as a potential screen for untested agents.
Even though this model wouldn’t tell us anything specific about a mechanism, it seemed to identify chemicals that had the potential to cause cancer. So even though we looked at skin papillomas in the Tg.AC model, the same set of chemicals that caused skin papillomas in this model were identified as carcinogens in the NTP 2-year rodent bioassay. I got into the arena, I guess, at the point when we were trying to learn something about what the model limitations were, so for our initial studies we chose a set of chemicals that we knew were both mutagenic (Salmonella tests) and carcinogenic, some that were not mutagenic but carcinogenic, and some that were neither mutagenic nor carcinogenic. We were able to do some short-term testing, to obtain some preliminary data, just to see what the limitations were. For example, was this model going to tell us that everything’s a carcinogen or nothing’s a carcinogen, or did the results tie back at all to what we had already learned about a chemical’s carcinogenic potential from the two-year bioassays? So that was the initial set of studies. Then the second set of studies was the NTP’s participation in the ILSI project, where I didn’t have as much control over what chemicals were being chosen for evaluation because it was more of a pharmaceutical exercise. In these studies several different mouse models were used and all participants tried to follow the same protocols and look at the same set of agents and tissues, and the results from similar models were reported as a group effort. The objective of these studies with different models and from different laboratories was to see what the models were telling us, i.e., which ones were giving us the best information. I think I sent you those published papers that came out of that exercise.
Shostak: Yes, I have them.
Eastin: So after that and after the presentation of all of that information, I sort of got out of the picture.
Shostak: Why did you get out?
Eastin: Well, because I have other chemicals that I have to follow as a staff scientist in the NTP, and another major project is moving all of the NTP data into a format searchable by the public using the Web. The goal is to make all of the NTP study data accessible to the world. That effort is just occupying all my time now.
Shostak: So, about when did you begin working with these models, and when did you stop?
Eastin: I remember the date of the first publication was 1998, but it was probably more like 1995 when I began developing the protocols.
Eastin: And then after we presented the information and published the second paper from the project we participated in for ILSI, which came out in 2001, I did not do any more transgenic model studies.
Shostak: Okay. The way that you just described your involvement with this research emphasized the site-specific, skin-specific nature of the Tg.AC model. Is that what most interested you in working with the model?
Eastin: That was my primary interest. But the National Toxicology Program was interested in what the limitations were of all of the models that we were looking at. I was primarily interested in whether Tg.AC could serve as a model, as a first test of chemicals that we knew nothing about, to see if they really would have the potential to cause tumors. If they indicate a potential for carcinogenic activity, maybe we could do some short-term studies to learn something about the dose response and toxicity, and wouldn’t have to do any more. Hopefully, with that kind of information about potential for a carcinogenic effect, the regulators could require testing from industry. If the model didn’t indicate a potential for carcinogenicity, then we would probably have to do a long-term study to confirm that was true.
Shostak: Was there anything about transgenic models per se that were interesting or compelling to you as a scientist?
Eastin: Well, the fact that it might be a model that would be a good screen to identify potential carcinogens was it. When chemicals come into the program for testing, it’s usually because nobody knows anything about the toxicity and/or there is widespread exposure, but there are a number of reasons for NTP to initiate studies. Whatever the reason for study, we do short-term and mid-term studies, usually two-week and 90-day studies, and then two-year studies to look at the toxicity. The whole process from start to finish takes more than 5 years. But here was a genetically manipulated model (Tg.AC) that we could use and potentially get the same information, and instead of doing a two-year study, we’d do a six-month study. If this model were proven to identify potential carcinogens, we could save time and money, reduce the number of animals, and evaluate more test agents in the time it now takes us to conduct the standard bioassays. Instead of using 50 animals per group in the 2-year studies, we could use 15 or 20, as we did in the Tg.AC studies. Instead of looking at all of the tissues as we do in the 2-year studies, we could look at only one tissue, the skin. The pathological evaluation of thousands of tissues is very time consuming and costly. Another advantage of the Tg.AC skin model is that we would get an idea of when a lesion developed and when it progressed, because some of the lesions went from papillomas to carcinomas. The Tg.AC model allows you to monitor tumor development without killing the animals. Instead of doing three dose groups and controls as we have done in the evaluation studies, we could do five and controls and still not use as many animals. Questions that come up: would we need to use both rats and mice? Tg.AC is a mouse model and apparently detects potential carcinogens that were also seen in the 2-year rat studies, so now we’re talking about the possibility of using only one species. Do we need to use males and females?
Maybe not, because they seem to respond pretty much the same. So instead of using two sexes, you might be able to use one sex. I created some tables in our clinical observations tracking system that give the skin papilloma status of animals in the study. Using this option I could watch the chemical effect as the Tg.AC study was going on.
Shostak: Okay. One of the things that I’m interested in understanding better is how technologies or research questions move back and forth between NIEHS and NTP. Can you share your perspective on that?
Eastin: NTP is a testing program, and the objective is to be able to study various test agents to determine what the toxicity is, what the dose response would be for that toxicity, and, with long-term exposure, what the potential for carcinogenesis would be. The NTP studies do not usually look for individual mechanisms of action, which are questions addressed on the research side. On the research side some mechanistic studies are done by people involved with the NTP, but most of the NIEHS scientists do the work in this area. However, we will share the animals that we’re using in the testing program with other staff scientists, especially if someone has an interest in what the mechanisms would be for the agent we are testing. For example, we have studied a number of estrogenic-type compounds, and they are also being tested in other places, so we share knowledge with the researchers doing that work. There is some more basic research done on the test agents. For many studies that we do, we collect data on absorption, disposition, and excretion to accompany the toxicity studies. If someone is interested in a particular mechanism, we can always build special tests into our studies. One example would be the interaction going on now between the NTP and the National Center for Toxicogenomics (NCT). When we complete our studies, we can provide tissues to the NCT and they may do the microarray analysis. In this way we can collaborate. Currently there’s an enormous project going on between NTP and NCT with acetaminophen studies in which we’re learning a lot about what happens to the genome at various times after exposure. With microarray data we’re learning about animal biology, because this information from the unexposed control groups is acquired as they age.
Even if you didn’t look at any of the test animals, you have the control groups, so you can say that at four days of age the gene pattern looks like this; and here’s another view at two weeks and six weeks and eight weeks, and you can see all the changes occurring in the gene activity just by aging. The more we can learn, the better we can use the information. In other cooperative efforts we have shared information and animals with groups outside of NIEHS. For example, we did two different studies in which we exposed rodents to magnetic fields in one and to ozone in the other. In both studies we built in ways to share test animals with, I think, CIIT [Chemical Industry Institute of Toxicology] and some NIEHS grantees at universities. We will sometimes get a nomination from the NIEHS research side where they are working with a chemical in the laboratory and they want to know something about the toxicity, and the easiest way to do that would be to recommend that the NTP do some testing. That’s less frequent, because most of the time they’re using chemicals as a tool to learn something about a mechanism of an agent with a known biological effect, and the toxicity may already be known.
Shostak: Okay. Could you describe for me the process of working with the contract labs to test the Tg.AC model?
Eastin: The Tg.AC or any model. Because we work with a pretty strict statement of work, laboratories are required to follow a lot of specifications that we have established to provide consistency in testing. These specifications include the feed type, how you cage the animals, what the temperature should be, what the air changes should be, humidity -- all of this is controlled. How they collect the data, how long they run the studies, how many animals per group, how old the animals are at the start of the study. It’s very, very repeatable from study to study. So a study at contract lab A and at contract lab B, done for the NTP, would be done the same way and should produce the same results.
Shostak: Are these good laboratory practices, GLPs?
Eastin: Yeah, they are.
Eastin: But the strength of our program is that all of the studies are done in an identical manner. We look at the same tissues; we have the same people doing it. The testing labs compete for contracts to conduct the NTP studies, and then we monitor them very closely. We have assigned project officers that will go to the laboratories to monitor everything that’s going on to make sure they’re following the specifications. All of the information we get becomes public as soon as it’s been reviewed, so you can go online and view the data and look at it closely.
Eastin: I’m not sure what else I could tell you about the program’s use of transgenic models.
Shostak: Along that same line of thought, can you tell me how the NTP has changed during your tenure here?
Eastin: The Program has continually updated and modified the way in which we do our testing to maintain the state of the art in this field. There were six or seven testing laboratories that we were using at the time I came into the Program that are no longer applying for our studies. Since I joined the Program, we have added more endpoints and special studies to our protocols to more fully define the toxicity of the test agent. We have also continued to refine the specifications to make the conduct of the studies as consistent as possible across the testing laboratories. Some of the testing labs now can’t meet the requirements or do not want to add the additional endpoints. The scrutiny for the NTP studies has always been very tight. To help with public awareness, as soon as the study data are available we now put them out on our website for the public to see, so we get a lot of feedback from the outside.
Shostak: Who is the scrutiny from?
Eastin: It seems like whoever looks at the Web site. All of the pathology data are placed on the NTP website once the evaluation has been reviewed in-house by the QA system and any discrepancies resolved by a pathology working group. We even invite companies to come in and sit on that review process. So the study data are now available as soon as we finish our in-house review, whereas before, that information was not available until the report was written. With a lot of chemicals going through the testing program, those reports could be delayed two or three years. So that part of it’s changed. During the first years I was here, outside interests really attacked the conduct of the studies. Now that happens infrequently. I believe that is because of the continual improvements we have made to the way in which we are conducting the studies. Now the criticism is more like, your decimal is in the wrong place, or what happened to the body weight recorded for animal number 24 on day 65. So, to me, that says that the studies are becoming a lot better, and our ability to gather information has become a lot better. Another major improvement is that in the contract laboratories conducting the studies, there’s very little written on paper. Everything is electronically recorded. When the technicians go into an animal room, they enter information into a computer and it’s sent here to NIEHS, so we have it stored on-site. The project officers can use their desktop computers to retrieve that information as soon as it is sent. Electronically capturing the data also reduces the chance for errors in data transcription, which to me is a big improvement.
Shostak: The NTP, since its inception, has had an interest in developing alternatives to the two-year rodent cancer bioassay, and certainly transgenics were part of that larger quest. I also got the sense from the reading that I’ve done that there still is no real alternative to that assay. Could you comment on that and help me understand it?
Eastin: To the . . .
Shostak: Two-year . . .
Eastin: I don’t think we’ll be able to abandon the two-year studies until we know that regulatory agencies will accept something else. With the transgenic models, FDA said they would, for pharmaceuticals, accept the input from a transgenic model as one of the evaluations, together with a rat cancer study. In fact, most of the regulators in the world still want a rat toxicity study. But the second species was not required by Europeans and some of the others involved with protecting human health. If you talk to other Federal regulators that are involved with establishing limits for protecting human health, they still say that “the two-year bioassay is what we must use to establish guidelines to protect humans.” To just switch to something like a transgenic model in lieu of a rodent toxicity study, you have to do a lot of background work to verify that it’s going to give you the right answers. In fact, that was one of the ILSI objectives: to have industry and Government test the transgenic and other models. In those transgenic comparisons, sometimes the models worked well and sometimes they didn’t. I think part of the problem in the ILSI project was that we were testing all pharmaceuticals, and the ones selected for testing were either potent and well-known carcinogens or known non-carcinogens, so I’m not sure the question about the use of these models as substitutes for a bioassay was clearly addressed. Have you talked to Bill Stokes?
Shostak: Not yet.
Eastin: Is he on your list?
Shostak: He’s on my list.
Eastin: I’m not sure how much they’ve looked at the transgenic models. I think they’ve been looking at other kinds of assay systems. But since I’ve been here, the other screening test type that the Program always looked at was Salmonella results as a first pass on chemical testing for toxicity.
Shostak: The Ames test?
Eastin: Well, mostly the Ames test.
Eastin: We’ve done others, too. The micronucleus assay in blood seems to be a good test for indicating a potential for toxicity; that’s done mostly in mice. There are other test systems that Dr. Stokes and the ICCVAM group are continuing to look at. But I don’t think any of those are being reviewed as a replacement for the current rodent bioassay.
I do have hopes for the toxicogenomics projects underway. What those investigations are going to have to show is that genetic changes in the short time that the animals are exposed produce a long-lasting effect. You said you were going to meet with Dr. Tennant, and I am sure he will cover this new area of investigation.
Shostak: You suggested when we first started talking that there are ways in which toxicogenomics might be a better kind of case study for the development of a new testing system. Could you say more about why that’s likely to be the case, why you think that is likely to be the case?
Eastin: What we have measured in conducting the tests leading up to and including the two-year bioassay is overt toxicity. In the shorter-term toxicity studies we look for the dose levels animals can tolerate, to be able to expose them for a little longer period of time. I think what we’re realizing from the toxicogenomics studies is that genetic changes can occur after one or two exposures, but they’re not manifest as a clinical change for weeks. So you may be able to detect a potential toxic effect in two days or three days that you wouldn’t be able to see unless you exposed the animals for 13 weeks, or two years with the standard bioassay. What you have to know in toxicogenomic studies is what’s a permanent, irreversible change and what’s not, and I think that’s what the NCT and others in this area are dealing with now. So, to me, toxicogenomics is at the same point that Tg.AC and p53 were several years ago, when those models were introduced as potential models for evaluations of chemical toxicity and we were asked to define their limits.
Shostak: The technology wasn’t there. You mentioned that the ILSI group was looking primarily at pharmaceuticals. My understanding is that there was never a comparable group for environmental chemicals. Is that correct, and if so, why?
Eastin: ILSI is supported by a pharmaceutical group, and they, the pharmaceutical group, were interested in the short-term, low-cost, quick study that FDA would accept, so that was why the comparison project was put together. I can’t remember, but I think EPA was involved in that evaluation.
Shostak: And was there ever any indication from the EPA that they would accept transgenic models?
Eastin: I’m not sure that any regulatory agency said they would accept transgenic models in place of a two-year bioassay or in conjunction with a 2-year bioassay conducted in one rodent species. I think they were all willing to look at it.
Shostak: Are there ways in which science progresses or moves forward differently due to the interactions between NIEHS, NTP, and the regulatory agencies than it would, say, in a research laboratory at a university?
Eastin: Well, with the capabilities that the NTP has, we can test a chemical with two species, three dose groups and controls, each with 50 animals, so we’re talking about 600 animals all at once, and a contract laboratory can do four or five of those. A university, because of space and other limitations, has trouble doing a lot of animals. In addition, we are able to require that the testing labs follow the specifications that we defined.
Shostak: This is a bit of a digression, but I’m struck by the fact that the Tg.AC model was of interest despite the fact that it didn’t come with any evident information about what the mechanism for the response is. It seems like much of environmental health science is moving towards mechanistic-based understandings in studying pathways, and I’m wondering what the relationship of the NTP is to those kinds of transformations in the field more generally.
Eastin: With transgenic models?
Shostak: Not specifically.
Eastin: Well, but maybe that’s a good example. We tested phenolphthalein. You remember that one?
Shostak: I do. I’m talking to June tomorrow, too.
Eastin: The Tg.AC model was hypothesized to be able to detect potential carcinogens. That fact made this model worth evaluating even if we did not know the mechanism. The NTP tested phenolphthalein and the results were positive. Because this agent had been in commerce for so long, the regulators wanted additional supporting information. FDA contacted the NTP and asked that phenolphthalein be tested in our transgenic models, and they were also positive. We have never been reluctant to add something that would give us that kind of supporting information. I think since I’ve been here, the NTP has always been very adaptable, although they stick to the standard specifications that have to be followed, and then they’ll add additional endpoints or special tests to learn as much about toxicity as they can. You know, the trend is changing. Many of the agencies that nominate agents to be studied are beginning to ask whether the program can give more of this additional information. I don’t know if you saw Chris Portier’s request for guidance for our future. It’s a vision statement and addresses many of the options that might be considered for the NTP to pursue.
Shostak: I know there’s a meeting on the future of NTP at the end of the month.
Eastin: Well, the Vision statement has been out for comment since the last peer review. NTP is asking questions of reviewers: what do you think we (NTP) should be doing in the next five, 10 years; where should we be going with our study plans; what should we be providing back to people who ask for information? Where do you want us to change? What else do you want us to do? I think the transgenic models were sort of an outcome of that idea when we started evaluating these strains several years ago, and they continue to be considered. So I know they’re amenable to changes. But as I said, until we can convince regulatory agencies, or they are willing to accept alternatives, we’re still going to have to provide the information that we do with the two-year study.
Shostak: Related to what you were just saying, what do you see as the future of this transgenic research within the NTP or within the field more generally?
Eastin: Well, I think the models will always be good for understanding processes, but the focus for a specific process will determine the model. So if you know that the p53 is going to be involved, by looking at that model, you can understand what the relationship is of that particular gene to the affected system. So I think it’s more of a research tool.
Shostak: Is there anything about the evaluation of the models as bioassays that contributed to their utility as a tool? Were those two different lines of research with not a lot of implication for each other?
Eastin: For the use of it as a model for the program?
Shostak: Is there anything about the evaluation of the transgenic models for use in the NTP that has contributed back to their utility as research tools?
Eastin: Well, I think we did learn some of the limitations. For the set of chemicals that were tested, I think, we did for the most part get the outcome we expected. If we didn’t, we tried to address why. What was the restriction? Why didn’t we get what we thought we would get? Was there a different mechanism than we thought there would be? Was the p53 involved and that’s why we saw it, or was it not involved and that’s why we didn’t see the response we expected? I think that the Tg.AC lost favor because we couldn’t explain why you got or did not get a response. The mechanism involved in a response is not understood. So it’s difficult for regulators to accept your data -- regardless of whether it’s telling you it’s a good screen, or whether it’s telling you that this tested agent has the potential to be a carcinogen or not -- if you can’t explain why you got that answer. I don’t think that the researchers involved with the model evaluations were discouraged with the p53 knockout, though. I think that they could better understand why they were getting the results they were getting, and I think the research is still going on with that model. I think that as new models are developed, they will provide good tools for studying specific questions about the biological system. Since my involvement with transgenics has ended, I really can’t say how much research is being done at NIEHS with these models.
Shostak: Related, did the NTP do any evaluation of the H2 ras model?
Eastin: NTP didn’t specifically, but Bob Maronpot did collaborate with some Japanese investigators with that model.
Shostak: Right. I talked to him this morning. That’s part of why I was asking. Anything else you would add to the history of transgenic research at the NTP?
Eastin: Well, personally, I was disappointed to see that the Tg.AC didn’t go further than it did. I just think that the decision not to do further work with this model was due to the inability to identify a mechanism that was related to the non-altered biological system. I still thought it was a good tool for screening for potential carcinogenicity; short term, just a few animals, and it might give you some direction.
Shostak: All right. Thank you for talking.
END OF INTERVIEW