Dr. Ray Tennant Oral History 2003


Ray Tennant

November 13, 2003

It’s November 13th, and I’m interviewing Ray Tennant of the National Institute of Environmental Health Sciences (NIEHS).

 

Sara Shostak:              And you’re aware of the fact that I’m taping this conversation.

Ray Tennant:               Right.

Shostak:                      Thank you. I’d like to start by asking you to tell me when you came to the National Institute of Environmental Health Sciences and what you came here to do.

Tennant:                      It was in July of 1980, and I came from the Oak Ridge National Lab.  It was an improbable transition because I’d worked on virus genetics during my time at Oak Ridge. I was brought in here to head up the Cellular and Genetic Toxicology Branch, which was then becoming part of the new National Toxicology Program (NTP) that had just been established down here.

Shostak:                      So from the beginning, you were working both at NIEHS and NTP.

Tennant:                      Yes.

Shostak:                      What was the focus of your research at that time?


Tennant:                      Well, I maintained my focus in the virus field, so I was actually able to bring my lab here, a postdoc and technician and so on, and we were able to keep going with what we had carried on at Oak Ridge.  But the task of managing the Cellular and Genetic Toxicology Branch had to do with attempting to interface genetic toxicology with toxicology and the Carcinogenicity Testing Program; that’s what it was all about. I say it was an improbable selection because I had no background in that area.  I mean, my only background was that I’d come from Oak Ridge, where much of genetic toxicology sort of had its origins.  But I’d like to think maybe I was brought in for some degree of objectivity because I had no biases; I wasn’t selling any assays; I wasn’t really involved in the area.

Shostak:                      Can you help me understand, then, how that initial appointment and initial research evolved into that interest in transgenics?  What were the research projects that connect them?

Tennant:                      Well, the primary effort of the NTP at that time was these long-term carcinogenicity studies, and they were doing many of them per year.  This turned out to be a rich resource of information, and these are some of the most thoroughly studied chemicals. There have been a lot of things that have been studied for carcinogenicity, but many of them have been studied in sort of an ad hoc manner, whereas these agents were studied in a very consistent way.  So you could actually have some confidence that a substance that did not induce tumors when it was applied to animals virtually five days a week for 60 percent of their life span was very likely not carcinogenic.  Very few people would argue that this was not evidence of non-carcinogenicity.  It’s the best evidence that we have to date for a substance not being carcinogenic.  So it provided not just an identification of things that would cause tumors, but things that would not.  Also, if you really want to understand whether or not you have a surrogate system, you have to be able to detect those substances which are really positive and not detect those which are really not positive, so that sensitivity-specificity algorithm could finally be attacked.  It was only through the NTP database that this could be attacked.

So what we tried to do was to utilize the background of this solid carcinogenicity database, even though it was all hard copy, and to then construct genetic toxicology assays that were equally reliable; that is, those that would produce unambiguous positive and negative results to a good degree.  Well, nothing in biology is ever all unambiguous, so there’s always a gray area in both systems.  But we were able to develop some studies that were as unambiguous as we could make them, such that we could take data to statisticians and actually begin to get categorical answers about what we were accomplishing with the tests.  And so that’s what led us to try to really evaluate the best genetic toxicity assays.

As I mentioned yesterday, in an effort to try to resolve the Ames imperative, people had kept evolving more and more tests because the initial test just didn’t quite cut it.  They missed a lot of carcinogens.  So if carcinogens are mutagens, then we need better mutagenicity assays, so let’s try this or let’s try that.  But they always kept missing the mark.  There was always a residue of chemicals that slipped through.  So this effort to proliferate tests could literally go on forever and do so without any real direction.  Tests were popping up all over the place.  Anybody who had a bright idea would try to create a new model system.

So we just simply tried to bring objectivity to the process and decide which minimal tests we could use here to help triage the chemicals that are nominated to go into two-year assays, on the principle that an agent that is mutagenic would have a high probability of being carcinogenic.  So we did a very large study that involved carcinogenesis people, genetic toxicologists, statisticians, and so on, and compiled enough data to be able to derive some answers about what sorts of genetic toxicity tests are important and useful, and what we were missing.  It turns out that in fact you miss a large number of carcinogens.  The way it works out is, approximately 70 percent of everything that is a mutagen is also a carcinogen.

That means 30 percent of the things you might identify as a mutagen don’t cause tumors even when they’re put into animals for two years.  On the other hand, 50 percent of the agents that were tumorigenic weren’t mutagens.  They failed in this battery of well-defined genetic toxicity assays that we used. So this brought some focus to the field by highlighting the fact that mutagenicity was not the whole answer, that in fact there were substantial numbers of non-mutagenic carcinogens that we would have to go at identifying in different ways.

Shostak:                      This study led to the 1987 paper in Science?

Tennant:                      Yes, right. The other important thing is that cancer is an endpoint, but it occurs through a process, through processes, and the problem with in vitro systems is that we can sort of reproduce, quasi-reproduce the endpoint.  We can do things to cells that we say will transform them.  And if we take those cells and then put them into an animal, we can generally produce a tumor.  But we can’t study the process because the in vitro system is so different from the whole animal.  So it was our conviction that in order to understand how non-mutagenic carcinogens cause tumors, we would have to look at whole animals.

So, we began to look at other whole-animal model systems, to actually take one big step back from in vitro, and to see whether we could come up with some combination of in vivo models that would be informative. There had been a number of different assays that had been developed over the years by some very well-respected carcinogenesis researchers, but each had its own limitation in various ways.  Some had high backgrounds of tumor incidence, so that you really had to struggle when trying to determine what was a carcinogenic effect and what was just modulating the background tumors.  Or some had very complicated protocols that involved treating the animals with chemicals, followed by partial hepatectomy, and then later opening the animals up and counting foci in the liver.  These were very complicated protocols that were liver specific, and they wouldn’t tell you whether the chemical was potentially tumorigenic in breast tissue or in brain or other tissues.  Each had its own limitation.

So we were searching around for better methods, and this was at about the time that transgenic models were just being reported.  Some of the very early [transgenic] models used oncogenes; they linked them to very promiscuous promoter sequences, so the gene was highly penetrant and they would produce tumors, large numbers of tumors.  So we thought it might be useful to have essentially a genetically engineered animal to see whether this raised the level of sensitivity of the model in a more general way, since the transgene gets distributed to every cell in the body.  So we started doing some experiments, primarily with mammary tumor-based models.

Shostak:                      Those were the first models that you worked with?


Tennant:                      The first ones we worked with were the mammary tumor models: MMTV promoter-driven ras, myc, and neu genes.  There were three different transgenic lines. These also came out of the Leder lab. So we got breeders, bred them up, created little colonies, and then began some tumor studies.  Well, the transgenes were just too penetrant.  They really masked the tumorigenicity even of some known carcinogens, and so it was clear that this was not the very best way to go.

But right about that time, Larry Donehower and his collaborators had reported the knockout of p53.  There was a key paper[1] -- and I’m trying to think who the first author was now.  I’m sure it’s cited in one of our papers.  I just can’t remember.  But in that paper, they demonstrated that animals with the p53 knockouts, the heterozygotes, were sensitive to chemicals and to ionizing radiation.  So this looked like just about the perfect model.  The problem was it was responsive principally to mutagens.

We were still searching around for things that might work for non-mutagens when, in a conversation with [Phil] Leder, he said, “I’ve got another model that we’ve just created.  It’s kind of paradoxical.  But the interesting thing is that if you injure the animal, wound the skin, then it’ll develop papillomas.”  We said, “Okay, we’ll take a look at that.”  So, we brought both of these strains in and started working with them, and the more we worked with them, the more specificity we saw in their responses.  They just -- they looked better and better as we accumulated more data, until finally we said, “You know, this just might work.  You’ve got p53 that preferentially responds to mutagens, and we’ve got this Tg.AC [mouse] that responds to tumor promoters, and tumor promoters are almost always non-genotoxic carcinogens. Maybe in some combination, they can be used to triage chemicals coming in through the bioassay.”  That led to this ‘95 paper that was published in Environmental Health Perspectives.

So that basically is the path we took from genetic toxicology to transgenics.  And the people who can give you more details and tell a better story would be Jud Spalding or Stan Stasiewicz.  They can fill in more about that.

Shostak:                      Could you help me understand more about how the mice actually came to NIEHS?  What patenting or licensing issues, if any, were involved?  What enabled you to do this research?


Tennant:                      Well, with the p53, I mean, we basically initiated a collaboration with Larry Donehower.  We got breeding pairs to breed up some initial colonies.  See, the National Toxicology Program maintained a contract that bred animals for all these bioassays, and so we could actually get some animals bred on that contract.  Jud may help remember exactly what happened here, or Jef French might remember more precisely how we gained access.  But I know it was basically through collaborations with Larry.  And then I think they may have licensed those animals to Taconic.  I think somewhere along the line, Taconic made some sort of a decision to try to move more aggressively into developing lines of these transgenics or knockouts and so on, and we didn’t have to breed them ourselves anymore.  We could just simply buy them from Taconic, which is what we did for most of the time.  We just simply bought the animals that we needed.  So I think that was just a licensing agreement between Baylor and Taconic, if I’m not mistaken.  Donna Gulezian can give you all of this detail.  She’ll know all of it.

With the Tg.AC, Phil simply set up breeding pairs, and we bred up a colony.  But in the meantime, the Tg.AC was covered under the original Oncomouse patent.  This was a patent issued to the Leder Laboratory and Harvard University covering transgenic animals created with oncogenes that had any kind of a tumor phenotype. Subsequently, Harvard sold that patent, or licensed that patent, to DuPont.  At that time, DuPont was itself trying to get into the transgenic business, because DuPont also bought the patent for the Cre-lox mouse.  It’s a means of doing custom insertions.  This caused some controversy and involved the NIH, because there were some restrictions placed on actually being able to get those Cre-lox animals.  We never used them, so I never got directly involved in this, but a large number of academic investigators and also some at the NIH were very unhappy with what they considered to be the restrictions on research imposed by DuPont.  The NIH finally intervened under Harold Varmus and basically worked out an agreement with DuPont that allowed the free use of the Cre-lox model for nonprofit research.

Shostak:                      So, it sounds like you never had a problem getting access to the animals you needed to do your research?

Tennant:                      No. We had a royalty-free license directly with Harvard to use these animals only in research, and we would notify the Leder Lab any time we were providing them to another investigator for collaboration, and that’s how it worked over the years.  They subsequently sublicensed those mice to Taconic Farms, and Taconic now breeds the Tg.ACs and makes them available to other organizations, other researchers.

Shostak:                      So, have you been working with the NTP from the beginning?


Tennant:                      Well, when . . .  Let’s see.  I’m trying to think of the actual year now.  Perhaps 1990 was the year that the Institute underwent a reorganization.  Up to that point, let’s say from ‘80 to ‘90, I was head of the Genetic Toxicology Branch.  Then, when they reorganized, Dr. Olden’s idea was to try to integrate NTP more clearly with the Intramural Research Program.  I had maintained a research effort in my own lab for that decade, too, besides the gene tox work.  So, when they reorganized, they basically disbanded the Gene Tox Branch and created the Laboratory of Environmental Carcinogenesis and Mutagenesis, so that’s when the LECM came into existence, in about 1990.  Then I was no longer linked to NTP.  I was just directly part of the Intramural Program.  So while we were doing all of this transgenic work that ultimately was meant to impact on NTP, at that time I wasn’t really active in the NTP organization anymore.

Shostak:                      Did you have collaborators at NTP, or were you doing the work in the LECM?

Tennant:                      We were pretty much doing the work in LECM.  We were still conceptually developing what ultimately became the proposal to use these animals.  We were an intramural lab, and we had independent research programs, but we also had some translational work, I would call it, where we were encouraged to go ahead and work on developing some of these transgenics as potential alternatives.  So we did a little bit of both over that period of time.

Shostak:                      What was the research focus of the LECM?

Tennant:                      We still maintained an interest in mutagenesis, but expanded it specifically to carcinogenesis; that is, not just to study the induction of tumors but the processes by which tumors occur. So it basically dealt with the mechanisms by which environmental agents induce tumors, both by mutagenic and non-mutagenic mechanisms.


Shostak:                      When, then, did you begin working with the NTP?

Tennant:                      Well, we worked with them all along.  Even though I wasn’t administratively part of that, it was just a natural fit.  If you want these models to be useful as an alternative test, then you really need to work with people. We went through a period where the NTP had not yet shown much interest in these transgenic animals, but that’s when the interest began to develop on the outside.  We published in about ‘95, and it was just about that time, I think, that the International Conference on Harmonisation (ICH) process was going on in the pharmaceutical industry.  I knew nothing about all this.  But there was an impasse between the European, American, and Japanese contingents.  The Europeans fundamentally favored simply eliminating the mouse from the mouse-rat long-term carcinogenicity studies.  They felt the mouse contributed relatively little information for the cost.  The FDA looked at the problem differently and felt that it was still important to have a mouse component, because substances that produce tumors in multiple species are not species specific, and so you get more information.  The Japanese were, I think, somewhere in the middle of this.  In order to try to resolve that impasse, somehow someone suggested that they look into utilizing alternatives such as these transgenics.

Shostak:                      So, were you at ICH meetings?


Tennant:                      No, I wasn’t.  I only know this now from other people. Then, they were skeptical -- there was legitimate skepticism about the transgenic models, because when we published that paper in ‘95, we published it on the basis of just a couple of chemicals, a few.  So, there was legitimate criticism or concern that the chemicals were selected to produce a certain effect; that they weren’t drugs, they were potent chemicals or potent carcinogens.  For example, how do you know that these models are not supersensitive, that they’re not going to react inappropriately to certain things, and therefore cause us to lose a very important drug?  So, in order to deal with all of this skepticism, somehow or other the pharmaceutical people came together with ILSI [the International Life Sciences Institute].  That’s when they created this committee to study alternatives to the carcinogenicity assay, and specifically to assess the value of the transgenic mouse.

Shostak:                      This morning, Jef French said that both you and Ken Olden need to take credit for the Institute’s involvement with the ILSI initiative.  Can you explain to me how that happened?

Tennant:                      Well, the first time that I think I found out about it is that I was invited to one of the ILSI meetings as sort of a scientific advisor, because they wanted information and details about these animals and how to design the studies and those sorts of things.  I’d been invited to give some talks at some pharmaceutical companies. I know how this happened.  That’s where Ray Stoll comes in, because he was actively involved in that harmonization process.  He was also a member of this group -- what do they call it -- DRUSAFE; it’s a coalition of drug-safety experts that meet and talk about the science underlying what they do. Among other places, I was invited to Boehringer-Ingelheim Pharmaceuticals to give a seminar, and then I spent quite a bit of time talking to Ray Stoll. I’d say at first he was interested but skeptical.  But I think he was among the first to actually get some of the mice and do some studies.  There was something specific he wanted to tackle, some specific issue, and so he wanted to do some background studies before he did that, and so he started actually doing some studies in the p53 and Tg.AC.

In the meantime, Jef -- well, the other person you’ll talk to now who played an important role was June Dunnick, and June and Jef . . .  June, I think, was the study scientist on a chemical called phenolphthalein.  Phenolphthalein is a drug.  It was -- it used to be -- a component of over-the-counter laxatives.  And some people, I guess, take very large quantities.


Tennant:                      The FDA was worried about it, for reasons I don’t really know.  There was concern about the long-term safety of this, and they wanted some animal studies done.  And the NTP, I think, was not enthusiastic about doing that, and so they said, “Well, okay, June, you can put them in with p53.”  There was some suspicion about it being mutagenic -- no clear evidence, but something worried some of them.  So June and Jef did a study of phenolphthalein.  It turned out remarkably clear in that it mimicked what phenolphthalein was reported to do in wild-type animals, the same types of tumors, but you see them in a six-month period.  And when they took the tumors out of these animals, they could measure actual loss of the other p53 allele, so clearly, the animals got the tumors because they were sensitized by having only one p53 gene.  Phenolphthalein in some way resulted in the loss of the second allele, and they got tumors.

It ended up that those data were used by FDA.  I think it’s the first time that transgenic information was used in a regulatory decision, and ultimately phenolphthalein was taken off the market, so you can’t get it in a laxative anymore. I know Ray Stoll was very interested in those results also, but I think that study came along after the ILSI effort was underway.  The timelines are just a little bit fuzzy; there was a lot going on at that time.  But I first became involved with ILSI as a scientific advisor and was then made a permanent member of that committee.  So I started participating in all of their decisions.  They had already decided which chemicals or drugs they were going to use.  That had already been decided.  What they were dealing with were design issues for the studies and so on.  So that’s the stage at which I became involved in that.

Shostak:                      And what were the outcomes of the ILSI committee?


Tennant:                      Well, in a very broad way, it was, first of all, a forum in which they brought together the researchers, the developmental people, and the regulatory people all in one place to design, oversee, and analyze the data.  I think it really harmonized the process of trying to decide whether or not a new assay could be used in a regulatory sense.  It represented a collective compilation of data done, I would say, to the best state of the art for studies that you can conduct.  We were conducting studies literally all around the world and getting people to agree on doing them in a similar manner so that they could be compared and interpreted at the end.  Just being able to facilitate that process was no small feat.  So it’s an example of how one can do these sorts of things in the first place.

But, secondly, it ended up yielding a set of data by which the relative value of these transgenic mice could be judged, based upon that selection of chemicals. The upshot, I think, of the ILSI project was, it identified the strengths and weaknesses of these particular models.  Number one, it showed that they were not supersensitive.  They really loaded this list of agents with a lot of carcinogens, and these models did not throw off spurious false positives.  I think it also reaffirmed the position of the FDA that they would selectively accept data from studies conducted in alternatives in lieu of long-term mouse data for drug-safety assessment.  I think all of that came out of ILSI.


Shostak:                      Is the FDA unique in its receptivity to data coming out of research done with transgenics?

Tennant:                      Well, I would say the only real contrast you can make is with EPA.  Those are the two major regulatory agencies. You’ve got the European Union Regulatory Commission, but they don’t function like our EPA does. Now, I haven’t had a lot of contact with them, so I can’t speak to the Regulatory Commission, but in terms of drug-safety assessment, the European drug-safety assessment is much more like our FDA.  So ILSI was definitely an international coalition because there were drug-safety people from Europe involved just as FDA people were.

Shostak:                      Were EPA people involved?

Tennant:                      I would say they were there sort of in an advisory role.  I mean, I know they were invited, but by and large, there was not good, effective EPA representation.  They didn’t become involved in it in the same way that FDA did.  So I would say, in contrast to EPA, FDA was certainly very proactive in assessing new alternatives.

Shostak:                      I realize that this is asking you to speculate, but do you have a sense of what accounts for the differences between EPA and FDA?


Tennant:                      Well, you can find differences even within FDA.  FDA is made up of different centers, so everything I’m talking about has to do here with the Center for Drugs.  So, for example, Center for Food, Center for Devices, they’re driven by different legislation even, so you’ll find differences even within FDA.  But the biggest component of FDA is the Center for Drugs.

And as in most cases, it comes down to a couple of individuals, some individuals who see opportunity, have insight, or have motivation, for whatever reason, to look at new things.  That’s my opinion. I think, for example, that Joe DeGeorge, who left the FDA, was a major influence in FDA’s involvement because, I mean, literally, he feared no one.  You know, he could hold his own in any kind of an argument.  So he actually enjoyed mixing it up with people and forcing them to think and to be very specific.  He would be pretty demanding in what he wanted, but I thought that he was extremely objective and, in a lot of ways, courageous.  In regulatory agencies, the atmosphere is not always conducive to promoting new agendas, because it sends waves of uncertainty through the organization. But I think he did them a great service. Certainly, they might not have shown the same degree of interest in all of this had he not been there.  I can’t say that for sure; that’s just pure speculation.  But he did play a really critical role.

Shostak:                      How did the goal of developing alternative testing mechanisms shape the research agenda at the NIEHS or NTP?  Is research different when it’s oriented towards regulatory applications in any meaningful way?

Tennant:                      Well, yeah.  I think it has to be.  That’s a very tough question to answer, but let me try to think my way through it a little bit here. People who conduct the more routine assays have a particular way of looking at a problem and looking at a result.  They’ve developed sort of their own algorithm for factoring in what is believable and what is not, and for explaining away what is not believable or acceptable.  Now, these long-term assays are among the most rigorously scrutinized studies that are done today, so nothing that I’m saying is meant to demean the quality of the work.  The quality of the work is extremely good.  But they’ve developed a strong confidence in the value of the information that these sorts of systems bring.  And if you want to convince them that something is better, you have to be ready to play ball on their court.

Shostak:                      They own the gold standard.

Tennant:                      Yeah.  And so you’d better work with chemicals that they are familiar with, those for which they are confident that they understand what the answer is.  So, yes, we purposefully selected chemicals that we knew the NTP had a lot of experience with, so that if we gave them a certain result from the transgenic, they had a context within which to judge how important it was.  So clearly we did that. In contrast, when you’re doing some work that’s more investigator initiated, you can be driven solely by your own conviction and by your own intuition, and that’s why people love to do it. For example, within my lab over here, a small lab, it’s a much different type of thing.  We don’t think in the same way as when we’re thinking about how to assess these models in a way in which the data would be used by NTP.  It’s a different process.

Shostak:                      Did you meet with resistance when you were developing the transgenic models, and, if so, what was the basis for that kind of resistance?

Tennant:                      No, I don’t think we met any -- I can’t say that we met resistance.  I had a lot of help from people within NTP, and I would start with G.N. Rao.  You know, without his help in providing sources of animals, we would have been really hamstrung.  We had a lot of help from Joel Mahler as a pathologist.  We needed somebody who was interested in what happened in transgenic animals, whose opinion as a pathologist would be accepted by the other pathologists.  They’re sure as heck not going to believe it if I tell them that this is a squamous-cell carcinoma.  Bob Maronpot, too, was interested and started working with the Japanese to evaluate another transgenic model.  June Dunnick was interested also, and on her own promoted trying to use the transgenics in some studies. So, no, I won’t say that we met resistance.  I wouldn’t say that at all.  I would say we met with skepticism that these things could ever replace a bioassay; that they were really that useful; that you can really understand what’s going on in this animal.  It was more skepticism than resistance.

Shostak:                      Has that skepticism waned over time?


Tennant:                      Yes, with some people it has, but with others, I still think . . .  I mean, there are some people who have a great deal of difficulty moving out of the framework within which those two-year studies were done.  And what I want to say now sounds like it’s going to be pejorative, but it’s . . . we can take a bioassay report from 1980 and we can pick a report from 2000 and open them up, and if you read one, it would be just like reading the other.  It began as a patho-statistical process and it continues as a patho-statistical process.  It’s extremely well done, highly organized, very scrutinized, very believable, but in that 20-year period, we’ve discovered oncogenes and suppressor genes.  The whole field of molecular genetics has evolved.  Gene cloning, the whole genomics era has evolved.  In that period of time, there’s been enormous progress in what we have learned biologically, and yet it has yet to impose itself on how the bioassays are done.  So that tells you there is some intrinsic stability, predilection, myopia -- I don’t know which word fits best, but there’s something about how those who conduct and rely on these assays view the rest of biology.  It’s a hard club to break into, a very hard club to break into.

Shostak:                      What allows one to break into that club?


Tennant:                      Well, I think the fact that the FDA would use information from those assays is compelling.  Decisions are made today about drug safety based upon using these models in very selective cases.  But the EPA has to work within some different framework of judgment, and they have not pushed for any of that kind of information.  Consequently, many in the NTP consider the EPA their client.  Until some more substantial interest develops on the part of EPA, I don’t think you’ll see a very big movement of NTP into using the models.

Shostak:                      One of the things that I’m interested in, that we talked about the last time I met with you here, is this question of how molecular biology or biotechnologies arrive in toxicology.  What I hear you saying now is that there’s a big difference between individual investigator-initiated research using molecular biological techniques, which has taken off, and a more applied, bioassay-focused toxicology, where the molecular biology has been taken up somewhat more slowly.  Is that correct?

Tennant:                      Mm-hmm.

Shostak:                      So, if the field of biology, and parts of toxicology, are moving more and more towards molecular biology and molecular technologies and the bioassay programs are not, does that create any kind of crisis of legitimacy for the more traditional approaches?  Or do they remain solid in their own right?


Tennant:                      No.  I think they’re entrenched in their own right because they have been around, you know, at least 30 years, and those who conduct them and use that information have become very comfortable with how to interpret them.  So, they’re very well entrenched. I guess maybe another analogy -- and I’m not sure how appropriate it is -- but we monitor our water supply by coliforms.  Well, there are probably some better indicators of water quality than coliforms.  But, on the other hand, it has become such a reliable tool.  We’ve got analytical tools in chemistry now that would allow you to measure at parts per trillion what is in the water, yet the context within which detection of coliforms is performed is such that it would be extremely hard to change.  So once the regulatory community develops some tools that give it confidence about the nature of its decisions, I think they tend to hold on tenaciously to those tools.

Shostak:                      Is it fair to say that your career has been shaped by the intention to develop new tools and promote the refinement of existing tools?

Tennant:                      Well, we’re still doing it with toxicogenomics.

Shostak:                      Right.

Tennant:                      Yeah.  Maybe I’m as hard-headed as they are.  But I think that there is so much to be gained in understanding from these other tools that in most cases I strain to understand why there is reluctance to use them.

Shostak:                      In what ways, if any, are the transgenic research initiatives and the toxicogenomics research related to each other?


Tennant:                      Well, I guess they’re related in the sense that they are both tools for acquiring information.  They are very, very different tools, but they’re different ways of learning what chemicals do and what the processes are that lead to adverse effects. Now, right now in toxicogenomics, we’re focusing on acute toxicity, because the complexity of analyzing high-density data is such that you learn to walk before you try to run.  But the obvious application of toxicogenomics ultimately will be in really dissecting the processes that lead to cancer, because you can now look across the genome.  You don’t have to try to guess what is changing, what is important.  You can actually look at what happens and then try to use your judgment to see what’s important out of all of the things that are occurring when you treat with a carcinogen or you do something to initiate tumor development.

So, I mean, in my lab the two merge together because, for example, we’re still working with the Tg.AC model.  Why are we doing that?  Well, we’re not testing chemicals anymore.  That’s up to FDA, EPA, or NTP or whoever else wants to use it for that.  But there are models where we can essentially synchronize a carcinogenesis process.  That is, we can apply a stimulus, and from that stimulus you get a series of events that will occur, ultimately leading to the development of a tumor.  My best analogy is that we didn’t learn anything about the cell cycle until we learned how to synchronize cells.  You apply a stimulus.  In this case you hold the cells in the absence of serum, you apply serum, and you begin a series of events that lead ultimately to cell division.  And so people have begun to dissect that whole process, and over the last decade we’ve come to learn an incredible amount about how the cell cycle is regulated. I think the same thing is true now for tumorigenesis.  Taking these sorts of models where you can actually synchronize the process, then applying genomic technology to essentially map the genetic changes, the changes in gene expression that are occurring sequentially along this process, I think is the way to go, and that’s what we’re trying to do in my lab right now.  So, in my lab, the two approaches have merged.

In the NCT [National Center for Toxicogenomics], what we’re trying to do is to show that we can take patterns of altered gene expression and essentially use them as digital pathology; that is, instead of having to take a section out of the tissue, you can take blood from animals exposed to a certain agent and actually get a profile of altered gene expression that will tell you what that chemical, that drug, is likely to do.  So that’s what we’re trying to do now, but it’s still anchored in what people conventionally accept as toxicity.  So when we do a toxicogenomics experiment, we treat the animals, we let them develop the adverse effect, and we characterize that in the same way that any toxicologist does.  You know, you weigh the animal, take the organ weights, take tissues for histopathology, do clinical chemistry, all of that stuff, so that you have a full profile of what the toxicity is, and then you match that with the changes that are occurring in time in gene expression.

Shostak:                      This is called “phenotypic anchoring,” right?

Tennant:                      Yeah.  So you’ve read about this?

Shostak:                      Yes.

Tennant:                      So, the two come together in that sense;  it’s a similar paradigm.  If you don’t link it to what is conventionally understood -- it’s playing ball on their court or on their field -- if you don’t link it to what they conventionally understand to be a real effect, then you’ll never convince them.

Shostak:                      We talked a little bit yesterday about the ways in which transgenics and environmental genomics are related to each other.  Would you help me understand more about how transgenics enable the study of genetic susceptibility to environmental exposures?


Tennant:                      Well, I’ve only been keeping up with that from a distance, but part of the original goal of the environmental genome program was to take a large number of genes, 400 or so, that were known to be important in environmentally related disease, and to resequence all of those genes to identify all the possible polymorphisms in those particular 400 genes; then, in some manner, try to zero in on which of those polymorphisms were linked to these susceptibility phenotypes and so on; and then to specifically genetically alter that gene and create a transgenic or knockout animal that would mimic the phenotype of the human disease.

Shostak:                      But that’s separate from the work that’s been done with the p53 and the Tg.AC mice.

Tennant:                      Right, right.  They’re working on basically non-tumor models.  They’re more interested in things like Parkinson’s disease or other clear-cut, non-tumor-related diseases that are believed to have an environmental component.

Shostak:                      Two more questions, both highly speculative. First, generally, what would you say are the lessons learned from the transgenic research initiatives with which you’ve been involved?  I realize that could be answered at multiple levels, so wherever you want to focus.

Tennant:                      Well, if I was going to be glib, I would say don’t fight city hall.  No. That’s a really good question, Sara.  I don’t know.  I have to just let it perk here for a minute and see . . .  What are the lessons learned?  Well, I mean, one inescapable lesson is that if you want to tackle a very difficult problem on which there are widely discrepant approaches, you really have got to somehow strive for scientific consensus through a process, like ILSI did.  I mean, bringing all the players to the table and working your way through these sorts of things.  If we’re going to be doing more and more complex work, involving multiple disciplines and high-risk research, then you’ve got no choice but to create forums for the type of communication that had to take place here, so that’s one lesson.

Shostak:                      Related, why did this research happen at NIEHS?  What is it about the Institute, the relationship with NTP, or whatever other factors were salient that led to the NIH as being a key site in the story?

Tennant:                      Well, somehow or other the NTP was created here.  You know, it’s a combination of what we each perceive the mission to be, and the intellectual resources available to you here in this institution.  It didn’t happen in any of the other institutes because I guess they didn’t conceive of having a mission like the specific mission that we did.  And then there are just opportunities that arise from having certain people and certain information available to you.  That’s another good question. You know, Oak Ridge developed genetic toxicology, but they could never have, I don’t think, married it to . . .  Well, they could have done it early on.  I was going to say, married it to the longer-term carcinogenicity work.  But the milieu wasn’t right.  They had some of the same intellectual and certainly the physical resources to do it, but the milieu just wasn’t right for them to do it.

Shostak:                      How big a part of the milieu is the NTP?

Tennant:                      In NIEHS?

Shostak:                      Mm-hmm.


Tennant:                      A big part of it.  Yeah.  I mean, the whole thing permeates, to varying degrees, all parts of the institute.  Now, there are enclaves within the Division of Intramural Research where it doesn’t have any impact at all -- the really independent investigators who work on neurological science or, I’m trying to think of another one, calcium channels, apoptosis, who study a lot of the basic, really fundamental biological issues.  It doesn’t reach down there.  But the whole thing permeates.  In one way or another, a lot of people are directly or indirectly involved in and influenced by what goes on with NTP.

Shostak:                      Let me also ask you a novice’s question.  As an outside observer, one thing that strikes me again and again is that when I do interviews at other institutes, I’d be much more likely to hear about drugs that are being developed or clinical interventions that are desired, but when I’m at NIEHS, I hear much more about testing and risk assessment and regulation and these kinds of broader public health . . .


Tennant:                      Well, the broadest public health perspective that I think fits in NIEHS is prevention.  I mean, fundamentally, what we would like to be able to do is to recognize a potential adverse effect, based upon some well-defined principles or well-defined tools, in order to obviate the occurrence of disease.  I mean, that’s ultimately what it would be all about.  But there’s a long, long way to go on that.  But, I mean, that’s -- I would say our mission is linked to prevention.

Shostak:                      My final question is always, what should I have asked you that I haven’t?  You know the story better than I do.  Are there other pieces of it that we should be talking about today?

Tennant:                      Well, you asked a lot of really good questions.

Shostak:                      Thank you.  One thing that I thought might come up that hasn’t, and I haven’t been able to craft a good question for it, is what the Ashby-Tennant challenge was and if it has any role in this.

Tennant:                      Oh.  Well, in 25 words or less, what it was is that in the late ‘80s, early ‘90s, there were several groups purporting to have developed software packages that had great predictive potential.  So what we decided to do was to take a group of these assays that were in progress, for which the results would unequivocally be known in about two years, and to ask everyone who thought they could do so to predict the outcome of the bioassay.

Shostak:                      Using the software?

Tennant:                      Based upon their software or however they wanted to do it: voodoo, cards, whatever.  And in order to stimulate that process, John, Jud Spalding and I -- and it was mainly John and Jud who really put in the hard work . . .

Shostak:                      John Bucher?


Tennant:                      No, John Ashby and Jud Spalding.  We put out our own prediction.  And so at the end of that two-year period, we had a big meeting here where everybody who had made predictions came in and had to answer for them, and it was a very interesting meeting.  I mean, that was probably one of the more interesting workshops here because there was no place to hide.  I mean, you had published what you said was going to happen.  So there was a lot of backtracking and rationalizing and everything going on, but what it did serve to do was to show that, with our current state of knowledge, whether human experts or expert systems, we had a long way to go.  So that’s what that was all about.  It was a way to manifest our skepticism about what a lot of people were claiming they could do. Yeah, that was fun.

I wish I had -- there’s a picture, I’ve got it somewhere here.  John Ashby used to come over and spend literally his summer holiday with us.  He’d spend the month of August here working, collaborating on something or other, and one of the things we decided to do was to take a large number of these two-year assays and just literally see if there were any patterns of effects related to either chemical structure, mutagenicity, whatever we could think of, that we could pull out of the data.  So we compiled data on 300 of those bioassay reports -- that’s why I got rid of them; I couldn’t stand looking at them anymore -- I mean literally went through them one by one and tabulated the tumor data and everything else about the chemical and structure and all of that, and ended up publishing those results.

But there was one Sunday morning when we were trying to bring this to closure, and up in the hallway we had all of these charts pasted along the side of the wall, and one of the other people who was working with us came in at that time, and there was John Ashby in some cutoff jeans and a baseball hat, down on the floor writing, with all of these things there, you know, most of them on the wall.  And he snapped that picture.  It’s really a -- I should have gotten that thing framed, because that was . . .  I mean, it was typical John.  John was a unique individual.  And that was actually a very fun time, a really fun time.

Shostak:                      If you find the picture, I’ll have it archived.

Tennant:                      Okay.

Shostak:                      Let me know.

Tennant:                      I will look for it.

Shostak:                      Okay.

Tennant:                      I’ve got it filed somewhere where I’ve got photos.


Shostak:                      Actually, I was just this morning talking with Steve McCaw in arts and photography about his concern that there are lots of photos that could be part of NIEHS history that will be lost.  So I can reassure you that if you find pictures that you decide you don’t want, they want them over there.  And I’ll work with them on making sure that those get preserved.  So let us know what you find.

Tennant:                      Okay.  And I will look through the files here and see if there is anything that might be of use to you.  I’ve gotten rid of a lot of stuff, but I did hold onto a lot of old files.

Shostak:                      I’ll turn this off.

                                                            END OF INTERVIEW



[1] Nature (1992) 356: 215-221.