Dr. John "Jef" French Oral History 2003



John “Jef” French

November 13, 2003

 

It’s November 13th, and I’m interviewing Jef French at the National Institute of Environmental Health Sciences.

Sara Shostak:              Dr. French, you realize that I’m taping?

Jef French:                  I understand, yes.

Shostak:                      Excellent, thank you.  I wanted to start by asking you what your research was prior to the transgenic research you’ve conducted at the NIEHS.

French:                        Primarily in chemical carcinogenesis using rodent models, particularly focused on the sporadic tumor in Fischer 344 rats called large granular lymphocytic leukemia.  So it was the role of molecular genetic changes in that, particularly in the c-Fms (now Csf1r) gene, which encodes the receptor for macrophage colony-stimulating factor.

Shostak:                      And what model systems were most important to that research?

French:                        Well, primarily rodents, particularly rats of different strains, because the Fischer 344 rat strain shows a particular susceptibility to the sporadic disease, and it’s one of the major causes of death in aging F344 rats.  So we had used that model in a number of ways to look at the modulation of the leukemia, the role of insulin-like growth factor 1 (IGF1), and the role of caloric restriction and chemical interactions on them.  So that was the primary model that was being used.

Shostak:                      When did you first hear about the transgenic mice?

French:                        I had the good fortune of having a USPHS foreign work-study fellowship through NIH.  I spent a year between Finland and Sweden working on molecular genetics, a human molecular genetics project.  I had diverged a bit.  I wanted to get some experience with human malignancies and was working on a project with a group that -- I guess it was the Finnish National Institute of Occupational Health Sciences.  I was doing cell-culture work from lung malignancies and looking at the role of loss of heterozygosity.  I spent my year there, and came back intending to start a project on that, but got to talking with Ray Tennant about his interest in this area.  Having worked previously with rodent bioassays, the idea became very intriguing: being able to combine a paradigm for testing and identification of carcinogens and, at the same time, being able to research those new models and look at the mechanisms responsible in terms of the molecular genetic changes.

Shostak:                      Which were the first models, transgenic models, that you worked with?

French:                        With Ray, we started out at that time with a model called Pim1, which was a transgenic mouse made by Anton Berns in the Netherlands.  It carried a twofold enhancer region of an immunoglobulin gene and a Pim1 sequence, which was a proto-oncogene associated with lymphomas in mice.  So we studied that, looking at a variety of carcinogens of known potency identified by the NTP, and tested that as well as the Oncomouse™ developed by Phil Leder at Harvard, previously at NIH.  So we started with those.  Then I became interested in the p53 tumor-suppressor gene knockout created by Alan Bradley and Larry Donehower at the Baylor College of Medicine, because this was the early ’90s and already the p53 tumor-suppressor gene was the most studied cancer gene at that time, and it seemed logical to try to exploit the similarities between the human and the mouse in that area.  So p53 became the focus at that time.

Shostak:                      Which laboratory were you working in at the time that you began the transgenic research?

French:                        Well, when I came back from my year fellowship, I approached Ray about the project that he was starting and transferred laterally into the Cancer Biology group within, at that time -- I cannot remember the actual name of the branch.  It was the forerunner that became the Laboratory of Environmental Carcinogenesis and Mutagenesis; LECM is the acronym.

Shostak:                      You mentioned that you were working with carcinogens that had been identified by the NTP.  Am I right in thinking that that suggests that the transgenic mouse models were of both basic and applied research interest from the outset?

French:                        I believe that was recognized by Ray particularly early, because the problem in identifying the carcinogens presumed to be the greatest risk to humans was that it was often confounded by the development of early sporadic tumors in the rodent models, and those often didn’t have any apparent direct relevance to human malignancies.  So I think we considered the idea that, with a gene in a mouse that was inactivated or activated by a mutation, and with what was known about its relevance to humans, we could exploit that and do two things at the same time: look at a chemical’s potential for being a human carcinogen, as well as exploit that mechanism and mode of action in the mouse and determine if it would be of presumed relevance to humans.  And mainly because it gave us a focus on the pathway of tumor suppression that we could look at, and it was assumed that if the mouse showed that that gene was inactivated, or the remaining wild-type allele was inactivated or mutated, that would be directly related to what was known to occur in human cancers, so that would at least remove some of the uncertainty, if you would, about trying to extrapolate between a mouse model and the human condition.  So, yes, it was recognized pretty early that we could do both of those things, whereas in the conventional rodent models used at the time, it was very difficult to determine what mutations might have been induced.  The term was coined -- I don’t know if you know the term -- it’s called molecular archeology.  Or maybe it was -- it was either Varmus or the other guy at UC-San Francisco.

Shostak:                      Michael Bishop?

French:                        I think it was actually Michael Bishop who coined the term molecular archeology because you only find what you look for.  So by having a known focus, we could emphasize that pathway in the process and at the same time range off of that to look for other, broader changes.  So they gave some focus to it.

Shostak:                      Can you tell me about your early research with the p53 mouse model?

French:                        It started out as a collaborative effort with Ray Tennant and Judd Spalding, because we considered that, with a gene known to be involved in human cancer -- one for which we presumed that, if the remaining wild-type allele was inactivated, the latency would be truncated -- we could exploit that by looking at a range of human and rodent carcinogens.  Basically what that means is looking at a range of carcinogens of different potency, because they range all the way from carcinogens that are known to induce cancer in humans; most of those Ray called trans-species carcinogens because they caused carcinogenic effects in more than one species of mammals.  All the way down to single-species carcinogens, which were considered to be less potent, if for no other reason than that they required a much greater dose and a greater length of exposure to induce cancer in the rodent.  So we wanted to start out by looking at a range of carcinogens of different potencies and then looking to see if we could discriminate between those sets of chemicals that would be of great risk to humans versus those which would be of lesser risk to humans; and also to look at the genetic changes in the remaining wild-type allele as a focus, whether it was a mutation event or whether it was a chromosomal event involving a deletion or chromosome rearrangement or simply a loss of heterozygosity, which is usually associated with loss of other tumor-suppressor genes.  So that started the focus.

Shostak:                      And how did that research progress over time?

French:                        Well, it was very encouraging, because after a first set of carcinogens that we selected, it became apparent very quickly that potent human carcinogens, which were also rodent trans-species carcinogens, quickly induced the knockout or the loss of the remaining wild-type allele and rapidly induced cancers that were very tissue specific.  In other words, we could look at the cancers that were associated with those exposures in humans and see similar ones.  For example, aromatic amines cause human bladder cancer.  Aromatic amines gave bladder cancers in rats and mice.  Aromatic amines induced rapid bladder cancers in the p53 tumor-suppressor gene knockout.  So there was a continuum there, which gave us encouragement.  And in the first dozen or so chemical carcinogens that we studied, we saw a reduction in the time to tumor observation, and we saw primarily a loss of heterozygosity involving the wild-type allele with most carcinogens, but we also saw some carcinogens which induced point mutations in the p53 gene, and those were pretty consistent with what we knew and understood about human cancers and rodent cancers, which, as I said earlier, helped us, we believe, to remove some of the uncertainty in trying to extrapolate between rodents and humans.  So that involved a series of studies where we looked at the types of cancers induced and the time it took to induce them, then actually studied those cancers along with their contiguous wild-type tissues and could show the molecular basis for that.

Shostak:                      Can you catch me up to date on the course of that research?

French:                        Well, the course to date is that it’s taken us -- for me, I thought, oh, this is really neat, and everybody’s going to see the value in this very quickly, and we’ve already leaped ahead and we’d like to start developing other models with which we can investigate this and try to refine it.  But to my surprise -- maybe not to Ray’s, but it was to my surprise -- it took a great deal of continuation along that same research track until we built up a large set of chemical studies, to try to convince others involved in this research that there was value in using transgenic or genetically altered mouse models in carcinogen identification, especially for investigating the molecular mechanisms associated with that for a given chemical.  So we’ve moved basically a bit further from that standpoint, as I think the validity and the utility of the models are recognized now.  And as far as my research is concerned, it’s led me into investigating the role of p53 in genomic instability and what other genes are involved in that, particularly the molecular events involved in loss of heterozygosity, for example, where you actually show a reduction from a heterozygous region, where at a locus you have two different alleles which are heterozygous to each other, to homozygosity, and to look at the extent of that and try to understand the mechanisms that are associated with it.  That’s led into looking at aberrant mitotic recombination events, the genes and the pathways that are involved in homologous sequence-induced repair versus non-homologous sequence-induced repair, the whole series of pathways involved in that.  For example, when you have either a single- or double-stranded break, the repair pathways can take homologous sequences and repair from that point.  But oftentimes it’s a disjointed break, and a pathway becomes involved that uses a non-homologous sequence.
That often results in more error and more damage because sequences are lost in that process.  So one is a normal repair.  The other can lead to repair, but often it is aberrant in that case. So we’ve gone from studying a range of chemicals to try to model environmental carcinogens of known consequence to humans, like ionizing radiation and benzene, which, after many, many decades of research, still aren’t thoroughly understood.

Shostak:                      In investigating the molecular mechanisms of environmental carcinogenesis, which is what I . . .

French:                        I think that’s what I’m trying to say.

Shostak:                      What are your prime motivators?  What are the drives behind this research?

French:                        I have several.  One is, it’s always pleased me to work in public health, in other words, trying to do something to improve the human condition.  That’s very satisfying, but it’s also very satisfying to understand something about how things work normally by being able to study environmentally induced disease, and in this case we’re trying to focus particularly on the correlation between environmental exposures and cancers.  So it’s fulfilling from both aspects, public health standpoint as well as just building knowledge and trying to understand actually how these things happen at the molecular level.

Shostak:                      Going back to the beginning, when you started working on the p53 mouse model, were there new questions that you could ask using that model that you had not been able to explore previously?

French:                        I think so.  The first thing that comes to my mind is that we could discriminate between sporadic tumors, which often occur in rodents due to their particular susceptible genotype -- the combinations of susceptibility and resistance genes that drive a genetics-based tumorigenicity -- and a chemically induced cancer very early in the process.  For example, the model that we did with bladder cancer allowed us to look at the temporal sequence of events that occurred very, very early, and we knew we would be able to discriminate very early events from very late events and try to get a handle on the role of other acquired genetic lesions.  For example, it wasn’t appreciated until recently that very early events could actually increase the instability of the genome, allowing a rapid adaptation and the accumulation of tumorigenic or malignant cells, which are often aneuploid, but they [the chromosomes] sort themselves in such a way that they can maintain the basic processes.  The problem is they become uncontrolled in that process.  So it allows us to investigate that process.  If that makes any sense.

Shostak:                      It does.

French:                        You know, the problem with scientists is they’re always speaking in jargon, using the language that they talk with each other in, so if I need to explain something in a clearer way, don’t hesitate to say so, because I should be able to do that.  We’re always encouraged to do that, but I’m afraid we often forget it and just rush ahead.

Shostak:                      So far I feel like I’m following, but I’ll ask if I need more background.

French:                        Well, tell me if I need to clarify something, and I’ll try to give sufficient information without giving too much or too little.

Shostak:                      Thank you.

French:                        Okay.

Shostak:                      In working with the p53 model, how much retooling or restructuring of the actual physical laboratory did this research require?

French:                        Well, not so much.  To my way of thinking, it’s basically semantics in the sense that it changed from biochemistry into molecular biology, so the tools became more refined.  But the same basic tools have been used all along.  They have been modified in the sense that we’ve essentially worked with the nucleic acids, DNA and RNA, and protein [of normal and malignant cells] and understand much better the regulation of the expression of the RNAs and how they make protein.  But in the case of cancer, we have a much better understanding of the dysfunctional processes using newer tools.  So it’s required the use of, the development of, new tools.  Like cDNA expression . . . rather than working with one gene at a time, now we’ve often had the tools to work with many genes and many proteins at the same time, because it’s the interaction of those that’s become important.  So there’s been a constant stream, I think, of evolution in the lab between those.  As you develop one focus of research and familiarity with the literature in that research, you often have to go outside of the lab to acquire new tools, and I think that’s been a real boon of working at NIH: those things are easily acquired in a fairly collegial community of research, so you can go to others to help you understand quickly what the new technologies mean.  In one sense, I felt maybe I’ve lagged a little bit behind, because it’s taken so many years and so many animal studies looking at the same thing with a really narrow focus to show the utility; I’ve lagged a bit behind some of the newer tools.  But we’ve done that in the past few years as far as expression profiling, both at the RNA and the protein levels in particular, so that we can understand the dysregulation of events that are occurring in multiple pathways associated with tumor suppression, if that helps.

Shostak:                      It does, absolutely. Related, the p53 mouse itself is patented.  Correct?

French:                        It’s a good question, because -- yes.  The technology . . .  I don’t know whether the mouse itself is patented or whether it was just copyrighted.  I know that parts of the technology for homologous recombination tools for knocking out specific genes were patented by Mario Capecchi and others.  Now, in general, all of these mice that have been created to study cancer have primarily been superseded by a DuPont patent; they basically got a very broad patent through an exclusive license from Phil Leder and Harvard on the Oncomouse™.  So it’s a good question.  But I don’t know whether the mouse is specifically patented or whether it’s the processes which take precedence now.  But if anyone had patented it, it would have been Alan Bradley at Baylor College of Medicine.  I’d like to ask Larry now exactly: did he patent this mouse or what?  Because the people at Baylor were very collegial and supportive with their mice and very valuable to this work.  We received some mice early from that lab because the goal was to try to expand it and to make it a tool within the toxicology and environmental toxicology community.  Really, it was desirable to have those mice commercially available, and they are, through Taconic.  And the value of that is simply that they can do the quality control and make sure they’re specific pathogen-free, etc.

Shostak:                      So this is exactly what I was interested in.  It sounds like you haven’t encountered any difficulty in accessing the mice themselves.

French:                        We have not [had any difficulties], either through material-transfer agreements with individual investigators or in being able to purchase mice.  If we’ve had to purchase breeders from commercial colonies or even purchase significant numbers of mice from a commercial breeder, it’s not been a problem for us, mainly because Harold Varmus was able to negotiate with DuPont an NIH license which has allowed both NIH and NIH R01 grantees to use these mice basically without restriction.  The problem, from our standpoint, has become that a lot of companies who want to use these mice for safety and efficacy testing in commercial labs have a restriction on how they can use them.  They can’t really develop any commercial products from that without having a license from DuPont, which I understand has probably restricted and maybe retarded the use of these models for more than just carcinogen identification.  It’s become problematic for them, not only for looking at that, but for wanting to use those same mice, for example, as a cancer-prone model and then develop therapeutic models for intervention.  So for NIH scientists, it’s not a problem.  I’m concerned about the problem it might be for industrial or commercial scientists, particularly in the pharmaceutical industry.

Shostak:                      Has there been much debate or action within the field of pharmacology or toxicology around this issue?

French:                        The academic scientists ignore it, so thus far it hasn’t become a problem for them.  There has been limited debate among our colleagues in the commercial sector, but the DuPont position is so strong, I think it sort of stifles any concern.  So I’m not aware of any; it just has the potential of doing probably more than it has actually caused at this point.  The patent that DuPont bought from Harvard had actually run out a few years ago, but they were able to supersede that with another.  I don’t understand the legalities of it, but they were able to somehow change that patent and get it for another 17 years, so it’s a potential long-term problem.

Shostak:                      Can you tell me about the ways in which you participated in the NTP efforts to develop transgenic models for their testing program?

French:                        Yes, it’s been primarily a tripartite effort.  I mean, within this area, Ray and myself and Judd Spalding and LECM, and now several people in different labs, worked early with a group of scientists within the NTP who also wanted to do some validation studies, if you will.  That is, they wanted to gain some personal experience by selecting model compounds for their own investigation, so we worked with them in the initial studies they did.  We’ve also provided advice and support for individual scientists within the NTP who have done other studies under their contracted research and testing program, where in-house scientists developed the project and the protocols, and those studies were contracted out to individual CROs [contract research organizations], because there’s simply not sufficient support or infrastructure within the institute to do that.  They also have to be done under GLP, good laboratory practices, because they have to stand up to the test of time, because there could be a lot of legal challenges to those.  That in itself is historic, because that’s a basis for why the NTP was even created.  So we’ve worked on those both as a group and as individual scientists.  We also worked with the consortium of academic, government, and industrial scientists that made up the ILSI, the International Life Sciences Institute, alternatives for carcinogenicity testing consortium.  So I think we’ve worked pretty closely with all of those groups.  And we’re still working with the NTP.  I act both as a group leader in transgenics and as a discipline leader advising the scientists within NTP on relevant models to be used for individual studies.  For example, I can describe that . . .  If a compound has been nominated for study, there are reasons for that.
It’s either a compound with a lot of potential human exposure, based upon the sheer bulk of its manufacture and the potential for it to get into the environment, or it can be produced in smaller quantities but have a structure that suggests it could be particularly toxic or a potential carcinogen.  Well, the NTP has to study those, and if they know, for example, that it has potential genotoxicity, I can recommend the p53 mouse model, because the bulk of our work has shown that it is particularly valuable in identifying mutagenic or genotoxic carcinogens.  Whereas other models, like Tg.AC or rasH2, which may not require genotoxicity, may be more appropriate for compounds which might be weaker genotoxins or might be operating through an epigenetic mechanism.  So I can help triage models and recommend models and work with them to design studies.  That’s, in a sense, the way this has worked out over time.

Shostak:                      How much research on transgenics is currently happening within the NTP?

French:                        NTP has continued to perform studies with transgenics on more or less a case-by-case basis.  That’s primarily because individual NTP scientists or NIEHS scientists have seen value in using a particular transgenic model, based on their knowledge of the literature, etc.  But we’re at a bit of a fork in the road, if you will, trying to decide what future models need to be developed, because we are at present discussing tissue-specific models for known human malignancies that may be underrepresented in rodents -- for example, mammary, prostate, and lung tumorigenicity may be underestimated in the rodent models -- so we’re trying to sort out what other models should be developed that would give us more information on compounds that might be in the environment and could be associated with the increase in those human malignancies.  So tissue-specific models, and continual refinement of present models to be sure that we understand how they work and what their strengths and weaknesses are: those are the two avenues primarily being pursued at the present time.

Shostak:                      What would you say are the most significant contributions that have been made possible by transgenic research, either in understanding the molecular mechanisms of carcinogenesis or in identifying environmental chemicals and protecting public health?

French:                        Well, all the way from the very early studies, which showed potential utility, I think, having studied many chemicals.  We recently completed a review by a local group [of scientists] that was here [at NIEHS], a group led by John Pritchard and myself, with Barbara Davis and Joe Haseman, a statistician.  I was able to compile a data set of approximately 100 chemicals that had been studied [in 3 different models].  Although we had the insight from individual studies that we could give some predictability to the outcome, it was only after analyzing this much larger set that we could get confidence that we could discriminate, in the larger arena, those compounds which we were fairly certain would be presumptive carcinogens in humans -- and many of those were actually known human carcinogens -- from those compounds which might be rodent carcinogens but, after a lot of research and a lot of review of the associated epidemiology, probably were of least risk to humans, and we feel like we can triage those now with these short-term models.  So that’s a major contribution, I think: we’ve been able to move from the conventional models, which took 30 years to develop and to reach a consensus of opinion about how they work and what their value is.  Within a 10-year period, we’ve been able to move forward and show the value of using short-term models that can accomplish the same thing with possibly improved accuracy in predicting whether a compound is going to be a presumptive human carcinogen or not, and at the same time we have been able to demonstrate that there is utility in actually looking at the molecular basis for this.
In my mind, that helps say, okay, now we’re using models for potential human carcinogens, and we can actually demonstrate a mechanism or mode of action associated with that, whether it’s genotoxic or not, or whether it’s epigenetic, and that all plays a very valuable role in the risk-assessment process.  So that’s been a valuable contribution, I think, to public health efforts in this arena.  You could argue all day long about exactly how important exposures to low levels of exogenous compounds in the environment are, and what their real risk to humans is.  In other words, what do they contribute to the burden of human cancer in the population?  The epidemiologists take one approach to that, and they’ll probably tell you for the most part that maybe a third or less of human cancers are associated with environmental exposures.  On the other hand, others will say that probably two-thirds of human cancers have some kind of environmental component associated with them.  So we can argue about the relative contribution of that, and which particular commercial or occupational or environmental carcinogens find their way into the environment, and some would say those are even smaller.  But we don’t know what the burden is overall on either cancer or reproductive disease, and we’re still trying, I think, to understand that.  But we made a big step forward just showing that we can use short-term cancer models for that.  And at the same time, I think that’s of significant value because it conserves resources, it costs less, and we use far fewer animals in the process, and we are using animals, I think, in a more humane way, because now we don’t have to let them age and develop sporadic disease, with the suffering associated with that.
We can look at animals fairly young, and we can remove them and euthanize them very early when they develop malignancies, while they’re still relatively healthy in that aspect.  So on the one hand, you still have to use animals, you still have to kill animals in the process, but we certainly, I think, use fewer of them, and we have less suffering in that process.  So that’s satisfying to be able to do.  It has been very satisfying to me to help make a contribution to what we understand about human and rodent carcinogens: the role that these compounds play in inducing genomic instability and the role that plays in the development of malignancies.  And we now have a tool, in my mind, with which we can look at individual chemical compounds as well as mixtures of compounds, look at these processes at the molecular level and how they occur, and hopefully we’ll be able to extrapolate that to humans as well.  So we’ve made a contribution, I think, to create a new paradigm, if you will, in that process.

Shostak:                      You mentioned that you were surprised by how much convincing this took....

French:                        Absolutely shocked, actually, because I told Ray after the first few years, “Hey, we can go off and do other things now.  This wasn’t so hard after all.”  But it’s taken a tremendous effort, I think, of this consortium of academic, government, and commercial scientists who wanted to be convinced, and had to be convinced, in order to move from the conventional paradigm over to the new one.  And I would say that probably many people are not convinced yet.  We have the experience of working with colleagues at CDER, the Center for Drug Evaluation and Research, which is part of the U.S. FDA.  They quickly saw value and became very involved in this process, where other elements of the FDA have not.  They have resisted and still do not allow the community that is regulated to move into this area.  It’s been somewhat less true, though, with the Environmental Protection Agency, I think because there was a concern in the commercial scientific community that things were going to be too sensitive -- that it was actually going to exacerbate their problems rather than improve them.  It actually could improve them, because fewer rodent carcinogens may now be identified, and those compounds that are identified may be more relevant to human exposures.  I believe that, and I think Ray would agree.  It’s been an extraordinary effort to try to convince others -- not the scientific community as a whole, because we have to sort of compartmentalize that.  Toxicologists in particular.  Toxicology is a very conservative science, I think partly because toxicologists are called upon to use in vitro and in vivo models, try to extrapolate from those to humans, and predict human toxicity.  They don’t want to overpredict it and they don’t want to underpredict it, which makes the field inherently conservative.  So they’ve had to be convinced.  Many have been, probably not all.

Shostak:                      Can you tell me more about the ILSI committee and the work that you’ve done with it?  It sounds like it’s been a key factor.

French:                        It’s been extremely important.  While we were laboring here, from our standpoint and with our particular set of problems and concerns, we were mostly studying compounds that were grandfathered in.  Those compounds had to be nominated to the NTP because there was no commercial interest in evaluating them; they were grandfathered in under various regulations enacted over the past 20 years or so.  But all of the new commercially developed compounds that could make their way out of closed systems were problematic, in the sense that industry was concerned about what these new models would do to the processes that had developed over 20 or 30 years.  It helped to have the ILSI.  And Ray must receive credit for that, because it was really under Ken Olden’s guidance that Ray took the lead, along with assistance from Judd and me, to help mold this relationship with ILSI, which showed an interest very early and established a consortium of academic, government, and industrial scientists that could bring their combined expertise to this problem.  It also elevated it to the next level, because now it became a very public process.  Although we’re a public agency, scientists in government usually aren’t very astute at publicizing their work.  They are happy to publish it in the scientific literature in the belief that, if it’s valuable, it’ll rise up and other people will learn about it and use it.  But this became a very public process.  In becoming a very public process, it brought attention to the work, and people had to take note that this was being done.  It also meant that, when the outcome became known, the selection had all been done with minimal bias in the process, if you will.
We could always be criticized that, yeah, we chose what we wanted because we thought we might come out with a given answer.  Because we developed all these ideas on our own, we could somewhat be criticized for that at some level, because it wasn’t necessarily a transparent, public process.  It was an in-house process.  Although we had publicized it to our colleagues and it was peer reviewed at some level, the ILSI process was peer review to the maximum extent possible, because many studies were run at the same time within a very short time frame.  So, I forget how many -- there were probably 30 different individuals’ labs that spent $35 to $40 million during that year or two, which was a pretty extraordinary occurrence among such a collection of academic, government, and industry scientists.

The ILSI effort was supported financially in part by some of the studies done in-house here.  It was supported more extensively, across that set of chemicals, by the commercial side, because they made a commitment.  There was sufficient data to give it enough credibility that they were willing to expend a significant amount of resources on it.

Shostak:                      Was the chemical industry involved or was this primarily pharmaceutical companies?

French:                        Primarily pharmaceutical companies, in the U.S., Europe, and Japan, so this became a worldwide effort.  And ILSI was able to provide an organizational structure, because they have an in-house program that’s very attuned to doing this.  They could attract credible scientists from industry who had a lot of experience doing this.  One of them, Jim McDonald from Schering-Plough, became the chairperson for that effort.  Working with ILSI staff, they could bring a lot of resources to bear on the process and make it very transparent at the same time.

Shostak:                      There was also an international committee on harmonization that addressed transgenic . . .

French:                        Yes.  We should step back and look at that, because it actually preceded the ILSI effort, and it was critical.  What had happened over many years was that many pharmaceutical safety and efficacy experts believed there was a problem with the rules under the International Conference on Harmonization, where, in their preclinical tox safety studies, they had to use at least two species to start out with, and most often those first two species for chemical carcinogenicity studies were the rat and the mouse.  But over time, there were a lot of tumors that occurred in, say, one sex or one tissue of a mouse or a rat that made them suspect the value of those findings, and they didn’t know how to discriminate them.  It was most problematic in the mouse, because many mice seemed to be prone to developing liver tumors.  So, over the years, there were a lot of compounds that got approved by FDA.  They simply were able to take liver tumors in mice and add that to their labeling and say that, yes, this compound did cause liver tumors in mice, and that was a warning, but no one took it seriously.  So that caused the mouse-based warnings to lose credibility.  What came along then -- I forget which, I think it was ICH 4 -- was when the Japanese, European, U.S., and Canadian scientists got together to try to sort that out.  The European scientists involved said basically, “Oh, we’re happy just with the rat study.  The mouse is problematic.  Let’s just do away with it.  Let’s just do the rat.”  The Japanese and U.S. scientists said, “No.  We’ve got to have at least two rodent species in order to make sure that we can understand the basic carcinogenic mechanism, because if it causes cancer in both species, it’s more likely to cause cancer in humans.”  That’s basically the line of thinking that was taken.
So the USFDA, CDER in particular, proposed, “Well, let’s make an alternative available, then.”  When a commercial company comes to the FDA and proposes studies for safety and efficacy, the FDA advises them on what the most applicable studies are.  So now they could say, “Okay.  You can do a conventional rat study and a conventional mouse study, but you also have the opportunity to take an alternative approach.  You propose to us what alternative model you want to use.  We’ll evaluate it with your whole package and agree or not agree with you on how to do that.”  So ICH became a forum that gave some credence to the models and gave the commercial side the encouragement that, “Yeah, well, we can do this.  We can start out with a short-term mouse study and get some experience with this.”  If it’s negative, they could submit that to the FDA, or, if they were concerned about it, they could go ahead and do a rat study.  So ICH gave a forum by which the regulatory bodies of all of those major pharmaceutical regions of the world could agree on doing this, and it gave the FDA the ability to at least semi-formalize that whole process.

Shostak:                      Do you remember what year that was approximately?

French:                        Well, what sticks in my mind -- don’t hold me to this -- is that it was in ‘96 or so when that became, I think, more or less codified in the ICH process.  And by ‘96, we had published our first few papers.  The NTP had gotten interested.  I think the FDA had seen that, and in the ‘96 ICH meetings, I think that gave them some support to propose doing that.  That’s loosely my understanding of it, but you’d have to talk to someone with ICH.  Some names in that process, which Ray may have mentioned: Joe DeGeorge, or Frank Contreras -- no, it’s not Frank -- Joe Contreras at FDA, and Abigail Jacobs at FDA.  Those are three names that would understand the timing of that process.

Shostak:                      Thank you. From what I’ve read, the FDA was involved but not EPA.   Right?  They were separate.

French:                        Correct, correct.  Separated because the environmental carcinogens are very different from the pharmaceutical problem.  The ways in which the risk-assessment process is carried out by the EPA for environmental carcinogens or potential carcinogens is very different from the way the FDA carries out theirs.  And it’s a different regulatory authority for both.  They were not involved.

Shostak:                      And had there been any significant EPA involvement in these initiatives?

French:                        They have carried out, in conjunction with the NTP, a series of studies, I think.  They were particularly interested in the chlorinated disinfection byproducts, because there are so many -- there are hundreds and hundreds of analogs involved -- and they just saw no way they could evaluate those either in vitro or with conventional studies.  So they have developed some program, but I do not know what they’re accepting in the way of that effort.  I have no knowledge of that.

Shostak:                      I have two kind of more general questions, and then I’ll ask you to tell me what I should have asked you that I haven’t asked you yet.

French:                        Okay.

Shostak:                      One thing that interests me, just as a historical trend in toxicology, is an increasing focus over time on the molecular level -- described to me as a shift from phenomenological measures to molecular measures.

French:                        Well characterized...

Shostak:                      Can you tell me what your perspective is on that trend?

French:                        Well, toxicology is still largely phenomenology, because toxicologists still do what is often called “counting lumps and bumps” in their studies, simply because the regulatory authorities may not always know what to do with molecular studies.  Associated with that is the problem that unless you know a priori what’s going to be affected, you don’t know exactly where to look, and therefore your findings at the molecular level may not always correlate with what’s occurred in the tissue at the phenomenological level.  It’s been less of a problem with transgenics, because we know a priori what pathway has presumably been made dysfunctional and where other genetic alterations would be the focus for accumulation that would drive the whole process, so it’s been less problematic there.  But in the past few years, with the development of microarray technology, the field has rapidly leaped ahead to a belief that it may simply be enough to look for recognizable patterns for human toxins, human carcinogens: if you know a priori what the concern is in rodents and you see the same pattern, you can potentially extrapolate that to humans.  Or if you take a known human carcinogen, look for its particular pattern among unknowns, and see the same pattern, then that would give you credence to suspect that the compound may dysregulate the same pathways.  So technology is now trying to leap ahead in that sense, but I would argue that that’s still phenomenology, because you’re looking for patterns.  It’s the same thing that’s being done in whole animals at this time without understanding the actual mechanistic basis, so we have to be careful that we don’t think we’re now moving into mechanistic studies when it still may be largely phenomenological.
You still are going to have to understand, at the molecular level in a living organism, what the actual consequence is, because, of the perturbations seen at the tissue level, most are probably inconsequential.  From what few studies we’ve done in this area, I have the suspicion that often the critical events are not the large-modulation events; they’re more subtle than that.  So we have to be very careful how we move forward in this.  I applaud it, I encourage it.  We should move ahead.  But we shouldn’t make too many promises too quickly.  I think I learned that with transgenics.  What I thought was largely a done job seven or eight years ago turned out not to be, and the same thing, I would suspect, will happen in this area, too.  It will take us time to understand the meaning of these events.

Shostak:                      Given how much time, energy, and resources are invested in moving the field toward mechanistic approaches or toward the adoption of genomic technology, what does toxicology stand to gain?  What continually motivates these massive investments?

French:                        Well, to a large part, it’s still public health.  But now, over the past decade, we’ve recognized the variability within the human population -- we have observed that there are a lot of single-nucleotide polymorphisms, some of which show functional differences, and those might be associated with particular susceptibilities to given diseases -- and that will move toxicologists into thinking about what the susceptible populations are.  And I will extrapolate back to the selection of rodent strains in the normal toxicological process.  We often choose a rodent strain because it’s the one that’s always been used, without looking to see whether that particular strain’s genome is resistant to the environmental effect or not.  Some might be very susceptible, some might be very resistant.  The same thing we could see in human populations.  So it’ll move toxicologists into thinking about having to protect the most susceptible individuals in the population, and we’ll want to protect the privacy of those individuals who might be susceptible and not deny them the right to work in a given industry.  It is going to make us think about that and move us in that direction.

Shostak:                      And you anticipated my last question, which is: what, if anything, does the p53 mouse model specifically, and your research with transgenics in general, suggest about variations in genetic susceptibility to environmental exposures among humans?

French:                        Well, if we look just at p53 -- many of us have argued before that the p53 mouse model that’s heterozygous for a null allele has some analogy to individuals with germ-line inherited mutations in p53, where the function of p53 has been inactivated: the Li-Fraumeni syndrome individuals.  We know that, by and large, most of those individuals don’t have a lost tumor-suppressor gene; they have one that may be inactivated by a mutation.  These occur at a fairly high frequency at some sites, like codon 72 in humans, which gives a particular mutation that is generally inactivating.  There are others which might be so-called dominant negatives.  It’s actually a mutation, but it sometimes may show a gain of function, so it may not be the same thing as inactivation of a gene with a loss of function.  So we have to sort out what those mean in terms of tumor-suppressor genes, but we see value in the models of tumor-suppressor genes for sorting that out.  We know about individuals who develop cancers early and can be diagnosed by physicians, but we simply don’t know, in the human population at large, what mutations there are in tumor-suppressor genes that might make us susceptible but don’t fit the phenotype of individuals with these germ-line inherited mutations.  We know something about mutations in somatic cells, which occur at very, very low rates, fortunately, but we don’t know what the burden of that is in the population for any gene, any particular locus, any particular coding sequence that imparts its function.  We simply don’t know what those fractions are, and we only recognize it when the disease occurs and we can then, through a molecular archeology approach, make the association back to it.
I think that’s why the paper this morning talked about how many countries, how many efforts, are being made to create large DNA databases to make sense of the large amount of human variation -- to sort out which variants actually show functional consequences and, I would add, which ones are primary targets for the milieu of exogenous chemicals and low background radiation that we’re exposed to every day.  So that’s how we have to move.  If the goal is to ameliorate disease in both the young and the old and decrease morbidity from a public-health standpoint, that’s what we have to do.

Shostak:                      Can you help me understand the ways in which transgenics is related to what has been called environmental genomics?

French:                        Well, yeah.  Environmental genomics, at least from the NIH perspective, became mostly the study of genes known to have environmental consequence, or to be of consequence through environmental exposures.  So there’s this paradigm that disease equals environmental exposure times the genome -- the susceptibilities and resistances of that genome over time -- which is a pretty interesting paradigm in that sense.  And the ones we focus on primarily, as far as the environmental genome is concerned, are the genes known to be involved in metabolism of foreign compounds, xenobiotic metabolism, and those genes known to have single-nucleotide polymorphisms with some functional decrement.  I could even include p53 there, because the mutation in codon 72 is known to be involved in susceptibility to cancer, etc.  So, in essence, it is those genes that we know today are often observed dysregulated in diseases associated with environmental exposures: principally metabolism genes, DNA-repair genes, tumor-suppressor genes, genes that modulate inflammation.  There are several hundred that are a focus, because there are just thousands and thousands of polymorphisms in the human genome -- and I actually say genomes, because we each have our own particular one -- that make us respond over time to our environment, either giving us longevity or resulting in disease early.

Shostak:                      Thank you.

French:                        You’re welcome.

Shostak:                      We have reached the end of my list of questions, but what have I missed?  What should I have asked about?

French:                        Very little, if anything, you’ve missed, because to me it’s a broad subject.  I mean, we could talk specifically for a long time just about a tumor-suppressor gene, p53 in particular, and what environmental exposures might dysregulate it.  When you think in the broad terms of public health, I think you asked really very relevant questions, because it is this broad effort requiring great resources that moves this field incrementally ahead, and it’s always competing.  It’s taken 10 years or more to get to this point, and it’s already having to compete for resources now, like with the microarray center, because science tends to want to move ahead very rapidly -- maybe sometimes before a field fully matures.

Shostak:                      How does that get balanced at NIEHS?

French:                        How does that get balanced?  Not very well sometimes, because the resources, as generous as Congress and the citizens of the U.S. have been to NIH over the past four or five years as the budget has doubled, have primarily gone out into the extramural community -- support of R01 grants and center grants, etc. -- although the intramural programs have grown.  Despite Harold Varmus’s idea that government science should be doing things that academic scientists can’t do, there’s still a lot of competition amongst individual scientists for their ideas and the research they want to do.  Larger, big-ticket items like validation of models for carcinogenicity testing are a bit more difficult to move ahead, I think, because they require so much effort -- they’re not compact, limited research projects.  Ray Tennant was an excellent advocate for these kinds of programs.  So you have to have an advocate both internally and externally to move these forward, and we’ll just have to see how it goes, because there are so many interesting things to do, and we can’t do everything.

Shostak:                      Are there other folks who you have worked with who you would suggest that I talk to?  I listed the people you mentioned at FDA.

French:                        Yeah.  Well, I know that you -- I had heard, I guess, from Judd that you had contacted Ray [Tennant].

Shostak:                      Right, and he referred me -- Judd referred me to you, I believe.

French:                        Okay, all right.  Well, I mean, there are other people in-house that you could talk to, and critical to this whole process has been John Bucher, who probably has been mentioned.

Shostak:                      Right.  I’m meeting with him tomorrow.

French:                        Yes.  June Dunnick is another individual whom I’ve worked with.  She’s an individual scientist within the NTP who has had the foresight to do a lot of very interesting things in this area.  Ken Olden himself would be interesting, as well as Chris Portier, who is the director of the ETP, and they all played and will play roles.  What Ken’s vision is now that he’s decided to step down as director and focus on his own lab research is an interesting question.  But he’s been a very ardent supporter of moving science forward in the NTP -- as they say, good science for good decisions.  Outside of here, I gave you a few names at FDA, and Joe DeGeorge is actually no longer with FDA.  He works with Novartis now.  Joe Contreras and Abigail work there, and they have a new . . .  Joe DeGeorge was replaced by a guy named David Jacobson-Kram.  It’s J-a-c-o-b-s-o-n, I believe, hyphen, Kram, K-r-a-m.  He’s the new associate director for pharmacology and toxicology at FDA who now has the responsibility for how these things will be used in that group.  He’s new, but he has industry experience, so he has a different perspective.  Ray probably mentioned Joe McDonald.  He’s with Schering-Plough.  Did anybody give you any names from the ILSI people?  Would you like me to name some ILSI . . .

Shostak:                      Denise Robinson, now at Pfizer.

French:                        She’s at Pfizer, and, what’s her name, who’s at ILSI now that’s involved with Joe McDonald on . . .  Let’s see.  Amy Lavin, L-a-v-i-n.

Shostak:                      At ILSI?

French:                        At ILSI.  Phone number is 202-659-3306.

Shostak:                      Great.

French:                        She might -- if Denise is not available, I think Amy will know something about the historical record.

Shostak:                      Great.

French:                        As an option.

Shostak:                      Thank you so much.

 

END OF INTERVIEW