Dr. Chris Portier Oral History 2004



 

Dr. Chris Portier Interview

Office of NIH History Oral History Program

Interviewee:

Dr. Chris Portier, National Institute of Environmental Health Sciences

Interview Date:        April 13, 2004

Transcript Date:        April 27, 2004

 

Sara Shostak:  It’s Tuesday –

 

SS:                   April 13th and I’m interviewing Dr. Chris Portier of the National Institute of Environmental Health Sciences. Can we start just generally with your education and your background and how you came to NIEHS?

 

Chris Portier:   I got a high school diploma from a little school in Louisiana, South Terrebonne High School, actually skipped my senior year and went to college, Nicholls State University, got a Bachelor’s Degree in mathematics with a minor in computer science.  From Nicholls I went straight to graduate school, UNC Chapel Hill, got a Master’s Degree in biostatistics and a Ph.D. in biostatistics.  Did my thesis under David Hall [spelled phonetically], who was here at NIEHS.  After I finished my degree, NIEHS offered me a job.  I liked it here.  I stayed.

 

SS:                   What was your first job?

 

CP:                  Here at NIEHS?  I was an investigator in biostatistics.

 

SS:                   And how has your line of research at NIEHS evolved during the time you’ve been here?

 

CP:                  Well, it’s become much more biological as time goes on.  Initially it was pretty much statistical in nature, then it became more biomathematical and statistical.  Now I run a tox program, so it’s much more toxicology-oriented.

 

SS:                   How did you come to run the tox program? 

 

CP:                  Attrition I think.  I was the only person still here after 25 years.

 

SS:                   Tell me more about that.

 

CP:                  NTP and ETP do more than just toxicology research and testing, obviously.  We do a lot of evaluation, hazard identification, those types of things, interactions with other federal agencies, issues like that.  A lot of my research in risk assessment was focused on linking mechanistic data and toxicology data to evaluating health risk, so it was natural that I would at least manage a portion of the NTP portfolio while the previous director of this office was here.  When he left they needed an Acting Director, asked me to do it, and then made it permanent without even asking me.

 

SS:                   What year was that?

 

CP:                  2001, I guess.

 

SS:                   And what year did you become the Acting Director?

 

CP:                  2000.

 

SS:                   Okay.  At the time that you assumed the directorship of the program, let’s just go with 2000. What were its strengths and weaknesses and in what ways have you developed it?

 

CP:                  Hmm.  That’s a tough question.  The weakness in the program was in the research program, specifically in toxicology.  The toxicology research portfolio was – it was weak.  It’s still weak.  I haven’t been able to get the Scientific Director to allow me to focus resources on replacing people that I have moved out.  So, I did half of the job.  I moved people out of positions in the research program, but have yet to back-fill those positions.  The biggest strengths of the program at the time: the Report on Carcinogens was going very well, the testing program has always been strong, and we expanded databases to strengthen that even more.  The Center for the Evaluation of Risks to Human Reproduction was weak at the time I came in.  We revamped that, developed guidelines for it, and now I think it’s moving along just fine.  ICCVAM was weak when I came in.  ICCVAM is still weak.  Not actually satisfying its Congressional mandate, I find.  That’s about the gist of it.

 

Now through that whole period we’ve gone through three scientific directors and we’ve had a director for the last year who has been inactive simply because he announced he was not staying very much longer, so it’s been a difficult time to get anything done.

 

SS:                   What are the major initiatives underway within the NTP at this time?

 

CP:                  There’s tons of them.  Several different research projects on alternatives to mammalian systems.  We just finished the transgenic thing.  I’m looking at some high-throughput in vitro screening methods.  I’m looking at some high-output genomic screening methods.  Trying to figure out how they work.  I’m starting some new interagency initiatives to try to improve our use of kinetics data.  We’ve just finished a couple of initiatives internationally where we’re now locking in money for long-term international collaborations.  We have the cell phone research that’s just finally getting started, and we’ve got the nanomaterials research that is just beginning.  We’re probably going to expand our natural products research a little bit in a couple of new areas.

 

SS:                   Can you – I ask you this question very broadly because I imagine you will be comfortable answering it as I’m about to ask it, but I can also ask it more specifically if you find this too vague.  Can you tell me the story of transgenics research within the NTP?  So you’ve just concluded it; what is it that just concluded?

 

CP:                  Well, transgenics are genetically modified mouse models.  In fact, genetically modified mouse models are not just transgenics.  You can also use a knockout in an animal that’s not necessarily a transgenic, so we worked on genetically modified mouse models.  That was the real issue we were interested in.  So, when they were first introduced 10 or so years ago, people began to think that maybe these would be useful not only for mechanistic research, which is what they were really generated for, but potentially as faster screens for carcinogenicity.  They would get tumors faster.  They appeared to be a little bit more sensitive in some cases to carcinogenic effects of compounds.  Why not replace the rat or mouse in these bioassays with a transgenic animal?

 

So, in order to do that, the NTP went through a series of studies with transgenics looking at how responsive they were to things that are known human carcinogens, things that are maybe human carcinogens, things that might be mouse-only carcinogens, etcetera.  We never reported any of them because we didn’t quite have a technical document for reporting them, so I looked at it and decided I needed several stages of review of transgenics before we could get to a point where we could talk about whether they would be useful.  The first thing we needed to know was: can we make a call in a transgenic animal itself as to whether or not it’s got cancer from the exposure to the compound?  Ignore humans, ignore anything else – can we look at an animal and make a scientific decision?  For some animals, yes, they felt that was reasonable; for other animals, no.

 

SS:                   Can you name the models for which it seemed reasonable and for which it didn’t?

 

CP:                  I can give you two of them and then use the two as contrasting models. 

 

SS:                   Great!

 

CP:                  The Tg.AC animal – they felt it was not an indicator of carcinogenicity at all.  We felt we shouldn’t have been doing the studies in the first place.  The Tg.AC is a reporter phenotype.  So, what it basically has is a mutated Harvey ras gene with a promoter sequence in front of it, and if you activate that promoter sequence, you turn on that gene, you get cancers on the backs of these animals.  That’s basically the way it works.  There’s a number of ways to turn on that gene, but the concern was: is the chemical only working on that promoter, or is the chemical really doing something else that then, through downstream events, stimulates that activity?  And you couldn’t really tell.  So, you didn’t know whether you had a very expensive assay for gene expression or you actually had a real carcinogenic screen.

 

The p53, on the other hand, is an animal where a modified p53 gene has been substituted for the original p53 gene.  So the animal has already got a changed genetic background, but no fancy promoter region; it’s just a replacement of the existing one.  And, in this case, the scientists involved thought, “Yeah, this is really a carcinogenic finding,” because you’re not – you’ve just got a faster animal.  That’s the basic conclusion they reached.

 

So, once we cleared that up, then we had to decide how we were going to report these.  We started a new technical reporting series.  So, we’re in the process of putting our 30 to 40 different studies into that technical report series, and then came the question of: can we actually do a replacement, and are we ready to address that question?  That’s the one that was brought before SACATM (Scientific Advisory Committee on Alternative Toxicological Methods), and it’s been discussed in other forums.  It’s not just there that we discussed this, but I think we’ve pretty much concluded from all of this that the answer is no.  These models were, for all intents and purposes, aimed at a very narrow spectrum of the carcinogenic process, and unless you know your chemical is going down that pathway, a negative in a transgenic doesn’t tell you anything about the lack of carcinogenicity, whereas a positive might tell you something.  What are you going to do if it’s turning up negatives all over the place?  You’re just going to have to do the original studies anyway.

 

So, we’re going to revisit the transgenic – our use of transgenics – over the next few months and decide whether we’re ever going to use them anymore, except as follow-up studies for things that we’ve already seen as positive. 

 

SS:                   So, an example of their use in a follow up study for something that’s already been seen as positive would be the phenolphthalein case.  Is that correct or –

 

CP:                  Kind of.  With phenolphthalein, we didn’t do a chronic. 

 

SS:                   Okay.  Phenolphthalein was only a transgenic…  I thought that there was –

 

CP:                  I don’t think there was a chronic on phenolphthalein.

 

SS:                   I could double check again.  I think there was.

 

CP:                  I may be wrong.  Okay, then that would be an example of – the idea would be you have a certain finding, you think it’s tied to this particular mechanism, you can use a very specific genetically modified animal to address it.  Dioxins cause cancer through the AH receptor; if you get an AH receptor knockout mouse, do you get cancers from exposure to dioxins?  That would be a use of a genetically modified animal to prove that the mechanism is solely controlled by the AH receptor.

 

SS:                   Okay.  Have there been models other than p53 and Tg.AC that have been of particular interest to the NTP?

 

CP:                  There have.  There’s an H-ras, a different H-ras-modified animal the Japanese put together, and this particular mouse is probably better for screening than Tg.AC is.  I can’t remember its name.

 

SS:                   TRAMP [?], is it that one?

 

CP:                  I don’t remember.

 

SS:                   Okay.

 

CP:                  There’s a p16 model that – I think it’s p16 – Bob Maronpot has been working on that is inclined towards prostate tumors; that’s one of the biggest tumor findings in human males, and we don’t have very good models in rodents.  So, he has been working with that one to see how useful it would be.  The estrogen receptor knockouts are useful models in certain cases where we are looking at endocrine-active compounds.  The COX models, the COX knockouts, are interesting in cases where we are looking at a neurological effect from some of the compounds, because the COXs are an important aspect of neurological response in a lot of different measures, things like that.

 

SS:                   Okay, and how will the feedback that you’ve gotten in forums in addition to SACATM, as well as the feedback you got at SACATM, shape the NTP’s use of this array of models?

 

CP:                  Well, I think the feedback we got from our boards was fairly clear.  If we have reason for using a particular transgenic and it’s really scientifically defensible we should, but using the transgenic as a first screen without any knowledge of what the target might be would not be a useful exercise.

 

SS:                   Given that conclusion how do you understand or how do you qualify the NTP’s work with transgenics up until that point?

 

CP:                  Up to that point?  Meaning our historical database on it?

 

SS:                   Yes.

 

CP:                  Well, since we were focusing on compounds for which testing had already been done it was a sort of validation exercise.  It wasn’t validation in the sense that SACATM and ICCVAM would argue about validation, but it provided us enough scientific evidence to make a decision about how we would use them further.

 

SS:                   Can you talk to me a bit about the ways in which the NTP collaborated or interacted with other researchers at the NIEHS in the development of these models?

 

CP:                  No.

 

SS:                   Okay.

 

CP:                  I don’t think I can.

 

SS:                   Okay, were there significant collaborations?

 

CP:                  It’s hard to tell, okay.  Drawing a line between NTP and NIEHS is not easy. 

 

SS:                   I’ve noticed.

 

CP:                  And so you’re asking me to draw that line.  Is Ray Tennant NIEHS or NTP, for example? 

 

SS:                   How would you answer that?  I guess I should ask Ray how he would answer that.

 

CP:                  Or Dr. Olden how he would answer that.  Everybody is going to have a different answer.  I think at the time that Ray was pushing us to use transgenics in the testing program, Ray was effectively NTP.  That’s why it’s difficult.  Now Ray may no longer be considered NTP.  It’s hard to say what Ken would say on that, so it’s a little fuzzier.  Certainly we’ve played around with the use of ERKO.  We’ve played around with the use of COX1 and COX2 knockouts, but the developers of those are intramural scientists, not NTP.  That wasn’t a very strong collaboration; it was more of a ‘can we use your model’ sort of thing. 

 

SS:                   Do you think that the collaborations are substantially different, given that they’re intramural scientists, than they would be if they were on the faculty of UNC?

 

CP:                  No.

 

SS:                   Okay.

 

CP:                  Mostly – see, NTP’s work is not necessarily cutting-edge research.  You know, it’s just answering the questions that the public health community needs to answer for specific agents.  Usually it’s very routine work.  The best interactions we have are with laboratories that are looking for tumors, or they’re looking for tissue from a chronic exposure study and don’t have the facilities to do that themselves, set up such a study or acquire the tumors, etc., and so we’ve done a lot of that work.  But in DIR most people are fairly well funded.  They’re very focused in narrow areas, and if we’re not doing a compound that’s targeted down the path they’re looking at, generally they don’t get very interested in it.

 

SS:                   Okay.  Ken Olden undertook a major reorganization of NIEHS in the early-to-mid-’90s, correct?  And that reorganization brought NTP into the DIR. 

 

CP:                  Yes.

 

SS:                   Can you talk to me about the processes and the consequences of that reorganization for the program?

 

CP:                  Not easily because I wasn’t in NTP at that point.  I was effectively a DIR scientist. 

 

SS:                   How did you –

 

CP:                  Actually, I wasn’t even in DIR, I was in Biometry and Risk Assessment Program.

 

SS:                   Which was separate at that time, also. 

 

CP:                  Yes.

 

SS:                   Okay.

 

CP:                  I didn’t view it as any different at all at the time.  What did I care?  I could go any route they wanted me to, so I can only give the impressions of other people.  Boy, that was a bad tomato. 

 

SS:                   Sorry.  Bad tomatoes are really bad.

 

CP:                  If you weren’t here I’d spit it out.

 

SS:                   Feel free.

 

CP:                  Too late now.  The impression I got from a lot of the other people who were here as part of NTP was that Ken had screwed NTP to some degree, bled resources out of NTP into the rest of DIR to support pet projects he wanted done, that he made it much more difficult for NTP to do its job, because there’s no clear line between who reviews the NTP scientists and who reviews the DIR scientists, and what’s expected of NTP scientists as compared to DIR scientists.  All those issues got very, very cloudy and messed up.  Not at all fair. 

 

SS:                   And how does that affect the current functioning of the program, if it does in any way?

 

CP:                  It’s still historically the same problem.  The program is pretty much managed and run by staff scientists, not by principal investigators.  But at NIH generally, in the intramural research program, staff scientists are under PIs, and so a PI is responsible for all the staff scientists and everything else under them.  NTP doesn’t have that option, unless I’m the PI for all of NTP, which I refuse to be.  So there’s a little bit of difficulty working with NIH DIR when it doesn’t understand what NTP is and how it fits into a DIR program.

 

SS:                   I’m asking because I’ve heard different takes on this from different scientists in the Institute, but I hadn’t gotten to talk to someone in your sort of position. 

 

CP:                  I’m sure everybody has a different view.  Some people loved it, some people hated it. 

 

SS:                   One of the other things I’ve heard a lot about as a change within the NTP over time is the emergence of mechanism-based toxicology, and I would love it if you could help me understand what that means both conceptually and for the NTP operations.

 

CP:                  And what it means for the utility of what we do – so, classic toxicology – you take a model, rat, mouse, monkey, dog, cat, whatever, administer it some toxins, and then you look to see what happens to it.  It dies, gets cancer, whatever toxicity you’re focused on.  But as we began to understand more and more of what causes these diseases, the question became twofold.  Number one, can we better understand the toxicity finding if we don’t just go for the toxicity itself, but also try to go for part of the mechanism tied to the toxicity?  So if we expose mice to a chemical and they get tumors, then we go back and look for mutations in the tumors, and find a Harvey Ras mutation of such and such type, is that going to strengthen the finding?  Is that going to make us believe more in this model in terms of its relevance to humans?  So that’s the first question.  The second question was – some of these mechanistic endpoints have different statistical characteristics to them, so that you might be able to get to a smaller dose than you could for something like cancer, or death, because we’re interested in – death’s easier, let’s just choose death.  Certainly, I don’t want a compound to go into the environment at a dose that causes one in ten people to die. 

 

SS:                   I would agree with that.

 

CP:                  That would be bad.  1 in 100, that would be bad.  1 in 1000, still pretty bad.  1 in a million – depends on how important it is.  1 in 10 million, we’re starting to not care anymore.  One in 100 million, we probably don’t care at all unless –

 

SS:                   Unless it’s our kid.

 

CP:                  The entire population.  One in 300 million?  That is the entire population, if we can only possibly kill one person in the lifespan.  So there’s all these varying things.  I can do a study in a rodent population, and I might get 50% of the animals to die, and I can measure that accurately with just about – well, I know I can do it with nine animals.  Given nine animals, I can estimate fairly accurately what dose will kill 50% of those animals.  Suppose I want 1 in 100, instead of 50%.  Instead of 1 in 2, I want 1 in 100.  Well, if it just held at the same ratio – ten to get the 1 in 2 – then to get one in 100, I’d need 500 animals.  Actually, I’d need more than that, but let’s just choose that as the number.  One in a million – I’d need 500 million animals.  I can’t work with 500 million animals.  It’s just a little bit big.  So the idea would be that suppose I know activation of this gene is what causes the death of the animal.  I know that – I can work through the mechanism, and I know that’s the case.  If I can measure activation of a gene very accurately at really small doses, so I can look at the entire dose response for the activation of the gene, relate that to the dose response for death, I can potentially do my extrapolation on something that I can get real data on, rather than just extrapolating from 1 in 100 animals down to one in a million.  So, it serves two purposes – low dose extrapolation, extrapolation across the [inaudible].  Improvement in both areas was the general idea, plus keeping statisticians and mathematicians employed. 

 

SS:                   Which could be seen as a public good.

 

CP:                  It’s definitely a public good.  Trust me.
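[The sample-size arithmetic sketched in the exchange above can be made concrete.  The short calculation below is illustrative rather than a reconstruction of any actual NTP design: it assumes a simple binomial experiment and asks how many animals are needed to see at least one responder with 95% confidence.  The function name and the 95% threshold are assumptions for illustration.]

```python
import math

def animals_needed(p, confidence=0.95):
    """Smallest group size n such that at least one animal responds,
    with the given confidence, when each animal independently responds
    with probability p: solve 1 - (1 - p)**n >= confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (1/2, 1/100, 1/1_000_000):
    print(f"response rate {p:g}: at least {animals_needed(p):,} animals")
# response rate 0.5: at least 5 animals
# response rate 0.01: at least 299 animals
# response rate 1e-06: at least 2,995,732 animals
```

[On this rough accounting, going from a 1-in-2 response to 1-in-100 multiplies the required group size by roughly fifty, and 1-in-a-million pushes it into the millions, hence the appeal of a mechanistic endpoint that can be measured across the whole dose range.]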

 

SS:                   Let’s see – this also is a general question, but what would you say are the lessons learned in regards to validation from the experience of NTP with transgenics?

 

CP:                  That’s a good question.  When we started the validation issue for ICCVAM and NICEATM nine years ago, eight years ago, validation was thought of as a very simple concept.  One assay, one replacement; they need to coincide with each other.  If they do, it’s valid; if they don’t, we have to do something else.  That’s not a good way to look at validation in the modern world.  It’s likely to be multiple endpoints to replace one assay.  It may be that the entire concept of replacement won’t make any sense, and that’s the thing from transgenics that’s interesting.  There’s nothing to compare to with the transgenics.  I don’t want to predict other mice; I want to predict humans.  So suppose I come up with something where I can make a good scientific argument that this is a better predictor of human cancer, even though I might not have studies in humans to validate this against.  But I can make a reasonable argument.  Do I actually have to go and do 100 studies, 500 human studies, make an argument that they correlate and coincide with each other, or are there ways to progress a scientific argument to the point of agreement that this is a good replacement for something else that is trying to estimate something for which we actually don’t know the truth?  I think that’s what transgenics have led us to start thinking about: how are we going to consider complicated validation schemes where you have multiple endpoints trying to replace one thing?  And how are we going to deal with things where we actually have an assay that we want to replace another assay, and both of them are targeted at something for which we don’t know the truth?  And we think this assay is better than that one, so we don’t want to replace that one because this one works the same as that one; we actually want to replace it because this one works differently than that one, and better.  Not an easy concept for the regulatory community to handle.  It’s going to take some effort on our part to start thinking about how we are going to move forward with a validation concept that’s broader. 
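[The ‘one assay, one replacement, they need to coincide’ notion of validation described above can be expressed as a toy agreement calculation, sketched below with invented chemical names and calls; Portier’s point is precisely that this framing breaks down when the reference assay is not itself ground truth.]

```python
# Classical validation as simple agreement with a reference assay.
# All chemical names and calls here are invented for illustration.
reference = {"chem_A": True, "chem_B": False, "chem_C": True,
             "chem_D": False, "chem_E": True}   # reference bioassay calls
candidate = {"chem_A": True, "chem_B": False, "chem_C": False,
             "chem_D": False, "chem_E": True}   # candidate replacement calls

agree = sum(reference[c] == candidate[c] for c in reference)
true_pos = sum(reference[c] and candidate[c] for c in reference)
positives = sum(reference[c] for c in reference)

print(f"concordance: {agree / len(reference):.0%}")   # 80%
print(f"sensitivity: {true_pos / positives:.0%}")     # 67%
```

[A high concordance score only certifies that the candidate mimics the reference; it says nothing about which assay better predicts humans, which is the gap he identifies.]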

 

SS:                   And is ICCVAM set up for that, or will this require a new sort of process?

 

CP:                  Well, it would still have to go through ICCVAM.

 

SS:                   Okay.

 

CP:                  But ICCVAM, when they were created, codified this validation process that’s – it’s like any other regulatory approach to something.  They codify it.  They set a process in place so that everybody is comfortable with it, that’s it.  And the validation process they have in hand is not flexible enough to manage current science, and so it might be our responsibility to improve that document by bringing together a scientific meeting, developing a new validation process and model, etc.

 

SS:                   Denise Lasko was nice enough to save a drawer full of documents that she had found, some of which Mary had found, some of which Jay had found, that were just sort of sitting around – they’re early NTP requests for contributions to the annual plan, the annual report, and I was going through them, and one of the things that struck me was that in the original NTP mandate, there is an emphasis on developing and validating relevant testing methods.  That is, from the very beginning, NTP has been focused both on testing, and developing tests, and this made me –

 

CP:                  Absolutely.

 

SS:                   This made me very curious about how that mandate to do test development has shaped the form and the function of the NTP.

 

CP:                  You’re asking the wrong person.

 

SS:                   Okay, who should I ask?

 

CP:                  Dr. Olden or Dr. Sassaman.

 

SS:                   Okay.

 

CP:                  Because what happened when Ken came in – and Ken did all this reshuffling – was that the development part of the NTP was effectively moved into extramural and handled through grants.  And so, while we don’t actively go out and set up a contract and say, “We want you to develop an assay to replace this one,” it’s through the grant mechanism that we are stimulating folks to produce alternatives.  Then there’s small business grants where you take new discoveries and try to turn them into products that can replace existing products in the testing marketplace, and so that’s effectively where it is – not really in intramural, and not really as part of my office. 

 

SS:                   Okay.  I think – two more questions.  The first…I’m trying to figure out how best to ask it. 

 

CP:                  Remember no questions are bad, it’s only the answers that are bad.

 

SS:                   When – the regulatory community is in some way your market – your service market.  Information goes to them, it needs to be usable for them.

 

CP:                  The public health decision making community – not always just regulatory.   

 

SS:                   Okay.  Some questions are improved by iteration.  At the same time, it sounds like you have a role in bringing them along with current science.

 

CP:                  Yes.

 

SS:                   How does – at the most general questioning level – how does that work?  How do you do that?

 

CP:                  Workshops, joint collaborative projects with them, demonstration activities, a lot of different ways.  Transgenics was a good example of where we held a lot of discussions back and forth about what they wanted, what we wanted.  Just our technical reports series on transgenics stimulated a debate of months with our regulatory partners: what were they going to do when one of our technical reports came out and said significant carcinogenic activity in p53 mice?  Does that mean it’s going to trigger some sort of hazard identification for them, or not?  What are they going to do with that?

 

SS:                   And that debate happens both in settings like workshops and conferences, and also informally in email exchanges, and –

 

CP:                  And at NTP executive committee meetings, and closed government meetings, yes.  It happens in both forums.  That’s basically the way it moves.  Other ways we do it – change the bioassay, for example.  So, right now, our technical reports on the cancer bioassays, 50% of them, give or take, include physiologically based pharmacokinetic models, where we take the pharmacological data and transform it into something that describes how the chemical gets distributed in the bodies of the animals.  There are easy methods for taking the animal model and converting it to a human distribution model.  So by putting this in our technical reports we’re sort of challenging the agencies to just take what we’ve got, and the cancer findings, add a human component to it, and make a prediction across species in a better way than they’ve been doing before.  And hopefully that’s stimulating them to start paying closer attention to pharmacological differences between species.  Things like that.
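[The cross-species step mentioned above can be illustrated with a deliberately minimal sketch.  Real physiologically based pharmacokinetic models have multiple tissue compartments; the toy below uses a single compartment with invented parameter values, and its one standard ingredient is allometric scaling of clearance with body weight to the 0.75 power, a common way of converting an animal kinetic model into a human one.]

```python
import numpy as np

def simulate(dose, CL, V, hours=24.0, dt=0.01):
    """One-compartment model: bolus dose into volume V (L/kg) with
    first-order elimination at rate CL/V (1/h), Euler integration."""
    n = int(hours / dt)
    t = np.linspace(0.0, hours, n)
    C = np.empty(n)
    C[0] = dose / V                          # mg/L just after the bolus
    for i in range(1, n):
        C[i] = C[i - 1] - (CL / V) * C[i - 1] * dt
    return t, C

# Invented rat parameters, expressed per kg of body weight
CL_rat, V = 0.5, 1.0                         # clearance L/h/kg, volume L/kg
BW_rat, BW_human = 0.25, 70.0                # body weights, kg

# Whole-body clearance commonly scales ~ BW**0.75, so per-kg clearance
# scales ~ BW**-0.25 when going from rat to human.
CL_human = CL_rat * (BW_human / BW_rat) ** -0.25

t, C_rat = simulate(10.0, CL_rat, V)         # same 10 mg/kg bolus in both
_, C_human = simulate(10.0, CL_human, V)
print(f"rat AUC   ~ {np.trapz(C_rat, t):.0f} mg*h/L")
print(f"human AUC ~ {np.trapz(C_human, t):.0f} mg*h/L")
```

[Under this kind of scaling the same mg/kg dose yields a larger internal exposure (area under the concentration curve) in the human than in the rat, the sort of species difference the technical reports are prompting agencies to weigh.]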

 

SS:                   Going back a bit to the question of validation – sometimes, in some of my interviews, people talk about validating against the two-year rodent bioassay, because it’s the gold standard.  Other times, when I talk to people, they talk about validating against that same bioassay, because it’s the only standard, and they object to the notion of gold standard.  What is it that makes the two-year bioassay the reference point?

 

CP:                  For cancer.  Well, if you look at EPA’s IRIS website, their Integrated Risk Information System, besides our own studies, pretty much 85%, I would guess, of the chemicals – 500, 600 plus chemicals – in the IRIS list, the regulations are through rodent cancer studies.  So the regulatory community, the legal community, the scientific community have all come to accept identification by rodent models as a reasonable surrogate for setting standards for humans.  Proving that’s right or wrong is an exercise many people have played around with, me included, but I think you can show whatever you want with the types of data that are out there right now, so it’s hard for me to say I believe it scientifically one way or the other.  There’s a comfort level, there’s a degree of parallelism across mammalian systems that most scientists agree is there, and so a hazardous finding like cancer in a rodent is sufficient that most public health-minded people would be concerned about it.  I think that’s what makes it the standard. 

 

SS:                   Is there anything that you feel I should have asked you that I’ve not asked you?  And focus specifically on making sure I understand the trajectory of genetically modified mouse models through the NTP. 

 

CP:                  Where’s it going?  One of the reasons I think that genetically modified mouse models won’t be a major focus of the program in the near future is that there’s other ways to do the same thing now. 

 

SS:                   What are those?

 

CP:                  Much better ways, much more efficient, much more effective, much more clever.  Instead of doing a germ line change – do you know what that means?

 

SS:                   I think so – germ vs. somatic cells.

 

CP:                  Right, instead of doing a germ line change in the animal, so that every cell in the animal has this mutation in it, there are now techniques that allow you to block a gene on the fly, to alter the production of a gene on the fly so that, in a mature animal, at any given time, you can start playing games with their genes, if you think certain pathways are involved in certain things.  That’s a much more effective tool.

 

SS:                   What does – just to be very highly specific, what does that allow you to do that’s desirable?

 

CP:                  Well, let’s talk about – I told you a little bit about the AH receptor knockout mouse, right?  AH receptor is a gene; when you turn on the gene, it produces a receptor protein in the cytosol.  If I can block that gene, I don’t produce the protein.  So instead of genetically going in and removing that gene from the germ line and producing an animal which doesn’t have that gene during development, or any aspect of life, I can develop an animal that always has the gene, and then compare groups that, exposed with that gene and exposed without that gene, might get different responses. 

 

And I don’t have to do it, necessarily, from birth.  That’s important for some things.  Take certain growth factors.  Take the estrogen receptor, for example.  You knock that out of an animal, the resulting animal is not the same as the one with the gene in; they’re not even close, because during development those genes have important roles to play that may be very different than the role you’re interested in focusing on later.  So for the AH receptor, we know it has some roles to play in intelligence, in development of the nervous system in rats and mice.  So, my interests might be in cancer – why do I want an animal that’s got a neuroendocrine imbalance, compared to one that doesn’t, and then I want to do a comparison study?  I wouldn’t know that it’s the dioxin that’s causing the difference.  It could be the gene – the neuroendocrine patterning that went on during development – that’s causing the difference in the adults.  I don’t know what it would be.  By knocking out the gene in sisters, for example, so one has it and one doesn’t, and they were the same up until I knocked it down, it’s a better study – much more controlled.

 

SS:                   Other places that this technology is going?

 

CP:                  Tissue cultures, human gene knock-ins –

           

SS:                   Where you take –

 

CP:                  Humanized mice and humanized rats.

 

SS:                   Where does that phrase come from – humanized rats, humanized mice? 

 

CP:                  They take a gene that’s human and put it in the animal.

 

SS:                   I guess I’m wondering if you know who used it first, or –

 

CP:                  I don’t, I don’t, honestly.  It’s going that direction.  We’re going to start exploring lower-level life forms; they’re faster, they’re easier, they may make good screens for some things.  So we’ll be looking at zebrafish and C. elegans and others; there’s some good reasons for looking at them.  The genetic tools I was talking about before are easy to use in zebrafish and C. elegans, much easier than in mice, so you might use them as a first screen to then feed something into a more complicated and expensive mouse screen.  I think that’s where it’s going.

 

SS:                   All right.  Anything else?

 

CP:                  No.

 

SS:                   Great.  Thank you.          

End of transcript