Dr. Bern Schwetz Oral History 2004



 

Dr. Bern Schwetz Interview

Office of NIH History Oral History Program

Interviewer: Sara Shostak

Interviewee: Dr. Bern Schwetz

Interview Date:      May 17, 2004

Transcript Date:    May 27, 2004

 

Interviewer:     I’m interviewing Dr. Bern Schwetz of the US Department of Health and Human Services.  He’s also the Chair of the Executive Committee of the National Toxicology Program.

 

Interviewer:     Are you aware of the fact that I’m recording our conversation?

 

Bern Schwetz: Yes, I am.

 

Interviewer:     Could we begin, if you would, by telling me just a bit about your background and your training? 

 

BS:                  Well, I’m a toxicologist and I was with NIEHS from 1982 until 1993, and during that time I was on the team of people who were – within NIEHS were identified as the people doing the NTP work.  So that’s my connection with the NTP, having worked there for those 11 years.  And then since then I have worked with the Food and Drug Administration and during those years continued to be connected with the NTP, and in fact am now serving as the chair of the NTP Executive Committee.  So those are my connections to the NTP.

 

Interviewer:     It sounds like you were at the NTP during its formation, its first years at NIEHS?

 

BS:                  No, I came to the NTP probably in the second or third year after it was formed, because it was started several years before I got there, but I was there in the relatively early parts of it, but not right at the beginning.

 

Interviewer:     Okay.  Could you just, if you would, reflect a bit on those early years of the NTP?

 

BS:                  Well, there were several things going on at that time.  First, the NTP had been transferred from a component of NCI into NIEHS, so there was still that transition from NCI into NIEHS, with the NTP becoming a free-standing entity outside of NCI.  This was also a time when good laboratory practices were just being implemented, and the NTP was relatively slow in taking on the good laboratory practices in all of the laboratories where it was doing work.  The NTP was dealing with problems of quality control of the research in the contract labs, because NCI had taken on a commitment to test a lot of chemicals.  That commitment was still there when the program went to NIEHS, so there was an expectation that the NTP would be testing a lot of chemicals for carcinogenic activity.  And it was at that time that they realized that the quality of work being done in some of the labs was not what it should be, not what it needed to be.

 

So there was a lot of work going on.  Some of the work was being done under conditions that certainly wouldn’t meet good laboratory practices, and there were problems of quality control in the labs, so that some of the lab work was even shut down.  And because I came from industry and had experience with GLPs, working in a GLP laboratory, one of the first things I was asked to do was come up here to the NTP archive in Rockville, Maryland. I spent months up here auditing studies to find out what the problems were, and how do we deal with enhancing our quality management program within the NTP?  So that was kind of the context.

 

Another dimension of it was that this -- coming from NCI -- was seen as a cancer program, and the intent within the NTP was that this was a toxicology program, not just a carcinogenesis program.  So how would we expand this program to a context that was larger than just carcinogenesis testing? 

 

Interviewer:     And how was that pursued?

 

BS:                  Well, there – I mean, one of the reasons that I was hired is because I was a reproductive toxicologist.  So even though I had experience with carcinogenesis testing and metabolism I also had this other dimension and was asked to head up the laboratory – the systems toxicology lab that had pharmacokinetics, reproduction, and other dimensions other than carcinogenicity testing, to fold that into the NTP portfolio, and we had contracts for doing teratology studies and reproductive tox studies.  We had a group of people doing research in-house on development of better methods.

 

Interviewer:     The last thing you said about people doing research on the development of better methods is something that I’ve been trying to understand more.  It seems like it is part of the mandate of the NTP to develop new toxicological methods.  How is that mandate pursued?

 

BS:                  Well, the efforts to develop new methods flowed from intramural research, as well as contractual mechanisms whereby we had information about what test methods, for example, we wanted to have further developed and validated, and so work might have been done in-house, or we might have been aware of work from the published literature and developed a contract that was then competed -- and work was done in laboratories other than NIH – NIEHS – to validate the studies.  So we had everything going on, from some developmental work to validation studies, and then we held conferences to evaluate – to get the audience, to get the community brought into the discussions of the predictability and the precision of these methods as tools for predicting whatever the endpoint was.

 

Interviewer:     You were at the NTP when genetically modified mouse models first were created and written about in the scientific literature.  Do you remember when you first heard about them, and what you thought?

 

BS:                  Well I don’t remember when it would have been, but Curt Harris, from NCI, was talking for a long time about p53 knockout animals. So I would assume that was the mid ‘80s, and Ray Tennant picked up on that and was interested in developing models that could be used on a screening basis, as opposed to a one-laboratory research animal.  So it was in that context, and from other researchers who had been looking at genetically-modified animal models, that Ray Tennant, in the genetics portion of the NTP, began to develop models that we could use as a standard screening model.

 

Interviewer:     At that time, when Ray started working on those models, what – do you recall what your hopes or kind of – or concerns were about their use?

 

BS:                  Well, the hope of the community was that sooner or later we would have short-term in vivo models for screening for cancer.  We had already gone through a large number of genetic tests that were more in vitro kinds of tests, with the hope that the Ames test would give us enough information to make decisions.  When it became clear that the answer wasn’t there, then it would be the Ames test plus other short-term tests, either in vitro or in vivo, and it became clear that none of those was obviously better than the two-year bioassay.  So then when these short-term in vivo models came about that were the genetically modified animals, where hopefully an answer could be gotten in six months rather than two years, that still held promise that maybe we would have animals that could replace the two-year bioassay.  Well, then it also became clear, as we knew more about cancer mechanisms and the animal models, that you would have to have a large array of genetically modified animals to reproduce the whole animal in terms of susceptibility to chemical carcinogens.

 

Interviewer:     Could you tell me more about that?

 

BS:                  The genetically modified animals – mice – were modified based on known mechanisms of carcinogenicity.  If you changed one gene or a small number of genes in a mouse to make it very susceptible to carcinogens, the only carcinogens that you would expect to pick up in that model were carcinogens that work by that mechanism.  So in that sense you would have a mouse that’s ultra-sensitive to chemicals that cause cancer by this one mechanism, but no more sensitive to any other carcinogens working by different mechanisms than any other mouse would be.  So it didn’t offer much advantage, and when you recognize that there were dozens of mechanisms by which chemicals cause cancer, you’d have to have dozens of animal models in order to have a short-term in vivo model for screening chemicals of unknown effects.  It became untenable to think that we would have dozens of strains of mice at a time when we still didn’t have rats that were genetically modified. We still thought that if you’re going to give an answer about risk you should have data from more than one species, not just mice, or not just a genetically-modified mouse made ultra-sensitive.  So the prospects became dimmer as we went through the years.

 

Interviewer:     Do you recall around when that was – at about what time you believed the prospects were dimming?

 

BS:                  About the early ‘90s -- dimming in the sense that it was becoming less likely that we were going to be able to throw away two-year bioassays and replace them with something that was less expensive and would take less time and would give us definitive answers.  The prospect that that was going to happen began to diminish quickly.

 

Interviewer:     And at the same time research continued on those models, correct?

 

BS:                  Yes it did, because there were some of these models that were considered to be good from the standpoint of giving some relief to the requirement to do cancer testing in rats and mice, because there were a lot of people throughout the world who didn’t like doing cancer studies in mice.  One government after another said that they wouldn’t require it.  It was still required in the US, but there was an interest then in the prospect that you would continue to do cancer testing in rats, because everybody accepted that that was a good model, but instead of doing two-year bioassays in mice you would do the standard rat bioassay and do a transgenic animal instead of doing mice.  So that held out the promise that perhaps the testing could be less cumbersome if we didn’t have to do both species in two-year bioassays.  In fact, when the ICH guidelines came out for pharmaceutical agents, there had been an agreement reached that for those international settings where the ICH guidelines were the basis for deciding what testing had to be done, now there was permission to just do rats and an appropriate other test. That could be a transgenic model, if it could be justified as being relevant for this chemical, based on what properties this chemical had.  So there was still an effort then to validate transgenic models, and at that point other nations had research programs looking at transgenic models -- in Japan, in Germany and wherever.  And that information continued to be gathered, and in those years -- in the middle ‘90s – ILSI had developed a program to try to bring this to closure in a more formal validation and comparison exercise within ILSI.  So that kind of brings us up to the middle and late ‘90s.

 

Interviewer:     Okay.  Could you tell me more about what happens after the middle and late ‘90s?

 

BS:                  Well, what happened was that I began to change career directions, and at that point I followed it less than I used to.  So exactly what the status is today I’m not sure, except that within the FDA there continued to be agreement within the Center for Drugs that the ICH process was reasonable and that a cancer study in rats and one other test system would be sufficient for drugs, because that had been agreed upon through the ICH process. But the other centers of the FDA -- the Center for Foods, the Center for Veterinary Medicine, and the Center for Devices, which never required two-year bioassays anyhow -- were different.  The Center for Foods and the Center for Veterinary Medicine still didn’t accept just data in rats and some other system.  If they required cancer data at all, it was in rats and mice.  So that’s kind of where I left it and stepped out into doing different kinds of things rather than cancer testing.

 

Interviewer:     In what ways, if any, have you encountered questions about these genetically-modified mouse models in your role as the Chair of the Executive Committee of the NTP?

 

BS:                  Well, just from the standpoint that data continued to come forward from the NTP programs, and I think were only brought to us for discussion of whether or not this should be a priority within the continuing research program of the NTP or whether they should be working on some other things.  But that, I think, was just one of a large number of things that were brought to the Executive Committee for further discussion among all of the agencies that are represented on the Executive Committee.

 

Interviewer:     One of the things that interests me about these models was that, as you noted, the Center for Drugs at the FDA came up with one kind of relationship to them -- whereas the other FDA centers have not begun to use them, and similarly the EPA has never incorporated them in its review process.  Can you help me understand how these different orientations come about?

 

BS:                  Well, for example, within the Center for Foods you still have the Delaney Clause from 1958 hanging back there that says that cancer in any model means that this chemical shouldn’t be used in any food given to humans.  That was written in the context that we had standard rat and mouse bioassays. Because the ICH guidelines applied to drugs and nothing else, the other parts of the agency were not compelled to follow the ICH guidelines, and because the Delaney Clause was there saying that cancer in any model from a chemical precludes the use of that chemical for humans, the feeling was that there are these unvalidated test models out there, the transgenics and others, and please don’t test your chemicals that might be food additives or food contaminants – don’t test them in these models, because if they’re positive -- we don’t know what that means, but it will trigger that this chemical will not – cannot be used in foods.  And the only data we want to see are data from standard bioassays; and that was kind of the opinion that was being expressed in the food area.

 

The Center for Veterinary Medicine is not far different from that, because the question there is whether or not any residues that are in food components of animals represent a carcinogenic risk to humans.  So whether it’s in liver or muscle or whatever components of an animal that people eat, the question in the Veterinary Center is whether or not that poses a risk to humans consuming those products, not a question of whether they’re carcinogenic to the animals.  So, in that sense, the Center for Foods and the Center for Veterinary Medicine are dealing with the same issues.  Both are different from drugs, where there is no Delaney Clause.

 

Interviewer:     And could you help me understand at all what EPA’s difference is in this regard?

 

BS:                  Well, they have a different regulation, FIFRA, which again pretty much says, as I recall, that if you have to do two-year – if you have to do cancer testing to support a registration of a pesticide, it has to be done in rats and mice.  So they have a different regulation than the Food, Drug, and Cosmetic Act.

 

Interviewer:     The two models that were developed most intensively at NIEHS -- p53 and the Tg.AC model, as you know, were both developed by Ray Tennant’s group in the Laboratory of Environmental Carcinogenesis and Mutagenesis.   I’m wondering if you would reflect a moment on the position of that lab in the larger NTP project -- what was it about that lab such that developing these models became their project?

 

BS:                  Ray Tennant was the lab chief, with a background in genetics and mutagenesis and model development, and in the application of that knowledge to predicting carcinogenicity.  He was the one who grabbed on to these models and said, “This could be a better way to predict whether or not chemicals would be carcinogenic,” and it would be more mechanistically based than just the empirical process that we used of testing chemicals in rats and mice.  So, with Ray running the genetics program within the NTP and having grabbed on to these models, it was a natural fit for that genetics branch to pick up the responsibility for introducing this into the general testing scheme of the NTP.

 

Interviewer:     That’s very helpful, thank you.  At the most general level, what would you say are the challenges which confront folks like Ray and others who are interested in developing new toxicological bioassays?

 

BS:                  Well, the challenge is that if you develop a new method it has to be better than what we’ve been doing before, and better has the dimensions of faster and cheaper.  It has to be at least as reliable as where we have been, and if it is only as reliable, the difficulty of getting people to change suggests that they probably won’t change if it’s only as good as what you’ve done before, even if it’s less expensive.  So even if you have something that’s better and less expensive, there’s a huge amount of inertia out there to overcome to get people to accept something new.  And part of the risk is that, because the NTP is not a regulatory agency, they are at the mercy of EPA and FDA saying, “Yeah, we’ll accept these data if they are produced by industry.”  So you’ve got industry sitting out there saying, “I’ve got this new drug or this pesticide or a material that would be in food wraps” -- whatever the case might be – “I’ve got a new product, and I don’t want to take a chance of developing a database and have EPA or FDA say, ‘We don’t like that.  You have to go back and do the two-year bioassays anyhow.’”  So there was no incentive for industry to use short-term tests with the risk that the regulators would say, “Well, that’s good, but you have to do the two-year bioassays anyhow.  Go back and start over,” when in fact the fastest thing was just to start the two-year bioassays as soon as they could and get them over with.

 

And if the shorter-term studies could be done to help provide insight into the response of the standard bioassays, they are more useful in that sense than they are as a screen where you then have to run the two-year bioassay afterwards anyhow.  So it led to the process that we have to go back to what it is that we traditionally require as regulators, EPA and FDA, and figure out how to use the other tests as complementary information, rather than find yourself in the situation where you end up doing what you always would have done anyhow -- now you’ve prolonged it over more years because you learned after you were part way into your testing program that that’s what you have to do anyhow.

 

Interviewer:     The idea that these models could be used as complements alongside a traditional two-year bioassay is often described as movement towards mechanism-based toxicology.  Could you talk to me about how you envision or understand the possibility of regulatory decisions being based on mechanism-based toxicology?

 

BS:                  Again, there were several examples within the Center for Drugs where -- there were examples within the NTP database itself where you run a two-year bioassay, let’s say in rats and mice, and you end up with a marginal response -- not clearly carcinogenic, not clearly non-carcinogenic -- and you’re stuck with uncertainty.  You could repeat the two-year bioassay with the hope that you wouldn’t get the same answer, that it would either be positive or negative, but that’s not a given and it would be three or four years down the road.  So it’s not a very useful solution to the problem of having completed a study and having non-definitive information.  So what could we do to either rule in that this is positive or rule in that it’s negative?  That’s where these complementary studies come into play, if you have some understanding that it’s either not mutagenic, in which case you go to the Tg.AC model, or it’s mutagenic, in which case you repeat the study in the p53 knockout, where, if it is a mutagen, it should be picked up.  So if you run either or both of those based on what you know about the chemical, you should be able to get a clearer answer about whether or not this chemical was carcinogenic.

 

Say that it is something that’s weakly mutagenic and you run the p53 and it’s negative.  It would be helpful to just say, “Well, the response in the mouse is probably not one that we need to put a lot of weight on.  The two-year rat study was negative.  We did the more sensitive study for a genotoxic carcinogen and it was negative; therefore this database tells us, with the weight-of-evidence approach, that this chemical has little or no carcinogenic potential.”  Now -- and that was the example with phenolphthalein -- if you have a chemical, in contrast, where you had a definitively negative study in the rat and an equivocal response of just a few liver tumors, not enough to really say there was clear evidence in the mouse, and you ran the p53 and it was positive, it would say that this is still probably a weak carcinogen, but overall we would consider it to be positive.  So they are helpful in that sense.

 

Interviewer:     Are they currently being used in that way?

 

BS:                  Yes.

 

Interviewer:     Okay.  Sometimes when I interview people on these topics they talk about the fact that though the two-year bioassay is the standard, it itself was never validated; it’s kind of a default standard rather than a gold standard.  Could you help me understand a bit about its history and how you see its eventual rise as the gold standard?

 

BS:                  The models came out of NCI, because the NCI tested a large number of rats and mice to find out their response to known carcinogens.  In the ‘70s there was still a lot of confusion about what species should be – what strains should be used for cancer testing, for regulatory purposes -- for research that was a different question, but for screening products that would eventually be submitted to the EPA or FDA there was a lot of confusion.  Laboratories were using their own inbred strains of animals that they raised themselves, and it was not easy to look at data coming from one company and compare them to data coming from another company, because the animals were not standardized.  But one of the things the NTP program created was an agreement to all use the Fischer rat and the B6C3F1 mouse, because those were the animals of choice for the NTP.  And that helped to create consistency throughout the testing community, because there was a historical database being developed now, and there was a lot of experience from the standpoint of a body of pathologists who had looked at all the data in a more sophisticated way than was commonly being done in industry.  And as a result you ended up with animal models where the uncertainty about a response began to diminish because there was so much experience with them.

 

Well, when I was at the NTP we talked about trying to validate the two-year bioassay models.  First of all, it was very difficult to get agreement on known human carcinogens that could be tested in animals, because a lot of the chemicals that were known human carcinogens were mixtures, or there were things like vinyl chloride, a gas that would have been very expensive to use in a two-year bioassay to validate it in multiple laboratories.  Or they were processes that were carcinogenic, as opposed to a single chemical.  So we ended up with a list of chemicals, but they were more laboratory curiosities that would have been known human carcinogens, falling within a very narrow set of mechanisms.  So that was one limitation.

 

The other one was that if we were to validate the model we would have consumed several years of the budget of the NTP to do that, and it didn’t pass the laugh test.  For us to go back out to the community and say, “We’re going to validate the well-known Fischer rat and the B6C3F1 mouse against a list of -- we think -- agreed-upon known human carcinogens.  And it’s going to mean that for several years we are not going to be able to test unknowns, but [unintelligible] carcinogens that we could test in the animal models to make sure that non-carcinogens didn’t show up as carcinogens.”  It was very hard to reach agreement, because many of the non-carcinogens had only been minimally tested; they hadn’t been tested repeatedly in laboratories.  We had vehicles like corn oil or rat chow, but we didn’t have chemicals that were agreed-upon non-carcinogens.

 

We finally said, “It’s more important to continue to use the model that we have familiarity with, have experience with -- we have a lot of control data.  We know the responsiveness of this model to a lot of different chemicals,” and by that time we said, “It’s more important for protecting the health of the public to continue to test unknowns than it is to interrupt the testing of the NTP for several years to do something that,” in retrospect, everybody would have said, “You did what we knew, and you wasted a lot of money, and worse yet, you failed to test some chemicals that perhaps were of major public health importance during that time.”  So we never did the validation.

 

The model is only about 70% predictive of carcinogens anyhow.  Even rats for other rats or rats for mice.  So when you try to validate something that’s only 70% predictive you’re going to end up with a fuzzy answer anyhow, even if you went through all of the exercise.  It isn’t as if it’s 90-95% predictive, where with a reasonable number of chemicals and a reasonable number of laboratories you can test this blind and the answer will be pretty clean if it’s that predictive.  At 70%, there wasn’t much hope.

 

Interviewer:     Looking back on the development of the genetically modified mouse models within the NTP and thinking about the challenges posed by validation and new test development what would you say, if any, are the lessons learned about developing new forms of toxicology testing?

 

BS:                  Well, we confirmed that it’s very expensive.  We confirmed that it’s controversial. 

 

Interviewer:     Controversial in what way?

 

BS:                  Controversial in the sense that we ended up in situations where there were several laboratories trying to develop better methods.  Methods for predicting carcinogenesis, methods for predicting teratogenicity, for immunogenicity.  People become attached to their method.  They become very defensive -- they treat it like their child.  So you end up with a good set of data to evaluate one method, and then somebody else says, “Hey, you’re overlooking my method.  It’s better than theirs, and as a result you ought to spend government money evaluating mine.”  It’s very difficult to pick methods off a list without creating enemies out there in the community, because everybody who was investing in efforts to develop a better method wanted theirs to be tested by the government, and we couldn’t do that.  We had to justify the small number that we could look at.  So we confirmed that there were a lot of mechanisms by which agents caused cancer, caused birth defects, caused changes in the immune system -- a lot of mechanisms -- which reinforced that if you have mechanism-specific tests you’re going to have to have a lot of them to screen, and we moved the process of validation forward in a very formal way -- that was a good thing, because now we had good models for how you go about validating a test method.

 

A lot of that experience has now been transferred over to the ICCVAM process, where you have agreement among federal agencies on a process for reviewing and evaluating and perhaps validating a test method, with a built-in mechanism for acceptance or failure to accept by government agencies, as opposed to just taking it to the doorstep and saying, “This looks pretty good.  Think about accepting data from this method,” which is what it was before ICCVAM.

 

Interviewer:     Another technology that I’ve been interested in is the cDNA microarray, the development of toxicogenomics at NIEHS.  And as I’m sure you know, the NIEHS is also working with the National Academy of Sciences on enrolling kind of agency overview and participation.  How is that process different from – or related to or complementary to – working with ICCVAM?

 

BS:                  I’m not sure I can give you a clear answer to that.  I know ICCVAM continues as a process, not as a type of test.  It’s a process by which whatever the test method is could be received and evaluated and have an opinion rendered about whether or not it’s useful for its stated purpose.  That’s ICCVAM.  The process to develop better methods for genomic data is only an effort to review how this one endpoint of genetic information gives us information about cancer or some other endpoint, some disease, so it’s not an ICCVAM process at all – totally unrelated to accepting a method for regulatory purposes.  It’s more an effort to collect information at an early stage of development within a field, to shorten the length of time and the randomness in the process of laboratories all over the world collecting information, much of it not fitting together to allow anybody to make any decision about the usefulness of this endpoint.  So Ray Tennant’s effort, again, to look at genomics, and the effort to tie it together with something in the Academy, is an attempt to reduce that randomness and provide some structure to moving this concept forward to determining its limits of usefulness.

 

Interviewer:     Okay, that’s very helpful.  One more related question and then – one more and then I’m at the end of my list.  One of the purposes for which genetically modified mouse models have been explored is looking at variations in susceptibility to environmental exposures. There was a hope that if an environmental polymorphism that made somebody susceptible to a given chemical could be identified then a mouse model could be built to imitate that human susceptibility, and I’m wondering if you would just comment on that aspect of these models.

 

BS:                  It’s potentially useful.  It could be helpful in a confirmatory sense.  It could be – and that has a dimension of risk, because if you confirm, based on the construct of this animal model and the response that we’ve got, and suggest that there is some risk to people having this polymorphism -- that’s the good news.  The bad news is you don’t know how to translate that into a level of risk.  So even though you might be at greater risk for developing breast cancer, let’s say, based on knowledge in this animal model, it doesn’t tell you if that increase is 1%, 10%, 50%, and even if you could say, “Well, this looks like a significant increase in risk,” we don’t know how to compare that to the inherited susceptibility of an individual in a known family with breast cancer versus a person who doesn’t have any family history of breast cancer.

 

So when it comes down to using this information to give advice to someone about whether or not to have surgery, whether or not to have children, whether or not – you know there are a lot of things that go through people’s minds when they know some piece of risk information.  We’re not at a stage where we can give such guidance that it would make – that it should be used as a determinant by people for certain things.  If it has to do with diet -- well you probably should have a low-fat diet anyhow; you should probably limit your calories.  And those are things that are common sense.  We don’t have to have knowledge of genomics to reinforce that your risk of colon cancer is pretty high, and you ought to watch your diet -- well, you’ve got to watch it anyhow.

 

Interviewer:     And is it also true then that the knowledge has not progressed to a point where recommendations could be made, not just at the level of individual behavioral change but also in regulatory decision making, so that we could say, “20% of the population is extraordinarily susceptible to this chemical, therefore the regulation – the action level should be adjusted in a particular way”?

 

BS:                  Let me answer that by going back 40 years.  When I took pharmacology for the first time back in veterinary school, and when I took it when I was a graduate student in medical school, genetic polymorphisms were already well-known then.  That was part of the lecture series, that we have fast acetylators, we have slow acetylators, and the people who are fast acetylators are not going to get much of the benefit of the drug, those who are slow are going to show toxicity.  I remember Jim Fouts going through that series of lectures way back when I was in school.  So that’s not new knowledge.  And that got translated into a growth of information about the safe and appropriate use of drugs.  It was part of the label information, and there was knowledge that this was important to the safety and efficacy of drugs.  So that body of knowledge had already gone through a curve of growth and understanding in translation into labels.  So now you’re up here.  Now, all of the sudden we have some pharmacogenomic data.  We have metabolomic data, and it tells us a little bit more about what we knew 40 years ago about fast and slow acetylators, or other examples of pharmacogenetic subsets.  So it isn’t as if all of the sudden a door opens and we know how to protect people.  It gives us more information but it’s already at a curve that’s pretty high, and now there are some additional things that we know but it’s a refinement as opposed to a brand new finding.

 

Interviewer:     Okay, that’s very helpful.  Is there anything I should have asked you that I have not asked?

 

BS:                  No, not that I know of, but what I would be interested in is where are you going with this?

 

Interviewer:     Sure, since I’m less interested in myself than –

 

BS:                  [?]…and retired and is still consulting as a pathologist, lives there in the Raleigh area, so Gene McConnell, as a pathologist, is someone whom you might consider talking to.  Jack Moore was the director of the program in the NTP when I was hired.  He wasn’t the director of the NTP, David Rall was, but Jack was the head of the program, he was the one who really made things work.  Jack now lives here in the Washington area and is still involved in consulting and whatnot, and Jack is another person who is, like me, a little more distant from the process whose opinion you might seek. 

 

Interviewer:     Great.

 

BS:                  Among the 30 or so people that you’ve talked to, do you find that we’re widely divergent in our interpretations of what happened and the importance of it?

 

Interviewer:     The major divergence is around the sorts of purposes to which these researchers wanted to put the models; so folks like Ken Korach, having one set of understanding of what was important and what is important and interesting about these models, and people who worked on the bioassay development having a different vision of that. 

 

Then within the folks who worked on bioassay development there is some divergence.  There are people at the NTP who express real skepticism, if not criticism, of the development of ever-more refined models who think that the bioassays work well and that basically – that programs like this divert millions and millions of dollars out of the testing program, so I hear those sorts of critiques as well.

 

BS:                  Well, if you want to hear more of that go to CPSC.  As one of the consumers of data, but a federal agency that doesn’t have any testing capability of its own, they have fought this repeatedly, saying, “Just test more chemicals and we’ll deal with that rather than waste our taxpayers’ money looking for better methods and not testing chemicals.”  So that was the clear message from CPSC from day one.

 

Interviewer:     The only name I have at CPSC is Marilyn Wind.

 

BS:                  Yeah, Marilyn is the most vocal one.  So if you’ve talked with her, you probably got that message. 

 

Interviewer:     Are there other perspectives or opinions you would have expected me to have encountered that I haven’t mentioned?

 

BS:                  Well I would expect that the mechanism-based people, like Ken Korach, would think that this was interesting but not very useful, and that the money would have been better spent doing basic research, because Ken doesn’t have an appreciation for regulatory decisions or the importance of having a better -- a more reliable method to reach a conclusion.  In the minds of the mechanism-based researchers, there’s only one thing that’s important, and that’s understanding more mechanistic information.  But for industry, for regulators, there’s a search for what is the right answer here.  Is this chemical carcinogenic or not?  Does it represent any risk to humans?  And the mechanism is incidental.  It may help to understand that, but you have to defend your decision based on data that are reliable, not speculative, so I would expect that the people who were mechanistic in their orientation would have said, “Well, it’s probably a big waste of money to have done that, and I’m not surprised that it isn’t useful.”

 

Then you’ve got the people who are methods-oriented, who would say, “There was great promise here,” and somebody would have checked it out, and there was nobody better suited to check it out, so to speak, than the NTP.  And there was nobody better at the NTP to do it than Ray Tennant.  So I think, independent of whether or not they’re useful today, we had to go through that exercise to find out if they were useful.  And you know, they weren’t the greatest thing since sliced bread when they came into being, I guess.  Nonetheless, if this had been evaluated under [unintelligible] an NIH grant from NCI, I think with almost total assuredness the data wouldn’t be as good as they were under the public scrutiny of what the NTP did, because it would just be one laboratory’s opinion.

 

Interviewer:     That makes a lot of sense to me.

 

BS:                  And the program that the NTP runs is a very transparent program.  We never did anything in secrecy; Ray was always out there talking about what he was doing, seeking advice, seeking criticism, and got it all, as opposed to somebody at Cornell University having a multi-million dollar grant year after year after year, having a response that didn’t translate across to any other laboratory.  So I think you would find that the people who are interested in finding better ways of getting information to make regulatory decisions feel that Ray probably took us down a pretty good path.

 

Interviewer:     And the other thing I’ve heard that relates to what you’ve just said is that it was a valuable exercise for the NTP also in terms of identifying challenges in the validation process.  So Chris Portier feels like he has, based on this experience, a new set of questions about how genetic and genomic technologies can be and will be reviewed and validated by ICCVAM, so it seems like there’s kind of incremental knowledge as well.

 

BS:                  There’s another important reason.  That whole process of methods evaluation was important in maintaining and building the credibility of the NTP.  The NTP is a testing program and the results are extremely important to commerce.  Therefore, the credibility of the program is important, because if it’s a program that is seen as only turning out data that are not credible or that are challenged at every turn -- and all these people do is test chemicals and don’t know anything else -- that’s a program that is destined to be weak and not have much of an influence, especially if some other government agency, whether it’s EPA or NAICS or whomever, kind of picks up and competes.  Having the testing component and the research component within the NTP, and giving credibility to the leadership of the NTP, helped to protect the testing program.

 

Interviewer:     That’s very interesting.

 

BS:                  Because you have people like Ray who are seen as being very credible among peers.  It isn’t a given that every researcher has the right answer, that you could anticipate there’s exactly the animal model that’s going to do it.  We didn’t know that.  Ray didn’t know it.  But he was bold enough to step forward and say, “Here’s the best shot.  We’re going to do it the best we can.”  That protected the credibility of the NTP during a time when industry was very wary about any positive result coming out of two-year bioassays.  They were sure, because of the history in the late ‘70s where the quality of the data wasn’t very good, that all you had to do was show that the chemistry support wasn’t right, the method of doing the analyses wasn’t quite right, or we’ll argue about hepatomas in mouse liver -- they don’t mean anything; all of those were points at which industry could unravel any one of the NCI studies as having been poorly conducted.  I mean, you ran – you only ran 30 animals in a test group, and when you look at the tissues you could only find 20 of them, and you’re saying this is something that should be important in making decisions?  So the NTP developed a whole new level of accountability and set a standard.

 

So in a number of ways the credibility of the testing program was protected by virtue of the quality of the work and the quality of the thinking.

 

Interviewer:     That’s very helpful.  Thank you.  Is there anything else?

 

BS:                  No.

 

Interviewer:     All right.  Thank you for talking with me. 

 

 

End of transcript