Dr. John Bucher Oral History 2003

Download the PDF: Bucher_John_Oral_History_2003 (PDF 76 kB)

Download the MP3: johnbucher.mp3 (mp3 63.44 MB)


John Bucher                                     

November 14, 2003                           


TAPE 1, SIDE A

It’s Friday, November 14th, and I’m interviewing Dr. John Bucher.


Sara Shostak:              You’re aware that the tape recorder is on.

John Bucher:               I’m aware.

Shostak:                      Thank you.  Would you begin by telling me what the NTP is and when it began?

Bucher:                        The National Toxicology Program is an interagency program.  Its mission is to provide toxicological information for public health decisions.  It had its origins in the National Cancer Institute Bioassay Program, the rodent two-year bioassay program, which was developed in the ‘60s and ‘70s.  The NTP was created in 1978 and incorporated elements of the National Cancer Institute Bioassay Program and then expanded its capability to carry out toxicology evaluations in other areas of interest.  It was moved to NIEHS in 1981 and has since developed strengths in the areas of genetic toxicology, reproductive and developmental toxicology, and immunotoxicology, as well as cancer biology and issues related to the development of methodology for risk assessments.

Shostak:                      Who are the primary end users or clients of the NTP?

Bucher:                        Well, we are actually doing a study currently to look at the places where NTP is cited in the regulations of a variety of agencies.  The Food and Drug Administration uses our information, the Environmental Protection Agency uses our information, the Consumer Product Safety Commission uses our information.  OSHA also has listed NTP as the source of data that support a lot of its standard setting.

Shostak:                      I remember from a conversation you and I had two summers ago that the two-year rodent bioassay that the NTP works with is considered a gold standard or the standard in the field of toxicology testing.  Can you give me a little bit of the background about how it became so?

Bucher:                        The Bioassay Program originated from a declaration that Richard Nixon made that he was going to develop a program to determine environmental causes of cancer.  Arguments over whether the environment, genetics, viruses, or other biological agents cause cancer have waxed and waned throughout recent history.  In the Nixon era, the publication of Silent Spring had increased the awareness of environmental issues and environmental chemicals and how they might affect human health.  President Nixon declared a war on cancer, and as part of this, the National Cancer Institute was charged with developing a screening technique that would allow one to determine whether one might want to do epidemiology studies or otherwise follow up on certain chemicals that showed up positive in this screening assay, as to whether they caused or perhaps could be a cause of cancer in humans.

            So the NCI spent the decade of the ‘60s refining this assay that became the two-year rodent assay, and they ran a number of chemicals through it that were known human carcinogens by virtue of their being chemotherapeutic agents known to cause secondary cancers in humans.  These rodent studies showed that you could pick up a number of human carcinogens in these assays, although high doses and long periods of exposure had to be used.  But out of this effort, the two-year bioassay designs were, if not set in stone, fairly well codified.  We have picked up those assays from the National Cancer Institute and modified them some through the years, but the basic design is still the same.  So while the rodent bioassay has never been, in essence, validated against human carcinogens, most, if not all, substances that are considered human carcinogens have been shown to be positive in the bioassay.

            The area that has not been evaluated is whether human non-carcinogens are not carcinogenic in the bioassay, and that’s because there’s simply not enough evidence from human studies to show with certainty that specific chemicals do not cause cancer in humans.  So if you’re looking at the bioassay from the standpoint of its validation status, it has been validated only in historical and informal ways.

Shostak:                      And at the same time, it seems like the two-year rodent bioassay is the standard against which alternative methods are compared.  Is that correct?

Bucher:                        That’s correct, although perhaps only by virtue of there being nothing else.

Shostak:                      Is it also by virtue of the regulatory agencies’ acceptance of the two-year rodent bioassay?

Bucher:                        Well, that’s part of the overall scientific and public health acceptance of the assay.  Right.  Regulatory agencies have accepted it, the international community has accepted it through the IARC activities, and the NTP accepts it for use for listings in the Report on Carcinogens.

Shostak:                      When did transgenic mice models come to the attention of the NTP?

Bucher:                        Well, in the early ‘90s, Dr. Rao, who was with our program at that time, and Dr. Tennant put out some publications where they looked at some mice that had been genetically modified with a number of oncogenes that were driven by something called the mouse mammary tumor virus promoter.  There’s a particular virus that infects mice that promotes the formation of mammary tumors, and most of the mice that we use, in fact all of the mice that we use in our bioassay programs, are specifically selected to not have this particular virus.  But you can take a portion of that virus and hook it to oncogenes, cancer-causing genes, make transgenic animals, and then use these in what could be a rapid screen for the development of mammary cancer by treating them with chemicals to see if in fact the chemicals enhanced the development of mammary cancer.  So that was really the first application of transgenic animals within the NIEHS that I’m aware of in the sense of developing alternate cancer bioassays.

            Now, these failed because in fact the mammary cancers in these animals were so aggressive and they developed so quickly that you couldn’t tell whether the chemicals were able to modify the progress of this disease very well.  So those models were really abandoned in the early ‘90s.

            The next model that we dealt with was the Tg.AC that Ray Tennant developed or adapted from the Leder laboratory at Harvard, and you’re well aware of the story.  Ray has documented the development of that in the literature fairly well.

            About the same time as that, the p53 mouse had been developed in the laboratory of Donehower at Baylor, and there was some interest in that mouse from the very beginning.  There’s always been a lot of interest in the p53 mouse because the conceptual basis for that mouse being a good cancer model has been so strong.  This is based on the fact that so many human cancers have p53 mutations, and it’s known that the p53 gene governs apoptosis, and alterations in that could very easily fit conceptually within the cancer process.  So that has been a very promising model from the very beginning.  It has been shown to be positive with very strong mutagenic agents, ionizing radiation, things of that nature, so strong carcinogens will show up in that model.

            The ras models are a little different.  The ras gene -- and the Tg.AC is a ras model, as is the ras-H2 model that’s been developed in Japan -- the distinguishing feature here is that the ras gene is involved in enhancing cell proliferation, so when it’s turned on, it tends to increase cell proliferation, and that is an important component of the cancer process.  But it perhaps may not always be as important as the p53 gene is in the cancer process in the p53 mouse.  So the linkage of ras gene alterations with human cancers is perhaps not quite as strong as the linkage with p53 in human cancers, although there are a lot of human cancers that have been shown to have alterations in ras genes.

            So you have two models that were developed at about the same time and that are conceptually tied to the cancer process.  They’re advertised as being quick, cheap, and as using fewer animals, and a lot of hype has been generated.

Shostak:                      And what’s going on underneath the hype?

Bucher:                        The dynamics actually are kind of interesting because of the involvement of the drugs group at the Food and Drug Administration, CDER, the Center for Drug Evaluation and Research.  They were involved in a process called the International Conference on Harmonization, which has as its basis the harmonization of cancer bioassay and other toxicology assessments around the world, such that drugs that are developed in one country, in Europe, or in Japan can have a single set of studies that would be applicable for regulatory purposes around the world.  So this began actually -- I think it was the third or the fourth meeting of the International Conference on Harmonization of test requirements.  At that time they were debating whether the two-year mouse assay had any relevance and was really helpful in determining the carcinogenic risks of drugs.  And the reason for this is that some very high percentage of the drugs that are currently in the Physicians’ Desk Reference have positive mouse bioassay findings; liver cancer in mice was a very prevalent finding with a lot of pharmaceutical agents.

            This lack of an obvious link between the signal that was produced by this mouse bioassay of the pharmaceuticals and any subsequent clear elevation in cancer risk among patients who’d been taking all these drugs for so long obviously led to the suggestion that maybe the mouse wasn’t the best model to use.  The rat assay alone seemed to be picking up most of the chemicals that appeared to be potentially dangerous.  So the thought was, we’re wasting money and we’re creating a lot of red flags that don’t need to be created.  That was the thinking at the time.

            So some people were uncomfortable with the complete abandonment of the mouse.  In fact, I know that I entered into discussions with Joe Contrera back in the early ‘90s sometime, when this international harmonization effort was coming about, and we decided that it would be a reasonable idea not to abandon the mouse but to actually introduce one of these alternative models into the ICH process, so that we could generate data on a lot of new chemicals that would be involved in pharmaceutical registration.  We’d be generating the rat two-year bioassay data along with this alternative mouse information, and we’d have another data set for comparison.  It seemed like a good way of accelerating the creation of data that would allow one to evaluate whether these assays were going to be any good or not.

            I don’t know whether he suggested it or I suggested it.  He probably suggested it.  I thought it was a great idea.  And he may have been talking about this to other folks as well at that time.  I’m not really sure how extensively this idea was floated before it was actually brought up at the ICH.  But my understanding is that he was the originator of this proposal, and he took it to the ICH and it was eventually adopted.  So Contrera is one that you need to really talk to about the early years and the development of this from the standpoint of the pharmaceutical industry.  He was a big driving force.

            The Environmental Protection Agency and the Food and Drug Administration Center for Food Safety and Applied Nutrition, for example, had never been very interested in the transgenic assays.  Certainly at that time they were not interested in transgenic assays because of the perception that they would be supersensitive, and that’s been one of the dominant factors from the very beginning.  These assays were conceived to be assays that were just on the verge of cancer, and anything you did to them, whether it was real or not, would tip them over the edge and they would go on to develop cancer.  And I think there was a fear from the standpoint of the regulatory agencies that there would be a lot of false-positives created, certainly from the standpoint of the Center for Food Safety and Applied Nutrition.  There was fear that the Delaney clause would be invoked and there would be massive disruption of the food supply, and other aspects of what they regulate would fall under this clause.  There was a fair amount of apprehension about that.

            There was also apprehension about that from the pharmaceutical industry.  So the fact that this effort that came out of the International Conference on Harmonization was able to pull the pharmaceutical companies into an evaluation was remarkable.  That was very -- that was a fascinating and quite an amazing development.

Shostak:                      Has the EPA’s stance on this changed in any considerable way over time?

Bucher:                        The EPA’s stance has changed in certain aspects of their programs.  I don’t think it’s changed from the standpoint of the Office of Pollution Prevention and Toxics, which is the group that administers TSCA, because they have not, to my knowledge . . .  Well, they don’t often actually ever get around to requiring that a cancer assay be run on anything by any industry group.  They’re more interested in developing data sets on toxicology endpoints that are much more immediate; in other words, acute toxicity, ecotoxicology, toxicology to flies and worms and, you know, things of that nature, and developmental toxicity screens in rodent assays that are fairly short.  So they tend to focus on that aspect to try to create a database to identify acute hazards.  Their databases on cancer haven’t been as grounded in experimental data so much as they’ve been grounded in structure-activity considerations and projections or predictions of the possibility for cancer.

            But the Office of Water has taken a different stance, reflecting their regulatory perspective and what they have to deal with.  Whereas the Office of Pollution Prevention and Toxics can look at a chemical and assign responsibility for testing to a manufacturer in many cases, or a trade group or something, the Office of Water has to deal with things like bacterial contamination in drinking-water supplies and the creation of byproducts from chlorination reactions or things like that that are going on in water.  So it’s something that obviously can’t be removed or assigned to industry to study.  What you need to do is try to figure out what risks are there from the various drinking-water disinfection products and contaminants.  There are thousands of chemicals created during the process of chlorination of drinking water or ozonation of drinking water.  And nobody has responsibility for these chemicals because they’re created during a public-health-improving process.

            So what you need to do is you need to figure out ways of assessing risk and managing risks of those chemicals, because you can modify the different kinds of disinfection byproducts in water by how you treat it, what pH you use, how much you filter it before you chlorinate it, whether you use chloramine or chlorine gas, or ozone.  All these things can change the byproduct profiles.  So the Office of Water actually nominated a number of major drinking-water disinfection chemical classes to the NTP and asked us to evaluate the use of transgenic assays in running short-term screens on those materials to see if in fact they can be used as a way of at least setting relative risks among different classes of chemicals -- not to establish an ultimate numerical risk of the amount of these chemicals in drinking water, but to at least say, should we care if halogenated acetic acids are formed versus some other class of compound?

Shostak:                      Have the studies been completed?

Bucher:                        They have been completed.

Shostak:                      May I ask what the outcome was?

Bucher:                        Unfortunately, they have pointed out one of the biggest blind spots of the transgenic assays, and that is that transgenic assays almost invariably fail to respond to small halogenated compounds.

Shostak:                      Okay.  Can you say more about that?

Bucher:                        Well, interestingly, if you go back and you look -- now, these would be compounds that would have a chlorine, a bromine, a fluorine, or an iodine, so most of the compounds that we’re dealing with here with regard to disinfectant byproducts are chlorinated compounds.  If you look back at the history of the bioassay programs and where you have the most controversy and the most potential public health impact, it is the fact that so many halogenated compounds have shown up positive in the bioassay programs.  Often they cause liver tumors in mice and maybe some other kinds of tumors in rats.  But there is a wide variety of potencies of these things.  They range from very, very weak signals of cancer to exceptionally strong ones, with multiple tumors, or multiple tissues having tumors, in both rats and mice.  And many of the brominated compounds are in fact the worst actors in the bioassay.  But it’s been our experience so far -- and it’s something that I pointed out in 1998 in that paper that you have -- that they just don’t show up positive in the transgenic assays.  So it’s a whopper of a blind spot for the transgenic assays.

Shostak:                      In what ways, if any, does that limit the utility of transgenic assays for the National Toxicology Program?

Bucher:                        Well, this kind of gets back to the approach that we have to take, and our interest has always been in using assays that erred on the side of public health, which in this case would be erring on the side of producing a false-positive rather than a false-negative, because the public health consequences of a false-negative could be much higher than the public health consequences of a false-positive.

            This isn’t always the apparent stance of other regulatory agencies or other groups.  I don’t think that the industry -- I won’t speak for the industry.  But at any rate, if we were to abandon the two-year bioassay at this point for testing certain chemicals, I would only be comfortable using transgenics if in fact they had been shown to be responsive for one or two members of a class of compounds, and this is exactly the approach that we took with the drinking-water disinfection byproducts.  We felt that we would take the most likely carcinogens among all of these classes of chemicals that they wanted us to look at and run them through the bioassay and/or through the transgenic assays.  If they came out positive with the most likely one, then we would go back and run subsequent members of those classes in those assays.  And since some of these chemicals that cause multiple tumors at multiple sites in different organs in the two-year assay showed up negative in the transgenics, we had a problem.

Shostak:                      Were there studies to evaluate the transgenic bioassays prior to this request from the Office of Water?

Bucher:                        Sure, there were two major efforts that we entered into before we began using the transgenics in the way we used them for the drinking-water disinfection program.  Initially we simply wanted to take the transgenic models, the p53 and the Tg.AC, and place them in studies in a contract laboratory setting, which is the way we do all of our studies.  So Skip Eastin, who I think you’ll be talking to, and I arranged a group to design a series of studies whereby we would take some known human carcinogens, some known rodent carcinogens, and some known rodent non-carcinogens and do fairly circumscribed studies to look at the outcome in a contract laboratory setting.  These studies were published in Tox Path in, I think, 1998, and the results were promising.  We felt that the results were pretty much what we expected, and they formed some of the basis for the statements that we made at the NTP review in 1998.

            The second large effort was participating in the ILSI effort, and most of that work took place before we began to use the transgenics for the drinking-water disinfection byproduct program.  I don’t need to explain that program to you, I suspect, because it’s all well laid out in the publications.  But our component of that primarily revolved around the Tg.AC mouse.  We didn’t do any of the p53 or any of the other assays as part of that consortium.

            We did look at the issue of whether we could use the mouse with oral administration studies in addition to dermal administration studies because the dermal route of exposure, which is the way the Tg.AC is run because it’s a skin papilloma model, doesn’t provide a lot of opportunity to study chemicals that would normally be in the diet or would normally not be soluble or would roll off the skin or something like that.  So we wanted to see if we could use the Tg.AC mouse with oral administration.

            The forestomach also gets papillomas in this model, and it was considered that that would be a reasonable endpoint: just as we looked at skin papillomas, we could look at forestomach papillomas.  And in fact some chemicals do produce forestomach papillomas, but ultimately papillomas seemed to arise at very, very low incidences, and sort of at random, in control animals, making it difficult to really sort out whether this is a good endpoint or not.

            That was our contribution to the ILSI program, and I think that, as you read the information from that program, you’ll find that there were some decisions made based on it that tended to conclude that the Tg.AC mouse was probably really only good for the dermal route of administration.

Shostak:                      What is the official NTP Board of Scientific Counselors’ position on the use of those assays at this time?

Bucher:                        Well, they haven’t really ever come out and said anything, because they haven’t really been asked.  And I think it’s really beyond their -- it’s beyond our ability to give them the information that they would need to allow them to reach a decision that would be that useful at this point.

            In the 1998 review, their opinion was that the models had promise.  It was appropriate for us to continue to try to develop them as either adjuncts or replacements for portions of the bioassay.  They liked the p53 because they thought they could understand it, and they didn’t like the Tg.AC because they didn’t really understand how it was working.  And that’s a situation that still remains, even in the face of an increasing amount of information about the p53 models that says that it’s really not picking up many carcinogens. 

            So the board has only been making decisions as we’ve brought studies to them for their evaluation.  We have a Technical Reports Review Panel that’s a subcommittee of the larger Board of Scientific Counselors, and we periodically bring the results of transgenic studies to them in a new series of technical reports that you have probably heard about, labeled NTP GMM; GMM, that’s genetically modified model.

Shostak:                      Some of the executive summaries from those reports are online?

Bucher:                        Yes.

Shostak:                      I’ve looked at the short versions.

Bucher:                        We originally brought the results of several transgenic models to the Board of Scientific Counselors as part of the blue book, the traditional NTP Technical Report series, and they were rejected.  They were rejected from inclusion in that series, as we thought they would be.

Shostak:                      Can you help me understand why?

Bucher:                        Well, the first studies from models that were brought to the Board of Scientific Counselors Technical Reports Review Committee were Tg.AC studies of several acrylate compounds, and we had no corresponding two-year bioassay for comparison.  We simply brought the results from these very positive findings in the Tg.AC mouse to them and said, “Okay, here we are.  You want us to continue to develop these models.  The only way we can understand how to develop them is to see how they’re received and accepted by the scientific community.”  One way of doing that is to have workshops, which we’ve done, and you have all that information about the workshops.  The other way is to take the results to our boards and say, “Okay, here they are.  Can we interpret these data in the same way that we interpret two-year bioassay data?”  So we put the results in the same language as we did for the two-year bioassays, and they said no.  They rejected them.

Shostak:                      Because?

Bucher:                        Because they did not understand how the Tg.AC assay worked.  And they did not understand the extent to which one could assume that positive results in the Tg.AC would represent positive results in the two-year bioassay, even though most of the work up to that time had indicated that there was about 80 percent accuracy in the discriminating power of the Tg.AC between carcinogens and non-carcinogens as compared to the two-year rodent bioassay, which is not bad.  Almost all of these alternative assays, if you get up to 70 to 80 percent predictivity [sic], you’re in pretty good shape.  It’s hard to get higher than that.

Shostak:                      So the rejection of the Tg.AC data is because how it works, how it produces those results, is difficult to understand?

Bucher:                        Yeah, and I’m sure that Dr. Tennant and Dr. Cannon will explain to you that Ray [Tennant] has had a research program underway since the early ‘90s that has tried to figure out how the particular genetic construct that was used to make this transgenic mouse, which is called a zetaglobin promoter-driven v-Ha-ras gene (zetaglobin is a form of fetal hemoglobin), actually works.

            Chemicals that cause tumors in the Tg.AC do so by virtue of the fact that they activate a promoter from a fetal form of hemoglobin, and that activates a ras gene, and that causes cell proliferation, and that tends to result in papilloma formation.  Well, nobody understands what the relationship is between chemicals that activate the zetaglobin promoter and chemicals that are carcinogens, because there’s conceptually no scientific link.  It’s not at all like chemicals that mutate an oncogene or delete a tumor suppressor gene.

Shostak:                      It’s unclear how it works.

Bucher:                        It’s just unclear how it works.

Shostak:                      So, let me step back one step from that and ask, what, in general, are the requirements for an alternative bioassay?  It sounds like it needs to be conceptually clear.  It needs to have a certain degree of concordance with a traditional bioassay.  What needs to be demonstrated to establish, for example, transgenics as a reasonable alternative?

Bucher:                        Well, I think that if you had an assay that nobody knew how it worked but it was 100 percent accurate, people would eventually accept it.  But if it’s shown to be wrong at all and one doesn’t understand the basis of its action, then I think it’s never going to make it.

Shostak:                      And the p53 mouse has been received very differently?

Bucher:                        Right.

Shostak:                      And is this because the mechanism is well characterized, well understood?

Bucher:                        Mm-hmm, well, fairly well understood.  The best example of that has been the phenolphthalein story that June Dunnick has developed, where the chemical in Ex-Lax was shown to be a carcinogen in two-year bioassays.  There was a lot of resistance to the acceptance of the data because the drug had been used for 70 years and there was no real clear link epidemiologically to cancer, although nobody had ever really looked to see if long-term users and abusers did develop cancer.  This is a chemical that has come to be used recently for weight control by women, girls, teenage girls, who would just abuse this substance remarkably as part of these programs to purge their systems.  There was a fair amount known mechanistically that would suggest this was a genotoxic carcinogen or a mutagenic carcinogen.  Although the two-year bioassay was positive, there was a lot of resistance from the manufacturers to even considering the fact that this chemical might be harmful, and the Food and Drug Administration at that time said, “Okay, well, just gather more information.  Let’s run a p53 assay.”  We ran the p53 assay, and tumors showed up.  Jef French analyzed the tumors and showed actual loss of whole segments of the functional p53 gene that was remaining.  Not only was the remaining p53 allele mutated, it was gone!  The Food and Drug Administration just basically said, “Well, we’re giving you notice that we plan to take action against registration of phenolphthalein in the use of Ex-Lax and similar drugs.”  The pharmaceutical industry took it out immediately.  They didn’t even wait for FDA to issue a final rule.

Shostak:                      Are there other noteworthy examples of p53 or Tg.AC being used by the regulatory agencies?

Bucher:                        The most noteworthy example of the p53 being used by the regulatory agencies is the recent information that came out of the latest ILSI review, which showed that of 18 genotoxic chemicals for which p53 assays had been run and submitted to the agency for the purpose of drug registration, 17 were negative.  The only one that was positive was phenolphthalein.  And the agency continues to accept p53 data for drug registration.  So it’s a reverse situation, where you have fairly overwhelming evidence that the p53 assay is not showing up positive with drugs that have been developed to the point of having to run the preclinical studies, which is what these bioassays are part of.  Yet, knowing that there’s some positive gene tox signal in their background, they’re still willing to accept the data.

Shostak:                      How would you explain that?

Bucher:                        I don’t know.

Shostak:                      You said something earlier about the hype surrounding these models.  Does that hype have a role in maintaining ongoing interest [in these models] despite these apparent failures?

Bucher:                        Sure, I think so.  But all assays have their proponents and detractors, and . . .

SIDE B

Bucher:                        It’s intellectually attractive to the scientific community to be able to deal with models where they have a little bit of understanding versus models where they have little or no understanding, and I have to emphasize that.

            In the two-year bioassay, for example, there have been a lot of theories raised about why rats get kidney tumors and why rats get thyroid tumors -- all these different models that have been proposed and have experimentally garnered some support to suggest that there in fact may not be any relevance, given what’s known about human physiology, to the particular way in which these tumors are arising.  Most of that’s been pretty half-baked, and it reflects the hubris of the scientific community in that we always like to talk about what we know rather than what we don’t know.  And in the area of safety assessment methodology, we know very little about what the endpoints we’re looking at are really saying.  So all of these things are a weight of evidence that one pulls together, and that’s what the Food and Drug Administration did in the case of phenolphthalein.  They just waited until the weight of evidence was such that it caused them to go ahead without any hesitation.

            I don’t think I’m remembering what your original question was at that point, but we drifted away from it.

Shostak:                      I’m wondering how it is that models that seem to have such significant limitations continue to be developed and pursued.

Bucher:                        Well, I think I answered that to the best of my understanding.

Shostak:                      Recognizing that this is a speculative question, what do you think the future of these transgenic assays is within the National Toxicology Program?

Bucher:                        If you’re going to use a transgenic assay, I think one has to be willing to accept the limitations of the assay, so we’re faced with this all the time.  When you have a question that is so clear that you’re sure the result the transgenic gives you is going to be definitive and public-health protective, we have a possible use.  For example, I don’t know if you know what the compound dioxin is.

Shostak:                      I’ve certainly read about it.

Bucher:                        Okay.  For some reason, it causes a remarkable response in the Tg.AC mouse.  Now, there are over 300 different but structurally somewhat related chemicals that cause some aspect of dioxin-like action.  In fact, almost all of them are halogenated, so this is one of the few cases in which transgenics do seem to be very responsive to halogenated compounds.  But these are big, kind of bulky compounds, and they’re clearly working through receptor-mediated processes, as opposed to small halogenated compounds, which often work through being metabolized to reactive intermediates that cause some cellular damage or some molecular injury.  But there’s been a data set developed on these 300 or so chemicals that has allowed them to be ranked in terms of potency for their ability to create in rodents, in very short assays, the same effects as dioxin, the most powerful of this class.  These are being used to predict carcinogenic potency for many chemicals that have not been adequately tested.  So we have actually, as part of a larger program on evaluating the applicability of those short-term results to predict long-term outcomes, used the Tg.AC along with the traditional rat bioassay to determine the relative carcinogenic potency of these kinds of chemicals.  So that’s a situation where we have a transgenic assay where we know it works and we have an application for it.

            One of the other areas in which I think the transgenics might well play a role is site-specific tumorigenesis, where we know that our two-year models don’t respond.  For example, we have almost no brain carcinogens in our two-year bioassay results.  Very few chemicals cause brain tumors -- yet there are lots of brain tumors occurring in the human population -- and similarly, no tumors have ever shown up in the prostate glands of our rodents.  We know that human prostate cancer is a huge problem.  It may not have environmental causes, but we can’t tell that from our rodent bioassays.

            So there have been attempts.  June Dunnick has worked on some transgenic brain tumor models.  Bob Maronpot has studied several of the prostate cancer models.  So those would have applications in areas where there’s a specific question, suggested from an epidemiology perspective, related to the response in a particular tissue.

Shostak:                      So it sounds like the transgenic assays would serve as adjuncts to the traditional bioassay program rather than replacements.

Bucher:                        At this time, yes.

Shostak:                      And again, kind of stepping back a level, I’m interested in the ways in which NIH DIR and the NTP interdigitate on these programs, on this research.  Are there ways in which the relationship between the National Toxicology Program and the National Institute of Environmental Health Sciences has been significant to the development of the assays?

Bucher:                        Well, the experimental data from a lot of the early testing of the Tg.AC mouse model came out of Ray Tennant’s lab, and a lot of the studies were done in-house or under contract at a local contractor.  We at the National Toxicology Program, again, wanted to see how these responded in a contract laboratory setting with the NTP standardization of testing procedures.  So the NTP and NIEHS DIR efforts were complementary, I guess is what I’m trying to say.  We were developing data, in some cases on the same chemicals, in two slightly different ways.

Shostak:                      From a somewhat different angle, when did you come to the NTP?

Bucher:                        Nineteen eighty-three.

Shostak:                      And what was your original role here?

Bucher:                        I was what we called at that time a chemical manager, or a study scientist, as we call them now.  Basically I designed and monitored and evaluated toxicology and cancer studies.

Shostak:                      What other major efforts to develop adjuncts to or replacements for the bioassay have happened during your tenure here?

Bucher:                        There have not been many efforts to develop animal assays that would be adjuncts or replacements.  There have been some evaluations of things that have been done elsewhere.  For example, Bob Maronpot carried out an evaluation of what was called the strain-A mouse, which had been developed in other institutes and basically looked only at lung tumors.  These were generally adenomas; actually, you could score a lung just by looking at it, because you could see the tumors on the surface of the lung.  This model was not found to be very concordant with the traditional bioassay results.

            Many of the efforts up to the point of looking at these transgenics were using models that spontaneously developed tumors in a fairly short time, and so from that standpoint, the evaluation of the transgenics wasn’t that new.  The only thing that was new about it was that we thought we knew something about why they were getting tumors.

            So the strain-A mouse was evaluated.  There are also various initiation-promotion assays that have been evaluated.  We did some of that work here, looking at mouse skin in what’s known as the SENCAR mouse, or sensitive-to-carcinogen mouse.

            There was an effort in Japan that developed a very sensitive initiation-promotion model using rat liver foci as the endpoint.  That was never really evaluated here because we had always taken the view that we’re looking not for promoters but for complete carcinogens in our assays.

Shostak:                      Tell me what a complete carcinogen is?

Bucher:                        That would be a chemical that would both cause the mutations that lead to cancer and promote tumorigenesis by creating conditions that favor cell proliferation and the development of a malignant cancer.

            So in these initiation-promotion studies, what you do is you give a very strong initiating substance, and then you’d follow it with a chemical of interest to see if it enhanced the onset of those tumors that you expected to develop.

            And the problem, of course, with that is that the regulatory agencies have never known what to do with promoting agents.  And if you think about it from that standpoint, the transgenic assays could be considered promotion assays, because there is a genetic insult imposed on those mice, and we’re simply looking at the chemicals that do something to hurry that process along.  That’s one of the major objections to the Tg.AC mouse: it’s just considered a skin tumor promoter assay.

Shostak:                      You mentioned that understanding the molecular mechanism is of interest to scientists, but is not a necessary component of the traditional bioassay. 

Bucher:                        Right.

Shostak:                      How much pressure, if any, is the NTP or the regulatory agencies under to develop a mechanistic understanding or a molecular understanding of their own work?

Bucher:                        Well, actually, that’s an interesting issue, because I think it’s been more in the interest of industry to work on those issues.  In fact, they got together back in the ‘80s and sort of assigned different problem endpoints to different companies to work on -- the mouse liver would be worked on by one company, the rat thyroid by another, the rat kidney by another, and on and on -- with an aim toward developing data that would highlight any kind of physiological or biochemical difference between the way the rodent responded and what was thought to be happening in humans.  So it’s been in our interest to develop the same kind of data, because our interpretation of the data may be quite different.  And, in fact, we may do studies that show 99 percent similarity between chemical effects in rodents and humans, and they’re going to focus on the 1 percent difference.

Shostak:                      Am I hearing you correctly, then, that the current bioassay program, the two-year rodent bioassay, is more protective of public health than molecular models or mechanistic data would necessarily be?

Bucher:                        I think you’re right, and I would say yes, because if you say, conservatively, just for the sake of argument, that there are 20 different ways to produce a tumor, and if during the course of dosing an animal for two years you put at least 15 to 18 of those pathways at risk of having something happen to them, then the outcome, I think, is more health protective than if you take an animal that has one crippled pathway and you try to influence that one pathway to cause a tumor quickly, and you kill the animal before you have the opportunity to express cancer through any of these other mechanisms.  So by virtue of the fact that you’re dealing with a defined mechanism, you’re always going to restrict the response of that mouse model, and you just don’t know how.

Shostak:                      Okay.  So, then, what’s your opinion on the transgenics programs in terms of the public’s health?

Bucher:                        Well, my opinion is stated in print: Things that cause tumors in transgenic animals need to be taken seriously.  Things that don’t cause tumors in transgenic animals need to be assayed in a more comprehensive assay.  Right now we’re at the stage where not everything that’s positive in transgenics is considered to be potentially a human health hazard, and I think that needs to change.  On the other hand, I think that an assay with a track record as poor as the p53’s in picking up rodent carcinogens shouldn’t be relied on as the sole assay for humans.  Simple.

Shostak:                      What pieces of this story have we not touched on?  What have I failed to ask you that is important?

Bucher:                        I think you’ve asked all the right questions.



Shostak:                      Okay.  I’m glad to hear that.  Are there particulars that I’ve unwittingly glossed over?

Bucher:                        Well, you’re going to get a lot of what I didn’t cover from other people.  And I think that, from the standpoint of perceptions in the National Toxicology Program, insofar as I can articulate them, we’ve covered them.

Shostak:                      Okay.  That’s great.  Thank you.  I’ll turn this off.

END OF INTERVIEW