Dr. Ronald Melnick Oral History 2004



Ron Melnick, National Institute of Environmental Health Sciences

Interviewer: Sara Shostak

January 20, 2004

 

Sara Shostak:              You’re aware that the tape recorder is on.

Ron Melnick:              I’m aware.

Shostak:                      Thank you. I actually wanted to start with the question of when you came to NIEHS and what brought you here.

Melnick:                      I came to NIEHS in 1980, interested in biological research, including environmental health research.  This is a leading institute in environmental health research.  I was actually hired by the National Cancer Institute when the Bioassay Program was located in Bethesda.  However, in 1980 the major component of that program was being moved from NCI in Bethesda to NIEHS. So, although I was hired by NCI, my first position in the government was here at NIEHS.

Shostak:                      I hadn’t realized that you were part of that transition, but that is also something that’s of historical interest to me.  Can you share with me your perspective on the movement of the Bioassay Program from NCI to NIEHS?

Melnick:                      Yes.  I was never a part of NCI.  I was interviewed at NCI in 1980.  I had come down here earlier and inquired at this institute about positions. I was told that there were positions available at NCI, in the Bioassay Program, so I contacted a person up there and was offered an interview.  From a historical perspective, there was a government freeze -- there’s probably a lot of government freezes -- but there was a government freeze at the time. So, although they wanted to hire me sometime around March, the freeze didn’t lift until around August.

Shostak:                      That’s a long freeze.

Melnick:                      Right.  And the economy does funny things. So I came down here in September of 1980.  That’s when I joined NIEHS.

Shostak:                      And were you then involved in establishing a bioassay program?

Melnick:                      The Bioassay Program was being created at the time down here.  There was the NCI Bioassay Program and the National Toxicology Program. David Rall was expanding [it] at that time.  I was hired as a chemical manager.  At the time we were called  chemical managers or study scientists, who had responsibility for a variety of chemicals to obtain their toxicity-carcinogenicity information and also to monitor the contract activities.  These agents were not studied in-house.  They were studied through contract laboratories.  So we were also involved in monitoring the activities at the contract laboratories to make sure they were doing a good job.

Shostak:                      And, again, just thinking historically, how would you describe the changes in the NTP over the 20-plus years that you’ve been involved with it?

Melnick:                      There have been many changes.  When I joined, the NTP work was done through a prime contractor, an outside group that had the contract to monitor the NTP studies.  When I joined NTP, it was undergoing a transition where the government would take the direct responsibility for the work.  So in the early years of the ‘80s, a lot of this transitioning was occurring.  Tracor Jitco was the prime contractor. There were project officers at Tracor Jitco who visited the laboratories, and I also had responsibility as the government representative for one of these laboratories. It was a combination of government and Tracor Jitco scientists who would monitor on a quarterly basis each of the contract laboratories. In terms of the activities, in those years there were a lot of chemicals that were coming into the program, perhaps 50 a year; these were distributed among the contract laboratories that were deemed qualified to conduct long-term toxicology and carcinogenicity studies in rats and mice.  One change that has occurred over the years is a decrease, a significant decrease in the number of chemicals that NTP studies for carcinogenicity.  Whereas in the early ‘80s there were about 50 per year, in the ‘90s it drifted down to somewhere around 10 per year.

Shostak:                      What accounted for that decrease?

Melnick:                      Probably a combination of factors.  One is that we were relying on nominations from outside sources for chemicals. Many of the high production volume chemicals and suspicious chemicals had been studied in the late ‘70s or early and mid-‘80s. There were also greater and greater requirements placed on the conduct of each study.  GLPs, Good Laboratory Practices, were introduced around the early to mid-1980s.  This led to a greater paper trail.  The studies became more extensive in terms of the endpoints that were evaluated.  Costs continued to rise.  I don’t remember the cost of early studies.  A rough guess would be about a million dollars in the early ‘80s for an inhalation study or feed study, whereas now they’re somewhere in the $4 to $5 million range.  So there was a substantial increase in the cost of the types of studies we performed, and the studies were more extensive than those done in the early ‘80s.  Lastly, the dollars coming into the program for NTP work have been fairly flat even though the budgets at NIH have increased.

Shostak:                      What was the goal for introducing GLPs?

Melnick:                      This was for maintaining better records.  This was introduced by FDA in the early 1980s to verify all records of a study.  There were certain procedures that were introduced in terms of signing documents to ensure responsibility for all phases, including analyses of chemicals and maintaining standard operating procedures.  There was greater documentation and verification of all tasks. This was done so that someone could go back to the study records and trace everything that actually happened. For example, how were dose solutions analyzed?  What were the analytical procedures?  What were the numerical values (the actual raw data and types of calculations)?  All these records were kept.  If an error was made, it would be documented, and you could trace to see how that error was handled. So, if a dosing solution, for example, was incorrectly prepared, there would be documentation showing that it was incorrectly prepared, and that it was discarded or replaced. Thus, someone could go back and trace the whole history of a particular study.

Shostak:                      Were these motivated in part by any lawsuits or litigation?

Melnick:                      I don’t remember any lawsuits at that time.  There were groups that would go back to examine NTP data, but this wasn’t done just by NTP.  I think it was in the whole field of toxicology, where certain questionable actions were taken. For example, if an animal died in a study, the laboratory might introduce a new set of animals to replace the original animals on the study. This introduces a new variable that we wouldn’t want.  For example, if three months into a study 10 animals died and the laboratory bought animals of the same age and put them into the study, they would have introduced a new variable.  This might not be apparent at the end of the study if the documentation wasn’t there.  In certain cases, the tissue might be sampled for histopathology, and the rest was thrown out. However, we wanted to save all of the tissue in case we wanted to resection it.  There were these kinds of things that came up that required better documentation.

There were concerns in some laboratories about what we call slide-block match-ups that were not perfect.  This was verification that a slide was made from a particular block of preserved tissue. Let me backtrack a little bit.  You start with an animal in a study, and when the animal is sacrificed, the tissues are removed and preserved.  A numerical identification is given to the embedded tissue. Slides are then prepared of sections of that particular tissue.  How do you know that that slide came from the animal that you thought it did?  To avoid discrepancies, greater attention was paid to slide/tissue block matches to be sure that the tracking was as perfect as possible.

Shostak:                      You said that another change was that there was an increase in the number of endpoints?

Melnick:                      Right.

Shostak:                      Can you tell me more about that?

Melnick:                      A lot more clinical chemistry, for example, which wasn’t included previously.  Initially the program was largely looking for tumors at the end of a certain period.  A lot more attention was later paid to the non-neoplastic diagnoses. So, these studies became more chronic toxicity evaluations.  The sub-chronic toxicity studies were also improved, including clinical chemistry evaluations and greater attention to the histopathology.

Shostak:                      Okay.  Before I switch topics, are there other changes in the evolution of the NTP that have been significant?

Melnick:                      Well, yes.  In the late ‘80s, there were some changes made in the directors or deputy directors of the NTP, who initiated changes in the organization of the program.  There was a period there where I myself was not particularly pleased with the direction that the program was going.

Shostak:                      How did it change course?

Melnick:                      I think there’s always differences of opinion among people on the focus of the NTP; changes at that time made the program more pathology-oriented.

Shostak:                      As opposed to . . .

Melnick:                      More than a toxicology-oriented type of program.

Shostak:                      Help me understand how you conceptualize the differences.

Melnick:                      Well, the pathologists were placed in charge of running the program.  I was supposed to report to pathologists rather than toxicologists.  There were also personal issues that were going on at the time.

Shostak:                      As a sociologist, my understanding of the difference between these two fields is constantly evolving, but when I hear you say that, what I think of is the difference between kind of an accounting of outcomes and lesions on the pathology side, and more of a kind of focus on how things work on the toxicology side.  Is that the kind of distinction you’re making?

Melnick:                      That’s pretty good.

Shostak:                      Okay.  Good.  I’m learning.

Melnick:                      So actually, I left the NTP around that time and worked in the Laboratory  of Molecular Carcinogenesis for a few years.  Personally, I took a little deviation, and at that time I focused more on pressing issues of understanding relationships between toxicities and carcinogenicity, cell proliferation, and the potential role of  alpha-2u globulin in  kidney carcinogenesis. In the mid-1990s, George Lucier became head of the program and he wanted me to return to NTP and I did.  So there was about a four- or five-year period where I was semi-separated from the program; I still retained a few of my NTP chemicals and I worked on those as well, but not as a member of the NTP.

Shostak:                      And the Laboratory of Molecular Carcinogenesis was Carl Barrett’s lab.  Is that correct?

Melnick:                      Yes.

Shostak:                      And were there any explicit links between that laboratory and the NTP, or just . . .

Melnick:                      Informal.

Shostak:                      Okay.  My perception is that Dr. Barrett was one of the folks who was most interested in introducing mechanistic studies to the NTP program, that he had a vision of toxicology that included . . .

Melnick:                      Yes.  I think Carl probably wanted to see a more mechanistic perspective within the toxicology program rather than just looking for histopathologically defined endpoints.

Shostak:                      And where would you say, how has the NTP kind of changed in the past, let’s say, 10 years since you’ve been back in the program?

Melnick:                      I believe George became head of NTP around ‘94.  I’m not sure of the year, ‘93, ‘94.  He had an interest in bringing toxicokinetic modeling into the program.  This was for the purpose of characterizing the internalized dose or the organ-specific dose, rather than just using exposure as a measure of dose. This interested me quite a bit. I was working with modelers at that time in terms of how to provide this type of information.  That was, to me, an exciting new addition to the program.  We now include toxicokinetics in NTP studies.  This was not done in the early ‘90s or at all in the 1980s. If there’s a justification and you have an idea of what to look for, additional studies are included. These endpoints are typically added to the study design to examine factors possibly contributing to the toxicity or carcinogenicity of the agent under study. For example, these have included evaluations of cell proliferation.  For inhalation of particulates, lung levels of particles during the course of the study and after exposure have been measured to see how fast they are cleared. We have also performed a few studies on animals exposed during gestation to see whether that might contribute to a greater lifetime risk of cancer.

Shostak:                      That’s so interesting to me, and, again, just as a relative outsider to the field but an avid observer of it.

Melnick:                      The program was largely established to address occupational and environmental health risks. From the occupational perspective, where exposures could be higher, the experimental design was meant to mimic occupational exposure scenarios.  So for inhalation, this included exposures of five days a week, six hours per day.  Animals started at approximately six weeks of age, when they’re entering puberty, and exposures continued for two years. This brings them close to a 55- to 60-year-old, or near retirement age.  So the window of exposure was largely the working lifetime of an individual. The issue of gestational or neonatal exposure is also very important, but this can expand the size of a study significantly.

Shostak:                      So you would study the gestational exposures both independently and then in conjunction with . . .

Melnick:                      You could.  In some cases yes, in some cases no.

Shostak:                      And when we talk about the studies in the animals, are you referring consistently to a two-year rodent bioassay of some sort, or are there a variety of different kinds?

Melnick:                      My focus has been mostly on two-year studies of carcinogenicity.

Shostak:                      Okay, great. So, changes in the NTP that are the focus of this research project are the initiatives in transgenic mouse models or genetically modified models.

Melnick:                      The program has always been interested in looking at alternative methods, other short-term tests that might provide indications of risk.  These are extremely valuable in screening and prioritizing chemicals.  So, for example, if you had 50 chemicals and you could only study 10, which ones would you choose?  The thinking in the program is largely based on indications of human exposure.  Chemicals with evidence of human exposure would be your top choices.  For another chemical, perhaps one with less information on exposure, if there are indications that it might be a bad-acting chemical, that could bump it up to a higher priority.  So, for example, if the Ames salmonella test showed evidence of genotoxicity, then, because salmonella-positive chemicals have an 80 percent or so likelihood of being carcinogens, this finding might raise the level of concern for that chemical. Over the past several years a number of short-term tests have been evaluated for screening and prioritization purposes.

Shostak:                      I’m sorry.  Would you mind just kind of, for my own education, going through them?  I’m familiar with the Ames salmonella tests.  What were the other tests that have been significant?

Melnick:                      There are other tests.  These include tests for induction of chromosome aberrations, sister chromatid exchanges, micronuclei, or cell damage.  Most of the short-term tests are designed to identify indicators of genotoxicity.  This is probably largely based on the understanding that cancer is a clonal disease with some genetic component to it. If a chemical alters the genes in a cell such that it can grow out independently, this could be an important event in the cancer process. Therefore a chemical which causes genetic damage has a higher probability of contributing to the cancer process. It was also found that about 50 percent of the chemicals that were negative in salmonella were positive in the cancer bioassay. Thus, the salmonella mutagenicity test was a decent predictor of carcinogens, but the salmonella negatives raise a bigger concern; we couldn’t say much better than 50-50 that a negative was a non-carcinogen. This is probably explained by increasing knowledge that there are other mechanisms of carcinogenesis, such as those involving receptor-mediated pathways.  So most of the short-term tests have focused on genotoxicity as a predictor for cancer.
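
As a hedged illustration of the predictive values quoted above (roughly 80 percent of salmonella-positive chemicals turned out to be carcinogens, while roughly 50 percent of salmonella-negative chemicals did as well), the short Python sketch below applies only those rounded figures to a hypothetical batch of chemicals to show why a negative screening result narrows things so little:

    # Rounded figures quoted in the interview, applied to a hypothetical batch of
    # chemicals; illustrative only, not actual NTP statistics.
    P_CARCINOGEN_GIVEN_POSITIVE = 0.80  # ~80% of salmonella positives were carcinogens
    P_CARCINOGEN_GIVEN_NEGATIVE = 0.50  # ~50% of salmonella negatives were carcinogens

    def expected_carcinogens(n_positive, n_negative):
        """Expected number of carcinogens among screened chemicals."""
        return (n_positive * P_CARCINOGEN_GIVEN_POSITIVE
                + n_negative * P_CARCINOGEN_GIVEN_NEGATIVE)

    # Example: 20 salmonella-positive and 30 salmonella-negative chemicals.
    print(expected_carcinogens(20, 30))  # 16.0 + 15.0 = 31.0 expected carcinogens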

Shostak:                      And how did the transgenic models fit into this lineage?

Melnick:                      Transgenic models are typically mice that have an alteration in particular genes, and every cell in the animal has that alteration.  So, of the various transgenic models that have been developed, the two that the program has used are the p53 and Tg.AC models. The p53 model is a plus/minus for the p53 gene; that is, it has one wild-type functional allele and one damaged, non-functional allele.  Since we know the p53 gene product has various roles in carcinogenesis, this was considered to be a model for studying cancer induction. If an agent affected the good allele, then the cells with that damaged good allele would be at risk of developing a tumor; that is, a population of minus/minus cells that do not have p53 function could develop.  So it makes sense that this might be a useful model for chemicals that affect the p53 allele.  Conversely, if a chemical doesn’t damage that allele or produce cells without it, then there would be no reason to suspect that it would be a carcinogen in this model.  So you might see how a genotoxic chemical might be more likely to affect a p53 animal, and a non-genotoxic carcinogen might be negative in the p53 model. The other model that has been used is the Tg.AC model, which has a mutated ras oncogene on a promoter construct. If an agent affects the promoter region and allows cells to express the mutated ras gene, tumors might develop in this model. My negative feeling on the p53 model is that it’s not that much different from the Ames salmonella test, other than that it’s conducted in animals as opposed to bacterial cultures.

Shostak:                      And is there any particular value to treatment in an animal versus treatment in a bacterial culture?

Melnick:                      Yes, you have the full range of absorption, metabolism, distribution of the agent in tissues, and elimination.

Shostak:                      You were described to me as a skeptic in regards to the value of transgenic models, and it would be helpful to me to understand . . .  Well, first, if you agree that is an accurate description of your position, and then, if it is, what the nature of the skepticism is.

Melnick:                      For some genotoxic chemicals, the p53 model produced carcinogenic effects as expected.  However, there are also genotoxic chemicals which were not carcinogenic in the p53 model.  So it’s not perfect in the sense of detecting genotoxic agents. There are certain tissues which have not demonstrated tumor induction in the p53 model, even though the expectation might be that tumors should be produced wherever genotoxic intermediates are present. For example, the liver is an organ where metabolism of an agent produces genotoxic intermediates, and a lot of carcinogens produce tumors in this organ in conventional rodent models. However, in studies in the p53 model, liver carcinogens are not being identified. As I said before, the model is not suitable for non-genotoxic carcinogens.  So as a screen, a negative result would not be reliable, and I would not recommend testing a non-genotoxic chemical in a p53 model. So I’m opposed to using the p53 model as a screen for environmental carcinogens. Why bother studying an agent in an inappropriate model?  Yet, we have done this. So I may come across as a skeptic, because I think we harm ourselves if we report out, as we have done, negative findings of non-genotoxic chemicals in the p53 model.

Shostak:                      And that makes sense; I understand. What about Tg.AC?

Melnick:                      Well, studies in Tg.AC were presented to the NTP board, and I agree with their conclusions on the value of that model.  This is a model that will respond if an agent is working through this particular promoter construct. It is the same as a reporter gene assay used to screen for agents that act through a particular promoter gene. The promoter in this model does not regulate ras expression in animals or humans. Thus, it is an artificial system and not a valid cancer model. An agent will produce a response, and may produce tumors, in this model if it activates the artificial promoter and thereby induces the expression of the mutated ras oncogene.

Shostak:                      So it’s mechanism-specific.

Melnick:                      Right.  Well, all of the transgenic models are mechanism-specific.  The p53 model is mechanism-specific.  If an agent doesn’t affect p53, whether through a genotoxic or non-genotoxic mechanism, it will not produce a tumor response in that model; an agent will produce tumors in this model only if it acts through this pathway. As far as I know, we don’t fully understand the basis for the expression of the ras gene in the Tg.AC model or how to translate a positive finding to a mechanism of carcinogenesis independent of the specific promoter construct on which the Tg.AC model was created.  So if you get a positive, what does that mean?  You may detect tumors and can make statistical correlations. But is this a reliable mechanism of the carcinogenesis process for relating effects in animals to cancer risk in humans, who do not have that particular promoter construct?  I don’t think we can answer that.

Shostak:                      So, then, let’s move to the policy implications of these models, which you said is the topic that you expected me to ask you about.

Melnick:                      Before moving on, you should be aware that there are many other genetically modified rodent models.  It’s not just the two that NTP has used.  There are other available models, including transgenic and knockout mice. I think these models can be very valuable for focused research questions.  For example, knocking out the estrogen receptor gene is informative on the function of the estrogen receptor in normal processes and in abnormal or disease conditions.

Shostak:                      Who has done that?

Melnick:                      Ken Korach, at NIEHS, developed the estrogen receptor (alpha) knock-out mouse. With this model his lab group has studied how an animal behaves and responds to agents when it does not have this estrogen receptor.  So you can address questions that help you understand the role of the estrogen receptor in processes that are estrogen dependent. I think any model may have a value in understanding the role of the particular genes that have been modified; however, I also add a little caution in that sometimes you may over-interpret results because there are interactions between genes.  Products of one gene may influence the expression of another gene; so, if you knock out a particular gene, you’re also influencing other genetic processes which were dependent on the product of the knocked-out gene. Gene-gene interactions need to be taken into consideration when trying to understand the results of any study involving a rodent model with a modified gene. Getting back to the cancer models, depending on the question and the agent, the model may be valuable in terms of making a determination of whether an agent is a carcinogen in that particular model.  You may also get the right answer for the wrong reason, because of the way the model responded.  If a known carcinogen tests positive in such a model, one might claim that there was a good correlation.  But if you look at the data, did it truly correspond to the conventional model, which is now used in risk assessment, in a way that the information can be used similarly to estimate human risk? As an example, consider an agent that produces a lung tumor in the conventional mouse model and skin tumors in Tg.AC mice. There is a correlation in that the agent was carcinogenic in both models; however, can regulatory agencies use the skin-tumor response to estimate risk to the lung?  If an individual in the workplace developed lung cancer, and all we had was the Tg.AC model available, the argument could be made that no study ever demonstrated lung tumor carcinogenesis by the agent to which the worker was exposed.  In this example we wouldn’t have the lung tumor data, and that concerns me.  And, not only do I want to know whether or not an agent was carcinogenic in both the transgenic and conventional rodent models; I also want to know if the same organs were affected.  That’s not saying that I would require organ-organ correspondence for human risk assessment, because there are other biological reasons why you may get differences in response.  But if you see site correspondence, it strengthens the linkage between animal and human extrapolations.  Thus, I would not want to have just skin tumors and have to deal with human risk of cancer in the lung and kidney, or leukemias, etc. I feel uncomfortable with a model such as p53 even if it demonstrates a positive response.  But then again, how would we use that information for risk assessment?  To me, a critical issue is the utility of these models for risk assessment, because it’s at the risk-assessment level where decisions are made in terms of allowable human exposures.

Shostak:                      And what has been the experience, to date, on the utility of these models in risk assessment?

Melnick:                      In one case, with phenolphthalein, we provided FDA with tumor data in conventional animals. When a tumor response was also demonstrated in p53 mice, they felt that that provided confirmatory evidence. To me, the evidence in conventional mice was sufficient for them to act. I don’t think we have adequate statistical methods -- not that they couldn’t be developed -- on how to use transgenic data for assessing human risk. 

Let’s backtrack a little bit on this issue.  The conventional animal carcinogenicity studies involve exposures for two years; that’s about two-thirds of an expected life span.  When determining human risk, what you’re looking at is tumor response versus dose.  But the response is a rate.  It’s tumors per number of animals per unit of time, and the time factor is extended to a lifetime of exposure.  If you use a transgenic animal, the study duration is typically six months instead of two years.  That would imply that if you saw the same percentage of animals with tumors at six months as at two years, the agent was much more potent, because it produced the same tumor effect in a much shorter time. So whereas the conventional mice might show zero tumors at six months, if the transgenic mice show a 30 percent response at six months and the conventional mice show a 30 percent response at two years, then the potency of the effect is substantially greater in the transgenic mice. You may see how arguments could develop, since the carcinogenic process has already been started in the transgenic models. I haven’t seen anyone develop a quantitative risk assessment using cancer data from transgenic mice and then make a public health decision based solely on that data.  So it concerns me that if we provided cancer data from transgenic animals alone, would this information be used to make a public health decision?  The model has to be fully accepted by the regulatory community before we can consider using transgenic mice in place of conventional rodent models. We can’t just provide transgenic data and say, “Here, we’ve got an answer.” To me, the transgenic data have to be as good as or better than what we currently provide. Right now the conventional animals are a default model for assessing human risk. These models are accepted by the regulatory communities, but not necessarily by those who manufacture the products that are being regulated.  As a default model, rats and mice are accepted by the regulatory communities for estimating human risk, and we have methods to do this. Methods for estimating human cancer risk from transgenic studies have not been developed.  Until acceptable methods have been developed, I’m skeptical of just reporting transgenic data.
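
To make the comparison concrete, here is a minimal, back-of-the-envelope Python sketch of the rate argument above, using only the hypothetical numbers from the example (30 percent tumor incidence at six months in transgenic mice versus 30 percent at two years in conventional mice); it illustrates the reasoning only and is not how regulatory potency estimates are actually calculated:

    # Hypothetical numbers from the example above; incidence per month of study
    # is used as a deliberately crude surrogate for potency.
    def crude_tumor_rate(incidence_fraction, study_months):
        """Tumor incidence per month of study (a rough potency surrogate)."""
        return incidence_fraction / study_months

    transgenic_rate = crude_tumor_rate(0.30, 6)      # 0.05 per month
    conventional_rate = crude_tumor_rate(0.30, 24)   # 0.0125 per month
    print(transgenic_rate / conventional_rate)       # 4.0: same incidence in a quarter of the time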

Shostak:                      Is anyone from the NIEHS or the NTP working on the acceptance of this data with the regulatory agencies?

Melnick:                      In terms of quantitative determinations, I haven’t seen that.  Some at FDA indicated at one time that they would accept transgenic mouse data.

Shostak:                      Right.  I talked to Joe Contrera about this.

Melnick:                      And the rat.

Shostak:                      Right.

Melnick:                      What concerns me then is, would they do a quantitative risk assessment on the transgenic mouse data, or would they use the transgenic mouse as a qualitative confirmation of the response in the rat?  If they perform a quantitative risk assessment on only the rat, then this takes the conventional mouse out of the risk assessment picture.  This is a concern because we have seen instances where chemicals behave very differently in rats and mice; for several chemicals there are substantial differences between the rat and mouse in terms of the sites that were affected and the potency of response.  If we didn’t have the mouse data and had just the rat data with qualitative confirmation in transgenic mice, then decisions based on dose-response effects in the rat could have a very large adverse influence on public health.

Shostak:                      Because you’d be missing sites.

Melnick:                      Right, because you’re missing data.  All of the quantitative information for assessing human risk would be based on the rat.  So I would want to see quantitative tools developed for the transgenic mouse and know that they have been accepted for human risk assessment before eliminating the conventional mouse as a second default model. Right now, data from both the conventional rat and mouse are made available, and typically the most sensitive site is the one which is used for risk assessment.  So we don’t want to lose valuable information for risk assessments.

Shostak:                      Right.  I see what you’re saying.

Melnick:                      If we do quantitative assessments on just the rat and qualitative assessments on transgenic mouse data, critical information could be lost.

Shostak:                      Are there ethical issues involved in the development of transgenic models?

Melnick:                      From the animal-rights people or . . .

Shostak:                      From whatever perspective seems reasonable or salient to you.

Melnick:                      Not that I know of.

Shostak:                      Okay.

Melnick:                      But there might be.

Shostak:                      Okay.  People often bunch ethical and policy issues together, so I was just trying to disentangle them.

Melnick:                      In terms of number of animals being used or other reasons?

Shostak:                      Or people talk about, there are some people who have concerns about the modification of life forms; there are some people who have concerns about the patentability.  And these are all things that the people bring up in different ways.

Melnick:                      I haven’t really read much on them.  I imagine it exists.

Shostak:                      But not from where you sit.

Melnick:                      I haven’t, no.

Shostak:                      Okay.

Melnick:                      Now, I’m not saying that I would not want to see any transgenic models used in toxicity or carcinogenicity studies.  For example, I’m designing a study on cell phone radio frequency radiation, and we’re intending to use the conventional animal models. For cell phone use, brain tumor risk is an issue of particular interest.  Jef French has been on my design team, and we’ve talked about the possibility that there might be a transgenic mouse model sensitive to brain carcinogenesis.  So if a model were developed that was demonstrated to be sensitive to brain carcinogens, I have an option built into our RFP to include that particular model in our studies. At this point I’m looking to see if there is a cancer risk at any organ, but we want to really focus on the brain because that’s the organ of greatest concern to the public.  So if there is a transgenic model that may be more susceptible to brain tumor induction, I would use it, because there’s no good hypothesis on why radio frequency radiation should cause a biological effect that would lead to cancer.  So I would want to test that hypothesis by using a more susceptible model, that is, to see if there are biological changes linked to carcinogenesis that show up in a more susceptible model.  But we’re conducting studies in the conventional rodent models as well.  We’ll see what happens. I wouldn’t exclude transgenic models.  It’s how you use them and how you interpret the data that matters.

Shostak:                      The distinction, at a general level, that I hear you making is between using transgenic models to ask specific questions, to look at specific mechanisms, to tap . . .

Melnick:                      Yes, it’s different than using them in place of conventional models. For many agents, I don’t believe you can get the same information from the two particular transgenic models, that is, the p53 and Tg.AC models.

Shostak:                      Okay.  I think I understand the distinction that you’re making.

Melnick:                      Okay.

Shostak:                      You mentioned using animals that may have particular susceptibilities.  One of my curiosities about these transgenic models has been their utility for looking at human susceptibilities to environmental agents.  From your perspective, what’s their potential in that regard?

Melnick:                      Well, there are humans who have some of the genetic defects that a transgenic model may have, such as being heterozygous for the p53 gene.  However, it’s also important to understand risks throughout the distribution of the human population. I don’t think we have a model that would exclusively indicate risks in a sub-population and be capable of being interpreted back to the general population.  I think you want to know that as well.  I believe there is a value in using transgenics to evaluate various human susceptibilities. You can also use genetically modified animals to look at the effects of various metabolic pathways.  For example, if you knocked out particular detoxifying enzymes, such as a GST isoenzyme, you can then study how the metabolic elimination of a particular agent has been affected.  Is there a greater or a lesser tissue concentration of the toxic intermediates?  Or, have other isoenzymes taken over the place of the missing one so that tissue concentrations are not that different?  We can ask these kinds of questions, and I think that’s valuable information because we can use that information to evaluate human variability in tissue dosimetry. The use of genetically modified animals depends on the purpose of a particular study.  If you have a goal and say, “I want to understand the effect on dosimetry of a carcinogen because I know there’s a sub-population that is lacking one of the GST isozymes,” and then create a mouse model that lacks that isozyme, I could then look at the dosimetry in a conventional animal versus one that lacked the gene coding for that isoenzyme and come to some determination of the importance of that isozyme in the clearance of the toxic intermediates.  So to me, that’s valuable.  But that’s using a model to address a specific question. To use transgenic models to screen for carcinogens in susceptible subpopulations could be very expensive because of the numerous genetic susceptibilities that may exist.

Shostak:                      Right, right.  That is one of the proposed uses of these genetically modified models.

Melnick:                      Yeah.  But then the question would be, which agents go on for long-term cancer studies in conventional models?  The ones that were positive or the ones that were negative? I’m concerned about the negative responses, and some people might say we should fully evaluate the negative chemicals.  But then what do we do with the positive chemicals if the studies were not adequate to do a risk assessment?  You have to study them as well.

Shostak:                      Right.

Melnick:                      So that has to be worked out. If the purpose of transgenic studies is to screen chemicals to select the negatives for further study, that’s fine, as long as we know how to deal with the positives.

Shostak:                      How have the NIEHS and the NTP interacted around the development of these models?

Melnick:                      In terms of the resources, I’m not sure.  I believe NTP money is used to support some of the work on determining the feasibility of transgenic models.  The models that have been used by NTP were not developed at NIEHS.  I don’t know if NIEHS money was used through the Extramural Program or if NTP money was used to develop these models.  I really don’t know.

Shostak:                      Are there any initiatives -- the three I’m thinking of in particular are transgenics, environmental genomics, and toxicogenomics -- that have significantly changed work practices at NTP at this time?

Melnick:                      Well, in terms of the transgenics, we’ve started a new series of technical reports on transgenic models, or genetically modified models.

Shostak:                      Modified mouse.

Melnick:                      I don’t mind if there’s a separate series of reports of studies in alternative models. My bigger concern is whether or not we are doing the right experiments.  If we’re reporting out cancer studies of non-genotoxic agents in the p53 model, then I think we could have saved our money by not conducting them.  Again, the same question that I indicated before: if it’s used to screen chemicals so that the negatives go on for further testing, that’s fine, but we also need to know how to use the information from the positives.  I don’t think we can just conduct a study in genetically modified animals and say we’ve done our job, because to me, that’s not adequate.

Shostak:                      I’m intrigued by the phrase you used:   how do you define your job?  What’s the full content?

Melnick:                      Our job is to provide the science for basing policy decisions to protect public health.  That’s what our job is, to provide scientific support for public health decisions.

Shostak:                      And that job in part is what makes the NIEHS and NTP so interesting to me, because there are very few of the National Institutes of Health that involve themselves in regulatory processes.

Melnick:                      Yes, our studies are used in the regulatory process, but they also relate to the issue of disease prevention, which I think NIH does not do enough on.  I think a lot of resources go into treating disease conditions, but if you ask the average person on the street whether they would be more interested in a drug to treat a cancer or research to prevent that cancer from having developed, I don’t think there’s any doubt what the answer will be.

Shostak:                      Right.  No, no.  There’s no question.

Melnick:                      So within the NIH, I think the NTP does probably more than any other program on issues related to disease prevention.

Shostak:                      What you’re saying is that there’s a public health mission -- right? -- in the NTP . . .

Melnick:                      And it’s something that you can’t quantify very easily.

Shostak:                      Right, right.

Melnick:                      How many lives were saved by reducing human exposure?  That’s hard to determine with accuracy.  But we know that there are carcinogenic agents in our environment and workplace, risks are elevated, and it is our goal to provide scientific information that can be used to reduce human risk from environmental agents.

Shostak:                      How does having that more public health-oriented mission shape the NTP?

Melnick:                      That is, as far as I’m concerned, the major mission of the NTP. Its public health orientation is to provide the science for good public health decisions. Our role is to provide the science so that decisions are made that are protective of public health.

Shostak:                      And that necessarily brings NTP into interaction, not only with regulatory agencies, but with industry, right, the people who are producing . . .

Melnick:                      Industry and the rest of the health science community. We try to do the best science possible to identify environmental disease-causing factors and characterize relationships between exposure and disease outcome.

Shostak:                      Even when one set of options is being investigated intensively, there are also other options that are possibilities.  So there’s been an initiative in transgenics as a way of understanding mechanisms by which chemicals cause certain biological responses.

Melnick:                      Which is different than using them to determine whether or not an agent poses a cancer risk to humans, and that’s where it runs into greater uncertainty and problems.  But I agree with the statement that you just made: the use of transgenics is to understand mechanisms by which chemicals cause certain biological responses.

Shostak:                      And I appreciate you kind of returning me to that distinction, because my question for you is basically, are there other approaches that the NTP is pursuing to answer these same questions?

Melnick:                      Well, toxicogenomics is the new area. There has been a lot of heavy salesmanship on the utility of toxicogenomics for determining disease risks; time will tell whether or not this approach lives up to its expectations. Toxicogenomics provides a lot of numbers on multiple gene expression and gene products, but my concern is whether the information obtained from these analyses does as good a job as the conventional animal models for estimating human risk.  Until it can be demonstrated to provide reliable information for risk assessment, I’ll remain skeptical of the value of this new technology. To me, any consideration of new approaches to assessing environmental health risks must do as good a job as current methods, and we need to provide this information in a way that can be used by regulatory agencies.  Right now the conventional mouse and rat models, although not perfect, are used to assess environmental health risks.  To do this, various default assumptions are used to extrapolate findings in animals to estimations of human effects at environmental or occupational exposure levels. The application of a different system of information would require a new set of assumptions. I would not want to trade one set of assumptions for another set of assumptions if the second set has not been demonstrated to do as good a job in protecting public health.

Shostak:                      For a new model to be acceptable for a risk assessment, what are the things you want to see that it can do?

Melnick:                      The new models, based on toxicogenomic information, will lead to predictions of human risk that can be compared to predictions of human risk based on information from the conventional rodent models.  So we need to test the predictions from new systems against predictions from the conventional models.  Are they predicting similarly or are they predicting very differently?  There may come a day when we will know that for certain pathways, toxicogenomic information is highly predictive of human risk.  But because there are multiple pathways of disease causation involved for different agents, it is unlikely that one shoe will fit every condition. We will need a lot more information before judging whether toxicogenomic information is predictive, in a quantitative sense, of human risk. To me, patterns of gene expression that resemble those of other known carcinogens are not good enough.  I would not want to know only that a chemical is predicted with a high degree of confidence to be a carcinogen, because I can get a high degree of prediction from a salmonella test.  I would want to know the site where I would expect a cancer response, and the dose-response relationships, because that is the type of information that is needed for public health decisions.  So if that information could also be obtained from toxicogenomic data, great.  But, as far as I can see, it’s going to be a long road to reach that goal. Another issue concerns the sensitivity of the methodology.  Initially an effect on gene expression could not be distinguished from controls unless the difference was at least twofold.  I think that’s improved to about a 30 percent difference.  But if a gene is up-regulated by 20 percent, which cannot be distinguished from background, over a lifetime this could be a very important biological response in the disease pathway that would be missed.  So there are a lot of issues that need to be worked out: consistency of data, sensitivity, which tissues to analyze, and when to sample. Time of sampling is critical because responses are influenced by circadian rhythms and are dependent on animal age.

Shostak:                      Right.

Melnick:                      So, how many tissue samples would be needed to conduct a thorough toxicogenomics study instead of an animal study, that is, to obtain sufficient site-specific dose-response information?  In a cancer bioassay we look at 40 organs per animal.  Would you look at all of them in a toxicogenomics study?  But also consider just the lung; it’s composed of many different cell types.  Is it necessary to distinguish genomic changes for each cell type, because not every cell type will develop a tumor?  So should you analyze changes only in cells which may go on to form tumors, as opposed to those which have little or no likelihood of producing a tumor? Changes in gene expression occur over a 24-hour day, so how many samples do you need to be able to distinguish natural changes from chemically induced changes?  If you’re going to look for persistent effects, you must take samples as a function of age: did changes develop early, and how are they affected by dose?  If you start multiplying all of these factors together, you wind up with thousands of samples. And at the end, all that you have is information on messenger RNA levels.  You still have to translate that information into activities that led to cancers, and that’s not a trivial exercise.
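
A minimal Python sketch of the multiplication alluded to above; every value is a hypothetical placeholder for a study design rather than an NTP protocol number, but it shows how quickly the sample count grows:

    # All values are hypothetical placeholders, not NTP protocol numbers.
    organs_examined = 40      # organs examined per animal in a cancer bioassay
    dose_groups = 4           # e.g., control plus three exposure levels
    sampling_ages = 5         # ages sampled to separate early from persistent changes
    times_per_day = 3         # samples across the day to account for circadian variation
    animals_per_group = 10    # replicates to distinguish real changes from variability

    total_samples = (organs_examined * dose_groups * sampling_ages
                     * times_per_day * animals_per_group)
    print(total_samples)      # 24000 tissue samples, before any per-cell-type analysis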

Shostak:                      Are you, is the NTP working with the folks at NIEHS who are developing toxicogenomics?  So these are questions that…

Melnick:                      Yeah.  We need to challenge each other so that when toxicogenomics analyses are proposed in NTP studies, they are considered to be feasible and informative.  But once people start talking about using toxicogenomics data to replace the animal bioassay, it must first be demonstrated that this approach can provide information as useful as that from exposing animals for two years.  A major challenge will be to validate the predictions that come out of these alternative approaches.

Shostak:                      As a sociologist, part of what’s so interesting to me about this site is this back-and-forth between the NTP and the NIEHS and in what ways, if any, the two programs kind of shape each other, push each other’s development in various directions.  Have you seen examples of that over the years?

Melnick:                      Not enough.  Last week toxicogenomic studies were proposed to an NTP group. These were studies related to early changes in the development of a liver tumor response. Several important suggestions were made that should lead to a modified study protocol.

Shostak:                      A very speculative question, which is actually my last.  You’ve been working in the field of toxicology for 20-plus years and you’ve seen it change in multiple ways, and we touched on some of them.  I guess it’s a double question.  It’s, what do you see in the future of your field, and what is the role of NIEHS or NTP in creating that future?

Melnick:                      In terms of the future, I suspect there will be greater use of molecular research and genomic information in the NTP.  Hopefully, as more information is obtained on disease processes, more reliable predictions can be made. However, the NTP disease endpoint information will still be needed to test predictions. To me, predictions by themselves are of very low value unless you can demonstrate that your predictions provide reliable answers.  So the NTP has been a great resource of data that would allow the utilization of molecular approaches to understand and characterize environmental disease events. A major focus of research in the non-NTP part of NIEHS is on studies of intermediary pathways of normal and disease-related events.  The NTP provides the environmental component for much of the intramural community at NIEHS. Rather than studying only a couple of chemicals which produce a large response in a pathway that someone is investigating, the NTP studies many other chemicals or agents that may also influence that pathway. Although these agents may not be the most potent in affecting a signaling pathway, they provide an opportunity for linkage between pathways studied at NIEHS and environmental exposures.  And as I mentioned before, the NTP conducts toxicokinetic studies. From these data we can estimate tissue levels of the parent compound or its metabolites. With this information we may be able to bridge the gap between NTP and the rest of the NIEHS intramural community. For example, a proposal has been made to have NTP postdocs work in intramural laboratories to conduct studies on relationships between environmental exposure, toxicokinetic models of tissue concentration, and cellular or molecular events that are being studied in the rest of the NIEHS, as well as outside of NIEHS.  These types of collaborations will bring more environmentally relevant chemicals into studies at the Institute. Another issue is that although many NIEHS scientists are studying the pathways that are believed to be related to disease processes, there’s very little research on these relationships.  With the NTP disease endpoint information, you can now start to link these relationships in terms of their predictiveness.  So I think NTP can provide a very valuable component to the rest of NIEHS, making it more environmental and more health related.  After all, that’s what the E and the H in NIEHS stand for.

Shostak:                      Of course.  That’s really interesting. Is there anything you feel like I should have asked you that we haven’t touched on?

Melnick:                      I thought we did a pretty good job.

Shostak:                      I really appreciate your talking to me.

Melnick:                      From what I recall and heard second-hand, the Bioassay Program developed at NCI in the early to mid-‘70s. In the late ‘70s, Dick Griesemer was the director of the Bioassay Program. Around 1978, Secretary Califano recommended that there be a single national toxicology program composed of the different federal agencies that conduct toxicology research. The creation of the NTP brought together components from NIOSH, FDA, NIEHS, NCI, and CPSC. Dr. David Rall, who was the director of NIEHS, became the first director of the NTP. A major change that occurred at about that time was the transfer of the Bioassay Program from NCI to NIEHS.  I think this was about a $50 million program. Also, a number of research staff positions were moved to Research Triangle Park.  Dick Griesemer led that program at NCI, and Jack Moore led the program at NIEHS. A conflict soon arose concerning who would be the Deputy Director for the NTP.  David Rall wanted to have both Dick Griesemer and Jack Moore running the program.  Dick Griesemer said either he would run it or he wouldn’t join. Consequently, he left the program and headed biological research at the Oak Ridge National Laboratory, and Jack Moore became the Deputy Director of the NTP.  He held this position until he left for EPA in the mid-‘80s.  Subsequent to Jack Moore, Gene McConnell was Deputy Director, and when Gene retired, Dave Rall brought Dick Griesemer back to head the program.  This is part of the early history of the program.

Shostak:                      Would you keep going through the directorship for me?  Who was after Dick Griesemer?

Melnick:                      Bern Schwetz was for a short while, and then George Lucier.

Shostak:                      And then . . .

Melnick:                      Chris Portier.

Shostak:                      Okay, okay.

Melnick:                      I don’t think there was anyone else that I recall.

Shostak:                      Okay.  That’s helpful, too, just in terms of thinking through the interviews that would have to happen to document the NTP’s history.  I will turn this off.

END OF INTERVIEW