Dr. Robert M. Cohen Oral History 2005



Dr. Robert M. Cohen

Office of NIH History Oral History Program

Transcript Date:        February 24, 2005

 

This is Claudia Wassmann and today’s date is Thursday, February 24th, 2005.  I’m conducting an interview with Dr. Robert Cohen. 

 

CW:     Yeah, I was going to ask you -- as you’re one of the senior researchers here -- if you could just start by telling me about the early stages of imaging technologies here at NIH and how they evolved over the period of time that you were involved in this.

 

RC:      Well, I was mostly involved in PET, so I really don’t have very much to say about the MRI technologies as they evolved here at NIH.  So I can only concentrate on, or speak directly to, that issue.  The program started around 1980, and you may already have that perspective.  I think Don Tower, who was the chief, the director of the neurology institute, made a decision slightly earlier than that that PET was a very useful technology for brain imaging, and he decided to fund several PET centers -- this new technology -- at several places around the country, and decided to also have a PET scanner placed here at NIH.  The scanners all the centers bought at that time were the same; all the centers bought basically the same scanner.  It was an E-CAT 1 scanner.  You may not be that familiar with PET technology, but the way the technology works is that it’s measuring radioactivity inside the body.  The beauty of this method was that it could quantify, in a way that couldn’t be done before, where the radioactivity was precisely in space, in a very good quantitative way. 

 

Before this, most scanners were able to localize somewhat in space, but the quantitation wasn’t very good, and the reason for that -- the improvement in PET -- is that it uses a certain kind of isotope as a tracer, called positron-emitting isotopes, and that’s why it’s called positron emission tomography.  So basically the way the technology works is that during radioisotope decay a positron is emitted, which is the antiparticle of an electron.  It doesn’t go very far in space before it meets an electron.  They annihilate each other and you form two gamma rays, essentially two light rays, traveling almost exactly opposite each other.  One runs in this direction and one runs in this direction, and so if you have detectors on either side you can determine where along this line the event occurred, so you have these lines in space. 
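[Editor’s note: a minimal sketch, in Python, of the coincidence idea described here -- one annihilation sends two gamma rays in nearly opposite directions, and the pair of detectors they strike defines a “line of response.” The ring geometry, detector count, and event are purely illustrative, not those of any particular scanner; collecting many such detector pairs gives the set of lines along which decays occurred, and the reconstruction mathematics mentioned later in the interview turns those lines into an image.]

```python
import numpy as np

# Illustrative ring of detectors around a circle of radius R (units arbitrary).
N_DETECTORS = 64
R = 40.0
angles = np.linspace(0.0, 2.0 * np.pi, N_DETECTORS, endpoint=False)
detectors = np.stack([R * np.cos(angles), R * np.sin(angles)], axis=1)

def record_coincidence(annihilation_xy, emission_angle):
    """Return the pair of detectors struck by the two back-to-back gamma rays."""
    p = np.asarray(annihilation_xy, dtype=float)
    d = np.array([np.cos(emission_angle), np.sin(emission_angle)])
    hits = []
    for sign in (+1.0, -1.0):              # the two gammas travel in opposite directions
        ray = sign * d
        # Distance along the ray to where it crosses the detector ring.
        t = -p.dot(ray) + np.sqrt(p.dot(ray) ** 2 - (p.dot(p) - R ** 2))
        exit_point = p + t * ray
        hits.append(int(np.argmin(np.linalg.norm(detectors - exit_point, axis=1))))
    return tuple(hits)                     # this detector pair defines one line of response

# One annihilation event a little off-centre:
print("coincidence between detectors", record_coincidence((2.0, 5.0), 0.7))
```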

 

CW:     Was that what made it possible to quantitate it? 

 

RC:      That created the possibility of quantitating it, because one of the big issues in SPECT is you don’t have these two events, you only have one event.  That was the major improvement, and that improvement was not done here at NIH.  There were a number of people who were involved in that.  I don’t know who deserves the most credit for it, but much of the development was done at Washington University in Saint Louis and at the University of Pennsylvania.  So the ECAT was the first commercial version of this technology, and as I say, Don Tower, I think, was the one that was really responsible for deciding to try to get this off the ground.  And so at around 1982 or 1981 maybe, the first FDG PET scans were done here.  I should go back in time a little bit, in the sense that in order to make this useful for studying the brain you had to have both a scanner that could localize in space and quantify radioactive isotopes, and you also had to have a method, a useful procedure, that would utilize that information to tell you something about the physiology of the brain.  So there were two separate events in essence, two separate developments, that had to occur, and they did occur somewhat in parallel and eventually came together to produce the first real PET images, important PET images.  The group that was doing the development of the scanner was David Kuhl and Mike Phelps and -- I’m blocking on the person’s name from Washington University.  He’s dead now. 

 

While that technology was in development, here at NIH Dr. Louis Sokoloff and his colleagues in the laboratory were inventing something called the deoxyglucose method.  This deoxyglucose method was developed by Dr. Sokoloff in order to measure regional glucose metabolism.  Why did Dr. Sokoloff get involved in the development of this method if there’s no PET scan?  That’s a good question.  It is my understanding from Dr. Sokoloff that -- you may want to speak to him as well at some point.  He’s a fascinating individual and he also has a great perspective on imaging technologies.  So if you can, I would very much hope that you will get a chance to talk to him. 

 

CW:     Yeah I want to talk to him.

 

RC:      He has a great memory and has been here -- he was at the University of Pennsylvania originally, but came here at around, I guess, 1954 or something like that.  So he preceded me by at least 23 years [laughs].  But he had come here from the University of Pennsylvania at Dr. Seymour Kety’s suggestion.  Dr. Seymour Kety was the first chairman of the department of the National Institutes of Neurology and Psychiatry.  In other words, when he came there was one institute that covered all the brain disorders.  Whereas today there are a number of institutes -- the Mental Health Institute, the Neurology Institute, the Communicative Disorders Institute, the Drug Abuse Institute and the Alcohol Institute -- all interested in brain disorders, but when Dr. Kety first came here around 1952 there was only one institute responsible for all of this.  Dr. Kety had modified and created something called the Kety-Schmidt method for measuring blood flow.  And so at that time, when Dr. Kety came here, he invited Dr. Sokoloff to join him, and they did some of the first studies here using radioactive isotopes to measure blood flow to the brain and oxygen metabolism, using the scanners that were available at that time, which now you would call SPECT scanners.  They’re single photon scanners, not PET scanners.

 

What basically they were able to do is look at the blood flow and oxygen metabolism in the whole brain, but not in the regions, and what they were able to show at that time, in the early ‘50s, was that in patients with dementia there was reduced blood flow and reduced oxygen metabolism.  They were also interested in measuring other complex disorders like schizophrenia.  When they did or performed their studies in disorders like schizophrenia, they didn’t find any global difference, and Dr. Sokoloff was convinced that this was likely because there might be regional changes.  In other words, there might be an area that has less metabolism and an area that has more metabolism, but when you add it all up for the entire brain it wasn’t any different from a normal individual. 

 

So he had it in the back of his mind that it would be great to develop some method that could be used to look at regionally localized metabolic rates, and he can tell you more about his thinking about this, but it took many years to accomplish.  The real trick was thinking about -- having heard from another investigator about -- a chemical, a moiety, called deoxyglucose, which enters the brain like glucose, enters the cells like glucose, and begins the step of being phosphorylated but doesn’t proceed along the metabolic lines.  It gets trapped in the tissues.  And together with a number of investigators he created a mathematical model to convert the amount of this deoxyglucose in a specific tissue or a specific region into a glucose metabolic rate. 
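[Editor’s note: Dr. Sokoloff’s full operational equation is given in his published papers; purely as a rough illustration of the idea -- converting trapped tracer into a metabolic rate -- the commonly quoted simplified relation for the later fluorodeoxyglucose version can be written:]

\[
\mathrm{CMR}_{\mathrm{glc}} \;\approx\; \frac{C_{p,\mathrm{glc}}}{LC}\, K_i,
\qquad
K_i = \frac{k_1 k_3}{k_2 + k_3}
\]

[Here $C_{p,\mathrm{glc}}$ is the plasma glucose concentration, $LC$ is the “lumped constant” correcting for the differences between deoxyglucose and glucose, and $k_1$, $k_2$, $k_3$ are rate constants for transport into the tissue, transport back out, and phosphorylation -- the trapping step.]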

 

So the brain primarily runs on glucose metabolism, and it wasn’t a very far leap of the imagination to say that there probably is a relationship between what a brain area is doing and how much metabolism is going on in that region.  Just as you would say that the faster the car, the further the car is going, the more gasoline it’s going to use, the more fuel.  The argument was that in those regions of the brain that were more active, the metabolic rate would be higher and the blood flow to the region would be higher.  So Dr. Sokoloff, over a series of very elegant studies, proved just that: that you could establish by these indirect means the electrical activity of a particular area of the brain.  He could do that by showing that if you shine a light into the eye of a rat, for example, the activity of the area of the brain that handles light waves, that handles vision, would go up, and you could show this using autoradiography.  When this method was first started they utilized it in animals with deoxyglucose.  What they would do is give the deoxyglucose to the animal; after the material was trapped they would sacrifice the animal, and then they would slice the brain up and place the slices on film, and the greater the radioactivity in a region, the more the film would darken.  It was essentially equivalent to the idea of a negative, negative photography.  So he would get these very elegant maps of rat brain.  The autoradiography technique was very, very precise.  So we’re talking about essentially 40 micrometers resolution he could look at, this tiny, tiny resolution. 

 

He also contributed the first conversion of these results to a color map.  He doesn’t like to talk about it, but in order to display the rates you could do this in gray tones, but the human brain only appreciates about 80 gray tones or something on that order.  He sort of invented, or adapted, a color scheme to map out which areas had the hottest activity, with red and white being the hottest and blue being the coldest, which is still used today.  So he was involved in doing this method, developed this method, and this method was developing over here at NIH while the technology was developing at the University of Pennsylvania and Washington University in Saint Louis.  So these two methods were being sort of pursued in parallel. 
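[Editor’s note: a minimal sketch, in Python with matplotlib, of the display idea described here -- mapping rates onto a cold-to-hot color scale rather than gray tones. The colormap and the values are illustrative, not Dr. Sokoloff’s original scheme.]

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative "metabolic rate" image in arbitrary units.
rng = np.random.default_rng(0)
rates = rng.uniform(20.0, 80.0, size=(64, 64))

fig, (ax_gray, ax_color) = plt.subplots(1, 2, figsize=(8, 4))
ax_gray.imshow(rates, cmap="gray")         # limited number of distinguishable gray tones
ax_gray.set_title("gray scale")
im = ax_color.imshow(rates, cmap="jet")    # blue = coldest, red = hottest (illustrative choice)
ax_color.set_title("color scale")
fig.colorbar(im, ax=ax_color, label="rate (arbitrary units)")
plt.show()
```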

 

You can, in fact -- you can even go back further and argue that this all would not have been possible if it weren’t for some of the developmental projects that were done for the atomic weapons, because up until the 1930s and ‘40s it wasn’t possible to make positron-emitting isotopes.  The only reason you can make positron-emitting isotopes was the development of the cyclotron, which really depended on that work.  So that occurred a little bit earlier.  Okay, so the two methods were in parallel.  Dr. Sokoloff then got together with people at the University of Pennsylvania and also people at Stony Brook -- I’m not remembering the chemist now, but there was a very good radiochemist at Stony Brook -- to see if they could adapt the 2-deoxyglucose method to humans, and what this required was a conversion of the deoxyglucose to a tracer that could be observed by PET.  So the original experiments of Dr. Sokoloff were done with 2-deoxyglucose.  This could be done with a tritiated molecule that could be seen by film, but the tritium could not be used in human studies.  So for a positron-emitting isotope they had to add a fluorine onto it, a fluorine-18 onto the deoxyglucose.  Al Wolf, who was at Brookhaven National Laboratories, was the one that created the fluorodeoxyglucose as a substitute for deoxyglucose.  Dr. Sokoloff was responsible for the model, and Mike Phelps and others were the scanner people, and they did the first -- the first injections, I think, of fluorodeoxyglucose were done at the University of Pennsylvania.  So it was a combined effort of the Stony Brook people, the NIH people and the University of Pennsylvania people to do this.  I’m sure I’m not giving all the people credit that were involved.  Those just happen to be the names that come to mind right at the moment, but you can go back and get the papers if you want to look at any particular people. 

 

So those were the first studies done, and that was done with -- how did they get started on this?  The old E-CAT scanner had what was called a single slice.  So remember I was saying that with positron-emitting isotopes you have to detect events occurring simultaneously opposite each other.  So the old E-CAT scanner had a ring of detectors, so you could only measure events that were occurring in this slab, within this area.  If you wanted to detect events occurring in other areas of the brain you had to move the patient in the scanner so that a different area of the head was located inside this ring, this thick ring.  So it was sort of like -- this is a thin ring, and if you can imagine this is the scanner, I can only find events here in the brain.  I have to move the finger relative to this to pick up… here I’m picking up new events.

 

CW:     Yeah.

 

RC:      Okay.  So you could move the ring, but that was awkward, so the patient was essentially moved in the scanner.  So a single slice might take 25 minutes to collect, and then you would bump the patient, do another 25 minutes, and then bump the patient and do another 25 minutes.  It could take you 2 hours to do about seven slices, and the resolution wasn’t very good.  But that was the start of the technique, and the first patients, for example, with schizophrenia were scanned here with PET.  Monte Buchsbaum, who was here at the time, started the first scans of patients with schizophrenia, but there were a bunch of other groups as well who scanned some of the first patients in various different disease categories. 

 

CW:     Can I just ask, because you mentioned that when you had the SPECT you could find differences in blood flow but not in a particular region? 

 

RC:      Yeah, I was calling it SPECT; I don’t know what device was actually used.  It was before my time, so I wasn’t here for any of those studies in the ‘50s.  I use the term SPECT meaning that I know they weren’t using positron-emitting isotopes at the time and they didn’t have a device to measure them.  They were using different types of isotopes that only result in what are called single events, but I don’t know what the scanners were called at that time.  Okay, so take it with a grain of salt.  It’s not the modern SPECT scanners that we see today.  SPECT was a very different animal, but I wasn’t around.  I have no idea what those scanners looked like.

 

CW:     When did you come?

 

RC:      I came here in 1977.

 

CW:     ’77.

 

RC:      So this was about 24 years -- 20-some-odd years before I arrived, and I was still in elementary school at the time.   So I don’t know what scanners were used at the time.  They were relatively primitive.  The resolution was -- I don’t think they were three dimensional in nature.  I know at the time that they were doing these studies they really were only measuring whole brain.  They couldn’t divide up the brain into different parts.  Today’s single photon emission tomography scanners are much more sophisticated.  They’re still not as quantitative as the PET scanners, but certainly there’s more sophistication.  Part of this is due to the development of computers, because the algorithms, the mathematics, for converting this information to three dimensions were really invented around 1910 or so. 

 

It’s the same mathematics that’s used for CAT scanners.  The person who -- Hevesy, one of the people that won the Nobel Prize for that -- used those algorithms, those ideas about how to take line data, if you will, and convert it to two or three dimensions.  That same mathematics was used not only in CT scanners, but in PET and now in SPECT scanners as well.  So you could go back again to 1910 and say the start of all of this was then. 
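[Editor’s note: a minimal sketch, assuming the scikit-image library, of the “line data to image” mathematics referred to here -- the Radon transform and its filtered back-projection inverse, which is the same reconstruction idea used in CT, PET, and SPECT.]

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A standard test image standing in for one slice of tissue.
image = rescale(shepp_logan_phantom(), 0.5)

# Forward problem: for each angle, collect the line integrals ("line data")
# that a ring or row of detectors would see at that orientation.
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)

# Inverse problem: filtered back-projection turns the line data back into a slice.
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"reconstruction RMS error: {rms_error:.3f}")
```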

 

The development of the CT scanners certainly aided, in a way, the development of PET scanners.  If you think about it, CT scanners came in around the same time period.  If I go back again to my residency and my externships in neurology, we did not have any CT scanners.  People don’t really recognize what a giant leap forward that was.  Basically people just got skull X-rays, on which you could see very little, and then the development of CT scanners came in the late 1970s / early 1980s, about the same time as PET, once the mathematics and the computers required to do this became available.  In the ‘50s a computer would have filled up this huge amount of space and wouldn’t have had very much power.  One of the most important things in the first PET scanners was that you had this huge computer -- not powerful by today’s standards, but huge.  It was as big as the scanners.  Now you have this tiny box, but that was part of the deal: you had to have computer development occur, otherwise you couldn’t have the PET scanner, because it has to convert all this data on the fly. 

 

You also had to have the development of these particular sensors that could measure these positron-emitting isotopes, and most of that technology was the result of the Japanese company Hamamatsu.  I think they still do most of the work -- or they get most of the work -- for the development of these devices to measure the actual gamma rays as they come out; their specialty is light-sensing devices.  The Japanese themselves didn’t really develop -- you’d think, I mean, that the company that was able to make these detectors would have been the first to make a PET scanner.  The Japanese, as I understand it, were very against any kind of radioactive experiments because of the experience with Hiroshima and Nagasaki.  Their laws and regulations restricting the use of radioactivity in humans were very difficult to deal with, and in fact Hamamatsu always -- I had met the president of Hamamatsu in the ‘80s -- their desire was really to try to invent something that could see inside the brain, and still is today, without using ionizing radiation, just using light sources.  That is what they’re interested in developing, even though they had the most important technology, which is the method to detect these radioactive events.  So you can throw Japan in this as well; they were absolutely critical, because these detectors had to be very small -- the smaller the detector, the higher the resolution.  So much of the technological development of PET has been the result of improving the sensitivity of these detectors and making them smaller and smaller, to allow finer and finer splitting of the brain into tinier and tinier pieces. 

 

CW:     But then it seems that what you want to detect in the brain, and what you want the technology to be, is also determined by your clinical knowledge that dates from before.

 

RC:      Right, and certainly that’s a whole other issue.  The question of course is always: what is PET good for?  Well, PET is good for understanding physiology, but you can only understand physiology insofar as there is meaning to having this measurement.  In other words, what can you do with this measurement?  How can you relate this measurement to the physiology of the brain, or to something important about the brain?  That’s why I emphasized Dr. Sokoloff’s contribution, in that the first tracer that was used was a tracer that could look at a whole bunch of different diseases, because basically looking at the fuel, the general activity of the brain, can be interesting for looking at a whole group of diseases, and he had a very good model -- and this is what I emphasized: the model is used to take the mathematical data and convert that to some kind of physiologically important rate. 

 

The earliest experiments were done for regional glucose metabolism, but you could measure regional blood flow by the method.  That was done with O-15 water, and then subsequently you could look at receptors, which was another big event that I think occurred around 1980 -- I think it was 1984, the first time a PET scanner was used to measure a receptor, a neurotransmitter receptor, in the human brain.  This was done at Johns Hopkins.  So it wasn’t really done here, so I really didn’t talk so much about it, but it was certainly one contribution to this issue.  

 

An important experiment was done here by Mike Chiueh and Ken Kirk, working in collaboration with people in Canada.  I don’t remember the names of the people.  But as I said, the first scans were done with fluorodeoxyglucose, and the question is, what else could you measure that would be very important?  Certainly one of the reasons I got involved initially in PET was thinking that it would be very important in terms of Parkinson’s disease.  And we -- Mike Chiueh and Ken Kirk -- thought that if we could do something similar for the dopamine system as was done for the glucose metabolic system, we would be able to try and diagnose Parkinson’s disease better, and perhaps even be able to look at people before they get the disease, because post mortem, people with even early Parkinson’s disease have a dramatic loss of dopamine neurons, so it was felt that there was probably a threshold of loss before you developed clinical symptoms.  So it could be that by a tracer technique you could see people when they had lost, let’s say, half or 40% of their dopamine, and you could try to treat that and prevent the development of clinical symptoms. 

 

And so the issue is what kind of tracer could you use for that.   L-DOPA, which was the normal treatment for Parkinson’s disease, is an amino acid that’s a precursor to making dopamine, but because it’s a precursor it gets secreted and it gets metabolized; it doesn’t get trapped.  Plus it doesn’t have a positron-emitting isotope on it, just like deoxyglucose didn’t have a positron [unintelligible].  So you had to put a fluorine on it somewhere, and the question is where to put the fluorine on it.   There were several possible places to put it, and the trick was you want to put it in a place where it can’t get chopped off and it doesn’t change most of the properties of L-DOPA, but changes enough of it so that it gets primarily trapped in the brain so that you can then measure it. 

 

So Mike Chiueh, with Ken Kirk, started a series of experiments where Ken Kirk would put the fluorine in different places to determine which was the best location.  It turned out that with one of the places they put it, you had tremendous metabolism in the periphery, so that it got chewed up before it got into the brain.  Then they were able to prove that 6-fluoroDOPA -- putting the fluorine on the sixth position -- was the most useful, and they could show in the animal studies that it does get in and gets treated like L-DOPA.  So that information was very useful.  We didn’t have at that time, as funny as it may seem -- which was part of the reason we didn’t get the first fluorodeoxyglucose -- we did not have a cyclotron here at NIH at the time.  At the time when the E-CAT scanner came in, the nearest cyclotron was at the Navy, across the Anacostia River, so fluorodeoxyglucose could be made and shipped out here, but there really wasn’t a group of chemists to do this kind of work.  So they worked in conjunction with a Canadian group at McMaster University, and they actually made the tracer. 

 

The experiment that really proved that this would work was done by Mike Chiueh in conjunction with them.  They had developed -- Sandy Markey and he and a couple of other individuals had been working with something called MPTP.  I don’t know if you know this story; this is another separate sort of story -- again, science has so many parallel lines.  What happened was that there was this drug that was being -- I should say there were some awful events that occurred in California, in which people who were drug abusers, very young, developed a very severe form of Parkinson’s disease.  Without going into the details of that event, eventually some of the materials that were used by these individuals wound their way here to NIH, and it was discovered eventually.  Langston was the physician or PhD, I don’t know which it was, in California who was involved in working with these individuals.  He sent material here to NIH for analysis.  Through mass spectroscopy NIH investigators -- some of the people being Sandy Markey, Irv Kopin and another person working on the project -- discovered that the people who had made this drug had inadvertently made a compound they hadn’t intended to, called MPTP. 

 

And so this discovery that MPTP could cause Parkinson’s disease led to its use as a model of Parkinson’s disease in monkeys, and it had the virtue that by injecting it into one side of the brain through the carotid artery you could create a lesion on one side of the brain but not the other.  So you had what were called unilateral MPTP models, or you could even create bilateral ones by injecting it into both sides.  The monkey experiments were done here to create these animals that were either semi-Parkinsonian -- unilateral Parkinsonian -- or bilateral.  A monkey was shipped from here to Canada and a scan performed, and sure enough the 6-fluoroDOPA was taken up primarily by the side that was not lesioned and not by the side that was lesioned.  So that was the first experimental proof that fluoroDOPA could be used to, in essence, diagnose Parkinson’s disease.  So that was an interesting development here.  It occurred about 1984 or so.  Firnau invented it.  Firnau, F-I-R-N-A-U, was one of the main investigators at McMaster. 

 

CW:     So that was clearly for you -- you had the disease and the technology was available and you wanted to adapt the technology to the disease you wanted to study. 

 

RC:      Exactly, that was a clear example -- probably, I would guess, the first example.  I mean, Sokoloff would probably argue that he had in mind schizophrenia, or diseases like schizophrenia, with the deoxyglucose method, but it was such a generally useful method that it was hard to argue that it was for one particular disease, whereas this fluoroDOPA was very specific.  It was really targeted for the disease, and I think that’s the first example.  It may still be the only example of one molecule for one disease, because we don’t really have -- I’m trying to think if there’s any other brain disorder where there’s such a clear relationship between the neurotransmitter abnormality and the disease, and I believe that’s really the only one that exists.  The closest other one was thought to be Alzheimer’s disease -- I didn’t do work on that -- and it turns out it really isn’t very selective, as selective as one would think, the cholinergic system.  So as far as I know Parkinson’s disease is the one example.  So it could only be the one that was used that way -- but it was very exciting because it did work that way.   Many of the other tracers that were subsequently developed -- it was hard to argue that they really were selected for any one disease.  They were more fishing expeditions: knowing a transmitter, wanting to find out more information about how that transmitter was operating or how it might be involved in a disease, but we really didn’t have any direct evidence. 

 

The people at Johns Hopkins might say -- they were the first ones to do a receptor, to be able to measure a brain receptor, and they were looking at the dopamine-2 receptor, and it might be, since I didn’t specifically talk about what they had in mind -- the compound they developed is called N-methylspiperone.  It may be that they had schizophrenia in mind for that, but it turns out I don’t think that it is really very selective or that meaningful for schizophrenia.  So I think the fluoroDOPA is probably the only case like that.  There have been a number of tracers now that have been geared to looking at the dopamine system -- both the dopamine-2 receptors and dopamine-1 receptors, and other tracers for looking at this presynaptic site like fluoroDOPA; fluoromethyl tyrosine is an example, one that we’ve used here at NIH and many other groups have used. 

 

So I think those were exciting points in time, and all of this, by the way, was done again with a scanner with a single slice.  There are now scanners with over 200 slices, but at the time that’s what we had.  We had a single-slice scanner.  The E-CAT scanner I think stayed in operation through the mid-‘80s, and then there was a whole series of different scanners.  I think there have been a total of about 12 scanners here.  Rich Carson will probably know exactly the number -- he was the person that was responsible.  Every three or four years or so there would be a major improvement in scanner technology, so the resolution -- okay, the resolution of the first E-CAT scanner in the dimension of the slice, so let’s say the ring is this way, so the brain like that, and you’re basically in plane -- it’s hard to imagine now.  See, the in-plane resolution was 1.8 centimeters -- centimeters.  The out-of-plane resolution was on the order of maybe seven -- maybe eleven centimeters, I don’t know, seven or eight -- so the resolution was incredibly poor.  Now the in-plane resolution with the newest scanners is on the order of 3½ millimeters, so over the 20-some-odd years you’ve gone from 1.8 centimeters to 3.5 millimeters, and almost the same [unintelligible] dimensions.  So when I think about it, it’s really a remarkable improvement, and in addition you can cover the whole brain at the same time.  So not only is the resolution up -- and resolution goes up by the cube; the quantitation really goes up as the cube of the resolution.  So you went from 1.8 centimeters to 3½ millimeters -- you’re talking about over a hundredfold improvement in quantitation -- and in addition you can now look at the whole brain at the same time, so it’s quite remarkable in that sense.  You could argue to a certain extent that the technology is way ahead of the science.
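[Editor’s note: as a rough check of the “over a hundredfold” figure, using the numbers quoted here, the linear improvement cubed is]

\[
\left(\frac{18\ \mathrm{mm}}{3.5\ \mathrm{mm}}\right)^{3} \approx 5.1^{3} \approx 136 .
\]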

 

CW:     Would you say that?

 

RC:      Yeah, I would probably argue that the technology is ahead of the science, because you have this enormous improvement in technology and we really don’t have an enormous improvement in understanding the brain.  So the technology really improved dramatically.  The problem really, again, is that we don’t have a very good way -- in terms of the brain we really haven’t isolated specific physiological processes that we can easily measure with the technology.

So the most clinically useful aspect of the PET scanner is in cancer research now, with fluorodeoxyglucose.  The same FDG, which was the first tracer used, is by far the most used, and really the only one I know of now -- well, there’s been an argument about whether fluoroDOPA should be used in Parkinson’s diagnosis or not, but clearly in terms of reimbursement, the reimbursement and the utilization of PET scanners today is primarily for cancer.  Almost every major hospital in the country now has a PET scanner.  So it went from five PET scanners in the country to hundreds -- hundreds of PET scanners.  It’s used for cancer, which was not the original intent of the fluorodeoxyglucose method, but it’s turned out to be the most useful aspect of it.  Which is one of those great things about science: sometimes you develop things for one purpose, the science was very good, but cancer wasn’t really what it was originally intended for.

One of the pioneers of its use in cancer research was here at NIH, Giovanni Di Chiro, who was a bigger-than-life neuroradiologist here.  He started to use the FDG method to look at cancer in the brain.  What he saw was that most cancers were hot.  That is, in most cancers, or many cancers, the metabolic rate of the cancer was greater than that of the rest of the brain tissue.  So one of the first uses in cancer research was that you had a patient that would come in and undergo a surgical procedure to remove the cancer, and some time later you would want to find out if the cancer had re-grown.  It turned out that a CT scan was not very sensitive for determining whether what was in that area was tissue and debris versus new cancer.  So Dr. Di Chiro would do another FDG injection, and if the cancer was re-growing he would show a hot area starting to come back, whereas if it was debris or non-cancer tissue that was filling in the space, it would be relatively cold, that is, more like normal brain tissue or even lower.  So he started using it for that purpose, and he also came up with the idea -- and this turned out to be correct as far as I understand -- that if you look at the metabolic rate of the cancer in the first place you could determine its virulence.  In other words, the cancers with the highest metabolic rates were the ones that were most likely to [unintelligible] back a bit.  So in other words it was a kind of staging -- in pathology you have staging from one to four, four being the worst kind of tumor -- and he was able to show that if you do the fluorodeoxyglucose method first as a scan, and then after surgery compare the pathology to the fluorodeoxyglucose, the ones with the highest metabolic rates were the grade four tumors.  Glioblastoma is the most common cancer in this category; it had very high metabolic rates.  Over the years it was determined that this higher metabolic rate is observed in many tumors and metastases.  
So one relatively good way to look at and stage a person -- before you do a biopsy, or before you remove or resect, to make a decision about resecting a patient -- is to do a whole-body scan, and you can see tiny metastases, for example, if they exist in the lung, the brain or other places, and it’s quite dramatic. 

 

CW:     That is very interesting.  So what makes it so difficult to understand the brain, if you say that for understanding the brain the technology is way ahead of the scientific understanding?

 

RC:      Probably because of the unit -- in other words, with PET you’re measuring one substance, one process at a time, and there are so many processes going on.  So with PET you’re either understanding a single tiny process or you’re understanding the integrative process.  In other words, the tiny process would be something like how much dopamine is being made during a period of time, or you can look at deoxyglucose or blood flow, which is the integrative process of everything that’s going on in a region at a time, but it’s very hard to look at anything in between. 

 

CW:     So how do you try to get around that difficulty?   

 

RC:      Nobody has gotten around the difficulty; if someone figures out how to do that it would be great.  See what they’re trying to do, for example, with electrodes: you are measuring events in the global brain, or you have an electrode that you stick in and you can measure the events in a single neuron.  It’s turned out that both of those give you some interesting information, but it has been difficult to fill in the levels from the single neuron to the billions of neurons.  You want to get in between that, and what many scientists have done is to try to use multiple electrodes at the same time.  So you measure 200 or 300 cells at the same time, studying them to try to fill in this gap.  So that’s been the issue at the moment -- I think it’s been difficult to go from the single neurotransmitter to the electrical activity in general in the brain, or what’s going on in the brain.  You can’t measure events very rapidly in time, which is one of the problems with PET, and the resolution, although sounding very good at 3½ millimeters, is still relatively large compared to a neuron.  So you need some more clever ideas about how to use the technology.  I mean, there are many new tracers -- people have made a tracer for serotonin-1A and serotonin-2C, and we made one for an opiate receptor -- Johns Hopkins made another one for an opiate receptor.  But nobody has really found an illness, unfortunately or fortunately -- you’re always hoping that if you had a tracer that could measure a specific receptor or transmitter process, you might find an illness that was specifically related to it.  I don’t know of any evidence that specifically relates any of these tracers to an illness.  So that’s the problem.  The fluoroDOPA and the dopamine agents were the pinnacle of this for the dopamine system, and nobody else has come close to that, maybe because there really aren’t any, or any common, illnesses that really relate one transmitter, or one specific physiological process, to an illness.  So it hasn’t taken off. 

 

CW:     Were you surprised that it didn’t work for Alzheimer’s?

 

RC:      Not really.  It does and it doesn’t work for Alzheimer’s, depending on who you ask.  It’s politically a very interesting issue, but the problem with humans, from an experimental viewpoint, is that we’re all so different, and so everything you measure cross-sectionally has a standard deviation of plus or minus 20% or so, 25% -- in your own blood, for example, you have to have quite a bit of change before you would say the person is anemic.  Now if I had measured your hemoglobin every day and all of a sudden I started to see it fall, I would go, “Ah ha! Maybe something is going on here.  You’re either not eating right or you’re bleeding or something else has happened.”  Well, the same thing occurs, unfortunately, for brain metabolism.  The rate of brain metabolism is also variable in people, plus or minus 20 to 25%.  So you do measurements on people and you find, in Alzheimer’s disease, early Alzheimer’s disease, that metabolic rate is reduced in the group.  The problem you get into is making the diagnosis based on that, because there is so much variability, and as you age more variability will be introduced.  So we don’t have that baseline, a continuous baseline measurement, to determine within an individual what his or her variability is and whether or not the metabolic rate in the brain is starting to go down.  I would think that if we had a method, a cheap method, a method that didn’t require so much manpower, so that we could measure people and get a very good idea what their metabolic rate variation is and follow them over time, we might see the start of the process.

We might or might not, and the reason we might or might not is because we’re not measuring -- you know, in Alzheimer’s disease there are two things that probably take place.  One is that there’s probably some dysfunction that takes place in neurons early on, before we recognize it, and then there’s also neuronal drop out -- neurons die -- but the initial part of the process is fairly restricted.  So it only occurs, or we think it only occurs, in the medial temporal lobes to start with, and the neurons dropping out may not initially result in a very dramatic drop in metabolic rate, because metabolic rate is determined in those areas not so much by the neurons themselves but by input.  One of the things that Dr. Sokoloff showed with the deoxyglucose method is that the major driving force for the fuel requirement is not the body of the neurons but the synapses, the efferents going onto the neurons, the communication, because that’s where the electrolyte balance changes.  In other words, as neurotransmitters are secreted you have this change in potassium and sodium currents across the cell membrane, and it’s the requirement to re-pump these ions back across the membrane that creates most of the fuel requirement.  So there’s sort of a situation where, if you lost neurons, other parts of the brain might be trying to respond -- you know, you call somebody and they don’t answer the phone and you keep trying to call; it turns out you get more rings in the house than you would if you connected the first time.

So part of the problem here is that neurons may be dropping out but you may have responses in other parts of the brain that keep the metabolic rate up in the hippocampal area or the medial temporal lobe areas, the underlying cortex, etcetera, so that it may not be so easy to see the earliest stages based on PET.  But we don’t really know, because no one has really done that study of saying, let’s really take individuals and see if we can detect it early by following them carefully over many years.   In other words, it really requires a longitudinal study in people at risk to do this.  So right now it’s really not a practical method to diagnose it.

Now, why are people recommending PET?  There are some centers that do PET FDG studies with people with Alzheimer’s disease.  I guess the notion is that those people that have lower metabolism are more likely to have dementia than those people that don’t.  But primarily it probably correlates with their performance on tests.  I don’t know that it’s really very independent of that issue.  In other words, those people that clinically are performing worse on the cognitive tests -- the ones you’d be more likely to diagnose with Alzheimer’s disease anyway -- are the ones that have the lower metabolism, because what we do know is that the metabolic rate within the Alzheimer patients themselves relates to what kind of problems they’re having.  So if you have a person that’s having more spatial problems, then they’re more likely to have metabolic reduction in the spatial areas of the brain, and one of the investigators here that was involved in looking at that was Tom Chase, as well as Stanley Rapoport.  So I don’t know if it gives you independent information, and the problem with most of the PET studies of Alzheimer’s disease is they have not really removed that issue from the study.  In other words, say, suppose we just use the cognitive information -- how much would the data on FDG really improve the diagnosis?  So I think it’s still one of these things where you might do it on somebody that you weren’t very sure about, and it might get you part of the answer.  In other words, if you had a young person in their 30s or 40s -- which we occasionally have studied Alzheimer’s disease in -- and you have your doubts, you might want to scan them and see if there’s anything unusual about the pattern.  You get so many different patterns -- that’s one of the problems -- usually there are parietal and temporal lobe metabolic losses, but sometimes you see different things occur, with visual areas, etc.
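[Editor’s note: a minimal sketch, in Python with made-up numbers, of the statistical point being made: if metabolic rates vary by roughly 20-25% across people, a group-level reduction of similar size leaves heavy overlap between patients and controls, so a single cross-sectional scan classifies individuals poorly, whereas a decline measured against a person’s own baseline is much easier to detect.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers only: mean rate 100 (arbitrary units), ~22% between-subject
# standard deviation, and a 15% group-level reduction in early disease.
healthy = rng.normal(loc=100.0, scale=22.0, size=100_000)
patients = rng.normal(loc=85.0, scale=22.0, size=100_000)

threshold = 92.5  # halfway between the two group means
print(f"single scan: sensitivity {np.mean(patients < threshold):.2f}, "
      f"specificity {np.mean(healthy >= threshold):.2f}")

# The same 15% drop judged against each person's own baseline, assuming a much
# smaller test-retest variability (here 5 units).
baseline = rng.normal(100.0, 22.0, 100_000)
followup = baseline * 0.85 + rng.normal(0.0, 5.0, 100_000)
print(f"within-person 10% decline criterion flags "
      f"{np.mean((followup - baseline) / baseline < -0.10):.2f} of patients")
```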

 

CW:     Yeah, maybe you could say something about animal studies, because you mentioned the difference in human brains -- is that the same when you scan animals?  Do you find the same variability? 

 

RC:      Yes, there’s quite a bit of variability in animal brains too -- that is, monkey brains.  There is not as much variability in mouse brain or rat brain, but there is still quite a bit of variability.  It is not what you might think.  You might think that in animal brains, since the genes are all the same because you have inbred strains, you wouldn’t have any variation in the animal brain, but there is.  It goes a long way toward answering people who think genetics are everything in the brain.  They are not.  So if you use the 2-deoxyglucose method -- we’re talking about the method where you do autoradiography -- you will find quite a bit of variability even between brothers and sisters, identical twins if you will, in animals.  Not as much variation as in humans.  So if the human is plus or minus 20 to 25%, the animal might be plus or minus 10%.  You get an idea that, yes, in fact there are genetic influences, no question about it, but there is a great contribution from the in utero environment and other environmental factors that we don’t understand.  You get variation in humans -- the same human, if you scan them two weeks later, may be different.  I mean, human blood flow, for example: five minutes later your blood flow is not the same.  So there’s constant variation.  Just like heart rate.  You know, heart rate varies all the time. 

 

CW:     Yes.

 

RC:      So in the blood flow to the brain, it’s normal to have these fluctuations.  We don’t know -- I mean, there’s probably a reason for the fluctuations, physiologically normal, etcetera, but we don’t know how to get rid of these factors in trying to come up with standards.  What it does is it just leads to large variations.   So what have people done in blood flow studies?  What they’ve done is they said, “We’re going to eliminate absolute blood flow because that’s too variable across subjects.  So we’re going to do relative blood flow.”  Most of the activation studies -- looking at which part of the brain activates when you shine a light, and which part of the brain is active when you’re listening to music -- almost all of these studies have been done with relative rates.   What they’ve said is, “We don’t know what to do with this global rate.  We don’t think it means anything for blood flow changes in the brain as a whole, so what we’re going to do is standardize to a unit.  We’re going to say, even though you have a normal -- let’s say your normal is 100 and my normal rate is 20, just to exaggerate it -- we’re going to multiply mine by 2½ times to make it 50 and we’re going to divide yours by two to make it 50.  So we’re going to bring everybody’s rate to a common 50 and then we’re going to look at what the variation across different regions is relative to that 50.”  So we’re making it proportionate.  That’s how people handle the variability; they said, “We just can’t deal with it in a useful way because there’s too much variability in the human,” and as I said, even from minute to minute it’s variable.  One time we did a study where we looked at three different blood flow experiments in the same person without doing any manipulations, and you can see these changes occurring.  We don’t know why that is, but we know it does exist, so that’s just one of the difficulties of working with people.  Probably one of the joys that we have is our individual qualities, and as I said, not everything is the genes.

We used monkeys -- monkeys are about as individual as humans, because they don’t have the same inbred quality.  So monkeys used for a neurotransmitter measurement are very different from each other.   We were doing a study with the aging institute -- this last year was the end of it, but for a few years we were doing some studies of MPTP and the issue of whether or not starving -- I don’t mean starving, I mean a reduced diet -- protects against MPTP toxicity in aging monkeys, and we wanted to get a baseline of their chloramine; we were using chloramine, and FMT, fluoro-methyl tyrosine, as a tracer.  It’s sort of equivalent to fluoroDOPA -- it has some advantages and disadvantages -- and you can look at the tremendous variability from monkey to monkey in their FMT.  They’re kept in the same environment, we kept them on the same diet, and they’re vastly different from each other.  So we got a baseline value on each monkey, and the primary benefit is we did unilateral lesions, so the relationship between the two sides is very close, even if the absolute amounts differ: one monkey might be around 100 -- it would be 105 on one side and 102 on the other side -- and one monkey might be around 50 -- he would be 51 on one side and 52 on the other side.  So the relationship between the two sides was very tight, but the relationships among monkeys were very different.  

So when you give MPTP and you want to look at how much loss of dopamine there is, using a general value is not very useful, but you can use the side-to-side comparison -- you can equate, in essence, everybody to a standard.  But this is the frustration, I guess, with working with the brain and working with these tracers: the absolute values are quite variable.  As I said, we don’t know the answer to why it’s so variable. 
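[Editor’s note: a minimal sketch, in Python with made-up numbers, of the two normalizations described here: proportional scaling of each subject’s regional values so the global mean matches a common reference (the “bring everybody to 50” idea), and, in the unilateral-lesion case, using the intact side as each animal’s own standard.]

```python
import numpy as np

# Made-up regional blood-flow values for two subjects with very different global levels.
subject_a = np.array([110.0, 95.0, 100.0, 95.0])   # global mean 100
subject_b = np.array([22.0, 19.0, 20.0, 19.0])     # global mean 20

def to_common_global(values, reference=50.0):
    """Proportionally rescale so the subject's global mean equals `reference`."""
    return values * (reference / values.mean())

print(to_common_global(subject_a))   # [55.  47.5 50.  47.5]
print(to_common_global(subject_b))   # [55.  47.5 50.  47.5]

# Unilateral-lesion case: express the lesioned side relative to the intact side,
# so each monkey serves as its own standard despite different absolute levels.
intact, lesioned = 105.0, 52.0       # illustrative uptake values for one monkey
print(f"lesioned/intact ratio: {lesioned / intact:.2f}")
```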

 

CW:     So you work here with monkeys, rats and mice -- those are the animal models that you scan?

 

RC:      Right.

 

CW:     Okay.

 

RC:      They now have what’s called -- and it’s another development that took place, not here; a good deal of the development took place at UCLA with Mike Phelps and his colleagues, working for a company that he developed, with Simpalay [spelled phonetically] -- micro PET, which was a device developed for doing rats and mice.  It’s probably better with rats, but you can now scan even rats and mice.  A micro PET device was also developed here by Mike Green.  It’s not commercial, but we’ve used it here.  It’s a very good machine; it’s in commercial development.  So most of the centers would use the micro PET that Michael Phelps worked on with the company, but we have here at NIH the one that Mike Green made, and I think he gave it to a couple of other places as well.  They’ve been very useful for trying to develop tracers, because you don’t necessarily have to use monkeys to start with.  The advantage is that if you want to measure and compare the amounts you’re getting from the scanner to what you see in the brain, you don’t have to sacrifice a monkey, which is the major advantage, both aesthetically and ethically, and also in terms of the amount of work that you have to do and the ability to get animals that are fairly identical to each other.  No two monkeys are very similar to each other, but the mice or the rats can be very similar to each other.  So in the past, before you had these micro PETs, if you wanted to do experiments -- if you wanted to look at the time course of something, then you would have to get two rats here and two rats there and two rats here for each of the time points; every time point you’d use a few animals.  Now with the PET you can get all the time points in a single animal, because you don’t have to sacrifice it and slice the brain to measure the radioactive isotope.  You can if you need to, but you don’t have to, and again there’s variability between animals.  If you have an animal that’s only used for the 20-minute time point, it’s not the same animal that you can use for the 60-minute point, because you’ve already sacrificed it, so you had much greater variation.  So it’s been a real improvement, but then the question is, do you have a model for your illness, if you’re looking at it from the point of view of developing a tracer. 

 

So what’s the most exciting possible tracer at the moment that might be related to an illness?  Probably a tracer for amyloid, and the two groups that have made the most progress on that would be a group at the University of Pittsburgh that we’re collaborating with -- Bill Klunk is one of the people and Chester Mathis is the chemist; Chester Mathis used to be here many years ago.  Chester Mathis and Bill Klunk have developed a compound that they call Pitt, for Pittsburgh, which is purported to bind to amyloid.  This is the structure that is supposedly pathological in Alzheimer’s disease.  UCLA has developed another amyloid tracer that they say also binds to neurofibrillary tangles, which is the second component in Alzheimer’s disease -- that was Mike Phelps; it was a different compound.  And then Ann Clone [spelled phonetically] at the University of Pennsylvania has developed a compound for looking at amyloid.  Those are the only three that have been injected into humans.  There are question marks about all three tracers.  Bob Innis -- we’re collaborating with Bob Innis -- has been trying to develop an amyloid tracer that will improve on the amyloid tracers that are currently available.  Whether that will work, I don’t know.

What’s the problem with these tracers?  The basic problem is how sensitive they are.  It goes back to the fluorodeoxyglucose in Alzheimer’s disease.  You don’t want a tracer where, by the time the tracer tells you the person has the disease, you already know it, because then why did you spend a lot of money and manpower to confirm what you already know?   So that’s the problem with the current amyloid tracers, as far as I can tell: they can detect, or seemingly at least as a group you can detect, differences between Alzheimer’s disease and normal age-matched controls.  The question that seems the most important to my mind is, can you predict who is going to develop the clinical symptoms or not based on the pictures we’ve seen, etcetera.  We have our doubts, but that’s what we’re collaborating with Bill Klunk on: to look at people that we know carry a gene that makes it more likely for them to develop Alzheimer’s disease, as well as people that we know have low levels of amyloid in the cerebrospinal fluid, which makes them higher risk.  We want to see if their amyloid tracer will show more amyloid in their brains than in the individuals who don’t have these properties.  So we’re going to try and test to see how sensitive the method is.  We wouldn’t be doing the experiment if we were sure one way or the other, but I have my doubts that it will turn out that way.  It would be great if it turns out that way for everybody, but we don’t know yet.  In the meantime, working with Bob Mensasis’ [spelled phonetically] group, we have an Alzheimer’s disease mouse model that we got some years ago from Karen Chow [spelled phonetically] at the University of Minnesota, to try to see if the amyloid tracer will work on these mice.  But here we found that even mice -- and this is part of the reason I’m not sure it’s sensitive -- even mice who are 24 months old have lots of amyloid, and we can’t really detect it; we get a very tiny signal with their tracer, with the same Pittsburgh tracer.   That may be because there’s something different about the amyloid in these mice, or it may be that the tracer is relatively insensitive.  
It’s very hard -- we did one mouse who was 32 months old, that’s the longest we’ve ever had a mouse live with Alzheimer’s disease, and there was a bigger signal, but we haven’t been able to develop any mice that old again.  It’s very hard to get mice to live that long, so we’ve been struggling to get more mice that old to test the situation -- and we were trying to develop other models that have more amyloid, to get a better idea of how much amyloid was required.  We’re also working on trying to see if we can get a rat Alzheimer’s model, to see if the amyloid in the rat is more similar to the amyloid in the human.  We don’t really know why it’s different in the rats.  We’re struggling with that issue.

 

CW:     Well that’s great. I think we --

 

RC:      You can call and ask any more questions you want at another time, but I really suggest that if you want a more grounded view than I can give, get in touch with Stan Rapoport and Rich Carson.

 

CW:     Yes, I will talk to them.  

 

End of transcript