Dr. Paula Strassle Oral History 



Paula Strassle, Ph.D., M.S.P.H.

Behind the Mask

July 12, 2022


Barr: Good morning. Today is July 12, 2022. My name is Gabrielle Barr, and I’m the archivist at the Office of NIH History and Stetten Museum. Today I have the opportunity to speak to Dr. Paula Strassle. Dr. Strassle is a staff scientist at the National Institute on Minority Health and Health Disparities (NIMHD), and today she’s going to be speaking about some of her COVID-19 research experiences and activities. Thank you very much for being with me.


Strassle: Yeah! Thanks for having me.


Barr: In the study that looked at racial and ethnic barriers in intent to receive a COVID-19 vaccine, what does an “intent to be vaccinated” mean?


Strassle: That’s a great question. To provide a little context, this survey was conducted between December 2020 and February 2021, and, as you might know, the emergency use authorizations for Pfizer—and I believe Moderna as well—came out in December. When we first initiated the survey, access to a COVID-19 vaccine was relatively theoretical. We specifically asked our participants, “If a COVID vaccine were to become available, how likely are you to get vaccinated?” It was a Likert-scale question, meaning responses went from one to five, with one being not at all likely, and then slightly, moderately, very, and extremely likely to be vaccinated. That’s what we meant by intent. At the time of the survey, or at least when we were designing it, a vaccine wasn’t actually available, so we asked how willing they were to get a vaccine rather than about actual vaccination status, because the vaccines weren’t yet on the market. It was more of a hypothetical question at the time, although during this study, receiving a vaccine went from being a hypothetical to an actual possibility.


Barr: Will you speak about your methodology and analysis, especially considering all the different variables that you looked at?


Strassle: This analysis was done on what we call the CURB survey, which is the “COVID-19’s Unequal Racial Burden” survey. We conducted it at the end of 2020 and the beginning of 2021, and it included, I believe, over 120 questions, asking about everything from vaccination status to financial hardship, job loss, and discrimination.


Barr: Did you help create some of those questions?


Strassle: My lab designed the survey, although we pulled most of the questions from validated survey measures. We tried to pull from other surveys where possible. I actually joined the NIH in December 2020. When I joined the lab, we were just implementing the survey, so I was not around during the design phase as far as choosing which questions to include. The overarching idea was that we really wanted to get a picture of how COVID was impacting the lives of racial/ethnic minorities specifically, but also the United States as a whole. For the survey design, we set quotas or targets for each of the major racial/ethnic groups, which is unique. As you and others might have noticed, in COVID-related research there are often racial/ethnic groups that, due to their relatively small size, get left out of surveys or get grouped into an “other” race category. The goal of this survey was really to make sure that we had enough, for example, [Native] Hawaiian/Pacific Islanders or Alaska Natives/Native Americans so that we could conduct robust and nuanced analyses and really get a picture of what was happening in these groups. Up until this point, relatively little was known about what was going on with them. When other surveys were conducted, these groups were either not included at all, or they were included in such small numbers that you couldn’t really say or do much with the data.

When we designed this study, for every racial/ethnic group we recruited either 1,000 or 500 participants, depending on the group’s size, and we ended up with an overall sample of 5,500 adults. It’s a really big survey—one of the biggest I’ve ever worked with—which again speaks to its strengths, because we have so much data. The way we did our sampling and weighting—without getting too much into the nitty-gritty, although I’m happy to—means that once we apply the weights, these groups are nationally representative. For example, for the thousand Asian participants we have, after weighting and the sample-matching strategy, we expect to represent all Asian adults living in the U.S., so we can make these kinds of large generalizations and get a nuanced understanding of what’s happening as a whole. Especially with surveys that use things like convenience samples—which means you ask the people who are close to you or who are easy to reach to complete the survey—you can end up with bias: people who are more likely to participate in a survey may not be representative of the community or population you’re trying to generalize to. That’s just a little bit about the survey itself.
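[Note: To make the weighting idea concrete, here is a minimal sketch in Python (pandas) of how survey weights turn a raw sample proportion into a population-level estimate. It is illustrative only, not the lab’s actual code; the column names, weights, and responses are all hypothetical.]

    import pandas as pd

    # Hypothetical survey records; "weight" is a post-survey sampling weight
    # and "willing" flags respondents who were extremely willing to vaccinate.
    df = pd.DataFrame({
        "race_ethnicity": ["Asian", "Asian", "Black", "White", "Latino"],
        "weight":         [1.8, 0.9, 1.2, 0.7, 1.4],
        "willing":        [1, 1, 0, 1, 1],
    })

    # Unweighted (sample) proportion vs. weighted (population-level) estimate.
    unweighted = df["willing"].mean()
    weighted = (df["willing"] * df["weight"]).sum() / df["weight"].sum()
    print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")

The same weighted-average logic extends to subgroup estimates by grouping on race/ethnicity first.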

As far as this analysis went, we really wanted to look at socio-demographic characteristics, with a focus on race but also looking at age, gender, income, education, language, and English proficiency—looking at how those factors appear to be associated with willingness to get vaccinated. The initial idea when we were doing the survey was that this data might help create programming or outreach targeting communities or populations that might be less willing to get vaccinated, to increase their willingness, ultimately increase vaccination, and thereby help reduce the burden of COVID in the community.


Barr: Can you talk about some of your findings, and can you explain some of the slight differences you and your team noticed amongst these different groups that you sampled?


Strassle: Overall in this survey, we found that about 30% were extremely willing to be vaccinated and 22% were not at all willing, with the remaining 48% being what we’ve labeled “unsure,” meaning they were slightly to very likely to be vaccinated—clearly not a hard yes or a hard no. When we looked at the results, focusing on race/ethnicity, we found that compared to White respondents, American Indian/Alaska Native and Black/African American individuals were substantially more likely to be not at all willing to be vaccinated. On the flip side, we found that Asian and especially Spanish-speaking Latino participants were far more likely to be extremely willing to be vaccinated compared to White adults. Overall, these findings suggest that some racial/ethnic groups have more hesitancy around vaccination. That could be due to historical—and potentially continuing—mistrust in the healthcare system, concerns about the vaccine, or a lack of resources and information about vaccination. But as we now see in the data, on the flip side we have groups who, at least initially when vaccines were being rolled out, were less likely to actually get vaccinated despite being more willing—which to us says that it’s an access issue, not a willingness issue.
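[Note: As an illustration of how a five-point intent scale can be collapsed into the three groups described above, here is a minimal sketch; the responses are made up, and the cut points are simply one reading of the categories named in this interview.]

    import pandas as pd

    # Hypothetical 1-5 Likert responses (1 = not at all, 5 = extremely likely).
    responses = pd.Series([1, 3, 5, 2, 4, 5, 1, 3])

    # Collapse: 1 -> "not at all", 2-4 -> "unsure", 5 -> "extremely".
    groups = pd.cut(responses, bins=[0, 1, 4, 5],
                    labels=["not at all", "unsure", "extremely"])
    print(groups.value_counts(normalize=True))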


Barr: It’s interesting—what are some speculations as to why Hispanic populations were so willing to get vaccinated? They haven’t always been treated so well by the American government.


Strassle: In the survey, we didn’t ask about some of these things directly, but we found that English-speaking Latinos and White adults had roughly the same distribution of vaccine willingness. Traditionally, when research stratifies by English- versus Spanish-speaking, you’re trying, roughly, to get at acculturation, or time in the U.S. Our data actually show this as well: on average, English-speaking Latinos had been in the U.S. longer and were less likely to be first generation. Spanish-speaking Latinos, compared to English-speaking Latinos, were more likely to be recent residents and less likely to be U.S.-born citizens. I’m not sure why they are necessarily more willing, but one explanation could be acculturation, or maybe a lack of time, so to speak, to experience some of the barriers in the healthcare system. The structures that are in place in the U.S. might play a role, but I really don’t have a firm answer on that. It’s really interesting being able to stratify by language preference and get this kind of nuance. This applies to Asians as well, but Latinos in particular are such a heterogeneous group. There’s no one size that fits all within the Hispanic-Latino population—culturally, economically, financially, even as far as where they live—there are so many differences in that population that by lumping them all together you lose some of this information, so being able to stratify [is important]. We also asked about heritage in a lot of these groups, or country of origin if they weren’t U.S.-born citizens, and we were able to look into the nuances comparing Latinos from South America versus Puerto Rico or Mexico or Central America, although the samples get a little smaller when we do that, so the message isn’t quite as clear. We’ve seen consistent differences between English-speaking and Spanish-speaking Latinos, not just with vaccination but across a lot of the studies we’re conducting with this survey—showing that these two groups have more differences than they do similarities, in some respects.


Barr: What actionable items came out of this study?


Strassle: What we hoped this study would do is add to the existing body of literature and better elucidate that a one-size-fits-all approach isn’t really appropriate for trying to increase vaccine willingness and actual vaccination among racial/ethnic minorities in the U.S. Racial/ethnic minorities have different reasons for being unwilling or unvaccinated, so you really need culturally appropriate, tailored interventions and programming. Some populations, like American Indians or Alaska Natives, had higher levels of concern and were less likely to be willing; maybe education campaigns and outreach would be a good fit there. Whereas with Spanish-speaking Latinos, who were among the most willing participants we had, when you see lower vaccination rates in that population, it’s not so much willingness as an issue of being able to access the vaccine—things like being able to take time off work for side effects or physically get to the locations where the vaccines are given. Having vaccine hubs or centers near these communities would probably increase vaccination more than trying to educate them about the vaccine, because they’re already willing to get it; you just need to help them actually be able to.


Barr: How do you think the analysis, or even your survey, would shift now that we’re almost two and a half years into the pandemic, considering the trajectory of COVID vaccines and the change in the political and social landscape since your initial survey? Do you have any interest in reassessing?


Strassle: As part of this CURB survey, we actually did conduct a six-month follow-up. In August 2021, we reached out to all 5,500 participants again and asked them to complete another survey, and in that one we asked about actual vaccination status, as opposed to willingness, because the vaccines were out by then and we could collect that information. We’re still in the process of finalizing the analysis looking at initial willingness and subsequent vaccination status. We generally found that willingness back in December was highly associated with vaccination status six months later, which was somewhat surprising initially, because so much effort had been put into increasing access and education. We’re finding, especially among those who were not at all willing back in December, that only 10% or 11% [Note: after final analysis, 31% of those not at all willing were vaccinated at follow-up; 44% were still not at all willing to vaccinate] of them had actually gotten vaccinated, which is incredibly low, and which shows that unfortunately some of our efforts have been somewhat ineffective. I wouldn’t be surprised if the politicization of the vaccine has played a role and has really hindered our ability to educate and—I won’t say change people’s minds—but encourage individuals about the benefits of vaccination so they feel more comfortable making that decision for themselves.


Barr: Ten or 11% in 5,500 is so low!


Strassle: It was that initial 20% or so who were not at all willing, and the vast majority of them didn’t get vaccinated. In comparison, among those who were extremely willing, almost all have been vaccinated. Among the individuals who were initially unsure—in those middle categories—it was close to 70–75%. Obviously, we saw differences across demographics, and I’m sure access plays a role as well. Individuals made up their minds back in December—when the vaccine was largely unavailable and theoretical—about whether or not they’d be vaccinated. Six months later, when they could see actual people getting vaccinated and had six months of real-world data watching people get vaccinated with relatively minimal side effects—besides maybe feeling a little sick for a day or two afterward—that didn’t change their minds. Moving forward, for COVID and otherwise, we’re really going to need to think about how we provide information and encourage individuals—and really not make it political, because I don’t think that helped matters at all.
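[Note: A minimal sketch of the kind of baseline-by-follow-up comparison described here, with made-up data rather than the CURB results.]

    import pandas as pd

    # Hypothetical linked baseline/follow-up records.
    df = pd.DataFrame({
        "baseline_intent": ["not at all", "unsure", "extremely",
                            "unsure", "not at all", "extremely"],
        "vaccinated_6mo":  [0, 1, 1, 1, 0, 1],
    })

    # Vaccination rate within each baseline-willingness group.
    rates = df.groupby("baseline_intent")["vaccinated_6mo"].mean()
    print(rates)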


Barr: Did you look at people who may have gotten one shot but not the second or people who got the initial doses and just didn’t get the booster?


Strassle: When we did the survey, the boosters weren’t available yet, so we just asked about the [initial] vaccines. We did ask what kind of vaccine they got—whether they got Johnson & Johnson, which meant they only needed one dose, or Pfizer or Moderna (there are others floating around as well, although not in the U.S.)—and whether they had had one or two doses. The vast majority—I want to say 90-plus percent—had both doses at the time of the survey. For the rest, we were unable to determine whether the survey caught them between doses or they had chosen not to get a second dose—although we could potentially look more into that, because we did have some open-ended questions where they could have commented on it. My guess is that we just happened to catch the majority of those people between their first and second dose when we administered the survey.


Barr: When did you begin coordinating your study that looked at COVID-19 related discrimination amongst minorities and marginalized communities in the United States and what were your main objectives? How is this survey supposed to differ from some of the others that have been put out there by other groups?


Strassle: We used the same survey—the CURB survey—to conduct this assessment of the prevalence of COVID-related discrimination. Our main objective was to provide additional evidence, because at that point there had only been a few studies actually trying to measure this. There had been a few surveys, but the vast majority of the evidence we had about COVID-related discrimination was anecdotal—stories about individuals experiencing discrimination. If you hear several stories in the news about discrimination, it’s hard to determine how common it actually is, whether those individuals are being cherry-picked, or whether it’s even more widespread than what we’re seeing in the news—and who’s being impacted. At least up to when we were conducting the survey, and still as we continue on, the focus has really been on the discrimination experienced by Asian and Asian American adults. We wanted to get a more nationally representative estimate of what was going on, both among Asian adults and among other racial/ethnic minority groups, because to date these surveys had really only included Asian, Black, White, and maybe Latino adults. Like I said, there was very little information on Native Hawaiian/Pacific Islanders, Alaska Native/American Indians, or multiracial adults. There was really a lack of information about what they were experiencing, and because of the unique design of our study and its large sample size, we were able to dig into it. We thought it would be important to report not just on Asian adults in our survey, but on all racial/ethnic groups. That was the driving motivation behind this analysis.


Barr: What were some of your findings and what were some of the public health implications from those results?


Strassle: We found that all racial/ethnic minorities were more likely to experience COVID-19-related discrimination. Somewhat unsurprisingly, and unfortunately, we found that Asian adults were most likely to experience it, with, I believe, about one in three [Asian] adults saying they had experienced COVID-related discrimination since the start of the pandemic. However, we also saw that American Indian/Alaska Native and Latino adults were experiencing a high prevalence of COVID-related discrimination, with about one in four adults reporting that they experienced discrimination during the pandemic. From what our data showed, we don’t believe these are one-off experiences: across the board, about half of those who experienced discrimination reported that it happened frequently. It’s not just a one-time occurrence—for a lot of these individuals, it’s occurring fairly frequently when they’re out and about in the world. In general, it really shows that discrimination is more prevalent than people expected, and there are groups experiencing it that we didn’t necessarily anticipate and that haven’t really been discussed. Beyond race/ethnicity, we also found that COVID-related discrimination was associated with lower education, lower income, and limited English proficiency: individuals who speak little or no English, who had lower levels of education, or who had lower household incomes were more likely to experience this discrimination. Part of that may be because education and income are associated with job status—these individuals were more likely to have service jobs, meaning they were still working in the community and in public during the pandemic rather than being able to work remotely from home. They simply had more interactions with the public, which means more opportunity to experience discrimination, but they were also still being targeted. It also goes to show that, as a whole, we could do better to show kindness—the pandemic has been incredibly stressful and scary, but we shouldn’t take it out on others or blame others for a pandemic that is largely out of any individual’s control.


Barr: What have been some challenges in both of these studies that you and your team experienced?


Strassle: Generally, one of the biggest challenges of conducting COVID research right now is that the landscape, as you mentioned, changes so quickly, right? We administered this survey, and it only took us six weeks to collect the data, which is very quick, and we analyzed it within a few months. But by the time we had completed our vaccine analysis, vaccines were widely available—about 50 percent of the U.S. had been vaccinated. While this data is still important to provide, as researchers we’re really having to learn how to balance conducting well-thought-out, nuanced studies with getting information out quickly. It takes time to conduct a survey; it takes time to run these models and analyses and write up the results. Moving forward, it might be in researchers’ best interest—especially with something like COVID, where things change so quickly and information is needed so rapidly—to figure out how to get information out even faster so it can be used most effectively. For some of our other analyses, while important to conduct, speed is maybe less critical. It’s been an interesting conundrum: while we’re trying to do the best work we can, we’re also trying to get the message, knowledge, and data out there so that policymakers, healthcare systems, and individuals can use it to take action and help people. That is something we have really tried to tackle, and at least for me it has been a challenge to reconcile.


Barr: Have there been any opportunities for your team as a result of doing these studies? Have you learned things that you will maybe apply to some of your other work, or are there any connections you’ve made with others?


Strassle: Personally, I’ve been trying to learn and get better at science communication. Effectively communicating and disseminating our results is something I’m constantly still learning how to do. As a researcher, at least traditionally, I was trained to do the analyses, write the paper, publish, and then let others find the work and amplify it. Nowadays, especially with social media, there’s more of a push to do it yourself and get it out there, and that’s something I’m still learning—how to “toot my own horn,” so to speak, or talk about my work in a positive light. Even doing an interview like this, talking about my work, is not something I necessarily thought I would be doing when I was going through my training—but things like this are incredibly important for reaching the public and the communities you’re trying to help, and for increasing awareness and understanding of science. The process is not straightforward, and it’s not quick, as much as we would like it to be. There can be a disconnect between those who are inside the science world and those who are not.


Barr: Definitely. What are your thoughts on data sets being produced around COVID-19 in terms of reproducibility and bias? I know that’s very much a passion of yours, and there’s a lot of that going on right now.


Strassle: Something I am always thinking about is reproducibility, as you mentioned, and the potential biases or issues in big data generally, not just in surveys. Especially when you’re doing health disparities research, as I do, and trying to improve health equity for racial/ethnic minorities and other marginalized populations, you really have to think about who’s being included in these datasets and who’s not. An easy example is when you conduct a survey: if you use certain sources—say, you just post it on social media, or you send out emails or letters—who’s more likely to respond? Who are you missing? Sometimes it’s really easy to see: if you do the survey and notice you don’t have any Native Americans in the study, then you can’t say anything about them because they’re not included. Some of it is more nuanced. For example, we conducted our survey in both English and Spanish, which is why we were able to get such a large Spanish-speaking Latino population. But we didn’t conduct it in any other languages, so there are definitely people we were unable to capture—anyone who didn’t have the English reading proficiency to feel comfortable answering questions in English. You have to think about that. Then with other data, like healthcare or health insurance data, you have to think about who has health insurance, who doesn’t, and who’s more likely to have it. Generally—not just with COVID, but especially with COVID, because things are happening so rapidly—when we get a dataset, regardless of how amazing our analysis is or how well the project has been designed, we really need to think about who is included in the data, who isn’t, and why, and try to fix that so our studies as a whole better fit the communities we’re trying to reach in the U.S. Historically and traditionally, the groups that are hardest hit, have the worst outcomes, and are most likely to be impacted by COVID-19—or really anything else—are going to be less likely to end up in your dataset, for a variety of reasons. Leaving them out needs to be thought about critically, because it’s really important to include in your studies the individuals who are most impacted by what you’re studying. They’re the ones who will probably need the help the most, and might need it first.


Barr: How do you go about doing that?


Strassle: There’s no good answer. It depends on the type of data you’re collecting, and you try to do the best you can. When it comes to conducting surveys, you need to be thoughtful about how you recruit. When it comes to healthcare data, you need to be thoughtful about the restrictions on being included in that dataset—whether it’s insurance claims, a healthcare system’s electronic medical record data, or a surveillance system—and who gets left out. And just be up front about it: who’s missing, why they might be missing, and the implications that might have for your study. If you do a survey and report that 10% of people experience X, Y, or Z, but don’t mention that the population might not be representative of the U.S. as a whole and really only represents a specific group, people might interpret [the incorrect thing] as the truth. In reality, it’s just the truth for a subset of people—which is fine and important to report, right? We don’t always need to include everyone in every study, but you don’t want to incorrectly apply results or findings to groups that weren’t included and assume that things are going to be equal.


Barr: How do you make choices like what languages to translate your studies into? I imagine you would love to do as many as possible, but that’s also difficult in terms of getting your study out there. I’m sure it takes time to translate.


Strassle: Yeah, it definitely takes time to translate. While I wasn’t there for the English-Spanish translation of our survey, I know it took quite a bit of time, several people, and several iterations—not only making sure we translated it into Spanish, but making sure our Spanish and English surveys were asking the exact same questions and that the phrasing didn’t change so much that a question would be interpreted differently. For our vaccine question, we didn’t want one individual accidentally asked whether they are vaccinated and another asked whether they are willing to be vaccinated because of a translation error. Making the questions as identical as possible is incredibly important. It’s just going to be a balance with your goals. Our team is specifically interested in studying the Hispanic-Latino population, so we wanted to make sure we could capture Spanish-speaking Latinos in our survey. If we could have translated it into every major language spoken in the United States, that would have been ideal—but we’d probably still be translating right now, and we wouldn’t have any data to talk about.


There’s going to be a balance between identifying the populations you want to include in your survey or study and doing what is needed to reach them. If we had wanted Spanish-speaking Latinos but only conducted our survey in English, they wouldn’t have responded and they wouldn’t have been in the survey. Translating into Spanish was really a necessity for including them. If there’s a subgroup or population that you really want in your survey, then translation, targeting, or somehow getting them included is going to be critical. You just have to acknowledge your limitations. In every study published with this data, we mention that while it was conducted in English and Spanish, individuals obviously speak other languages—there are hundreds of languages spoken in the U.S. alone. If someone’s English proficiency isn’t at a level where they feel comfortable responding to a survey in English, they’re going to be less likely to participate, and the results might not give a good picture of what they’re experiencing.


Barr: This is a side question: Have you ever done surveys in different dialects? Every language has a lot of different dialects. What kind of results come from those sorts of studies, and how does people’s willingness to respond change when the survey aligns more with how they speak or write?


Strassle: That’s obviously something important to think about when doing translations. I know this survey was translated into a kind of general Spanish that could be understood across several dialects. We actually had Mexican American, South American, Central American, and Puerto Rican individuals look at the survey, and we made sure they were all reading and interpreting the questions the same way, regardless of their family’s dialect. That’s definitely something to think about. Colloquialisms and slang terms should generally be avoided, or you should provide several examples of what you mean. Throughout the study you’ll see places where we ask whether they’ve experienced harassment and then, in parentheses, put “for example, people yelling at you, etc.”—trying to provide more detail and examples so that, hopefully, everyone interprets the question the same way, regardless of the language they’re taking the survey in or their dialect. Even in English there are different dialects, even within the U.S., but generally speaking, when we read, there is a kind of common ground. I don’t speak most languages, but you can usually get to a place where, at least in writing, you have a more general, accepted form of the language, avoiding slang and colloquialisms and going with more standard phrasing. Making sure you include individuals from different backgrounds and dialects and having them read and review your survey before you field it is incredibly important. We piloted our studies: before we recruited actual participants, we had individuals basically “fake take” the survey, and they came back and let us know which questions were confusing or where they were unclear about what we meant, and we would change the phrasing. We did that across age groups, gender, race, ethnicity, and English and Spanish speakers, to make sure there wasn’t any kind of mismatch, as much as possible.


Barr: Did you also translate things like defining “discrimination”? Words like that are just so big. You said you did that for “harassment”; I was wondering if you had to do that for other words?


Strassle: What we tried to do in the discrimination analysis was ask several questions. We asked, “Has anyone ever said threatening remarks to you because they think you might have COVID?” Then we asked, “Has anyone ever bullied you because they thought you might have COVID?” We asked several different iterations of that question and then combined them into a single measure, because we found that people who said yes to one were consistently saying yes to the others. That way we were capturing the same theme, which we broadly describe as discrimination, even though the questions asked about specific actions. Generally, that’s important because people are going to interpret things differently, regardless of what language they’re taking the survey in—regardless of anything. We all bring our personal biases and backgrounds into everything we do. So we generally try to use measures where it’s not just a single question but several questions getting at a single concept; that works better. Then again, our survey was 120 questions, and you have to balance really digging into what’s going on against how many questions you can ask before people get tired and stop answering. You have to prioritize which topics get five or six questions so you can really dig into the concept, versus which ones get only one or two questions and a broad, overarching measure.
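[Note: A minimal sketch of combining several yes/no items into a single “any discrimination” indicator, along the lines described above; the item names and responses are hypothetical.]

    import pandas as pd

    # Hypothetical yes/no item responses (1 = yes, 0 = no).
    items = pd.DataFrame({
        "threatening_remarks": [1, 0, 0, 1],
        "bullied":             [1, 0, 1, 0],
        "avoided_by_others":   [0, 0, 1, 1],
    })

    # A respondent counts as having experienced COVID-related discrimination
    # if they answered yes to any of the component items.
    items["any_discrimination"] = items.max(axis=1)
    print(items["any_discrimination"].mean())  # overall prevalence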


Barr: Have you been involved, or do you plan to be involved, in any other COVID-19 initiatives?


Strassle: Moving forward, COVID is just generally going to hang over everything we do. Even when we look at things like access to healthcare, COVID is still having an impact. Some of the questions in this survey asked about inability to get healthcare, or whether you’d gotten cancer screenings or gone to the doctor for a routine check-up. We’re seeing that people skipped those things, either because the doctor’s office was shut down or because it wasn’t accepting patients—and there was a time when hospitals were only providing emergency services. Those disruptions are likely to have long-term consequences on things that aren’t even related to COVID. We might see changes in rates of cancer or other chronic diseases, both because COVID and potentially long COVID increase their risk, and because people weren’t getting the healthcare they needed during the pandemic, when hospitals shut down other services in order to care for individuals who got sick. As far as COVID specifically, we’re still analyzing this survey. We probably have 10 to 15 projects going on between our trainees, our lab, and our collaborators, all digging into this data. We’re hoping to soon publish on financial hardship, job loss, childcare responsibilities, inability to get healthcare, and mental health during COVID—really digging into a lot of the other things that happened during the pandemic besides the infection itself. Those things are important; they impacted everyone, and they’re likely to have long-term consequences as well.


Barr: Can you talk a little bit about your role and expertise in managing all these different analyses and the launching of these studies?


Strassle: For the CURB survey specifically, I’ve so far been acting as the data curator and one of the lead statisticians on the project. I’ve either conducted the analyses myself or supervised and checked the analyses that everyone else is conducting. Something that took a large part of my time was taking all of the survey data for 120 questions, cleaning it, and creating measures. Going from four or five questions on different factors related to discrimination to a single factor takes a ton of time, and we’ve had to do that for several different things, like mental health measures and financial hardship measures—going from raw data to analytic measures we can actually use for research. I’ve spent a lot of time and effort cleaning the data, which I enjoy, since my background is in big data, data-handling methods, and how to analyze it. That has made me well suited for a role where a dozen different individuals come to the data with really interesting and important questions, and I help them design their study and analysis and plan how best to answer those questions.
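[Note: A rough illustration of the raw-data-to-analytic-measure step described here, with hypothetical column names and coding, not the actual CURB pipeline.]

    import pandas as pd

    # Hypothetical raw export: Likert answers stored as text.
    raw = pd.DataFrame({
        "worry_money": ["Very", "Not at all", "Moderately"],
        "worry_food":  ["Extremely", "Slightly", "Very"],
    })

    scale = {"Not at all": 1, "Slightly": 2, "Moderately": 3,
             "Very": 4, "Extremely": 5}

    # Recode each item numerically, then average into one hardship score.
    clean = raw.apply(lambda col: col.map(scale))
    clean["financial_hardship"] = clean.mean(axis=1)
    print(clean)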


Barr: In addition to being a scientist, you’re also a person who’s been living through COVID-19. How has COVID impacted you as an individual these past two and a half years?


Strassle: Like most people, it’s been hard. I’ve lost family members. I was stuck at home for a long time and am only just starting to go back into the office. I joined NIH in December 2020, and I didn’t meet a single co-worker in person until probably a few months ago—it was all virtual. And while everyone has been incredibly welcoming and warm, it’s definitely not the same. On another personal note, I had to postpone my wedding twice. We finally did it in March of this year, but I was originally supposed to get married in January 2021, and it didn’t happen. Then in January 2022, it didn’t happen again.


Barr: Congratulations!


Strassle: Well, thank you. It’s good to finally be done with it. I can definitely appreciate and feel for everyone else who has tried to plan a wedding, large gathering, or major event during a pandemic—just the uncertainty of not knowing if and when it will end. Beyond the wedding, we’ve been wanting to see people. I haven’t seen my family as much; I haven’t seen my friends as much. Once the vaccines came out, that definitely helped me feel more comfortable, and in the summer, being able to be outdoors has been nice. It’s finally, slowly feeling like things are maybe reverting a little bit back to the way they were, although some things will never fully go back. I’ve definitely been lucky that I’ve been able to work from home throughout the pandemic, and my partner—my husband—has been able to work from home as well. We haven’t lost our jobs or anything like that, so we’ve been lucky in that respect. But I don’t think there’s anyone who can say they haven’t been impacted by COVID.


Barr: Definitely not. Is there anything else that you’d like to share about your professional or personal experiences?


Strassle: Not that I can think of. I’d just generally like to give a shout-out to the other people behind this survey: my supervisor, Anna María Nápoles, Ph.D., M.P.H., who’s the scientific director at NIMHD and who has been the lead on designing this survey and the project; Anita Stewart, Ph.D., who’s at UCSF [University of California San Francisco] and helped design the study; and the trainees in our lab—from postbacs to postdocs to summer interns—who have been doing great work with this data and have come up with some really thoughtful research questions. It’s definitely a team effort; it’s not just me. We all play an important role, both in the CURB survey and in doing this important work looking at the impact of COVID on high-risk and marginalized communities and racial/ethnic minorities in order to improve health equity.


Barr: Thank you for all your efforts and work. I look forward to seeing the results for more of the studies.


Strassle: Thank you so much for having me. It’s been great chatting.