Bias in Healthcare AI
In late 2019, Meredith’s routine mammogram showed an area of concern. Both her doctor and an AI (an artificial intelligence program) read her imaging. Her doctor looked at the images and knew she had cancer, while the AI reading wasn’t so clear.
Listen to the episode to hear Meredith explain:
- how AI is taught to read and interpret a mammogram
- the factors a doctor considers when making a diagnosis versus the factors an AI considers
- how bias is introduced into AI
- why she wants a doctor to read her mammogram rather than AI
Meredith Broussard is a data journalist and AI researcher who is associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of several books, including “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech” and “Artificial Unintelligence: How Computers Misunderstand the World.”
Updated on December 21, 2024
Welcome to The Breastcancer.org Podcast, the podcast that brings you the latest information on breast cancer research, treatments, side effects, and survivorship issues through expert interviews, as well as personal stories from people affected by breast cancer. Here's your host, Breastcancer.org Senior Editor, Jamie DePolo.
Jamie DePolo: Hello. As always, thanks for listening.
Our guest today is Meredith Broussard, a data journalist who is associate professor at the Arthur L. Carter Journalism Institute of New York University, research director at the NYU Alliance for Public Interest Technology, and the author of several books, including More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, and Artificial Unintelligence: How Computers Misunderstand the World.
In late 2019, a routine mammogram showed an area of concern. Meredith then had a diagnostic ultrasound. Both her doctor and AI, an artificial intelligence program, read her image results. Her doctor looked at the images and knew she had cancer, while the AI reading wasn't so clear.
She joins us today to talk about AI in healthcare and how it discriminates, as well as offer some questions people might want to ask their doctors about how AI is being used in their healthcare. Meredith, welcome to the podcast.
Meredith Broussard: Thanks, so much, for having me.
Jamie DePolo: So, because we are Breastcancer.org, and this is The Breastcancer.org Podcast, I want to go right into how AI read your mammogram. How did you know AI was used? And did AI also read your ultrasound? Did it read all the imaging?
Meredith Broussard: Well, you have to remember, it was 2019, and we had not had the explosion of interest in AI. You know, there was no ChatGPT at that point. And I do happen to be an AI researcher, but I also am a breast cancer survivor. I mean when you get diagnosed, you freak out. And the way that you freak out is very consistent with your personality, right? So, the way that I freak out is I read absolutely everything I can find on the subject.
And so, I read everything I could find, including my entire electronic medical record. So, tucked away in kind of a footnote somewhere in the record, I found this line that said your scan was read by your doctor and also by an AI. And I thought: Who wrote this AI? What did the AI find? What is the bias embedded in this AI? I had these questions, again, because I'm an AI researcher. But because I also had cancer, I just forgot about it.
So, then, a few months later, after I had recovered, and I’m very grateful to the medical professionals who treated me, I went back and tried to learn more about AI in breast cancer diagnosis. So, I designed an experiment where I would take my scans, which I knew showed cancer, right, because I had had a mastectomy by this point.
Like, I knew there was cancer in the scans. I was going to run these scans through an open-source AI in order to write about the state of the art in AI-based cancer detection. So, the way that these programs work, right now, the majority of them, is they are programs that evaluate static images from your mammogram. So, they're not reading ultrasounds. They're mostly reading mammograms.
Jamie DePolo: Okay, and could you talk a little bit about your experiment? You wrote about it in an article for Wired, and as I said in the intro, it sounded like the AI reading wasn't as clear. If I'm remembering correctly, I believe your doctor did not have good things to say about an AI reading of a mammogram.
Meredith Broussard: Well, I was really confused, at first, and you know, I went to one doctor, and then I went to get a second opinion, and I actually went with the doctor who scoffed at the AI, right?
Jamie DePolo: Okay.
Meredith Broussard: And I went to this doctor, and I said, okay, well, you know, I saw a note that said an AI read my scans, and this doctor said, you don't need an AI to read the scans, it is so obvious that this is the spot of cancer. And I looked at the scans, and it just looked like blobs to me, but I thought, all right, well, the doctor knows what they're talking about.
This is an extremely experienced, expert person. I trust them a lot more than I trust some random AI program written by some random person. And so, I decided to go with the doctor who was a little bit more skeptical about the AI.
So, since 2019, there's been a lot more interest in AI, and every year, the technology does get a little bit better. One of the things that I have heard, recently, is that people are not only getting a report of what the AI found, if an AI is being used to read their scans, but they are also being offered an add-on, like an upcharge: for another $40, you can get an AI to read your mammograms. To me, that's not particularly worth it because I do not think the technology is quite as advanced as people would like us to believe.
One of the interesting things is that, actually, this kind of technology that reads the scans after the doctor reads the scans and identifies areas of concern, this has actually been around since the 1990s, and breast cancer doctors have not found it hugely useful since then.
Jamie DePolo: Okay. So, tell me a little bit about your experiment. I know it was a different AI program than the one that originally read your mammogram, but from what I remember, too, it sounded like the AI wasn't working the way you thought it was going to work.
Meredith Broussard: Oh, yeah. I had so many misconceptions about how this AI would work, which I wrote about because, actually, we all have misconceptions about AI.
Jamie DePolo: Absolutely.
Meredith Broussard: Most of us, when we think about AI, we think about Hollywood, right? We think about the Terminator, and we think about WALL-E, and we think about Star Wars, or Star Trek, which makes sense because our brains are actually better at recalling stories than they are at recalling facts and statistics, right, and Hollywood tells terrific stories.
So, we have these Hollywood ideas about AI, about sentient computers, and what have you, and we expect, when we use AI, that it's going to feel special, right? That it's going to feel somehow transformative, but the reality is that AI is just math. It's very complicated, beautiful math, and what AI programs are doing is they're doing very, very complicated statistics, you know, statistical analysis.
They're doing statistical analysis that shows how likely it is that there is an area of concern in a particular scan, which is way more boring-sounding than, oh, the AI is, you know, is like doing something transformative to my mammogram, right? So, it's really important to dwell in the realm of what is real about AI as opposed to what is imaginary about AI.
So, when I did this experiment, when I really dug into what I expected, I expected that I was going to take my entire medical record, feed it into this program. It was going to do some super sophisticated analysis, and then it was going to give me like, you know, a pop-up, or something, that says congratulations, you don't have cancer, or uh-oh, you probably have cancer, here it is, right?
So, I had these unreasonable expectations. So, one of the things that I wrote about in the Wired story and in my book, More Than a Glitch, is what the realities of AI-based cancer detection are. So, what I did was I took my scans, and I fed them into this program that I downloaded from the Internet. It was by a very prominent breast cancer researcher and used top-notch technology, and I found a bunch of inconsistencies.
So, first of all, there wasn't much documentation for the program, and so, the first time I put my scans in, I was really surprised that it was not taking my whole medical record, that it just looked at one or two static images. And it turned out that the version that I had was really low-resolution, and the program required high-resolution scans. So, I tried to download a higher-resolution version, right? Because if you've been in your electronic medical record, and you've hit the download button, you're like, oh, I have the files, I can take these somewhere.
No, they're not sufficiently high-resolution to be used in high-end medical diagnostic systems. So, I tried to get higher resolution versions of the scans, and it turns out that I could only get them on CD, right? Now, I don't know about you, but I do not have a CD-ROM drive connected to any computer that I own because I have modern computers. But you know who has CD-ROM drives? Doctors' offices.
Jamie DePolo: Doctors' offices and lawyers' offices.
Meredith Broussard: Yeah. So, this was a really interesting and very retro problem. So, I had to use the mail to get a CD. I had to buy a CD-ROM drive in order to get these high-res versions of my own scans, which was pretty shocking, that there was this very low-tech part of the process. And part of the reason is that nobody does this, right?
We do a lot of self-quantification things, right? We have a Fitbit, and we have our Apple Watches, and like we look at the graphs of, you know, how many steps we did, but if you get any more complicated than that, like if you try to, you know, mess with the pedometer inside your Apple Watch, like, you would not be able to access it, right? They deliberately design these things so that you can't get at this information.
And I was the first person to do this kind of self-quantification experiment. I was the first person to write about running my mammogram through an open-source AI, in part because computational breast cancer researchers tend to be men, who tend to not have their own mammograms, right?
Jamie DePolo: Right.
Meredith Broussard: And you know, I was just stubborn enough to want to, like, machete my way through this process. So, another thing I learned in this experiment, after I fought through this issue with file formats, is I thought there was going to be some kind of big announcement, but no. It just drew a circle around an area of the mammogram and gave me a score with the likelihood that the area was malignant.
And then I talked to the researcher who built the program, and he said, no, it's just a score. Okay. So, the reason for this has to do with the medical and legal landscape behind AI and also the money, right? It's all kind of tangled up together. So, if the AI gives a diagnosis, then it is a different category of thing, right? So, medical devices, for example, are regulated by the FDA.
And so, if it gives a diagnosis, well, then you have to think about it as a different kind of, say, diagnostic device, whereas if it's just a score that the doctor can or can't use at their own discretion, then we're talking about a different legal framework. Again, it's 2020 by this point, and the FDA had not developed its rules around AI and medical diagnostics, right? So, it's still all legally ambiguous.
Jamie DePolo: Okay.
Meredith Broussard: So, here's the meaty financial part of it, too, that I also find really interesting. When a doctor at a hospital reads a scan, the hospital gets paid for the doctor's work. When an AI reads a scan at a hospital, the hospital does not get paid. Let's think about who is incentivized to have AI versus doctors reading scans. Well, it's the insurance companies who, you know, are the ones paying, for the most part, for the doctors to read the scans, right?
So, the push toward using more AI in healthcare is not just about technological progress. It's also about various actors being interested in cutting costs. And personally, as someone who was diagnosed with cancer and went through this whole process, I do not want more AI being used, right?
I do not want us to go toward this, you know, this utopian, or this situation that a lot of people imagine, where AI does the diagnosis, and then you just like get a little note in your chart, saying cancer, or no cancer. Like, I want to talk to a doctor. I want to talk to a medical professional. I want them to help me and coach me through it.
Jamie DePolo: Well, let me ask you this. It sounds like -- I'm not sure about the original program that read your mammogram -- but the open-source one that you then ran your scans through was only looking at images. So, it took nothing else about your medical history into account: whether you had a family history of breast cancer, whether you had, say, a genetic mutation linked to breast cancer, all things that could potentially influence how that might be read or whether there should be more follow-up. And I don't know this for sure, obviously, I'm not a diagnostician, but if somebody knew they had a genetic mutation linked to breast cancer, and there was an area of concern, you would be more interested in that because of the higher probability of somebody having breast cancer in that case.
Meredith Broussard: Absolutely, and this speaks to the difference between how a machine diagnoses and how a doctor diagnoses, because a machine is not actually diagnosing. A machine is just looking at a static image, you know, the kind of half-circle image of a breast with some blobs in it, and it's identifying areas that are statistically likely to be out of the range of normal.
And that is totally different than what a doctor does. A doctor does look at the scans, and they also look at your entire medical record, and they talk to you, and they think about your genetics, and they think about your risk factors. So, the AI is not doing all the same things that the doctor does. It's also not making the decision in the same kinds of ways, and so, it's this discontinuity that most people don't think about.
Most people think, oh, the AI is superior to humans, right, because we, you know, have this Hollywood idea. But actually, when we dig into it and we look at what is happening, step by step, the reality is quite different than what most people imagine, and it is not quite as sophisticated as most people imagine.
Jamie DePolo: Well, and the other thing I want to ask you, you're an AI researcher, and my impression is an AI program is only going to be as good as the data that goes into it. So, if the data does not reflect mammograms of society as a whole, then the ability of the AI to detect cancer, or highlight an area of concern rather than detect or diagnose, is going to be flawed because it doesn't have all the information.
Meredith Broussard: You're exactly right. So, there are a number of issues here. One has to do with the diversity of the training dataset, and the other has to do with the up-to-dateness of the dataset, right?
So what's happening in a machine learning situation, right, most of the AI we use nowadays is machine learning, is you take a whole bunch of data, and you dump it into the computer, and you say, computer, make a model.
The computer makes a model, which shows the mathematical patterns in the data, and then you can use that model to do all kinds of cool things, like predict, or make decisions, or generate new text, images, audio, video, what have you, right? So, that's what's happening in ChatGPT. That's what's happening in predictive text in your email or in your Google searches. That's what's happening when machine learning evaluates you for a bank loan. Like, it's the same underlying process.
So, then we can think about, okay, what is the data that is being used to train the AI model that is identifying areas of concern on mammograms? The particular AI that I used was trained on an incredibly extensive dataset. It was the largest dataset of mammogram images at the time, but, as we know, datasets of medical images do not necessarily have sufficient diversity in them, right?
They're generally gathered from one hospital system, not from multiple hospital systems. Well, where the hospital system is located is going to influence the composition of the population who ends up in that dataset, right? Because, especially in the US, we tend to have residential segregation. We tend to have, you know, homogenous populations in certain places. So, we don't necessarily have the diversity in the dataset that we need.
The other factor has to do with the newness of the dataset. So, when you train an AI model, you train it on the data up until point X, right? So, ChatGPT, the first iteration of it that got popular, was trained on data up until September 2021. So, it had no data on anything that happened after September 2021.
So, if there's a newer kind of cancer that gets discovered, it's not in the dataset, or it's not necessarily labeled and identifiable in the dataset. So, you need to go back and retrain the whole thing to adapt to this new information, which is totally possible. It's just that it's extremely expensive, and labor intensive, and we all know how good people are at updating their computational data, which is to say not very good.
Jamie DePolo: So, that makes me ask this question. I guess I was under the impression that if you have an AI program, new data is continually being fed into it. But from what you just said, it sounds like that is not the case. There is a cutoff point, and then the program is used and not necessarily updated, per se, if that's the right word.
Meredith Broussard: Yeah. You have to update it, and basically, AI programs require a lot more care and feeding than most people imagine. It's a human process. It's not like the AI is just slurping up new data on everybody who gets a mammogram, or everybody whose data gets run through it, unless it's specifically constructed like that, which it's generally not, because, you know, HIPAA, etcetera.
Jamie DePolo: Okay. Well, I read a lot of breast cancer studies, and in many cases, data is coming from the National Cancer Database, and it always strikes me that perhaps it needs to be broader, because the National Cancer Database comes from, I think, a number of Commission on Cancer-accredited hospitals across the country. But I'm assuming that some of the local community hospitals, where people with very low incomes likely get their care, are not accredited in that manner. So, as you said, there are whole groups of people whose data is not reflected in these AI programs. Am I understanding that correctly?
Meredith Broussard: Yeah, and there's also just so much we don't know about cancer. There's so much we don't know about genetics, about environmental factors, and one of the kind of points of hubris around AI is the idea that we can make this machine that knows everything that we know right now and can make decisions, right? But when we are in the field of cancer, when we're in like, when we're thinking about immune systems, when we're thinking about genetics, there is so much that we don't know.
And in an area where things are profoundly unknown, and the science is changing every year, well, it doesn't necessarily make sense to codify what we know today in code because the code system is actually less flexible than the human brain, in certain senses. The code is not necessarily going to be updated with the latest information, right?
So, it's a lot easier to update your brain than it is to update a multi-million-dollar machine learning system. Now, the way that AI is generally used in diagnostics nowadays is as an assistant to doctors, after the doctors have entered their diagnosis, right? So, if I am, say, a radiologist who's reading a mammogram, I read it, I enter in my notes, you know, my visit summary or whatever, and then, after that, I would get the score from the AI, or the AI's evaluation, right?
You can set this up different ways, and it's possible that, you know, that different health systems are doing things slightly differently, but from my research at the time that I was writing, doctors were getting the AI results after they entered in their own evaluations.
And one of the interesting papers I found looked at different kinds of cancer docs and looked at what they were doing. And the breast cancer doctors mostly ignored the AI results because they were, you know, they found them unhelpful, whereas the lung cancer doctors mostly said, oh, this is so great, it's the AI validating what I already thought, which was super interesting to me because you tend to think about everybody using the computer the same way.
No. People use computers differently, and you know, the AI's not necessarily valid or helpful for certain body systems, whereas maybe it's more helpful or more useful for others.
Jamie DePolo: Sure. Now, in your latest book, More Than a Glitch, you talk about bias in AI and technology. Do you think there are ways to remove the bias from AI?
Meredith Broussard: So, it depends on the AI, and it depends on the context, right? So, one of the examples that I write about in the book is an experiment that some journalists at The Markup did, looking at automated mortgage approvals and such, right? So, these systems that would take your financial data and say yes or no, you can have a mortgage.
And what they found was that automated mortgage approval systems were 40% to 80% more likely to deny borrowers of color as opposed to their white counterparts, and then in some metro areas, the disparity is more than 200%, right?
So, there is a lot of bias in AI because there's a lot of bias in the world, and that pre-existing bias gets embedded in AI systems, right? So, the automated mortgage approval systems were replicating centuries of financial discrimination.
Now, it's easy to understand how this may be happening in finance, right? Because we're looking at data on who's gotten mortgages in the past, putting that into a computer, and making a model, and the model makes decisions based on who's gotten mortgages in the past, right? It's pretty easy to see.
In breast cancer, it's way more complicated. It's not quite as easy to intuit what is happening, and so, I like using the simpler examples as starter examples so that people can think through all of the potential bias issues. So, in the mortgage situation, we could change the AI so that it offers more mortgages to borrowers of color, right? That would be a way of putting a thumb on the scale. Is anybody doing that? I'm not aware of it, but you know, it is theoretically possible.
So, sometimes, it is possible to reduce certain kinds of bias in AI, and then other times not. So, one of the studies that I came across when I was looking into breast cancer diagnosis found that an AI model trained on data from a particular hospital system achieved a very high degree of accuracy.
But then when the researchers added in data on the patients' race, the results became really, really inaccurate for certain groups, right? So, we had differential accuracy based on race. Now, why is that? Nobody knows because these are scans, right? These are scans of internal organs. To the human eye, there's no difference, but what machine learning is doing is it's looking at the mathematical patterns.
So, it could be looking at something as simple as the patient name, or whether the name of the hospital is in the upper-right corner versus the upper-left corner. Like, that could have some influence on its diagnosis. We just don't know because it's using millions and millions of mathematical data points that human brains cannot comprehend.
Jamie DePolo: But as you said, it's interesting because race is not biological. It's sort of a social construct. So, the fact that when race got added in, the results became inaccurate, maybe it is just that the hospital name was in the upper-right corner, but it's very interesting to me that that happened.
Meredith Broussard: Exactly. Exactly. So, race is a social construct. It is not a biological reality, but it often gets used as shorthand for biology. It often gets used as a kind of cognitive shortcut for doctors, and again, then, it sometimes gets embedded in these technological programs as if it were a biological reality, right? And so, there's this complicated interplay between genetics and predisposition to certain conditions, right?
Like, that is a thing we know to be true. Well, we often use race as a proxy for genetics, which it's not, but in medicine, all of these things are tangled up.
So, my background is that I am mixed race. My dad was Black and my mom was white, and my son looks white, and when he was a baby, I took him to the doctor for this rash. And the doctor said, oh, you know, it's strange that he has this rash, you usually only see this in Black babies. I was like, well, we're Black, and the doctor said, oh, okay, sorry, and you know, prescribed the ointment, and you know, looked kind of embarrassed.
So, I always think about this moment when I think about the way that race is included in diagnostic medicine because when I was diagnosed with breast cancer, I wondered what does this do to my, you know, my survival chances, right? Because Black women are 40% more likely to die from breast cancer than white women.
Well, okay, am I considered white, or am I considered Black for the purposes of that particular statistical likelihood? Which, well, actually, like it doesn't make sense to use race that way because it's all kind of messy. So, I think it's really interesting to look at these issues, to unpack them, and to use AI as a site for unpacking all of these complicated technical, and social, and biological issues.
Jamie DePolo: Right. It's making me think, too, you mentioned a stat about Black women being 40% more likely to die from breast cancer than white women. Well, we also know that Black women are more likely to be diagnosed with triple-negative breast cancer. So, is there some biological thing that we just don't know about that is grouping these women, rather than race, and as you said, race is a shorthand for whatever that biological thing is?
It's interesting to think about because, obviously, not every single person diagnosed with triple-negative breast cancer is a Black woman. So, there are obviously some biological overlaps that, you know, race doesn't play into, but it is interesting to think about, as you said, how this has become this shorthand.
Meredith Broussard: Yeah. It's really interesting. Another myth that's out there that you often run into is the idea that Black women have very dense breasts.
Jamie DePolo: Oh, I haven't heard that one. Okay.
Meredith Broussard: Yeah. So, a friend of mine who had breast cancer was diagnosed in Tennessee, and she went to the doctor, and the doctor said, oh, your breasts are really dense, like, it's surprising that you're not Black because Black women have dense breasts. Like, this is a total myth. Like, Black women's breasts are not different than anybody else's breasts, and guess what? Breast tissue is dense. Like, I don't know anybody who hasn't gone to the doctor, and the doctor has kind of said, you know, a little accusingly, like, oh, it's really hard to read these scans because your breasts are so dense. It's just hard, right? It's hard to see stuff in dense tissue, right? Breasts tend to be dense, right? That's just, that's how they work.
So, that was another example of just running into a very outdated, pseudoscientific notion in the modern world, right? And you know, we all have unconscious bias. We all have these, you know, kind of knuckleheaded things that we've picked up along the way.
And we're all trying to overcome it. We're all trying to become better people, every day, and learn more and change. But we can't see unconscious bias, right? It's unconscious. So, we're always going to run across these things, and the AI developers themselves also have unconscious bias that they then embed in the technological systems that they create.
Jamie DePolo: Yes. Well, finally, and very specifically for our audience, I'm thinking about women going to get a mammogram. We just had the U.S. Preventive Services Task Force update their recommendations. Now, they're finally recommending that mammograms start at age 40 -- I say finally because I have my bias about when mammograms should start and how frequently they should be done -- and there are women now turning 40 going for their first mammogram.
In your opinion, as an AI researcher, should they be asking questions about AI, how it's being used to read a mammogram? I know, often, at least when I go, I don't get the opportunity to talk to the radiologist, but I can talk to the technician.
And they're usually pretty open about how things are read, is it a 3D mammogram, is AI being used, and those kinds of things. Are there things women should ask, you know? Should they look in their medical records and find this out? What would you say?
Meredith Broussard: I'm not sure that it's so common, right now, that people need to be worried. The important thing is to get a mammogram. The important thing is to get the mammogram early, get it often, you know, get it on schedule because one of the things that really surprised me when I went through my breast cancer experience was that they could catch things so early using a mammogram.
Like, a mammogram is a really fantastic, life-saving technology. Like, I wouldn't personally pay the $40 upcharge to have an AI read my scans. Like, it's not that amazing. On the other hand, if you have a lot of anxiety and it's going to make you feel better paying the $40, like, sure, pay the $40. But, it's not so good that I feel like it needs to be an essential add-on to everybody's experience, right?
Like, I'm definitely not going to be the person who says, oh, yeah, we should be using more AI. And if you do want to ask questions about AI, absolutely, but know that it's likely being used as an addition after your doctor puts in their evaluation. So, it is likely not the front line of diagnosis.
Jamie DePolo: And it's probably a different AI program at different facilities? There's not one universal program that reads mammograms, right now, is there?
Meredith Broussard: Exactly.
Jamie DePolo: Okay.
Meredith Broussard: No. It's going to be different in every situation. If you go to different health systems, if you notice, they have different back-end technologies, right? So, like, Epic or MyChart gets used a lot of places. So, like, sometimes you'll go to one hospital, and they're using Epic, and then you go to another hospital, and you get a different Epic account, right?
So, everybody who's using Epic is probably using the same back-end technology. But every hospital has its own set of contracts and its own set of medical technology, and the contract governs the technology, and the contract governs, you know, the availability of the information to other systems.
This is why, often, when you go and get scans, they will give you a copy of the scan on CD at the point of service because the computer systems are not necessarily able to talk to each other, right? We're back to that weird retro-ness of medical technology systems.
Jamie DePolo: Right.
Meredith Broussard: Right. So, it's all very idiosyncratic. You absolutely can ask questions about it. Probably, the staff will not know the nuances of the back end of the medical technology systems, you know? So, you can ask questions; I don't know how far you'll get, and it may not be a very effective strategy. Your doctors, your medical professionals, are generally doing a great job.
Jamie DePolo: Meredith, thank you so much. This has been really helpful and informative, and I really appreciate your insights.
Meredith Broussard: Thank you so much.
Thank you for listening to The Breastcancer.org Podcast. Please subscribe on Apple Podcasts. To share your thoughts about this or any episode, email us at podcast@breastcancer.org, or leave feedback on the podcast episode landing page on our website. And remember, you can find out a lot more information about breast cancer at Breastcancer.org, and you can connect with thousands of people affected by breast cancer by joining our online community.