AI-Supported Mammogram Reading Detects 20% More Cancers

Using artificial intelligence to help read mammograms found more cancers than the standard double reading by two radiologists.
Aug 3, 2023
 

Reading mammograms with the help of artificial intelligence (AI) software found 20% more cancers than the routine double reading by two different radiologists and didn’t increase false positives, according to a Swedish study.

The research was published in the August 2023 issue of The Lancet Oncology. Read the abstract of “Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): a clinical safety analysis of a randomised, controlled, non-inferiority, single-blinded, screening accuracy study.”

When a breast cancer screening test shows an abnormal area that looks like a cancer but turns out to be normal, it’s called a false positive. Ultimately the news is good: no breast cancer. But the suspicious area usually requires follow-up with more than one doctor, extra tests, and extra procedures, including a possible biopsy.

 

Using AI to read mammograms

To teach AI software to read mammograms, developers train it on millions of mammogram images. From these, the software builds a mathematical model — an algorithm — of what a normal mammogram looks like and what a mammogram with cancer looks like. The AI system can pick up more detail in each image than the human eye can, and it checks each new mammogram against these learned patterns to flag any abnormalities.

Most professional guidelines recommend that screening mammograms be read twice — once each by two different radiologists — to reduce the chance that a cancer is missed. But double reading takes more of radiologists' time and may increase false positives.

Many radiologists view AI-based mammogram support systems as a second set of eyes, and they are among the most promising applications of AI in radiology.

 

About the study

The ultimate goal of this study, called the MASAI trial, is to see whether AI-based mammogram support systems can reduce the number of interval cancers. Interval cancers are breast cancers found between a screening mammogram with normal results and the next scheduled screening mammogram. They tend to be larger and to grow and spread more quickly than breast cancers found by a routine mammogram.

The results presented here are an early analysis assessing only the safety of using AI to read screening mammograms.

The analysis included 80,020 Swedish women ages 40 to 80 who were eligible for screening mammograms between April 12, 2021, and July 28, 2022. The women were randomly assigned to one of two screening groups:

  • 39,996 women had mammograms that were read with the help of AI technology

  • 40,024 women had mammograms that were traditionally read by two different radiologists

More cancers were detected and more women were called back for additional testing in the AI group:

  • 244 cancers were found and 861 women were called back for more testing among women whose mammograms were read with the help of AI

  • 203 cancers were found and 817 women were called back for more testing among women whose mammograms were read traditionally

These results mean:

  • the cancer detection rate was 6.1 per 1,000 women whose mammograms were read with the help of AI

  • the cancer detection rate was 5.1 per 1,000 women whose mammograms were read traditionally

  • the recall rate was 2.2% for women whose mammograms were read with the help of AI

  • the recall rate was 2.0% for women whose mammograms were read traditionally

Among the women called back for more testing:

  • 28% of the women called back after a mammogram read with the help of AI were diagnosed with cancer

  • 25% of the women called back after a traditionally read mammogram were diagnosed with cancer

The false positive rate was 1.5% for both groups.
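The rates above follow directly from the raw counts reported earlier. As a quick sketch (variable names are illustrative, not from the study), the arithmetic looks like this:

```python
# Raw counts from the MASAI interim analysis
ai_screened, ai_cancers, ai_recalls = 39_996, 244, 861
std_screened, std_cancers, std_recalls = 40_024, 203, 817

# Cancer detection rate per 1,000 women screened
ai_detection = ai_cancers / ai_screened * 1000      # ~6.1
std_detection = std_cancers / std_screened * 1000   # ~5.1

# Recall rate: share of screened women called back for more testing
ai_recall = ai_recalls / ai_screened * 100          # ~2.2%
std_recall = std_recalls / std_screened * 100       # ~2.0%

# Among women called back, share actually diagnosed with cancer
ai_ppv = ai_cancers / ai_recalls * 100              # ~28%
std_ppv = std_cancers / std_recalls * 100           # ~25%

# Relative detection: the ~20% figure in the headline
detection_ratio = ai_detection / std_detection      # ~1.20

print(round(ai_detection, 1), round(std_detection, 1))
print(round(detection_ratio, 2))
```

Note that the "20% more cancers" headline refers to this relative detection rate (6.1 vs. 5.1 per 1,000), not to 20% of all women screened.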

Although the results are encouraging, the researchers noted that the study had several limitations:

  • The analysis was done at a single center and was limited to one type of mammography machine and one AI system. These factors may mean the results can’t be applied widely.

  • Technical factors affect the performance of the AI system, but they matter less than the experience of the radiologists using it. Even with AI support, the final decision about whether to call someone back for more testing rests with the radiologist, so the results depend on the radiologists' performance.

  • In this study, the radiologists were moderately to highly experienced, which means the results may not apply to less experienced radiologists.

  • Information on the race and ethnicity of the women wasn’t collected. So it’s unclear if AI-supported mammogram readings are accurate across all races and ethnicities.

“These promising interim safety results should be used to inform new trials and program-based evaluations to address the pronounced radiologist shortage in many countries,” lead author Kristina Lång, PhD, said in a statement. Dr. Lång is associate professor of radiology diagnostics at Lund University. “But they are not enough on their own to confirm that AI is ready to be implemented in mammography screening. We still need to understand the implications on patients’ outcomes, especially whether combining radiologists’ expertise with AI can help detect interval cancers that are often missed by traditional screening, as well as the cost-effectiveness of the technology.”

 

What this means for you

The results of this study are encouraging. Still, much more research is needed before AI is routinely used to read mammograms. 

A May 2023 study suggested that if an AI support system offered incorrect advice to a radiologist reading mammograms, it could seriously reduce the accuracy of the reading, regardless of the radiologist's experience.

Future studies plan to investigate the best ways of presenting AI information to radiologists so they can use the information effectively and without bias.

— Last updated on October 5, 2023 at 3:12 PM
