Tipping Point for Breast AI?

Have we reached a tipping point when it comes to AI for breast screening? This week another study was published – this one in Radiology – demonstrating the value of AI for interpreting screening mammograms. 

Of all the medical imaging exams, breast screening probably could use the most help. Reading mammograms has been compared to looking for a needle in a haystack, with radiologists reviewing thousands of images before finding a single cancer. 

AI could help in several ways: working at the radiologist’s side during interpretation, or reviewing mammograms in advance to triage the ones most likely to be normal while reserving suspicious exams for closer radiologist attention (indeed, that was the approach used in the MASAI study in Sweden, published in August).
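
For a concrete sense of what that triage step looks like, here’s a minimal Python sketch of score-based routing; the thresholds and the ai_score function are illustrative assumptions, not the MASAI implementation.

```python
# Minimal sketch of score-based triage (hypothetical thresholds and a
# caller-supplied ai_score function; not MASAI's actual implementation).

LOW_SUSPICION_THRESHOLD = 0.02   # below this, the exam is routed as likely normal
HIGH_SUSPICION_THRESHOLD = 0.90  # above this, the exam is flagged for priority review

def triage(exam, ai_score):
    """Route a screening exam based on an AI suspicion score in [0, 1]."""
    score = ai_score(exam)
    if score < LOW_SUSPICION_THRESHOLD:
        return "likely_normal"      # streamlined or single-reader workflow
    if score >= HIGH_SUSPICION_THRESHOLD:
        return "priority_review"    # reserved for closer radiologist attention
    return "standard_review"        # usual reading workflow
```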

In the new study, UK researchers in the PERFORMS trial compared the performance of Lunit’s INSIGHT MMG AI algorithm to that of 552 radiologists in 240 test mammogram cases, finding that …

  • AI was comparable to radiologists for sensitivity (91% vs. 90%, P=0.26) and specificity (77% vs. 76%, P=0.85). 
  • There was no statistically significant difference in AUC (0.93 vs. 0.88, P=0.15).
  • AI and radiologists were statistically comparable on other performance metrics as well (a sketch of how these metrics are computed follows this list).
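
As a toy illustration of how these headline metrics fall out of recall decisions and suspicion scores, here’s a brief Python sketch; the arrays are made-up examples, not PERFORMS data.

```python
# Toy illustration of sensitivity, specificity, and AUC (made-up arrays,
# not PERFORMS data).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])                     # 1 = cancer, 0 = normal
ai_score = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.7, 0.3, 0.2])   # AI suspicion scores
ai_recall = (ai_score >= 0.5).astype(int)                        # recall decision at a chosen cutoff

tp = np.sum((ai_recall == 1) & (y_true == 1))
tn = np.sum((ai_recall == 0) & (y_true == 0))
fp = np.sum((ai_recall == 1) & (y_true == 0))
fn = np.sum((ai_recall == 0) & (y_true == 1))

sensitivity = tp / (tp + fn)           # share of cancers correctly recalled
specificity = tn / (tn + fp)           # share of normals correctly cleared
auc = roc_auc_score(y_true, ai_score)  # threshold-independent discrimination
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  AUC={auc:.2f}")
```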

Like the MASAI trial, the PERFORMS results show that AI could play an important role in breast screening. To that end, a new paper in European Journal of Radiology proposes a roadmap for implementing mammography AI as part of single-reader breast screening programs, offering suggestions on prospective clinical trials that should take place to prove breast AI is ready for widespread use in the NHS – and beyond. 

The Takeaway

It certainly does seem that AI for breast screening has reached a tipping point. Taken together, PERFORMS and MASAI show that mammography AI works well enough that “the days of double reading are numbered,” at least in Europe where it is practiced, as Liane Philpotts, MD, noted in an accompanying editorial.

While double-reading isn’t practiced in the US, the PERFORMS protocol could be used to supplement non-specialized radiologists who don’t see that many mammograms, Philpotts notes. Either way, AI looks poised to make a major impact in breast screening on both sides of the Atlantic.

The Mammography AI Generalizability Gap

The “radiologists with AI beat radiologists without AI” trend might have achieved mainstream status in Spring 2020, when the DM DREAM Challenge developed an ensemble of mammography AI solutions that allowed radiologists using AI to outperform those reading without it.

The DM DREAM Challenge had plenty of credibility. It was produced by a team of respected experts, combined eight top-performing AI models, and used massive training and validation datasets (144k & 166k exams) from geographically distant regions (Washington state, USA & Stockholm, Sweden).

However, a new external validation study highlighted one problem that many weren’t thinking about back then: ethnic diversity can have a major impact on AI performance, and the majority of women in those two datasets were White.

The new study used an ensemble of 11 mammography AI models from the DREAM study (the Challenge Ensemble Model; CEM) to analyze 37k mammography exams from UCLA’s diverse screening program, finding that:

  • The CEM model’s UCLA performance declined from the previous Washington and Sweden validations (AUROCs: 0.85 vs. 0.90 & 0.92)
  • The CEM model improved when combined with UCLA radiologist assessments, but still fell short of the Sweden AI+rads validation (AUROCs: 0.935 vs. 0.942; see the sketch after this list)
  • The CEM + radiologists model also achieved slightly lower sensitivity (0.813 vs. 0.826) and specificity (0.925 vs. 0.930) than UCLA rads without AI 
  • The CEM + radiologists method performed particularly poorly with Hispanic women and women with a history of breast cancer
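
To make the “combined with radiologist assessments” step concrete, here’s a rough Python sketch of an ensemble-plus-reader combination; the simple averaging and fixed weighting are assumptions for illustration, not the DREAM Challenge’s or the follow-up study’s actual fusion method.

```python
# Rough sketch of an ensemble-plus-reader combination. Simple averaging and a
# fixed weight are assumptions for illustration only.
import numpy as np

def ensemble_score(model_scores: np.ndarray) -> np.ndarray:
    """Average per-exam suspicion scores across models (shape: n_models x n_exams)."""
    return model_scores.mean(axis=0)

def combine_with_reader(ai_scores: np.ndarray,
                        reader_recall: np.ndarray,
                        weight: float = 0.5) -> np.ndarray:
    """Blend the ensemble score with the radiologist's binary recall decision."""
    return weight * ai_scores + (1 - weight) * reader_recall

model_scores = np.array([[0.1, 0.8, 0.4],
                         [0.2, 0.9, 0.3]])   # 2 models x 3 exams
reader_recall = np.array([0, 1, 1])          # radiologist recall decisions
combined = combine_with_reader(ensemble_score(model_scores), reader_recall)
print(combined)                              # [0.075 0.925 0.675]
```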

The Takeaway

Although generalization challenges and the importance of data diversity are everyday AI topics in late 2022, this follow-up study highlights how big a challenge they can be (regardless of training size, ensemble approach, or validation track record), and underscores the need for local validation and fine-tuning before clinical adoption. 

It’s also a reminder of how much we’ve learned in the last three years, as neither the 2020 DREAM study’s limitations statement nor the critical follow-up editorials mentioned data diversity among the study’s potential challenges.

Multimodal AI Virtual Breast Biopsies

A new study in Radiology detailed a multimodal AI solution that can classify breast lesion subtypes using mammograms, potentially reducing unnecessary biopsies and improving biopsy interpretations. 

Researchers from Israel and IBM/Merative first pretrained a deep learning model on 26k digital mammograms to classify images as malignant, benign, or normal, then used those pretrained weights to develop a lesion subtype classification model trained on mammograms and clinical data. Finally, they trained a pair of lesion classification models using digital mammograms linked to biopsy results from 2,120 women in Israel and 1,642 women in the US. 
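
As a rough illustration of that pretrain-then-fine-tune, multimodal pattern, here’s a simplified PyTorch sketch; the architecture, layer sizes, and class names are assumptions made for illustration, not the authors’ actual model.

```python
# Simplified sketch of pretraining an image backbone, then reusing its weights
# in a multimodal (image + clinical data) lesion subtype classifier.
import torch
import torch.nn as nn

class MammoBackbone(nn.Module):
    """Image encoder; the pretrain_head is used only during the
    malignant/benign/normal pretraining stage."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.pretrain_head = nn.Linear(feat_dim, 3)  # malignant / benign / normal

    def forward(self, x):
        return self.features(x)

class LesionSubtypeModel(nn.Module):
    """Multimodal classifier: pretrained image features + clinical variables."""
    def __init__(self, backbone, n_clinical=8, n_subtypes=3):
        super().__init__()
        self.backbone = backbone                               # reuse pretrained weights
        self.head = nn.Linear(256 + n_clinical, n_subtypes)    # DCIS / invasive / benign

    def forward(self, image, clinical):
        feats = self.backbone(image)
        return self.head(torch.cat([feats, clinical], dim=1))

backbone = MammoBackbone()
# backbone.load_state_dict(torch.load("pretrained_mammo.pt"))  # hypothetical checkpoint
model = LesionSubtypeModel(backbone)
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 8))
print(logits.shape)   # torch.Size([2, 3])
```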

When the Israel AI model was tested against mammograms from 441 Israeli women it…

  • Predicted malignancy with a 0.88 AUC
  • Classified ductal carcinoma in situ, invasive carcinomas, and benign lesions with AUCs of 0.76, 0.85, and 0.82, respectively
  • Correctly interpreted 98.7% of malignant mammographic examinations and 74.6% of invasive carcinomas (matching three radiologists)
  • Would have prevented 13% of unnecessary biopsies and missed 1.3% of malignancies at a 99% sensitivity operating point (see the sketch below)
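
Here’s a small Python sketch of how an operating point can be chosen to hit a target sensitivity, and how the avoided-biopsy and missed-malignancy rates then follow; the data and helper function are hypothetical, not the study’s method.

```python
# Sketch of choosing an operating point at a target sensitivity and estimating
# avoided biopsies (toy data and a hypothetical helper, not the study's method).
import numpy as np

def threshold_at_sensitivity(scores, labels, target_sens=0.99):
    """Pick a cutoff so that at least `target_sens` of known cancers score above it."""
    cancer_scores = np.sort(scores[labels == 1])
    idx = int(np.floor((1 - target_sens) * len(cancer_scores)))
    return cancer_scores[idx]

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                           # 1 = malignant biopsy result
scores = np.clip(labels * 0.6 + rng.normal(0.3, 0.2, 1000), 0, 1)

cut = threshold_at_sensitivity(scores, labels)
recall = scores >= cut
avoided_biopsies = np.mean(~recall[labels == 0])                 # benign cases below the cutoff
missed_cancers = np.mean(~recall[labels == 1])                   # cancers below the cutoff
print(f"threshold={cut:.2f}  avoided={avoided_biopsies:.1%}  missed={missed_cancers:.1%}")
```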

When the US AI model was tested against mammograms from 344 US women it…

  • Predicted malignancy with a lower 0.80 AUC
  • Classified ductal carcinoma in situ, invasive carcinomas, and benign lesions with lower AUCs of 0.74, 0.83, and 0.72, respectively 
  • Correctly interpreted 96.8% of malignant mammographic examinations and 63% of invasive carcinomas (matching three radiologists)

The authors attributed the US model’s lower accuracy to its smaller training dataset, and noted that the two models also performed worse when tested against data from the other country (US model with Israeli data, Israel model with US data) or when classifying rare lesion types. 

However, they were still bullish about this approach given enough training data, and noted the future potential to add other imaging modalities and genetic information to further enhance multimodal breast cancer assessments.

The Takeaway 

We’ve historically relied on biopsy results to classify breast lesion subtypes, and that will remain true for quite a while. However, this study shows that multimodal-trained AI can extract far more information from mammograms, while potentially reducing unnecessary biopsies and improving the accuracy of the biopsies that are performed.

Bad AI Goes Viral

A recent mammography AI study review quickly evolved from a “study” to a “story” after a single tweet from Eric Topol (to his 521k followers) called mammography AI’s accuracy “very disappointing,” prompting a new wave of online conversations about how far imaging AI is from achieving its promise. However, the bigger “story” here might actually be how much AI research needs to evolve.

The Study Review: A team of UK-based researchers reviewed 12 digital mammography screening AI studies (n = 131,822 women). The studies analyzed DM screening AI’s performance when used as a standalone system (5 studies), as a reader aid (3 studies), or for triage (4 studies).

The AI Assessment: The biggest public takeaway was that 34 of the 36 AI systems (94%) evaluated in three of the studies were less accurate than a single radiologist, and all were less accurate than the consensus of two or more radiologists. They also found that AI modestly improved radiologist accuracy when used as a reader aid and eliminated around half of negative screenings when used for triage (but also missed some cancers).

The AI Research Assessment: Each of the reviewed studies was “of poor methodological quality,” all were retrospective, and most had high risks of bias and high applicability concerns. Unsurprisingly, these methodology-focused assessments didn’t get much public attention.

The Two Takeaways: The authors correctly concluded that these 12 poor-quality studies found DM screening AI to be inaccurate, and called for better quality research so we can properly judge DM screening AI’s actual accuracy and most effective use cases (and then improve it). However, the takeaway for many folks was that mammography screening AI is worse than radiologists and shouldn’t replace them, which might be true, but isn’t very scientifically helpful.
