MAMILNet: advancing precision oncology with multi-scale attentional multi-instance learning for whole slide image analysis

Qinqing Wang et al. Front Oncol. 2024 Apr 30;14:1275769. doi: 10.3389/fonc.2024.1275769. eCollection 2024.

Abstract

Background: Whole Slide Image (WSI) analysis, driven by deep learning algorithms, has the potential to revolutionize tumor detection, classification, and treatment response prediction. However, challenges persist, such as limited model generalizability across various cancer types, the labor-intensive nature of patch-level annotation, and the necessity of integrating multi-magnification information to attain a comprehensive understanding of pathological patterns.

Methods: In response to these challenges, we introduce MAMILNet, an innovative multi-scale attentional multi-instance learning framework for WSI analysis. The incorporation of attention mechanisms into MAMILNet contributes to its exceptional generalizability across diverse cancer types and prediction tasks. This model considers whole slides as "bags" and individual patches as "instances." By adopting this approach, MAMILNet effectively eliminates the requirement for intricate patch-level labeling, significantly reducing the manual workload for pathologists. To enhance prediction accuracy, the model employs a multi-scale "consultation" strategy, facilitating the aggregation of test outcomes from various magnifications.
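To make the bag/instance formulation concrete, the sketch below shows one common way to implement attention-based MIL pooling over patch embeddings and a simple multi-scale "consultation" by averaging slide-level probabilities across magnifications. This is a minimal illustration under assumptions (gated-attention form, feature dimensions, and plain probability averaging), not the authors' released implementation.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Gated-attention pooling over the patch embeddings of one slide ("bag")."""
    def __init__(self, feat_dim=1024, hidden_dim=256, num_classes=2):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_feats):                 # patch_feats: (num_patches, feat_dim)
        a = self.attn_w(self.attn_v(patch_feats) * self.attn_u(patch_feats))
        a = torch.softmax(a, dim=0)                 # attention weight per instance (patch)
        slide_feat = (a * patch_feats).sum(dim=0)   # weighted bag-level representation
        return self.classifier(slide_feat), a       # slide-level logits + attention map

def multiscale_consultation(logits_per_scale):
    """Aggregate slide-level predictions from several magnifications by averaging probabilities."""
    probs = [torch.softmax(logits, dim=-1) for logits in logits_per_scale]
    return torch.stack(probs).mean(dim=0)

Because the loss is computed only against the slide-level label, no patch-level annotations are required; the attention weights additionally indicate which patches drove the prediction.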

Results: Our assessment of MAMILNet covers 1,171 cases spanning a wide range of cancer types, showcasing its effectiveness on complex prediction tasks. MAMILNet achieved strong results in distinct domains: for breast cancer tumor detection, the Area Under the Curve (AUC) was 0.8872, with an Accuracy of 0.8760. For lung cancer typing diagnosis, it achieved an AUC of 0.9551 and an Accuracy of 0.9095. Furthermore, in predicting drug therapy responses for ovarian cancer, MAMILNet achieved an AUC of 0.7358 and an Accuracy of 0.7341.
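For reference, slide-level AUC and Accuracy of the kind reported above are typically computed from per-slide predicted probabilities against the slide labels; the generic sketch below assumes a binary task and a 0.5 decision threshold, neither of which is specified in the abstract.

from sklearn.metrics import roc_auc_score, accuracy_score

def evaluate_slide_predictions(y_true, y_prob, threshold=0.5):
    # y_true: slide-level labels (0/1); y_prob: predicted probability of the positive class
    auc = roc_auc_score(y_true, y_prob)
    acc = accuracy_score(y_true, [int(p >= threshold) for p in y_prob])
    return auc, acc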

Conclusion: The outcomes of this study underscore the potential of MAMILNet in driving the advancement of precision medicine and individualized treatment planning within the field of oncology. By effectively addressing challenges related to model generalization, annotation workload, and multi-magnification integration, MAMILNet shows promise in enhancing healthcare outcomes for cancer patients. The framework's success in accurately detecting breast tumors, diagnosing lung cancer types, and predicting ovarian cancer therapy responses highlights its significant contribution to the field and paves the way for improved patient care.

Keywords: cancer diagnosis; deep learning; multi-scale attention; multiple instance learning; whole slide image analysis.


Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figures

Figure 1. Pipeline of the whole study.

Figure 2. (A) Training process of MAMILNet; (B) Inference process of MAMILNet.

Figure 3. (A) ROC curve of MAMILNet on the breast cancer sentinel lymph node tumor detection task (independent test set). (B) ROC curve on the lung cancer tumor typing task (independent test set). (C) ROC curve on the ovarian cancer treatment resistance prediction task (independent test set).

Grants and funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
