Applications of Large Language Models in Pathology
- PMID: 38671764
- PMCID: PMC11047860
- DOI: 10.3390/bioengineering11040342
Abstract
Large language models (LLMs) are transformer-based neural networks that can provide human-like responses to questions and instructions. LLMs can generate educational material, summarize text, extract structured data from free text, create reports, write programs, and potentially assist in case sign-out. LLMs combined with vision models can assist in interpreting histopathology images. LLMs have immense potential to transform pathology practice and education, but these models are not infallible, so any artificial intelligence-generated content must be verified against reputable sources. Caution must be exercised in how these models are integrated into clinical practice, as they can produce hallucinations and incorrect results, and over-reliance on artificial intelligence may lead to de-skilling and automation bias. This review paper provides a brief history of LLMs and highlights several use cases for LLMs in the field of pathology.
Keywords: BERT; GPT; Gemma; Llama; Mistral; artificial intelligence; bidirectional encoder representations from transformers; generative pretrained transformer; large language model; natural language processing; surgical pathology.
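One use case named in the abstract, extracting structured data from free-text reports, can be illustrated with a short sketch. The prompt template, field names, and mocked model reply below are hypothetical examples for illustration, not part of the reviewed article or any specific vendor's API; a real deployment would send the prompt to an LLM and, as the abstract cautions, verify the extracted fields against the source report.

```python
import json

# Illustrative sketch only: the prompt wording, field names, and the mock
# LLM reply are hypothetical assumptions, not from any specific model API.

def build_extraction_prompt(report_text: str) -> str:
    """Assemble an instruction asking an LLM to return structured JSON
    for a free-text surgical pathology report."""
    return (
        "Extract the following fields from the pathology report below and "
        "reply with JSON only: specimen, diagnosis, margin_status.\n\n"
        f"Report:\n{report_text}"
    )

def parse_llm_reply(reply: str) -> dict:
    """Parse and sanity-check the model's JSON reply. Because LLMs can
    hallucinate, a human must still verify each field against the report."""
    data = json.loads(reply)
    required = {"specimen", "diagnosis", "margin_status"}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"LLM reply missing fields: {sorted(missing)}")
    return data

report = "Left breast lumpectomy: invasive ductal carcinoma, margins negative."
prompt = build_extraction_prompt(report)

# A plausible mocked reply; a real call to an LLM API would go here instead.
mock_reply = (
    '{"specimen": "left breast lumpectomy", '
    '"diagnosis": "invasive ductal carcinoma", '
    '"margin_status": "negative"}'
)
fields = parse_llm_reply(mock_reply)
print(fields["diagnosis"])  # invasive ductal carcinoma
```

The JSON-only instruction plus a required-field check is one common way to make free-text model output machine-readable while still surfacing incomplete replies as errors rather than silent omissions.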
Conflict of interest statement
The author declares no conflicts of interest.
Similar articles
- The Role of Large Language Models in Transforming Emergency Medicine: Scoping Review. JMIR Med Inform. 2024 May 10;12:e53787. doi: 10.2196/53787. PMID: 38728687.
- Quality of Answers of Generative Large Language Models Versus Peer Users for Interpreting Laboratory Test Results for Lay Patients: Evaluation Study. J Med Internet Res. 2024 Apr 17;26:e56655. doi: 10.2196/56655. PMID: 38630520.
- Evaluating Large Language Models for the National Premedical Exam in India: Comparative Analysis of GPT-3.5, GPT-4, and Bard. JMIR Med Educ. 2024 Feb 21;10:e51523. doi: 10.2196/51523. PMID: 38381486.
- Transformer Models in Healthcare: A Survey and Thematic Analysis of Potentials, Shortcomings and Risks. J Med Syst. 2024 Feb 17;48(1):23. doi: 10.1007/s10916-024-02043-5. PMID: 38367119.
- Bidirectional Encoder Representations from Transformers-like large language models in patient safety and pharmacovigilance: A comprehensive assessment of causal inference implications. Exp Biol Med (Maywood). 2023 Nov;248(21):1908-1917. doi: 10.1177/15353702231215895. Epub 2023 Dec 12. PMID: 38084745.