Review

Applications of Large Language Models in Pathology

Jerome Cheng. Bioengineering (Basel). 2024 Mar 31;11(4):342. doi: 10.3390/bioengineering11040342.

Abstract

Large language models (LLMs) are transformer-based neural networks that can provide human-like responses to questions and instructions. LLMs can generate educational material, summarize text, extract structured data from free text, create reports, write programs, and potentially assist in case sign-out. LLMs combined with vision models can assist in interpreting histopathology images. LLMs have immense potential for transforming pathology practice and education, but these models are not infallible, so any AI-generated content must be verified against reputable sources. Caution must be exercised in how these models are integrated into clinical practice, as they can produce hallucinations and incorrect results, and over-reliance on artificial intelligence may lead to de-skilling and automation bias. This review provides a brief history of LLMs and highlights several use cases for LLMs in the field of pathology.
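
As a hedged illustration of the structured-data-extraction use case mentioned above, the sketch below (Python, using the OpenAI SDK) asks an LLM to return selected fields from a free-text pathology report as JSON. The model name, report text, and field names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: extracting structured fields from a free-text
# pathology report with an LLM. The model name, report text, and JSON keys
# are assumptions for demonstration; they are not from the paper.
import json
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

report = (
    "Right breast, core needle biopsy: Invasive ductal carcinoma, "
    "Nottingham grade 2. ER positive, PR positive, HER2 negative."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed model name; substitute any capable LLM
    messages=[
        {
            "role": "system",
            "content": (
                "You are a pathology assistant. Extract the requested fields "
                "from the report and reply with JSON only."
            ),
        },
        {
            "role": "user",
            "content": (
                "Report:\n" + report + "\n\n"
                'Return JSON with keys: "site", "procedure", "diagnosis", '
                '"grade", "biomarkers" (an object of marker -> result).'
            ),
        },
    ],
    response_format={"type": "json_object"},  # ask for strict JSON output
)

structured = json.loads(response.choices[0].message.content)
print(structured["diagnosis"], structured["biomarkers"])
```

In keeping with the abstract's caution, any fields extracted this way should be verified against the source report before clinical or research use.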

Keywords: BERT; GPT; Gemma; Llama; Mistral; artificial intelligence; bidirectional encoder representations from transformers; generative pretrained transformer; large language model; natural language processing; surgical pathology.

Conflict of interest statement

The author declares no conflicts of interest.

Figures

Figure 1
ChatGPT-4 Turbo followed instructions appropriately after being given the following prompt: “You are an experienced pathologist. Give me a list of 12 carcinomas with associated immunostains and molecular tests in table format”.
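
For readers who want to reproduce this interaction programmatically rather than through the chat interface, a minimal sketch follows (Python, OpenAI SDK; the model name is an assumed stand-in for "ChatGPT-4 Turbo" and is not specified by the figure).

```python
# Minimal sketch of issuing the Figure 1 prompt via an API rather than the
# chat UI. The SDK usage is standard; the model name is an assumption.
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumed stand-in for "ChatGPT-4 Turbo"
    messages=[
        {"role": "system", "content": "You are an experienced pathologist."},
        {
            "role": "user",
            "content": (
                "Give me a list of 12 carcinomas with associated "
                "immunostains and molecular tests in table format"
            ),
        },
    ],
)

print(response.choices[0].message.content)  # verify against reputable sources
```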

Grants and funding

This research received no external funding.
