AMIA Jt Summits Transl Sci Proc. 2024; 2024: 46–53.
Published online 2024 May 31.
PMCID: PMC11141796
PMID: 38827104

Comparison of Three Deep Learning Models in Accurate Classification of 770 Dermoscopy Skin Lesion Images

Abstract

Accurately determining and classifying different types of skin cancers is critical for early diagnosis. In this work, we propose a novel use of deep learning for classification of benign and malignant skin lesions using dermoscopy images. We obtained 770 de-identified dermoscopy images from the University of Missouri (MU) Healthcare. We created three unique image datasets that contained the original images and images obtained after applying a hair removal algorithm. We trained three popular deep learning models, namely, ResNet50, DenseNet121, and Inception-V3. We evaluated the accuracy and the area under the receiver operating characteristic curve (AUC-ROC) for each model and dataset. DenseNet121 achieved the best accuracy (80.52%) and AUC-ROC score (0.81) on the third dataset. For this dataset, the sensitivity and specificity were 0.80 and 0.81, respectively. We also present SHAP (SHapley Additive exPlanations) values for the predictions made by the different models to examine their interpretability.

Introduction

Nonmelanoma skin cancer is the most common type of all cancers [1]. Melanoma is the most aggressive, deadliest, and fastest growing form of skin cancer [2-4]. It accounts for 4% of all skin cancers and 79% of skin cancer deaths [5]. Melanoma rates have been gradually increasing, and it is estimated that by 2040 new cases will increase by 50% and deaths by 68% [6]. In addition, melanoma incidence is rising faster than that of any other cancer type, with a current prediction that one in 34 men and one in 53 women will be diagnosed with melanoma during their lifetime [7]. Risk of melanoma increases with age; melanoma is the most common cancer in young adults, and women under 50 years of age have a higher probability of developing melanoma than any other cancer, except breast and thyroid [8].

Early diagnosis of melanoma with a lower Breslow thickness at diagnosis is associated with higher survival rates [9,10]. The five-year relative survival rate for localized melanoma less than 1 mm thick that has not spread to lymph nodes is between 98% and 99% [11]. Stage I melanoma-specific 10-year survival is 80%, while stage IV is 0%. Later-stage diagnosis is associated with significantly lower 5- and 10-year survival when compared to early stage I or II melanoma [11]. Due to their specialized training, dermatologists provide appropriate, timely, and rapid screening tailored to the unique needs of patients, focused primarily on risk factors rather than overdiagnosis [12]. Still, access to specialty dermatologic care is a global concern that affects incidence and mortality rates, which vary widely among metropolitan, rural, and underserved areas.

The supply and geographic distribution of dermatologists continue to be one of the main drivers of disparities in access to early detection of melanoma. Wait times to see a dermatologist for a changing mole range between 33.9 and 73.4 days, and many patients will not access medical care, even when offered at no cost, if they must travel more than 20 miles for their appointment [13]. Community-based primary care clinicians (PCCs) are often the first point of contact for patients, and they may play an important role in providing screening and early diagnosis for patients without adequate access to dermatologists [14]. However, PCCs are not as equipped as dermatologists to provide early detection and have reported lack of adequate training in medical school and residency as a barrier to skin screening [15].

There is growing evidence that leveraging informatics infrastructure and machine learning methods in health applications can help PCCs make better clinical decisions [16-18]. Artificial intelligence (AI) and machine learning (ML) algorithms have demonstrated accuracy in melanoma detection and have the potential to improve early diagnosis by facilitating large-scale dermatopathology image classification and assessment of skin cancers [16,19]. Recent studies in melanoma detection primarily involve conventional machine learning algorithms; however, deep learning has also been applied successfully to skin image classification. The aim of this project is to detect skin cancer in dermoscopy images from MU Healthcare using different deep learning algorithms.

In the remainder of this work, the first section describes previous work on the detection of skin lesions. The second section introduces our methods and the steps taken for melanoma detection; we also describe our experiments and the metrics used to measure the performance of the different deep learning models. The last section presents the discussion, conclusions, and future work.

Methods

Data

This was a single-center retrospective study approved by the University of Missouri Institutional Review Board (MU IRB), Number 2030585. We obtained 770 dermoscopy images from the electronic health record (EHR). Each image was labeled as “Malignant” or “Benign” by a dermatopathologist (ES or JH). There were 457 images in the benign class and 313 images in the malignant class. Each image was cropped to a size of 600 × 450 pixels. A few sample lesions are shown below (Figure 1). Note that we did not use the HAM10000 [20] dataset in our study, because our goal is to triage skin cancer patients in PCCs by identifying whether lesions are malignant or benign, whereas HAM10000 organizes images into 7 different classes. Also, real dermoscopy images are less clean than HAM10000 images; hence, the classification task is more challenging on our dataset than on HAM10000.

Data Pre-Processing

Hairs and shadows on the skin may occlude relevant information about the lesion [21,22]. As some of the images contained hair around the lesion, we applied a standard hair removal algorithm [23] to the original images to create an improved dataset for training the deep learning models (Figure 2). The primary goal of removing hair was to improve the overall classification accuracy. The hair removal algorithm (DullRazor) had four major steps: (a) transform the color image to a grayscale version, (b) apply morphological black-hat filtering to the grayscale image to detect hair contours, (c) create the mask for the inpainting task, and (d) apply an inpainting algorithm to the original image using the mask prepared from the grayscale image. Figure 2 shows an example of an original image with hair and the transformed image after hair removal. Note that the millimeter markings were also removed during this process.
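As a concrete illustration, a minimal OpenCV sketch of these four steps is shown below. The kernel size, threshold, and inpainting radius are illustrative choices, not the exact parameters used in the study.

```python
import cv2
import numpy as np

def remove_hair(image_bgr: np.ndarray) -> np.ndarray:
    """DullRazor-style hair removal: black-hat filtering plus inpainting (illustrative parameters)."""
    # (a) convert the color image to grayscale
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # (b) morphological black-hat highlights dark, thin structures such as hairs
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # (c) threshold the black-hat response to build the inpainting mask
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    # (d) inpaint the masked pixels in the original color image
    return cv2.inpaint(image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```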

Figure 2: Impact of applying the hair removal algorithm

As the hair removal algorithm changed the quality of the transformed images, we created three datasets: Dataset1 contained the original images (n=770); Dataset2 contained the original images that did not contain hair (n=566) plus the transformed images for those that contained hair (n=204); Dataset3 contained the transformed images after applying hair removal to all of them (n=770).
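The sketch below illustrates, under stated assumptions, how the three datasets could be assembled. The paper does not describe how images containing hair were identified, so the black-hat-based hair detector, the directory names, and the module name for the earlier helper are purely hypothetical.

```python
from pathlib import Path
import cv2
from dull_razor import remove_hair  # the hair-removal helper sketched earlier, assumed saved as dull_razor.py

def has_hair(image_bgr, blackhat_threshold=10, min_hair_pixels=500):
    """Crude hair detector based on the black-hat response (purely illustrative)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    return int((blackhat > blackhat_threshold).sum()) > min_hair_pixels

for path in Path("original_images").glob("*.jpg"):  # hypothetical directory layout
    img = cv2.imread(str(path))
    cv2.imwrite(f"dataset1/{path.name}", img)                                          # Dataset1: originals
    cv2.imwrite(f"dataset2/{path.name}", remove_hair(img) if has_hair(img) else img)   # Dataset2: hair removal only where hair is detected
    cv2.imwrite(f"dataset3/{path.name}", remove_hair(img))                             # Dataset3: hair removal applied everywhere
```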

Deep Learning Models

Convolutional neural networks (CNNs) have shown excellent performance in large-scale image classification and object detection competitions such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). We applied the following models (a brief PyTorch loading sketch follows the list):

Inception-V3: It uses the inception module, which applies multiple convolutions in parallel (e.g., 1x1, 3x3, and 5x5 convolutions) and a maximum pooling layer [25]. The outputs are concatenated to create the input for the next stage. An earlier version of this architecture, GoogLeNet, with 22 layers won the ILSVRC 2014 competition [24].

ResNet: ResNet uses skip connections between layers to solve the vanishing gradient problem [26]. In our work, we use ResNet50, which has 50 layers.

DenseNet: DenseNet was proposed to solve the vanishing gradient problem while being computationally efficient [27]. It promotes feature reuse, resulting in a more compact model. In our work, we use DenseNet121, which has 121 layers.
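As referenced above, the following is a minimal sketch of how the three ImageNet-pretrained backbones might be loaded in torchvision (version 0.13 or later is assumed) and adapted for binary classification with a single-logit head; the exact head configuration used in the study is not specified.

```python
import torch.nn as nn
from torchvision import models

def build_model(name: str, num_outputs: int = 1) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classification head."""
    if name == "resnet50":
        m = models.resnet50(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_outputs)
    elif name == "densenet121":
        m = models.densenet121(weights="IMAGENET1K_V1")
        m.classifier = nn.Linear(m.classifier.in_features, num_outputs)
    elif name == "inception_v3":
        # Inception-V3 is commonly fed 299x299 inputs and returns an auxiliary output in training mode
        m = models.inception_v3(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, num_outputs)
        m.AuxLogits.fc = nn.Linear(m.AuxLogits.fc.in_features, num_outputs)
    else:
        raise ValueError(f"Unknown model: {name}")
    return m
```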

Skin Lesion Classification

The input dermoscopy image dataset is first pre-processed by applying the hair removal algorithm if needed. After that, the dataset is split randomly into three sets: a training set (70% of the images), a validation set (20% of the images), and a testing set (10% of the images). The training and validation sets are used during training; the testing set is used to evaluate the performance of our classifiers. All the images were zero-padded and resized to 224 × 224 pixels before being fed as inputs to a deep learning model during training and testing.
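A minimal sketch of the zero-pad-to-square, resize, and random 70/20/10 split is given below; the `dataset` object is assumed to already exist as a PyTorch dataset, and centering the image on the padded canvas is an assumption.

```python
import cv2
import numpy as np
from torch.utils.data import random_split

def pad_and_resize(image_bgr: np.ndarray, size: int = 224) -> np.ndarray:
    """Zero-pad the image to a square canvas, then resize it to size x size."""
    h, w = image_bgr.shape[:2]
    side = max(h, w)
    canvas = np.zeros((side, side, 3), dtype=image_bgr.dtype)
    top, left = (side - h) // 2, (side - w) // 2   # center placement is an assumption
    canvas[top:top + h, left:left + w] = image_bgr
    return cv2.resize(canvas, (size, size))

# random 70/20/10 split of a PyTorch dataset (the `dataset` object is assumed to exist)
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.2 * n)
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n - n_train - n_val])
```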

As the number of images in our dataset was limited (n=770), we employed data augmentation. This step is commonly used during deep learning training to artificially increase the number of training images and prevent overfitting. Thus, a model sees different variations of an image and can generalize better. Horizontal flip, vertical flip, color jitter, and random rotation were used for data augmentation. For horizontal/vertical flip, images were randomly flipped horizontally/vertically. For color jitter, the brightness, color, and saturation of images were randomly changed. For random rotation, images were randomly rotated by up to 20 degrees; rotation helps the model become invariant to object orientation. The validation set was used to test model performance during training. We selected the best model based on the highest achieved validation accuracy, and this model was finally used to classify the images in the test set.
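A possible torchvision augmentation pipeline matching the transforms named above is sketched below; the jitter magnitudes, flip probabilities, and normalization statistics are assumptions.

```python
from torchvision import transforms

# assumed training-time pipeline mirroring the transforms named in the text
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, saturation=0.2, hue=0.05),
    transforms.RandomRotation(degrees=20),      # random angle within +/- 20 degrees
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),  # ImageNet stats
])

# validation and test images are not augmented
eval_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```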

Our system was implemented in Python using the PyTorch [28], CUDA, NumPy [29], and OpenCV [30] libraries. We used existing implementations of deep learning models for skin lesion classification [31] and hair removal [32]. The models were trained and tested on a Dell Precision server with an Intel Xeon processor, 96 GB RAM, 2 TB disk storage, and two NVIDIA Quadro RTX4000 (8 GB) graphics processing units (GPUs).

Results

Model Training Settings

All the models were trained with the same hyperparameters: (a) batch size of 16, (b) 500 epochs, and (c) learning rate of 1e-4. Each model used the Adam optimizer [33] and the binary cross-entropy loss function. The Adam optimizer is a stochastic gradient descent approach that uses adaptive estimates of first-order and second-order moments. The best model based on the highest validation accuracy was saved and used for classification on the test set. We did not apply any over-sampling or under-sampling techniques to handle class imbalance. As our training data was limited, we fine-tuned models pre-trained on ImageNet [34], which speeds up convergence and improves the overall classification accuracy.
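A condensed training-loop sketch with these settings is shown below for DenseNet121. The `train_loader` and `val_loader` objects (batch size 16) are assumed to exist, and BCEWithLogitsLoss on a single logit is one reasonable reading of the binary cross-entropy loss named above, not necessarily the exact formulation used in the study.

```python
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ImageNet-pretrained DenseNet121 with a single-logit head for binary classification
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 1)
model = model.to(device)

criterion = nn.BCEWithLogitsLoss()                              # binary cross-entropy on the raw logit
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

best_val_acc = 0.0
for epoch in range(500):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.float().to(device)
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), labels)
        loss.backward()
        optimizer.step()

    # keep the checkpoint with the highest validation accuracy
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = (torch.sigmoid(model(images.to(device))).squeeze(1) > 0.5).cpu()
            correct += (preds == labels.bool()).sum().item()
            total += labels.size(0)
    val_acc = correct / total
    if val_acc > best_val_acc:
        best_val_acc = val_acc
        torch.save(model.state_dict(), "best_model.pt")
```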

Performance Metrics

Our binary classification task aims to classify a given lesion image into the Malignant (positive) class or the Benign (negative) class. We applied the concepts of true positives (Tp), true negatives (Tn), false positives (Fp), and false negatives (Fn). For instance, Tp denotes the number of images that were Malignant and were classified correctly as Malignant; Fn denotes the number of images that were Malignant but were incorrectly classified as Benign. Table 1 shows the different metrics used to evaluate the models.

Table 1: Different metrics used in our evaluation

Metric | Formula
Precision (P) | Tp/(Tp+Fp)
Sensitivity/Recall (R) | Tp/(Tp+Fn)
Specificity | Tn/(Tn+Fp)
Accuracy | (Tp+Tn)/(Tp+Tn+Fp+Fn)
F1-score | 2×P×R/(P+R)

We compared the models based on different performance metrics: accuracy, sensitivity, specificity, F1-score, and AUC-ROC (Table 2). A good classifier for skin lesions should achieve high sensitivity, specificity, and accuracy. The F1-score, a value between 0 and 1, is the harmonic mean of precision and recall. AUC-ROC is computed from the ROC curve, which plots sensitivity (true positive rate) against 1-specificity (false positive rate) for different thresholds used to classify an image as Malignant or Benign. The area under this curve is the AUC-ROC score of the classifier (between 0 and 1); the higher the AUC-ROC value, the better the model performance.
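The metrics in Table 1 and the AUC-ROC can be computed from predicted malignancy probabilities as in the sketch below; scikit-learn is assumed here, although the paper does not state which library was used for evaluation.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def evaluate(y_true, y_prob, threshold=0.5):
    """Compute the Table 1 metrics plus AUC-ROC from predicted malignancy probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "f1": f1_score(y_true, y_pred),
        "auc_roc": roc_auc_score(y_true, y_prob),
    }
```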

Table 2: Performance of the models on Dataset1, Dataset2 and Dataset3

Finally, we computed SHapley Additive exPlanations (SHAP) values [35] to better understand the interpretability of the machine learning models. SHAP is a technique based on cooperative game theory and is widely used to understand how different features impact the prediction of a model. SHAP’s goal is to explain the prediction for an input by computing how each feature contributed to that prediction. For example, a classifier built to predict housing prices can benefit from computing SHAP values: for each input feature, a SHAP value is computed that quantifies the amount and direction in which that feature impacts the predicted price. A positive value (red) of a feature tends to push the prediction toward a higher price, and a negative value (blue) tends to push the prediction toward a lower price [36].

ResNet50 and DenseNet121 achieved the best AUC-ROC score on Dataset1, with DenseNet121 superior to ResNet50 in terms of accuracy and F1-score. On Dataset2, ResNet50 performed the best among the different models and achieved an AUC-ROC score similar to that on Dataset1. On Dataset3, DenseNet121 achieved the best AUC-ROC score of 0.81 among the different models; in fact, this was the highest AUC-ROC score across all the experiments. This shows that applying hair removal to all the images resulted in the best classification performance across the three models. On Dataset3, DenseNet121 achieved a sensitivity of 0.80 and a specificity of 0.81. Application of the hair removal technique improved the overall performance of the tested deep learning models for binary classification.

Interpretability of the Models

Deep learning models are considered black-box in nature; therefore, it is difficult to understand how these models make their predictions. To address this issue, SHAP values were proposed recently [35]. These values can be computed on the test images used by a classifier. On a test image, SHAP values are shown as red and blue pixels of different intensities. A red pixel indicates a positive SHAP value and that the pixel increases the probability of the classifier predicting a particular class; the higher the intensity, the stronger the influence of the pixel. Similarly, a blue pixel indicates a negative SHAP value and that the pixel decreases the probability of the classifier predicting that class. Note that SHAP values do not estimate the quality of predictions.
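The paper does not specify which SHAP explainer was applied to the CNNs; the sketch below shows one common approach using shap.DeepExplainer on a trained PyTorch model, with `train_images` and `test_images` assumed to be normalized NCHW tensors already loaded in memory.

```python
import numpy as np
import shap
import torch

# a small batch of training images serves as the background distribution (NCHW tensors)
background = train_images[:50]
test_batch = test_images[:5]

model.eval()
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(test_batch)

# shap.image_plot expects NHWC numpy arrays; DeepExplainer may return a list (one array per output)
if not isinstance(shap_values, list):
    shap_values = [shap_values]
shap_numpy = [np.transpose(s, (0, 2, 3, 1)) for s in shap_values]
test_numpy = np.transpose(test_batch.cpu().numpy(), (0, 2, 3, 1))
shap.image_plot(shap_numpy, test_numpy)   # red = positive SHAP value, blue = negative
```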

Figure 4 shows a set of five original images from the test set, along with the actual and predicted classes when DenseNet121 was used for classification on Dataset1, Dataset2, and Dataset3. Figure 5 shows the SHAP values for these test images for the three datasets; the training images of each dataset were used as the background for computing the SHAP values. In general, once hair removal was applied, the hair and millimeter markings were removed and the pixels around the lesion (red/blue) became more important for the classifier’s prediction.

Figure 4: Test images selected for computing SHAP values and predictions made by DenseNet121 for these images on different datasets

Figure 5: SHAP values for a few test images in the three datasets

Discussion and Conclusions

The use of deep learning models for classifying skin lesions has received much attention in recent years. In 2017, Esteva et al. showed that deep neural networks can achieve dermatologist-level classification of skin cancer [37]. They used both dermoscopy and photographic images to fine-tune a single deep neural network without any handcrafted features. Mobile teledermoscopy can help patients with early detection of lesions via mobile phones; in this regard, dermoscopy images have been used for binary classification into benign and malignant classes [38].

MobileNetV2, a CNN designed for resource-constrained settings, was trained and achieved an accuracy of 91.33% [39]. Subsequently, a CNN for classifying lesions on mobile phones using the well-known HAM10000 dataset was proposed [20,40]. The major limitation of this work was the inability to update the model due to challenges in the software framework.

A melanoma classification network was introduced that constructs features from MobileNet and DenseNet121, following a fine-grained classification principle based on feature discrimination [27,41,42]. A segmentation approach that precisely identifies the lesions in an image achieved an accuracy of 0.845 on a publicly available dataset containing nearly 1,000 images [43]. In addition, Nawaz et al. [44] proposed a segmentation approach for early detection of melanoma by combining a faster region-based CNN (RCNN) [45] with fuzzy k-means clustering. However, they did not focus on classifying skin lesion types.

There has been growing interest in using AI for skin cancer detection in community and primary care settings [20], with a major focus on melanoma detection. Phillips et al. evaluated a deep learning ensemble model for identifying malignant melanoma in pigmented lesions using dermoscopy images [19]. They used the model developed by Skin Analytics Limited, which was trained on 7,102 dermoscopy images, and showed that the model could achieve good diagnostic accuracy (sensitivity and specificity of 85%) and has the potential to be used in a primary care setting for identifying malignant melanoma.

In our study, we leveraged deep learning models (DenseNet121, ResNet50, and Inception-V3) for classifying skin lesions into malignant and benign classes. Our best classification accuracy was 81%, with a sensitivity of 0.80 and a specificity of 0.81. Hairs and shadows tend to occlude relevant information about a lesion and act as unnecessary artifacts for the deep learning models; applying the hair removal algorithm enhanced the visual features around a lesion and helped the models achieve better classification accuracy. We believe our system holds promise to aid in early detection of skin cancer in primary care settings. Unlike prior efforts, we also studied the interpretability of the tested deep learning models using SHAP values, so that the predictions made by these models can be more easily understood.

While the skin lesion dataset used in our study is from a single institution, we believe our approach can be adapted nationally. We continue to obtain more data to further train the models. Our next efforts will focus on developing a classifier that can classify skin lesions into four classes: malignant melanocytic, malignant non-melanocytic, benign melanocytic, and benign non-melanocytic.

Acknowledgments

This project was funded by the Translational Research Informing Useful and Meaningful Precision Health (TRIUMPH) grant at the University of Missouri-Columbia.

Figures & Table

Figure 3: Our overall approach for skin lesion classification

References

1. Incidence and clinical characteristics of nonmelanoma skin cancers among Hispanic and Asian patients in the US: A 5-year, single institution retrospective review. J Am Acad Dermatol. 2015 May 1;72(5, Supplement 1):AB186.
2. Chen JG, Fleischer Jr AB, Smith ED, Kancler C, Goldman ND, Williford PM, et al. Cost of Nonmelanoma Skin Cancer Treatment in the United States. Dermatol Surg. 2001;27(12):1035–8.
3. Domingues B, Lopes JM, Soares P, Pópulo H. Melanoma treatment in review. ImmunoTargets Ther. 2018;7:35–49.
4. Ko JM, Velez NF, Tsao H. Pathways to melanoma. Semin Cutan Med Surg. 2010 Dec;29(4):210–7.
6. Arnold M, Singh D, Laversanne M, Vignat J, Vaccarella S, Meheus F, et al. Global Burden of Cutaneous Melanoma in 2020 and Projections to 2040. JAMA Dermatol. 2022 May 1;158(5):495–503.
7. Almaani N, Juweid ME, Alduraidi H, Ganem N, Abu-Tayeh FA, Alrawi R, et al. Incidence Trends of Melanoma and Nonmelanoma Skin Cancers in Jordan From 2000 to 2016. JCO Glob Oncol. 2023 Feb 22;9:e2200338.
9. Figueroa-Silva O, Suárez-Peñaranda JM, Balboa-Barreiro V, Sánchez-Aguilar Rojas MD. Volume tumor impact on melanoma survival assessed using Breslow density. J Am Acad Dermatol. 2022 Jun;86(6):1410–2.
10. Rashed H, Flatman K, Bamford M, Teo KW, Saldanha G. Breslow density is a novel prognostic feature in cutaneous malignant melanoma. Histopathology. 2017;70(2):264–72.
11. Garbe C, Keim U, Amaral T, Berking C, Eigentler TK, Flatz L, et al. Prognosis of Patients With Primary Melanoma Stage I and II According to American Joint Committee on Cancer Version 8 Validated in Two Independent Cohorts: Implications for Adjuvant Treatment. J Clin Oncol Off J Am Soc Clin Oncol. 2022 Nov 10;40(32):3741–9.
12. Kulkarni RP, Yu WY, Leachman SA. To Improve Melanoma Outcomes, Focus on Risk Stratification, Not Overdiagnosis. JAMA Dermatol. 2022 May 1;158(5):485–7.
13. Pala P, Bergler-Czop BS, Gwiżdż JM. Teledermatology: idea, benefits and risks of modern age - a systematic review based on melanoma. Postepy Dermatol Alergol. 2020 Apr;37(2):159–67.
14. Jones OT, Jurascheck LC, van Melle M, Hickman S, Burrows NP, Hall PN, et al. Dermoscopy for melanoma detection and triage in primary care: a systematic review. BMJ Open. 2019 Aug 1;9(8):e027529.
15. Brown AE, Najmi M, Duke T, Grabell DA, Koshelev MV, Nelson KC. Skin Cancer Education Interventions for Primary Care Providers: A Scoping Review. J Gen Intern Med. 2022 Jul 1;37(9):2267–79.
16. Maron RC, Utikal JS, Hekler A, Hauschild A, Sattler E, Sondermann W, et al. Artificial Intelligence and Its Effect on Dermatologists’ Accuracy in Dermoscopic Melanoma Image Classification: Web-Based Survey Study. J Med Internet Res. 2020 Sep 11;22(9):e18091.
17. Cui X, Wei R, Gong L, Qi R, Zhao Z, Chen H, et al. Assessing the effectiveness of artificial intelligence methods for melanoma: A retrospective review. J Am Acad Dermatol. 2019 Nov;81(5):1176–80.
18. Brinker TJ, Schmitt M, Krieghoff-Henning EI, Barnhill R, Beltraminelli H, Braun SA, et al. Diagnostic performance of artificial intelligence for histologic melanoma recognition compared to 18 international expert pathologists. J Am Acad Dermatol. 2022 Mar;86(3):640–2.
19. Jones OT, Matin RN, van der Schaar M, Prathivadi Bhayankaram K, Ranmuthu CKI, Islam MS, et al. Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings: a systematic review. Lancet Digit Health. 2022 Jun;4(6):e466–76.
20. Tschandl P, Rosendahl C, Kittler H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci Data. 2018 Aug 14;5(1):180161.
21. Talavera-Martinez L, Bibiloni P, González Hidalgo M. Comparative Study of Dermoscopic Hair Removal Methods. 2019. pp. 12–21.
22. Talavera-Martinez L, Bibiloni P, Gonzalez-Hidalgo M. Hair Segmentation and Removal in Dermoscopic Images Using Deep Learning. IEEE Access. 2021;9:2694–704.
23. Lee T, Ng V, Gallagher R, Coldman A, McLean D. Dullrazor®: A software approach to hair removal from images. Comput Biol Med. 1997 Nov 1;27(6):533–43.
24. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015. pp. 1–9.
25. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception Architecture for Computer Vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. pp. 2818–26.
26. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. pp. 770–8.
27. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely Connected Convolutional Networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017. pp. 2261–9.
28. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Advances in Neural Information Processing Systems. Curran Associates, Inc.; 2019.
29. Harris CR, Millman KJ, van der Walt SJ, Gommers R, Virtanen P, Cournapeau D, et al. Array programming with NumPy. Nature. 2020 Sep;585(7825):357–62.
33. Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. arXiv. 2017. http://arxiv.org/abs/1412.6980
34. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, et al. ImageNet Large Scale Visual Recognition Challenge. arXiv. 2015. http://arxiv.org/abs/1409.0575
35. Lundberg SM, Lee SI. A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems. Curran Associates, Inc.; 2017.
36. Explaining Machine Learning Models: A Non-Technical Guide to Interpreting SHAP Analyses. Available from: https://www.aidancooper.co.uk/a-non-technical-guide-to-interpreting-shap-analyses/
37. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb;542(7639):115–8.
38. Ech-Cherif A, Misbhauddin M, Ech-Cherif M. Deep Neural Network Based Mobile Dermoscopy Application for Triaging Skin Cancer Detection. 2019 2nd International Conference on Computer Applications & Information Security (ICCAIS); 2019. pp. 1–6.
39. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen LC. MobileNetV2: Inverted Residuals and Linear Bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018. pp. 4510–20.
40. Dai X, Spasić I, Meyer B, Chapman S, Andres F. Machine Learning on Mobile: An On-device Inference App for Skin Cancer Detection. 2019 Fourth International Conference on Fog and Mobile Edge Computing (FMEC); 2019. pp. 301–5.
41. Wei L, Ding K, Hu H. Automatic Skin Cancer Detection in Dermoscopy Images Based on Ensemble Lightweight Deep Learning Network. IEEE Access. 2020;8:99633–47.
42. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv. 2017. http://arxiv.org/abs/1704.04861
43. ISIC Archive. https://www.isic-archive.com
44. Nawaz M, Mehmood Z, Nazir T, Naqvi RA, Rehman A, Iqbal M, et al. Skin cancer detection from dermoscopic images using deep learning and fuzzy k-means clustering. Microsc Res Tech. 2022 Jan;85(1):339–51.
45. Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems. Curran Associates, Inc.; 2015.
