Training CNN-based Model on Low Resource Hardware and Small Dataset for Early Prediction of Melanoma from Skin Lesion Images
DOI: https://doi.org/10.21512/emacsjournal.v5i2.9904
Keywords: Melanoma, CNN, Pretrained
Abstract
Melanoma is a rare type of skin cancer that can spread quickly to other skin layers and to the organs beneath. Melanoma is known to be curable only if it is diagnosed at an early stage, which makes accurate early prediction essential for reducing the number of deaths it causes. Deep learning methods have recently shown promising performance in classifying images accurately; however, they require many samples to generalize well, while the number of melanoma sample images is limited. To address this issue, transfer learning has been widely adopted to transfer the knowledge of a pretrained model to another domain or to a new dataset with fewer samples or a different task. This study aims to determine which method best achieves this for early melanoma prediction from skin lesion images. We investigated three pretrained and one non-pretrained image classification models. Specifically, we chose pretrained models that are efficient to train on a small training sample and low hardware resources. The results show that, with limited sample images and low hardware resources, the pretrained models yield better overall accuracy and recall than the non-pretrained model. This suggests that pretrained models are more suitable for this task under constrained data and hardware resources.
License
Copyright (c) 2023 Engineering, Mathematics and Computer Science (EMACS) Journal
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
a. Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License that allows others to share the work with an acknowledgment of the work's authorship and its initial publication in this journal.
b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
c. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published Open Access are immediately and permanently free for everyone to read and download. We continuously work with our author communities to select the best license options, currently defined for this journal as Creative Commons Attribution-ShareAlike (CC BY-SA).