Exploring the Best Parameters of Deep Learning for Breast Cancer Classification System
DOI:
https://doi.org/10.21512/commit.v16i2.8174

Keywords:
Best Parameter, Deep Learning, Breast Cancer Classification System

Abstract
Breast cancer is one of the deadliest cancers in the world. Detecting its signs as early as possible is essential to increase the survival rate. However, detecting the signs of breast cancer from diagnostic imaging with machine or deep learning algorithms is not trivial: slight changes in the illumination of the scanned area can significantly affect the automatic classification process. Hence, the research aims to propose an automatic breast cancer classifier for digital medical imaging, such as Positron Emission Tomography (PET), mammogram X-ray, and Magnetic Resonance Imaging (MRI) images. The research proposes a modified deep learning architecture with five different settings to model automatic breast cancer classifiers. In addition, five machine learning algorithms are also explored to model the classifiers. The dataset used in the research is the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM). A total of 2,676 mammogram images are used and split 80%:20% (2,141:535) into training and testing datasets. The results demonstrate that the model trained with eight Convolutional Neural Network (CNN) layers (SET-8) achieves the best accuracy scores of 94.89% and 93.75% on the training and validation datasets, respectively.
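For readers who want a concrete starting point, the sketch below shows what an eight-convolutional-layer classifier with an 80%:20% training/validation split might look like in Keras. It is only an illustration under assumptions: the abstract does not disclose the actual SET-8 layer widths, input resolution, preprocessing, or hyperparameters, and the cbis_ddsm/ directory layout used here is hypothetical.

```python
# Illustrative sketch only. The abstract does not specify the exact SET-8
# architecture, preprocessing, image size, or hyperparameters, so the layer
# widths, input resolution, and directory layout below are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_eight_layer_cnn(input_shape=(224, 224, 1), num_classes=2):
    """A CNN with eight convolutional layers (hypothetical configuration)."""
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Four blocks of two Conv2D layers each (eight convolutional layers in
    # total), each block followed by max pooling to halve the resolution.
    for block_filters in (32, 64, 128, 256):
        model.add(layers.Conv2D(block_filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.Conv2D(block_filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# 80%:20% training/validation split, assuming the mammogram images are stored
# in class-named sub-folders under a hypothetical "cbis_ddsm/" directory.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cbis_ddsm/", validation_split=0.2, subset="training", seed=42,
    color_mode="grayscale", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cbis_ddsm/", validation_split=0.2, subset="validation", seed=42,
    color_mode="grayscale", image_size=(224, 224), batch_size=32)

model = build_eight_layer_cnn()
model.fit(train_ds, validation_data=val_ds, epochs=20)
```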
License
Copyright (c) 2022 Andry Chowanda
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.