Age and Gender Recognition for Masked Face Using YOLO-X and CNN in Smart Advertisement Systems
DOI:
https://doi.org/10.21512/commit.v19i2.13717

Keywords:
Personalized Advertisement Board, Age and Gender Recognition, Masked and Unmasked Face, You Only Look Once-X (YOLO-X), Convolutional Neural Network (CNN)

Abstract
Conventional advertisement boards often fail to attract their target customers because they cannot display content relevant to individual viewers. To address this, a Smart Personalized Advertisement (SAVER) board employing an age and gender recognition system is proposed. In the post-pandemic era, where many people wear face masks, developing effective smart advertising systems has become even more challenging. This study evaluates and compares Convolutional Neural Network (CNN) architectures integrated with You Only Look Once-X (YOLO-X) for age and gender recognition in smart advertising applications that accommodate both masked and unmasked faces. The proposed framework first detects faces in an image using the YOLO-X model. The detected faces are then cropped to their bounding boxes and aligned to ensure consistent orientation. CNNs then classify each face's age group and gender, and the classification results determine which advertisements are displayed. This study uniquely addresses age and gender recognition for both masked and unmasked faces and implements the solution in a real-time advertising system. The proposed system achieved 68% precision in delivering personalized advertisements, demonstrating its effectiveness in real-world public settings. In summary, this research contributes to the development of intelligent public display systems capable of delivering demographically aware content.
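The pipeline described in the abstract (YOLO-X face detection, crop-and-align, CNN age/gender classification, advertisement selection) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the detector and classifiers are stubbed out, and all function names, the age-group labels, and the ad-category table are assumptions for the sake of the example.

```python
# Hypothetical sketch of a SAVER-style pipeline:
# YOLO-X detection -> crop/align -> CNN age/gender classification -> ad selection.
# Model inference is stubbed; labels and the ad table are illustrative only.
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Face:
    box: Tuple[int, int, int, int]  # x, y, w, h from the detector
    age_group: str = ""
    gender: str = ""

# Illustrative mapping from (age group, gender) to an ad category.
AD_TABLE = {
    ("child", "male"): "toys",
    ("child", "female"): "toys",
    ("adult", "male"): "electronics",
    ("adult", "female"): "fashion",
    ("senior", "male"): "health",
    ("senior", "female"): "health",
}

def detect_faces(frame) -> List[Face]:
    """Placeholder for YOLO-X inference; returns a stubbed bounding box."""
    return [Face(box=(10, 20, 64, 64))]

def classify(face: Face) -> Face:
    """Placeholder for the CNN age-group and gender classifiers."""
    face.age_group, face.gender = "adult", "female"
    return face

def select_ad(faces: List[Face]) -> str:
    """Pick the ad category for the most common (age group, gender) pair."""
    if not faces:
        return "generic"
    key, _ = Counter((f.age_group, f.gender) for f in faces).most_common(1)[0]
    return AD_TABLE.get(key, "generic")

faces = [classify(f) for f in detect_faces(frame=None)]
print(select_ad(faces))  # -> fashion
```

In a real deployment the stubs would be replaced by actual model inference on camera frames, and the majority vote over detected viewers keeps the displayed ad stable when several people face the board at once.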
License
Copyright (c) 2025 Handoko, Aaron Berliano Handoko, Darmawan Utomo

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
a. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License - Share Alike that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
c. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published Open Access are immediately and permanently free for everyone to read and download. We are continuously working with our author communities to select the best choice of license options, currently defined for this journal as Creative Commons Attribution-ShareAlike (CC BY-SA).