Universal Face Recognition Using Multiple Deep Learning Agent and Lazy Learning Algorithm
DOI: https://doi.org/10.21512/commit.v15i2.6688

Keywords: Face Recognition, Multiple Deep Learning Agent, Lazy Learning Algorithm

Abstract
Mainstream face recognition systems exhibit a disparity in accuracy when recognizing faces from different racial and ethnic backgrounds. This problem is caused by imbalanced racial representation in mainstream datasets. The research therefore proposes a multi-agent system to overcome it. The system employs one face recognition agent per race to produce the data encodings used in the classification process. The first step in implementing the system is to develop a race classifier; the number of races is arbitrary and can be determined case by case. The race classifier decides which face recognition agent will attempt to recognize the face in the query. Each face recognition agent is trained on a different dataset according to its assigned race, so each plays a distinct role in the system. The research utilizes lazy learning algorithms as the final classifier to accommodate a database with a constant flow of new data. The experiment divides the data into three racial groups: Black, Asian, and White. It concludes that dividing the face recognition task across several race-specific models performs better than a single model trained on the same dataset with the same imbalance in racial representation. The multiple-agent system achieves 85% on the Face Recognition Rate (FRR), while the single-pipeline model achieves only 80.83% on the same dataset.
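The pipeline the abstract describes (race classifier routes the query to a race-specific encoder, and a lazy learner matches the embedding) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `race_classifier`, `encoders`, and the 1-NN matcher are hypothetical placeholders, and the paper's actual agents are deep networks rather than the plain functions shown here.

```python
import numpy as np

class MultiAgentFaceRecognizer:
    """Hypothetical sketch of the multi-agent scheme: a race classifier
    routes each face image to a race-specific encoder, and a lazy k-NN
    classifier matches the resulting embedding against stored encodings."""

    def __init__(self, race_classifier, encoders, k=1):
        self.race_classifier = race_classifier      # image -> race label
        self.encoders = encoders                    # race label -> encoder fn
        self.k = k
        # Per-race gallery: lazy learning just stores (embedding, identity) pairs.
        self.gallery = {race: ([], []) for race in encoders}

    def enroll(self, image, identity):
        race = self.race_classifier(image)
        embedding = self.encoders[race](image)
        embeddings, identities = self.gallery[race]
        embeddings.append(np.asarray(embedding))
        identities.append(identity)

    def recognize(self, image):
        race = self.race_classifier(image)
        embedding = np.asarray(self.encoders[race](image))
        embeddings, identities = self.gallery[race]
        # k-NN by Euclidean distance within the assigned race's gallery.
        distances = [np.linalg.norm(embedding - e) for e in embeddings]
        nearest = np.argsort(distances)[: self.k]
        votes = [identities[i] for i in nearest]
        return max(set(votes), key=votes.count)
```

Because the final classifier only stores and compares encodings, new identities can be enrolled by appending to the gallery, with no retraining of the deep agents — which is the motivation the abstract gives for choosing a lazy learner.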
License
Copyright (c) 2021 Kenny Vincent, Yosi Kristian
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
a. Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
c. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published Open Access will be immediately and permanently free for everyone to read and download. We are continuously working with our author communities to select the best choice of license options, currently defined for this journal as follows: Creative Commons Attribution-ShareAlike (CC BY-SA).