Automatic Fish Identification Using Single Shot Detector
DOI: https://doi.org/10.21512/commit.v16i2.8126

Keywords: Automatic Fish Identification, Single Shot Detector, Sorting Machine

Abstract
Bengkulu's vast waters and long coastline make it one of the provinces with a high diversity of marine fish. Although the region is predicted to have high diversity, data on the marine fish of the Bengkulu coast are still very limited, especially for the detection of fish species. With the growth of computing capabilities, fish classification can now be performed with the help of computers. The research presents a new method for automating the detection of marine fish with the Single Shot Detector, a relatively simple object detection algorithm used here with a MobileNet architecture. The Single Shot Detector in the research uses six extra convolution layers, three of which generate six predictions for each cell; in total, the model generates 8,732 predictions. The research succeeds in identifying seven of ten genera of marine fish using a dataset of 1,000 images, split into 90% training data and 10% validation data. Each fish genus has 100 images taken from different shooting angles and backgrounds. The results show that the Single Shot Detector model with the MobileNet architecture achieves an accuracy of 52.48% for the identification of the ten genera of marine fish.
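As a rough illustration of where the 8,732 figure comes from, the sketch below tallies the default boxes of a conventional SSD300 prediction head. The feature-map sizes and boxes-per-cell values are the standard SSD300 layout and are assumed here for illustration; they are not taken from the paper itself.

```python
# Sketch: tally of default (prior) boxes for a standard SSD300-style head.
# Assumption: the conventional SSD300 feature-map sizes and boxes-per-cell;
# the paper's MobileNet-SSD variant may differ, so treat this as illustrative.

# (feature-map side length, default boxes predicted per cell)
ssd300_head = [
    (38, 4),  # earliest prediction layer: 4 boxes per cell
    (19, 6),  # three layers predict 6 boxes per cell, as noted in the abstract
    (10, 6),
    (5, 6),
    (3, 4),
    (1, 4),   # final 1x1 feature map
]

total_predictions = sum(side * side * boxes for side, boxes in ssd300_head)
print(total_predictions)  # 8732 -- matches the 8,732 predictions in the abstract
```

Each cell of every feature map contributes its per-cell boxes, so the grand total is the sum of (side × side × boxes) over all six prediction layers.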
License
Copyright (c) 2022 Arie Vatresia, Ruvita Faurina, Vivin Purnamasari, Indra Agustian
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
a. Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
c. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published Open Access are immediately and permanently free for everyone to read and download. We are continuously working with our author communities to select the best choice of license options, currently defined for this journal as the Creative Commons Attribution-ShareAlike (CC BY-SA) license.