Diseases Classification for Tea Plant Using Concatenated Convolution Neural Network
DOI: https://doi.org/10.21512/commit.v13i2.5886

Keywords: Concatenated Convolution Neural Network, Classification, GoogLeNet, Xception, Inception-ResNet-v2

Abstract
Plant diseases can cause a significant decrease in tea crop production, and early disease detection can help to minimize the loss. For tea plants, experts can identify diseases by visual inspection of the leaves, but providing experts for disease identification can be very costly. Machine learning can be implemented to provide automatic plant disease detection, and deep learning is currently the state of the art for object identification in computer vision. In this study, the researchers propose Convolutional Neural Networks (CNNs) for tea disease detection, focusing on the concatenation of three CNN architectures: GoogLeNet, Xception, and Inception-ResNet-v2. A total of 4,727 images of tea leaves are collected, comprising three types of diseases that commonly occur in Indonesia and a healthy class. The experimental results confirm the effectiveness of the concatenated CNN for tea disease detection, achieving an accuracy of 89.64%.
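The core idea of the concatenated approach is to pool the feature maps of several backbone networks into fixed-length vectors, join those vectors into one descriptor, and classify the descriptor into the four classes (three diseases plus healthy). The following is a minimal NumPy sketch of that feature-concatenation idea; the three "backbones" here are stand-ins with random filters, not the actual pretrained GoogLeNet, Xception, and Inception-ResNet-v2 models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(image, n_filters, seed):
    """Stand-in for a pretrained CNN backbone: random 3x3 filters,
    ReLU, then global average pooling. Illustrative only."""
    r = np.random.default_rng(seed)
    filters = r.standard_normal((n_filters, 3, 3))
    h, w = image.shape
    feats = np.empty(n_filters)
    for k, f in enumerate(filters):
        # valid 3x3 convolution over the whole image
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = np.sum(image[i:i + 3, j:j + 3] * f)
        feats[k] = np.maximum(out, 0).mean()  # ReLU + global average pool
    return feats

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# A toy grayscale "leaf" image.
image = rng.random((32, 32))

# Three stand-ins for the GoogLeNet, Xception, and
# Inception-ResNet-v2 feature extractors.
f1 = backbone_features(image, 8, seed=1)
f2 = backbone_features(image, 8, seed=2)
f3 = backbone_features(image, 8, seed=3)

# Concatenate the pooled feature vectors into one descriptor ...
features = np.concatenate([f1, f2, f3])  # shape (24,)

# ... and classify into 4 classes (3 diseases + healthy)
# with a linear softmax head.
W = rng.standard_normal((4, features.size))
probs = softmax(W @ features)
print(features.shape, probs.shape)
```

In a real implementation, each backbone would be a pretrained network truncated before its classification layer, and the linear head would be trained on the concatenated descriptors of the tea-leaf images.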
License
Authors who publish with this journal agree to the following terms:
a. Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
c. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published Open Access are immediately and permanently free for everyone to read and download. We continuously work with our author communities to select the best license options, which for this journal are currently defined as the Creative Commons Attribution-ShareAlike (CC BY-SA) license.