Power-Efficient Surveillance Camera Using Sleep Mode and YOLOv3 Model-Based Edge Computing
Keywords:
Power Efficient, Surveillance Camera, Sleep Mode, YOLOv3, Edge Computing
Abstract
Surveillance cameras play a vital role in a wide range of monitoring applications, particularly in ensuring real-time security and observation. However, conventional surveillance systems often face limitations in energy efficiency, especially when deployed in remote locations or powered by batteries. Although many surveillance cameras offer high-resolution capture, few incorporate power-management strategies to optimize energy usage. This research presents the design and implementation of a low-power surveillance camera system based on the ESP32-CAM platform, incorporating a sleep mode to enhance power efficiency. Two operational scenarios are tested: one with sleep mode enabled and one without. Experimental results show that the camera without sleep mode achieves a higher frame rate, up to 17.01 FPS, compared with a maximum of 3.53 FPS for the sleep-enabled camera. Despite the reduced frame rate, the system successfully performs object detection using the YOLOv3 model processed via edge computing. Furthermore, the average wake-up time from sleep mode is 1.414 seconds, indicating a responsive system suitable for low-power embedded applications. In terms of power consumption, the sleep-enabled device draws an average of 3475.543 mW over 2 hours of operation, compared to 5561.639 mW for the device without sleep mode, a saving of approximately 37.5%. These findings confirm that implementing sleep mode is effective in managing power consumption without compromising core surveillance functionality. The research contributes to the development of sustainable, energy-efficient monitoring solutions and highlights the potential for further enhancement through advanced edge computing platforms in future work.
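The reported 37.5% saving follows directly from the two measured average power figures. A minimal back-of-the-envelope check, using only the values stated in the abstract:

```python
# Average power draw measured over the 2-hour runs (figures from the abstract).
power_no_sleep_mw = 5561.639   # sleep mode disabled
power_sleep_mw = 3475.543      # sleep mode enabled

# Relative saving obtained by enabling sleep mode.
saving = (power_no_sleep_mw - power_sleep_mw) / power_no_sleep_mw
print(f"Energy saving: {saving:.1%}")  # → Energy saving: 37.5%
```

Since both scenarios run for the same 2-hour window, the ratio of average power draws equals the ratio of energy consumed, so the percentage holds for energy as well.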
License
Copyright (c) 2026 Mhd. Idham Khalif, Raden Deiny Mardian, Ade Faiz Kurnia Putra, M. Dhanu Wicaksono, Tirta Akdi Toma Mesoya Hulu, Listyo Edi Prabowo

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
a. Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License - Share Alike that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
b. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
c. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published Open Access will be immediately and permanently free for everyone to read and download. We are continuously working with our author communities to select the best choice of license options, currently defined for this journal as follows: Creative Commons Attribution-ShareAlike (CC BY-SA).