Obstacle Avoidance Method using Stereo Camera for Autonomous Robot
DOI: https://doi.org/10.21512/ijcshai.v2i2.14617

Keywords: autonomous robot, obstacle avoidance, obstacle detection, stereo camera

Abstract
This paper presents the development and implementation of an obstacle avoidance system for an autonomous robot using a stereo camera setup. The system enables the robot to navigate its environment safely by identifying obstacles and making real-time movement decisions based on depth perception. The stereo vision configuration allows the robot to estimate distances through disparity computation and polynomial regression modeling. The proposed algorithm performs image rectification, stereo matching, and depth estimation to generate disparity maps that represent obstacle distances. Using this information, the robot classifies detected objects as close, medium, or far and selects the appropriate action, such as stopping, turning left, or turning right. Experimental results show that the robot can detect and avoid obstacles in a variety of indoor settings. The regression model used for depth estimation achieved high accuracy, with an R² value of 0.97 and a small mean absolute error, indicating reliable distance prediction. The results confirm that combining stereo vision with regression-based distance estimation yields a robust and cost-effective approach to autonomous navigation. This study contributes to the ongoing development of intelligent robotic systems capable of autonomous decision-making with limited human oversight.
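To make the pipeline described in the abstract concrete, the sketch below shows how disparity computation, regression-based depth estimation, and the close/medium/far decision rule could fit together in Python with OpenCV. This is a minimal sketch, not the authors' implementation: the SGBM matcher, the calibration pairs, the central region of interest, and the distance thresholds are all assumptions made for illustration, and the input image pair is assumed to be already rectified.

```python
import cv2
import numpy as np

# Depth model: polynomial regression from disparity (px) to distance (m).
# The paper reports a polynomial regression fit with R^2 = 0.97; the
# calibration pairs below are placeholders, not the authors' measurements.
calib_disparity = np.array([55.0, 40.0, 30.0, 22.0, 15.0, 10.0])  # pixels
calib_distance = np.array([0.5, 0.8, 1.2, 1.8, 2.5, 3.5])         # metres
depth_model = np.polyfit(calib_disparity, calib_distance, deg=2)

def estimate_distance(disparity_px: float) -> float:
    """Map a disparity value (pixels) to an estimated distance (metres)."""
    return float(np.polyval(depth_model, disparity_px))

# Disparity map via OpenCV semi-global block matching. The abstract does
# not name a specific matcher, so SGBM here is an assumption.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)

def avoidance_action(left_bgr, right_bgr,
                     near_m: float = 0.6, mid_m: float = 1.5) -> str:
    """Return 'stop', 'turn_left', 'turn_right', or 'forward'.

    Assumes the pair is rectified; near_m/mid_m are illustrative thresholds.
    """
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    disp = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM scales by 16

    h, w = disp.shape
    roi = disp[h // 3: 2 * h // 3, w // 3: 2 * w // 3]  # region ahead of the robot
    valid = roi[roi > 0]
    if valid.size == 0:
        return "forward"  # no reliable disparity: assume the path is clear

    distance = estimate_distance(float(np.median(valid)))
    if distance < near_m:          # close -> stop
        return "stop"
    if distance < mid_m:           # medium -> steer toward free space
        lhalf, rhalf = disp[:, : w // 2], disp[:, w // 2:]
        left_d = np.median(lhalf[lhalf > 0]) if (lhalf > 0).any() else 0.0
        right_d = np.median(rhalf[rhalf > 0]) if (rhalf > 0).any() else 0.0
        return "turn_right" if left_d > right_d else "turn_left"
    return "forward"               # far -> keep moving
```

The turning rule here simply steers toward the image half with the lower median disparity (i.e., more open space); the paper's actual decision logic may differ.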
License
Copyright (c) 2025 Nabeel Kahlil Maulana, Widodo Budiharto, Hanis Amalia Saputri

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.