Comparative Performance Analysis of Object-Oriented Programming and Data-Oriented Programming in TensorFlow

Authors

  • Mangapul Siahaan
  • Jefriyanto Chandra, Universitas Internasional Batam
  • Muhamad Dody Firmansyah

Abstract

The rapid advancement of deep learning significantly increases computational demands, making performance optimization essential for model scalability and deployment. While numerous studies optimize neural network architectures, the effect of different programming paradigms on computational efficiency remains insufficiently explored. This study compares the Object-Oriented Programming (OOP) and Data-Oriented Programming (DOP) paradigms in TensorFlow-based deep learning workflows, focusing on performance across four processing phases (build, compile, train, and evaluate) in a controlled experimental environment with repeated iterations and systematic measurements. Both paradigms are implemented with an identical Convolutional Neural Network (CNN) architecture trained on the CIFAR-100 image dataset over thirty controlled experimental iterations. A custom profiler integrating the Python System and Process Utilities (psutil) library and the NVIDIA Management Library (pynvml) monitors system performance in real time, capturing CPU and GPU utilization as well as memory usage. The results reveal that DOP achieves better resource efficiency, with lower memory usage (549.98 MB versus 676.25 MB), higher GPU utilization (64.68% versus 61.08%), and faster evaluation execution (1.50 seconds versus 2.59 seconds), while also attaining higher model accuracy (32.38% versus 28.08%). In contrast, OOP benefits from TensorFlow’s Sequential API optimizations, resulting in faster training times but greater CPU and memory consumption. These findings indicate that DOP provides superior runtime efficiency and offers practical benefits for performance-critical deep learning applications.
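As an illustration of the measurement setup described above, the following is a minimal sketch of how a phase-level profiler can be assembled from psutil and pynvml. The function name profile_phase, the use of process resident set size (RSS) as the memory metric, and the point at which GPU utilization is sampled are assumptions for illustration only, not the authors' implementation.

import time
import psutil
import pynvml

pynvml.nvmlInit()
gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU on the machine
this_process = psutil.Process()                     # the current Python process

def profile_phase(name, fn, *args, **kwargs):
    """Run one workflow phase (build, compile, train, or evaluate) and report
    wall-clock time, CPU utilization, process memory, and GPU utilization."""
    psutil.cpu_percent(interval=None)               # reset the CPU-percent counter
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    cpu_pct = psutil.cpu_percent(interval=None)     # CPU% accumulated since the reset
    mem_mb = this_process.memory_info().rss / (1024 ** 2)
    gpu_pct = pynvml.nvmlDeviceGetUtilizationRates(gpu_handle).gpu
    print(f"{name}: {elapsed:.2f} s | CPU {cpu_pct:.1f}% | "
          f"RSS {mem_mb:.1f} MB | GPU {gpu_pct}%")
    return result

# Example usage with a compiled Keras model (hypothetical variable names):
# history = profile_phase("train", model.fit, x_train, y_train, epochs=10)
# loss, acc = profile_phase("evaluate", model.evaluate, x_test, y_test)

A wrapper of this kind makes the four phases directly comparable between the OOP and DOP implementations, since every phase is timed and sampled through the same code path.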

Published

2026-02-12

How to Cite

Siahaan, M., Chandra, J., & Firmansyah, M. D. (2026). Comparative Performance Analysis of Object-Oriented Programming and Data-Oriented Programming in TensorFlow. ComTech: Computer, Mathematics and Engineering Applications, 17(1). Retrieved from https://journal.binus.ac.id/index.php/comtech/article/view/14648