The Impact of Parameter Scaling: Analysis of Specific Large Language Model Capabilities
DOI: https://doi.org/10.21512/ijcshai.v3i1.15119
Keywords: LLM, parameter scaling, model efficiency, capability evaluation, inference speed
Abstract
Large Language Models (LLMs) are now highly diverse; some of the most prominent include ChatGPT, Gemini, Microsoft Copilot, Claude Sonnet, Grok, and DeepSeek. This research aims to determine how efficient such models can be, based on the strengths conferred by their training scale. We examine the impact of parameter scaling on the outputs of each local model under test, limiting the range of parameter counts considered and classifying the questions posed into categories. By asking every model the same set of questions, we identify which local LLMs perform better in each category and evaluate each model objectively against the results. The study therefore seeks to establish the correlation between parameter scale and output quality. We also hope the findings help users select an AI model suited to their needs and broaden their understanding of AI so they can work more efficiently and accurately. We conclude that local LLMs with larger parameter counts are not always better or more efficient: Gemma3 with 12B parameters, for example, did not produce better results than the 4B-parameter Gemma3 model. Alternatively, on hardware comparable to ours, GPT-oss (openai/gpt-oss-20B) and Qwen3 (Qwen/Qwen3-4B and Qwen/Qwen3-8B) deliver good results in both reasoning and inference speed.
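As a rough illustration of the kind of comparison described above, the sketch below times local generation for two of the models named in the abstract using the Hugging Face transformers library. It is a minimal sketch, not the evaluation harness used in the study: the question list, generation length, and the choice of transformers as the local runtime are assumptions made for illustration, and the model identifiers are copied from the abstract.

```python
# Minimal sketch of a local inference-speed comparison, assuming the Hugging Face
# transformers runtime. The question list and generation settings are illustrative
# placeholders, not the study's actual classified question set.
import time

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_IDS = ["Qwen/Qwen3-4B", "Qwen/Qwen3-8B"]  # identifiers as written in the abstract
QUESTIONS = [
    "Explain the difference between supervised and unsupervised learning.",
    "A train travels 180 km in 2.5 hours. What is its average speed?",
]


def benchmark(model_id: str, questions: list[str], max_new_tokens: int = 256) -> None:
    """Generate an answer to each question and report tokens per second."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    for question in questions:
        # Build a single-turn chat prompt using the model's own template.
        input_ids = tokenizer.apply_chat_template(
            [{"role": "user", "content": question}],
            add_generation_prompt=True,
            return_tensors="pt",
        ).to(model.device)
        start = time.perf_counter()
        output = model.generate(input_ids, max_new_tokens=max_new_tokens)
        elapsed = time.perf_counter() - start
        new_tokens = output.shape[-1] - input_ids.shape[-1]
        print(f"{model_id}: {new_tokens / elapsed:.1f} tok/s on '{question[:40]}...'")


if __name__ == "__main__":
    for model_id in MODEL_IDS:
        benchmark(model_id, QUESTIONS)
```

In practice a timing loop like this would be paired with a scoring step for answer quality, since the study weighs reasoning quality alongside inference speed.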
License
Copyright (c) 2026 Ariya Uttama Putera, Felix Marcellino; Sonya Rapinta Manalu, Keenan Ario Muhamad

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike (CC BY-SA) license that allows others to share the work with an acknowledgment of its authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
USER RIGHTS
All articles published open access are immediately and permanently free for everyone to read and download. We continuously work with our author communities to select the best license options; for this journal, the license is currently Creative Commons Attribution-ShareAlike (CC BY-SA).