Independent Researcher (Data & Generative AI), CA, USA.
International Journal of Science and Research Archive, 2025, 16(02), 1519-1530
Article DOI: 10.30574/ijsra.2025.16.2.2379
Received on 05 July 2025; revised on 22 August 2025; accepted on 25 August 2025
The rapid development and deployment of large language models (LLMs) has transformed natural language processing (NLP), yet the strengths and weaknesses of individual models cannot be compared directly in the absence of standardized evaluation frameworks. This study presents a multi-dimensional evaluation framework for the systematic assessment of LLMs across performance and usability dimensions. Drawing on benchmarking practices, comparative analysis metrics, and recent trends in interpretability and fairness assessment, we propose a modular architecture that evaluates LLMs along several axes: task accuracy, robustness, explainability, efficiency, and bias mitigation. The framework combines quantitative and qualitative scoring techniques, using standardized datasets, cross-cultural benchmarks, and equity tests to produce scores along each dimension. Applying the framework to three state-of-the-art LLMs (GPT-4, PaLM, and LLaMA), we find that performance trade-offs vary substantially across models and argue that model selection should be context-aware. The findings show that some models, although highly accurate overall, are outperformed by others in interpretability or computational cost, underscoring the insufficiency of single-metric assessment. The proposed framework is intended to help academic researchers, industry practitioners, and policymakers evaluate and deploy LLMs reliably and reproducibly across a variety of NLP use cases. Future work will extend the framework to multi-modal and federated settings, with support for real-time adaptability and the integration of user feedback.
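The full scoring procedure is detailed in the article itself; as a rough, hypothetical illustration of the context-aware, multi-dimensional aggregation described in the abstract, the sketch below combines normalized per-dimension scores with evaluator-chosen weights. All dimension names, weights, and numbers here are illustrative assumptions, not values drawn from the paper.

```python
# Hypothetical sketch of multi-dimensional, context-aware model scoring.
# Dimension names, weights, and scores are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass
class DimensionScore:
    name: str       # e.g. "accuracy", "robustness", "explainability"
    score: float    # assumed already normalized to [0, 1] by the benchmark harness
    weight: float   # context-dependent weight chosen by the evaluator


def composite_score(dimensions: list[DimensionScore]) -> float:
    """Weighted average across evaluation dimensions for one model."""
    total_weight = sum(d.weight for d in dimensions)
    if total_weight == 0:
        raise ValueError("At least one dimension must carry a nonzero weight")
    return sum(d.score * d.weight for d in dimensions) / total_weight


# Example: a latency-sensitive deployment weights efficiency more heavily,
# which can reorder models that look similar on accuracy alone.
model_a = [
    DimensionScore("accuracy", 0.92, 0.4),
    DimensionScore("robustness", 0.81, 0.2),
    DimensionScore("explainability", 0.55, 0.1),
    DimensionScore("efficiency", 0.40, 0.2),
    DimensionScore("bias_mitigation", 0.70, 0.1),
]
print(f"Composite score: {composite_score(model_a):.3f}")
```

Changing the weights to reflect the deployment context (for example, prioritizing efficiency for on-device use or explainability for regulated settings) is one simple way the abstract's argument against single-metric assessment can be made operational.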
Large Language Model Evaluation; Comparing Language Models; Language Model Benchmarking; Evaluating NLP Models; Language Model Comparison Framework
Furhad Parvaiz Qadri. Model evaluation framework to compare large language models. International Journal of Science and Research Archive, 2025, 16(02), 1519-1530. Article DOI: https://doi.org/10.30574/ijsra.2025.16.2.2379.
Copyright © 2025 Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.