Generative AI for software testing: Harnessing large language models for automated and intelligent quality assurance
Subham Dandotiya
Independent Researcher, MS in Information Systems, University of Utah, Salt Lake City, Utah.
International Journal of Science and Research Archive, 2025, 14(01), 1931-1935
Article DOI: 10.30574/ijsra.2025.14.1.0266
Received on 14 December 2024; revised on 27 January 2025; accepted on 30 January 2025
Software testing is indispensable for ensuring that modern applications meet rigorous standards of functionality, reliability, and security. However, the complexity and pace of contemporary software development often overwhelm traditional and even AI-based testing approaches, leading to gaps in coverage, delayed feedback, and increased maintenance costs. Recent breakthroughs in Generative AI, particularly Large Language Models (LLMs), offer a new avenue for automating and optimizing testing processes. These models can dynamically generate test cases, predict system vulnerabilities, handle continuous software changes, and reduce the burden on human testers. This paper explores how Generative AI complements and advances established AI-driven testing frameworks, outlines the associated challenges of data preparation and governance, and proposes future directions for fully autonomous, trustworthy testing solutions.
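To make the test-generation workflow described above concrete, the sketch below prompts an LLM to draft pytest cases for a small function. It is an illustrative assumption, not the paper's implementation: the OpenAI Python SDK, the model name, and the function under test are all stand-ins chosen for the example.

```python
# Minimal sketch of LLM-driven test-case generation (illustrative only).
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SOURCE_UNDER_TEST = '''
def normalize_email(raw: str) -> str:
    """Lower-case an email address and strip surrounding whitespace."""
    return raw.strip().lower()
'''

PROMPT = (
    "Write pytest unit tests for the following function. "
    "Cover typical inputs and edge cases (empty string, mixed case, "
    "surrounding whitespace), and return only runnable Python code.\n\n"
    + SOURCE_UNDER_TEST
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": PROMPT}],
)

# Generated tests still require human or automated review before joining
# the suite -- the data-preparation and governance concern the abstract raises.
print(response.choices[0].message.content)
```

In a CI pipeline, a step like this would typically run on changed files, with the generated tests validated (e.g., executed against the current build) before being committed, rather than trusted blindly.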
Artificial Intelligence; Generative AI; Large Language Models (LLMs); Software Testing; Test Automation; Quality Assurance; DevOps
Subham Dandotiya. Generative AI for software testing: Harnessing large language models for automated and intelligent quality assurance. International Journal of Science and Research Archive, 2025, 14(01), 1931-1935. Article DOI: https://doi.org/10.30574/ijsra.2025.14.1.0266.
Copyright © 2025. The author(s) retain the copyright of this article, which is published under the terms of the Creative Commons Attribution License 4.0.