Department of Business, Lamar University, Texas, USA.
International Journal of Science and Research Archive, 2025, 14(03), 435-443
Article DOI: 10.30574/ijsra.2025.14.3.0677
Received on 29 January 2025; revised on 06 March 2025; accepted on 08 March 2025
Artificial Intelligence (AI) has become an integral part of decision-making processes across various sectors, including healthcare, finance, criminal justice, and autonomous vehicles. While AI offers significant advantages in terms of efficiency, accuracy, and scalability, it also raises critical ethical concerns, particularly regarding the balance between automation and human oversight. This research article explores the ethical implications of AI-driven decision-making, focusing on the need for a balanced approach that leverages the strengths of both AI and human judgment. We present a detailed analysis of the ethical challenges, propose a framework for balancing automation and human oversight, and provide empirical data to support our arguments. The findings suggest that a hybrid model combining AI automation with human oversight is essential to ensure fairness, transparency, and accountability in AI-driven decisions.
Artificial intelligence; Explainable AI; Business decision-making; Human-in-the-loop (HITL); Algorithmic bias; Ethical frameworks; Governance models
Rafiul Azim Jowarder. The Ethics of AI Decision-Making: Balancing Automation, Explainable AI, and Human Oversight. International Journal of Science and Research Archive, 2025, 14(03), 435-443. Article DOI: https://doi.org/10.30574/ijsra.2025.14.3.0677.
Copyright © 2025: Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.