International Journal of Science and Research Archive
International, peer-reviewed, open-access journal. ISSN Approved Journal No. 2582-8185


Mitigating adversarial threats in deep learning models trained on sensitive imaging and sequencing datasets within hospital infrastructures


Nnamdi Rex Onwubuche *

Saunders College of Business, Rochester Institute of Technology, USA.

Review Article

International Journal of Science and Research Archive, 2025, 16(01), 1146-1167

Article DOI: 10.30574/ijsra.2025.16.1.2128

DOI url: https://doi.org/10.30574/ijsra.2025.16.1.2128

Received on 02 June 2025; revised on 13 July 2025; accepted on 15 July 2025

As deep learning continues to transform clinical diagnostics, models trained on sensitive imaging and sequencing datasets are increasingly deployed within hospital infrastructures for tasks such as tumor classification, variant calling, and disease risk prediction. While these models offer remarkable accuracy and efficiency, they also present new vulnerabilities to adversarial threats: maliciously crafted inputs designed to deceive AI systems without perceptibly altering visual or genomic content. Such attacks can compromise diagnostic reliability, patient safety, and institutional trust, particularly when targeting critical applications involving radiology scans or genetic data. This paper investigates strategies for mitigating adversarial threats in deep learning models operating within hospital ecosystems. We explore how attacks such as the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and adversarial patching exploit model interpretability gaps and high-dimensional data sparsity in medical domains. Emphasis is placed on the unique risks posed to models trained on radiological images (e.g., CT, MRI) and sequencing outputs (e.g., variant allele frequencies, expression matrices) that contain highly sensitive and potentially re-identifiable patient information. We present a multi-tiered defense framework incorporating adversarial training, input preprocessing techniques, certified robustness estimators, and gradient masking to strengthen model resilience. Additionally, we introduce a hospital-specific deployment architecture that includes real-time adversarial input detection using AI-enhanced monitoring agents and edge-layer validation. This design ensures localized protection while minimizing latency in high-throughput clinical workflows. By focusing on healthcare-specific deep learning vulnerabilities and aligning with clinical data governance standards, this research contributes a secure deployment pathway for trustworthy AI applications in precision medicine and hospital cybersecurity.
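The gradient-based attacks named in the abstract have compact formulations, and adversarial training is the most direct of the listed defenses. The PyTorch sketch below pairs a single-step FGSM attack with a training step on a mix of clean and perturbed batches; the epsilon value, loss choice, and 50/50 mixing ratio are illustrative assumptions for exposition, not the paper's implementation.

```python
# Minimal FGSM attack and adversarial-training sketch (illustrative, not
# the paper's implementation). Assumes inputs normalized to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an FGSM adversarial example: one signed-gradient ascent step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each input element in the direction that increases the loss,
    # then clamp to the valid intensity range so the change stays small.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a 50/50 mix of clean and FGSM-perturbed inputs."""
    x_adv = fgsm_perturb(model, x, y, epsilon)  # also leaves stale grads...
    optimizer.zero_grad()                       # ...which are cleared here
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```

PGD extends the same idea by taking several smaller signed-gradient steps and projecting the result back into an epsilon-ball around the original input after each step, which is why it is generally a stronger attack than single-step FGSM.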

Keywords: Adversarial Attacks; Deep Learning Security; Medical Imaging; Genomic Sequencing; Clinical AI; Hospital Cybersecurity

Full-text PDF: https://journalijsra.com/sites/default/files/fulltext_pdf/IJSRA-2025-2128.pdf

Nnamdi Rex Onwubuche. Mitigating adversarial threats in deep learning models trained on sensitive imaging and sequencing datasets within hospital infrastructures. International Journal of Science and Research Archive, 2025, 16(01), 1146-1167. Article DOI: https://doi.org/10.30574/ijsra.2025.16.1.2128.

Copyright © 2025. Author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.

