Securing AI-Powered Healthcare Decision Support Systems: A Comprehensive Review of Attack Vectors and Defensive Strategies

Favour Lewechi Ezeogu *

Department of Computer Information Systems, Prairie View A&M University, United States.

Chidiebube Nelson Ozioko

Department of Computer Information Systems, Prairie View A&M University, United States.

Ihuoma Remita Uchenna

Department of Computer Information Systems, Prairie View A&M University, United States.

Innocent Junior Opara

Department of Computer Information Systems, Prairie View A&M University, United States.

Salvation Ifechukwude Atalor

Department of Computer Information Systems, Prairie View A&M University, United States.

*Author to whom correspondence should be addressed.


Abstract

Background: Artificial intelligence (AI) is emerging as a transformative technology in healthcare, enabling the development of AI-powered clinical decision support systems (CDSS). These systems leverage large-scale data and advanced computational algorithms to assist in diagnosis, treatment planning, and patient management. However, the integration of AI into clinical practice faces critical challenges, particularly those related to cybersecurity and system vulnerabilities.

Objectives: This review aims to evaluate the security vulnerabilities of AI-powered healthcare decision support systems by identifying common attack vectors and examining current defensive strategies. It also explores the implications of these vulnerabilities for patient safety, data integrity, and healthcare delivery.

Methods: A comprehensive literature review was conducted using databases such as PubMed, Scopus, IEEE Xplore, Web of Science, SpringerLink, and Google Scholar. Articles published between 2015 and 2025 were screened in accordance with PRISMA guidelines. Keywords included "AI in healthcare", "decision support systems", "cybersecurity", "adversarial attacks", and "defensive strategies".

Results: Out of 1,255 initially identified articles, 200 were included after applying inclusion and exclusion criteria. The findings reveal that AI-powered systems are susceptible to various threats, including adversarial inputs, model inversion, data poisoning, and privacy breaches. Several defensive mechanisms, such as secure model training, encryption, and adversarial detection frameworks, have been proposed and partially implemented.

Conclusions: AI-powered decision support systems hold great promise in enhancing healthcare delivery. However, unresolved security vulnerabilities pose significant risks. Addressing these concerns requires multidisciplinary collaboration among AI developers, healthcare professionals, and cybersecurity experts. Future research and funding should prioritize secure deployment, ethical governance, and regulatory compliance to ensure safe and effective integration into clinical practice.

Keywords: Artificial intelligence (AI), healthcare decision support systems (CDSS), cybersecurity, attack vectors, data privacy, defensive strategies, machine learning security, adversarial attacks, medical informatics, regulatory compliance, healthcare technology, secure AI deployment


How to Cite

Ezeogu, Favour Lewechi, Chidiebube Nelson Ozioko, Ihuoma Remita Uchenna, Innocent Junior Opara, and Salvation Ifechukwude Atalor. 2025. “Securing AI-Powered Healthcare Decision Support Systems: A Comprehensive Review of Attack Vectors and Defensive Strategies”. Asian Journal of Advanced Research and Reports 19 (6):1-11. https://doi.org/10.9734/ajarr/2025/v19i61037.