20. Adversarial Attacks and Defense Mechanisms in Machine Learning: A Structured Review of Methods, Domains, and Open Challenges // IEEE Access.

Askhatuly A., Berdysheva D., Berdyshev A., Adamova A., Yedilkhan D. Adversarial Attacks and Defense Mechanisms in Machine Learning: A Structured Review of Methods, Domains, and Open Challenges // IEEE Access. — 2025. — DOI: 10.1109/ACCESS.2025.3624409.

Abstract:
Machine Learning (ML) and Deep Learning (DL) models are increasingly deployed in critical domains such as computer vision, natural language processing (NLP), automatic speech recognition (ASR), time-series analysis, and cybersecurity. Despite their success, these models remain vulnerable to adversarial attacks that can mislead predictions and undermine system reliability. To address this challenge, we conduct a Systematic Literature Review (SLR) of adversarial attacks and defenses in ML/DL, following PRISMA guidelines. Our review analyzes 132 peer-reviewed studies published between 2017 and 2025, identified from leading digital libraries including IEEE, ACM, ScienceDirect, and SpringerLink. The objectives of this study are to: (i) identify state-of-the-art adversarial attack and defense techniques and assess their strengths and limitations; (ii) evaluate how novel threats and countermeasures are addressed across domains; and (iii) propose a unified taxonomy that integrates adversarial attacks, defenses, evaluation metrics, and application domains. Our findings show that adversarial training and certified robustness dominate defense research, but both face trade-offs in scalability and generalizability. Computer vision remains the most studied domain, while NLP, ASR, and time-series research is growing but lacks standardized benchmarks. This review contributes a taxonomy and cross-domain synthesis of adversarial ML research, highlights research gaps, and outlines future directions to advance robustness and support the development of trustworthy AI systems.
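To make the two techniques the review finds most prevalent more concrete, here is a minimal, illustrative sketch (not taken from the paper) of a gradient-based evasion attack (FGSM) and a single-step adversarial training update, assuming a PyTorch image classifier with inputs scaled to [0, 1]; all function and variable names are hypothetical.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Fast Gradient Sign Method: perturb x in the direction that
        increases the loss, bounded by an L-infinity budget epsilon."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # One signed-gradient step, then clamp back to the valid input range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        """One adversarial training update: craft adversarial examples on
        the fly and minimize the loss on them (single-step variant)."""
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

Stronger multi-step attacks (e.g., PGD) and certified defenses follow the same pattern but with iterated perturbations or provable bounds, which is where the scalability trade-offs noted in the abstract arise.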

Link / DOI: https://doi.org/10.1109/ACCESS.2025.3624409
