Adversarial Attacks and Vulnerabilities of AI in a Decision-Making System: Issues and Countermeasures
Augustin Nyembo Mpamp, Richard Ilunga Kalanda, Pélagie Mpemba Mukonkole, Jean Marie Ibanga Mbayo
Abstract
Adversarial attacks exploit vulnerabilities in artificial intelligence models to alter their decisions, posing a major threat to critical domains such as cybersecurity, finance, and autonomous vehicles. Although various defense strategies, such as adversarial training and anomaly detection, have been developed, they remain limited by poor generalization and by the performance cost they impose on the defended model. Because attack techniques evolve constantly, these approaches are insufficient on their own, and protection strategies must be rethought to ensure more resilient and secure AI.
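As an illustration of the perturbation-based attacks described above, the following toy sketch (our example, not from the paper) applies an FGSM-style step to a linear scoring model: each input feature is nudged by a small amount eps in the sign of the score's gradient, which for a linear score w·x is simply sign(w). The model, weights, and eps are all assumed for illustration.

```python
import numpy as np

def fgsm_perturb(x, w, eps):
    """Shift x by eps in the sign of the score gradient (d(w.x)/dx = w),
    the worst-case step under an L-infinity budget of eps."""
    return x + eps * np.sign(w)

w = np.array([0.5, -1.0, 2.0])   # toy model weights (assumed)
x = np.array([1.0, 1.0, 1.0])    # clean input
x_adv = fgsm_perturb(x, w, eps=0.3)

clean_score = np.dot(w, x)       # 1.5
adv_score = np.dot(w, x_adv)     # 1.5 + eps * ||w||_1 = 2.55
print(clean_score, adv_score)
```

A perturbation of only 0.3 per feature moves the score by eps times the L1 norm of the weights, which is why even small, hard-to-notice input changes can flip a model's decision.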
To counter these threats, it is essential to design AI models that are more resilient to perturbations by integrating self-correction and adaptive-learning mechanisms. Furthermore, securing deployment environments, encrypting models, and auditing queries are essential measures to strengthen their protection. Finally, close collaboration between researchers, industry, and regulators is necessary to establish robust security standards and ensure the reliable and seamless adoption of AI in critical infrastructure.
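One defensive measure mentioned above, query auditing, can be sketched as follows (an assumed design, not the paper's method): black-box attacks typically probe a model with many near-identical inputs to map its decision boundary, so bucketing each client's queries by rounded feature values and flagging heavy repetition within a bucket gives a crude but illustrative detector. The class name, threshold, and rounding precision are all hypothetical.

```python
import hashlib
from collections import Counter

class QueryAuditor:
    """Flag clients that submit many near-identical queries,
    a pattern typical of black-box boundary-probing attacks."""

    def __init__(self, threshold=3, precision=1):
        self.threshold = threshold    # repeats tolerated before flagging
        self.precision = precision    # rounding used to bucket "near" queries
        self.counts = Counter()

    def audit(self, client_id, features):
        # Round features so tiny perturbations fall into the same bucket.
        bucket = tuple(round(f, self.precision) for f in features)
        key = (client_id, hashlib.sha256(repr(bucket).encode()).hexdigest())
        self.counts[key] += 1
        return self.counts[key] > self.threshold  # True => suspicious

auditor = QueryAuditor()
probes = [[1.0 + i * 0.001, 2.0] for i in range(6)]  # 6 near-identical probes
flags = [auditor.audit("client-42", p) for p in probes]
print(flags)  # [False, False, False, True, True, True]
```

In practice such auditing would sit alongside the other measures listed (hardened deployment environments, model encryption) rather than replace them.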
