An event of the

GDR Sécurité Informatique, Région Centre-Val de Loire

organised by

the Laboratoire d'Informatique Fondamentale d'Orléans and INSA Val de Loire
Unleashing the Beast: Evaluating Adversarial Vulnerability of AI-driven Intrusion Detection
Hélène Orsini 1,2,3,4,5, Yufei Han 1,2,3,4,5, Valérie Viet Triem Tong 1,2,3,4,5
1 : Inria Rennes – Bretagne Atlantique
Institut National de Recherche en Informatique et en Automatique
2 : CentraleSupélec [campus de Rennes]
CentraleSupélec, Saclay, France
3 : Université de Rennes
Université de Rennes I
4 : Confidentialité, Intégrité, Disponibilité et Répartition
CentraleSupélec, Inria Rennes – Bretagne Atlantique, Systèmes Large Échelle
5 : CNRS
CNRS : UMR8568, UMR6074, UMR5593

Machine Learning (ML)-based Intrusion Detection Systems (IDS) aim to detect, prevent, and report malicious intrusions into a system. Accurate as they are, recent studies reveal the dark side of the story: ML models can be fooled by slightly perturbed inputs, a.k.a. adversarial attacks. As IDS is a security-critical application, such an adversarial vulnerability raises reliability and trustworthiness concerns over deploying ML-based IDS in practice.
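To make the notion of "slightly perturbed inputs" concrete, here is a minimal sketch of a one-step gradient-sign (FGSM-style) evasion against a toy linear intrusion detector. The weights, feature vector, and perturbation budget are all hypothetical stand-ins, not the detector or attack studied in this work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear IDS: score > 0.5 means "flag as malicious".
rng = np.random.default_rng(0)
w = rng.normal(size=8)           # stand-in learned weights
b = 0.1
x = rng.normal(size=8)           # one stand-in traffic feature vector

def detect_score(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """One-step L-infinity perturbation along the loss-gradient sign."""
    p = detect_score(x)
    grad_x = (p - y) * w         # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

score_clean = detect_score(x)
x_adv = fgsm(x, y=1.0, eps=0.05)   # attacker tries to evade the "malicious" label
score_adv = detect_score(x_adv)
# Each feature moves by at most eps, yet the detection score strictly drops:
# for a linear model the step is -eps * sign(w), which lowers w @ x by eps * sum(|w|).
```

The point of the sketch is only that a perturbation bounded by a small budget per feature can systematically push the detection score in the attacker's favour.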
Our work assesses the adversarial robustness of ML-based IDS methods using a computationally efficient robustness evaluation protocol. Our results on a state-of-the-art ML-based IDS method and real-world log-based anomaly detection data demonstrate that the approach harbours an adversarial vulnerability. We show that the high detection rates claimed on adversarial-noise-free cross-validation tests may not be a suitable measure of the practical usability of ML-based IDS methods.
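The gap between clean cross-validation accuracy and robustness can be illustrated by a toy evaluation loop: measure how many initially detected samples evade a detector as the worst-case perturbation budget grows. The linear scorer, synthetic data, and budgets below are assumptions for illustration, not the protocol or dataset used in this work:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for an ML-based IDS: a fixed linear scorer with assumed weights.
rng = np.random.default_rng(1)
dim, n = 8, 200
w = rng.normal(size=dim)
b = 0.0

# Synthetic "malicious" feature vectors, shifted so the detector flags them.
X = rng.normal(size=(n, dim)) + 0.5 * np.sign(w)
scores = sigmoid(X @ w + b)
malicious = X[scores > 0.5]

def evasion_rate(X, eps):
    """Fraction of detected samples that evade after an eps-bounded
    worst-case step against a linear model (x - eps * sign(w))."""
    X_adv = X - eps * np.sign(w)
    return float(np.mean(sigmoid(X_adv @ w + b) <= 0.5))

# A robustness curve: the clean detection rate (eps = 0) says nothing
# about how quickly detection collapses as the attack budget grows.
for eps in (0.0, 0.1, 0.3, 0.5):
    print(f"eps={eps}: evasion rate {evasion_rate(malicious, eps):.2f}")
```

Reporting such a curve, rather than a single noise-free detection rate, is the kind of evaluation the abstract argues for.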

