Machine Learning (ML)-based Intrusion Detection Systems (IDS) aim to detect, prevent, and report malicious intrusions into a system using ML-based techniques. Accurate as they are, recent studies reveal the dark side of the story: ML techniques can be vulnerable to slight perturbations of their inputs, known as adversarial attacks. For a security-critical application, such adversarial vulnerability raises reliability and trustworthiness concerns over deploying ML-based IDS in practice.
Our work assesses the adversarial robustness of ML-based IDS methods using a computationally efficient robustness evaluation protocol. Our results on a state-of-the-art ML-based IDS method and real-world log-based anomaly detection data demonstrate the existence of an adversarial vulnerability in the ML-based IDS approach. We show that the high detection rates reported on adversarial-noise-free cross-validation tests may not reflect the practical usability of ML-based IDS methods.