RIBS: Risky Blind-Spots for Attack Classification Models
Document Type
Conference Proceeding
Publication Date
2019
Abstract
In recent years, the use of machine learning methods in cyber-security applications has increased. These methods can be prone to generalization, especially in a binary attack classification setting, where the objective is to differentiate between benign and malicious behavior. This generalization creates risky security blind-spots that make the system vulnerable. Attackers are well aware of these blind-spots and, as a counter-strategy, exploit such vulnerabilities to bypass security measures and achieve their nefarious objectives. In this work, we propose RIsky Blind-Spots (RIBS), a methodology that mitigates this problem by making the classification more robust. Our approach trains a generator model that learns the real characteristics of the data and can therefore sample realistic examples that target the blind-spots of a classifier. We validate our methodology in the context of power grids, where we show how this framework improves the detection of unknown malicious behavior. Our approach yields a 10% improvement in accuracy and detected attacks compared to the baseline method.
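The abstract's core idea, a generator trained to produce realistic samples that the classifier nonetheless scores as benign, can be illustrated with a short sketch. This is a minimal, hypothetical example assuming a GAN-style setup; the module names, dimensions, and combined loss below are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of blind-spot targeting (illustrative, not the paper's code).
import torch
import torch.nn as nn

LATENT_DIM, FEATURE_DIM = 16, 32  # assumed sizes

generator = nn.Sequential(          # maps noise to candidate samples
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEATURE_DIM),
)
discriminator = nn.Sequential(      # judges whether a sample looks real
    nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
classifier = nn.Sequential(         # the attack classifier being hardened
    nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
for p in classifier.parameters():   # classifier is fixed during generation
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real_data = torch.randn(256, FEATURE_DIM)  # stand-in for real malicious samples

for step in range(1000):
    z = torch.randn(64, LATENT_DIM)
    fake = generator(z)

    # Discriminator: distinguish real samples from generated ones.
    d_loss = (bce(discriminator(real_data[:64]), torch.ones(64, 1))
              + bce(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: produce realistic samples (fool the discriminator) that the
    # classifier scores as benign (label 0), i.e., land in its blind-spots.
    realism = bce(discriminator(fake), torch.ones(64, 1))
    blindspot = bce(classifier(fake), torch.zeros(64, 1))
    g_loss = realism + blindspot
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The generated samples, relabeled as malicious, can then augment the training
# set so a retrained classifier covers its former blind-spots.
```

The weighting between the realism and blind-spot terms is an assumption here (equal weights); any such scheme trades off how plausible the generated samples are against how aggressively they probe the classifier's weak regions.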
Publication Information
Joaristi, Mikel; Putnam, Arthur; Cuzzocrea, Alfredo; and Serra, Edoardo. (2019). "RIBS: Risky Blind-Spots for Attack Classification Models". 2019 IEEE International Conference on Big Data (Big Data), 5773-5779. https://dx.doi.org/10.1109/BigData47090.2019.9006356