RIBS: Risky Blind-Spots for Attack Classification Models

Document Type

Conference Proceeding

Publication Date



Machine learning methods are increasingly used in cyber-security applications. These methods can be prone to over-generalization, especially in a binary attack classification setting, where the objective is to distinguish benign from malicious behavior. This over-generalization creates risky blind-spots that leave the system vulnerable. Attackers are well aware of these blind-spots and, as a counter-strategy, exploit such vulnerabilities to bypass security measures and achieve their nefarious objectives. In this work, we propose RIsky Blind-Spots (RIBS), a methodology that mitigates this problem by making the classification more robust. Our approach builds a generator model that learns the real characteristics of the data and can therefore sample realistic examples targeting the blind-spots of a classifier. We validate our methodology in the context of power grids, where we show how this framework improves the detection of unknown malicious behavior. Our approach yields a 10% improvement in accuracy and in the number of detected attacks compared to the baseline method.
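The core idea described above — generate realistic samples, find the ones the classifier is least confident about, and retrain on them — can be sketched in a few lines. The sketch below is a hypothetical toy illustration, not the paper's actual pipeline: it uses synthetic 2-D data in place of power-grid measurements, a minimal logistic-regression detector in place of the real classifier, and a simple Gaussian fit to the malicious class as a stand-in generator. The confidence threshold (0.6) is likewise an assumed parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: benign samples near the origin, malicious samples shifted
# (hypothetical stand-in for real power-grid measurements).
benign = rng.normal(0.0, 1.0, size=(200, 2))
malicious = rng.normal(3.0, 1.0, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

def train_logreg(X, y, lr=0.1, steps=500):
    """Minimal logistic-regression detector (stand-in for any classifier)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - y                      # gradient of the log-loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

w, b = train_logreg(X, y)

# "Generator": fit a Gaussian to the malicious class so that new
# candidate attacks stay close to the real data distribution.
mu, cov = malicious.mean(axis=0), np.cov(malicious.T)
candidates = rng.multivariate_normal(mu, cov, size=1000)

# Blind-spots: generated attacks where the classifier's probability of
# "malicious" is low, i.e. attacks it would likely miss.
probs = 1.0 / (1.0 + np.exp(-(candidates @ w + b)))
blind_spots = candidates[probs < 0.6]

# Retrain with the blind-spot samples labelled as malicious,
# hardening the classifier against these evasive examples.
X_aug = np.vstack([X, blind_spots])
y_aug = np.concatenate([y, np.ones(len(blind_spots))])
w2, b2 = train_logreg(X_aug, y_aug)
```

In the full framework, the Gaussian would be replaced by a learned generative model, but the loop — generate, score, select low-confidence samples, retrain — is the same.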