Additional Funding Sources

NSF Cloud Computing and Privacy REU Award Number: 2244596

Abstract

Malware, defined as any software designed with malicious intent, has become a paramount concern in modern computing; it manifests in numerous forms that infect computer systems and devices. As of 2023, executable files accounted for 53% of computer-virus propagation. Compounded by the emergence of AI and polymorphic malware, attackers have intensified their efforts to obfuscate malicious code, rendering traditional defenses, such as signature-based detection systems, ineffective. To counter the evolving nature of modern malware, machine learning (ML) models for detection have gained prominence: they continuously analyze memory and other data, identifying new patterns and features that help uncover previously hidden malware variants. While ML-based detection systems perform well, they still have vulnerabilities that warrant further exploration. In this research proposal, we aim to address these gaps and challenges by developing novel techniques to harden ML-based malware detection systems. Specifically, we will design a testing framework that uses adversarial machine learning to generate adversarial examples (AEs) as variants of known modern malware datasets. These AEs will simulate real-world attack strategies, enabling researchers to continuously update detection systems and strengthen their resilience against emerging threats. Additionally, we will develop comprehensive evaluation methods that treat robustness as a central metric for gauging the effectiveness of ML-based detection systems.
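The kind of adversarial-example generation the proposal describes can be illustrated with a minimal sketch. The detector, its weights, and the feature vector below are all hypothetical stand-ins (a toy linear classifier over illustrative executable features), not the proposal's actual framework; the perturbation step is a standard fast-gradient-sign (FGSM-style) evasion applied against the detector's "malicious" score:

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a raw score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_evasion(x, w, b, eps=0.1):
    """FGSM-style evasion: nudge the feature vector against the
    gradient of the detector's malicious score, simulating an
    evasive malware variant."""
    p = sigmoid(w @ x + b)            # current malicious probability
    grad = p * (1.0 - p) * w          # d(score)/dx for a linear model
    return x - eps * np.sign(grad)    # step to reduce the score

# Toy detector weights and a toy malware feature vector
# (e.g., counts of suspicious API calls); values are illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)

x_adv = fgsm_evasion(x, w, b, eps=0.2)
print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

A robustness-centred evaluation of the sort the proposal envisions would then compare detection rates on clean samples against detection rates on such perturbed variants, rather than reporting clean accuracy alone.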


Enhancing Malware Analysis and Detection Using Adversarial Machine Learning Techniques

