Type of Culminating Activity
Graduate Student Project
Graduation Date
5-2025
Degree Title
Master of Science in Cyber Operations and Resilience
Major Advisor
Sin Ming Loo, Ph.D.
Abstract
Artificial intelligence is increasingly woven into cybersecurity operations, shaping everything from threat detection to automated incident response. While these technologies improve speed and scalability, they also raise urgent questions about governance, ethical use, and operational risk. Although frameworks such as the NIST AI Risk Management Framework, ISO/IEC 42001, the EU AI Act, IEEE's Ethically Aligned Design, and OECD AI Principles each offer structure, they tend to address only parts of the problem. Most focus on either compliance or ethics, rarely both, and few are tailored to the high-pressure, risk-sensitive environments found in cybersecurity.
To bridge these gaps, this research proposes a governance model called Context-Aware Governance (CAG). Rather than replacing existing frameworks, CAG acts as a complementary lens that aligns oversight with the mission, risk tolerance, and operational context of the organization. The model functions across three tiers (strategic, operational, and tactical) so governance can be scaled to match the decisions being made and the consequences of getting them wrong.
The design process included a comparative analysis of five leading frameworks, mapped against key governance criteria: technical control, ethical grounding, compliance support, operational applicability, and contextual adaptability. Visualized as a radar chart, this analysis highlighted where the frameworks overlapped and, more importantly, where they left significant gaps. These findings were further shaped by insights drawn from cyber-resilient systems engineering, context-aware computing, and applied AI ethics. The resulting framework, CAG, was designed to reflect both theoretical grounding and operational need. A five-level maturity model illustrates how organizations can progress from fragmented oversight to a more adaptive and resilient approach. Practitioner feedback and expert interviews also informed the model, adding real-world relevance and validation.
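As a rough illustration of the gap analysis described above, the criterion-by-criterion comparison can be sketched in a few lines of code. The framework names and criteria come from the abstract; the numeric scores and the threshold are hypothetical placeholders for illustration only, not the study's actual ratings.

```python
# Sketch of a criteria-based framework comparison. Scores (0-5) are
# HYPOTHETICAL placeholders, not the study's actual findings.

CRITERIA = [
    "technical control",
    "ethical grounding",
    "compliance support",
    "operational applicability",
    "contextual adaptability",
]

# One placeholder score per criterion, in the order listed above.
scores = {
    "NIST AI RMF":        [4, 3, 4, 3, 2],
    "ISO/IEC 42001":      [3, 2, 5, 3, 2],
    "EU AI Act":          [2, 3, 5, 2, 1],
    "IEEE EAD":           [1, 5, 2, 2, 2],
    "OECD AI Principles": [1, 4, 3, 1, 1],
}

def coverage_gaps(scores, threshold=3):
    """Return the criteria where no framework reaches the threshold --
    the kind of shared gap a radar chart makes visible at a glance."""
    gaps = []
    for i, criterion in enumerate(CRITERIA):
        if max(fw_scores[i] for fw_scores in scores.values()) < threshold:
            gaps.append(criterion)
    return gaps

print(coverage_gaps(scores))
# With these placeholder scores: ['contextual adaptability']
```

With the placeholder values above, every framework scores low on contextual adaptability, which mirrors the abstract's argument that adaptability to operational context is the gap CAG is designed to fill.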
At its core, this paper makes the case that effective AI governance cannot be built on static policy alone. Oversight must scale with system complexity, align with organizational mission, and remain agile as both technologies and threats continue to evolve. CAG supports that progression by helping organizations move toward governance practices that are not only ethically and legally sound, but also operationally effective.
Recommended Citation
Cunningham, Russell S., "Adaptive Oversight in Action: Proposing Context-Aware Governance for AI in Cybersecurity" (2025). Boise State Graduate Student Projects. 6.
https://scholarworks.boisestate.edu/interdisc_gradproj/6