AI-Driven Transformation, Regulatory Fragmentation, and the Rise of Adaptive Cyber Resilience
This paper synthesises recent literature to show that cybersecurity is shifting toward an AI-augmented, socio-technical resilience paradigm shaped by dual-use AI, agentic systems, behavioural exploitation, regulatory fragmentation, and post-quantum cryptographic risk.
Sanchez P.
4/6/2026 · 34 min read


Abstract
Cybersecurity is undergoing a fundamental transformation driven by the rapid advancement of artificial intelligence (AI), generative AI (GenAI), autonomous agentic systems, regulatory fragmentation, and emerging post-quantum cryptographic risks. This paper critically examines these developments through a systematic literature review guided by PRISMA principles and synthesised using qualitative thematic analysis. Drawing on recent peer-reviewed research, the study identifies five interrelated themes shaping the cybersecurity landscape in 2026: AI as a dual-use capability amplifying both defence and attack, the rise of agentic AI and associated governance challenges, the increasing exploitation of human cognitive vulnerabilities through GenAI-enabled social engineering, the shift toward cyber resilience in response to global regulatory complexity, and the emerging urgency of post-quantum cryptographic migration.
The findings demonstrate that cybersecurity is transitioning away from deterministic, perimeter-based models toward adaptive, socio-technical systems characterised by continuous risk negotiation and probabilistic governance. In this evolving environment, artificial intelligence simultaneously strengthens defensive capabilities while expanding adversarial sophistication, creating systemic asymmetries between attackers and defenders. Furthermore, the emergence of autonomous AI agents introduces new governance challenges that extend beyond traditional identity and access management frameworks, requiring lifecycle-based oversight and inter-layer security coordination.
The study concludes that cybersecurity in 2026 should be understood as an AI-augmented cyber resilience ecosystem in which organisational survival depends on the integration of adaptive governance, behavioural security models, and forward-looking cryptographic strategies. The research contributes to theoretical understanding by reframing cybersecurity as a dynamic, uncertainty-driven discipline and provides practical implications for Chief Information Security Officers (CISOs), security operations centres, and organisational risk management frameworks.
1. Introduction
1.1 Context and background
Cybersecurity has entered a phase of structural transformation driven by the convergence of artificial intelligence (AI), autonomous systems, regulatory fragmentation, and emerging cryptographic disruption. Historically, cybersecurity research has focused on perimeter defence, signature-based detection, and human-centred risk mitigation models. However, the rapid adoption of machine learning (ML) and generative AI (GenAI) has fundamentally altered both the attack surface and defensive capacity of organisations (Mohamed, 2025).
Recent literature highlights that AI is no longer a supplementary capability but a central mechanism in cybersecurity operations. AI and ML systems are now widely used for intrusion detection, threat intelligence analysis, automated response, and anomaly detection across distributed infrastructures (Mohamed, 2025; Wairagade, 2025). This shift has been accelerated by the emergence of large language models (LLMs), which enable semi-autonomous reasoning and orchestration of security workflows (Uddin et al., 2025).
Concurrently, adversaries have also adopted AI-driven techniques, including automated phishing generation, deepfake-enabled social engineering, and adaptive malware development. This dual-use dynamic has created what recent research describes as a “cybersecurity arms race” between defensive AI systems and AI-augmented threat actors (Uddin et al., 2025).
1.2 AI as a transformative force in cybersecurity
The integration of AI into cybersecurity has significantly enhanced organisational capability to detect and respond to complex threats. Systematic reviews confirm that AI improves detection accuracy, reduces response time, and enables predictive threat modelling across diverse environments (Mohamed, 2025; Wairagade, 2025). These capabilities are particularly evident in Security Operations Centres (SOCs), where AI systems are increasingly used to triage alerts and prioritise incidents.
However, recent peer-reviewed research also emphasises that AI introduces new operational risks. These include model drift, adversarial manipulation, data poisoning, and explainability challenges in high-stakes decision-making environments (Biggio and Roli, 2018; Mohamed, 2025). As a result, organisations are increasingly adopting hybrid SOC models in which AI augments rather than replaces human analysts.
A 2025 systematic review of AI-powered cybersecurity systems highlights that strategic integration, rather than full automation, remains the dominant and most effective deployment model (Wairagade, 2025). This reinforces the need for governance frameworks that balance efficiency gains with operational resilience.
1.3 Emergence of agentic AI and autonomous security systems
One of the most significant recent developments is the emergence of agentic AI—systems capable of autonomous planning, decision-making, and tool use across multi-step workflows. Unlike traditional AI models that respond to prompts, agentic systems can execute sequences of actions with limited human oversight, creating both opportunities and risks for cybersecurity governance.
Recent academic work identifies a rapid evolution from single-model assistants to multi-agent architectures capable of orchestrating complex cybersecurity tasks such as log analysis, threat hunting, and incident response automation (Vinay, 2025). These systems improve operational efficiency but introduce new security concerns, particularly around accountability, control boundaries, and unintended actions.
Arora and Hastings (2025) argue that securing agentic AI requires a lifecycle-based security framework incorporating confidentiality, integrity, availability, and accountability (CIAA). Their work demonstrates that traditional cybersecurity frameworks are insufficient for managing autonomous AI behaviours, particularly when agents interact dynamically with external tools and data sources.
This shift significantly impacts Identity and Access Management (IAM), as organisations must now define and enforce identity boundaries not only for humans and devices but also for autonomous AI agents operating within enterprise systems.
1.4 GenAI and the collapse of traditional security awareness models
Security awareness training has historically been a cornerstone of organisational cybersecurity strategy. However, recent research suggests that traditional awareness programmes are increasingly ineffective against AI-enabled threats.
Empirical studies in human-centred cybersecurity demonstrate that users rely heavily on cognitive heuristics when evaluating threats, making them particularly vulnerable to sophisticated phishing and social engineering attacks (Sheng et al., 2010). The introduction of GenAI has significantly amplified this risk by enabling attackers to generate highly personalised, context-aware, and linguistically convincing content at scale (Uddin et al., 2025).
Recent peer-reviewed literature suggests a shift away from static awareness training toward continuous Security Behaviour and Culture Programmes (SBCPs), which embed adaptive behavioural reinforcement into organisational workflows (Mohamed, 2025). This reflects a broader recognition that human vulnerability is not simply a knowledge gap but a systemic behavioural challenge shaped by cognitive overload and adversarial manipulation.
1.5 Regulatory volatility and the rise of cyber resilience
Cybersecurity governance is increasingly shaped by a fragmented and rapidly evolving regulatory environment. Organisations must now comply with overlapping frameworks such as the EU NIS2 Directive, Digital Operational Resilience Act (DORA), and emerging AI governance regulations. This regulatory complexity introduces significant compliance and operational risk.
From a theoretical perspective, this aligns with the concept of regulatory pluralism, where multiple overlapping governance systems create compliance uncertainty and increased organisational burden (Bennett and Raab, 2020). In response, organisations are shifting toward cyber resilience frameworks that prioritise adaptability over static compliance.
The NIST Cybersecurity Framework defines resilience as the ability to anticipate, withstand, recover from, and adapt to cyber incidents (NIST, 2018). Recent studies argue that resilience is becoming the dominant paradigm in cybersecurity governance due to increasing system interdependence and regulatory fragmentation (Wairagade, 2025).
1.6 Transition from theoretical risk to post-quantum urgency
Post-quantum cryptography (PQC) has transitioned from theoretical discussion to operational necessity. Quantum computing poses a credible long-term threat to widely used cryptographic systems, particularly those based on RSA and elliptic curve cryptography.
Shor’s algorithm demonstrated that quantum systems could theoretically break current public-key encryption methods, fundamentally undermining modern digital trust infrastructures (Shor, 1994). Recent literature highlights that migration to PQC is not a purely technical upgrade but a complex, multi-year organisational transformation requiring infrastructure redesign and interoperability management (Bernstein et al., 2017).
Contemporary peer-reviewed studies emphasise that organisations must begin hybrid cryptographic deployments immediately to mitigate long-term exposure risk, even in the absence of fully mature quantum hardware (Uddin et al., 2025).
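The hybrid deployment pattern mentioned above is commonly realised by deriving one session key from both a classical and a post-quantum shared secret, so the session remains secure unless both exchanges are broken. A minimal sketch, assuming placeholder byte strings stand in for real ECDH and ML-KEM outputs and a plain hash stands in for a proper KDF such as HKDF:

```python
import hashlib
import secrets

def hybrid_shared_key(classical_secret: bytes, pq_secret: bytes,
                      context: bytes = b"hybrid-kem-demo") -> bytes:
    """Derive one session key from both shared secrets.

    The result is only compromised if BOTH underlying exchanges
    are broken, which is the core hybrid guarantee.
    """
    # Bind both secrets to a context label, then hash
    # (a stand-in for a production KDF such as HKDF).
    return hashlib.sha256(context + classical_secret + pq_secret).digest()

# Hypothetical secrets standing in for real key-exchange outputs.
ecdh_secret = secrets.token_bytes(32)   # classical exchange
mlkem_secret = secrets.token_bytes(32)  # post-quantum KEM

key = hybrid_shared_key(ecdh_secret, mlkem_secret)
assert len(key) == 32
```

The design choice is that neither secret is trusted alone: a future quantum attack on the classical half, or an undiscovered flaw in the newer post-quantum half, leaves the derived key intact.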
1.7 Problem statement and research gap
Despite rapid advances in AI-enabled cybersecurity, significant gaps remain in understanding how organisations can safely integrate autonomous AI systems while maintaining governance, accountability, and operational resilience.
Current literature identifies three primary gaps:
Governance gap – Lack of frameworks for managing autonomous AI agents within enterprise IAM systems (Vinay, 2025).
Behavioural gap – Ineffectiveness of traditional security awareness training against GenAI-enhanced attacks (Uddin et al., 2025).
Resilience gap – Insufficient integration between AI-driven automation and cyber resilience frameworks (Wairagade, 2025).
These gaps indicate that cybersecurity is transitioning from a control-based discipline to a dynamic socio-technical governance system.
1.8 Aim and scope of the study
This paper aims to critically analyse emerging cybersecurity trends for 2026, with a focus on AI-driven transformation, agentic AI governance, regulatory volatility, and post-quantum cryptographic readiness. It situates these trends within contemporary peer-reviewed literature to provide a theoretically grounded understanding of cybersecurity’s evolving role within enterprise risk management.
1.9 Chapter summary
This chapter has established the contextual foundation for analysing cybersecurity in 2026. It has demonstrated that AI—particularly GenAI and agentic systems—is fundamentally reshaping cybersecurity operations, governance structures, and risk models. It has also highlighted the growing importance of cyber resilience, regulatory adaptation, and post-quantum preparedness. The following chapters will further examine each trend in detail and evaluate their implications for organisational cybersecurity strategy.
2. Literature Review
2.1 Introduction
This chapter critically reviews contemporary academic literature on the transformation of cybersecurity in the context of artificial intelligence (AI), generative AI (GenAI), agentic systems, regulatory complexity, and post-quantum cryptography. The purpose is to synthesise current peer-reviewed knowledge, identify conceptual convergences and divergences, and establish the theoretical foundation for understanding cybersecurity as an adaptive socio-technical system.
Recent research consistently highlights that cybersecurity is undergoing a paradigm shift from rule-based defence models to AI-augmented, autonomous, and resilience-oriented architectures (Mohamed, 2025; Wairagade, 2025). This transition is not merely technological but organisational and epistemological, altering how risk, governance, and control are conceptualised.
2.2 Evolution of cybersecurity paradigms
Early cybersecurity literature conceptualised security primarily as a perimeter-based discipline focused on confidentiality, integrity, and availability (CIA triad). However, contemporary scholarship argues that this model is increasingly inadequate due to distributed cloud environments, IoT expansion, and AI-driven automation (von Solms and van Niekerk, 2013; NIST, 2018).
Recent theoretical developments extend this foundation toward cyber resilience, defined as the ability of systems to anticipate, withstand, recover from, and adapt to cyber incidents (NIST, 2018). More recent research reframes resilience as an adaptive property of socio-technical systems rather than a static capability (Wairagade, 2025).
This shift is further reinforced by AI integration into cybersecurity operations, which introduces dynamic decision-making processes that cannot be fully captured by static rule-based frameworks (Uddin et al., 2025).
2.3 Artificial intelligence as a cybersecurity multiplier
2.3.1 Defensive applications of AI
The literature widely acknowledges AI as a transformative force in cybersecurity defence. Machine learning models are now used for intrusion detection, anomaly detection, malware classification, and predictive threat intelligence (Mohamed, 2025; Wairagade, 2025).
A systematic review by Wairagade (2025) identifies that AI significantly improves:
detection accuracy in high-volume environments
reduction of false positives in SOC workflows
automation of repetitive incident triage tasks
predictive identification of attack patterns
Similarly, Mohamed (2025) highlights that deep learning architectures outperform traditional signature-based systems in detecting zero-day attacks and advanced persistent threats (APTs).
However, despite these improvements, the literature consistently emphasises that AI systems remain dependent on high-quality training data and are vulnerable to adversarial manipulation.
2.3.2 Limitations and risks of AI in cybersecurity
Despite its advantages, AI introduces new categories of vulnerability. Biggio and Roli (2018) demonstrate that adversarial machine learning can be exploited to manipulate classification systems through carefully crafted inputs.
Recent literature extends these concerns to operational environments, highlighting risks such as:
model drift in dynamic environments
data poisoning in training pipelines
explainability limitations in critical decisions
over-reliance on automated classification systems
Mohamed (2025) further argues that lack of transparency in AI-driven decision-making presents governance challenges, particularly in high-risk sectors such as finance and critical infrastructure.
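The adversarial-manipulation risk identified by Biggio and Roli (2018) can be illustrated with a toy example: for a linear classifier, stepping the input against the sign of the weights (the idea behind fast-gradient-sign attacks) flips the decision with a small, bounded perturbation. The weights and sample below are invented for illustration, not drawn from any cited study:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to the 'malicious' class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, x, eps: float):
    """Sign-of-gradient perturbation pushing the score downward.

    For a linear model the gradient of the score w.r.t. x is just w,
    so a step of -eps * sign(w) lowers the score by the largest
    amount per unit of L-infinity perturbation budget.
    """
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], 0.0          # toy detector weights
x = [0.3, 0.1]                   # sample originally flagged (score > 0.5)
x_adv = fgsm_perturb(w, x, eps=0.5)

assert predict(w, b, x) > 0.5     # detected before perturbation
assert predict(w, b, x_adv) < 0.5 # small crafted change evades detection
```

The same principle, scaled to high-dimensional feature spaces, is what makes carefully crafted inputs effective against production detection models.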
2.4 Generative AI and cybersecurity transformation
2.4.1 GenAI in defensive cybersecurity
Generative AI has rapidly become a core component of modern cybersecurity architectures. Recent peer-reviewed studies demonstrate its application in:
automated vulnerability assessment
threat intelligence summarisation
security orchestration and response (SOAR) enhancement
code analysis and remediation support
A comprehensive review by Uddin et al. (2025) finds that GenAI improves both speed and coverage of vulnerability assessment pipelines, particularly when integrated with SOAR systems for automated remediation. Similarly, Sikos (2025) highlights that GenAI tools can reduce alert fatigue in SOC environments by summarising and prioritising large-scale log data.
2.4.2 Offensive use of GenAI
The literature also strongly emphasises the dual-use nature of GenAI. Adversaries increasingly use generative models to:
produce highly convincing phishing campaigns
automate malware development
generate deepfake audio and video for social engineering
enhance reconnaissance and target profiling
Uddin et al. (2025) argue that this represents a structural escalation in cyber threat sophistication, where attacks are no longer static but dynamically generated and personalised. This aligns with broader findings that GenAI has reduced the technical barrier to entry for advanced cybercrime, increasing attack frequency and variability.
2.5 Agentic AI and autonomous cybersecurity systems
2.5.1 Emergence of agentic architectures
A significant recent development is the emergence of agentic AI systems—autonomous models capable of planning, reasoning, and executing multi-step tasks. These systems are increasingly deployed in Security Operations Centres (SOCs) and automated response environments (Vinay, 2025).
Vinay (2025) identifies a progression in AI cybersecurity architectures:
single-model reasoning systems
tool-augmented assistants
multi-agent collaborative systems
semi-autonomous investigative pipelines
This evolution reflects increasing operational autonomy and complexity in AI-driven security environments.
2.5.2 Governance challenges in agentic AI
Agentic AI introduces fundamental governance challenges, particularly regarding:
accountability of autonomous decisions
traceability of agent actions
permission boundaries between human and machine actors
tool-use security and privilege escalation
A governance-focused study by Suggu (2025) proposes the Model–Control–Policy (MCP) framework, emphasising the need for structured oversight mechanisms in agentic workflows. Similarly, Arora and Hastings (2025) argue that agentic systems require lifecycle-based security models incorporating confidentiality, integrity, availability, and accountability (CIAA), extending beyond traditional security paradigms. Recent research highlights that emerging agent communication protocols such as the Model Context Protocol (a distinct use of the MCP abbreviation) and Agent-to-Agent (A2A) introduce novel systemic risks, including cascading trust failures, unauthorised task delegation, and unintended cross-agent behaviour, particularly in multi-agent environments operating at machine speed (Anbiaee et al., 2026; Ehtesham et al., 2025).
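The permission-boundary and traceability concerns listed above can be made concrete with a minimal allow-list sketch: each agent is restricted to an explicit tool set, and every authorisation decision is logged for later audit. The class and tool names are illustrative, not part of any cited framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Per-agent tool allow-list with an audit trail for accountability."""
    allowed_tools: set
    audit_log: list = field(default_factory=list)

    def authorise(self, agent_id: str, tool: str) -> bool:
        permitted = tool in self.allowed_tools
        # Every decision is recorded so autonomous actions stay traceable.
        self.audit_log.append((agent_id, tool, permitted))
        return permitted

policy = AgentPolicy(allowed_tools={"read_logs", "summarise_alerts"})

assert policy.authorise("triage-agent", "read_logs")
assert not policy.authorise("triage-agent", "delete_host")  # escalation blocked
assert len(policy.audit_log) == 2
```

Real deployments would add lifecycle controls (credential rotation, scoped delegation between agents), but even this sketch shows how identity boundaries extend from humans and devices to autonomous agents.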
2.6 AI-driven Security Operations Centres (SOCs)
The integration of AI into SOC environments is one of the most widely studied applications of cybersecurity AI.
Sommer and Paxson (2010) provide foundational insight into machine learning limitations in intrusion detection, particularly regarding data representativeness and false positives. More recent literature builds on this by analysing real-world deployment challenges.
Key findings from recent literature indicate that AI significantly improves alert triage efficiency, threat correlation, and response speed within Security Operations Centres (SOCs), but does not eliminate the need for human oversight. Instead, AI systems function most effectively as decision-support tools rather than autonomous operators (Srinivas et al., 2025; Singh et al., 2025).
However, the integration of large language models (LLMs) introduces critical reliability challenges, particularly hallucinations—where models generate inaccurate or fabricated outputs—which can lead to false alerts, missed threats, or incorrect incident analysis (Sood et al., 2025). These risks are especially problematic in high-stakes security environments where accuracy and traceability are essential.
Furthermore, research highlights that automated response systems require strict guardrails, including human-in-the-loop validation, explainability mechanisms, and controlled autonomy levels, to prevent unintended or harmful actions (Mohsin et al., 2025).
Consequently, the literature converges on a hybrid operational model in which AI augments, rather than replaces, human analysts, combining machine efficiency with human judgement to achieve resilient and trustworthy SOC performance.
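The guardrail pattern described in this section—controlled autonomy with human-in-the-loop validation—reduces to a simple routing rule: automate only low-risk, high-confidence actions and escalate everything else. The thresholds and labels below are hypothetical, chosen purely for illustration:

```python
def route_response(alert_severity: str, model_confidence: float,
                   human_approved: bool = False) -> str:
    """Gate a proposed automated action behind guardrails.

    Low-risk, high-confidence actions run automatically; anything
    severe or uncertain is escalated to a human analyst first.
    """
    if model_confidence < 0.8:
        return "escalate: low confidence"
    if alert_severity == "critical" and not human_approved:
        return "escalate: awaiting human approval"
    return "auto-contain"

assert route_response("low", 0.95) == "auto-contain"
assert route_response("critical", 0.95) == "escalate: awaiting human approval"
assert route_response("critical", 0.95, human_approved=True) == "auto-contain"
```

The point of the sketch is architectural: autonomy is a tunable parameter of the control flow, not an all-or-nothing property of the model.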
2.7 Security awareness, human factors, and behavioural risk
Human behaviour remains a dominant factor in cybersecurity breaches. Posey et al. (2014) demonstrate that organisational commitment and behavioural compliance significantly influence security outcomes.
However, GenAI fundamentally alters the human risk landscape. Sheng et al. (2010) show that phishing susceptibility is strongly linked to cognitive heuristics, which are now increasingly exploited by AI-generated phishing content.
Recent literature argues that traditional security awareness training is insufficient in GenAI contexts because:
attacks are dynamically personalised
content is linguistically indistinguishable from legitimate communication
employees face cognitive overload due to increased message realism
As a result, researchers advocate for Security Behaviour and Culture Programmes (SBCPs), which embed continuous behavioural reinforcement mechanisms rather than static training modules (Mohamed, 2025).
2.8 Regulatory fragmentation and cyber resilience
Cybersecurity governance is increasingly shaped by overlapping regulatory frameworks such as:
EU NIS2 Directive
Digital Operational Resilience Act (DORA)
EU AI Act
sector-specific compliance regimes
Bennett and Raab (2020) describe this as regulatory pluralism, where multiple governance systems overlap without full harmonisation, increasing compliance complexity. In response, organisations are adopting cyber resilience frameworks aligned with NIST (2018), which emphasise:
preparation and anticipation
continuous monitoring
rapid recovery
adaptive learning
Wairagade (2025) argues that resilience is now the dominant strategic paradigm in cybersecurity governance due to increasing system interdependence and geopolitical uncertainty.
2.9 Post-quantum cryptography and future-proof security
Post-quantum cryptography (PQC) is widely recognised as a critical future cybersecurity challenge. Shor (1994) demonstrated that quantum computing could theoretically break widely used public-key encryption systems. Bernstein et al. (2017) highlight that migration to PQC is complex due to:
long infrastructure replacement cycles
compatibility requirements
performance trade-offs
global coordination challenges
Recent literature stresses that PQC adoption must begin before quantum systems reach cryptographic relevance, due to the long lead time required for enterprise migration (Uddin et al., 2025).
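Shor's result can be illustrated by separating the algorithm's two halves: a period-finding step (which a quantum computer performs efficiently, but which is brute-forced here) and classical post-processing that turns the period into factors. The sketch below factors 15, the standard textbook case:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r with a^r ≡ 1 (mod n); brute force here, quantum in Shor."""
    r, acc = 1, a % n
    while acc != 1:
        acc = (acc * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int):
    """Given a base a, recover nontrivial factors of n from the period of a."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)  # lucky guess: a shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None  # odd period: retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None  # trivial square root: retry with another a
    return gcd(x - 1, n), gcd(x + 1, n)

# a = 7 has order 4 mod 15; 7^2 ≡ 4, so gcd(3, 15) = 3 and gcd(5, 15) = 5.
assert sorted(shor_classical_part(15, 7)) == [3, 5]
```

Because `order` is the only exponentially hard step classically, a quantum computer that solves it at scale breaks RSA-style factoring assumptions—hence the migration urgency discussed above.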
2.10 Synthesis of literature and research gap
The literature reveals five dominant and interrelated themes:
AI is simultaneously a defensive and offensive force, reshaping cybersecurity dynamics (Uddin et al., 2025).
Agentic systems introduce autonomy risks, requiring new governance models (Vinay, 2025; Suggu, 2025).
Human factors remain critical, but are increasingly exploited by AI-generated attacks (Sheng et al., 2010).
Cyber resilience is replacing traditional security paradigms, driven by regulatory and technological complexity (Wairagade, 2025).
Post-quantum cryptographic migration is becoming an operational imperative with long organisational lead times (Bernstein et al., 2017; Uddin et al., 2025).
Research gap
Despite extensive literature, three key gaps persist:
Lack of unified governance frameworks for agentic AI in cybersecurity
Limited empirical validation of fully autonomous SOC systems
Insufficient integration between PQC migration strategies and AI-driven security architectures
These gaps justify further research into how organisations can safely integrate autonomous AI while maintaining resilience, accountability, and regulatory compliance.
2.11 Conclusion
This chapter has critically reviewed contemporary literature on AI-driven cybersecurity transformation. It demonstrates that cybersecurity is evolving into a multi-layered socio-technical discipline shaped by AI autonomy, human behavioural risk, regulatory fragmentation, and quantum disruption.
The convergence of these forces suggests a fundamental paradigm shift: from static defence systems to adaptive, AI-augmented cyber resilience ecosystems. The following chapter will build on this foundation by developing a conceptual framework for understanding cybersecurity governance in the age of agentic AI.
3. Research Methodology
3.1 Introduction
This chapter outlines the research methodology employed to investigate emerging cybersecurity trends in the context of artificial intelligence (AI), agentic systems, regulatory volatility, and post-quantum cryptography. Given the conceptual and rapidly evolving nature of the topic, a systematic literature review (SLR) guided by PRISMA principles, combined with qualitative thematic analysis, was adopted.
This hybrid approach ensures methodological rigour in study selection while enabling deep interpretive analysis of complex socio-technical cybersecurity phenomena (Page et al., 2021; Braun and Clarke, 2006).
3.2 Research design
The study adopts a qualitative systematic literature review design with thematic synthesis. This design is appropriate for exploring emerging cybersecurity paradigms where empirical datasets are fragmented and theoretical development is ongoing.
A PRISMA-guided SLR was selected to ensure transparency, reproducibility, and minimisation of selection bias in literature identification and screening (Moher et al., 2009; Page et al., 2021). Subsequently, qualitative thematic analysis was applied to extract and synthesise conceptual patterns across studies.
This dual-method approach enables:
systematic identification of peer-reviewed literature
structured screening and eligibility assessment
interpretive synthesis of cybersecurity themes
identification of research gaps and conceptual convergence
3.3 Research approach
The research follows an interpretivist epistemological stance, recognising that cybersecurity trends are socially constructed through interactions between technology, organisations, and adversaries.
Cybersecurity research increasingly requires interpretivist and socio-technical approaches (von Solms and van Niekerk, 2013), as emerging threats such as GenAI-enabled attacks and autonomous agentic systems operate within complex human–machine ecosystems that are not fully explainable through purely positivist or quantitative methods. Recent literature emphasises that AI-driven cybersecurity phenomena involve interpretive, behavioural, and organisational dimensions that require qualitative and mixed-method research designs to fully capture their systemic effects (Abbas et al., 2023; Ernst and Treude, 2026).
3.4 PRISMA systematic literature review process
The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework was used to structure the literature identification and screening process.
3.4.1 Identification phase
Academic literature was sourced from the following databases:
Scopus
IEEE Xplore
ACM Digital Library
SpringerLink
ScienceDirect
Google Scholar (for supplementary coverage)
Search queries included combinations of the following keywords:
“cybersecurity AND artificial intelligence”
“generative AI AND security operations”
“agentic AI AND cybersecurity governance”
“post-quantum cryptography AND migration”
“cyber resilience AND regulatory compliance”
Publications from 2018 to 2026 were included, with emphasis placed on studies from 2023–2026 given the rapid evolution of AI.
3.4.2 Screening phase
Initial search results were screened using the following criteria:
Inclusion criteria:
peer-reviewed journal articles or conference papers
studies focused on cybersecurity, AI security, or cryptography
English-language publications
publications within defined timeframe (2018–2026)
Exclusion criteria:
non-peer-reviewed blogs or opinion articles
studies unrelated to cybersecurity applications
duplicate publications
non-English sources
Titles and abstracts were screened to remove irrelevant studies before full-text review.
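The inclusion and exclusion criteria above amount to a deterministic filter over bibliographic records. A minimal sketch, in which the record field names are illustrative rather than drawn from any particular database export format:

```python
def passes_screening(record: dict) -> bool:
    """Apply the inclusion/exclusion criteria to one bibliographic record."""
    return (
        record.get("peer_reviewed", False)                 # no blogs/opinion pieces
        and record.get("language") == "en"                 # English-language only
        and 2018 <= record.get("year", 0) <= 2026          # defined timeframe
        and record.get("domain") in {"cybersecurity", "ai_security", "cryptography"}
    )

records = [
    {"peer_reviewed": True, "language": "en", "year": 2025, "domain": "ai_security"},
    {"peer_reviewed": False, "language": "en", "year": 2025, "domain": "ai_security"},
    {"peer_reviewed": True, "language": "de", "year": 2024, "domain": "cryptography"},
]
included = [r for r in records if passes_screening(r)]
assert len(included) == 1
```

Encoding the criteria this way also supports the reproducibility goal of PRISMA: the same record set always yields the same screening outcome.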
3.4.3 Eligibility phase
Full-text articles were assessed for:
methodological relevance to cybersecurity AI systems
theoretical contribution to AI governance or cyber resilience
empirical or systematic review quality
relevance to at least one of the identified thematic domains
Studies lacking methodological transparency or clear cybersecurity relevance were excluded.
3.4.4 Inclusion phase
Following PRISMA screening, a final corpus of peer-reviewed studies was included for thematic synthesis. These studies collectively represent key domains such as:
AI-driven cybersecurity systems
GenAI threat landscapes
agentic AI governance frameworks
post-quantum cryptographic migration
human behavioural cybersecurity
3.4.5 PRISMA flow summary
Although not visually rendered here, the PRISMA process followed this structure:
Records identified through databases
Duplicate records removed
Titles and abstracts screened
Full-text articles assessed for eligibility
Studies included in final synthesis
This structured filtering ensured transparency and reproducibility of the review process.
3.5 Data extraction process
A structured data extraction framework was used to ensure consistency across reviewed studies.
For each article, the following data were extracted:
author(s) and year
research methodology
cybersecurity domain (e.g., AI, cryptography, SOC)
key findings
limitations
relevance to research themes
This enabled cross-study comparison and facilitated thematic synthesis.
3.6 Qualitative thematic analysis
The study employed Braun and Clarke’s (2006) six-phase thematic analysis framework, which is widely used in qualitative cybersecurity and information systems research.
Phase 1: Familiarisation with data
All selected papers were read in full to develop an initial understanding of recurring cybersecurity themes, particularly in relation to AI and governance.
Phase 2: Initial coding
Relevant concepts were systematically coded, including:
AI autonomy risks
adversarial machine learning
GenAI-enabled phishing
SOC automation
regulatory fragmentation
cryptographic migration challenges
human behavioural vulnerability
Phase 3: Theme generation
Codes were grouped into broader thematic categories:
AI-driven cybersecurity transformation
Agentic AI governance and autonomy risk
Human behavioural vulnerability in GenAI environments
Regulatory fragmentation and cyber resilience
Post-quantum cryptographic transition
Phase 4: Theme review
Themes were reviewed against the dataset to ensure:
internal coherence
conceptual distinctiveness
alignment with cybersecurity literature
Overlapping themes were merged where appropriate.
Phase 5: Theme definition and naming
Final themes were clearly defined to reflect structural trends in cybersecurity:
AI as a dual-use cybersecurity force
Autonomous AI governance challenges
Behavioural exploitation through GenAI
Resilience under regulatory complexity
Cryptographic transition risk
Phase 6: Reporting
Themes were used to structure the findings and discussion chapters, ensuring alignment between literature synthesis and conceptual interpretation.
3.7 Validity and reliability
To ensure methodological rigour, the following strategies were applied:
Triangulation of sources across multiple academic databases
Transparent inclusion/exclusion criteria based on PRISMA
Structured thematic coding framework to reduce researcher bias
Auditability of data extraction tables
Although qualitative synthesis is inherently interpretive, these measures enhance dependability and confirmability.
3.8 Ethical considerations
As this study relies exclusively on secondary data (published academic literature), no human participants were involved. Therefore, formal ethical approval was not required.
However, ethical academic practice was maintained through:
accurate citation of all sources
avoidance of misrepresentation of findings
inclusion of peer-reviewed and credible academic material only
3.9 Limitations of the methodology
Several limitations are acknowledged:
rapid evolution of AI cybersecurity literature may result in temporal lag
reliance on published studies excludes emerging industry-only insights
potential publication bias toward positive AI cybersecurity outcomes
limited availability of long-term empirical studies on agentic AI systems
Despite these limitations, the PRISMA-guided SLR combined with thematic analysis provides a robust and academically defensible framework for synthesising current knowledge.
3.10 Chapter summary
This chapter outlined a PRISMA-guided systematic literature review combined with qualitative thematic analysis used to investigate cybersecurity trends in 2026. The methodology ensures transparency, reproducibility, and interpretive depth, enabling structured synthesis of complex and rapidly evolving cybersecurity literature.
The next chapter presents the findings derived from the thematic analysis, structured around the five core themes identified in this review process.
4. Research Findings
4.1 Introduction
This chapter presents the findings derived from the qualitative thematic analysis of peer-reviewed literature on cybersecurity trends in the context of artificial intelligence (AI), generative AI (GenAI), agentic systems, regulatory volatility, and post-quantum cryptography. The analysis followed Braun and Clarke’s (2006) six-phase framework and synthesised results from studies selected through a PRISMA-guided systematic literature review (Page et al., 2021).
Five overarching themes were identified:
AI as a dual-use cybersecurity force
Autonomous AI governance and agentic system risk
Human behavioural vulnerability in GenAI environments
Regulatory fragmentation and cyber resilience
Post-quantum cryptographic transition risk
These themes are interdependent and collectively indicate a structural transformation in cybersecurity practice, governance, and risk modelling.
4.2 Theme 1: AI as a dual-use cybersecurity force
4.2.1 Overview of findings
The literature consistently demonstrates that artificial intelligence functions as both a defensive and offensive mechanism within cybersecurity ecosystems. Defensive applications include intrusion detection, anomaly identification, malware classification, and automated threat response (Mohamed, 2025; Wairagade, 2025). However, the same technologies are increasingly leveraged by threat actors to enhance attack sophistication.
4.2.2 Defensive enhancement through AI
Across reviewed studies, AI is shown to significantly improve:
detection accuracy in high-volume environments
real-time anomaly identification
SOC alert prioritisation
predictive threat intelligence generation
Mohamed (2025) finds that deep learning models outperform traditional rule-based systems in identifying zero-day exploits and polymorphic malware. Similarly, Wairagade (2025) reports that AI integration reduces analyst workload by automating repetitive triage tasks.
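The detection and prioritisation capabilities described above can be illustrated with a minimal anomaly-scoring sketch. The z-score threshold, event counts, and function names below are illustrative assumptions, not drawn from the reviewed studies:

```python
from statistics import mean, stdev

def zscore_anomaly_scores(baseline, observed):
    """Score observed event counts against a historical baseline.

    Returns one z-score per observation; higher scores indicate
    stronger deviation from normal behaviour.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observed]

def prioritise_alerts(alerts, scores, threshold=3.0):
    """Keep only alerts whose anomaly score exceeds the threshold,
    ordered most-anomalous first (a crude SOC triage queue)."""
    flagged = [(s, a) for s, a in zip(scores, alerts) if s > threshold]
    return [a for s, a in sorted(flagged, reverse=True)]

# Hourly login counts for a service account: stable baseline, then a spike.
baseline = [102, 98, 110, 95, 105, 99, 101, 104]
observed = [100, 97, 450, 103]          # 450 is the anomalous hour
alerts = ["h1", "h2", "h3", "h4"]

scores = zscore_anomaly_scores(baseline, observed)
queue = prioritise_alerts(alerts, scores)
print(queue)   # ['h3'] -- the spike is flagged, quiet hours are suppressed
```

The suppression of in-range hours mirrors the alert-triage benefit reported in the literature: analysts see a ranked queue rather than every raw event.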
4.2.3 Offensive exploitation of AI
Conversely, AI is widely used in offensive cyber operations. The literature highlights:
AI-generated phishing campaigns with high contextual accuracy
automated malware generation and obfuscation
scalable reconnaissance using LLMs
synthetic identity creation using generative models
Uddin et al. (2025) describe this as a “capability compression effect,” where advanced cybercrime techniques become accessible to low-skilled actors.
4.2.4 Synthesis
The dual-use nature of AI creates a security paradox: the same systems that strengthen defence simultaneously expand offensive capabilities. This dynamic intensifies the cyber threat landscape and accelerates the need for adaptive defensive architectures.
4.3 Theme 2: Autonomous AI governance and agentic system risk
4.3.1 Emergence of agentic systems
A key finding is the rapid emergence of agentic AI systems capable of executing multi-step tasks autonomously. Unlike traditional models, these systems can interact with tools, APIs, and external environments without continuous human input (Vinay, 2025).
4.3.2 Risk characteristics of agentic AI
The literature identifies several critical risk dimensions:
loss of operational transparency (difficulty tracing decision pathways)
goal misalignment (agents executing unintended actions)
permission escalation risks in tool-using environments
multi-agent cascading failures
Arora and Hastings (2025) argue that agentic AI introduces a new class of “action-based vulnerabilities”, as distinct from purely data-based vulnerabilities.
4.3.3 Governance frameworks
Emerging governance approaches include:
lifecycle-based security controls (CIAA models)
model-to-tool permission segmentation
continuous behavioural monitoring of AI agents
policy-constrained agent execution environments
Suggu (2025) proposes a Model–Control–Policy framework to ensure traceability and enforceable boundaries in autonomous workflows.
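The kind of policy-constrained execution environment these approaches envisage can be sketched as an allow-list gate around every agent tool call. The names (`AGENT_POLICIES`, `call_tool`) and policy table are hypothetical illustrations, not elements of Suggu's (2025) framework:

```python
# Every tool invocation is checked against an explicit allow-list and
# recorded for audit, giving traceability and enforceable boundaries.
audit_log = []

AGENT_POLICIES = {
    "report-bot": {"read_ticket", "summarise_text"},   # read-only agent
    "ops-bot": {"read_ticket", "restart_service"},     # limited actions
}

def call_tool(agent, tool, action, *args):
    """Execute a tool on the agent's behalf only if policy allows it."""
    allowed = AGENT_POLICIES.get(agent, set())
    permitted = tool in allowed
    audit_log.append((agent, tool, "allow" if permitted else "deny"))
    if not permitted:
        raise PermissionError(f"{agent} may not use {tool}")
    return action(*args)

# A read-only agent attempting a privileged action is blocked and logged.
try:
    call_tool("report-bot", "restart_service", lambda: "restarted")
except PermissionError as err:
    print(err)
print(audit_log[-1])  # ('report-bot', 'restart_service', 'deny')
```

The design choice worth noting is that the denial is logged before the exception is raised, so even failed escalation attempts leave an audit trail.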
4.3.4 Synthesis
Agentic AI shifts cybersecurity from system protection to the behavioural governance of autonomous digital actors, requiring a fundamental redefinition of identity and access management (IAM) and operational trust models.
4.4 Theme 3: Human behavioural vulnerability in GenAI environments
4.4.1 Evolution of human-targeted attacks
Human factors remain central to cybersecurity breaches, but GenAI has significantly amplified attack sophistication. Traditional phishing and social engineering attacks are now enhanced with:
linguistic personalisation
real-time contextual adaptation
deepfake-enabled impersonation
Sheng et al. (2010) previously identified cognitive heuristics as a primary driver of phishing susceptibility. The current literature demonstrates that GenAI exacerbates these vulnerabilities.
4.4.2 Breakdown of traditional awareness models
Findings indicate that traditional security awareness training is increasingly ineffective due to:
hyper-realistic attack content
cognitive overload among employees
continuous adaptation of attack strategies
indistinguishability between legitimate and malicious communications
Mohamed (2025) argues that awareness training alone cannot mitigate GenAI-enhanced threats.
4.4.3 Emergence of behavioural security models
In response, organisations are shifting toward:
Security Behaviour and Culture Programmes (SBCPs)
continuous behavioural reinforcement mechanisms
embedded security nudges within workflows
real-time phishing simulation systems
4.4.4 Synthesis
Human vulnerability is no longer primarily a knowledge deficit but a cognitive exploitation surface amplified by generative systems.
4.5 Theme 4: Regulatory fragmentation and cyber resilience
4.5.1 Regulatory complexity
The literature identifies increasing fragmentation in global cybersecurity regulation, including:
EU NIS2 Directive
DORA (Digital Operational Resilience Act)
EU AI Act
sector-specific compliance regimes
This creates overlapping and sometimes conflicting governance obligations.
4.5.2 Impact on organisations
Key impacts identified include:
increased compliance overhead
duplication of reporting requirements
inconsistent global security standards
strategic uncertainty in cross-border operations
Bennett and Raab (2020) describe this as regulatory pluralism, which increases governance complexity in digital systems.
4.5.3 Rise of cyber resilience
To address regulatory fragmentation, organisations are adopting cyber resilience frameworks that prioritise:
adaptability over static compliance
continuous monitoring and recovery
operational continuity under attack conditions
integration of governance, risk, and technology functions
NIST (2018) provides the foundational framework, while recent studies (Wairagade, 2025) emphasise its increasing strategic centrality.
4.5.4 Synthesis
Cybersecurity governance is shifting from compliance-based security to resilience-based survivability models.
4.6 Theme 5: Post-quantum cryptographic transition risk
4.6.1 Quantum threat landscape
The literature confirms that quantum computing poses a long-term but credible threat to current cryptographic systems: Shor (1994) established that a sufficiently powerful quantum computer could factor large integers and compute discrete logarithms efficiently, breaking RSA and ECC encryption.
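The practical consequence of Shor's result can be shown at toy scale: once the modulus can be factored (the step a quantum computer would make tractable), the private key follows by elementary arithmetic. The key sizes below are deliberately trivial and the code is illustrative, not cryptographic advice:

```python
def trial_factor(n):
    """Classical brute-force factoring: infeasible at real key sizes,
    but exactly the problem Shor's algorithm solves efficiently."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("n is prime")

n, e = 3233, 17                 # textbook-sized RSA public key (53 * 61)
m = 65                          # plaintext
c = pow(m, e, n)                # ciphertext anyone can produce

p, q = trial_factor(n)          # the hard step, trivial at toy scale
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # private exponent recovered from factors
recovered = pow(c, d, n)

print(recovered == m)           # True: factoring n breaks the key
```

This is why migration pressure exists today: ciphertext harvested now can be decrypted retrospectively once factoring at scale becomes feasible.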
4.6.2 Migration challenges
Key challenges identified include:
long infrastructure replacement cycles
cryptographic dependency chains in enterprise systems
interoperability between classical and quantum-resistant systems
performance degradation in early PQC implementations
Bernstein et al. (2017) emphasise that cryptographic migration is a multi-decade process rather than a short-term upgrade.
4.6.3 Organisational readiness
Recent literature highlights that organisations are largely in the early stages of PQC readiness. Key gaps include:
limited inventory of cryptographic assets
lack of migration roadmaps
absence of hybrid cryptographic deployment strategies
Uddin et al. (2025) argue that delayed adoption significantly increases long-term systemic risk exposure.
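The hybrid deployment strategy noted above can be sketched as a key-derivation step that binds a classical and a post-quantum shared secret together, so the session key survives if either primitive is later broken. The byte strings below are placeholders for secrets a real system would obtain from, for example, an ECDH exchange and an ML-KEM encapsulation:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes = b"hybrid-kdf-v1") -> bytes:
    """Bind both secrets into one key with an HKDF-style extract step."""
    ikm = classical_secret + pq_secret      # concatenated key material
    return hmac.new(context, ikm, hashlib.sha256).digest()

classical = b"\x01" * 32     # stand-in for an ECDH shared secret
pq = b"\x02" * 32            # stand-in for an ML-KEM shared secret

k = hybrid_session_key(classical, pq)
# Changing either input changes the derived key, so compromising only
# the classical primitive does not reveal the session key.
assert k != hybrid_session_key(b"\x03" * 32, pq)
print(len(k))                # 32-byte session key
```

The pattern lets organisations deploy quantum-resistant algorithms incrementally without betting the channel's security on an immature primitive alone.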
4.6.4 Synthesis
Post-quantum cryptography represents a latent systemic risk requiring early strategic intervention despite uncertain timelines.
4.7 Cross-theme synthesis
Across all five themes, three macro-level structural transformations emerge:
1. Shift to autonomy-driven risk
Cybersecurity threats are increasingly generated by autonomous AI systems rather than static adversaries.
2. Collapse of traditional human-centric assumptions
Both attackers and defenders now rely on AI augmentation, fundamentally changing behavioural security models.
3. Transition from control to resilience
Static defence mechanisms are being replaced by adaptive, continuously evolving security architectures.
4.8 Chapter summary
This chapter presented the findings of the thematic analysis, identifying five core themes that define the cybersecurity landscape in 2026. The results demonstrate that cybersecurity is undergoing a systemic transformation driven by AI autonomy, human behavioural exploitation, regulatory fragmentation, and quantum cryptographic disruption.
The next chapter will interpret these findings in relation to the conceptual framework, discussing their implications for cybersecurity governance, organisational strategy, and the evolving role of the CISO.
5. Discussion of Findings
5.1 Introduction
This chapter interprets the findings presented in Chapter 4 in relation to the broader academic literature and conceptual foundations established in Chapters 1–3. Rather than reiterating descriptive themes, the focus here is on critical synthesis, theoretical implication, and conceptual advancement.
The analysis demonstrates that cybersecurity in 2026 is undergoing a structural transition driven by artificial intelligence (AI), agentic autonomy, regulatory fragmentation, and cryptographic disruption. These forces collectively challenge established assumptions about control, trust, and organisational security governance.
Three overarching meta-implications emerge:
Cybersecurity is shifting from deterministic control to probabilistic governance
AI is redefining both attacker capability and defender cognition
Cyber resilience is replacing prevention as the dominant security paradigm
5.2 Cybersecurity as a shift from deterministic control to probabilistic governance
Traditional cybersecurity models are grounded in deterministic assumptions: defined perimeters, known threat signatures, and predictable system behaviour. However, the findings demonstrate that these assumptions no longer hold in AI-augmented environments.
AI systems—particularly those driven by machine learning and GenAI—introduce non-deterministic outputs, where system behaviour is probabilistic rather than rule-based (Mohamed, 2025). This fundamentally disrupts classical security engineering, which assumes repeatability and verifiability of system states.
Agentic AI further intensifies this shift. As shown in Chapter 4, autonomous systems can execute multi-step actions across digital environments without continuous human oversight (Vinay, 2025). This introduces what can be conceptualised as “behavioural uncertainty at machine speed”, where outcomes cannot be fully predicted even when inputs and constraints are known.
From a theoretical standpoint, this aligns with complexity theory approaches to cybersecurity, where systems are viewed as adaptive and emergent rather than controllable entities. The implication is that cybersecurity governance must transition from control assurance to probabilistic risk management, where uncertainty is not eliminated but continuously managed.
5.3 AI as a Bidirectional Capability Amplifier
A central finding of this study is that artificial intelligence operates as a bidirectional capability amplifier, simultaneously enhancing both defensive and offensive cybersecurity operations. Rather than functioning as a neutral efficiency tool, AI actively reshapes the cyber threat landscape by accelerating capabilities on both sides of the security equation, thereby altering the structural balance between attackers and defenders.
5.3.1 Defensive amplification
On the defensive side, AI significantly strengthens cybersecurity operations by improving threat detection accuracy, enhancing Security Operations Centre (SOC) efficiency, enabling predictive threat intelligence, and supporting automated or semi-automated response mechanisms.
Machine learning and anomaly detection systems allow organisations to process large-scale telemetry data in real time, identifying deviations from baseline behaviour that would be difficult for human analysts to detect manually. Similarly, AI-assisted SOC workflows improve alert triage by prioritising incidents based on risk scoring, reducing analyst fatigue and improving response times.
Recent studies confirm that these capabilities produce measurable gains in operational efficiency and detection performance, particularly in environments characterised by high-volume log data and distributed infrastructure complexity (Mohamed, 2025; Wairagade, 2025). However, the literature also emphasises that these improvements remain dependent on continuous human oversight due to model uncertainty, data drift, and adversarial manipulation risks.
5.3.2 Offensive amplification
Conversely, AI significantly enhances offensive cyber capabilities by lowering technical barriers and increasing the scale, speed, and sophistication of attacks. This includes large-scale personalised phishing campaigns, automated malware generation, deepfake-enabled deception, and the democratisation of advanced attack techniques to low-skill actors.
Generative AI systems enable attackers to produce highly convincing and context-aware social engineering content, significantly increasing the success rate of deception-based attacks. At the same time, AI-assisted code generation tools reduce the expertise required to develop exploit chains or malicious scripts, effectively broadening the threat actor base.
Uddin et al. (2025) describe this phenomenon as capability compression, where AI reduces the skill threshold required to execute high-impact cyberattacks while simultaneously increasing their sophistication. This results in a widening operational gap between defensive preparedness and offensive accessibility.
5.3.3 Critical interpretation
The key implication is that AI does not merely create dual-use capability in a balanced manner; rather, it produces a form of asymmetric capability acceleration. In practice, attacker capability is often amplified more rapidly than defensive governance structures can adapt.
This asymmetry arises because defensive systems are constrained by organisational controls, regulatory compliance, safety requirements, and ethical constraints, whereas offensive applications of AI operate with significantly fewer restrictions. As a result, AI introduces a structural imbalance in the cybersecurity ecosystem, where innovation cycles favour adversarial adaptation over institutional response.
5.4 The Collapse of Traditional Human-Centric Security Assumptions
A major conceptual finding of this study is that traditional human-centric cybersecurity models are becoming increasingly unstable under GenAI-driven conditions. Historically, cybersecurity frameworks have assumed that human users represent the weakest link in security systems, and that targeted training and awareness programmes can meaningfully reduce risk exposure.
However, the emergence of GenAI fundamentally challenges these assumptions by reshaping the nature of human vulnerability and attack execution.
5.4.1 Cognitive overload and perceptual indistinguishability
Generative AI enables attackers to produce communication that is linguistically fluent, context-aware, dynamically personalised, and highly consistent with legitimate organisational communication patterns. As a result, the traditional reliance on heuristic detection mechanisms—such as linguistic irregularities, tone inconsistencies, or formatting anomalies—becomes increasingly ineffective.
This leads to perceptual indistinguishability, where malicious and legitimate communications become functionally identical from a human cognitive perspective. Earlier empirical work on phishing susceptibility showed that users struggle to identify deceptive content reliably even when warned about manipulation risks (Sheng et al., 2010); GenAI-generated material intensifies this difficulty.
5.4.2 From knowledge deficit to cognitive exploitation
The literature increasingly suggests a fundamental shift in the nature of human cybersecurity vulnerability. Rather than being primarily driven by lack of awareness or training, vulnerability is increasingly a product of systematic cognitive exploitation enabled by AI systems.
This reframes cybersecurity risk as a neuro-cognitive and behavioural manipulation problem, where attackers optimise persuasion strategies using AI-generated content tailored to individual psychological profiles, behavioural patterns, and contextual triggers. In this model, human decision-making is not merely uninformed but actively targeted through adaptive influence mechanisms.
5.4.3 Implications for organisational strategy
Security Behaviour and Culture Programmes (SBCPs) represent an initial organisational response to these challenges, shifting focus from one-off training interventions to continuous behavioural reinforcement.
However, the literature suggests that SBCPs alone are insufficient in GenAI environments unless supplemented with real-time adaptive security systems embedded directly within workflows. This includes contextual threat warnings, behavioural anomaly detection, and continuous feedback mechanisms that dynamically adjust user interactions based on evolving risk conditions.
5.5 Agentic AI and the Breakdown of Traditional Governance Boundaries
One of the most significant structural disruptions identified in this study is the emergence of agentic AI systems operating as autonomous digital actors within enterprise environments. These systems introduce decision-making autonomy that extends beyond traditional software automation, fundamentally challenging existing cybersecurity governance structures.
5.5.1 Breakdown of IAM assumptions
Traditional Identity and Access Management (IAM) frameworks are built on several core assumptions: static identities, human-controlled authentication, and predictable access patterns. Agentic AI systems violate all three assumptions.
These systems can generate dynamic identities, execute autonomous actions across APIs and enterprise tools, and adapt their behaviour based on contextual goals or environmental feedback. This results in what can be described as fluid identity boundaries, where identity becomes an emergent and continuously evolving property rather than a fixed attribute.
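One way to approximate lifecycle-bound identity for such agents is to replace static credentials with short-lived, narrowly scoped tokens that lapse on their own. The sketch below is an illustrative assumption, not a reviewed framework; names and fields are hypothetical:

```python
import time

def issue_token(agent_id, scopes, ttl_seconds):
    """Mint a task-scoped credential with a built-in expiry."""
    return {
        "agent": agent_id,
        "scopes": frozenset(scopes),
        "expires_at": time.monotonic() + ttl_seconds,
    }

def authorise(token, required_scope):
    """A token is valid only while unexpired and only for its scopes."""
    if time.monotonic() >= token["expires_at"]:
        return False                  # identity lapses automatically
    return required_scope in token["scopes"]

tok = issue_token("invoice-agent", {"invoices:read"}, ttl_seconds=300)
print(authorise(tok, "invoices:read"))    # True within scope and lifetime
print(authorise(tok, "payments:write"))   # False: no privilege escalation
```

Because identity is reissued per task rather than held permanently, the "fluid identity boundary" problem is bounded in both time and privilege.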
5.5.2 Governance fragmentation
The literature identifies a significant governance gap between multiple layers of cybersecurity control, including model-level AI constraints, infrastructure-level security controls (such as IAM and network segmentation), and policy-level regulatory frameworks.
While recent studies propose layered governance architectures (Arora and Hastings, 2025; Suggu, 2025), empirical validation of these frameworks remains limited. In practice, these governance layers are often implemented independently, resulting in inconsistent enforcement and fragmented oversight.
5.5.3 Critical insight
The central issue is not the absence of cybersecurity controls, but the lack of inter-layer coherence. Agentic AI systems operate simultaneously across model, infrastructure, and policy layers, enabling them to exploit inconsistencies and gaps between governance domains.
This creates a new category of vulnerability: inter-layer governance failure, where security breakdowns occur not within individual systems but between them, due to misalignment in authority, visibility, and control mechanisms.
5.6 Cyber Resilience as a Dominant but Incomplete Paradigm
Cyber resilience has become the dominant conceptual framework in modern cybersecurity discourse, reflecting a shift away from prevention-centric models toward adaptive, recovery-oriented systems. However, this study identifies important conceptual and operational limitations in how resilience is currently implemented.
5.6.1 Strength of resilience frameworks
Frameworks such as those proposed by NIST (2018) and extended in recent literature (Wairagade, 2025) define resilience as an organisational capability that is adaptive, continuous, system-wide, and recovery-oriented.
This represents a necessary evolution in cybersecurity thinking, acknowledging that prevention alone is insufficient in complex and adversarial environments where breaches are inevitable.
5.6.2 Limitations of resilience discourse
Despite its conceptual strength, resilience suffers from three major limitations:
Operational ambiguity – resilience is often defined in abstract terms without consistent implementation standards.
Measurement difficulty – there is a lack of robust, quantifiable metrics for assessing resilience maturity.
Reactive bias – current implementations often prioritise recovery over proactive risk reduction.
As a result, resilience risks becoming a strategic narrative rather than an operationally enforceable capability unless translated into measurable governance structures and technical controls.
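To indicate how the measurement gap might be narrowed, the sketch below scores incidents against detection and recovery objectives. The weights, targets, and function name are illustrative assumptions rather than an established maturity metric:

```python
def resilience_score(mttd_hours, mttr_hours,
                     target_mttd=1.0, target_mttr=4.0):
    """Score in [0, 1]: 1.0 means mean time to detect (MTTD) and
    mean time to recover (MTTR) both met their targets."""
    detect = min(1.0, target_mttd / mttd_hours)
    recover = min(1.0, target_mttr / mttr_hours)
    return round(0.4 * detect + 0.6 * recover, 3)

# An incident detected in 2h and recovered in 8h misses both targets...
print(resilience_score(2.0, 8.0))    # 0.5
# ...while one meeting both objectives scores 1.0.
print(resilience_score(0.5, 3.0))    # 1.0
```

Even a crude quantification like this moves resilience from strategic narrative toward the enforceable, auditable capability the critique above calls for.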
5.7 Post-Quantum Cryptography: Deferred Urgency and Organisational Inertia
The findings reveal a persistent mismatch between the long-term severity of quantum computing threats and the short-term prioritisation within organisational cybersecurity strategies.
While Shor’s algorithm (Shor, 1994) establishes the theoretical vulnerability of current public-key cryptographic systems, and Bernstein et al. (2017) highlight the complexity of migration, organisational adoption of post-quantum cryptography (PQC) remains limited.
5.7.1 Structural inertia
Three forms of structural inertia explain delayed adoption:
Infrastructure dependency inertia – legacy systems embedded in critical operations
Cost and resource constraints – high transition and replacement costs
Uncertainty in quantum timelines – ambiguity regarding when threats become operational
5.7.2 Strategic paradox
This creates a strategic paradox: organisations are required to invest in PQC migration under conditions of uncertainty, balancing immediate operational priorities against low-probability but high-impact future risks.
This reflects a broader pattern in cybersecurity risk management, where existential but delayed threats are systematically deprioritised in favour of immediate operational concerns.
5.8 Integrated Discussion: Convergence of Five Systemic Forces
When synthesised, the five thematic areas identified in this study converge into a single systemic transformation of the cybersecurity landscape:
Intelligence acceleration – AI amplifies both offensive and defensive capabilities
Autonomy expansion – agentic systems introduce non-human decision-making actors
Cognitive destabilisation – GenAI undermines human reliability in security contexts
Governance fragmentation – regulatory and technical systems evolve asynchronously
Cryptographic uncertainty – quantum computing destabilises foundational trust systems
Collectively, these forces indicate that cybersecurity is transitioning toward a multi-layered adaptive ecosystem, characterised by continuous adversarial evolution, distributed autonomy, and systemic uncertainty.
5.9 Theoretical Contribution
This study makes three primary contributions to cybersecurity theory by reframing foundational assumptions about control, agency, and system boundaries in the context of AI-driven and agentic digital environments. Collectively, these contributions extend cybersecurity theory beyond traditional deterministic and perimeter-based paradigms toward a more adaptive, socio-technical understanding of security under uncertainty.
5.9.1 From control to probabilistic governance
A fundamental theoretical contribution of this study is the reconceptualisation of cybersecurity from a deterministic control model to a probabilistic governance paradigm. Traditional cybersecurity theory assumes that risk can be managed through layered technical controls, predefined rules, and enforceable policy structures that collectively produce predictable security outcomes.
However, the emergence of AI-driven systems, particularly those involving generative and agentic capabilities, introduces non-deterministic behaviours that cannot be fully anticipated or exhaustively specified. These systems operate within dynamic environments where inputs, outputs, and interactions evolve continuously, often in ways that are not fully observable or explainable.
As a result, cybersecurity outcomes must now be understood as probabilistic rather than deterministic, where even well-designed controls produce variable effectiveness depending on contextual conditions, adversarial adaptation, and system complexity. This shift requires a move toward governance models that prioritise uncertainty management, continuous risk calibration, and adaptive decision-making, rather than static assurance of security states.
In this framing, cybersecurity becomes less about achieving complete control and more about maintaining acceptable risk thresholds within an inherently unpredictable environment.
5.9.2 From human-centric to hybrid cognitive systems
A second theoretical contribution is the redefinition of cybersecurity agency from a human-centric model to a hybrid cognitive systems model, in which security outcomes emerge from the interaction between human operators and AI systems.
Traditional cybersecurity theory positions humans as either decision-makers or vulnerabilities within the system, often framing them as the weakest link in the security chain. However, this study demonstrates that such a dichotomy is increasingly inadequate in environments where AI systems actively participate in decision-making, threat detection, response orchestration, and even attack generation.
In AI-augmented environments, security decisions are no longer the product of human cognition alone but are instead co-produced through distributed cognitive processes involving human analysts, machine learning models, automated agents, and decision-support systems. These systems operate collaboratively, with each component contributing partial, context-dependent interpretations of risk.
This creates what can be conceptualised as a hybrid cognitive security architecture, where intelligence is distributed across human and machine actors. Within this model, security effectiveness depends not only on individual human judgement or algorithmic accuracy but on the quality of interaction, alignment, and feedback loops between human and AI components.
Consequently, cybersecurity theory must move beyond human-centred assumptions and instead account for co-adaptive human–AI systems, where cognitive authority is shared, dynamic, and context-dependent.
5.9.3 From perimeter defence to inter-layer governance
The third theoretical contribution of this study is the shift from perimeter-based security models to a framework of inter-layer governance, where security failures are understood as emerging from misalignment between governance layers rather than breaches at system boundaries.
Traditional cybersecurity models are built around the concept of a defendable perimeter, assuming that threats originate externally and can be mitigated through layered technical defences such as firewalls, intrusion detection systems, and network segmentation. However, the increasing integration of cloud infrastructure, AI systems, autonomous agents, and distributed digital ecosystems has rendered the notion of a stable perimeter increasingly obsolete.
Instead, modern cybersecurity environments are characterised by multiple overlapping governance layers, including AI model governance, infrastructure-level controls, application-level security, and organisational policy frameworks. These layers are often developed and managed independently, resulting in inconsistent enforcement, fragmented visibility, and asynchronous policy application.
This study introduces the concept of inter-layer governance failure, where security vulnerabilities arise not from the failure of any single control system, but from the lack of coherence, alignment, and interoperability between multiple governance domains. In such environments, threats can propagate across layers by exploiting gaps between policy intent, technical implementation, and system behaviour.
Accordingly, cybersecurity theory must evolve to prioritise cross-layer integration, governance coherence, and systemic alignment, rather than focusing solely on strengthening individual defensive perimeters.
5.9.4 Synthesis of theoretical contribution
Taken together, these three contributions redefine cybersecurity as a discipline operating under conditions of systemic uncertainty, distributed cognition, and fragmented governance. The study demonstrates that contemporary cybersecurity challenges cannot be adequately addressed through isolated technical solutions or linear risk models.
Instead, effective cybersecurity theory must account for:
probabilistic system behaviour under AI influence
distributed human–machine cognition
and multi-layer governance interactions within complex socio-technical ecosystems
This reconceptualisation provides a foundation for future research into adaptive cybersecurity systems capable of operating effectively in environments characterised by autonomy, complexity, and continuous adversarial evolution.
5.10 Chapter summary
This chapter critically interpreted the thematic findings and demonstrated that cybersecurity in 2026 is defined by systemic transformation rather than incremental change. AI, agentic systems, human behavioural exploitation, regulatory fragmentation, and quantum cryptography collectively reshape cybersecurity into a dynamic, probabilistic, and multi-agent governance challenge.
The next chapter concludes the dissertation by summarising key findings, discussing practical implications for CISOs and organisations, and proposing directions for future research.
6. Conclusion and Recommendations
6.1 Introduction
This final chapter consolidates the key findings of the study, answers the overarching research aim, and presents actionable recommendations for organisations and cybersecurity leaders. It also outlines the theoretical, practical, and policy implications of the research, followed by a discussion of study limitations and directions for future research.
Across the preceding chapters, the evidence demonstrates that cybersecurity in 2026 is undergoing a structural transformation driven by artificial intelligence (AI), agentic autonomy, GenAI-enabled threats, regulatory fragmentation, and post-quantum cryptographic disruption. These forces collectively redefine how cyber risk is created, governed, and mitigated.
6.2 Summary of key findings
The study identified five dominant and interrelated thematic shifts:
1. AI as a dual-use force
AI simultaneously strengthens defensive cybersecurity capabilities (e.g., anomaly detection, SOC automation) while enabling scalable offensive cybercrime (e.g., phishing, malware generation). This creates a persistent asymmetry in which attacker innovation often outpaces defensive governance (Mohamed, 2025; Uddin et al., 2025).
2. Emergence of agentic AI and autonomy risk
Agentic systems introduce autonomous decision-making into enterprise environments, fundamentally disrupting traditional IAM and security governance models. These systems create “action-based vulnerabilities” that extend beyond conventional data-centric threats (Vinay, 2025; Arora and Hastings, 2025).
3. Collapse of traditional human security assumptions
GenAI undermines traditional security awareness models by enabling highly convincing, adaptive, and context-aware social engineering. Human vulnerability is increasingly a function of cognitive exploitation rather than knowledge deficiency (Sheng et al., 2010).
4. Regulatory fragmentation and shift to resilience
Global cybersecurity governance is becoming increasingly fragmented, driving organisations toward cyber resilience models focused on adaptability, recovery, and continuity rather than prevention alone (NIST, 2018; Wairagade, 2025).
5. Post-quantum cryptography as a latent systemic risk
Quantum computing presents a long-term but structurally significant threat to modern cryptographic systems. Migration to post-quantum cryptography (PQC) is complex, slow, and requires early strategic planning despite uncertain timelines (Bernstein et al., 2017).
6.3 Answer to the research aim
The aim of this study was to critically analyse emerging cybersecurity trends in 2026, particularly in relation to AI-driven transformation, autonomous systems, regulatory volatility, and cryptographic evolution. The findings demonstrate that cybersecurity is no longer primarily a technical discipline focused on perimeter defence. Instead, it has become a socio-technical governance system characterised by continuous adaptation under uncertainty.
Specifically, the research confirms that:
cybersecurity is shifting from deterministic control to probabilistic risk governance
AI is simultaneously amplifying defensive and offensive capabilities
human behaviour is being redefined as a cognitive attack surface
governance systems are fragmented and increasingly resilience-focused
cryptographic systems face long-term structural disruption
Therefore, the central conclusion is that cybersecurity in 2026 is best understood as an adaptive, AI-augmented ecosystem rather than a static defensive framework.
6.4 Theoretical implications
This study contributes to cybersecurity theory in three key ways:
6.4.1 Transition from control to uncertainty governance
Traditional cybersecurity assumes systems can be controlled through layered defence mechanisms. This study demonstrates that AI and agentic systems introduce non-deterministic behaviours that require governance models based on uncertainty tolerance rather than control assurance.
6.4.2 Redefinition of human risk models
Human users are no longer simply “weak links” in security systems. Instead, they are targets of AI-enhanced cognitive manipulation systems, requiring a shift from training-based mitigation to behaviourally adaptive security ecosystems.
6.4.3 Emergence of inter-layer governance failure
Security failures increasingly occur not at the perimeter but between governance layers (AI models, infrastructure controls, and policy systems). This introduces a new theoretical construct where misalignment between layers becomes a primary source of vulnerability.
6.5 Practical implications
6.5.1 Implications for CISOs and security leadership
The CISO role must evolve into a strategic cyber resilience executive function, incorporating:
AI governance and model oversight
enterprise-wide risk orchestration
regulatory alignment across jurisdictions
integration of cyber-physical and digital systems
Security leadership can no longer operate as a siloed technical function.
6.5.2 Implications for SOC operations
Security Operations Centres must make three transitions:
reactive alert processing → predictive AI-assisted defence
manual triage → hybrid human-AI decision systems
static playbooks → adaptive response orchestration
However, full automation is not viable due to risks of hallucination, adversarial manipulation, and model drift. Hybrid SOC architectures remain the most resilient approach.
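The escalation logic behind such a hybrid architecture can be sketched as follows. This is a minimal illustration, not a production triage engine: the thresholds, the `Alert` fields, and the routing labels are all assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    ai_severity: float    # model-estimated severity, 0.0-1.0 (illustrative)
    ai_confidence: float  # model's confidence in its own triage, 0.0-1.0

def route_alert(alert: Alert,
                auto_threshold: float = 0.9,
                severity_cutoff: float = 0.7) -> str:
    """Hybrid triage: the AI may auto-handle only low-severity alerts it is
    highly confident about; everything else escalates to a human analyst,
    implementing an escalation control for autonomous responses."""
    if alert.ai_confidence >= auto_threshold and alert.ai_severity < severity_cutoff:
        return "auto-handle"        # AI-assisted containment, logged for review
    if alert.ai_severity >= severity_cutoff:
        return "escalate-priority"  # human decision validation required
    return "escalate-queue"         # low model confidence: human triage queue
```

The key design choice is that high-severity alerts are never auto-handled, regardless of model confidence, which preserves human oversight where hallucination or model drift would be most costly.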
6.5.3 Implications for organisational training
Traditional security awareness training is insufficient in GenAI environments. Organisations must adopt:
continuous behavioural monitoring systems
embedded real-time security nudges
adaptive phishing simulation frameworks
Security Behaviour and Culture Programmes (SBCPs)
This reflects a shift from knowledge transfer to behavioural conditioning and reinforcement.
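An adaptive phishing simulation framework of this kind would adjust per-user lure difficulty from observed behaviour rather than delivering fixed training content. A minimal sketch, in which the five-level difficulty scale and the adjustment rules are illustrative assumptions:

```python
def adapt_difficulty(level: int, clicked: bool, reported: bool) -> int:
    """Adjust a user's next phishing-simulation difficulty (1 = generic lure,
    5 = GenAI-personalised lure) from their last result: users who click move
    down a level and receive a real-time nudge; users who report move up."""
    if clicked:
        return max(level - 1, 1)   # reinforce basics, pair with an embedded nudge
    if reported:
        return min(level + 1, 5)   # condition against more convincing lures
    return level                   # no interaction: hold the current level
```

Iterating this rule over a simulation programme conditions behaviour continuously, which is the shift from knowledge transfer to reinforcement described above.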
6.5.4 Implications for governance and compliance
Organisations must prepare for increasing regulatory fragmentation by implementing:
unified global compliance mapping frameworks
automated compliance tracking systems
cross-jurisdictional governance structures
AI-specific risk policies aligned with emerging legislation
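A unified compliance mapping framework can be represented, at its simplest, as a many-to-many mapping from internal controls to external clauses, which then supports coverage and gap queries across jurisdictions. The control names and clause identifiers below are illustrative examples, not a complete mapping of any framework:

```python
# Illustrative internal control catalogue mapped to external clauses.
CONTROL_MAP = {
    "CTL-01 Encryption at rest": {
        "ISO 27001": ["A.8.24"],
        "NIST CSF": ["PR.DS-1"],
    },
    "CTL-02 Incident response plan": {
        "ISO 27001": ["A.5.24"],
        "NIST CSF": ["RS.RP-1"],
        "NIS2": ["Art. 23"],
    },
}

def controls_for(framework: str) -> list[str]:
    """List the internal controls that provide coverage for a framework."""
    return [ctl for ctl, fw in CONTROL_MAP.items() if framework in fw]

def unmapped(frameworks: list[str]) -> list[str]:
    """Flag frameworks for which no internal control exists yet: a gap."""
    return [f for f in frameworks if not controls_for(f)]
```

Implementing one control once and evidencing it against many frameworks is what makes such a mapping cheaper than maintaining per-jurisdiction compliance silos.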
6.5.5 Implications for cryptographic strategy
Organisations should begin immediate preparation for post-quantum transition through:
cryptographic asset inventory mapping
hybrid classical–PQC deployment models
phased migration roadmaps
vendor and infrastructure readiness assessments
Early adoption is critical due to long infrastructure replacement cycles.
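A cryptographic asset inventory exercise can be sketched as tagging each asset with a migration priority. The algorithm set, the ten-year lifetime cutoff, and the priority labels are assumptions made for illustration:

```python
# Public-key algorithms broken by Shor's algorithm on a sufficiently large
# quantum computer (Shor, 1994); symmetric algorithms such as AES-256 are
# weakened rather than broken, so they are lower priority here.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DH", "DSA"}

def prioritise(inventory: list[dict]) -> list[dict]:
    """Assign each cryptographic asset an illustrative migration priority:
    quantum-vulnerable algorithm protecting long-lived data -> 'urgent'
    ("harvest now, decrypt later" exposure); vulnerable alone -> 'plan';
    everything else -> 'monitor'."""
    result = []
    for asset in inventory:
        vulnerable = asset["algorithm"] in QUANTUM_VULNERABLE
        long_lived = asset.get("data_lifetime_years", 0) >= 10
        priority = ("urgent" if vulnerable and long_lived
                    else "plan" if vulnerable else "monitor")
        result.append({**asset, "pqc_priority": priority})
    return result
```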
6.6 Strategic recommendations
Based on the findings, the following strategic recommendations are proposed:
Recommendation 1: Establish AI governance frameworks
Organisations should implement formal AI governance structures that include:
agent identity management systems
model behaviour auditing
access control for autonomous agents
AI lifecycle security monitoring
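A minimal sketch of agent identity management with action-scoped access control might look as follows; the class design, the TTL-based grants, and the audit-log shape are illustrative assumptions rather than a reference implementation:

```python
import time

class AgentRegistry:
    """Sketch: each autonomous agent has its own identity and a set of
    short-lived, action-scoped grants; any unlisted action is denied, and
    every decision is appended to an audit log for model-behaviour review."""

    def __init__(self) -> None:
        self._grants: dict[str, dict[str, float]] = {}  # agent -> {action: expiry}
        self.audit_log: list[tuple[str, str, bool]] = []

    def grant(self, agent_id: str, action: str, ttl_seconds: float) -> None:
        """Authorise one action for one agent, expiring after ttl_seconds."""
        self._grants.setdefault(agent_id, {})[action] = time.time() + ttl_seconds

    def authorise(self, agent_id: str, action: str) -> bool:
        """Default-deny check against unexpired, action-scoped grants."""
        expiry = self._grants.get(agent_id, {}).get(action, 0.0)
        allowed = time.time() < expiry
        self.audit_log.append((agent_id, action, allowed))
        return allowed
```

Scoping grants to individual actions, rather than granting agents broad role-based access, is one way to constrain the "action-based vulnerabilities" identified in the findings.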
Recommendation 2: Deploy hybrid human-AI SOC architectures
Security operations should adopt hybrid models that:
leverage AI for detection and triage
retain human oversight for decision validation
implement escalation controls for autonomous responses
Recommendation 3: Transition to behavioural cybersecurity models
Replace static training with:
continuous behavioural analytics
real-time phishing detection feedback loops
embedded organisational security culture systems
Recommendation 4: Implement cyber resilience as a core operating model
Cyber resilience should be embedded across:
enterprise risk management
incident response planning
system design architecture
regulatory compliance strategy
Recommendation 5: Initiate post-quantum cryptography readiness programmes
Organisations should begin PQC transition planning immediately, including:
cryptographic dependency mapping
pilot hybrid encryption deployments
long-term migration planning aligned with NIST standards
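A pilot hybrid deployment typically derives session keys from both a classical and a post-quantum shared secret, so the result remains safe while either input is unbroken. The sketch below uses an HKDF-SHA256 combiner over concatenated secrets; real deployments should rely on vetted libraries implementing the NIST standards (e.g. ML-KEM), and the byte-string secrets here are placeholders:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pqc_secret: bytes,
                       info: bytes = b"hybrid-handshake", length: int = 32) -> bytes:
    """Combine a classical (e.g. ECDH) and a post-quantum (e.g. ML-KEM)
    shared secret via HKDF-SHA256 (RFC 5869); the derived key stays secure
    as long as at least one of the two inputs remains unbroken."""
    ikm = classical_secret + pqc_secret  # concatenation, as in draft hybrid schemes
    # HKDF-Extract with an all-zero salt, then a single HKDF-Expand block.
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    return okm[:length]
```

Because both secrets feed one key-derivation step, the migration can proceed incrementally: classical infrastructure keeps working while PQC components are phased in underneath it.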
6.7 Limitations of the study
This research is subject to several limitations:
reliance on secondary data limits access to real-time industry-specific implementations
rapid evolution of AI cybersecurity means findings may require frequent updating
limited availability of large-scale empirical studies on agentic AI systems
potential publication bias toward positive AI cybersecurity outcomes
Despite these limitations, the PRISMA-guided systematic review combined with thematic analysis provides a robust and academically defensible synthesis.
6.8 Future research directions
Future research should focus on:
empirical validation of agentic AI governance frameworks in enterprise environments
real-world performance benchmarking of AI-driven SOC systems
longitudinal studies on GenAI-driven social engineering effectiveness
development of measurable cyber resilience metrics
practical implementation pathways for post-quantum cryptographic migration
Additionally, interdisciplinary research combining cybersecurity, cognitive science, and AI safety will become increasingly important.
6.9 Final conclusion
This paper has demonstrated that cybersecurity is undergoing a profound structural transformation driven by AI, autonomy, and systemic uncertainty. The traditional model of perimeter-based defence is no longer sufficient in environments where adversaries and defenders alike are augmented by generative and agentic AI systems.
The future of cybersecurity lies in adaptive cyber resilience ecosystems, where governance, technology, and human behaviour operate as integrated and continuously evolving systems. Organisations that fail to adapt to this shift risk becoming structurally vulnerable in an increasingly autonomous digital threat landscape.
References
Abbas, R. et al. (2023) Artificial Intelligence (AI) in Cybersecurity: A Socio-Technical Research Roadmap. The Alan Turing Institute.
Anbiaee, Z. et al. (2026) Security threat modeling for emerging AI-agent protocols: A comparative analysis of MCP, A2A, Agora, and ANP. arXiv preprint.
Arora, S. and Hastings, J. (2025) ‘Securing Agentic AI Systems: A multilayer security framework’, arXiv preprint.
Bernstein, D.J., Lange, T. and Peters, C. (2017) ‘Post-quantum cryptography’, Nature, 549(7671), pp. 188–194.
Biggio, B. and Roli, F. (2018) ‘Wild patterns: Ten years after the rise of adversarial machine learning’, Pattern Recognition, 84, pp. 317–331.
Braun, V. and Clarke, V. (2006) ‘Using thematic analysis in psychology’, Qualitative Research in Psychology, 3(2), pp. 77–101.
Ehtesham, A. et al. (2025) A survey of agent interoperability protocols: MCP, ACP, A2A, and ANP. arXiv preprint.
Ernst, N.A. and Treude, C. (2026) ‘GenAI is no silver bullet for qualitative research in software engineering’, arXiv preprint.
Mohsin, A. et al. (2025) ‘A unified framework for human–AI collaboration in Security Operations Centers with trusted autonomy’, arXiv preprint.
Mohamed, N. (2025) ‘Artificial intelligence and machine learning in cybersecurity: A deep dive into state-of-the-art techniques’, Knowledge and Information Systems, 67, pp. 6969–7055.
NIST (2018) Framework for Improving Critical Infrastructure Cybersecurity. National Institute of Standards and Technology.
Page, M.J. et al. (2021) ‘The PRISMA 2020 statement: an updated guideline for reporting systematic reviews’, BMJ, 372, n71.
Posey, C., Roberts, T.L. and Lowry, P.B. (2014) ‘The impact of organisational commitment on information security behaviour’, Journal of Management Information Systems, 31(4), pp. 122–151.
Sandhu, R. and Samarati, P. (1994) ‘Access control: principle and practice’, IEEE Communications Magazine, 32(9), pp. 40–48.
Sheng, S. et al. (2010) ‘Who falls for phish? A demographic analysis of phishing susceptibility’, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 373–382.
Shor, P.W. (1994) ‘Algorithms for quantum computation: discrete logarithms and factoring’, Proceedings 35th Annual Symposium on Foundations of Computer Science, pp. 124–134.
Singh, R. et al. (2025) ‘LLMs in the SOC: An empirical study of human-AI collaboration in Security Operations Centres’, arXiv preprint.
Sommer, R. and Paxson, V. (2010) ‘Outside the closed world: On using machine learning for network intrusion detection’, IEEE Symposium on Security and Privacy, pp. 305–316.
Sood, A.K., Zeadally, S. and Hong, E.K. (2025) ‘The paradigm of hallucinations in AI-driven cybersecurity systems: understanding taxonomy, classification outcomes, and mitigations’, Computers and Electrical Engineering, 124, Article 110307.
Srinivas, S. et al. (2025) ‘AI-augmented SOC: A survey of LLMs and agents for security automation’, Journal of Cybersecurity and Privacy, 5(4), 95.
Suggu, S.K. (2025) ‘Agentic AI Workflows in Cybersecurity: Opportunities, Challenges, and Governance via the MCP Model’, Journal of Information Systems Engineering and Management.
Uddin, M. et al. (2025) ‘Generative AI revolution in cybersecurity’, Artificial Intelligence Review, 58.
Vinay, V. (2025) ‘The evolution of agentic AI in cybersecurity’, arXiv preprint.
von Solms, R. and van Niekerk, J. (2013) ‘From information security to cyber security’, Computers & Security, 38, pp. 97–102.
Wairagade, A. (2025) ‘Strategic Management of AI-Powered Cybersecurity Systems: A Systematic Review’, Journal of Engineering Research and Reports, 27(8), pp. 54–64.