AI-Driven Transformation, Regulatory Fragmentation, and the Rise of Adaptive Cyber Resilience
A systematic literature review using PRISMA and thematic analysis of AI, GenAI, regulatory volatility, and post-quantum cryptography in modern cybersecurity governance.
Sanchez P.
4/24/2026 · 66 min read


Abstract
This study examines the structural transformation of cybersecurity in 2026, driven by artificial intelligence (AI), generative AI (GenAI), agentic autonomy, regulatory fragmentation, and post-quantum risk. Using a PRISMA-guided systematic literature review combined with interpretive thematic synthesis, the research identifies five interdependent forces reshaping the field: AI as a dual-use capability, governance challenges in autonomous systems, the escalation of cognitive vulnerability, the rise of resilience under regulatory complexity, and the strategic urgency of post-quantum migration.
The findings reveal a fundamental shift from deterministic, perimeter-based security models toward adaptive, probabilistic, and socio-technical governance systems. AI emerges as a bidirectional capability amplifier, simultaneously enhancing defence and accelerating adversarial innovation, thereby producing structural asymmetries. In parallel, agentic AI disrupts traditional governance assumptions by introducing autonomous actors that challenge existing models of control, accountability, and identity.
This paper advances theory by reconceptualising cybersecurity as an AI-augmented resilience ecosystem operating under systemic uncertainty and distributed cognition. It contributes a unified conceptual framework that integrates technological, behavioural, and governance dimensions, addressing fragmentation in existing literature. The study further outlines strategic implications for organisational security, governance design, and long-term cryptographic planning in environments where technological change outpaces formal research.
1. Introduction
1.1 Context and Background
Cybersecurity is undergoing a structural transformation driven by the convergence of artificial intelligence (AI), autonomous systems, regulatory fragmentation, and emerging cryptographic disruption (Sarker, 2021; Buczak and Guven, 2016). Traditional cybersecurity paradigms have historically relied on deterministic assumptions, including clearly defined network perimeters, signature-based threat detection, and human-centred risk mitigation models. However, these assumptions are increasingly insufficient in contemporary environments characterised by highly distributed cyber-physical and socio-technical systems, where computing, sensing, and actuation are deeply interconnected (Humayed et al., 2017).
In such environments, the attack surface is no longer fixed but dynamically evolving, shaped by system complexity and interdependencies that extend beyond organisational boundaries. As a result, adversaries are able to exploit structural weaknesses that are not adequately addressed by perimeter-based security models. At the same time, the scale and sophistication of cyber threats have exposed the limitations of rule-based and signature-driven detection mechanisms. Traditional intrusion detection approaches struggle to identify novel or rapidly evolving attack patterns, particularly in dynamic network environments (Buczak and Guven, 2016).
This has accelerated a shift toward machine learning and data-driven approaches capable of behavioural analysis, anomaly detection, and adaptive threat response (Sarker, 2021). While these approaches enhance detection capability, they also introduce new forms of uncertainty, including model opacity, adversarial manipulation, and dependence on data quality.
Taken together, these developments indicate that cybersecurity is no longer adequately understood as a purely technical control problem. Instead, it is increasingly characterised as a socio-technical governance challenge operating under conditions of uncertainty, where risk is continuously managed rather than fully controlled.
1.2 AI as a Transformative Force in Cybersecurity
The integration of AI into cybersecurity has significantly enhanced organisational capabilities in threat detection, response, and predictive analysis. AI-driven systems are now widely deployed across intrusion detection, anomaly identification, automated incident response, and threat intelligence generation, particularly within complex and high-volume environments such as Security Operations Centres (SOCs) (Mohamed, 2025; Wairagade, 2025).
However, framing AI solely as a defensive enhancement overlooks its broader systemic impact. While AI strengthens detection speed, scalability, and analytical capability, it simultaneously expands the attack surface and introduces new categories of risk. These include adversarial manipulation, data poisoning, model drift, and limited explainability in high-stakes decision-making contexts (Biggio and Roli, 2018; Mohamed, 2025).
At the same time, adversaries are leveraging these same technologies to enhance the scale, speed, and sophistication of cyberattacks. AI-enabled threat actors can generate targeted phishing campaigns, develop adaptive malware, and automate reconnaissance processes with increasing efficiency (Uddin et al., 2025). This dual-use dynamic creates a continuously evolving interaction between AI-augmented defenders and attackers, often described as an emerging “arms race” (Uddin et al., 2025).
Consequently, AI should not be understood merely as an efficiency-enhancing tool within existing cybersecurity frameworks. Rather, it functions as a capability amplifier that simultaneously enhances and destabilises cybersecurity systems, introducing new asymmetries and reinforcing the need for adaptive and continuously evolving defence models.
1.3 Emergence of Agentic AI and Autonomous Security Systems
A significant development within AI-driven cybersecurity is the emergence of agentic AI systems—autonomous entities capable of planning, reasoning, and executing multi-step tasks with limited human intervention. Unlike traditional models that operate reactively, agentic systems can initiate and coordinate actions across complex digital environments, including threat detection, incident response, and system orchestration (Vinay, 2025).
While these systems offer substantial gains in operational efficiency, they challenge foundational assumptions within cybersecurity governance. Traditional frameworks, particularly those related to Identity and Access Management (IAM), are predicated on stable identities, predictable behaviours, and human-controlled decision-making. Agentic AI disrupts these assumptions by introducing dynamic, non-human actors whose behaviour may be adaptive, opaque, and difficult to constrain.
Recent research highlights the need for lifecycle-based security frameworks that extend beyond conventional confidentiality, integrity, and availability (CIA) models to incorporate accountability and behavioural oversight (Arora and Hastings, 2025). This reflects a broader shift from securing systems to governing autonomous behaviour.
As a result, cybersecurity must increasingly address not only the protection of data and infrastructure, but also the control, monitoring, and alignment of autonomous digital agents operating within organisational environments.
1.4 GenAI and the Collapse of Traditional Security Awareness Models
Human behaviour has long been recognised as a critical vulnerability within cybersecurity systems, with security awareness training serving as a primary mitigation strategy. However, the emergence of generative artificial intelligence (GenAI) fundamentally challenges the effectiveness of these traditional approaches.
Generative models enable the creation of highly realistic, context-aware, and linguistically sophisticated content, significantly increasing the effectiveness of phishing and social engineering attacks (Uddin et al., 2025). Empirical research demonstrates that users rely heavily on cognitive heuristics when assessing potential threats, making them susceptible to deception (Sheng et al., 2010). GenAI amplifies this vulnerability by producing communications that are often indistinguishable from legitimate interactions, thereby undermining heuristic-based detection strategies.
This creates a condition of perceptual indistinguishability, where users are no longer able to reliably differentiate between benign and malicious content. As a result, vulnerability is no longer primarily a function of insufficient knowledge or awareness, but rather a consequence of cognitive exploitation enabled at scale.
In response, organisations are increasingly adopting Security Behaviour and Culture Programmes (SBCPs), which emphasise continuous behavioural reinforcement rather than static training interventions (Mohamed, 2025). However, these approaches remain emergent and are not yet fully integrated with adaptive, real-time security mechanisms, highlighting a gap between evolving threat capabilities and organisational response strategies.
1.5 Regulatory Volatility and the Rise of Cyber Resilience
Cybersecurity governance is increasingly shaped by a fragmented and rapidly evolving regulatory landscape. Organisations must navigate overlapping frameworks such as the EU NIS2 Directive, the Digital Operational Resilience Act (DORA), and emerging AI governance regulations. This proliferation reflects a condition of regulatory pluralism, where multiple governance systems coexist without full harmonisation (Bennett and Raab, 2020).
While these frameworks are intended to enhance security and accountability, they introduce significant operational complexity. Organisations face conflicting compliance requirements, duplicated reporting obligations, and increased uncertainty in cross-jurisdictional operations. This undermines the effectiveness of traditional compliance-based security strategies, which assume stable and coherent regulatory environments.
In response, there is a growing shift toward cyber resilience as a dominant governance paradigm. The NIST Cybersecurity Framework defines resilience as the capacity to anticipate, withstand, recover from, and adapt to cyber incidents (NIST, 2018). Unlike prevention-focused models, resilience emphasises adaptability and continuity under conditions of disruption.
However, despite its conceptual importance, resilience remains difficult to operationalise. Its implementation lacks consistent metrics and standardised frameworks, raising questions about how resilience can be effectively measured and integrated into organisational practice within complex regulatory environments.
1.6 Transition from Theoretical Risk to Post-Quantum Urgency
Post-quantum cryptography (PQC) represents a critical but often underprioritised dimension of contemporary cybersecurity. Advances in quantum computing pose a long-term threat to widely used cryptographic systems, particularly those based on RSA and elliptic curve cryptography. Shor’s algorithm demonstrates that sufficiently powerful quantum systems could render these encryption methods ineffective, undermining the security foundations of digital infrastructure (Shor, 1994).
Despite the theoretical nature of this threat, research highlights that the transition to PQC is complex and time-intensive, requiring extensive infrastructure redesign, interoperability planning, and long-term strategic investment (Bernstein et al., 2017). Cryptographic mechanisms are deeply embedded across systems and supply chains, making migration a systemic rather than isolated technical challenge.
This creates a strategic dilemma. Organisations must begin preparing for a future threat that has not yet fully materialised, while balancing immediate operational priorities. As a result, PQC is frequently deprioritised despite its long-term significance.
However, delayed action increases exposure to cumulative risk, particularly in scenarios where encrypted data is harvested and decrypted retrospectively. Consequently, PQC should be understood not as a distant technical issue, but as a present governance challenge requiring early and coordinated planning.
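The timing trade-off can be made explicit with Mosca's well-known risk inequality, introduced here as an illustrative heuristic rather than a framework drawn from the sources cited above:

```latex
% Mosca's risk inequality (illustrative heuristic):
%   x = required secrecy lifetime of the data
%   y = time needed to complete the migration to quantum-resistant
%       cryptography
%   z = time until a cryptographically relevant quantum computer exists
\[
  x + y > z
\]
% If the inequality holds, data encrypted today will still need
% protection after quantum decryption becomes feasible, so harvested
% ciphertext is already at risk.
```

Under this heuristic, organisations whose data must remain confidential for decades may already be inside the risk window, which is precisely the "harvest now, decrypt later" concern.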
1.7 Problem Statement and Research Gap
Despite rapid advancements in AI-driven cybersecurity, significant gaps remain in understanding how organisations can effectively integrate emerging technologies while maintaining governance, accountability, and resilience. Existing literature provides substantial insight into individual domains such as AI-enabled threat detection, generative AI risks, and cyber resilience frameworks. However, these domains are often examined in isolation, resulting in a fragmented understanding of cybersecurity transformation.
Three interrelated gaps are particularly evident:
First, a governance gap exists in relation to autonomous and agentic AI systems. While recent studies acknowledge the emergence of autonomous decision-making in cybersecurity contexts, existing frameworks—particularly those based on traditional Identity and Access Management (IAM)—remain insufficient for managing dynamic, non-human actors (Vinay, 2025). Current models largely assume static identities and predictable behaviour, limiting their applicability to adaptive, goal-directed systems.
Second, a behavioural gap has emerged as conventional security awareness approaches become increasingly ineffective in the context of GenAI-enabled threats. Although human factors have been widely studied, existing models continue to emphasise knowledge and training, rather than addressing the systematic cognitive exploitation enabled by generative technologies (Uddin et al., 2025). This creates a disconnect between the evolving nature of social engineering attacks and the organisational mechanisms designed to mitigate them.
Third, an integration gap reflects the limited alignment between AI-driven security capabilities and broader cyber resilience strategies operating under conditions of regulatory fragmentation. While resilience frameworks emphasise adaptability and recovery, they are often insufficiently integrated with AI-driven detection and response systems, leading to inconsistencies between strategic intent and operational implementation (Wairagade, 2025).
Collectively, these gaps point to a deeper structural limitation: existing cybersecurity models are not designed to operate in environments characterised by autonomy, cognitive manipulation, and regulatory complexity. Furthermore, the evidence base itself remains temporally fragmented. Peer-reviewed research often lags behind rapidly evolving industry practices and regulatory developments, particularly in areas such as GenAI-enabled threats, agentic system deployment, and post-quantum readiness.
This study addresses these limitations by integrating systematic academic evidence with interpretive analysis, providing a more comprehensive and contemporaneous understanding of cybersecurity transformation.
1.8 Aim and Scope of the Study
This study aims to critically analyse the transformation of cybersecurity in 2026, with particular focus on AI-driven systems, agentic autonomy, generative AI threats, regulatory volatility, and post-quantum cryptographic readiness. Rather than examining these developments in isolation, the study seeks to synthesise them into a unified conceptual understanding of cybersecurity as an evolving socio-technical governance system.
Methodologically, the research adopts a PRISMA-guided systematic literature review as its analytical foundation, complemented by interpretive thematic analysis and contextualisation within emerging industry practices and regulatory developments. This hybrid approach enables both methodological rigour in evidence selection and conceptual depth in analysing rapidly evolving phenomena that are not yet fully stabilised within academic literature.
The study is guided by the following objectives:
to critically evaluate the dual role of AI as both a defensive and offensive capability
to analyse governance challenges introduced by autonomous and agentic systems
to examine the transformation of human risk in GenAI-enabled environments
to assess the implications of regulatory fragmentation for cybersecurity strategy
to explore organisational readiness for post-quantum cryptographic transition
Through this integrated analysis, the study contributes to the reconceptualisation of cybersecurity as an adaptive, uncertainty-driven governance system. It further seeks to bridge the gap between fragmented academic research and rapidly evolving real-world practice by providing a coherent framework that integrates technological, behavioural, and regulatory dimensions.
In doing so, the study advances both theoretical understanding and practical insight into how organisations can navigate increasingly complex and dynamic cybersecurity environments.
1.9 Chapter Summary
This chapter has established the conceptual and contextual foundation for analysing cybersecurity in 2026. It has demonstrated that emerging technologies—particularly AI, generative AI, and agentic systems—are not merely extending existing cybersecurity capabilities, but fundamentally reshaping the assumptions upon which they are built.
The chapter has shown that traditional deterministic, perimeter-based security models are increasingly inadequate in environments characterised by distributed architectures, autonomous systems, and rapidly evolving threat dynamics. At the same time, regulatory fragmentation and post-quantum cryptographic challenges introduce additional layers of complexity that cannot be addressed through conventional compliance-driven approaches.
Taken together, these developments indicate a broader transformation of cybersecurity from a technical control discipline toward a socio-technical governance system operating under conditions of uncertainty, continuous adaptation, and systemic interdependence.
The chapter has also identified critical gaps in existing literature, particularly in relation to agentic AI governance, GenAI-driven behavioural risk, and the integration of AI capabilities within resilience-based frameworks. These gaps justify the need for a more integrated and conceptually coherent analysis.
Accordingly, the study adopts a hybrid methodological approach combining systematic literature review with interpretive contextualisation. This approach provides both analytical rigour and responsiveness to emerging developments, forming the basis for the subsequent chapters.
The following chapter critically examines the existing literature in greater depth, further developing the conceptual foundations for analysing cybersecurity transformation.
2. Literature Review
2.1 Introduction
This chapter critically examines contemporary academic literature on the transformation of cybersecurity in the context of artificial intelligence (AI), generative AI (GenAI), agentic systems, regulatory fragmentation, and post-quantum cryptography. Rather than presenting a descriptive overview, the chapter synthesises and evaluates key theoretical and empirical contributions to identify underlying conceptual tensions, limitations, and emerging directions within the field.
Existing scholarship consistently suggests that cybersecurity is transitioning from rule-based, perimeter-oriented defence models toward adaptive, AI-augmented, and resilience-driven architectures (Mohamed, 2025; Wairagade, 2025). However, this transition remains uneven and theoretically underdeveloped. While technological capabilities have advanced rapidly, governance frameworks, behavioural models, and conceptual foundations have not evolved at the same pace.
A central limitation of the current literature is its fragmentation across technical, organisational, and behavioural domains. AI-driven threat detection, human-centred security, regulatory compliance, and cryptographic resilience are often treated as discrete areas of study, with limited integration across them. As a result, cybersecurity is frequently conceptualised as a collection of specialised responses to emerging threats, rather than as a coherent and evolving system.
This chapter addresses this limitation by critically synthesising these domains into a unified analytical framework. It argues that cybersecurity should be understood as an adaptive socio-technical system characterised by interdependencies between technological capability, human behaviour, and governance structures. In doing so, the chapter establishes the conceptual foundation for the subsequent analysis and identifies key gaps that inform the research design.
2.2 Evolution of Cybersecurity Paradigms
Early cybersecurity literature conceptualised security primarily through the confidentiality, integrity, and availability (CIA) triad, supported by perimeter-based defence mechanisms and rule-based detection systems (von Solms and van Niekerk, 2013). These models assumed relatively stable system boundaries, identifiable threat actors, and environments in which risks could be managed through predefined controls.
However, these assumptions have been progressively challenged by the increasing complexity of digital infrastructures. The proliferation of cloud computing, Internet of Things (IoT) ecosystems, and distributed architectures has eroded the notion of a clearly defined network perimeter, rendering traditional defence models increasingly ineffective (NIST, 2018). In such environments, system boundaries are fluid, and attack surfaces extend across organisational and technological domains.
At the same time, the integration of AI introduces non-deterministic behaviour into cybersecurity systems. Unlike rule-based models, machine learning systems operate probabilistically, producing outputs that may vary depending on data inputs and contextual conditions. This challenges the foundational assumption of predictability that underpins traditional security engineering approaches.
Recent literature has therefore begun to reframe cybersecurity as an emergent property of complex socio-technical systems rather than a fixed set of controls (Wairagade, 2025). This perspective aligns with broader theoretical shifts toward complexity and systems thinking, where security outcomes are understood as dynamic and context-dependent rather than fully controllable.
However, while this conceptual shift is widely acknowledged, its implications remain insufficiently developed. Much of the literature recognises the limitations of traditional models but does not provide clear guidance on how adaptive or resilience-based approaches should be operationalised in practice. This creates a persistent gap between theoretical recognition of complexity and the implementation of effective security strategies.
Consequently, the evolution of cybersecurity paradigms can be understood not as a linear progression, but as an ongoing tension between control-oriented models and emerging adaptive frameworks. This tension underpins much of the current literature and highlights the need for more integrated approaches that reconcile technological capability with governance and organisational practice.
2.3 Artificial Intelligence as a Cybersecurity Multiplier
Artificial intelligence (AI) has become a central focus of contemporary cybersecurity research, widely recognised for its capacity to enhance detection, automate response, and enable predictive threat intelligence. However, characterising AI as a straightforward advancement risks oversimplifying its broader systemic implications. This section critically evaluates AI as a cybersecurity multiplier, emphasising its dual role in both strengthening and destabilising security environments.
While existing literature highlights the operational benefits of AI, it often underestimates the extent to which these technologies reshape the structure of cyber risk itself. AI does not simply improve existing processes; it alters the speed, scale, and asymmetry of interactions between defenders and adversaries. As a result, cybersecurity must be understood not only in terms of enhanced capability, but also in terms of increased complexity and uncertainty.
This section therefore examines AI across two interrelated dimensions: its role in enhancing defensive capabilities, and the limitations and risks that emerge from its integration into cybersecurity systems.
2.3.1 Defensive Applications of AI
AI-driven systems have significantly improved the capacity of organisations to detect and respond to cyber threats, particularly in environments characterised by high data volume and complexity. Machine learning techniques are widely used for anomaly detection, behavioural analysis, and intrusion detection, enabling systems to identify patterns that would be difficult to detect through rule-based approaches (Buczak and Guven, 2016; Sarker, 2021).
These capabilities are especially valuable in Security Operations Centres (SOCs), where analysts must process large volumes of alerts and distinguish between benign and malicious activity. AI enables the automation of routine analysis, prioritisation of threats, and acceleration of response times, thereby improving operational efficiency (Mohamed, 2025; Wairagade, 2025).
In addition, predictive analytics has emerged as a key area of development, allowing organisations to anticipate potential threats based on historical data and behavioural trends. This represents a shift from reactive to proactive security models, where intervention can occur before an attack fully materialises.
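As a minimal sketch of the anomaly-detection approaches cited above, the following example trains an isolation forest on synthetic "normal" traffic features and flags outliers. The feature choices, thresholds, and data are illustrative assumptions, not a description of any particular SOC deployment:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-ins for per-session traffic features:
# column 0 = bytes sent, column 1 = connection count.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[500, 10], scale=[50, 2], size=(500, 2))
exfil = np.array([[5000, 3], [4800, 2]])   # unusually large transfers

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)                          # learn the "normal" region

# predict() returns -1 for anomalies and +1 for points judged benign.
print(model.predict(exfil))                # expected: [-1 -1]
print(model.predict(normal[:3]))           # mostly [1 1 1]
```

The appeal of this pattern for SOC workloads is that no attack signatures are required: the model only needs a baseline of normal behaviour, which is exactly why it also inherits the data-quality and drift risks discussed below.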
However, while these advancements are significant, the literature often frames them in predominantly functional terms, focusing on performance improvements rather than systemic implications. This creates a tendency to treat AI as an incremental enhancement rather than a transformative force, limiting critical engagement with its broader impact on cybersecurity strategy and governance.
2.3.2 Limitations and Risks of AI in Cybersecurity
Despite its advantages, the integration of AI into cybersecurity introduces a range of limitations and risks that challenge its reliability and governance. One of the most significant concerns is the vulnerability of machine learning systems to adversarial manipulation. Research demonstrates that attackers can exploit model weaknesses through techniques such as adversarial examples and data poisoning, undermining detection accuracy and system integrity (Biggio and Roli, 2018).
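The mechanics of an adversarial example can be illustrated with a minimal gradient-based perturbation against a toy linear classifier, in the spirit of the attacks surveyed by Biggio and Roli (2018). The model weights and the input sample are synthetic stand-ins; the point is only the mechanism:

```python
import numpy as np

# A toy FGSM-style evasion attack on a linear "malware classifier".
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # trained weights (stand-in for a fitted model)
b = 0.1                  # bias term

def p_malicious(x: np.ndarray) -> float:
    """Sigmoid score: the model's estimated probability of 'malicious'."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=8) + 0.5 * np.sign(w)   # a sample the model flags

# Gradient of the score with respect to the input; stepping against it
# lowers the malicious score while changing each feature only slightly.
s = p_malicious(x)
grad = s * (1.0 - s) * w
eps = 0.5
x_adv = x - eps * np.sign(grad)             # small signed-gradient step

print(f"original score: {p_malicious(x):.3f}, "
      f"evasive score: {p_malicious(x_adv):.3f}")
```

A small, bounded change to every feature is enough to move the sample across the decision boundary, which is why detection accuracy alone is a fragile guarantee in adversarial settings.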
In addition, AI systems are inherently dependent on the quality and representativeness of training data. Biases or gaps in data can lead to inaccurate or incomplete threat detection, raising concerns about false positives, false negatives, and uneven protection across different contexts (Sarker, 2021). These limitations are particularly problematic in dynamic threat environments, where models must continuously adapt to new and evolving attack patterns.
Another critical issue is the lack of transparency in AI decision-making processes. Many machine learning models operate as “black boxes,” making it difficult for analysts to interpret or justify their outputs. This lack of explainability poses significant challenges for accountability, particularly in high-stakes security contexts where decisions may have operational or legal consequences (Mohamed, 2025).
Furthermore, the literature increasingly recognises that AI contributes to an escalation dynamic in cybersecurity. The same technologies that enable enhanced defence are also accessible to adversaries, who can use them to automate attacks, generate adaptive malware, and conduct large-scale social engineering campaigns (Uddin et al., 2025). This creates a feedback loop in which defensive and offensive capabilities co-evolve, reinforcing an ongoing technological arms race.
However, while this dual-use dynamic is widely acknowledged, existing research often stops short of fully theorising its implications. In particular, there is limited discussion of how organisations should adapt governance models to account for AI-driven asymmetry, uncertainty, and continuous adaptation.
As a result, AI should not be understood solely as a tool for improving cybersecurity performance. Rather, it introduces new forms of systemic risk that require rethinking how security is designed, managed, and governed in increasingly complex environments.
2.4 Agentic AI and Autonomous Security Systems
The emergence of agentic AI represents a significant shift in how artificial intelligence is conceptualised within cybersecurity. Unlike traditional machine learning models, which generate outputs in response to specific inputs, agentic systems operate as autonomous entities capable of planning, reasoning, and executing multi-step actions with limited human intervention (Vinay, 2025). These systems are increasingly deployed in cybersecurity contexts for tasks such as automated threat detection, incident response, and system orchestration.
Existing literature tends to frame agentic AI primarily in terms of efficiency gains and operational scalability. While these benefits are substantial, such perspectives risk underestimating the broader structural implications of autonomy in security-critical environments. Agentic systems do not simply accelerate existing processes; they fundamentally alter the locus of decision-making, shifting it from human operators to autonomous digital actors.
This shift introduces a qualitative change in cybersecurity risk. Traditional systems are designed to manage vulnerabilities within infrastructure, whereas agentic AI introduces risk through behaviour—specifically, through the actions taken by autonomous systems operating under conditions of uncertainty. As a result, security challenges are no longer confined to protecting systems from external threats, but extend to governing the internal behaviour of intelligent agents.
Despite growing recognition of these challenges, current research remains conceptually fragmented. Technical studies focus on system performance and architecture, while governance discussions often remain abstract, lacking concrete mechanisms for managing autonomy. This creates a gap between the increasing deployment of agentic systems and the development of frameworks capable of regulating their behaviour in practice.
Consequently, agentic AI should be understood not merely as an extension of AI capability, but as a transformative force that redefines the boundaries of cybersecurity, shifting its focus from system protection to behavioural governance.
2.4.1 Limitations of Traditional Identity and Access Management (IAM)
Identity and Access Management (IAM) has long served as a foundational component of cybersecurity governance, based on the principle that access to systems can be controlled through the authentication and authorisation of identifiable users. These models assume stable identities, predictable behaviour, and a clear distinction between human users and automated processes.
However, the emergence of agentic AI exposes fundamental limitations in these assumptions. Autonomous systems do not conform to static identity models; instead, they operate dynamically, often interacting with multiple systems, generating new processes, and adapting their behaviour in response to changing conditions (Vinay, 2025). As a result, access control mechanisms designed for human users are increasingly insufficient for managing the actions of autonomous agents.
Moreover, IAM frameworks are primarily concerned with whether access should be granted, rather than how that access is subsequently used. This distinction becomes critical in the context of agentic AI, where risk arises not only from unauthorised access, but from the actions performed by authorised agents. In such cases, a system may behave in ways that are technically permitted but operationally undesirable or even harmful.
While recent literature acknowledges these limitations, proposed solutions remain underdeveloped. Extensions to IAM models often focus on refining authentication mechanisms or incorporating contextual factors, but do not fully address the need for continuous behavioural oversight. This reflects a broader tendency within the literature to adapt existing frameworks incrementally, rather than reconsider their underlying assumptions.
As a result, IAM in its current form is structurally misaligned with the realities of autonomous systems, highlighting the need for more dynamic and behaviour-oriented governance models.
2.4.2 Towards Behavioural Governance Frameworks
In response to the limitations of traditional access control models, emerging research has begun to explore the concept of behavioural governance in cybersecurity. Rather than focusing solely on who or what can access a system, behavioural approaches emphasise monitoring, constraining, and evaluating how entities act within it.
Recent studies propose lifecycle-based frameworks that extend beyond the conventional confidentiality, integrity, and availability (CIA) triad to include dimensions such as accountability, traceability, and alignment (Arora and Hastings, 2025). These frameworks recognise that in environments characterised by autonomous agents, security depends not only on preventing unauthorised access, but also on ensuring that authorised actions remain within acceptable behavioural boundaries.
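What behavioural oversight might look like in code can be suggested with a deliberately minimal sketch: a monitor that logs every action an authorised agent takes and flags behaviour drifting outside a declared envelope. The action names, the distinct-host limit, and the monitor interface are hypothetical illustrations, not an established framework:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The shift this sketch illustrates: from "may the agent act?" (IAM)
# to "is its ongoing behaviour still within acceptable bounds?".

@dataclass
class AgentMonitor:
    max_distinct_hosts: int = 20
    audit: list = field(default_factory=list)    # traceability record
    seen_hosts: set = field(default_factory=set)

    def authorise(self, agent_id: str, action: str, target: str) -> bool:
        allowed = True
        if action == "scan_host":
            self.seen_hosts.add(target)
            # An authorised agent that fans out across many hosts is
            # flagged, even though each single access would pass a
            # static IAM check.
            allowed = len(self.seen_hosts) <= self.max_distinct_hosts
        self.audit.append(
            (datetime.now(timezone.utc), agent_id, action, target, allowed)
        )
        return allowed

m = AgentMonitor(max_distinct_hosts=2)
for host in ["10.0.0.1", "10.0.0.2", "10.0.0.3"]:
    print(host, m.authorise("agent-7", "scan_host", host))
```

Even this trivial example surfaces the hard questions the literature leaves open: who defines the envelope, how it adapts over time, and what intervention follows a flagged action.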
However, despite their conceptual promise, behavioural governance models remain in early stages of development. Much of the existing literature provides high-level principles without clear implementation strategies, leaving organisations with limited guidance on how to operationalise these approaches in practice. Key challenges include defining acceptable behaviour, ensuring transparency in decision-making, and establishing mechanisms for real-time intervention.
Furthermore, there is limited integration between behavioural governance and AI-specific risks such as model drift, emergent behaviour, and adversarial manipulation. This lack of integration reflects a broader fragmentation in the literature, where governance, technical design, and risk management are often treated as separate domains.
Consequently, while behavioural governance represents a necessary evolution in cybersecurity thinking, it remains an incomplete response to the challenges introduced by agentic AI. Further research is required to translate these conceptual frameworks into practical, scalable, and enforceable security models.
2.5 Generative AI and Human-Centric Cyber Risk
Generative artificial intelligence (GenAI) has introduced a significant shift in the nature of human-centric cybersecurity risk. While human vulnerability has long been recognised as a critical factor in security breaches, the capabilities of GenAI fundamentally alter how this vulnerability is produced, exploited, and managed.
Existing literature has traditionally framed human risk in terms of awareness deficits, assuming that users can be trained to recognise and avoid threats through education and vigilance (Sheng et al., 2010). This perspective underpins widespread organisational investment in security awareness programmes, which aim to improve user judgement through knowledge and behavioural reinforcement.
However, the emergence of GenAI challenges the validity of this assumption. Generative models are capable of producing highly realistic, context-aware, and linguistically sophisticated content, enabling attackers to create phishing messages and social engineering campaigns that closely mimic legitimate communications (Uddin et al., 2025). As a result, traditional indicators of malicious intent—such as grammatical errors or generic messaging—are no longer reliable cues for detection.
This development introduces a qualitative shift in the nature of cyber risk. Rather than exploiting gaps in user knowledge, GenAI-enabled attacks exploit fundamental cognitive processes, including trust formation, pattern recognition, and heuristic decision-making. In such contexts, even well-informed users may be unable to reliably distinguish between legitimate and malicious interactions.
Consequently, human vulnerability should no longer be understood solely as a function of insufficient awareness, but as a structurally exploitable feature of human cognition. This reframing challenges the continued reliance on awareness-based security models, which assume that improved knowledge will lead to improved decision-making.
Despite growing recognition of these challenges, the literature remains limited in its response. While some studies advocate for enhanced training or simulation-based approaches, these interventions often retain the same underlying assumption that users can be trained to detect increasingly sophisticated threats. This creates a misalignment between the evolving capabilities of attackers and the defensive strategies employed by organisations.
Emerging approaches, such as Security Behaviour and Culture Programmes (SBCPs), attempt to move beyond static training by embedding security into organisational practices and decision-making processes (Mohamed, 2025). However, these approaches remain underdeveloped and are not yet fully integrated with technical security systems, particularly those leveraging AI for real-time detection and response.
As a result, GenAI exposes a critical gap in cybersecurity strategy: the lack of effective models for managing human risk in environments where deception is highly scalable, contextually precise, and cognitively targeted. Addressing this gap requires a shift from awareness-based interventions toward integrated socio-technical approaches that combine behavioural insights with adaptive technological controls.
2.6 Regulatory Fragmentation and Cyber Resilience
The governance of cybersecurity is increasingly influenced by a complex and fragmented regulatory landscape, characterised by overlapping frameworks, sector-specific requirements, and rapidly evolving policy initiatives. Prominent examples include the EU NIS2 Directive, the Digital Operational Resilience Act (DORA), and emerging regulatory approaches to artificial intelligence. While these frameworks aim to enhance organisational security and accountability, their coexistence creates significant challenges for implementation and strategic alignment (Bennett and Raab, 2020).
Existing literature often presents regulatory expansion as a necessary response to escalating cyber threats. However, this perspective tends to overlook the operational consequences of regulatory fragmentation. Organisations are required to comply with multiple, sometimes inconsistent, frameworks across jurisdictions, leading to duplicated efforts, increased compliance costs, and uncertainty in prioritising security investments. As a result, cybersecurity governance is frequently shaped as much by regulatory obligation as by risk-based decision-making.
This has reinforced the dominance of compliance-driven security models, where success is measured by adherence to prescribed standards rather than by the effectiveness of security outcomes. While compliance frameworks provide important baseline controls, they are typically designed for stability and standardisation, making them less suited to environments characterised by rapid technological change and evolving threat landscapes.
In response to these limitations, the concept of cyber resilience has gained increasing prominence. The NIST Cybersecurity Framework defines resilience as the ability to anticipate, withstand, recover from, and adapt to cyber incidents (NIST, 2018). This represents a shift away from purely preventive models toward approaches that acknowledge the inevitability of breaches and emphasise continuity under disruption.
However, despite its growing adoption, resilience remains conceptually and operationally ambiguous. Much of the literature defines resilience at a high level but provides limited guidance on how it can be measured, implemented, or integrated with emerging technologies such as AI-driven security systems. This creates a gap between the strategic appeal of resilience and its practical application within organisations.
Furthermore, resilience frameworks are rarely examined in conjunction with the realities of regulatory fragmentation. In practice, organisations must balance adaptive, resilience-oriented strategies with rigid compliance requirements, often resulting in hybrid approaches that lack coherence. This tension highlights a broader limitation in the literature, which tends to treat regulation and resilience as complementary, rather than potentially conflicting, forces.
Consequently, while regulatory developments and resilience frameworks represent important advancements in cybersecurity governance, they do not fully resolve the challenges introduced by complexity, uncertainty, and technological acceleration. Instead, they illustrate the need for more integrated models that align regulatory compliance, adaptive capability, and technological innovation within a coherent governance framework.
2.7 Post-Quantum Cryptography and Future Risk
Post-quantum cryptography (PQC) has emerged as a critical area of concern within cybersecurity, driven by the potential impact of quantum computing on existing cryptographic systems. Widely deployed encryption methods, particularly those based on RSA and elliptic curve cryptography, rely on mathematical problems that could be efficiently solved by sufficiently advanced quantum computers using algorithms such as Shor’s algorithm (Shor, 1994). This creates a scenario in which the foundational mechanisms securing digital communication, financial systems, and critical infrastructure may become vulnerable.
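The structure of the threat can be stated compactly: factoring, and hence breaking RSA, reduces to a period-finding problem that quantum computation solves efficiently. The following restates the well-known reduction underlying Shor's algorithm:

```latex
% Factoring N reduces to finding the period r of modular
% exponentiation for a random x coprime to N (Shor, 1994):
\[
  f(a) = x^{a} \bmod N, \qquad f(a + r) = f(a).
\]
% If r is even and x^{r/2} \not\equiv -1 \pmod{N}, then
\[
  \gcd\!\left(x^{r/2} - 1,\, N\right)
  \quad\text{and}\quad
  \gcd\!\left(x^{r/2} + 1,\, N\right)
\]
% are non-trivial factors of N.
```

Classically, period finding for large moduli is believed intractable; quantum period finding makes it polynomial-time, and an analogous attack on discrete logarithms is what exposes elliptic curve schemes.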
Despite this recognised risk, the current literature reveals a disconnect between theoretical awareness and practical response. PQC is often framed as a future-oriented issue, leading organisations to deprioritise it in favour of more immediate cybersecurity concerns. However, this perspective underestimates the structural complexity of cryptographic systems, which are deeply embedded across organisational infrastructures, software dependencies, and global communication protocols (Bernstein et al., 2017).
The transition to quantum-resistant algorithms is therefore not a simple technical upgrade, but a large-scale transformation requiring long-term planning, system-wide coordination, and significant resource investment. Cryptographic agility—the ability to rapidly replace or update cryptographic mechanisms—remains limited in many organisations, further complicating this transition.
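Cryptographic agility is, at its core, an indirection pattern: call sites request an algorithm via policy rather than hard-coding one, so migration becomes a configuration change instead of a codebase-wide rewrite. A minimal sketch using only standard-library hashes, where the registry names and the "policy" label are illustrative assumptions:

```python
import hashlib
from dataclasses import dataclass
from typing import Callable

@dataclass
class HashSuite:
    name: str
    digest: Callable[[bytes], bytes]

# Swapping algorithms means updating this registry, not every call
# site; a PQC migration would add and then promote a new entry.
REGISTRY = {
    "current": HashSuite("sha256", lambda b: hashlib.sha256(b).digest()),
    "next": HashSuite("sha3_256", lambda b: hashlib.sha3_256(b).digest()),
}

def fingerprint(data: bytes, policy: str = "current") -> bytes:
    # Single indirection point: the core idea behind cryptographic agility.
    return REGISTRY[policy].digest(data)

print(fingerprint(b"hello").hex()[:16])
```

Organisations whose cryptographic calls are scattered and hard-coded lack this indirection point, which is why migration is described in the literature as systemic rather than a drop-in upgrade.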
A key challenge highlighted in the literature is the temporal asymmetry of quantum risk. While the full capabilities of quantum computing have not yet materialised, adversaries may already be collecting encrypted data with the intention of decrypting it in the future once quantum technologies become viable. This “harvest now, decrypt later” model reframes PQC as a present security concern rather than a distant theoretical threat.
However, despite increasing recognition of these risks, there is limited integration of PQC considerations within broader cybersecurity strategies. Research often treats post-quantum transition as a specialised technical domain, rather than as a core component of organisational risk management and governance. This reflects a broader tendency within the literature to separate long-term strategic risks from immediate operational concerns.
Furthermore, the uncertainty surrounding the timeline and capabilities of quantum computing complicates decision-making. Organisations must allocate resources and implement changes in the absence of clear timelines, creating a tension between proactive preparation and efficient resource utilisation.
Consequently, PQC highlights a fundamental limitation in existing cybersecurity approaches: the difficulty of managing risks that are both uncertain in timing and systemic in impact. Addressing this challenge requires a shift from reactive security models toward forward-looking, resilience-oriented strategies that incorporate long-term technological uncertainty into governance and planning processes.
2.8 Synthesis of Literature and Research Gap
The reviewed literature reveals five interrelated themes shaping contemporary cybersecurity:
AI acts as both a defensive and offensive force, creating dynamic threat environments
Agentic AI introduces autonomy-driven risks requiring new governance models
Human vulnerability is increasingly shaped by cognitive exploitation
Regulatory fragmentation drives the shift toward resilience-based strategies
Post-quantum cryptography introduces long-term systemic risk
While each of these themes is well explored individually, the literature remains fragmented. There is limited integration across technological, behavioural, and governance perspectives.
Three key research gaps emerge:
Governance gap: Lack of empirically validated frameworks for managing agentic AI (Arora and Hastings, 2025; Suggu, 2025; Wairagade, 2025; Abbas et al., 2023)
Behavioural gap: Insufficient models for addressing GenAI-driven cognitive exploitation (Brundage et al., 2018; Sheng et al., 2010; Sood, Zeadally and Hong, 2025; Uddin et al., 2025)
Integration gap: Weak alignment between AI-driven security systems and resilience frameworks (NIST, 2018; Radanliev et al., 2020; Mohsin et al., 2025; Srinivas et al., 2025)
Collectively, these gaps indicate that cybersecurity theory has not kept pace with technological change. This study addresses this limitation by synthesising these domains into a unified conceptual framework.
2.9 Conclusion
This chapter has critically examined the literature on AI-driven cybersecurity transformation, highlighting both advancements and limitations. The findings demonstrate that cybersecurity is evolving into a complex, multi-layered discipline shaped by technological innovation, behavioural dynamics, and regulatory pressures.
However, the literature remains fragmented and lacks a cohesive theoretical foundation capable of integrating these dimensions. This reinforces the need for a reconceptualisation of cybersecurity as an adaptive socio-technical system operating under uncertainty.
The next chapter outlines the research methodology used to investigate these themes, providing a structured approach to analysing the evolving cybersecurity landscape.
3. Research Methodology
3.1 Introduction
This chapter outlines the methodological framework employed to investigate the transformation of cybersecurity in the context of artificial intelligence (AI), agentic systems, regulatory fragmentation, and post-quantum cryptography. Given the conceptual, rapidly evolving, and interdisciplinary nature of the research domain, the study adopts a hybrid qualitative research design.
The methodological approach combines a PRISMA-guided systematic literature review (SLR) of peer-reviewed academic sources with interpretive thematic analysis and contextualisation of emerging industry practices and regulatory developments. This design enables both methodological rigour in the identification and evaluation of established research and conceptual flexibility in analysing phenomena that are not yet fully stabilised within the academic literature.
Rather than seeking statistical generalisation, the study aims to generate theoretical insight and integrative understanding, aligning with its objective of reconceptualising cybersecurity as an adaptive socio-technical governance system operating under conditions of uncertainty. The methodology is therefore designed to balance systematic evidence synthesis with interpretive depth, reflecting the complexity of contemporary cybersecurity environments.
3.2 Research Design
This study adopts a qualitative, hybrid research design integrating a PRISMA-guided systematic literature review with interpretive thematic synthesis and contextual analysis. This design is particularly appropriate for emerging and interdisciplinary domains where knowledge is distributed across both academic research and rapidly evolving practice.
The PRISMA-guided SLR provides a structured and transparent process for identifying, screening, and evaluating peer-reviewed literature, ensuring methodological consistency and reducing selection bias (Moher et al., 2009; Page et al., 2021). This component establishes a robust academic evidence base across key domains, including AI-driven cybersecurity, generative AI threats, agentic system governance, cyber resilience, and post-quantum cryptography.
However, cybersecurity—particularly in areas such as GenAI, agentic systems, and regulatory development—is characterised by asynchronous evolution between academic research and real-world practice. Important insights often emerge first in industry reports, policy frameworks, and operational implementations, which may not yet be fully represented in peer-reviewed literature.
To address this limitation, the study incorporates contextual interpretive analysis, situating findings from the systematic review within the broader socio-technical and regulatory landscape. This does not constitute a formal grey literature review but rather an analytical framing layer that enhances the relevance and timeliness of the findings.
The combined design enables:
systematic identification and evaluation of peer-reviewed research
reduction of selection bias through structured screening processes
interpretive synthesis of complex, interdisciplinary themes
contextual alignment with emerging industry and regulatory developments
identification of conceptual gaps between theory and practice
This approach reflects a deliberate methodological choice to prioritise integration and explanatory depth over purely descriptive aggregation, consistent with the study’s theoretical orientation.
3.3 Research Approach
The research adopts an interpretivist epistemological stance, recognising that cybersecurity phenomena—particularly those involving AI, autonomy, and human interaction—are socially constructed, context-dependent, and dynamically evolving. This perspective acknowledges that cybersecurity is not solely a technical domain, but a socio-technical system shaped by interactions between technologies, organisations, regulatory frameworks, and human actors.
Interpretivism is particularly appropriate given the emergence of AI-driven and agentic cybersecurity systems, where meaning, risk, and decision-making are distributed across both human and machine agents. These systems exhibit non-deterministic behaviour, adaptive learning, and context-sensitive interactions, which cannot be fully understood through purely positivist or quantitative approaches.
Within this epistemological framework, the PRISMA-guided systematic literature review functions as a structured evidence foundation, rather than a complete representation of the domain. The subsequent thematic analysis and contextual interpretation allow the study to move beyond aggregation of findings toward conceptual synthesis and theoretical development.
This approach aligns with recent methodological trends in interdisciplinary cybersecurity research, which emphasise the need for qualitative and hybrid methods to address complexity, uncertainty, and rapid technological change (Abbas et al., 2023; Ernst and Treude, 2026). By integrating systematic review with interpretive analysis, the study is able to examine not only what changes are occurring in cybersecurity, but also how and why these changes reshape underlying assumptions about risk, control, and governance.
3.4 PRISMA-Guided Systematic Literature Review Process
The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) framework was employed to ensure a structured, transparent, and reproducible process for identifying and selecting peer-reviewed academic literature relevant to the research objectives (Page et al., 2021). Within this study, PRISMA is used as a methodological tool to construct a rigorous academic evidence base, rather than as a comprehensive representation of the entire cybersecurity knowledge landscape.
Given the study’s focus on the transformation of cybersecurity across technological, behavioural, and governance dimensions, the PRISMA process was designed to capture high-quality, peer-reviewed research addressing the following domains:
AI-driven cybersecurity systems
generative AI and social engineering threats
agentic AI and autonomous system governance
cyber resilience and regulatory complexity
post-quantum cryptographic transition
This structured review forms the analytical foundation for subsequent thematic synthesis. However, consistent with the hybrid research design, the findings derived from this process are later interpreted within a broader socio-technical and regulatory context, acknowledging that not all relevant developments are fully represented in academic literature.
3.4.1 Identification Phase
Academic literature was sourced from multiple databases to ensure broad coverage across cybersecurity, artificial intelligence, and information systems research domains. The primary databases included:
Scopus
IEEE Xplore
ACM Digital Library
SpringerLink
ScienceDirect
Google Scholar was used selectively to supplement coverage and identify relevant studies not indexed in the primary databases.
Search queries were constructed to align directly with the study’s research aims and thematic focus, combining core concepts such as:
“cybersecurity AND artificial intelligence”
“generative AI AND social engineering”
“agentic AI AND cybersecurity governance”
“post-quantum cryptography AND migration”
“cyber resilience AND regulatory compliance”
The search strategy prioritised publications from 2018 to 2026, with particular emphasis on recent studies (2023–2026) to capture the rapidly evolving nature of AI-driven cybersecurity.
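For consistency across databases, the search strings above can be assembled programmatically. The sketch below is illustrative only; the helper, term pairs, and Scopus-style PUBYEAR year filter are assumptions about one plausible implementation, not a record of the exact queries executed:

```python
# Term pairs mirror the five search queries listed above.
THEME_TERMS = [
    ("cybersecurity", "artificial intelligence"),
    ("generative AI", "social engineering"),
    ("agentic AI", "cybersecurity governance"),
    ("post-quantum cryptography", "migration"),
    ("cyber resilience", "regulatory compliance"),
]

def build_query(a: str, b: str, start: int = 2018, end: int = 2026) -> str:
    # Scopus-style advanced query with an inclusive year window.
    return f'"{a}" AND "{b}" AND PUBYEAR > {start - 1} AND PUBYEAR < {end + 1}'

for a, b in THEME_TERMS:
    print(build_query(a, b))
```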
3.4.2 Screening Phase
Initial search results were screened using predefined inclusion and exclusion criteria to ensure relevance, quality, and alignment with the research objectives.
Inclusion criteria:
peer-reviewed journal articles and conference papers
studies focused on cybersecurity, AI security, governance, or cryptography
English-language publications
publications within the defined timeframe (2018–2026)
Exclusion criteria:
non-peer-reviewed sources (e.g., blogs, opinion articles)
studies not directly related to cybersecurity applications
duplicate records
non-English publications
Titles and abstracts were systematically reviewed to remove irrelevant studies. This stage ensured that the dataset remained focused on literature directly contributing to the study’s analytical scope.
3.4.3 Eligibility Phase
Full-text articles were assessed against more detailed criteria to ensure analytical depth and methodological robustness. Studies were evaluated based on:
clarity of research design and methodological transparency
relevance to one or more of the study’s thematic domains
contribution to understanding cybersecurity transformation, AI integration, or governance challenges
conceptual or empirical depth of findings
Studies lacking methodological clarity or substantive relevance were excluded. This phase ensured that the final dataset consisted of academically rigorous and analytically meaningful sources, suitable for thematic synthesis.
3.4.4 Inclusion Phase
Following the screening and eligibility process, a final corpus of peer-reviewed studies was selected for analysis. These studies collectively represent the core academic discourse across the five thematic areas identified in this research:
AI-driven cybersecurity transformation
generative AI threat environments
agentic AI governance and autonomy risk
regulatory fragmentation and cyber resilience
post-quantum cryptographic transition
This curated dataset provides the empirical foundation for thematic analysis, enabling the identification of recurring patterns, conceptual relationships, and structural trends within the literature.
3.4.5 PRISMA Flow Summary
The PRISMA process followed a structured sequence:
identification of records through database searches
removal of duplicate records
screening of titles and abstracts
full-text eligibility assessment
final inclusion of relevant studies
While the PRISMA framework ensures methodological transparency and reproducibility, it is important to note that the resulting dataset reflects the state of peer-reviewed academic knowledge, which may not fully capture rapidly evolving industry practices, emerging threats, or regulatory developments. These limitations are addressed through the study’s broader interpretive and contextual analytical approach.
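As an illustration of the duplicate-removal step in this flow, records can be keyed on DOI with a normalised-title fallback; the record values below are invented for demonstration:

```python
# Minimal sketch of PRISMA de-duplication: key on DOI where present,
# fall back to a case-folded title otherwise.
records = [
    {"doi": "10.1000/a", "title": "AI for intrusion detection"},
    {"doi": "10.1000/a", "title": "AI for intrusion detection"},  # duplicate
    {"doi": None, "title": "Agentic AI governance"},
]

seen, unique = set(), []
for r in records:
    key = r["doi"] or r["title"].casefold()
    if key not in seen:
        seen.add(key)
        unique.append(r)

print(f"{len(records)} records identified, {len(unique)} after de-duplication")
```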
3.5 Data Extraction Process
A structured data extraction framework was developed to ensure consistency and comparability across the selected studies. For each article, the following information was recorded:
author(s) and year of publication
research methodology and design
cybersecurity domain (e.g., AI, SOC, cryptography)
key findings and contributions
identified limitations
relevance to thematic categories
This structured approach enabled systematic comparison across studies and facilitated the identification of recurring patterns and conceptual relationships. It also enhanced the auditability and transparency of the research process.
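As an illustration of how such an extraction framework can be held consistent across studies, the sketch below models one extraction record as a typed structure whose fields mirror the list above. The sample values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of the structured data extraction framework (fields mirror the list above)."""
    authors: str
    year: int
    methodology: str
    domain: str            # e.g. "AI", "SOC", "cryptography"
    key_findings: str
    limitations: str
    themes: list[str] = field(default_factory=list)  # relevance to thematic categories

# Hypothetical example record, for illustration only.
example = ExtractionRecord(
    authors="Mohamed",
    year=2025,
    methodology="empirical evaluation",
    domain="AI",
    key_findings="Deep learning outperforms rule-based detection on zero-day exploits.",
    limitations="Single-sector dataset",
    themes=["AI as a dual-use cybersecurity force"],
)
print(example.themes)
```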
3.6 Qualitative Thematic Analysis
The study employed thematic analysis following the six-phase framework proposed by Braun and Clarke (2006), adapted to support interpretive synthesis within a hybrid systematic–conceptual research design. This approach was selected for its flexibility in analysing interdisciplinary data and its suitability for generating higher-level conceptual insights from heterogeneous sources.
Within this study, thematic analysis is not used solely to organise findings, but to identify underlying structures, tensions, and relationships across technological, behavioural, and governance dimensions of cybersecurity. The objective is therefore not only to describe patterns in the literature, but to develop an integrated conceptual understanding of cybersecurity transformation.
Phase 1: Familiarisation with Data
All selected studies were read in full to develop a comprehensive understanding of key concepts, arguments, and emerging patterns. Particular attention was given to how different studies conceptualised:
AI capabilities and limitations
human vulnerability and behavioural risk
governance and regulatory frameworks
system-level security architectures
During this phase, preliminary observations were recorded to capture recurring ideas, conceptual tensions, and gaps between theoretical claims and practical implications.
Phase 2: Initial Coding
A systematic coding process was applied to extract analytically relevant features from the dataset. Coding was both deductive, guided by the study’s research aims, and inductive, allowing new patterns to emerge from the data.
Codes included, but were not limited to:
AI as defensive and offensive capability
adversarial machine learning risks
GenAI-enabled social engineering
agentic autonomy and decision-making
human–AI interaction in security contexts
regulatory fragmentation and compliance complexity
cyber resilience and recovery-oriented models
post-quantum cryptographic risk
Coding focused not only on identifying topics, but on capturing relationships between concepts, particularly where studies implied shifts in assumptions about risk, control, or system behaviour.
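The deductive layer of this coding can be supported programmatically, as the minimal sketch below illustrates by mapping keyword cues in abstracts to candidate codes. The codebook and abstract are hypothetical, and in practice code assignment remained an interpretive, manual judgement.

```python
# Illustrative deductive coding pass: map keyword cues in abstracts to candidate codes.
# The codebook and abstract text are hypothetical; interpretation remains manual.

CODEBOOK = {
    "AI as defensive and offensive capability": ["dual-use", "offensive ai", "defensive ai"],
    "GenAI-enabled social engineering": ["phishing", "deepfake", "social engineering"],
    "post-quantum cryptographic risk": ["post-quantum", "pqc", "shor"],
}

def code_abstract(text: str) -> list[str]:
    """Return candidate codes whose keyword cues appear in the abstract."""
    lowered = text.lower()
    return [code for code, cues in CODEBOOK.items()
            if any(cue in lowered for cue in cues)]

abstract = "We study deepfake-enabled phishing and dual-use AI in SOC workflows."
print(code_abstract(abstract))
# ['AI as defensive and offensive capability', 'GenAI-enabled social engineering']
```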
Phase 3: Theme Generation
Codes were iteratively grouped into broader thematic categories representing structural trends within the cybersecurity landscape. This process moved from descriptive grouping toward analytical abstraction, identifying clusters of concepts that reflected deeper transformations.
Five core themes were generated:
AI as a dual-use cybersecurity force
Autonomous AI governance and agentic system risk
Human behavioural vulnerability in GenAI environments
Regulatory fragmentation and cyber resilience
Post-quantum cryptographic transition risk
These themes were selected not only for their frequency in the literature, but for their explanatory power in capturing systemic change.
Phase 4: Theme Review
Themes were critically reviewed and refined to ensure:
internal coherence within each theme
clear conceptual distinction between themes
alignment with the study’s overarching research objective
During this phase, overlaps between themes were examined to identify interdependencies and cross-cutting dynamics, rather than treating themes as isolated categories. This process revealed that the themes collectively reflect a broader shift in cybersecurity from static control models toward adaptive, interconnected systems.
Phase 5: Theme Definition and Conceptual Integration
Each theme was further defined and refined to articulate its analytical significance within the broader transformation of cybersecurity. At this stage, the analysis moved beyond categorisation toward conceptual integration, examining how the themes interact to produce systemic change.
This phase was critical in developing the study’s central theoretical contribution. The themes were interpreted collectively to reveal three underlying structural transformations:
the emergence of AI-driven capability amplification and asymmetry
the shift toward autonomy-driven and behaviour-based risk
the transition from deterministic control to probabilistic governance
These integrative insights form the basis for the study’s reconceptualisation of cybersecurity as a dynamic socio-technical governance system operating under uncertainty.
Phase 6: Reporting and Theoretical Synthesis
The final phase involved translating the thematic analysis into a structured narrative presented in the findings and discussion chapters. Themes were used as organising constructs, while the analysis emphasised:
relationships between themes
implications for cybersecurity theory and practice
alignment with broader technological and regulatory developments
Importantly, the reporting phase extends beyond descriptive synthesis to theoretical interpretation, positioning the findings within a wider conceptual framework that accounts for complexity, uncertainty, and the co-evolution of human and machine actors.
3.7 Validity and Reliability
Although qualitative research does not rely on statistical measures of validity, methodological rigour was ensured through several strategies:
Triangulation: use of multiple academic databases to reduce source bias
Transparency: clear documentation of inclusion and exclusion criteria
Consistency: structured data extraction and coding framework
Auditability: ability to trace analytical decisions from data to themes
These measures enhance the credibility, dependability, and confirmability of the findings. However, it is acknowledged that thematic analysis involves interpretive judgement, and complete objectivity cannot be assumed.
3.8 Ethical Considerations
This study is based exclusively on secondary data derived from published academic literature. As such, no human participants were involved, and formal ethical approval was not required.
Nevertheless, ethical research practice was maintained through:
accurate citation and attribution of all sources
avoidance of misrepresentation or selective reporting
reliance on peer-reviewed and credible academic material
3.9 Limitations of the Methodology
Several limitations are inherent in the chosen methodology.
First, the rapid evolution of AI and cybersecurity research means that findings may be subject to temporal limitations, as new developments may emerge after the completion of the review. Second, the reliance on published academic literature excludes industry insights that may not yet be formally documented but are highly relevant in practice.
Third, systematic reviews are susceptible to publication bias, as studies reporting positive or novel findings are more likely to be published. Finally, the limited availability of long-term empirical studies on agentic AI systems constrains the ability to assess real-world implementation outcomes.
Despite these limitations, the combined use of PRISMA-guided SLR and thematic analysis provides a robust and theoretically grounded approach for synthesising current knowledge in a rapidly evolving field.
3.10 Chapter Summary
This chapter has outlined the methodological framework used to investigate the transformation of cybersecurity in 2026. By combining a PRISMA-guided systematic literature review with qualitative thematic analysis, the study achieves both methodological rigour and interpretive depth.
The approach is well suited to exploring complex, interdisciplinary, and rapidly evolving phenomena, enabling the identification of key themes and conceptual relationships within the literature. The next chapter presents the findings derived from this analysis, structured around the five core themes identified through the thematic synthesis.
4. Research Findings
4.1 Introduction
This chapter presents the findings derived from the qualitative thematic analysis of peer-reviewed literature on cybersecurity trends in the context of artificial intelligence (AI), generative AI (GenAI), agentic systems, regulatory volatility, and post-quantum cryptography. The analysis followed Braun and Clarke’s (2006) six-phase framework and synthesised results from studies selected through a PRISMA-guided systematic literature review (Page et al., 2021).
Five overarching themes were identified:
AI as a dual-use cybersecurity force
Autonomous AI governance and agentic system risk
Human behavioural vulnerability in GenAI environments
Regulatory fragmentation and cyber resilience
Post-quantum cryptographic transition risk
These themes are interdependent and collectively indicate a structural transformation in cybersecurity practice, governance, and risk modelling.
4.2 Theme 1: AI as a dual-use cybersecurity force
4.2.1 Overview of findings
The literature consistently demonstrates that artificial intelligence functions as both a defensive and offensive mechanism within cybersecurity ecosystems. Defensive applications include intrusion detection, anomaly identification, malware classification, and automated threat response (Mohamed, 2025; Wairagade, 2025). However, the same technologies are increasingly leveraged by threat actors to enhance attack sophistication.
4.2.2 Defensive enhancement through AI
Across reviewed studies, AI is shown to significantly improve:
detection accuracy in high-volume environments
real-time anomaly identification
SOC alert prioritisation
predictive threat intelligence generation
Mohamed (2025) identifies that deep learning models outperform traditional rule-based systems in identifying zero-day exploits and polymorphic malware. Similarly, Wairagade (2025) reports that AI integration reduces analyst workload by automating repetitive triage tasks.
4.2.3 Offensive exploitation of AI
Conversely, artificial intelligence is increasingly being leveraged in offensive cyber operations, fundamentally altering the scale, sophistication, and accessibility of cyber threats. The literature indicates that AI has lowered traditional technical barriers to cybercrime, enabling more adaptive, targeted, and automated attack strategies (Uddin et al., 2025; Brundage et al., 2018).
Key applications include:
AI-generated phishing campaigns, which utilise natural language models to produce highly context-aware and linguistically convincing messages, significantly improving success rates compared to traditional phishing techniques
Automated malware generation and obfuscation, where AI systems assist in creating polymorphic code capable of evading signature-based detection systems
Scalable reconnaissance using large language models (LLMs), enabling the rapid collection, synthesis, and interpretation of publicly available data to identify vulnerabilities and potential targets
Synthetic identity creation using generative models, facilitating the fabrication of realistic but fictitious digital identities that can bypass basic verification and onboarding controls
Uddin et al. (2025) conceptualise this phenomenon as a “capability compression effect”, whereby advanced cyber offensive techniques—previously limited to highly skilled actors or organised groups—become increasingly accessible to low-skilled individuals through AI-enabled tools. This compression of capability not only expands the threat landscape but also accelerates the pace and scale at which cyberattacks can be executed, thereby increasing systemic exposure across digital financial infrastructures.
4.2.4 Synthesis
The dual-use nature of artificial intelligence introduces a fundamental security paradox, whereby the same technological capabilities that enhance defensive cybersecurity measures also simultaneously expand the sophistication, scale, and accessibility of offensive cyber operations (Brundage et al., 2018; Radanliev et al., 2020). This duality creates a structurally unstable equilibrium in which improvements in detection, automation, and resilience on the defensive side are mirrored by equivalent gains in adversarial capability.
On the defensive side, AI systems improve anomaly detection, threat intelligence processing, and automated incident response, enabling faster and more accurate identification of malicious activity. However, these same underlying technologies—particularly large language models and generative systems—can be repurposed to enhance phishing campaigns, automate reconnaissance, and generate adaptive malware, thereby lowering barriers to entry for cyber adversaries (Uddin et al., 2025; Conti et al., 2018).
This dynamic significantly intensifies the cyber threat landscape by increasing both the volume and sophistication of attacks, while simultaneously reducing the skill threshold required to execute them. As a result, cyber risk becomes more distributed, continuous, and difficult to attribute, placing additional strain on traditional perimeter-based security models.
Consequently, there is a growing need for adaptive defensive architectures that are capable of continuous learning, real-time response, and proactive threat anticipation. Such architectures increasingly rely on AI-driven monitoring, behavioural analytics, and automated response systems to match the speed and adaptability of emerging AI-enabled threats (ENISA, 2020; Radanliev et al., 2020). This marks a shift from static defence mechanisms toward dynamic, intelligence-driven cyber resilience frameworks.
4.3 Theme 2: Autonomous AI governance and agentic system risk
4.3.1 Emergence of agentic systems
A key finding is the rapid emergence of agentic AI systems, which are capable of executing multi-step tasks autonomously across digital environments. Unlike traditional machine learning models that produce isolated outputs, these systems can plan, reason, and act by interacting directly with tools, APIs, and external systems without continuous human intervention (Vinay, 2025; Wooldridge, 2009). This represents a shift from passive prediction models toward active, goal-directed computational agents embedded within operational workflows.
4.3.2 Risk characteristics of agentic AI
The deployment of agentic AI systems in operational and security-sensitive environments introduces a range of emerging risk dimensions that are increasingly discussed in the recent literature on large language model (LLM) agents and autonomous systems. A primary concern is the reduction of operational transparency, where multi-step reasoning and action chains produced by autonomous agents become difficult to interpret, trace, or audit in real time. This challenge is widely recognised in studies of LLM-based agent architectures, which highlight the opacity of intermediate reasoning steps and the difficulty of reconstructing decision pathways in tool-augmented systems (Xi et al., 2023). Such opacity complicates traditional security auditing and undermines explainability in high-stakes environments.
A second major risk dimension concerns goal misalignment, where autonomous agents optimise for specified objectives in ways that produce unintended or harmful outcomes due to incomplete, ambiguous, or poorly specified instructions. This aligns with broader findings in AI alignment research, which demonstrate that optimisation processes in large-scale models can lead to specification gaming and emergent misbehaviour when objective functions do not fully capture intended constraints (Weidinger et al., 2023; Bai et al., 2022). In agentic contexts, this risk is amplified by the ability of models to execute multi-step plans that extend beyond the immediate intent of the user.
In addition, permission escalation risks have been identified as a critical vulnerability in tool-using AI systems, particularly where agents interact with external APIs, databases, or operational infrastructure. Recent work on LLM tool-use highlights that unconstrained or poorly segmented access can lead to unintended privilege expansion, data exfiltration, or unsafe system manipulation (Schick et al., 2023; Patil et al., 2024). This has led to increased focus on sandboxed execution environments and structured access control mechanisms that restrict agent capabilities within predefined operational boundaries (Zhou et al., 2024).
A further emerging concern involves multi-agent cascading failures, where interactions between multiple autonomous systems produce compounding errors, feedback loops, or systemic instability across interconnected workflows. Recent research on multi-agent LLM systems and distributed autonomy suggests that coordination failures, misaligned incentives, and uncontrolled communication between agents can result in amplification of errors and unpredictable system-level behaviours (Xi et al., 2023; Sun et al., 2024). These dynamics are particularly relevant in cybersecurity and SOC automation contexts, where multiple agents may operate concurrently across interdependent tasks.
Building on these developments, recent conceptual work in AI security has begun to characterise these risks as “action-based vulnerabilities”, reflecting a shift in the threat landscape from passive data compromise to risks arising from autonomous system execution of real-world actions. This framing is consistent with emerging discussions in agentic AI safety literature, which emphasise that the primary security concern is no longer limited to model outputs or data leakage, but extends to the operational consequences of model-driven actions in external environments (Weidinger et al., 2023; Xi et al., 2023). Within this context, Arora and Hastings (2025) argue that agentic systems fundamentally expand the cybersecurity attack surface by introducing new classes of vulnerabilities rooted in autonomous decision execution rather than static system compromise.
4.3.3 Governance frameworks
In response to the emerging risks associated with autonomous and agentic AI systems, recent literature has increasingly focused on the development of layered governance and control mechanisms designed to ensure safe, accountable, and controllable AI behaviour. A key strand of this work emphasises lifecycle-based governance approaches, in which security and oversight mechanisms are embedded across the entire AI system lifecycle, including development, deployment, and post-deployment monitoring. This perspective is reflected in the NIST AI Risk Management Framework, which formalises continuous risk assessment and governance throughout AI system lifecycles (National Institute of Standards and Technology, 2023), as well as in recent AI security literature highlighting the need for end-to-end assurance structures in machine learning systems (Sarker et al., 2024).
In parallel, research on large language model (LLM) agents has introduced the concept of structured tool-use governance, often implemented through constrained or segmented access to external systems. This includes mechanisms that restrict and formalise how models interact with APIs, databases, and external tools, thereby reducing the risk of uncontrolled or unsafe execution. Foundational work such as Toolformer demonstrates how models can be trained to selectively invoke tools under structured conditions (Schick et al., 2023), while ReAct further formalises the separation between reasoning and action within agentic workflows (Yao et al., 2023). More recent security-focused studies extend this by proposing permissioned tool-use architectures, where agent capabilities are explicitly bounded through access control and sandboxing mechanisms (Patil et al., 2024).
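A minimal sketch of such a permissioned tool-use architecture is given below, assuming a hypothetical agent runtime: tools are registered with explicit permission tiers, and invocation outside the agent's grant is refused. The tool names and tier values are illustrative rather than drawn from any specific framework.

```python
# Sketch of a permissioned tool-use layer for an LLM agent: tools are registered
# with an explicit permission tier, and invocation is denied outside the agent's
# grant. Names and tiers are illustrative, not from any specific framework.

from typing import Callable

TOOL_REGISTRY: dict[str, tuple[int, Callable[..., object]]] = {}

def register_tool(name: str, tier: int, fn: Callable[..., object]) -> None:
    """Register a callable tool with a required permission tier (higher = more privileged)."""
    TOOL_REGISTRY[name] = (tier, fn)

def invoke(agent_grant: int, name: str, *args, **kwargs):
    """Invoke a tool only if it is registered and within the agent's permission grant."""
    if name not in TOOL_REGISTRY:
        raise PermissionError(f"tool '{name}' is not registered")
    tier, fn = TOOL_REGISTRY[name]
    if tier > agent_grant:
        raise PermissionError(f"tool '{name}' requires tier {tier}, agent has {agent_grant}")
    return fn(*args, **kwargs)

# Hypothetical tools: a low-risk lookup and a high-risk write operation.
register_tool("search_docs", tier=1, fn=lambda q: f"results for {q!r}")
register_tool("delete_record", tier=3, fn=lambda rid: f"deleted {rid}")

print(invoke(agent_grant=1, name="search_docs", q="NIS2"))   # allowed
try:
    invoke(agent_grant=1, name="delete_record", rid="42")    # denied
except PermissionError as exc:
    print("blocked:", exc)
```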
Another significant development in the literature is the use of continuous behavioural monitoring for deployed AI agents. This involves real-time observation of model outputs, decision trajectories, and interaction patterns to detect anomalies, drift, or unsafe behaviours during execution. Such approaches build on broader work in machine learning operations (MLOps) and runtime assurance, where continuous monitoring is used to maintain system reliability under distributional shift (Sun et al., 2024). In the context of LLM systems, recent studies have further highlighted the importance of auditability and behavioural logging to detect emergent risks in autonomous agent workflows.
Complementing these approaches, policy-constrained execution environments have emerged as a key mechanism for enforcing safety and regulatory alignment in autonomous systems. These frameworks embed explicit operational constraints within the execution layer of AI agents, ensuring that model actions remain within predefined behavioural, ethical, or regulatory boundaries. This aligns with recent work on AI alignment and constitutional AI, which proposes the use of rule-based or principle-based constraints to guide model behaviour (Bai et al., 2022; Anthropic, 2023). In security-oriented implementations, such constraints are often operationalised through sandboxed execution environments and guardrail systems that restrict system-level actions in real time (Zhou et al., 2024).
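The sketch below combines the last two ideas, continuous behavioural monitoring and policy-constrained execution, in a hypothetical runtime monitor that logs every agent action, flags bursts of activity, and blocks one illustrative disallowed action sequence. All thresholds, action names, and the policy rule are assumptions made for illustration.

```python
# Illustrative runtime monitor for a deployed agent: log every action, flag
# bursts of activity, and flag one disallowed action sequence. All thresholds,
# action names, and the policy rule are hypothetical.

import time
from collections import deque

class AgentMonitor:
    def __init__(self, max_actions_per_minute: int = 30):
        self.max_rate = max_actions_per_minute
        self.window: deque[float] = deque()          # timestamps in the last minute
        self.audit_log: list[tuple[float, str]] = []  # behavioural log for later audit
        self.last_action: str | None = None

    def record(self, action: str) -> list[str]:
        """Log an action and return any alerts raised by runtime policy checks."""
        now = time.time()
        self.audit_log.append((now, action))
        self.window.append(now)
        while self.window and now - self.window[0] > 60:
            self.window.popleft()

        alerts = []
        if len(self.window) > self.max_rate:
            alerts.append("rate spike: possible runaway agent loop")
        # Hypothetical policy: reading credentials then calling an external API is disallowed.
        if self.last_action == "read_credentials" and action == "call_external_api":
            alerts.append("policy violation: credential read followed by external call")
        self.last_action = action
        return alerts

monitor = AgentMonitor()
for step in ["search_docs", "read_credentials", "call_external_api"]:
    for alert in monitor.record(step):
        print(f"ALERT during {step}: {alert}")
```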
Building on these developments, recent conceptual work on AI governance architectures has increasingly moved toward modular separation of responsibilities across model capability, control infrastructure, and policy enforcement layers. This reflects a broader shift in the literature toward decomposed governance systems for agentic AI, in which autonomy is explicitly bounded through layered architectural constraints rather than implicit behavioural alignment alone (Xi et al., 2023; NIST, 2023). Within this context, emerging frameworks propose structured separations between model functionality, control mechanisms, and governance policies to enhance traceability, enforceability, and accountability in autonomous AI workflows, consistent with broader trends in layered AI safety engineering.
4.3.4 Synthesis
Overall, the emergence of agentic artificial intelligence represents a fundamental shift in cybersecurity and system governance, moving away from traditional system-centric protection models toward behavioural governance of autonomous digital actors. Recent literature on large language model (LLM) agents and autonomous systems highlights that security boundaries are increasingly defined by agent behaviour and interaction patterns rather than static system components or perimeter-based controls (Xi et al., 2023). This reflects a broader transition in AI security research, where the focus shifts from protecting isolated infrastructures to governing dynamic, goal-directed systems capable of independent action across digital environments.
This transformation necessitates a redefinition of core cybersecurity constructs, particularly identity and access management (IAM), operational trust assumptions, and control architectures. Traditional IAM frameworks, which assume stable human or machine identities with predefined roles and permissions, are increasingly challenged by agentic systems capable of adapting behaviour, invoking tools, and interacting with external services autonomously (Schick et al., 2023; Patil et al., 2024). As a result, recent work has argued for the development of dynamic, context-aware access control mechanisms that continuously evaluate agent behaviour rather than relying solely on static permission structures (Zhou et al., 2024).
Furthermore, trust in cyber-physical and socio-technical systems is increasingly conceptualised as an emergent property of interactions between human users, autonomous agents, and infrastructural components, rather than a fixed attribute of system components. AI alignment and governance literature emphasises that autonomous agents operating in open or semi-open environments introduce uncertainty into operational trust assumptions, particularly when systems are capable of multi-step planning and external tool use (Weidinger et al., 2023; Bai et al., 2022). This challenges conventional models of assurance that rely on deterministic system behaviour and predefined security boundaries.
Consequently, risk is no longer confined to static systems or isolated vulnerabilities but instead emerges from the evolving and interacting behaviours of autonomous agents embedded within organisational workflows. Recent studies on multi-agent systems and autonomous workflows suggest that system-level risk is increasingly shaped by interaction dynamics, feedback loops, and emergent behaviours that cannot be fully anticipated at design time (Xi et al., 2023; Sun et al., 2024). Within this context, cybersecurity governance is increasingly reframed as the continuous regulation of adaptive, goal-driven digital actors operating within complex socio-technical environments.
4.4 Theme 3: Human behavioural vulnerability in GenAI environments
4.4.1 Evolution of human-targeted attacks
Human factors remain a central driver of cybersecurity breaches, but the emergence of generative AI has significantly increased both the sophistication and scalability of human-targeted attacks. Traditional phishing and social engineering techniques are now enhanced through the use of generative systems that enable:
Linguistic personalisation, where messages are tailored to individual roles, communication styles, and organisational context
Real-time contextual adaptation, allowing attackers to dynamically adjust content based on current events, user behaviour, or organisational signals
Deepfake-enabled impersonation, where synthetic audio, video, and text are used to convincingly replicate trusted individuals or authority figures
Early research by Sheng et al. (2010) identified cognitive heuristics—such as trust, authority bias, and urgency—as key drivers of phishing susceptibility. However, more recent literature demonstrates that GenAI significantly amplifies these cognitive vulnerabilities by increasing the realism, relevance, and adaptability of deceptive content (Brundage et al., 2018; Verizon, 2024).
4.4.2 Breakdown of traditional awareness models
The findings indicate that traditional security awareness training is becoming increasingly insufficient in the context of GenAI-enhanced threat environments. This decline in effectiveness is driven by several factors:
Hyper-realistic attack content, which closely mimics legitimate communication formats and reduces detectability
Cognitive overload among employees, as the volume and sophistication of communications make manual scrutiny increasingly impractical
Continuous adaptation of attack strategies, where adversaries iteratively refine techniques using AI-generated feedback loops
Indistinguishability between legitimate and malicious communications, eroding the reliability of heuristic-based detection by end users
Mohamed (2025) argues that awareness training alone is no longer sufficient to mitigate these risks, as the underlying issue has shifted from user ignorance to systemic cognitive exploitation at scale.
4.4.3 Emergence of behavioural security models
In response to these limitations, organisations are increasingly shifting toward behaviourally oriented security models that embed protection mechanisms directly into user workflows and organisational culture. Key approaches include:
Security Behaviour and Culture Programmes (SBCPs), which aim to embed security-conscious decision-making into everyday organisational practices
Continuous behavioural reinforcement mechanisms, designed to shape user behaviour over time through repetition, feedback, and adaptive learning systems
Embedded security nudges within workflows, which provide real-time prompts or constraints to guide secure decision-making at the point of action
Real-time phishing simulation systems, which dynamically test and reinforce user awareness in response to evolving threat patterns
These approaches reflect a shift from static training models toward continuous behavioural governance frameworks, where security is maintained through ongoing interaction rather than one-off instruction.
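A minimal sketch of an embedded security nudge is given below: a hypothetical mail-client hook that prompts the user at the point of action when a message combines an external sender with urgency or payment cues. The heuristics, domain, and message fields are illustrative and deliberately simple; deployed systems would draw on far richer behavioural signals.

```python
# Illustrative workflow nudge: warn at the point of action when an inbound
# message combines an external sender with urgency or payment cues.
# Heuristics, domain, and fields are hypothetical and intentionally simple.

URGENCY_CUES = ("urgent", "immediately", "within 24 hours")
PAYMENT_CUES = ("invoice", "wire transfer", "gift card", "bank details")

def nudge_for(message: dict) -> str | None:
    """Return a nudge string if the message matches simple risk heuristics, else None."""
    body = message["body"].lower()
    external = not message["sender"].endswith("@example-corp.com")  # hypothetical domain
    urgent = any(cue in body for cue in URGENCY_CUES)
    payment = any(cue in body for cue in PAYMENT_CUES)
    if external and (urgent or payment):
        return ("This message is from an external sender and uses urgency or "
                "payment language. Verify the request through a known channel "
                "before acting.")
    return None

msg = {"sender": "ceo@examp1e-corp.com",
       "body": "Please process this invoice immediately."}
print(nudge_for(msg))
```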
4.4.4 Synthesis
Overall, the findings suggest that human vulnerability in cybersecurity is no longer primarily a function of knowledge deficiency. Instead, it should be understood as a cognitive exploitation surface, increasingly amplified by generative AI systems that can dynamically manipulate attention, trust, and decision-making processes at scale (Brundage et al., 2018; Mohamed, 2025). This represents a fundamental shift in the nature of human risk, requiring security models that address behavioural susceptibility as an adaptive and continuously evolving threat vector.
4.5 Theme 4: Regulatory fragmentation and cyber resilience
4.5.1 Regulatory complexity
The literature highlights a growing fragmentation in global cybersecurity and digital resilience regulation, reflecting an increasingly dense and multi-layered governance environment. Rather than a single coherent framework, organisations are required to navigate a combination of overlapping regional, sectoral, and technology-specific regimes, including:
The EU NIS2 Directive, which strengthens baseline cybersecurity obligations for essential and important entities across critical infrastructure sectors
The Digital Operational Resilience Act (DORA), which establishes harmonised ICT risk management requirements for financial institutions within the EU
The EU AI Act, which introduces risk-based governance requirements for artificial intelligence systems, including those deployed in security and operational contexts
Additional sector-specific compliance regimes, which impose further obligations depending on industry classification and operational geography
Collectively, these frameworks create a complex regulatory environment characterised by overlapping mandates, divergent reporting obligations, and occasionally inconsistent definitions of risk and compliance expectations (Bennett and Raab, 2020).
4.5.2 Impact on organisations
This increasing regulatory complexity has several significant operational and strategic implications for organisations operating in digital and cross-border environments:
Increased compliance overhead, as organisations must allocate greater resources to interpret, implement, and maintain alignment with multiple regulatory regimes
Duplication of reporting requirements, particularly where similar incidents or controls must be reported to different authorities under distinct formats and timelines
Inconsistent global security standards, which complicate the development of unified cybersecurity architectures across jurisdictions
Strategic uncertainty in cross-border operations, as organisations face difficulty in designing scalable governance models that remain compliant across all relevant regulatory environments
Bennett and Raab (2020) describe this condition as regulatory pluralism, in which multiple governance regimes operate in parallel, increasing complexity and uncertainty in digital system oversight and compliance management.
4.5.3 Rise of cyber resilience
In response to this fragmentation, organisations are increasingly shifting toward cyber resilience frameworks, which prioritise the ability to withstand, adapt to, and recover from cyber incidents rather than relying solely on preventive compliance controls. These frameworks emphasise:
Adaptability over static compliance, recognising that rigid rule adherence is insufficient in dynamic threat environments
Continuous monitoring and recovery capabilities, enabling rapid detection, response, and restoration of critical services
Operational continuity under attack conditions, ensuring that essential functions remain available even during active cyber incidents
Integration of governance, risk, and technology functions, aligning cybersecurity with broader organisational risk management and operational strategy
The NIST Cybersecurity Framework provides a foundational model for this approach (NIST, 2018), while more recent research highlights its increasing strategic importance in both regulatory and organisational contexts, particularly as cyber threats become more persistent and sophisticated (Wairagade, 2025).
4.5.4 Synthesis
Overall, the findings indicate a structural shift in cybersecurity governance from compliance-based security models, which emphasise adherence to regulatory requirements, toward resilience-based survivability models, which prioritise the sustained operation of systems under adverse and evolving threat conditions. This transition reflects a broader recognition that effective cybersecurity is no longer defined solely by regulatory compliance, but by the ability of organisations to maintain continuity, adapt dynamically, and recover rapidly in increasingly complex digital ecosystems.
4.6 Theme 5: Post-quantum cryptographic transition risk
4.6.1 Quantum threat landscape
The literature confirms that quantum computing represents a long-term but credible systemic threat to contemporary cryptographic infrastructures. The foundational work of Shor (1994) demonstrated that sufficiently powerful quantum computers could efficiently solve integer factorisation and discrete logarithm problems, thereby undermining widely deployed public-key cryptosystems such as RSA and elliptic curve cryptography (ECC).
While practical large-scale quantum computers remain under development, the theoretical implications have already prompted extensive concern within cybersecurity and financial infrastructure governance, given the long lifecycle of cryptographic dependencies embedded in critical systems.
4.6.2 Migration challenges
The transition toward post-quantum cryptography (PQC) is characterised by substantial technical and organisational complexity. The literature identifies several key challenges:
Long infrastructure replacement cycles, as cryptographic components are deeply embedded across legacy systems, applications, and protocols
Cryptographic dependency chains in enterprise systems, where multiple layers of software and hardware rely on shared cryptographic primitives, increasing migration complexity
Interoperability between classical and quantum-resistant systems, particularly during transitional phases where hybrid environments must operate securely and efficiently
Performance degradation in early PQC implementations, as many quantum-resistant algorithms introduce increased computational and storage overhead compared to classical equivalents
Bernstein et al. (2017) emphasise that cryptographic migration should be understood as a multi-decade transformation process, rather than a discrete technological upgrade, due to the systemic nature of cryptographic integration across digital infrastructure.
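One widely discussed transitional pattern is hybrid deployment, in which a classical key exchange and a quantum-resistant KEM are combined so that the session remains secure if either primitive survives. The sketch below shows only the combining step: the two shared-secret values are random stand-ins for real protocol outputs, and the HKDF-style derivation via HMAC-SHA256 is one common choice rather than a prescribed standard.

```python
# Hybrid key derivation sketch: combine a classical shared secret with a
# post-quantum KEM secret so the session key is safe if either primitive holds.
# The shared-secret values below are random stand-ins for real protocol outputs.

import hashlib
import hmac
import os

def hkdf_extract_expand(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal single-block HKDF (RFC 5869 style) using HMAC-SHA256."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()                    # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]   # expand (one block)

# Stand-ins: in practice these come from e.g. an ECDH exchange and a PQC KEM.
classical_shared_secret = os.urandom(32)
pqc_shared_secret = os.urandom(32)

session_key = hkdf_extract_expand(
    secret=classical_shared_secret + pqc_shared_secret,  # concatenate both secrets
    salt=b"hybrid-handshake-salt",
    info=b"session-key",
)
print(session_key.hex())
```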
4.6.3 Organisational readiness
Recent studies suggest that most organisations remain in the early stages of post-quantum cryptography readiness, with significant gaps in both technical visibility and strategic planning. Key deficiencies include:
Limited inventory of cryptographic assets, resulting in incomplete understanding of where vulnerable algorithms are deployed across systems and supply chains
Lack of formal migration roadmaps, reflecting uncertainty around prioritisation, sequencing, and implementation timelines
Absence of hybrid cryptographic deployment strategies, which are necessary to support gradual transition while maintaining backward compatibility and operational continuity
Uddin et al. (2025) argue that delayed adoption and insufficient preparedness significantly increase long-term systemic exposure, particularly as quantum capabilities mature and the “harvest now, decrypt later” risk model becomes more viable.
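As a concrete illustration of the inventory gap, the sketch below scans a directory of PEM certificates and flags quantum-vulnerable public-key algorithms. It assumes the third-party `cryptography` package is installed, and the directory path is hypothetical; a real inventory would also need to cover protocol configurations, code signing, and supply-chain dependencies.

```python
# Sketch of a cryptographic asset inventory pass: flag certificates whose
# public keys rely on quantum-vulnerable algorithms (RSA, ECC).
# Requires the third-party 'cryptography' package; the path is hypothetical.

from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def classify(cert: x509.Certificate) -> str:
    """Label the certificate's public-key algorithm for migration planning."""
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"quantum-vulnerable: RSA-{key.key_size}"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"quantum-vulnerable: ECC ({key.curve.name})"
    return f"review manually: {type(key).__name__}"

for pem_path in Path("/etc/pki/inventory").glob("*.pem"):  # hypothetical location
    cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
    print(f"{pem_path.name}: {classify(cert)}")
```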
4.6.4 Synthesis
Overall, post-quantum cryptography represents a latent but structurally significant systemic risk within global digital infrastructure. Although the precise timing of quantum capability breakthroughs remains uncertain, the long lead times required for cryptographic migration necessitate early and strategic intervention. This positions PQC not as a future-only concern, but as an immediate governance and risk management priority for organisations operating in highly digitised and security-critical environments.
4.7 Cross-theme synthesis
Across the five themes analysed in this chapter, the findings converge on three macro-level structural transformations that collectively redefine the contemporary cybersecurity and digital risk landscape.
1. Shift to autonomy-driven risk
Cybersecurity threats are increasingly shaped by autonomous and semi-autonomous AI systems, rather than solely by human-operated or static adversarial actors. This includes both offensive and defensive applications of AI, where machine-driven systems can independently generate, adapt, and execute attack strategies at scale (Brundage et al., 2018; Arora and Hastings, 2025). As a result, risk generation is becoming more dynamic, distributed, and less directly attributable to specific human agents.
2. Collapse of traditional human-centric assumptions
The distinction between human attackers and defenders is becoming increasingly blurred as both sides rely on AI augmentation. This dual reliance fundamentally disrupts traditional behavioural security models, which assume relatively stable human decision-making patterns. Instead, human interaction with systems is now mediated through generative tools, automation, and decision-support systems that amplify both capability and vulnerability (Sheng et al., 2010; Mohamed, 2025). Consequently, cybersecurity can no longer be understood as purely a human factors problem, but rather as a human–machine co-evolutionary system.
3. Transition from control to resilience
There is a clear shift away from static, perimeter-based control mechanisms toward adaptive and continuously evolving resilience architectures. Traditional prevention-oriented models are increasingly insufficient in environments characterised by real-time threats, AI-enabled adversaries, and regulatory fragmentation. Instead, organisations are prioritising systems capable of continuous monitoring, rapid adaptation, and operational continuity under persistent attack conditions (NIST, 2018; Wairagade, 2025). This reflects a broader reconceptualisation of cybersecurity as a dynamic capability rather than a fixed control state.
4.8 Chapter summary
This chapter has presented the findings of the thematic analysis, identifying five core themes that collectively define the cybersecurity and digital risk landscape in 2026. The results demonstrate that cybersecurity is undergoing a systemic transformation driven by AI autonomy, human behavioural exploitation, regulatory fragmentation, and emerging quantum cryptographic risk (Arner, Barberis and Buckley, 2017; Zetzsche et al., 2020).
Taken together, these developments indicate a shift in cybersecurity from a predominantly technical control discipline to a socio-technical governance system, in which technological, behavioural, and regulatory factors are increasingly interdependent.
The next chapter will interpret these findings in relation to the conceptual framework, examining their implications for cybersecurity governance, organisational strategy, and the evolving role of the Chief Information Security Officer (CISO).
5. Discussion of Findings
5.1 Introduction
This chapter interprets the findings presented in Chapter 4 in relation to the broader academic literature and conceptual foundations established in Chapters 1–3. Rather than reiterating descriptive themes, the focus here is on critical synthesis, theoretical implication, and conceptual advancement.
The analysis demonstrates that cybersecurity in 2026 is undergoing a structural transition driven by artificial intelligence (AI), agentic autonomy, regulatory fragmentation, and cryptographic disruption. These forces collectively challenge established assumptions about control, trust, and organisational security governance.
Three overarching meta-implications emerge:
Cybersecurity is shifting from deterministic control to probabilistic governance
AI is redefining both attacker capability and defender cognition
Cyber resilience is replacing prevention as the dominant security paradigm
5.2 Cybersecurity as a shift from deterministic control to probabilistic governance
Traditional cybersecurity models are grounded in deterministic assumptions: defined perimeters, known threat signatures, and predictable system behaviour. However, the findings demonstrate that these assumptions no longer hold in AI-augmented environments.
AI systems—particularly those driven by machine learning and GenAI—introduce non-deterministic outputs, where system behaviour is probabilistic rather than rule-based (Mohamed, 2025). This fundamentally disrupts classical security engineering, which assumes repeatability and verifiability of system states.
Agentic AI further intensifies this shift. As shown in Chapter 4, autonomous systems can execute multi-step actions across digital environments without continuous human oversight (Vinay, 2025). This introduces what can be conceptualised as “behavioural uncertainty at machine speed”, where outcomes cannot be fully predicted even when inputs and constraints are known.
From a theoretical standpoint, this aligns with complexity theory approaches to cybersecurity, where systems are viewed as adaptive and emergent rather than controllable entities. The implication is that cybersecurity governance must transition from control assurance to probabilistic risk management, where uncertainty is not eliminated but continuously managed.
5.3 AI as a Bidirectional Capability Amplifier
A central finding of this study is that artificial intelligence operates as a bidirectional capability amplifier, simultaneously enhancing both defensive and offensive cybersecurity operations. Rather than functioning as a neutral efficiency tool, AI actively reshapes the cyber threat landscape by accelerating capabilities on both sides of the security equation, thereby altering the structural balance between attackers and defenders.
5.3.1 Defensive amplification
On the defensive side, AI significantly strengthens cybersecurity operations by improving threat detection accuracy, enhancing Security Operations Centre (SOC) efficiency, enabling predictive threat intelligence, and supporting automated or semi-automated response mechanisms.
Machine learning and anomaly detection systems allow organisations to process large-scale telemetry data in real time, identifying deviations from baseline behaviour that would be difficult for human analysts to detect manually. Similarly, AI-assisted SOC workflows improve alert triage by prioritising incidents based on risk scoring, reducing analyst fatigue and improving response times.
Recent studies confirm that these capabilities produce measurable gains in operational efficiency and detection performance, particularly in environments characterised by high-volume log data and distributed infrastructure complexity (Mohamed, 2025; Wairagade, 2025). However, the literature also emphasises that these improvements remain dependent on continuous human oversight due to model uncertainty, data drift, and adversarial manipulation risks.
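To ground these claims, the sketch below trains an unsupervised anomaly detector over synthetic login-telemetry features. It assumes scikit-learn is available; the features, distributions, and contamination setting are fabricated for illustration rather than tuned detection parameters.

```python
# Unsupervised anomaly detection sketch over synthetic login telemetry.
# Assumes scikit-learn; features and data are fabricated for illustration.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per event: [login hour, MB transferred, failed attempts]
baseline = np.column_stack([
    rng.normal(10, 2, 500),    # logins cluster around business hours
    rng.normal(50, 15, 500),   # typical data-transfer volume
    rng.poisson(0.2, 500),     # occasional failed attempts
])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspicious = np.array([[3.0, 900.0, 7.0]])   # 3 a.m., large transfer, many failures
normal = np.array([[11.0, 45.0, 0.0]])
print(model.predict(suspicious))  # [-1] -> flagged as anomalous
print(model.predict(normal))      # [1]  -> consistent with baseline
```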
5.3.2 Offensive amplification
Conversely, AI significantly enhances offensive cyber capabilities by lowering technical barriers and increasing the scale, speed, and sophistication of attacks. This includes large-scale personalised phishing campaigns, automated malware generation, deepfake-enabled deception, and the democratisation of advanced attack techniques to low-skill actors.
Generative AI systems enable attackers to produce highly convincing and context-aware social engineering content, significantly increasing the success rate of deception-based attacks. At the same time, AI-assisted code generation tools reduce the expertise required to develop exploit chains or malicious scripts, effectively broadening the threat actor base.
Uddin et al. (2025) describe this phenomenon as capability compression, where AI reduces the skill threshold required to execute high-impact cyberattacks while simultaneously increasing their sophistication. This results in a widening operational gap between defensive preparedness and offensive accessibility.
5.3.3 Critical interpretation
The key implication is that AI does not merely create dual-use capability in a balanced manner; rather, it produces a form of asymmetric capability acceleration. In practice, attacker capability is often amplified more rapidly than defensive governance structures can adapt.
This asymmetry arises because defensive systems are constrained by organisational controls, regulatory compliance, safety requirements, and ethical constraints, whereas offensive applications of AI operate with significantly fewer restrictions. As a result, AI introduces a structural imbalance in the cybersecurity ecosystem, where innovation cycles favour adversarial adaptation over institutional response.
5.4 The Collapse of Traditional Human-Centric Security Assumptions
A major conceptual finding of this study is that traditional human-centric cybersecurity models are becoming increasingly unstable under GenAI-driven conditions. Historically, cybersecurity frameworks have assumed that human users represent the weakest link in security systems, and that targeted training and awareness programmes can meaningfully reduce risk exposure.
However, the emergence of GenAI fundamentally challenges these assumptions by reshaping the nature of human vulnerability and attack execution.
5.4.1 Cognitive overload and perceptual indistinguishability
Generative AI enables attackers to produce communication that is linguistically fluent, context-aware, dynamically personalised, and highly consistent with legitimate organisational communication patterns. As a result, the traditional reliance on heuristic detection mechanisms—such as linguistic irregularities, tone inconsistencies, or formatting anomalies—becomes increasingly ineffective.
This leads to perceptual indistinguishability, where malicious and legitimate communications become functionally identical from a human cognitive perspective. Foundational phishing research showed that users frequently fail to identify deceptive content even when explicitly warned about it (Sheng et al., 2010), and GenAI-generated content compounds this limitation by removing the surface cues on which such warnings rely.
5.4.2 From knowledge deficit to cognitive exploitation
The literature increasingly suggests a fundamental shift in the nature of human cybersecurity vulnerability. Rather than being primarily driven by lack of awareness or training, vulnerability is increasingly a product of systematic cognitive exploitation enabled by AI systems.
This reframes cybersecurity risk as a neuro-cognitive and behavioural manipulation problem, where attackers optimise persuasion strategies using AI-generated content tailored to individual psychological profiles, behavioural patterns, and contextual triggers. In this model, human decision-making is not merely uninformed but actively targeted through adaptive influence mechanisms.
5.4.3 Implications for organisational strategy
Security Behaviour and Culture Programmes (SBCPs) represent an initial organisational response to these challenges, shifting focus from one-off training interventions to continuous behavioural reinforcement.
However, the literature suggests that SBCPs alone are insufficient in GenAI environments unless supplemented with real-time adaptive security systems embedded directly within workflows. This includes contextual threat warnings, behavioural anomaly detection, and continuous feedback mechanisms that dynamically adjust user interactions based on evolving risk conditions.
5.5 Agentic AI and the Breakdown of Traditional Governance Boundaries
One of the most significant structural disruptions identified in this study is the emergence of agentic AI systems operating as autonomous digital actors within enterprise environments. These systems introduce decision-making autonomy that extends beyond traditional software automation, fundamentally challenging existing cybersecurity governance structures.
5.5.1 Breakdown of IAM assumptions
Traditional Identity and Access Management (IAM) frameworks are built on several core assumptions: static identities, human-controlled authentication, and predictable access patterns. Agentic AI systems violate all three assumptions.
These systems can generate dynamic identities, execute autonomous actions across APIs and enterprise tools, and adapt their behaviour based on contextual goals or environmental feedback. This results in what can be described as fluid identity boundaries, where identity becomes an emergent and continuously evolving property rather than a fixed attribute.
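One way to approximate the dynamic, context-aware access control this implies is sketched below: each agent request is scored against behavioural context at decision time rather than against a static role. The signals, weights, and threshold are hypothetical.

```python
# Sketch of context-aware access control for an autonomous agent: each request
# is scored at decision time from behavioural signals instead of a static role.
# Signals, weights, and the threshold are hypothetical.

def risk_score(request: dict) -> float:
    """Combine contextual signals into a 0..1 risk score (weights are illustrative)."""
    score = 0.0
    score += 0.4 if request["action"] in {"delete", "export"} else 0.1
    score += 0.3 if request["outside_change_window"] else 0.0
    score += 0.2 if request["new_tool_for_agent"] else 0.0
    score += 0.1 * min(request["recent_denials"], 3) / 3
    return min(score, 1.0)

def decide(request: dict, threshold: float = 0.5) -> str:
    score = risk_score(request)
    if score >= threshold:
        return f"DENY (escalate to human review), risk={score:.2f}"
    return f"ALLOW, risk={score:.2f}"

print(decide({"action": "read", "outside_change_window": False,
              "new_tool_for_agent": False, "recent_denials": 0}))
print(decide({"action": "export", "outside_change_window": True,
              "new_tool_for_agent": True, "recent_denials": 2}))
```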
5.5.2 Governance fragmentation
The literature identifies a significant governance gap between multiple layers of cybersecurity control, including model-level AI constraints, infrastructure-level security controls (such as IAM and network segmentation), and policy-level regulatory frameworks.
While recent studies propose layered governance architectures (Arora and Hastings, 2025; Suggu, 2025), empirical validation of these frameworks remains limited. In practice, these governance layers are often implemented independently, resulting in inconsistent enforcement and fragmented oversight.
5.5.3 Critical insight
The central issue is not the absence of cybersecurity controls, but the lack of inter-layer coherence. Agentic AI systems operate simultaneously across model, infrastructure, and policy layers, enabling them to exploit inconsistencies and gaps between governance domains.
This creates a new category of vulnerability: inter-layer governance failure, where security breakdowns occur not within individual systems but between them, due to misalignment in authority, visibility, and control mechanisms.
5.6 Cyber Resilience as a Dominant but Incomplete Paradigm
Cyber resilience has become the dominant conceptual framework in modern cybersecurity discourse, reflecting a shift away from prevention-centric models toward adaptive, recovery-oriented systems. However, this study identifies important conceptual and operational limitations in how resilience is currently implemented.
5.6.1 Strength of resilience frameworks
Frameworks such as those proposed by NIST (2018) and extended in recent literature (Wairagade, 2025) define resilience as an organisational capability that is adaptive, continuous, system-wide, and recovery-oriented.
This represents a necessary evolution in cybersecurity thinking, acknowledging that prevention alone is insufficient in complex and adversarial environments where breaches are inevitable.
5.6.2 Limitations of resilience discourse
Despite its conceptual strength, resilience suffers from three major limitations:
Operational ambiguity – resilience is often defined in abstract terms without consistent implementation standards.
Measurement difficulty – there is a lack of robust, quantifiable metrics for assessing resilience maturity.
Reactive bias – current implementations often prioritise recovery over proactive risk reduction.
As a result, resilience risks becoming a strategic narrative rather than an operationally enforceable capability unless translated into measurable governance structures and technical controls.
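Illustrating the measurement gap, the sketch below computes two commonly proposed resilience indicators, mean time to detect (MTTD) and mean time to recover (MTTR), from hypothetical incident timelines. Which indicators genuinely constitute resilience maturity remains an open question in the literature.

```python
# Sketch of quantifying resilience via incident timelines: mean time to detect
# (MTTD) and mean time to recover (MTTR). Incident data are hypothetical.

from datetime import datetime
from statistics import mean

incidents = [
    {"onset": datetime(2026, 1, 3, 2, 0), "detected": datetime(2026, 1, 3, 6, 30),
     "recovered": datetime(2026, 1, 3, 14, 0)},
    {"onset": datetime(2026, 2, 11, 9, 0), "detected": datetime(2026, 2, 11, 9, 45),
     "recovered": datetime(2026, 2, 11, 18, 15)},
]

def hours(delta) -> float:
    return delta.total_seconds() / 3600

mttd = mean(hours(i["detected"] - i["onset"]) for i in incidents)
mttr = mean(hours(i["recovered"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```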
5.7 Post-Quantum Cryptography: Deferred Urgency and Organisational Inertia
The findings reveal a persistent mismatch between the long-term severity of quantum computing threats and the short-term prioritisation within organisational cybersecurity strategies.
While Shor’s algorithm (Shor, 1994) establishes the theoretical vulnerability of current public-key cryptographic systems, and Bernstein et al. (2017) highlight the complexity of migration, organisational adoption of post-quantum cryptography (PQC) remains limited.
5.7.1 Structural inertia
Three forms of structural inertia explain delayed adoption:
Infrastructure dependency inertia – legacy systems embedded in critical operations
Cost and resource constraints – high transition and replacement costs
Uncertainty in quantum timelines – ambiguity regarding when threats become operational
5.7.2 Strategic paradox
This creates a strategic paradox: organisations are required to invest in PQC migration under conditions of uncertainty, balancing immediate operational priorities against low-probability but high-impact future risks.
This reflects a broader pattern in cybersecurity risk management, where existential but delayed threats are systematically deprioritised in favour of immediate operational concerns.
5.8 Integrated Discussion: Convergence of Five Systemic Forces
When synthesised, the five thematic areas identified in this study converge into a single systemic transformation of the cybersecurity landscape:
Intelligence acceleration – AI amplifies both offensive and defensive capabilities
Autonomy expansion – agentic systems introduce non-human decision-making actors
Cognitive destabilisation – GenAI undermines human reliability in security contexts
Governance fragmentation – regulatory and technical systems evolve asynchronously
Cryptographic uncertainty – quantum computing destabilises foundational trust systems
Collectively, these forces indicate that cybersecurity is transitioning toward a multi-layered adaptive ecosystem, characterised by continuous adversarial evolution, distributed autonomy, and systemic uncertainty.
5.9 Theoretical Contribution
This study makes three primary contributions to cybersecurity theory by reframing foundational assumptions about control, agency, and system boundaries in the context of AI-driven and agentic digital environments. Collectively, these contributions extend cybersecurity theory beyond traditional deterministic and perimeter-based paradigms toward a more adaptive, socio-technical understanding of security under uncertainty.
5.9.1 From control to probabilistic governance
A fundamental theoretical contribution of this study is the reconceptualisation of cybersecurity from a deterministic control model to a probabilistic governance paradigm. Traditional cybersecurity theory assumes that risk can be managed through layered technical controls, predefined rules, and enforceable policy structures that collectively produce predictable security outcomes.
However, the emergence of AI-driven systems, particularly those involving generative and agentic capabilities, introduces non-deterministic behaviours that cannot be fully anticipated or exhaustively specified. These systems operate within dynamic environments where inputs, outputs, and interactions evolve continuously, often in ways that are not fully observable or explainable.
As a result, cybersecurity outcomes must now be understood as probabilistic rather than deterministic, where even well-designed controls produce variable effectiveness depending on contextual conditions, adversarial adaptation, and system complexity. This shift requires a move toward governance models that prioritise uncertainty management, continuous risk calibration, and adaptive decision-making, rather than static assurance of security states.
In this framing, cybersecurity becomes less about achieving complete control and more about maintaining acceptable risk thresholds within an inherently unpredictable environment.
5.9.2 From human-centric to hybrid cognitive systems
A second theoretical contribution is the redefinition of cybersecurity agency from a human-centric model to a hybrid cognitive systems model, in which security outcomes emerge from the interaction between human operators and AI systems.
Traditional cybersecurity theory positions humans as either decision-makers or vulnerabilities within the system, often framing them as the weakest link in the security chain. However, this study demonstrates that such a dichotomy is increasingly inadequate in environments where AI systems actively participate in decision-making, threat detection, response orchestration, and even attack generation.
In AI-augmented environments, security decisions are no longer the product of human cognition alone but are instead co-produced through distributed cognitive processes involving human analysts, machine learning models, automated agents, and decision-support systems. These systems operate collaboratively, with each component contributing partial, context-dependent interpretations of risk.
This creates what can be conceptualised as a hybrid cognitive security architecture, where intelligence is distributed across human and machine actors. Within this model, security effectiveness depends not only on individual human judgement or algorithmic accuracy but on the quality of interaction, alignment, and feedback loops between human and AI components.
Consequently, cybersecurity theory must move beyond human-centred assumptions and instead account for co-adaptive human–AI systems, where cognitive authority is shared, dynamic, and context-dependent.
5.9.3 From perimeter defence to inter-layer governance
The third theoretical contribution of this study is the shift from perimeter-based security models to a framework of inter-layer governance, where security failures are understood as emerging from misalignment between governance layers rather than breaches at system boundaries.
Traditional cybersecurity models are built around the concept of a defendable perimeter, assuming that threats originate externally and can be mitigated through layered technical defences such as firewalls, intrusion detection systems, and network segmentation. However, the increasing integration of cloud infrastructure, AI systems, autonomous agents, and distributed digital ecosystems has rendered the notion of a stable perimeter increasingly obsolete.
Instead, modern cybersecurity environments are characterised by multiple overlapping governance layers, including AI model governance, infrastructure-level controls, application-level security, and organisational policy frameworks. These layers are often developed and managed independently, resulting in inconsistent enforcement, fragmented visibility, and asynchronous policy application.
This study introduces the concept of inter-layer governance failure, where security vulnerabilities arise not from the failure of any single control system, but from the lack of coherence, alignment, and interoperability between multiple governance domains. In such environments, threats can propagate across layers by exploiting gaps between policy intent, technical implementation, and system behaviour.
Accordingly, cybersecurity theory must evolve to prioritise cross-layer integration, governance coherence, and systemic alignment, rather than focusing solely on strengthening individual defensive perimeters.
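The following minimal sketch illustrates the inter-layer governance failure construct: it compares a declared policy intent against a hypothetical enforcement layer and flags services where the two layers diverge. All layer, service, and policy names are invented for illustration.

```python
# Hedged sketch of an inter-layer coherence check: a declared policy intent
# is compared with a (hypothetical) enforced rule set, and gaps between the
# two governance layers are flagged. Names and policies are illustrative.

policy_layer = {"payments-api": "internal-only", "public-site": "public"}

firewall_layer = {                # what is actually enforced at the network layer
    "payments-api": "public",     # misaligned with the declared policy intent
    "public-site": "public",
}

def coherence_gaps(policy: dict[str, str], enforced: dict[str, str]) -> list[str]:
    """Return services where the layers disagree -- an inter-layer governance
    failure in this chapter's terms, rather than a perimeter breach."""
    return [svc for svc, intent in policy.items() if enforced.get(svc) != intent]

for svc in coherence_gaps(policy_layer, firewall_layer):
    print(f"inter-layer misalignment: {svc} (policy intent != enforcement)")
```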
5.9.4 Synthesis of theoretical contribution
Taken together, these three contributions redefine cybersecurity as a discipline operating under conditions of systemic uncertainty, distributed cognition, and fragmented governance. The study demonstrates that contemporary cybersecurity challenges cannot be adequately addressed through isolated technical solutions or linear risk models.
Instead, effective cybersecurity theory must account for:
probabilistic system behaviour under AI influence
distributed human–machine cognition
multi-layer governance interactions within complex socio-technical ecosystems
This reconceptualisation provides a foundation for future research into adaptive cybersecurity systems capable of operating effectively in environments characterised by autonomy, complexity, and continuous adversarial evolution.
5.10 Chapter summary
This chapter critically interpreted the thematic findings and demonstrated that cybersecurity in 2026 is defined by systemic transformation rather than incremental change. AI, agentic systems, human behavioural exploitation, regulatory fragmentation, and post-quantum cryptographic risk collectively reshape cybersecurity into a dynamic, probabilistic, and multi-agent governance challenge.
The next chapter concludes the study by summarising key findings, discussing practical implications for CISOs and organisations, and proposing directions for future research.
6. Conclusion and Recommendations
6.1 Introduction
This final chapter consolidates the key findings of the study, answers the overarching research aim, and presents actionable recommendations for organisations and cybersecurity leaders. It also outlines the theoretical, practical, and policy implications of the research, followed by a discussion of study limitations and directions for future research.
Across the preceding chapters, the evidence demonstrates that cybersecurity in 2026 is undergoing a structural transformation driven by artificial intelligence (AI), agentic autonomy, GenAI-enabled threats, regulatory fragmentation, and post-quantum cryptographic disruption. These forces collectively redefine how cyber risk is created, governed, and mitigated.
6.2 Summary of key findings
The study identified five dominant and interrelated thematic shifts:
1. AI as a dual-use force
AI simultaneously strengthens defensive cybersecurity capabilities (e.g., anomaly detection, SOC automation) while enabling scalable offensive cybercrime (e.g., phishing, malware generation). This creates a persistent asymmetry in which attacker innovation often outpaces defensive governance (Mohamed, 2025; Uddin et al., 2025).
2. Emergence of agentic AI and autonomy risk
Agentic systems introduce autonomous decision-making into enterprise environments, fundamentally disrupting traditional IAM and security governance models. These systems create “action-based vulnerabilities” that extend beyond conventional data-centric threats (Vinay, 2025; Arora and Hastings, 2025).
3. Collapse of traditional human security assumptions
GenAI undermines traditional security awareness models by enabling highly convincing, adaptive, and context-aware social engineering. Human vulnerability is increasingly a function of cognitive exploitation rather than knowledge deficiency (Sheng et al., 2010).
4. Regulatory fragmentation and shift to resilience
Global cybersecurity governance is becoming increasingly fragmented, driving organisations toward cyber resilience models focused on adaptability, recovery, and continuity rather than prevention alone (NIST, 2018; Wairagade, 2025).
5. Post-quantum cryptography as a latent systemic risk
Quantum computing presents a long-term but structurally significant threat to modern cryptographic systems. Migration to post-quantum cryptography (PQC) is complex, slow, and requires early strategic planning despite uncertain timelines (Bernstein et al., 2017).
6.3 Answer to the research aim
The aim of this study was to critically analyse emerging cybersecurity trends in 2026, particularly in relation to AI-driven transformation, autonomous systems, regulatory volatility, and cryptographic evolution. The findings demonstrate that cybersecurity is no longer primarily a technical discipline focused on perimeter defence. Instead, it has become a socio-technical governance system characterised by continuous adaptation under uncertainty.
Specifically, the research confirms that:
cybersecurity is shifting from deterministic control to probabilistic risk governance
AI is simultaneously amplifying defensive and offensive capabilities
human behaviour is being redefined as a cognitive attack surface
governance systems are fragmented and increasingly resilience-focused
cryptographic systems face long-term structural disruption
Therefore, the central conclusion is that cybersecurity in 2026 is best understood as an adaptive, AI-augmented ecosystem rather than a static defensive framework.
6.4 Theoretical implications
This study contributes to cybersecurity theory in three key ways:
6.4.1 Transition from control to uncertainty governance
Traditional cybersecurity assumes systems can be controlled through layered defence mechanisms. This study demonstrates that AI and agentic systems introduce non-deterministic behaviours that require governance models based on uncertainty tolerance rather than control assurance.
6.4.2 Redefinition of human risk models
Human users are no longer simply “weak links” in security systems. Instead, they are targets of AI-enhanced cognitive manipulation systems, requiring a shift from training-based mitigation to behaviourally adaptive security ecosystems.
6.4.3 Emergence of inter-layer governance failure
Security failures increasingly occur not at the perimeter but between governance layers (AI models, infrastructure controls, and policy systems). This introduces a new theoretical construct where misalignment between layers becomes a primary source of vulnerability.
6.5 Practical implications
6.5.1 Implications for CISOs and security leadership
The findings suggest that the role of the Chief Information Security Officer (CISO) must evolve beyond traditional operational cybersecurity oversight into a strategic cyber resilience leadership function embedded within enterprise governance structures. This reflects broader shifts in the cybersecurity landscape, where risk is increasingly dynamic, AI-mediated, and distributed across interconnected digital ecosystems (NIST, 2018; Wairagade, 2025).
In this context, modern security leadership must incorporate several expanded responsibilities:
AI governance and model oversight, ensuring that AI systems are deployed safely, transparently, and in alignment with organisational and regulatory requirements, particularly in high-risk decision-making environments (Arner, Barberis and Buckley, 2017)
Enterprise-wide risk orchestration, moving beyond siloed security functions toward integrated management of cyber, operational, and financial risks across the organisation (Broeders and Prenio, 2018)
Regulatory alignment across jurisdictions, addressing the increasing complexity of fragmented and overlapping cybersecurity, data protection, and AI governance regimes in global operations (Bennett and Raab, 2020; Zetzsche et al., 2020)
Integration of cyber-physical and digital systems, recognising the convergence of IT, operational technology (OT), and digital infrastructure, which expands the attack surface and increases systemic interdependencies
As a result, security leadership can no longer function as a siloed technical discipline focused primarily on perimeter defence and incident response. Instead, it must operate as a strategic governance role, directly contributing to organisational resilience, regulatory compliance, and long-term digital transformation strategy (Arner, Barberis and Buckley, 2017; Wairagade, 2025).
6.5.2 Implications for SOC operations
Security Operations Centres (SOCs) are undergoing a fundamental transition from traditional monitoring-led environments toward AI-augmented, intelligence-driven defence operations. This reflects a broader shift in cybersecurity practice, where speed, scale, and adversarial sophistication increasingly exceed the limits of purely human-led analysis (ENISA, 2020; NIST, 2018).
This evolution is characterised by three major operational shifts:
From reactive alert processing to predictive AI-assisted defence, where machine learning models support early threat detection by identifying anomalies, behavioural deviations, and precursor signals of malicious activity (Radanliev et al., 2020)
From manual triage to hybrid human–AI decision systems, in which AI tools assist analysts by prioritising alerts, enriching context, and reducing cognitive load in high-volume environments
From static playbooks to adaptive response orchestration, enabling dynamically adjusted incident response strategies based on real-time threat intelligence and evolving attack patterns
However, the literature consistently highlights that full automation of SOC functions is not viable in high-assurance environments. AI systems introduce operational risks including hallucination in generative outputs, adversarial manipulation of detection models, and model drift, all of which can degrade performance and reliability over time (Brundage et al., 2018; ENISA, 2020). These limitations are particularly critical in security contexts where false negatives or misclassifications can have severe operational and financial consequences.
As a result, hybrid SOC architectures—combining automated detection and response capabilities with human analytical oversight—are widely regarded as the most resilient and effective operational model. This approach preserves the scalability advantages of AI while maintaining human accountability, contextual judgement, and adversarial reasoning capabilities that remain difficult to replicate computationally.
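A minimal sketch of this hybrid triage pattern is shown below, assuming the scikit-learn library: a model scores events at machine scale, while ambiguous scores are routed to human review. The features, thresholds, and triage bands are illustrative assumptions, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hedged sketch of hybrid human-AI triage: a model scores events at machine
# scale; clear cases are handled automatically and ambiguous or highly
# anomalous cases are escalated to an analyst. Features and thresholds are
# illustrative assumptions only.

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, size=(1000, 3))    # e.g. login rate, bytes out, failed auths
model = IsolationForest(random_state=0).fit(baseline)

new_events = np.vstack([rng.normal(0, 1, size=(5, 3)),
                        rng.normal(6, 1, size=(2, 3))])  # two injected anomalies
scores = model.decision_function(new_events)   # lower = more anomalous

AUTO_CLOSE, ESCALATE = 0.05, -0.05             # assumed triage bands
for i, s in enumerate(scores):
    if s > AUTO_CLOSE:
        verdict = "auto-closed by model"
    elif s < ESCALATE:
        verdict = "escalated to analyst (high anomaly)"
    else:
        verdict = "queued for human review (ambiguous)"
    print(f"event {i}: score={s:+.3f} -> {verdict}")
```

The design choice here mirrors the argument above: automation absorbs volume, while human judgement is reserved for the cases where model confidence is weakest.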
6.5.3 Implications for organisational training
Traditional security awareness training is increasingly insufficient in generative AI (GenAI) environments, where threats are highly personalised, adaptive, and difficult to distinguish from legitimate communications. As a result, organisations must move beyond periodic training sessions toward continuous, behaviourally embedded security development models (Sheng et al., 2010; Mohamed, 2025).
Key approaches include:
Continuous behavioural monitoring systems, which track user interactions to identify risky behaviours and provide ongoing feedback loops rather than one-off assessments
Embedded real-time security nudges, where contextual prompts are integrated directly into workflows to influence decision-making at the point of action
Adaptive phishing simulation frameworks, which dynamically evolve to reflect emerging GenAI-enabled attack patterns and user susceptibility profiles
Security Behaviour and Culture Programmes (SBCPs), which institutionalise security as a cultural and behavioural norm rather than a compliance obligation
This reflects a broader shift from knowledge transfer-based training models to behavioural conditioning and reinforcement systems, where security outcomes are shaped continuously through interaction, feedback, and organisational design (Mohamed, 2025).
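As a hedged illustration of the adaptive-simulation idea, the sketch below selects a phishing-simulation tier from a per-user susceptibility estimate and updates that estimate from observed behaviour rather than quiz scores. The tiers, update rule, and starting value are assumptions made for the example.

```python
# Minimal sketch of an adaptive phishing-simulation loop. Tier names, the
# update rule, and all numeric values are illustrative assumptions, not a
# documented framework. Difficulty rises as susceptibility falls, so more
# resilient users are stress-tested with harder, GenAI-style lures.

TIERS = ["generic", "contextual", "genai-personalised"]

def next_tier(susceptibility: float) -> str:
    """Pick a simulation tier from a 0-1 susceptibility estimate."""
    return TIERS[0] if susceptibility > 0.6 else TIERS[1] if susceptibility > 0.3 else TIERS[2]

def update(susceptibility: float, clicked: bool, rate: float = 0.2) -> float:
    """Exponentially weighted update: observed behaviour drives the profile."""
    return (1 - rate) * susceptibility + rate * (1.0 if clicked else 0.0)

score = 0.5                        # assumed starting estimate for a new user
for clicked in [True, False, False, False]:
    print(f"send {next_tier(score)} simulation; susceptibility={score:.2f}")
    score = update(score, clicked)
```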
6.5.4 Implications for governance and compliance
The increasing fragmentation of regulatory regimes and the rapid evolution of AI governance frameworks require organisations to adopt more structured and scalable compliance approaches. In this context, effective governance depends on the integration of regulatory intelligence into operational systems rather than relying on manual compliance processes (Bennett and Raab, 2020; Zetzsche et al., 2020).
Key requirements include:
Unified global compliance mapping frameworks, enabling organisations to systematically align overlapping regulatory obligations across jurisdictions
Automated compliance tracking systems, which leverage digital tools to continuously monitor adherence to regulatory requirements in real time
Cross-jurisdictional governance structures, designed to manage regulatory divergence and ensure consistent oversight across international operations
AI-specific risk policies aligned with emerging legislation, particularly in response to evolving frameworks governing algorithmic accountability, transparency, and risk classification
These developments reflect a shift toward technology-enabled compliance governance, where regulatory adherence is increasingly embedded within digital infrastructure and supported by automation and data-driven oversight (Arner, Barberis and Buckley, 2017; Zetzsche et al., 2020).
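A minimal sketch of such a unified mapping is given below: a single internal control is linked to obligations in several regimes, so one automated status check can be rolled up into per-regime gap reports. The control names and article references are hypothetical examples, not verified legal mappings.

```python
# Hedged sketch of a unified compliance-mapping structure: one internal
# control satisfies obligations across several regimes, so its status is
# checked once and reported everywhere. The regime articles and control
# names are invented for illustration, not legal advice.

CONTROL_TO_OBLIGATIONS = {
    "encrypt-data-at-rest": ["GDPR Art.32", "NIS2 Art.21", "DORA Art.9"],
    "incident-reporting-24h": ["NIS2 Art.23", "DORA Art.19"],
}

control_status = {"encrypt-data-at-rest": True, "incident-reporting-24h": False}

def compliance_report() -> dict[str, list[str]]:
    """Roll single control checks up into per-regime obligation gaps."""
    gaps: dict[str, list[str]] = {}
    for control, obligations in CONTROL_TO_OBLIGATIONS.items():
        if not control_status.get(control, False):
            for ob in obligations:
                gaps.setdefault(ob.split()[0], []).append(ob)
    return gaps

print(compliance_report())   # {'NIS2': ['NIS2 Art.23'], 'DORA': ['DORA Art.19']}
```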
6.5.5 Implications for cryptographic strategy
Organisations must begin immediate preparation for the transition to post-quantum cryptography (PQC), given the long lead times required for infrastructure migration and the systemic nature of cryptographic dependencies in modern digital systems (Bernstein, Buchmann and Dahmen, 2017; Shor, 1994).
Key strategic actions include:
Cryptographic asset inventory mapping, to identify all deployed cryptographic algorithms, dependencies, and vulnerable systems across the organisation
Hybrid classical–PQC deployment models, enabling gradual transition while maintaining interoperability and operational continuity during migration phases
Phased migration roadmaps, structured over multi-year horizons to reflect the complexity of enterprise-scale cryptographic change
Vendor and infrastructure readiness assessments, ensuring third-party systems and supply chains are aligned with emerging post-quantum standards
Given the structural complexity and long replacement cycles associated with cryptographic systems, early adoption is critical. Delayed transition significantly increases exposure to future quantum-enabled threats and may undermine long-term system integrity and trustworthiness (Bernstein, Buchmann and Dahmen, 2017).
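The hybrid classical–PQC deployment model can be sketched as follows, assuming the Python `cryptography` package: a classical X25519 exchange is combined with a post-quantum KEM secret through a key-derivation function, so the derived key remains safe if either component is broken. Because PQC library bindings vary across environments, the post-quantum share below is a clearly marked stub.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Hedged sketch of hybrid key derivation. The X25519 exchange is real; the
# post-quantum shared secret is a STUB (os.urandom) standing in for an
# ML-KEM/Kyber encapsulation, since PQC bindings differ by platform.

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())   # Bob derives the same value

pqc_secret = os.urandom(32)  # STUB: replace with a real ML-KEM shared secret

# Combine both shares: compromising one primitive alone does not expose the key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-pqc-demo",
).derive(classical_secret + pqc_secret)

print(f"hybrid session key: {session_key.hex()}")
```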
6.6 Strategic recommendations
Based on the findings of this study, a set of strategic recommendations is proposed to support organisations in adapting to the evolving cybersecurity, AI, and regulatory landscape. These recommendations reflect the convergence of AI autonomy, behavioural risk, regulatory fragmentation, and emerging cryptographic disruption identified across the preceding themes (Arner, Barberis and Buckley, 2017; NIST, 2018).
Recommendation 1: Establish AI governance frameworks
Organisations should implement formal AI governance structures to ensure safe, accountable, and transparent deployment of autonomous and semi-autonomous systems. This includes:
Agent identity management systems, ensuring traceability and accountability of autonomous digital actors
Model behaviour auditing, enabling continuous evaluation of AI decision-making patterns and outputs
Access control for autonomous agents, restricting tool, API, and system interactions based on defined permissions
AI lifecycle security monitoring, embedding oversight across development, deployment, and operational phases
These mechanisms align with emerging regulatory expectations around algorithmic accountability and risk governance in AI-enabled systems (Zetzsche et al., 2020).
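A minimal sketch of the access-control and traceability mechanisms, under assumed agent and tool names, is given below: every tool invocation is checked against an explicit allowlist and appended to an audit trail.

```python
import datetime

# Hedged sketch of permission-scoped tool access for an autonomous agent:
# each tool call is checked against an explicit allowlist and logged,
# supporting the traceability and bounded-permission goals above. Agent IDs,
# tool names, and the policy itself are illustrative assumptions.

AGENT_PERMISSIONS = {"triage-agent-01": {"read_logs", "enrich_ioc"}}
audit_trail: list[dict] = []

def invoke_tool(agent_id: str, tool: str) -> bool:
    """Allow the call only if it is within the agent's defined scope; log either way."""
    allowed = tool in AGENT_PERMISSIONS.get(agent_id, set())
    audit_trail.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id, "tool": tool,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

invoke_tool("triage-agent-01", "read_logs")        # permitted
invoke_tool("triage-agent-01", "disable_account")  # denied: outside scope
for entry in audit_trail:
    print(entry)
```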
Recommendation 2: Deploy hybrid human–AI SOC architectures
Security Operations Centres should adopt hybrid operational models that combine machine-scale detection capabilities with human analytical oversight. Specifically, organisations should:
Leverage AI for high-volume detection, correlation, and alert triage
Retain human oversight for contextual decision validation and adversarial reasoning
Implement structured escalation controls for autonomous responses, ensuring that automated actions remain bounded and auditable
This hybrid approach reflects best practice in balancing automation efficiency with operational resilience and risk control (ENISA, 2020; Radanliev et al., 2020).
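The escalation-control principle can be sketched as follows, with impact tiers and an autonomy ceiling that are illustrative assumptions: automated actions execute only up to a defined impact bound, beyond which explicit human approval is required.

```python
from enum import Enum

# Illustrative sketch of a structured escalation control: automated responses
# are bounded by impact tier, and anything above the autonomy ceiling is held
# for explicit human approval. Tiers, actions, and the ceiling are assumed.

class Impact(Enum):
    LOW = 1      # e.g. tag an alert
    MEDIUM = 2   # e.g. isolate one endpoint
    HIGH = 3     # e.g. block a network segment

AUTONOMY_CEILING = Impact.MEDIUM   # assumed organisational bound

def execute(action: str, impact: Impact, human_approved: bool = False) -> str:
    """Run bounded actions autonomously; hold higher-impact ones for approval."""
    if impact.value <= AUTONOMY_CEILING.value or human_approved:
        return f"executed: {action} ({impact.name})"
    return f"held for analyst approval: {action} ({impact.name})"

print(execute("quarantine endpoint", Impact.MEDIUM))
print(execute("block segment", Impact.HIGH))
print(execute("block segment", Impact.HIGH, human_approved=True))
```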
Recommendation 3: Transition to behavioural cybersecurity models
Organisations should replace static, periodic training approaches with continuous behavioural cybersecurity frameworks, including:
Continuous behavioural analytics, enabling real-time identification of risky user behaviours
Real-time phishing detection feedback loops, supporting immediate correction and reinforcement of secure actions
Embedded organisational security culture systems, integrating security prompts and reinforcement directly into workflows
This shift reflects the growing recognition that human vulnerability in GenAI environments is behavioural and adaptive rather than purely knowledge-based (Mohamed, 2025; Sheng et al., 2010).
Recommendation 4: Implement cyber resilience as a core operating model
Cyber resilience should be embedded as a foundational organisational capability spanning technical, operational, and governance domains. This includes integration across:
Enterprise risk management frameworks, ensuring cyber risk is treated as a core business risk
Incident response planning, with an emphasis on adaptability and rapid recovery
System design architecture, prioritising fault tolerance, redundancy, and continuous availability
Regulatory compliance strategy, aligning resilience objectives with evolving multi-jurisdictional requirements
This reflects the shift from compliance-based security to resilience-based operational survivability models (NIST, 2018; Wairagade, 2025).
Recommendation 5: Initiate post-quantum cryptography readiness programmes
Given the long lead times associated with cryptographic migration, organisations should begin post-quantum cryptography (PQC) preparedness initiatives immediately, including:
Cryptographic dependency mapping, to identify all vulnerable systems and embedded cryptographic algorithms
Pilot hybrid encryption deployments, enabling gradual integration of PQC alongside classical cryptographic systems
Long-term migration planning aligned with NIST standards, ensuring structured and standards-compliant transition pathways
Early preparation is essential due to the systemic nature of cryptographic infrastructure and the extended timeframe required for enterprise-wide migration (Bernstein, Buchmann and Dahmen, 2017; Shor, 1994).
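As a hedged first step toward cryptographic dependency mapping, the sketch below (assuming the Python `cryptography` package and an assumed certificate directory) inventories PEM certificates and flags quantum-vulnerable RSA and elliptic-curve keys. A real inventory would also need to cover protocols, libraries, firmware, and third-party systems.

```python
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# Hedged sketch of a first inventory pass: walk a directory of PEM
# certificates and flag public keys based on quantum-vulnerable algorithms.
# The "./certs" path is an assumption for the example.

def inventory(cert_dir: str) -> list[str]:
    findings = []
    for path in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(path.read_bytes())
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            findings.append(f"{path.name}: RSA-{key.key_size} (quantum-vulnerable)")
        elif isinstance(key, ec.EllipticCurvePublicKey):
            findings.append(f"{path.name}: ECC {key.curve.name} (quantum-vulnerable)")
    return findings

for line in inventory("./certs"):   # assumed location of exported certificates
    print(line)
```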
6.7 Limitations of the study
This research is subject to several limitations:
reliance on secondary data limits access to real-time industry-specific implementations
rapid evolution of AI cybersecurity means findings may require frequent updating
limited availability of large-scale empirical studies on agentic AI systems
potential publication bias toward positive AI cybersecurity outcomes
Despite these limitations, the PRISMA-guided systematic review combined with thematic analysis provides a robust and academically defensible synthesis.
6.8 Future research directions
Building on the findings of this study, several key directions for future research are identified to address current gaps and emerging challenges in AI-driven cybersecurity and digital risk governance:
Empirical validation of agentic AI governance frameworks in enterprise environments, focusing on real-world deployment constraints, control effectiveness, and operational risk outcomes
Real-world performance benchmarking of AI-driven SOC systems, including comparative evaluation of hybrid versus fully automated architectures under adversarial conditions
Longitudinal studies on GenAI-driven social engineering effectiveness, examining how attack sophistication, user susceptibility, and organisational defences evolve over time
Development of measurable cyber resilience metrics, enabling organisations to quantify resilience capacity rather than relying solely on compliance-based indicators
Practical implementation pathways for post-quantum cryptographic migration, including phased deployment strategies, cost modelling, and interoperability assessment
In addition, future research would benefit from interdisciplinary integration between cybersecurity, cognitive science, and AI safety, particularly in understanding human–AI interaction dynamics, adversarial machine learning, and behavioural manipulation in digital environments (Brundage et al., 2018; NIST, 2018).
6.9 Final conclusion
This paper has demonstrated that cybersecurity is undergoing a profound structural transformation driven by artificial intelligence, system autonomy, and increasing environmental uncertainty. The traditional perimeter-based model of defence is becoming progressively insufficient in contexts where both attackers and defenders are augmented by generative and agentic AI systems.
Across the themes analysed, the evidence indicates a clear shift toward adaptive cyber resilience ecosystems, in which governance mechanisms, technological systems, and human behaviour are increasingly interdependent and continuously evolving rather than static or siloed. Within this emerging paradigm, cybersecurity is no longer solely a technical function but a socio-technical governance system requiring integrated oversight, dynamic adaptation, and cross-domain coordination.
Organisations that fail to adapt to this structural shift risk becoming systemically vulnerable within an increasingly autonomous and AI-driven digital threat landscape, where the speed and complexity of adversarial activity may outpace traditional defensive and governance models.
References
Abbas, R. et al. (2023) Artificial Intelligence (AI) in Cybersecurity: A Socio-Technical Research Roadmap. The Alan Turing Institute.
Aldasoro, I., Gambacorta, L., Giudici, P. and Leach, T. (2023) ‘The drivers of cyber risk in financial institutions’, Journal of Financial Stability, 64, 101074.
Anbiaee, Z. et al. (2026) Security threat modeling for emerging AI-agent protocols: A comparative analysis of MCP, A2A, Agora, and ANP. arXiv preprint.
Arner, D.W., Barberis, J. and Buckley, R.P. (2017) ‘FinTech, RegTech and the reconceptualization of financial regulation’, Northwestern Journal of International Law & Business, 37(3), pp. 371–413.
Arora, S. and Hastings, J. (2025) ‘Securing Agentic AI Systems: A multilayer security framework’, arXiv preprint.
Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Kernion, J., Conerly, T., Elhage, N., Hatfield-Dodds, Z., Mann, B., Perez, E., Ramirez, J., Stiennon, N., Tran-Johnson, E. and Kaplan, J. (2022) Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv.
Bennett, C.J. and Raab, C.D. (2020) The Governance of Privacy: Policy Instruments in Global Perspective. 3rd edn. Cambridge, MA: MIT Press.
Bernstein, D.J., Lange, T. and Peters, C. (2017) ‘Post-quantum cryptography’, Nature, 549(7671), pp. 188–194.
Biggio, B. and Roli, F. (2018) ‘Wild patterns: Ten years after the rise of adversarial machine learning’, Pattern Recognition, 84, pp. 317–331.
Braun, V. and Clarke, V. (2006) ‘Using thematic analysis in psychology’, Qualitative Research in Psychology, 3(2), pp. 77–101.
Broeders, D. and Prenio, J. (2018) ‘Innovative technology in financial supervision (SupTech) – the experience of early users’, FSI Insights on policy implementation, No. 9, Bank for International Settlements.
Brundage, M. et al. (2018) ‘The malicious use of artificial intelligence: Forecasting, prevention, and mitigation’, arXiv preprint, arXiv:1802.07228.
Buczak, A.L. and Guven, E. (2016) ‘A Survey of Data Mining and Machine Learning Methods for Cyber Security Intrusion Detection’, IEEE Communications Surveys & Tutorials.
Conti, M., Dehghantanha, A., Franke, K. and Watson, S. (2018) ‘Internet of Things security and forensics: Challenges and opportunities’, Future Generation Computer Systems, 78, pp. 544–546.
Ehtesham, A. et al. (2025) A survey of agent interoperability protocols: MCP, ACP, A2A, and ANP. arXiv preprint.
ENISA (2020) Artificial intelligence cybersecurity challenges. European Union Agency for Cybersecurity.
Ernst, N.A. and Treude, C. (2026) ‘GenAI is no silver bullet for qualitative research in software engineering’, arXiv preprint.
Humayed, A., Lin, J., Li, F. and Luo, B. (2017) ‘Cyber-Physical Systems Security – A Survey’, IEEE Internet of Things Journal.
Mohamed, N. (2025) ‘Artificial intelligence and machine learning in cybersecurity: A deep dive into state-of-the-art techniques’, Knowledge and Information Systems, 67, pp. 6969–7055.
Mohsin, A. et al. (2025) ‘A unified framework for human–AI collaboration in Security Operations Centers with trusted autonomy’, arXiv preprint.
NIST (2018) Framework for Improving Critical Infrastructure Cybersecurity. National Institute of Standards and Technology.
Page, M.J. et al. (2021) ‘The PRISMA 2020 statement: an updated guideline for reporting systematic reviews’, BMJ, 372, n71.
Posey, C., Roberts, T.L. and Lowry, P.B. (2014) ‘The impact of organisational commitment on information security behaviour’, Journal of Management Information Systems, 31(4), pp. 122–151.
Radanliev, P., De Roure, D., Walton, R. and Van Kleek, M. (2020) ‘Artificial intelligence and cybersecurity: A systematic review’, IEEE Access, 8, pp. 141214–141245.
Sandhu, R. and Samarati, P. (1994) ‘Access control: principle and practice’, IEEE Communications Magazine, 32(9), pp. 40–48.
Sarker, I.H. (2021) ‘Machine Learning for Intelligent Data Analysis and Cybersecurity’, SN Computer Science.
Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L. and Goyal, N. (2023) Toolformer: Language models can teach themselves to use tools. arXiv preprint.
Sheng, S. et al. (2010) ‘Who falls for phish? A demographic analysis of phishing susceptibility’, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 373–382.
Shor, P.W. (1994) ‘Algorithms for quantum computation: discrete logarithms and factoring’, Proceedings 35th Annual Symposium on Foundations of Computer Science, pp. 124–134.
Singh, R. et al. (2025) ‘LLMs in the SOC: An empirical study of human-AI collaboration in Security Operations Centres’, arXiv preprint.
Sommer, R. and Paxson, V. (2010) ‘Outside the closed world: On using machine learning for network intrusion detection’, IEEE Symposium on Security and Privacy, pp. 305–316.
Sood, A.K., Zeadally, S. and Hong, E.K. (2025) ‘The paradigm of hallucinations in AI-driven cybersecurity systems: understanding taxonomy, classification outcomes, and mitigations’, Computers and Electrical Engineering, 124, Article 110307.
Srinivas, S. et al. (2025) ‘AI-augmented SOC: A survey of LLMs and agents for security automation’, Journal of Cybersecurity and Privacy, 5(4), 95.
Suggu, S.K. (2025) ‘Agentic AI Workflows in Cybersecurity: Opportunities, Challenges, and Governance via the MCP Model’, Journal of Information Systems Engineering and Management.
Uddin, M., Ali, M.H. and Hassan, M.K. (2020) ‘Cybersecurity risks in financial systems: A systematic review’, Journal of Cybersecurity.
Uddin, M. et al. (2025) ‘Generative AI revolution in cybersecurity’, Artificial Intelligence Review, 58.
Verizon (2024) Data Breach Investigations Report. Verizon Enterprise.
Vinay, V. (2025) ‘The evolution of agentic AI in cybersecurity’, arXiv preprint.
von Solms, R. and van Niekerk, J. (2013) ‘From information security to cyber security’, Computers & Security, 38, pp. 97–102.
Wairagade, A. (2025) ‘Strategic Management of AI-Powered Cybersecurity Systems: A Systematic Review’, Journal of Engineering Research and Reports, 27(8), pp. 54–64.
Waliullah, M., Hossain George, M.Z., Hasan, M.T., Alam, M.K., Munira, M.S.K. and Siddiqui, N.A. (2025) ‘Assessing the influence of cybersecurity threats and risks on digital banking: A systematic literature review’, arXiv preprint, arXiv:2503.22710.
Wooldridge, M. (2009) An Introduction to MultiAgent Systems. 2nd edn. Hoboken: Wiley.
Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K. and Cao, Y. (2023) ReAct: Synergizing reasoning and acting in language models. arXiv preprint.
Zetzsche, D.A., Buckley, R.P., Arner, D.W. and Barberis, J. (2020) ‘From fintech to techfin: The regulatory challenges of data-driven finance’, New York University Journal of Law & Business, 16(2), pp. 401–463.