Artificial Intelligence in Banking Supervision and Financial Institutions

This paper argues that the increasing integration of artificial intelligence into Swiss banking supervision and financial institutions is transforming prudential governance into a predictive, data-driven, and technologically mediated system that enhances supervisory capability while simultaneously creating significant challenges relating to accountability, explainability, operational resilience, and the preservation of human judgement within financial regulation.

Sanchez P.

5/15/2026

Abstract

This paper examines the transformation of banking supervision in Switzerland through the growing integration of artificial intelligence (AI), machine learning (ML), and supervisory technologies (SupTech). It analyses how AI is reshaping both prudential supervision and banking operations, with particular emphasis on the Swiss Financial Market Supervisory Authority (FINMA) and the broader implications of algorithmic governance for contemporary risk-based financial regulation.

Situated within the post-global financial crisis evolution of prudential supervision, the paper argues that AI is accelerating a structural transition from retrospective, compliance-oriented regulatory models towards predictive, data-driven, and continuously adaptive forms of governance (Arner, Barberis and Buckley, 2020; BIS, 2023). In this context, FINMA’s Data Innovation Lab represents a significant institutional innovation, reflecting regulators’ increasing reliance on advanced analytics, machine learning, and interdisciplinary technological expertise to manage the growing complexity, interconnectedness, and digitalisation of financial systems.

The analysis further demonstrates that AI adoption is not confined to supervisory authorities but is deeply embedded within banking institutions themselves. Swiss and international banks increasingly deploy machine learning systems across core operational functions, including credit assessment, fraud detection, anti-money laundering (AML) compliance, algorithmic trading, customer analytics, and operational risk management (Fuster et al., 2022). This parallel technological transformation produces a highly interconnected supervisory ecosystem in which regulators and regulated entities increasingly depend upon similar computational infrastructures, data architectures, and predictive systems. Consequently, AI generates new forms of institutional interdependence, technological opacity, and systemic vulnerability within contemporary financial governance.

Across the study, four interrelated governance challenges emerge as central to AI-enabled financial systems: explainability, accountability, data governance, and operational resilience. While AI systems substantially enhance analytical capability, efficiency, and early risk detection, they simultaneously generate significant legal, organisational, and epistemological concerns associated with model opacity, automation bias, cybersecurity vulnerabilities, third-party dependencies, and the diffusion of institutional responsibility (Barocas, Hardt and Narayanan, 2019; FSB, 2023). The paper demonstrates that these tensions are particularly acute within prudential supervision, where regulatory legitimacy depends upon transparency, proportionality, and the capacity to justify supervisory intervention.

The paper therefore argues that the governance challenges associated with AI cannot be resolved through technological optimisation alone. Effective AI governance in banking and supervision instead requires hybrid institutional frameworks in which algorithmic systems augment—but do not replace—human judgement, legal accountability, and prudential interpretation. Drawing on the Swiss case, the analysis demonstrates that the future of financial supervision depends upon maintaining a careful balance between technological capability and institutional legitimacy, particularly within increasingly automated and data-intensive regulatory environments.

Ultimately, the paper contributes to broader debates on algorithmic governance by demonstrating that AI should not be understood merely as a technical tool within financial regulation, but as a transformative force reshaping the epistemological, organisational, and legal foundations of prudential governance itself.

1. Introduction

The rapid advancement of artificial intelligence (AI), machine learning (ML), and advanced data analytics is transforming both the operational structure of financial institutions and the nature of banking supervision. Across global financial systems, AI technologies are increasingly embedded within core banking functions including credit assessment, fraud detection, anti-money laundering (AML) monitoring, algorithmic trading, customer analytics, and regulatory compliance (Fuster et al., 2022; BIS, 2024). Simultaneously, financial regulators are adopting supervisory technologies (SupTech) that utilise predictive analytics, natural language processing (NLP), anomaly detection, and automated monitoring systems to strengthen supervisory oversight and enhance institutional risk assessment (Arner, Barberis and Buckley, 2020).

These developments reflect a broader transformation in financial governance in which supervisory authority is becoming increasingly data-driven, predictive, and technologically mediated. Traditional banking supervision relied primarily on retrospective analysis, periodic reporting, and manual supervisory review. However, the growing complexity, interconnectedness, and digitalisation of financial systems have exposed limitations in conventional supervisory approaches, particularly following the global financial crisis of 2007–2008 and more recent episodes of banking instability associated with rapid digital information flows and depositor behaviour (FSB, 2022). In response, supervisory authorities increasingly seek to enhance regulatory responsiveness through AI-enabled analytical systems capable of processing large volumes of structured and unstructured financial data in near real time.

Within this evolving regulatory environment, Switzerland represents an important case study. The Swiss banking sector occupies a significant position within the global financial system and is characterised by high levels of international integration, systemic concentration, and regulatory sophistication. The Swiss Financial Market Supervisory Authority (FINMA) has progressively incorporated data analytics and AI-assisted supervisory mechanisms into its risk-based supervisory framework, including the establishment of the Data Innovation Lab and the development of AI-supported monitoring and analytical tools. These developments position Switzerland within a broader international movement towards technologically enhanced prudential supervision while simultaneously raising important legal, operational, and governance questions.

The increasing integration of AI into supervisory processes reflects a broader shift towards what scholars describe as algorithmic governance. Algorithmic governance refers to systems in which decision-making, monitoring, and institutional control are increasingly mediated through computational models, automated analysis, and predictive data systems (Yeung, 2018). Within banking supervision, this transformation is particularly significant because supervisory authority increasingly depends upon the capacity to interpret complex datasets, identify emerging systemic vulnerabilities, and anticipate institutional deterioration before risks materialise into financial instability. AI technologies therefore offer regulators substantial opportunities to strengthen early-warning capabilities, improve peer-group analysis, enhance operational efficiency, and support more proactive forms of prudential supervision (BIS, 2024).

However, the growing use of AI within banking supervision also introduces substantial governance challenges. Many advanced machine learning systems operate through opaque computational processes that are difficult to interpret, raising concerns regarding explainability, accountability, procedural legitimacy, and regulatory transparency (Barocas, Hardt and Narayanan, 2019). In prudential regulation, where supervisory decisions may have significant legal and economic consequences, excessive reliance on opaque algorithmic systems may undermine institutional trust and complicate supervisory accountability. Furthermore, AI-driven supervision creates new operational risks associated with cybersecurity, third-party technological dependencies, data governance, and model reliability.

These tensions are particularly important within risk-based supervisory frameworks. Risk-based supervision prioritises supervisory resources according to institutional size, systemic relevance, and risk exposure rather than relying solely on formal rule compliance (BCBS, 2015). The integration of AI into such frameworks potentially transforms not only the efficiency of supervision, but also the epistemological foundations of supervisory decision-making itself. Supervisory judgement increasingly relies upon predictive models, statistical approximations, and algorithmic risk assessments capable of shaping regulatory priorities and institutional intervention. Consequently, AI is not merely an operational tool within financial regulation; it is becoming part of the broader governance architecture through which financial systems are monitored and controlled.

At the same time, banks themselves are increasingly adopting AI systems across operational and strategic functions. Financial institutions utilise machine learning technologies to optimise lending decisions, improve fraud detection, automate compliance monitoring, personalise customer services, and strengthen operational efficiency. Research demonstrates that AI-based systems may significantly improve predictive accuracy within financial decision-making environments, particularly in areas such as credit risk assessment and transactional monitoring (Fuster et al., 2022). Nevertheless, these systems also create substantial legal and organisational challenges relating to data protection, discrimination, explainability, operational resilience, and governance accountability.

The simultaneous adoption of AI by both regulators and regulated institutions creates a complex and interdependent supervisory environment in which technological innovation and regulatory uncertainty evolve together. Supervisory authorities increasingly depend upon sophisticated technological infrastructures to oversee financial institutions that themselves rely upon highly complex AI-enabled systems. This convergence creates new forms of institutional dependency and raises broader questions regarding the future relationship between human judgement, algorithmic decision-making, and prudential governance.

Against this background, this paper examines how artificial intelligence is transforming risk-based banking supervision in Switzerland and analyses the governance challenges emerging from this transformation. The paper focuses particularly on FINMA’s evolving use of AI-enabled supervisory technologies and the broader implications of algorithmic governance within prudential supervision and banking operations.

The paper argues that AI is fundamentally reshaping banking supervision from a predominantly retrospective and compliance-oriented activity into a more predictive, data-driven, and technologically integrated form of governance. While AI-enhanced supervisory systems substantially improve analytical capacity, risk detection, and operational efficiency, they simultaneously create significant challenges relating to explainability, accountability, operational resilience, and regulatory legitimacy. The paper further argues that effective AI implementation within financial supervision requires hybrid governance frameworks in which technological systems augment—but do not replace—human supervisory judgement.

Methodologically, the study adopts a qualitative case study approach focused on the Swiss supervisory environment. The analysis combines doctrinal examination of regulatory frameworks with policy analysis of FINMA publications, international regulatory reports, and interdisciplinary academic literature relating to AI governance, financial regulation, and supervisory technology. Presentations and discussions from the 2025 Swiss Risk Association conference on the use of AI by FINMA and banks are incorporated as contextual material illustrating emerging supervisory practices and institutional perspectives within the Swiss financial sector.

The paper is structured in six parts. The first section examines the evolution of banking supervision and the emergence of risk-based and data-driven regulatory models. The second section analyses FINMA’s supervisory framework and the institutional role of the Data Innovation Lab within Swiss financial supervision. The third section evaluates the principal applications of AI within banking supervision, including predictive analytics, anomaly detection, alternative data monitoring, and document analysis. The fourth section examines the legal, organisational, and governance challenges associated with AI implementation within banks and supervisory authorities. The fifth section critically analyses the continuing importance of human oversight and the limitations of automation within prudential governance. The final section concludes by evaluating the broader implications of algorithmic governance for the future of banking supervision and financial stability.

2. Theoretical Framework and the Evolution of Risk-Based Supervision

The growing integration of artificial intelligence (AI), machine learning (ML), and advanced analytics into financial regulation reflects a broader transformation in the governance of contemporary financial systems. Banking supervision is increasingly evolving beyond traditional models based on retrospective reporting, periodic inspections, and rule-based compliance towards more continuous, predictive, and data-driven forms of oversight (Arner, Barberis and Buckley, 2020; BIS, 2023). Supervisory authorities now operate within highly digitalised financial environments characterised by rapidly expanding volumes of transactional, behavioural, and operational data that exceed the analytical capacity of conventional supervisory approaches (Bholat, 2015; BIS, 2021).

These developments have accelerated the emergence of supervisory technologies (SupTech), which utilise AI, machine learning, predictive analytics, and natural language processing (NLP) to strengthen prudential oversight and institutional risk assessment (Dias and Staschen, 2017; BIS, 2023). At the same time, they reflect a broader shift towards algorithmic governance in which institutional decision-making, monitoring, and regulatory intervention increasingly depend upon computational systems and automated forms of analysis (Yeung, 2018).

This chapter establishes the theoretical framework underpinning the paper by examining the relationship between algorithmic governance and the evolution of risk-based banking supervision. First, it analyses how AI-enabled systems are reshaping regulatory authority and supervisory knowledge within financial governance. Second, it examines the historical evolution of banking supervision from compliance-oriented regulation towards increasingly predictive and risk-sensitive supervisory models. Together, these perspectives provide the conceptual foundation for understanding FINMA’s growing use of AI-enabled supervisory technologies and the governance challenges emerging from technologically mediated prudential oversight.

2.1 Algorithmic Governance and Financial Regulation

The increasing use of AI within financial regulation forms part of a broader transformation commonly described as algorithmic governance or algorithmic regulation. Yeung (2018) defines algorithmic regulation as governance systems in which decision-making and behavioural control are increasingly mediated through automated data collection, algorithmic analysis, and continuous monitoring processes. Within such systems, computational models play an expanding role in identifying patterns, classifying behaviour, predicting risk, and shaping institutional responses.

In financial regulation, algorithmic governance emerges primarily from the increasing complexity and digitalisation of financial systems. Contemporary financial institutions generate enormous volumes of structured and unstructured information through market transactions, digital banking activity, customer interactions, regulatory reporting, online communications, and interconnected technological infrastructures (BIS, 2021). Traditional supervisory models based primarily on periodic reporting and manual analysis are increasingly insufficient for processing this information efficiently or identifying rapidly emerging systemic vulnerabilities (Bholat, 2015).

The growing adoption of AI technologies within supervisory authorities therefore reflects the need for enhanced analytical capacity capable of operating within increasingly complex financial ecosystems. Machine learning systems can identify anomalous institutional behaviour, detect hidden correlations, approximate risk indicators, and generate predictive assessments across large and multidimensional datasets (BIS, 2023). Natural language processing technologies further extend supervisory capability by enabling regulators to analyse unstructured textual information such as regulatory filings, audit reports, market commentary, and social media communications (European Central Bank, 2024).
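The kind of anomaly detection described above can be sketched with an off-the-shelf unsupervised model. The following is an illustrative example only: the indicator names, the synthetic figures, and the choice of scikit-learn's IsolationForest are assumptions for exposition, not a description of any supervisory system actually in use.

```python
# Illustrative sketch (not an actual supervisory tool): flagging an
# anomalous institution within a peer group using an unsupervised
# isolation forest over synthetic prudential indicators.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical indicators for 200 peer banks:
# [capital ratio, liquidity coverage ratio, loan growth, complaint rate]
peers = rng.normal(loc=[0.14, 1.3, 0.05, 0.01],
                   scale=[0.02, 0.15, 0.03, 0.005],
                   size=(200, 4))

# Inject one institution with thin capital and explosive loan growth.
outlier = np.array([[0.06, 0.9, 0.40, 0.04]])
data = np.vstack([peers, outlier])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(data)          # -1 = anomalous, 1 = normal

flagged = np.where(labels == -1)[0]
print("Flagged institutions:", flagged)
```

Because the injected institution deviates from the peer distribution across several dimensions simultaneously, it surfaces among the flagged indices; this multidimensional sensitivity is precisely what distinguishes such models from single-ratio threshold checks.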

Importantly, algorithmic governance involves more than the automation of existing supervisory practices. Rather, it alters the epistemological foundations of financial supervision itself. Supervisory knowledge increasingly depends upon computational systems capable of identifying statistical relationships and behavioural patterns that may not be directly observable through traditional human-centred analysis. In this sense, algorithmic systems shape not only how regulators process information, but also how institutional risk is conceptualised, prioritised, and governed.

This development aligns closely with broader transformations in contemporary governance associated with digitalisation, predictive analytics, and data-driven regulation. Zuboff (2019) argues that modern institutional systems increasingly depend upon large-scale behavioural data extraction and predictive modelling, while Rouvroy and Berns (2013) describe the emergence of “algorithmic governmentality” in which governance increasingly operates through anticipatory data analysis and behavioural prediction. Within banking supervision, these developments are particularly significant because prudential oversight fundamentally depends upon the ability to identify emerging vulnerabilities and manage uncertainty within highly interconnected financial systems.

The rise of algorithmic governance has therefore contributed to a broader transition from reactive supervision towards predictive governance. Traditional supervisory approaches primarily identified institutional weaknesses retrospectively through financial deterioration, compliance breaches, or crisis events. By contrast, AI-enhanced supervisory systems seek to identify early warning indicators before risks materialise into systemic instability (FSB, 2022). Predictive analytics, anomaly detection, stress-testing models, and peer-group comparisons increasingly allow supervisory authorities to anticipate institutional vulnerabilities and intervene more proactively.

However, the growing reliance on algorithmic systems also introduces substantial governance and legitimacy challenges. One major concern relates to explainability and transparency. Many advanced machine learning systems operate through highly complex computational processes that are difficult to interpret even for technical specialists (Barocas, Hardt and Narayanan, 2019). In prudential supervision, where regulatory interventions may have significant legal and economic consequences, limited explainability may undermine procedural fairness, accountability, and institutional legitimacy (OECD, 2024).

A second challenge concerns the concentration of epistemic authority within technological systems. Financial supervision increasingly depends upon algorithmic models, data infrastructures, and predictive analytics developed either internally or through third-party technological providers. This creates the risk that supervisory judgement becomes overly dependent on computational outputs and embedded modelling assumptions rather than broader prudential interpretation or contextual institutional analysis (Katzenbach and Ulbricht, 2019).

Algorithmic governance also raises concerns regarding surveillance, proportionality, and institutional power. Continuous monitoring systems may significantly expand supervisory visibility into institutional behaviour, potentially increasing regulatory capability while simultaneously raising questions concerning privacy, proportionality, and informational asymmetries between regulators and supervised institutions (Yeung, 2018). In banking systems characterised by extensive data collection and digital infrastructures, supervisory authorities increasingly possess the capacity to conduct near real-time institutional monitoring through AI-enabled analytical systems.

Furthermore, algorithmic governance creates new operational and systemic dependencies. Supervisory authorities increasingly rely upon cloud infrastructures, external technology vendors, data integration systems, and advanced analytical platforms to support supervisory activities. These dependencies introduce new risks relating to cybersecurity, operational resilience, concentration risk, and third-party governance (FSB, 2023). Consequently, regulators themselves become subject to many of the same technological vulnerabilities they seek to supervise within financial institutions.

The growing use of AI within financial supervision therefore represents not merely a technical development, but a broader institutional transformation affecting the nature of regulatory authority, prudential governance, and supervisory legitimacy within contemporary financial systems.

2.2 The Evolution of Risk-Based Banking Supervision

The emergence of AI-enabled supervision must also be understood within the historical evolution of banking regulation and prudential oversight. Banking supervision has historically evolved in response to recurring episodes of financial instability, institutional failure, technological change, and increasing market complexity (Goodhart, 2011). Earlier supervisory frameworks were primarily compliance-oriented and focused heavily on legal conformity, periodic inspections, and retrospective analysis of financial statements.

Traditional banking supervision relied on relatively static reporting systems in which supervisory authorities assessed solvency, liquidity, and capital adequacy through periodic regulatory submissions and on-site examinations (Avgouleas, 2009). These models reflected earlier banking systems that were comparatively less interconnected, less technologically intensive, and slower in operational tempo. Supervisory intervention generally occurred after weaknesses became visible through financial deterioration or regulatory non-compliance.

However, the rapid globalisation and digitalisation of financial markets exposed significant limitations within these conventional supervisory frameworks. Financial institutions became increasingly interconnected through global capital markets, derivative exposures, cross-border operations, and digital financial infrastructures. Simultaneously, financial innovation generated increasingly complex products and institutional structures that proved difficult to supervise using traditional methodologies alone (Avgouleas, 2009).

The global financial crisis of 2007–2008 fundamentally accelerated the transformation of prudential supervision. The crisis revealed severe deficiencies in existing supervisory systems, particularly the inability of regulators to identify systemic vulnerabilities and interconnected institutional risks before widespread market collapse occurred (BCBS, 2011). Formal compliance with regulatory requirements proved insufficient to ensure institutional resilience or systemic stability.

In response, regulators increasingly adopted risk-based supervisory frameworks designed to allocate supervisory attention according to institutional risk exposure, systemic significance, and operational complexity (BCBS, 2015). Risk-based supervision shifted the focus of prudential regulation away from narrow rule compliance towards broader assessment of governance quality, risk culture, operational resilience, strategic decision-making, and institutional interconnectedness.

This transformation significantly altered the philosophy of financial supervision. Supervisory authorities increasingly moved towards continuous and forward-looking assessment of institutional risk rather than purely retrospective evaluation of historical performance. Prudential supervision became increasingly concerned with identifying emerging vulnerabilities before they materialised into systemic crises (FSB, 2022).

The transition towards risk-based supervision also accelerated the demand for more sophisticated analytical capabilities. Effective risk-based supervision requires regulators to process large volumes of financial, operational, and behavioural data across multiple institutions simultaneously. This need contributed directly to the emergence of SupTech and AI-enabled supervisory systems capable of supporting continuous monitoring, anomaly detection, predictive modelling, and automated risk assessment (Dias and Staschen, 2017; BIS, 2023).

AI technologies are particularly well suited to risk-based supervision because they enable regulators to identify non-linear relationships and hidden behavioural patterns within complex datasets. Machine learning systems can support peer-group analysis, approximate institutional vulnerabilities, and identify deviations from sectoral norms that may indicate emerging prudential concerns (BIS, 2023). Predictive analytics also strengthen stress-testing methodologies by incorporating broader datasets and dynamic modelling assumptions into supervisory risk assessment.
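A minimal version of the peer-group deviation analysis mentioned above can be expressed with robust z-scores: each institution's indicators are compared against the sector median and scaled by the median absolute deviation. All bank names, figures, and the three-sigma threshold below are invented for illustration.

```python
# Illustrative peer-group deviation sketch (hypothetical data, not a
# supervisory model): score each bank's indicators against sector norms.
import numpy as np

banks = {
    "Bank A": [0.145, 1.32],
    "Bank B": [0.150, 1.28],
    "Bank C": [0.138, 1.35],
    "Bank D": [0.080, 0.95],   # thin capital, weak liquidity
    "Bank E": [0.142, 1.30],
}
names = list(banks)
X = np.array([banks[n] for n in names])   # columns: capital ratio, LCR

# Robust z-scores: deviation from the peer median, scaled by the MAD
# (the 1.4826 factor makes the MAD consistent with a normal std. dev.).
med = np.median(X, axis=0)
mad = np.median(np.abs(X - med), axis=0)
z = (X - med) / (1.4826 * mad)

# Flag any bank deviating more than 3 robust standard deviations
# on any indicator.
flags = {n: bool((np.abs(z[i]) > 3).any()) for i, n in enumerate(names)}
print(flags)
```

Using the median and MAD rather than the mean and standard deviation keeps the sector baseline from being dragged towards the very outlier the analysis is meant to detect.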

These developments reflect a broader shift from reactive supervision towards anticipatory and predictive governance. Contemporary supervisory systems increasingly seek to identify early warning indicators associated with liquidity stress, operational failure, reputational deterioration, and systemic instability before crises fully materialise. This predictive orientation became particularly significant following recent episodes of banking instability involving rapid shifts in depositor behaviour and digital information flows.

The collapse of Silicon Valley Bank in 2023 demonstrated how social media dynamics and digital banking infrastructures can accelerate institutional deterioration at unprecedented speed. Digital communication platforms enabled rapid dissemination of concerns regarding institutional solvency, contributing to accelerated depositor panic and liquidity stress. Such developments illustrate the growing importance of real-time monitoring, sentiment analysis, and predictive supervisory capability within increasingly digital financial systems (FSB, 2023).
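One simple ingredient of the real-time monitoring capability discussed above is spike detection over the volume of digital mentions of an institution. The sketch below is purely illustrative: the hourly series, window length, and threshold are invented, and production systems would combine such signals with sentiment classification and many other inputs.

```python
# Illustrative sketch of spike detection over a stream of hourly
# social-media mention counts for a bank (synthetic numbers, not an
# actual supervisory feed).
from statistics import mean, stdev

hourly_mentions = [12, 9, 14, 11, 10, 13, 8, 12, 11, 10, 95, 240]

def spike_hours(series, window=8, threshold=4.0):
    """Flag hours whose count exceeds the rolling baseline by
    `threshold` sample standard deviations."""
    flagged = []
    for t in range(window, len(series)):
        base = series[t - window:t]
        mu, sigma = mean(base), stdev(base)
        if sigma and series[t] > mu + threshold * sigma:
            flagged.append(t)
    return flagged

print(spike_hours(hourly_mentions))  # flags hours 10 and 11
```

Even this naive rolling baseline captures the qualitative point in the text: a deterioration that once unfolded over weeks of branch queues can now register within hours in digital communication volumes.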

Nevertheless, the increasing reliance on predictive systems also creates important conceptual and governance concerns. Machine learning models depend heavily upon historical data and statistical correlations, yet financial crises frequently involve structural breaks, behavioural shifts, and unprecedented market conditions that differ substantially from historical patterns (Taleb, 2007). Excessive reliance on predictive systems may therefore create forms of automation bias and false confidence in quantitative modelling.

Moreover, risk itself is not a purely objective or technical category. Supervisory definitions of institutional risk are shaped by regulatory priorities, political-economic assumptions, institutional cultures, and normative governance objectives. AI systems may therefore influence not only how risks are detected, but also how risks are constructed and prioritised within prudential governance frameworks (Power, 2004).

Consequently, while AI technologies significantly enhance supervisory analytical capacity, they do not eliminate the importance of human judgement, institutional interpretation, and normative decision-making within financial supervision. Prudential governance continues to depend fundamentally upon balancing technological capability with accountability, contextual understanding, and regulatory legitimacy.

2.3 Algorithmic Governance and the Transformation of Prudential Supervision

The convergence of algorithmic governance and risk-based supervision is fundamentally transforming the institutional character of financial regulation. Supervisory authorities increasingly operate as technologically integrated organisations combining legal authority and prudential expertise with data science, predictive analytics, and computational monitoring capabilities (BIS, 2021).

This transformation is particularly visible within regulators such as FINMA, which have progressively integrated AI-enabled analytical tools into supervisory activities. Machine learning systems, document analytics, anomaly detection models, and alternative data monitoring technologies increasingly support supervisory assessment and risk identification processes. These developments reflect the emergence of hybrid governance structures in which supervisory authority is increasingly mediated through interactions between human expertise and algorithmic systems.

Importantly, the rise of algorithmic governance does not imply the replacement of human supervisors by autonomous technological systems. Rather, prudential supervision increasingly operates through “human-in-the-loop” governance structures in which AI systems augment analytical capacity while human actors remain responsible for contextual interpretation, ethical reasoning, institutional judgement, and legal accountability (Parasuraman and Riley, 1997; OECD, 2024).
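The "human-in-the-loop" structure described above can be made concrete as a triage pattern in which the model only prioritises cases and a human supervisor retains the final decision. The function names, thresholds, and decision labels below are hypothetical, chosen to illustrate the division of labour rather than any actual FINMA workflow.

```python
# Minimal sketch of a human-in-the-loop escalation pattern (invented
# thresholds and labels): the model prioritises; the human decides.
def triage(model_score, human_review):
    """Route a model risk score: auto-dismiss low scores, send the
    rest to a human reviewer whose decision is final."""
    if model_score < 0.3:
        return "no action"
    return "intervene" if human_review(model_score) else "monitor"

# A stand-in for supervisory judgement: only very high scores justify
# intervention, regardless of the model's own ranking.
decision = triage(0.85, human_review=lambda s: s > 0.8)
print(decision)
```

The design choice matters: the algorithm can narrow attention, but the mapping from a score to a legal intervention remains a human act, which is where accountability attaches.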

This balance between automation and human oversight is particularly important in prudential supervision because financial governance involves uncertainty, normative judgement, and systemic interpretation that cannot be reduced entirely to computational optimisation. AI systems may identify statistical irregularities and predictive correlations, but they cannot independently determine broader questions concerning proportionality, legitimacy, market confidence, or public interest.

The growing integration of AI into banking supervision therefore represents both a technological and institutional transformation. Algorithmic governance enhances regulators’ capacity to process information and identify emerging risks, but it simultaneously creates new challenges relating to explainability, operational resilience, accountability, and institutional legitimacy. Understanding these tensions is essential for evaluating the future development of AI-enabled prudential governance within increasingly digital financial systems.

The following chapter examines how these broader transformations are reflected within the Swiss supervisory environment through FINMA’s evolving use of AI-enabled supervisory technologies and the institutional role of the Data Innovation Lab.

3. FINMA, SupTech, and the Transformation of Swiss Banking Supervision

The increasing adoption of artificial intelligence (AI) and supervisory technologies (SupTech) by financial regulators reflects a broader institutional transformation within prudential governance. As discussed in the previous chapter, banking supervision is evolving from a predominantly retrospective and compliance-oriented activity towards more predictive, data-driven, and technologically integrated forms of oversight. Within this transformation, supervisory authorities increasingly rely upon advanced analytical systems to monitor institutional behaviour, identify emerging vulnerabilities, and strengthen systemic risk assessment (BIS, 2023).

Switzerland provides a particularly important case study for examining these developments due to the global significance of its financial sector, the concentration of systemically important institutions within its banking system, and FINMA’s progressive engagement with digital supervisory technologies. The Swiss supervisory environment illustrates both the opportunities and governance challenges associated with integrating AI into prudential supervision. This chapter therefore examines the evolution of FINMA’s supervisory framework, the institutional role of the Data Innovation Lab, and the broader emergence of AI-enabled supervision within Swiss financial governance.

3.1 The Swiss Banking Sector and the Evolution of FINMA

The Swiss financial system occupies a central position within global banking and wealth management. Switzerland hosts internationally significant financial institutions and maintains a highly interconnected banking sector characterised by substantial cross-border activity, large capital flows, and globally integrated financial services (Swiss Bankers Association, 2024). Consequently, the stability and supervision of Swiss banks matter not only for domestic financial governance but also for broader international financial stability.

The Swiss Financial Market Supervisory Authority (FINMA) commenced operations in 2009 following the enactment of the Federal Act on the Swiss Financial Market Supervisory Authority (FINMASA) in 2007. FINMA consolidated several pre-existing supervisory bodies into a unified regulator responsible for overseeing banks, insurance companies, securities firms, financial market infrastructures, and anti-money laundering compliance (FINMA, 2024a). Its creation reflected broader international regulatory reforms following the global financial crisis and the increasing need for integrated prudential supervision within complex financial systems.

FINMA operates according to a risk-based supervisory framework that prioritises supervisory attention according to institutional size, systemic importance, risk exposure, and operational complexity. This approach aligns with broader international developments in prudential governance promoted by the Basel Committee on Banking Supervision (BCBS, 2015). Rather than relying solely on formal rule compliance, FINMA increasingly evaluates broader dimensions of institutional resilience including governance structures, risk management frameworks, operational controls, liquidity positions, and technological capabilities.

The collapse of Credit Suisse in 2023 further intensified debate regarding the effectiveness of banking supervision within Switzerland. The failure of one of Switzerland’s globally systemically important banks generated significant criticism concerning risk management failures, governance weaknesses, supervisory responsiveness, and institutional accountability (Swiss Federal Council, 2023). The crisis reinforced the importance of early-warning systems, continuous monitoring capabilities, and more dynamic forms of prudential oversight capable of identifying institutional deterioration before systemic instability materialises.

In response to growing financial complexity and rapidly evolving digital infrastructures, FINMA has increasingly adopted data-driven supervisory methods intended to strengthen analytical capacity and improve regulatory responsiveness. These developments reflect broader international trends in which financial regulators increasingly utilise SupTech systems to support prudential supervision and systemic risk monitoring (BIS, 2021).

3.2 SupTech and the Digitalisation of Prudential Supervision

Supervisory technology (SupTech) refers to the application of digital technologies—including AI, machine learning, data analytics, automation, and natural language processing—to supervisory and regulatory activities (Dias and Staschen, 2017). SupTech systems are designed to improve the efficiency, speed, and analytical sophistication of prudential oversight by enabling regulators to process larger volumes of information and identify emerging risks more effectively.

The growth of SupTech reflects several interconnected developments within contemporary financial systems. First, financial institutions increasingly generate vast quantities of digital data through transactional activity, algorithmic trading, digital banking platforms, customer interactions, and regulatory reporting obligations (BIS, 2023). Second, financial products and institutional structures have become increasingly complex and interconnected, making traditional supervisory approaches more resource-intensive and less effective. Third, regulators face growing pressure to identify emerging vulnerabilities before crises materialise, particularly following the failures exposed during the global financial crisis and subsequent episodes of banking instability.

SupTech therefore represents an attempt to enhance supervisory capability through technological augmentation. AI-enabled systems allow supervisory authorities to conduct automated monitoring, detect anomalous behaviour, analyse unstructured information, and generate predictive assessments across multiple institutions simultaneously (Arner, Barberis and Buckley, 2020). These technologies support a broader shift towards continuous and predictive supervision rather than periodic and retrospective oversight.

Internationally, financial regulators have increasingly expanded their use of SupTech systems. The European Central Bank (ECB) has developed AI-supported analytical tools to improve supervisory efficiency and banking data analysis (ECB, 2024). Similarly, the Monetary Authority of Singapore (MAS) has implemented advanced data analytics and machine learning systems to strengthen anti-money laundering supervision and financial risk monitoring (MAS, 2023). The Financial Conduct Authority (FCA) in the United Kingdom has also invested heavily in data-driven supervision and regulatory analytics to support market oversight and consumer protection (FCA, 2022).

These developments demonstrate that AI-enabled supervision is becoming a central feature of modern prudential governance. Regulators increasingly require technological systems capable of operating within highly digitalised financial environments characterised by rapid information flows, interconnected institutional risks, and evolving operational vulnerabilities.

However, the expansion of SupTech also raises important institutional and governance challenges. As supervisory authorities increasingly depend upon algorithmic systems, regulatory decision-making becomes more technologically mediated. Supervisory processes may therefore become vulnerable to issues associated with model opacity, data quality, automation bias, cybersecurity risk, and third-party technological dependencies (FSB, 2023). Consequently, the adoption of SupTech requires not only technological investment, but also institutional adaptation, governance reform, and enhanced operational resilience frameworks.

3.3 FINMA’s Data Innovation Lab and AI-Enabled Supervision

FINMA’s growing use of AI and advanced analytics reflects its broader strategic emphasis on data-driven supervision and technological innovation. A central component of this transformation has been the establishment of the Data Innovation Lab, which functions as an institutional platform for developing and integrating data analytics, machine learning tools, and digital supervisory capabilities within FINMA’s supervisory processes.

The Data Innovation Lab reflects a recognition that effective prudential supervision increasingly depends upon regulators’ ability to process large and complex datasets efficiently. Traditional supervisory approaches based primarily on manual review and static reporting are increasingly insufficient for overseeing highly digitalised financial institutions operating within rapidly evolving technological environments.

FINMA’s use of AI-enabled systems aims to strengthen several dimensions of prudential oversight. One important application involves anomaly detection and risk identification. Machine learning models can identify unusual transactional patterns, deviations from institutional norms, or emerging operational vulnerabilities that may warrant supervisory attention. Such systems improve regulators’ ability to detect early warning indicators associated with liquidity stress, governance failures, compliance weaknesses, or financial misconduct.
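
For illustration, the anomaly-detection logic described above can be sketched in a few lines of Python. The indicators, thresholds, and data below are entirely hypothetical and do not represent FINMA's actual systems; the sketch merely shows how an unsupervised model such as an Isolation Forest can surface institutions whose reported indicators deviate sharply from the peer group.

```python
# Illustrative sketch only (hypothetical data and features, not FINMA's
# actual tooling): flagging outlier institutions from simple reported
# indicators using an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic indicators for 200 banks: a leverage ratio (%) and a
# liquidity coverage ratio (%), both roughly normal across the peer group.
normal = rng.normal(loc=[5.0, 140.0], scale=[0.5, 10.0], size=(200, 2))

# Three stressed institutions: high leverage, thin liquidity buffers.
stressed = np.array([[8.5, 95.0], [9.0, 90.0], [7.8, 100.0]])
X = np.vstack([normal, stressed])

# Fit the model and label each institution (-1 = anomalous, 1 = normal).
model = IsolationForest(contamination=0.05, random_state=0).fit(X)
flags = model.predict(X)

flagged = np.where(flags == -1)[0]
print(f"{len(flagged)} institutions flagged for supervisory review")
```

In a supervisory context, such a flag would serve only as a prompt for human review, consistent with the decision-support role emphasised throughout this paper.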

AI technologies also support document analysis and regulatory intelligence. Natural language processing systems enable supervisory authorities to analyse large quantities of unstructured textual information including regulatory filings, audit reports, enforcement documents, internal governance materials, and market commentary (ECB, 2024). These capabilities significantly improve supervisory efficiency and reduce the resource burden associated with manual document review.
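
The document-triage capability described here can be illustrated, in a deliberately simplified form, with TF-IDF similarity ranking. The mock "filings" and the query below are invented for the example; production systems would use far richer language models, but the underlying idea of ranking unstructured documents by relevance to a supervisory concern is the same.

```python
# Hypothetical sketch of NLP-assisted document triage: ranking short
# mock "filings" by similarity to a liquidity-risk query using TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

filings = [
    "The bank reports stable capital levels and routine governance updates.",
    "Deposit outflows accelerated and the liquidity coverage ratio declined sharply.",
    "New product launch in retail payments; no material risk changes reported.",
]
query = ["liquidity stress deposit outflows coverage ratio"]

# Build a term-weighted representation of the filings, then score each
# filing against the supervisory query by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(filings)
query_vec = vectorizer.transform(query)

scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranked = scores.argsort()[::-1]
print("Most relevant filing:", filings[ranked[0]])
```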

Another important area involves peer-group analysis and comparative institutional assessment. AI systems can identify behavioural divergences between financial institutions operating within similar market environments, allowing regulators to detect abnormal patterns or potential supervisory concerns more rapidly. Such approaches align closely with risk-based supervisory methodologies that prioritise dynamic institutional assessment and early risk identification.
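
A minimal version of such peer-group comparison can be expressed as a z-score screen: each institution's indicator is compared against the peer-group mean, and large deviations are flagged. The banks and figures below are invented for illustration, and real peer-group analytics would use many indicators and more robust statistics.

```python
# Minimal peer-group comparison sketch (hypothetical data): z-scores of
# each bank's cost-to-income ratio against the peer-group mean, flagging
# deviations beyond two standard deviations.
import numpy as np

banks = ["A", "B", "C", "D", "E", "F"]
cost_to_income = np.array([62.0, 58.0, 61.0, 60.0, 59.0, 83.0])  # percent

# Standardise each bank's ratio relative to the peer group.
z = (cost_to_income - cost_to_income.mean()) / cost_to_income.std()

outliers = [bank for bank, score in zip(banks, z) if abs(score) > 2.0]
print("Peer-group outliers:", outliers)
```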

Furthermore, AI-supported systems may strengthen macroprudential supervision by improving regulators’ capacity to identify interconnected systemic vulnerabilities across financial institutions and market infrastructures. Financial crises often emerge through complex interactions between liquidity conditions, market sentiment, institutional behaviour, and operational interconnectedness. AI systems offer regulators enhanced capability to monitor these relationships across large-scale datasets in near real time (BIS, 2023).

The increasing use of AI within FINMA also reflects broader institutional changes within supervisory organisations. Prudential supervision increasingly requires interdisciplinary expertise combining legal interpretation, financial analysis, risk management, data science, and technological governance. Consequently, supervisory authorities are evolving beyond traditional legal-administrative institutions into technologically integrated governance organisations.

Importantly, FINMA’s adoption of AI has generally been framed as augmenting rather than replacing human supervisory judgement. Presentations and discussions at the 2025 Swiss Risk Association conference emphasised that AI systems are intended primarily to support supervisory analysis and improve efficiency rather than automate prudential decision-making entirely. Human supervisors remain responsible for contextual interpretation, proportionality assessments, legal judgement, and institutional accountability.

This distinction is particularly important within prudential governance because supervisory decisions frequently involve ambiguity, uncertainty, and normative considerations that cannot be reduced entirely to statistical optimisation. While AI systems may identify correlations, anomalies, and predictive indicators, they cannot independently determine broader questions concerning institutional trustworthiness, market confidence, proportionality, or systemic significance.

3.4 Governance Challenges within AI-Enabled Supervision

Although AI technologies offer significant opportunities for enhancing supervisory effectiveness, they also generate substantial governance and institutional challenges. One major concern involves explainability and transparency. Many advanced machine learning systems operate through highly complex computational processes that may not be fully interpretable by regulators, supervised institutions, or external stakeholders (Barocas, Hardt and Narayanan, 2019).

This issue is particularly important within prudential supervision because supervisory actions may produce significant legal and economic consequences for regulated institutions. If regulatory interventions increasingly rely upon opaque algorithmic systems, questions may emerge regarding procedural fairness, accountability, and institutional legitimacy (OECD, 2024). Financial institutions subject to supervisory scrutiny may demand greater transparency regarding how AI systems generate risk assessments or identify anomalous behaviour.

A second governance challenge concerns data quality and model reliability. AI systems depend fundamentally upon the quality, consistency, and representativeness of underlying datasets. Inaccurate, incomplete, or biased data may generate misleading supervisory outputs and increase the risk of inappropriate regulatory intervention. Furthermore, financial systems evolve continuously, meaning that predictive models trained on historical data may become less reliable under changing market conditions (Taleb, 2007).

Cybersecurity and operational resilience also represent major concerns within AI-enabled supervision. As regulators increasingly depend upon digital infrastructures and advanced analytical systems, supervisory authorities themselves become vulnerable to cyberattacks, operational disruptions, cloud concentration risks, and third-party technological dependencies (FSB, 2023). These vulnerabilities are particularly significant because disruptions affecting supervisory infrastructures could impair broader financial stability monitoring capabilities.

Additionally, AI-enabled supervision raises broader concerns regarding institutional dependency and concentration of technological expertise. Financial regulators increasingly rely upon specialised data infrastructures, cloud providers, external technology vendors, and advanced computational systems. This may create asymmetries in technical capability and increase dependence on private-sector technology providers whose operational priorities may not fully align with prudential governance objectives.

Finally, there remains a broader concern regarding automation bias and excessive reliance on predictive systems. Human supervisors may place undue confidence in algorithmic outputs, particularly when systems appear technically sophisticated or statistically precise (Parasuraman and Riley, 1997). Such dynamics may weaken critical supervisory judgement and reduce sensitivity to contextual factors or unprecedented market conditions not adequately captured within computational models.

Consequently, the integration of AI into banking supervision requires robust governance frameworks capable of balancing technological innovation with transparency, accountability, operational resilience, and human oversight. AI may substantially strengthen supervisory capability, but it cannot eliminate the need for institutional judgement and normative decision-making within prudential governance.

3.5 The Transformation of Swiss Prudential Governance

FINMA’s adoption of SupTech and AI-enabled supervisory systems illustrates a broader transformation in contemporary financial governance. Prudential supervision is increasingly evolving into a technologically integrated form of governance characterised by continuous monitoring, predictive analytics, and data-driven institutional assessment.

This transformation reflects the convergence of algorithmic governance and risk-based supervision discussed in the previous chapter. AI systems strengthen regulators’ ability to process information, identify emerging vulnerabilities, and allocate supervisory resources more dynamically. At the same time, they alter the institutional foundations of supervisory authority by increasing reliance upon computational infrastructures, predictive modelling, and technologically mediated forms of regulatory knowledge.

Importantly, these developments do not imply the disappearance of human supervisory judgement. Rather, contemporary banking supervision increasingly operates through hybrid governance structures in which AI systems augment—but do not replace—human expertise. Effective prudential supervision continues to depend upon contextual interpretation, legal reasoning, institutional experience, and normative assessment that extend beyond purely computational analysis.

The Swiss experience therefore demonstrates both the opportunities and limitations of AI-enabled prudential governance. While AI technologies substantially improve analytical capability and supervisory responsiveness, they simultaneously create new governance challenges relating to explainability, operational resilience, accountability, and institutional legitimacy. Understanding these tensions is essential for evaluating the future development of AI-driven supervision within increasingly digitalised financial systems.

The following chapter examines the principal governance challenges and institutional implications of AI-enabled banking supervision, focusing in particular on explainability, data governance, cybersecurity and operational resilience, automation bias, and the continuing importance of human judgement.

4. Governance Challenges and Institutional Implications of AI-Enabled Banking Supervision

The increasing integration of artificial intelligence (AI), machine learning (ML), and supervisory technologies (SupTech) into banking supervision has significantly transformed prudential governance within Switzerland and internationally. As demonstrated in the previous chapters, FINMA’s adoption of AI-enabled supervisory systems and the establishment of the Data Innovation Lab reflect a broader shift towards predictive, data-driven, and technologically mediated supervision. While these developments substantially enhance supervisory capability, they simultaneously generate important legal, organisational, operational, and governance challenges that directly affect the legitimacy, accountability, and resilience of contemporary prudential regulation.

This chapter critically examines the principal governance implications associated with AI-enabled supervision within both supervisory authorities and banking institutions. Building upon the theoretical framework of algorithmic governance developed in Chapter 2 and the analysis of FINMA’s evolving supervisory infrastructure in Chapter 3, this chapter argues that the growing reliance on AI within prudential governance creates a fundamental tension between technological efficiency and regulatory legitimacy. Although AI systems strengthen regulators’ ability to identify emerging risks, process large volumes of information, and enhance supervisory responsiveness, they also introduce significant challenges relating to explainability, accountability, operational resilience, data governance, and human oversight.

The chapter further argues that effective AI integration within banking supervision requires hybrid governance frameworks in which algorithmic systems augment rather than replace human supervisory judgement. Prudential supervision involves normative assessment, contextual interpretation, and institutional accountability that cannot be fully automated through predictive analytics alone.

4.1 Explainability, Transparency, and Supervisory Legitimacy

One of the most significant governance challenges associated with AI-enabled supervision concerns explainability and transparency. Many advanced machine learning models—particularly deep learning systems—operate through highly complex computational processes that are often difficult to interpret even for technical specialists (Barocas, Hardt and Narayanan, 2019). While such systems may generate highly accurate predictive outputs, the internal reasoning processes underlying algorithmic decisions frequently remain opaque.

Within prudential supervision, this lack of explainability creates substantial legal and institutional concerns. Supervisory authorities exercise significant regulatory power capable of affecting institutional reputation, market confidence, capital requirements, enforcement measures, and operational restrictions. Consequently, supervisory interventions must satisfy standards of procedural fairness, proportionality, and legal accountability. If supervisory decisions increasingly rely upon opaque algorithmic systems, regulated institutions may face difficulty understanding how risk assessments were generated or why specific supervisory actions were initiated (Doshi-Velez and Kim, 2017).

These concerns are particularly important within risk-based supervisory frameworks such as FINMA’s, where supervisory intensity depends heavily upon dynamic assessments of institutional risk exposure and operational vulnerability. AI systems may identify behavioural anomalies, predictive indicators, or statistical correlations that influence supervisory prioritisation. However, if these analytical outputs cannot be adequately explained or independently verified, questions may emerge regarding regulatory legitimacy and due process.

Scholars of algorithmic governance argue that opacity may undermine institutional trust and democratic accountability by concentrating epistemic authority within technical systems inaccessible to external scrutiny (Yeung, 2018; Katzenbach and Ulbricht, 2019). In financial supervision, this issue becomes especially significant because prudential regulation depends heavily upon institutional credibility and market confidence. Excessive reliance on opaque computational systems may weaken confidence in supervisory neutrality and fairness.

International regulatory organisations increasingly recognise these concerns. The Organisation for Economic Co-operation and Development (OECD) emphasises that AI systems deployed in high-impact governance environments should satisfy principles of transparency, explainability, robustness, and accountability (OECD, 2024). Similarly, the European Union’s AI Act identifies financial services and prudential supervision as high-risk regulatory domains requiring enhanced governance safeguards and human oversight mechanisms (European Union, 2024).

Consequently, supervisory authorities adopting AI technologies must ensure that algorithmic systems remain sufficiently interpretable to support regulatory accountability and legal defensibility. Explainability is therefore not merely a technical requirement, but a core component of institutional legitimacy within AI-enabled prudential governance.

4.2 Data Governance, Bias, and Model Reliability

AI-driven supervision also creates major challenges relating to data governance, model reliability, and algorithmic bias. Machine learning systems depend fundamentally upon large quantities of high-quality data to generate reliable predictive outputs. However, financial datasets are often incomplete, inconsistent, fragmented, or historically biased, creating risks that algorithmic systems may produce inaccurate or distorted supervisory assessments (Bholat, 2015).

Within banking supervision, data quality problems may significantly affect prudential outcomes. Predictive models trained on flawed or unrepresentative datasets may incorrectly identify institutional vulnerabilities, underestimate systemic exposures, or generate false supervisory alerts. Such risks are particularly important because prudential interventions based on unreliable analytical outputs may produce substantial economic and reputational consequences for regulated institutions.

Algorithmic bias represents a further concern. AI systems frequently reproduce or amplify existing biases embedded within historical datasets and institutional practices (Barocas, Hardt and Narayanan, 2019). In financial services, biased models may affect areas such as credit assessment, anti-money laundering (AML) monitoring, fraud detection, and customer risk profiling. When regulators themselves utilise AI-driven systems, there is a corresponding risk that supervisory assessments may reflect embedded biases within underlying data structures or modelling assumptions.

Importantly, financial systems are characterised by continuous change and evolving market behaviour. Machine learning models typically rely upon historical data patterns to generate predictive assessments. However, financial crises frequently involve structural breaks, behavioural shifts, and unprecedented market conditions that differ substantially from historical experience (Taleb, 2007). Models that perform effectively under normal market conditions may therefore become unreliable during periods of systemic stress.

The collapse of Silicon Valley Bank in 2023 illustrated how rapidly changing depositor behaviour, digital communication networks, and social media dynamics can accelerate institutional instability in ways that traditional risk models may fail to anticipate (FSB, 2023). These developments demonstrate the limitations of purely data-driven predictive systems within complex financial environments characterised by uncertainty and behavioural volatility.

Accordingly, AI systems used within banking supervision require continuous validation, recalibration, and human interpretation. Regulators must ensure that supervisory models remain adaptive, context-sensitive, and capable of responding to evolving institutional conditions. Effective data governance frameworks therefore become essential components of AI-enabled prudential supervision.
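
One common validation practice alluded to above is monitoring distribution drift between the data a model was trained on and the data it currently receives. The sketch below uses the Population Stability Index (PSI) on synthetic data; the conventional thresholds (roughly 0.1 for mild drift and 0.25 for significant drift) are a widely used rule of thumb, not a regulatory standard.

```python
# Hedged sketch of drift monitoring with the Population Stability Index
# (PSI). Data and thresholds are illustrative, not supervisory standards.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between two samples of a continuous variable."""
    # Bin edges from the quantiles of the reference (training) sample.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover all current values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking logarithms.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(7)
training = rng.normal(0.0, 1.0, 5000)  # historical conditions
stable = rng.normal(0.0, 1.0, 5000)    # similar market regime
shifted = rng.normal(1.5, 1.3, 5000)   # structural break

print(f"PSI (stable regime):  {psi(training, stable):.3f}")
print(f"PSI (regime change):  {psi(training, shifted):.3f}")
```

A model whose inputs exceed the drift threshold would, under the continuous-validation approach described above, be recalibrated or subjected to heightened human scrutiny rather than trusted mechanically.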

4.3 Cybersecurity, Operational Resilience, and Technological Dependency

The digitalisation of prudential supervision also creates substantial operational and systemic risks. As supervisory authorities increasingly depend upon AI systems, cloud infrastructures, data integration platforms, and digital monitoring technologies, regulators themselves become vulnerable to many of the same operational risks they seek to supervise within financial institutions.

Cybersecurity represents one of the most significant concerns. Supervisory authorities manage highly sensitive financial information relating to institutional solvency, liquidity, governance structures, market activity, and regulatory investigations. AI-enabled supervisory infrastructures may therefore become attractive targets for cyberattacks, espionage, data theft, or operational disruption (FSB, 2023).

The increasing use of cloud computing and third-party technology providers further intensifies these concerns. Many AI systems rely upon external vendors for computational infrastructure, software platforms, data storage, and analytical capabilities. This creates concentration risks and operational dependencies that may reduce institutional autonomy and complicate supervisory resilience (BIS, 2023).

Operational dependency on external technological providers raises broader governance questions regarding control, accountability, and sovereignty within financial regulation. Public supervisory authorities may become reliant upon proprietary technologies developed by private firms whose commercial objectives do not necessarily align with prudential governance priorities. Scholars have therefore warned that digital governance increasingly involves forms of “infrastructural power” exercised through control over technological systems and data architectures (Plantin et al., 2018).

These dependencies are particularly important within Switzerland due to the international significance and systemic interconnectedness of its banking sector. Disruptions affecting supervisory infrastructures could impair FINMA’s ability to conduct timely risk assessments, monitor institutional vulnerabilities, or coordinate responses during periods of market stress.

Consequently, operational resilience must become a central component of AI-enabled prudential governance. Supervisory authorities require robust cybersecurity frameworks, redundancy mechanisms, secure data governance protocols, and comprehensive oversight of third-party technological dependencies.

4.4 Automation Bias and the Continuing Importance of Human Judgement

A further challenge associated with AI-enabled supervision concerns automation bias and excessive reliance on computational systems. Automation bias refers to the tendency of human actors to place disproportionate trust in automated outputs, particularly where systems appear technically sophisticated or statistically precise (Parasuraman and Riley, 1997).

Within banking supervision, automation bias may weaken critical supervisory judgement and reduce sensitivity to contextual, qualitative, or unprecedented institutional factors not adequately captured within predictive models. Human supervisors may become overly dependent on algorithmic risk indicators while neglecting broader prudential interpretation, organisational culture, governance weaknesses, or behavioural dynamics.

This concern reflects a broader limitation of algorithmic governance. While AI systems are highly effective at identifying statistical relationships and processing large-scale datasets, they cannot independently evaluate normative questions concerning proportionality, institutional legitimacy, public confidence, or systemic significance. Prudential supervision inherently involves discretionary judgement, ethical reasoning, and contextual assessment that extend beyond purely quantitative analysis.

Theoretical research on risk governance further demonstrates that risk itself is not an objective or purely technical category. Rather, definitions of institutional risk are shaped by political priorities, regulatory cultures, organisational assumptions, and normative governance objectives (Power, 2004). AI systems therefore do not merely detect risk; they may also influence how risk is conceptualised and prioritised within supervisory frameworks.

This issue is particularly important following the collapse of Credit Suisse and broader debates concerning supervisory responsiveness within Switzerland. Prudential failures often emerge not solely from deficiencies in quantitative indicators, but also from governance weaknesses, organisational culture problems, strategic misjudgements, and failures of institutional accountability that may not be fully observable through algorithmic monitoring systems alone (Swiss Federal Council, 2023).

For this reason, contemporary supervisory models increasingly emphasise “human-in-the-loop” governance structures in which AI systems support—but do not replace—human supervisory authority (OECD, 2024). Within such frameworks, algorithmic systems augment analytical capability while human supervisors retain responsibility for interpretation, contextual evaluation, legal reasoning, and final decision-making.
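
The structural point of such frameworks can be made concrete with a schematic routing rule: algorithmic risk scores triage cases, but every outcome is a human workflow step rather than an automated decision. The score bands and labels below are entirely hypothetical.

```python
# Schematic (entirely hypothetical) human-in-the-loop routing rule:
# model scores triage alerts, but no supervisory action is taken
# without a human reviewer. Score bands are illustrative only.
from dataclasses import dataclass

@dataclass
class Alert:
    institution: str
    risk_score: float  # model output in [0, 1]

def route(alert: Alert) -> str:
    """Return the next workflow step; the model never decides outcomes."""
    if alert.risk_score >= 0.8:
        return "escalate: senior supervisor review"
    if alert.risk_score >= 0.5:
        return "queue: analyst review"
    return "log: no immediate human review"

alerts = [Alert("Bank A", 0.92), Alert("Bank B", 0.55), Alert("Bank C", 0.10)]
for alert in alerts:
    print(alert.institution, "->", route(alert))
```

The design choice embodied here is that the algorithm allocates attention while accountability for any intervention remains with identifiable human supervisors.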

FINMA’s own approach appears broadly consistent with this model. As discussed in Chapter 3, presentations at the Swiss Risk Association conference emphasised that AI technologies are intended primarily to enhance supervisory efficiency and analytical capability rather than automate prudential judgement entirely. This reflects recognition that effective prudential governance ultimately depends upon maintaining an appropriate balance between technological innovation and human oversight.

4.5 Institutional Transformation and the Future of Prudential Governance

The integration of AI into banking supervision reflects a broader institutional transformation affecting the nature of financial governance itself. Supervisory authorities increasingly operate as hybrid organisations combining legal authority, financial expertise, technological infrastructure, and computational analytics (BIS, 2021). Prudential supervision is therefore evolving from a predominantly legal-administrative activity into a technologically integrated form of governance characterised by continuous monitoring, predictive analysis, and algorithmically mediated decision-support systems.

This transformation creates both opportunities and tensions. On one hand, AI systems significantly strengthen regulators’ capacity to process information, identify emerging vulnerabilities, and enhance supervisory responsiveness within highly digitalised financial environments. On the other hand, they simultaneously create new forms of institutional dependency, governance complexity, and legitimacy risk.

The Swiss experience illustrates these broader dynamics particularly clearly. FINMA’s development of the Data Innovation Lab and adoption of SupTech systems demonstrate how supervisory authorities are adapting institutionally to increasingly data-intensive financial systems. At the same time, these developments highlight the continuing importance of accountability, transparency, operational resilience, and human judgement within prudential governance.

Ultimately, AI does not eliminate the need for supervisory expertise or institutional interpretation. Rather, it transforms the conditions under which prudential authority is exercised. Effective AI-enabled supervision therefore requires governance frameworks capable of integrating technological capability with legal accountability, ethical oversight, and institutional legitimacy.

The following chapter examines these tensions further by analysing the continuing role of human oversight within AI-enabled prudential governance and evaluating the broader limitations of automation within banking supervision.

5. The Role of Human Oversight and the Limits of Automation in Prudential Governance

The preceding chapters have demonstrated that artificial intelligence (AI) and supervisory technologies (SupTech) are fundamentally reshaping banking supervision by enhancing analytical capacity, enabling predictive risk assessment, and transforming supervisory infrastructures within institutions such as FINMA. However, they also highlight that this transformation is not a simple process of technological substitution. Instead, it produces a structurally hybrid form of prudential governance in which algorithmic systems and human judgement are increasingly interdependent.

This chapter develops the central argument that, despite significant advances in AI-enabled supervision, human oversight remains indispensable within prudential governance. The core limitation of automation in banking supervision lies not only in technical constraints but also in the inherently normative, uncertain, and context-dependent nature of financial regulation itself. Banking supervision is not purely an exercise in pattern recognition or statistical inference; it is a form of institutional judgement embedded within legal, economic, and political frameworks that require interpretation, accountability, and discretion.

Accordingly, this chapter critically examines the epistemic, institutional, and regulatory limits of automation and argues that effective prudential supervision must be understood as a socio-technical system in which AI functions as an augmentative tool rather than a substitute for supervisory authority.

5.1 The Epistemic Limits of AI in Prudential Decision-Making

A central limitation of AI in banking supervision concerns the epistemic boundaries of machine learning systems. While AI systems excel at identifying correlations within large datasets, they do not inherently distinguish between correlation and causation. This distinction is particularly important in financial regulation, where supervisory intervention depends not only on identifying patterns of risk but on interpreting the underlying causal mechanisms of financial instability.

Financial systems are characterised by non-linear dynamics, feedback loops, and structural discontinuities. As highlighted in Chapter 2, financial crises often involve regime shifts and behavioural changes that are not fully captured by historical data (Taleb, 2007). Machine learning models trained on past observations may therefore perform poorly under conditions of financial stress, particularly when novel forms of market behaviour emerge.

Moreover, AI systems operate within a fundamentally inductive framework. They infer future risk based on historical regularities, whereas prudential supervision often requires anticipatory judgement about unprecedented events. This creates a structural epistemic gap between predictive modelling and supervisory foresight. As Power (2004) argues, risk is not an objective phenomenon, but a constructed category shaped by institutional interpretation, regulatory priorities, and organisational culture. AI systems cannot independently generate these normative interpretations.

In this sense, algorithmic systems should be understood as tools for probabilistic estimation rather than instruments of prudential judgement. Their outputs require contextualisation within broader supervisory frameworks that incorporate legal reasoning, institutional knowledge, and macroeconomic interpretation (Bholat, 2015).

5.2 Human Judgement as a Core Component of Supervisory Authority

Despite the increasing sophistication of AI-enabled supervision, human judgement remains central to prudential governance. Supervisory decisions involve evaluative reasoning that extends beyond quantitative risk indicators to include considerations of proportionality, systemic importance, institutional behaviour, and public interest.

As discussed in Chapter 3, FINMA’s supervisory framework continues to emphasise risk-based supervision in which human supervisors retain ultimate responsibility for interpreting risk signals generated by analytical systems. This reflects a broader international consensus that supervisory authority cannot be fully delegated to algorithmic systems without undermining accountability and legal legitimacy (OECD, 2024).

The concept of “human-in-the-loop” governance has therefore become a defining principle of AI integration in financial regulation. Within this framework, AI systems support decision-making by providing analytical inputs, while human supervisors remain responsible for final judgement and intervention (Parasuraman and Riley, 1997). This model preserves the normative dimension of prudential governance, ensuring that regulatory decisions remain grounded in legal accountability and institutional discretion.

Importantly, human judgement also plays a critical role in interpreting model outputs. Machine learning systems may identify anomalies or risk patterns, but they cannot determine their institutional significance without human contextualisation. For example, a statistical deviation in liquidity metrics may signal systemic risk in one context but may be benign in another depending on market conditions, institutional strategy, or macroeconomic factors.
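The kind of anomaly signal described above can be sketched as a simple z-score check on a liquidity metric. This is a purely illustrative sketch with hypothetical values and function names, not a method drawn from FINMA practice; its point is that the system can only report that a metric is statistically unusual, not why it moved or whether the deviation matters.

```python
from statistics import mean, stdev

def flag_liquidity_anomaly(history, current, z_threshold=3.0):
    """Flag a metric as anomalous when it lies more than z_threshold
    standard deviations from its historical mean. The flag is a purely
    statistical signal: whether it reflects systemic risk is left to
    human supervisory interpretation."""
    mu, sigma = mean(history), stdev(history)
    z = (current - mu) / sigma if sigma > 0 else 0.0
    return z, abs(z) > z_threshold

# Hypothetical liquidity coverage ratios: a sharp drop is flagged,
# but the model cannot say *why* the metric moved.
history = [1.30, 1.28, 1.32, 1.29, 1.31, 1.30, 1.27, 1.33]
z_score, anomalous = flag_liquidity_anomaly(history, 1.05)
```

Whether the flagged deviation reflects systemic stress, a one-off settlement, or a deliberate change of institutional strategy is exactly the contextual determination the surrounding text reserves for human supervisors.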

Research in regulatory governance has consistently shown that expert judgement is essential in translating complex data into actionable supervisory decisions (Bamberger, 2010). In this sense, AI should be understood as augmenting rather than displacing supervisory expertise.

5.3 Automation Bias and the Risk of Over-Reliance on Algorithmic Systems

One of the most significant risks associated with AI-enabled supervision is automation bias—the tendency of human operators to over-rely on algorithmic outputs even when those outputs may be incomplete, misleading, or contextually inappropriate (Parasuraman and Riley, 1997). Within banking supervision, this risk is particularly acute due to the perceived authority and technical sophistication of AI systems.

As supervisory authorities increasingly adopt predictive analytics and anomaly detection tools, there is a risk that human judgement becomes subordinated to algorithmic recommendations. This may lead to a reduction in critical oversight, particularly in situations where model outputs are treated as objective or neutral representations of financial risk.

Empirical research in human-computer interaction demonstrates that automation bias tends to increase when systems are perceived as highly accurate or when users are under cognitive or organisational pressure (Lee and See, 2004). In supervisory contexts, where regulators must process large volumes of information under time constraints, such conditions are frequently present.

This raises important governance concerns. If supervisory decisions are overly influenced by algorithmic outputs, there is a risk that errors embedded in models may be systematically reproduced at scale. This is particularly problematic in financial regulation, where model failures can contribute to systemic misjudgements with significant economic consequences.

Accordingly, regulatory frameworks must actively mitigate automation bias through institutional safeguards, including mandatory human review of AI-generated risk assessments, transparent model documentation, and structured decision-making protocols that encourage critical evaluation of algorithmic outputs.

5.4 Explainability, Accountability, and Legal Responsibility

Another key limitation of AI in prudential governance concerns the relationship between algorithmic systems and legal accountability. As highlighted in Chapter 4, many advanced machine learning models are inherently opaque, making it difficult to trace how specific outputs are generated (Barocas, Hardt and Narayanan, 2019).

This creates a fundamental tension within financial supervision. Regulatory decisions must be legally justifiable and capable of external scrutiny, particularly when they affect regulated institutions through enforcement actions, capital requirements, or supervisory interventions. However, if these decisions are informed by opaque AI systems, it becomes difficult to establish clear lines of responsibility.

Legal scholarship on algorithmic governance has emphasised that accountability requires not only transparency in outcomes but also intelligibility in decision-making processes (Wachter, Mittelstadt and Floridi, 2017). In the absence of explainability, supervisory decisions risk being perceived as arbitrary or technocratic, potentially undermining institutional legitimacy.

Within FINMA’s supervisory framework, this challenge reinforces the importance of maintaining human accountability at the centre of AI-enabled decision-making. While AI systems may inform supervisory judgement, final decisions must remain attributable to identifiable legal authorities. This ensures compliance with principles of administrative law, including due process, proportionality, and reason-giving.

Consequently, explainable AI (XAI) is increasingly recognised as a critical requirement for regulatory deployment. However, even explainable systems cannot fully eliminate the need for interpretive judgement, as supervisory decisions ultimately involve normative evaluation rather than purely technical computation.

5.5 Institutional Knowledge, Organisational Culture, and Contextual Understanding

Beyond technical and legal constraints, human oversight remains essential due to the importance of institutional knowledge and organisational context in prudential supervision. Banking supervision is deeply embedded within historical experience, informal knowledge networks, and institutional memory that cannot be fully encoded within algorithmic systems.

Supervisors develop contextual understanding of institutions through repeated interaction, qualitative assessment, and experience-based judgement. This includes knowledge of organisational culture, governance quality, risk appetite, and behavioural patterns that may not be captured in structured datasets.

Research in financial regulation has shown that organisational culture is a critical determinant of risk outcomes, particularly in banking institutions (BCBS, 2015). Such cultural factors are difficult to quantify and may not be readily detectable through automated systems. As a result, human supervisors play a crucial role in interpreting qualitative signals and integrating them into broader risk assessments.

Moreover, prudential supervision often involves tacit knowledge—forms of understanding that are difficult to formalise or encode into computational systems. This includes intuition developed through experience, professional judgement shaped by institutional practice, and contextual awareness of market dynamics.

AI systems, by contrast, are inherently limited to the data they are trained on. They cannot independently access informal knowledge, interpret organisational culture, or understand broader socio-political contexts unless these are explicitly represented in data form.

5.6 Towards Hybrid Prudential Governance

The analysis in this chapter supports the conclusion that effective banking supervision requires a hybrid governance model in which AI systems and human judgement operate in a complementary relationship. Rather than replacing supervisory authority, AI should be understood as a tool that extends analytical capacity while preserving the normative and interpretive functions of human regulators.

This hybrid model aligns with emerging international regulatory principles emphasising human oversight, accountability, and robustness in AI governance (OECD, 2024; European Union, 2024). It also reflects the practical realities of supervisory practice within institutions such as FINMA, where AI systems are used to support—but not determine—regulatory decisions.

Within this framework, the future of prudential supervision is best understood not as a transition from human to machine decision-making, but as the emergence of a socio-technical system in which governance is distributed across humans, algorithms, and institutional structures. The effectiveness of this system depends on maintaining a careful balance between technological efficiency and institutional accountability.

5.7 Conclusion

While AI and SupTech systems significantly enhance the analytical capabilities of banking supervision, they do not eliminate the need for human oversight. On the contrary, the increasing complexity of financial systems and the limitations of algorithmic reasoning reinforce the importance of human judgement in prudential governance.

The key challenge for regulators is therefore not whether to adopt AI, but how to integrate it responsibly within supervisory frameworks that preserve accountability, transparency, and institutional legitimacy. As the Swiss case illustrates, the future of banking supervision lies in hybrid governance structures that combine technological innovation with enduring principles of prudential judgement.

6. Legal and Organisational Challenges for Banks in AI-Enabled Banking

The integration of artificial intelligence (AI), machine learning (ML), and supervisory technologies into banking operations—alongside the parallel transformation of prudential supervision discussed in Chapters 1–5—has fundamentally reshaped the organisational and legal environment in which banks operate. While AI systems enhance efficiency, risk detection, customer service, and regulatory compliance, they also introduce complex governance challenges that extend beyond traditional IT risk management.

As shown in the preceding chapters, AI is no longer a peripheral tool within financial services but a core component of modern banking architecture, embedded in credit scoring, fraud detection, AML systems, trading algorithms, and customer analytics (Fuster et al., 2022; BIS, 2024). This deep integration creates a tightly coupled socio-technical system in which banks, regulators, and technology providers are increasingly interdependent. Consequently, AI adoption is not merely an operational upgrade but a structural transformation of banking governance, accountability, and regulatory exposure.

In Switzerland, these issues are particularly pronounced due to the global importance of its banking sector, the stringency of its regulatory framework, and FINMA’s increasingly data-driven supervisory approach (FINMA, 2024; Swiss Federal Council, 2023). The convergence of AI-enabled supervision and AI-enabled banking creates a mirrored governance environment in which both regulators and regulated entities rely on similar technological infrastructures. This intensifies shared risks relating to opacity, cyber vulnerability, data dependency, and accountability gaps.

This chapter examines the key legal and organisational challenges faced by banks in implementing AI systems, focusing on governance and accountability, data protection, explainability, and cybersecurity. It argues that AI adoption requires banks to shift from traditional compliance-based governance models towards integrated, adaptive, and continuously monitored AI risk management frameworks aligned with both regulatory expectations and emerging standards of algorithmic governance (Yeung, 2018; OECD, 2024).

6.1 Governance, Accountability, and Organisational Responsibility

A central challenge in AI-enabled banking concerns governance and accountability within increasingly complex organisational structures. As demonstrated in Chapters 3 and 5, AI systems in financial services operate within hybrid human–machine environments where decision-making is distributed across data scientists, software engineers, compliance teams, senior management, and external technology providers.

This fragmentation of responsibility creates what scholars describe as “accountability diffusion”, where it becomes difficult to assign clear legal or organisational responsibility for AI-driven outcomes (Barocas, Hardt and Narayanan, 2019). In traditional banking systems, responsibility for decisions such as credit approval or risk classification could be traced to identifiable human actors or rule-based systems. In contrast, AI systems often produce probabilistic outputs derived from complex model architectures that are not fully interpretable by any single actor within the organisation.

Regulatory frameworks increasingly reject the idea that algorithmic systems can be treated as autonomous decision-makers. The European Banking Authority (EBA) and international standard-setters emphasise that ultimate accountability for AI systems remains with the financial institution deploying them, regardless of outsourcing arrangements or technological complexity (EBA, 2023; FSB, 2023). This principle aligns with the broader supervisory logic discussed in Chapter 3, where FINMA retains ultimate responsibility for prudential oversight even when relying on AI-assisted supervisory tools.

To operationalise accountability, banks are required to implement robust AI governance frameworks consisting of three interrelated layers:

First, board-level and senior management oversight is essential to ensure that AI deployment aligns with institutional risk appetite and regulatory obligations. This reflects the broader shift towards risk-based governance models described in Chapters 2 and 5, where strategic oversight becomes central to managing complex technological systems.

Second, model risk management frameworks must be established, including validation procedures, stress testing, performance monitoring, and periodic recalibration. These mechanisms are consistent with Basel Committee principles on model governance and reflect the need to ensure that AI systems remain reliable under changing market conditions (BCBS, 2015).

Third, auditability and traceability are critical. Banks must maintain comprehensive documentation covering training datasets, feature selection, model design, parameter changes, and deployment decisions. This is essential not only for internal control but also for regulatory inspection, particularly in jurisdictions such as Switzerland where supervisory authorities increasingly rely on data-driven oversight tools (FINMA, 2024).

The rise of third-party AI providers and cloud-based infrastructure further complicates governance structures. Outsourcing introduces additional layers of dependency and potential control gaps, requiring enhanced vendor risk management and contractual safeguards (FSB, 2023). In this context, governance must extend beyond organisational boundaries to include ecosystem-wide accountability structures.

6.2 Data Protection, Privacy, and Regulatory Complexity

AI systems in banking are fundamentally data-intensive, relying on large-scale datasets that include transactional histories, behavioural information, customer profiles, and financial records. This creates significant legal and ethical challenges related to data protection, particularly within highly regulated environments such as Switzerland.

Swiss banks operate under stringent confidentiality obligations, including banking secrecy principles and the revised Swiss Federal Act on Data Protection (FADP). In addition, cross-border operations often require compliance with the European Union’s General Data Protection Regulation (GDPR), creating overlapping and sometimes conflicting regulatory obligations.

From a governance perspective, AI systems challenge core data protection principles such as purpose limitation, data minimisation, and storage limitation. Machine learning models typically require extensive datasets to improve accuracy and predictive performance, creating tension between regulatory compliance and technical efficiency (Bholat, 2015).

A further challenge lies in inferential analytics. As demonstrated in empirical research, AI systems can derive sensitive attributes indirectly through pattern recognition and proxy variables, even when such attributes are not explicitly included in datasets (Barocas, Hardt and Narayanan, 2019). This raises significant concerns regarding indirect discrimination, behavioural profiling, and informed consent.

Cross-border data flows and cloud-based infrastructures further complicate compliance. Data may be processed, stored, or analysed across multiple jurisdictions, raising questions regarding legal enforceability, data sovereignty, and regulatory oversight. These issues are increasingly central to prudential governance in globally integrated banking systems, as discussed in Chapter 3.

To address these risks, banks must implement comprehensive data governance frameworks encompassing:

  • lawful basis and transparency in data processing;

  • purpose specification and limitation controls;

  • encryption and secure data storage systems;

  • access control and identity management;

  • continuous data quality assurance; and

  • regulatory compliance monitoring systems.

Cybersecurity is closely linked to data governance. AI systems increase the attack surface of banking infrastructures, making data protection inseparable from operational security (FSB, 2023).

6.3 Explainability, Transparency, and Model Interpretability

Explainability remains one of the most critical challenges in AI-enabled banking systems. As highlighted in Chapters 4 and 5, many advanced machine learning models function as “black boxes”, producing outputs that are difficult to interpret or justify in human-readable form.

In banking contexts, this creates significant legal and regulatory concerns because AI systems increasingly influence decisions with material consequences for customers and institutions, including credit allocation, fraud detection, AML monitoring, and investment advice.

A lack of explainability generates three key governance problems.

First, it undermines customer protection and trust. Individuals affected by automated decisions have a legitimate expectation of meaningful explanations, particularly where decisions involve denial of services or increased regulatory scrutiny.

Second, it complicates regulatory compliance. Supervisory authorities such as FINMA require banks to demonstrate that their risk models are fair, robust, and aligned with prudential standards. Without sufficient transparency, compliance verification becomes difficult, increasing regulatory uncertainty (OECD, 2024).

Third, it weakens internal risk management. Without interpretability, banks may fail to detect model drift, embedded bias, or deteriorating performance, increasing operational and reputational risk exposure.

Empirical research demonstrates that AI systems can reproduce and amplify biases present in historical datasets, particularly in credit scoring and lending decisions (Fuster et al., 2022). This reinforces the need for explainable AI (XAI) methods, including feature attribution techniques, surrogate models, and local interpretability tools.

However, explainability often involves trade-offs with predictive accuracy and model complexity. Highly sophisticated deep learning systems may deliver superior performance but at the cost of reduced interpretability. Banks must therefore balance efficiency gains with regulatory expectations for transparency and accountability, reflecting broader tensions between algorithmic governance and prudential oversight discussed in Chapters 2–5 (Yeung, 2018).

6.4 Cybersecurity, Operational Risk, and Systemic Vulnerability

AI adoption significantly expands the cybersecurity and operational risk landscape for banks. As financial institutions become increasingly dependent on interconnected digital infrastructures, they face heightened exposure to cyberattacks, system failures, and operational disruptions.

AI systems themselves introduce novel security risks. Adversarial attacks, data poisoning, model inversion, and prompt manipulation can compromise system integrity or distort outputs (FSB, 2023). These vulnerabilities are particularly concerning in critical banking functions such as fraud detection, AML monitoring, and credit decisioning.

Operational risks also arise from model instability and “model drift”, where performance deteriorates as underlying data distributions change. This is especially relevant in financial markets characterised by volatility and structural change (Taleb, 2007). In such environments, static models may fail to adapt to new behavioural patterns or macroeconomic conditions.

Generative AI introduces additional uncertainty. Large language models may produce inaccurate or fabricated outputs (“hallucinations”), particularly in complex regulatory contexts. If deployed in customer-facing systems without adequate safeguards, such errors may result in legal liability, compliance breaches, or reputational damage.

The systemic implications of operational risk are increasingly recognised by regulators. As discussed in Chapters 3 and 5, both FINMA and international regulatory bodies emphasise operational resilience as a core pillar of prudential supervision, particularly in the context of digital transformation and cyber risk expansion (BCBS, 2021; FSB, 2023).

To mitigate these risks, banks must adopt integrated AI risk management frameworks including:

  • continuous cybersecurity monitoring;

  • adversarial robustness testing;

  • model validation and recalibration processes;

  • incident response and recovery mechanisms;

  • third-party vendor risk assessments; and

  • enterprise-wide resilience governance structures.

AI risk management must therefore be embedded within broader operational resilience frameworks rather than treated as an isolated technical function.

6.5 Towards Integrated AI Risk Governance in Banking

The challenges outlined in this chapter demonstrate that AI adoption in banking is not simply a technological upgrade but a fundamental transformation of organisational governance structures. As highlighted across Chapters 1–5, AI systems reshape the relationship between banks and regulators by embedding predictive analytics and algorithmic decision-making within both supervisory and operational domains.

This creates a highly interdependent regulatory ecosystem in which banks and supervisory authorities co-evolve alongside technological infrastructures. In Switzerland, this dynamic is particularly evident in the interaction between FINMA’s AI-enabled supervision and banks’ internal AI governance systems.

Effective AI governance therefore requires an integrated approach that combines legal compliance, technical oversight, ethical safeguards, and organisational risk management. Rather than treating AI as a discrete innovation, banks must embed it within enterprise-wide governance structures aligned with prudential expectations and algorithmic governance principles (OECD, 2024).

Ultimately, the Swiss experience illustrates that AI in banking is a socio-technical transformation requiring continuous adaptation. The effectiveness of AI systems depends not only on technological sophistication but also on institutional capacity, regulatory alignment, and sustained human oversight.

7. Human Oversight and the Limits of Automation in AI-Enabled Financial Governance

Across the preceding chapters, a consistent theme has emerged: artificial intelligence (AI) is reshaping both banking operations and prudential supervision, but it does not displace the need for human judgement. Instead, it reconfigures how human authority is exercised within increasingly data-driven, algorithmically mediated financial systems.

Chapters 1–6 have shown that AI is embedded in both sides of the regulatory relationship. Banks deploy machine learning for credit scoring, fraud detection, AML monitoring, and trading decisions (Fuster et al., 2022), while supervisory authorities such as FINMA increasingly rely on SupTech tools, predictive analytics, and anomaly detection systems to enhance risk-based supervision (BIS, 2023; FINMA, 2024). This co-evolution produces what can be described as a dual algorithmic environment: both regulators and regulated entities operate through similar technological logics, infrastructures, and data dependencies.

Within this context, the key governance question is no longer whether AI should be used, but how its use can be reconciled with the normative, legal, and institutional requirements of prudential governance. This chapter argues that, despite rapid automation, human oversight remains structurally indispensable because financial supervision is fundamentally a domain of uncertainty, interpretation, and accountability that cannot be fully formalised into computational systems.

7.1 AI as Augmentation, Not Substitution

A central principle emerging from both regulatory practice and academic literature is that AI functions as an augmentative technology rather than a replacement for human decision-making. This principle is embedded in supervisory practice at FINMA and reflected in international regulatory frameworks, which consistently emphasise “meaningful human oversight” as a prerequisite for trustworthy AI deployment (OECD, 2024; European Union, 2024).

As discussed in Chapter 4, the rise of innovation units such as FINMA’s Data Innovation Lab illustrates how AI is integrated into supervisory workflows to support, rather than replace, human judgement. Similarly, Chapter 6 demonstrated that banks are required to embed AI systems within governance structures that retain clear lines of accountability and human responsibility.

This reflects a broader consensus in algorithmic governance literature: computational systems can process information, but they cannot assume normative responsibility for decisions with legal or societal consequences (Yeung, 2018). In prudential supervision, where decisions affect financial stability, institutional survival, and public trust, this distinction is particularly significant.

The widely cited principle that “technology assists – people decide” captures this institutional reality. AI may generate risk signals, predictive outputs, or anomaly detections, but supervisory interpretation, escalation, and intervention remain fundamentally human responsibilities.

7.2 The Epistemic Limits of Machine Learning in Financial Systems

A key limitation of AI systems lies in their dependence on historical data and statistical inference. Machine learning models identify patterns based on past observations, but financial systems are inherently dynamic, adaptive, and prone to structural discontinuities.

As emphasised in Chapter 2, financial crises often involve regime shifts, behavioural changes, and non-linear feedback effects that are not predictable from historical data alone (Taleb, 2007). This creates a fundamental epistemic limitation: models trained on past conditions may fail under novel or extreme scenarios.

The global financial crisis and more recent episodes of banking instability, including rapid digital bank runs, illustrate how financial behaviour can change abruptly in response to technological and informational shifts (FSB, 2023). In such environments, predictive models may misestimate risk precisely when accurate judgement is most needed.

From a supervisory perspective, this reinforces the importance of human interpretive capacity. While AI systems are effective at detecting correlations and incremental risk signals, they lack the ability to understand structural breaks, evolving institutional behaviour, or macro-financial context. As a result, human supervisors remain essential for interpreting whether algorithmic outputs reflect genuine systemic risk or merely statistical noise.
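The epistemic limitation described above can be made concrete with a minimal, purely illustrative sketch. The simulation below uses synthetic data and assumed parameters (a low-volatility “calm” regime and a fat-tailed “stressed” regime); it is not a model of any actual supervisory tool, but it shows how a risk estimate calibrated on historical data can understate losses once a structural break occurs.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Calm" regime: ten years of synthetic daily returns with low volatility.
calm = rng.normal(0.0, 0.01, 2500)

# 99% historical value-at-risk estimated from the calm data only.
var_99 = np.quantile(calm, 0.01)

# "Stressed" regime: a structural break brings fat tails and higher volatility.
stressed = rng.standard_t(df=3, size=250) * 0.02

# Share of stressed-regime returns breaching the calm-data VaR estimate.
# Under the model's own assumptions this should be about 1%.
breach_rate = np.mean(stressed < var_99)

print(f"99% VaR estimated from calm data: {var_99:.4f}")
print(f"Expected breach rate under the model: 0.010")
print(f"Observed breach rate after the break: {breach_rate:.3f}")
```

The observed breach rate far exceeds the nominal 1%, illustrating why a model validated entirely on pre-break data offers little assurance about post-break behaviour, and why human interpretation of regime change remains indispensable.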

7.3 Judgement, Context, and the Normative Nature of Supervision

Financial supervision is not a purely technical exercise; it is a normative and institutional practice embedded within legal and economic governance frameworks. As discussed in Chapter 3, risk-based supervision involves not only identifying vulnerabilities but also prioritising interventions based on systemic importance, proportionality, and public interest.

These determinations cannot be reduced to algorithmic optimisation. Supervisors must evaluate qualitative dimensions such as governance quality, risk culture, managerial integrity, and organisational resilience—factors that are difficult to quantify but central to prudential assessment (BCBS, 2015).

Similarly, banking decisions within institutions—such as credit allocation or AML escalation—require contextual judgement that extends beyond model outputs. While AI systems may flag anomalies or assign risk scores, human decision-makers are needed to interpret ambiguous cases, assess proportionality, and incorporate broader institutional knowledge.

This reflects a key insight from risk governance theory: risk is not simply discovered but constructed through institutional interpretation and decision-making processes (Power, 2004). AI systems therefore do not eliminate judgement; they reshape the informational environment within which judgement is exercised.

7.4 Explainability, Accountability, and the Problem of Delegated Decision-Making

One of the most persistent challenges identified across Chapters 4–6 is the issue of explainability. Many AI systems—particularly deep learning models—operate as opaque “black boxes”, making it difficult to trace how specific outputs are generated (Barocas, Hardt and Narayanan, 2019).

In financial governance, this opacity creates a fundamental accountability problem. Supervisory actions, credit decisions, or compliance interventions must be legally defensible and capable of explanation to regulators, courts, and affected parties. If decisions are based on systems that cannot be meaningfully interpreted, accountability becomes fragmented.

Regulatory frameworks therefore consistently reaffirm that accountability cannot be delegated to algorithmic systems. Institutions remain fully responsible for AI-assisted decisions, regardless of automation level or outsourcing arrangements (OECD, 2024; EBA, 2023). This principle directly connects to the governance challenges discussed in Chapter 6, where AI-related accountability must be embedded within enterprise risk management structures.

From a supervisory perspective, explainability is also essential for regulatory oversight. Authorities such as FINMA must be able to understand and challenge the logic of risk models used by banks. Without interpretability, supervision risks becoming dependent on outputs that cannot be independently validated.
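One widely used technique for interrogating an opaque model is the global surrogate: fit a simple, interpretable model to the black box’s own outputs and measure how faithfully it reproduces them. The sketch below is a toy illustration with hypothetical inputs and an invented nonlinear scoring function standing in for a bank’s risk model; the fidelity statistic (R²) indicates whether the simple explanation can be trusted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: two normalised indicators for 500 synthetic banks.
X = rng.uniform(0.0, 1.0, (500, 2))

# Stand-in "black box": an opaque nonlinear risk score (invented for illustration).
def black_box_score(X):
    return np.tanh(3.0 * X[:, 0]) - 0.5 * X[:, 1] ** 2

y = black_box_score(X)

# Global surrogate: fit an interpretable linear model to the opaque outputs.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Surrogate fidelity: how much of the black box's behaviour the simple
# model captures. A low value warns that the linear "explanation" misleads.
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()

print(f"surrogate coefficients: {coef.round(3)}")
print(f"surrogate fidelity R^2: {r2:.3f}")
```

The point for supervision is the fidelity check, not the surrogate itself: an authority that cannot independently approximate and validate a model’s behaviour has no basis for challenging its outputs.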

7.5 Automation Bias and the Risk of Over-Reliance on Algorithms

A further concern arising from increased automation is automation bias—the tendency of human actors to over-trust algorithmic outputs even when they are incorrect or contextually inappropriate (Parasuraman and Riley, 1997).

Within both banks and supervisory authorities, this risk is amplified by the perceived objectivity and technical sophistication of AI systems. As Chapters 4 and 5 highlighted, AI-generated outputs may be treated as authoritative even when underlying data quality, model assumptions, or contextual relevance are uncertain.

Empirical research shows that automation bias can reduce critical oversight, particularly in high-pressure environments where decision-makers rely on system recommendations to manage complex information flows (Lee and See, 2004). In financial supervision, this may lead to the uncritical acceptance of model outputs, weakening institutional resilience.

This creates a paradox: systems designed to reduce risk may, if over-relied upon, introduce new systemic vulnerabilities. As Taleb (2007) argues, overconfidence in predictive systems can be particularly dangerous in environments characterised by uncertainty and rare events.

Consequently, maintaining effective oversight requires institutional cultures that actively encourage critical engagement with algorithmic outputs rather than passive reliance on them.

7.6 The Organisational Challenge of Sustained Human Oversight

Ensuring meaningful human oversight in AI-enabled financial systems is not only a regulatory requirement but also an organisational challenge. As discussed in Chapter 6, banks increasingly deploy AI across multiple interconnected functions, making it difficult to maintain continuous and substantive human review at scale.

This raises questions about the scalability of oversight. As systems become more complex and embedded, there is a risk that human supervision becomes formalistic rather than substantive—satisfying regulatory requirements without genuinely influencing decision-making outcomes.

Addressing this challenge requires investment in interdisciplinary expertise combining finance, law, data science, cybersecurity, and ethics. Supervisors and banking professionals must develop sufficient technological literacy to interrogate model outputs critically, understand limitations, and identify potential failures.

This aligns with the broader shift described in Chapter 3, where both FINMA and international regulators increasingly operate as technologically integrated organisations. However, this transformation also increases institutional demands on human expertise rather than reducing them.

7.7 Towards Hybrid Human–AI Governance

The analysis across Chapters 1–6 suggests that the future of financial governance will be defined by hybrid human–AI systems rather than full automation. In such systems, AI performs computational and analytical tasks, while humans retain responsibility for interpretation, judgement, and accountability.

This hybrid model is consistent with international regulatory frameworks emphasising proportionality, risk sensitivity, and human oversight in high-impact AI applications (European Union, 2024; OECD, 2024). It also reflects the practical realities of FINMA’s supervisory approach, where AI tools support but do not replace prudential decision-making.

Within this framework, human oversight is not a residual safeguard but a core structural feature of governance. It ensures that financial supervision remains anchored in legal accountability, institutional legitimacy, and normative judgement.

7.8 Conclusion

The limits of automation in banking and supervision are not temporary technical constraints but structural features of financial governance. AI significantly enhances analytical capacity and operational efficiency, but it cannot replace the interpretive, normative, and accountable dimensions of human judgement.

Across the Swiss case study examined throughout this paper, a clear pattern emerges: effective financial governance depends on maintaining a balance between technological capability and human authority. FINMA’s evolving use of SupTech and banks’ increasing reliance on AI both reinforce the need for structured human oversight.

Ultimately, the central lesson is that AI transforms—but does not eliminate—the role of humans in financial governance. Technology may assist decision-making at scale, but responsibility, legitimacy, and ethical judgement remain fundamentally human functions within both banking supervision and institutional risk management.

8. Conclusion

This paper has examined the evolving role of artificial intelligence in banking supervision and financial governance, with a particular focus on Switzerland and the institutional development of FINMA. Across Chapters 1–7, it has demonstrated that AI is fundamentally reshaping both sides of the regulatory relationship: banks are embedding AI into core operational functions, while supervisory authorities are increasingly adopting SupTech tools to enhance risk-based supervision.

8.1 Key Findings

The first key finding is that banking supervision is undergoing a structural transformation from a retrospective, compliance-based model towards a predictive, data-driven, and continuously adaptive form of governance. This shift is driven by the growing complexity of financial systems, increasing data availability, and the emergence of advanced analytical tools capable of processing large-scale structured and unstructured information (BIS, 2023; Yeung, 2018).

Second, the Swiss case demonstrates that this transformation is institutionally embedded through FINMA’s evolving supervisory framework and the establishment of the Data Innovation Lab. These developments illustrate how regulators are building internal technological capacity to support risk-based supervision while maintaining institutional control over AI-driven processes.

Third, the paper shows that AI adoption within banks and supervisory authorities is producing a deeply interconnected socio-technical system. Both regulators and regulated institutions rely on similar infrastructures—machine learning models, cloud computing systems, and predictive analytics—creating new forms of systemic interdependence and shared technological vulnerability.

Fourth, the analysis identifies four persistent governance challenges: explainability, accountability, data governance, and operational resilience. Across both supervisory and banking contexts, AI systems introduce risks associated with model opacity, biased outputs, cybersecurity threats, and the diffusion of responsibility across human and machine actors (Barocas, Hardt and Narayanan, 2019; FSB, 2023).

8.2 The Central Argument

The central argument advanced throughout the paper is that AI does not replace traditional prudential supervision but fundamentally transforms its epistemological and institutional foundations. Supervisory authority is increasingly mediated through algorithmic systems that reshape how risk is identified, prioritised, and managed.

However, despite these transformations, the paper demonstrates that AI systems remain limited in their capacity to provide normative judgement, interpretive understanding, and accountability. Financial supervision is not solely a technical exercise; it is a governance practice embedded within legal, political, and ethical frameworks. As such, it requires human oversight to ensure legitimacy, proportionality, and contextual understanding.

8.3 The Role of Human Oversight

A consistent theme across the paper is the continuing importance of human judgement within AI-enabled financial systems. While AI enhances analytical capability and supports early risk detection, it cannot replace the interpretive and normative functions of supervisory decision-making.

Human oversight is essential for addressing the epistemic limitations of machine learning, particularly in environments characterised by uncertainty, structural change, and systemic risk. As financial crises often involve unprecedented events and behavioural shifts, reliance on historical data alone is insufficient for effective prudential governance (Taleb, 2007).

Moreover, human oversight is necessary to ensure accountability and legal responsibility. Regulatory decisions must remain explainable, contestable, and attributable to human actors, even when informed by algorithmic systems (OECD, 2024). This reinforces the principle that technological systems may assist decision-making but cannot assume responsibility for it.

8.4 Implications for Financial Regulation and Supervision

The findings of this paper have several broader implications for financial regulation.

First, regulatory frameworks must increasingly account for the hybrid nature of AI-enabled governance. Effective supervision will depend on integrating technological tools within institutional structures that preserve human judgement and accountability.

Second, regulators must develop stronger capabilities in data science, model risk management, and algorithmic auditing to ensure effective oversight of AI-driven financial systems. This requires ongoing institutional investment in expertise and infrastructure, as illustrated by FINMA’s Data Innovation Lab.

Third, international regulatory coordination will become increasingly important, given the cross-border nature of data flows, cloud infrastructure, and financial technology providers. AI governance cannot be managed effectively within isolated national frameworks.

8.5 Final Reflection

Ultimately, this paper has shown that the rise of artificial intelligence in banking and supervision represents not a technological endpoint but a governance transformation. AI enhances the efficiency and analytical capacity of financial institutions, but it simultaneously introduces new forms of risk, complexity, and dependency.

The Swiss case illustrates that successful integration of AI into financial supervision depends on maintaining a careful balance between innovation and oversight. FINMA’s approach demonstrates that technological advancement can be incorporated into prudential governance without abandoning core principles of accountability, proportionality, and institutional judgement.

In conclusion, the future of banking supervision is best understood as a hybrid system in which human and machine intelligence operate in tandem. AI expands what is possible in terms of data analysis and risk detection, but human judgement remains essential for interpretation, legitimacy, and responsibility. The enduring challenge for financial governance is therefore not to automate supervision, but to govern automation itself.

9. References

Arner, D.W., Barberis, J. and Buckley, R.P. (2020) ‘FinTech, RegTech, and the reconceptualization of financial regulation’, Northwestern Journal of International Law & Business, 37(3), pp. 371–413.

Avgouleas, E. (2009) The Global Financial Crisis, Behavioural Finance and Financial Regulation: In Search of a New Orthodoxy. London: City University London.

Bamberger, K.A. (2010) ‘Technologies of Compliance: Risk Regulation and the Future of Corporate Crime Control’, Law & Society Review, 44(3), pp. 619–640.

Bank for International Settlements (BIS) (2024) Intelligent financial system: how AI is transforming finance. Basel: BIS.

Barocas, S., Hardt, M. and Narayanan, A. (2019) Fairness and Machine Learning. Cambridge, MA: MIT Press.

Basel Committee on Banking Supervision (BCBS) (2011) Basel III: A global regulatory framework for more resilient banks and banking systems. Basel: Bank for International Settlements.

Basel Committee on Banking Supervision (BCBS) (2015) Corporate governance principles for banks. Basel: Bank for International Settlements.

Basel Committee on Banking Supervision (BCBS) (2021) Principles for Operational Resilience. Basel: Bank for International Settlements.

Bholat, D. (2015) ‘Big data and central banks’, Bank of England Quarterly Bulletin, 55(2), pp. 216–225.

Dias, D. and Staschen, S. (2017) Innovative Regulatory Approaches with RegTech and SupTech. Washington, DC: CGAP.

Doshi-Velez, F. and Kim, B. (2017) ‘Towards a Rigorous Science of Interpretable Machine Learning’, arXiv preprint arXiv:1702.08608.

European Banking Authority (EBA) (2023) Machine Learning for IRB Models: Follow-up Report. Paris: European Banking Authority.

Financial Stability Board (FSB) (2023) The Financial Stability Implications of Digitalisation. Basel: Financial Stability Board.

FINMA (2024) Annual Report and Supervisory Developments. Bern: Swiss Financial Market Supervisory Authority.

Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T. and Walther, A. (2022) ‘Predictably unequal? The effects of machine learning on credit markets’, Review of Financial Studies, 35(1), pp. 43–75.

Goodhart, C. (2011) The Basel Committee on Banking Supervision: A History of the Early Years 1974–1997. Cambridge: Cambridge University Press.

European Union (2024) Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act). Official Journal of the European Union.

Financial Stability Board (FSB) (2022) Supervisory and regulatory approaches to climate-related risks and operational resilience. Basel: FSB.

Financial Stability Board (FSB) (2023) Enhancing third-party risk management and operational resilience. Basel: FSB.

Katzenbach, C. and Ulbricht, L. (2019) ‘Algorithmic governance’, Internet Policy Review, 8(4), pp. 1–18.

Lee, J.D. and See, K.A. (2004) ‘Trust in Automation: Designing for Appropriate Reliance’, Human Factors, 46(1), pp. 50–80.

OECD (2024) Regulatory Approaches to Artificial Intelligence in Finance. Paris: OECD Publishing.

Parasuraman, R. and Riley, V. (1997) ‘Humans and automation: use, misuse, disuse, abuse’, Human Factors, 39(2), pp. 230–253.

Plantin, J.-C., Lagoze, C., Edwards, P.N. and Sandvig, C. (2018) ‘Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook’, New Media & Society, 20(1), pp. 293–310.

Power, M. (2004) The Risk Management of Everything: Rethinking the Politics of Uncertainty. London: Demos.

Rouvroy, A. and Berns, T. (2013) ‘Algorithmic governmentality and prospects of emancipation’, Réseaux, 177(1), pp. 163–196.

Swiss Bankers Association (2024) What sets the Swiss financial centre apart. Basel: Swiss Bankers Association.

Swiss Risk Association (2025) Strengthening Operational Resilience: FINMA Guidance 05/2025 in Practice. 21 May 2025.

Taleb, N.N. (2007) The Black Swan: The Impact of the Highly Improbable. New York: Random House.

Wachter, S., Mittelstadt, B. and Floridi, L. (2017) ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’, International Data Privacy Law, 7(2), pp. 76–99.

Yeung, K. (2018) ‘Algorithmic regulation: a critical interrogation’, Regulation & Governance, 12(4), pp. 505–523. doi:10.1111/rego.12158

Zetzsche, D.A., Buckley, R.P., Arner, D.W. and Barberis, J.N. (2020) ‘Decentralized finance’, Journal of Financial Regulation, 6(2), pp. 172–203.

Zuboff, S. (2019) The Age of Surveillance Capitalism. London: Profile Books.