Artificial Intelligence and Innovation in Swiss Banking
This paper argues that artificial intelligence is fundamentally restructuring the banking industry by transforming customer interaction, organisational governance, market competition, and operational infrastructure into increasingly integrated socio-technical systems. In these systems, competitive advantage depends on the effective coordination of data, algorithms, governance frameworks, and human expertise.
Sanchez P.
5/12/2026


Abstract
The rapid integration of artificial intelligence (AI) into banking is fundamentally reshaping how financial institutions operate, compete, and create value. This paper analyses key insights from the 2026 “Innovationen im Banking” conference organised by the Institute of Financial Services Zug. The conference explored how AI is transforming banking operations, customer engagement, and service delivery, with particular attention to generative AI, fraud prevention, marketing, and advisory services.
The findings highlight that AI is no longer a peripheral innovation tool but has evolved into a strategic infrastructure embedded across core banking functions. At the same time, new forms of AI-driven interaction—such as autonomous payment agents and generative customer service systems—are creating both efficiency gains and new governance, regulatory, and ethical challenges. The conference further emphasised emerging risks related to deepfakes, synthetic identity fraud, and algorithmic mediation of customer decision-making, which collectively reshape the nature of trust in financial services.
Drawing on conference contributions and recent academic literature, the paper argues that competitive advantage in modern banking is increasingly determined by the integration of technology, organisational capabilities, governance frameworks, and customer trust. Rather than replacing human functions, AI is reshaping them into hybrid socio-technical systems in which value creation depends on the alignment of data, algorithms, institutions, and human expertise.
1. Introduction
Artificial intelligence has become one of the most transformative technologies in the financial services industry. Banks increasingly utilise AI-driven systems to improve operational efficiency, personalise customer interactions, automate decision-making, enhance fraud detection, and support strategic planning (Davenport and Ronanki, 2018). In parallel, advances in machine learning, deep learning, and generative AI have accelerated the digital transformation of banking, enabling new forms of service delivery and fundamentally altering traditional operating models (LeCun, Bengio and Hinton, 2015; Vial, 2019).
In Switzerland, this transformation is particularly significant due to the strong position of retail and private banks operating in a highly regulated, trust-based financial environment. Swiss banks face increasing pressure from FinTech competitors, platform ecosystems, and rapidly evolving customer expectations shaped by digital-native experiences (Gomber, Koch and Siering, 2017; Zetzsche et al., 2018). As a result, AI adoption is no longer optional but has become a central component of long-term strategic positioning, particularly in data-driven credit scoring, risk management, and customer analytics (Fuster et al., 2022).
Against this backdrop, the 2026 “Innovationen im Banking” conference provided a comprehensive overview of current developments in banking innovation, with a strong focus on the evolving role of AI in banking systems. Discussions centred on generative AI in customer service, agentic commerce, AI-driven fraud and cybersecurity risks, customer-centric operating models, AI visibility in digital marketing, pension advisory services, and organisational readiness for digital transformation.
This paper critically analyses the key insights presented at the conference and situates them within contemporary academic literature on digital transformation, artificial intelligence, behavioural finance, and organisational theory. Rather than treating AI as a purely technological phenomenon, the paper adopts a socio-technical perspective, examining how AI reshapes governance structures, customer relationships, competitive dynamics, and institutional trust within banking systems (Siau and Yang, 2017; Rossi, 2018).
The central argument advanced in this paper is that AI in banking is evolving from a set of tools into a multi-layered strategic infrastructure. Its impact extends beyond automation, fundamentally reshaping how value is created, how trust is established, and how financial institutions compete in increasingly algorithmically mediated markets (Brynjolfsson, Rock and Syverson, 2019; Makridakis, 2017).
2. AI as a Strategic Infrastructure in Banking
One of the dominant themes emerging from the conference was the transition of artificial intelligence (AI) from isolated pilot initiatives toward enterprise-wide strategic infrastructure. Representatives from UBS described AI not as a collection of stand-alone tools, but as an organisational “backbone” embedded across compliance, customer advisory, risk management, fraud detection, and digital banking services. This reflects a broader transformation within the banking sector in which AI increasingly supports proactive, predictive, and continuously adaptive banking processes rather than merely automating routine tasks.
Academic literature strongly supports this shift. Brynjolfsson and McAfee (2017) argue that AI creates value not simply through automation, but through the redesign of organisational workflows, managerial decision-making, and operational structures. Extending this argument, Verhoef et al. (2021) emphasise that successful digital transformation requires the alignment of technology, governance, organisational culture, and strategic leadership. In banking specifically, AI is increasingly becoming part of core institutional infrastructure, shaping how organisations manage operational resilience, regulatory compliance, and customer engagement simultaneously.
Contemporary research further demonstrates that AI adoption in banking is now closely linked to enterprise governance and institutional capability rather than technological experimentation alone. Ratzan and Rahman (2024) show that responsible AI implementation in banking depends on integrated governance frameworks capable of balancing innovation with accountability, transparency, and regulatory oversight. Their findings suggest that mature AI adoption requires banks to institutionalise ethical review mechanisms, risk controls, and cross-functional governance structures across the organisation.
The hybrid governance model presented by UBS — combining centralised AI governance with decentralised innovation teams — closely reflects the theory of organisational ambidexterity. O’Reilly and Tushman (2013) argue that successful organisations simultaneously pursue exploitation (efficiency, standardisation, and control) and exploration (innovation, experimentation, and adaptability). Within banking, this balance is particularly important because financial institutions must innovate rapidly while operating under stringent regulatory and risk-management requirements.
More recent empirical studies reinforce the relevance of ambidextrous governance in financial institutions. Mulyana, Rusu and Perjons (2024), in their study of digital transformation governance in banking, identify “ambidextrous IT governance mechanisms” as critical to successful enterprise transformation. Their findings highlight the importance of combining central oversight, enterprise architecture, and risk management with agile innovation practices and decentralised development capabilities. Similarly, Aziz and Long (2023) demonstrate that strong data analytics capabilities enhance organisational ambidexterity within the banking sector by enabling institutions to integrate exploratory innovation with operational efficiency.
The conference discussions also reflected a growing recognition that AI governance has become strategically inseparable from banking governance itself. Emerging scholarship increasingly frames AI as an institutional force capable of reshaping organisational norms, structures, and managerial authority. Rudko et al. (2024) argue that AI should not be understood solely as a technological tool, but as a transformative organisational mechanism that alters institutional practices and governance logics. This perspective helps explain why leading banks are investing heavily in enterprise-wide AI governance frameworks, internal AI ethics boards, and cross-functional supervisory structures.
Furthermore, the integration of AI into core banking infrastructure has intensified regulatory and supervisory concerns. Recent work on algorithmic governance in banking highlights that AI systems increasingly influence credit scoring, anti-money-laundering procedures, compliance monitoring, and prudential supervision, thereby raising questions concerning accountability, model risk, and systemic stability. Consequently, banks are under growing pressure to establish governance models capable of ensuring transparency, auditability, and regulatory compliance across the full AI lifecycle.
Overall, the conference discussions illustrated that AI in banking is evolving from a peripheral innovation initiative into a foundational organisational capability. Competitive advantage no longer derives solely from possessing advanced AI tools, but from the ability to integrate AI strategically across governance structures, operational processes, and organisational culture. In this sense, AI increasingly functions as a form of strategic infrastructure underpinning digital transformation, institutional resilience, and long-term competitiveness within the financial sector.
3. Generative AI and Customer Support Transformation
The presentation by representatives from Migros Bank illustrated how generative artificial intelligence (GenAI) is being operationalised within customer support and case management systems. Rather than deploying GenAI as a discrete conversational assistant or front-end chatbot, Migros Bank described a more structurally embedded approach in which large language models (LLMs) are integrated across the entire customer case lifecycle. This includes intake, triage, information retrieval, response drafting, escalation handling, and post-interaction documentation. In this configuration, GenAI functions less as an interface layer and more as an infrastructural capability supporting end-to-end service orchestration.
This implementation reflects a broader shift in both industry practice and academic understanding of generative AI’s value proposition. Recent research emphasises that the performance gains from AI systems are increasingly contingent not on model sophistication alone, but on how effectively they are embedded within organisational workflows, data ecosystems, and decision architectures. Kaplan and Haenlein (2020) argue that the strategic value of AI arises from its contextual deployment within firm-specific processes, particularly where proprietary data and domain knowledge shape model outputs and application relevance. In the context of banking, this means that the same underlying LLM can yield significantly different value depending on how deeply it is integrated into internal knowledge systems and operational processes.
Conference discussions further highlighted an important strategic shift: as foundation models become more widely available and increasingly commoditised, competitive differentiation is moving away from model ownership toward data advantage and workflow integration. This view is strongly supported in the recent AI strategy literature, which suggests that the long-term value of generative AI systems will depend on proprietary datasets, institutional memory, and embedded process design rather than access to frontier models alone (Dwivedi et al., 2023). In banking specifically, this reinforces the idea that customer interaction histories, regulatory documentation, and internal knowledge repositories become critical sources of defensible advantage when properly integrated into AI systems.
From a theoretical standpoint, this dynamic aligns closely with the resource-based view (RBV) of the firm. Barney (1991) argues that sustained competitive advantage arises from resources that are valuable, rare, inimitable, and non-substitutable (VRIN). Within banking, GenAI systems themselves are increasingly becoming non-rare due to widespread access to similar foundational models. However, the organisational assets surrounding these systems—such as proprietary customer data, long-term client relationships, compliance expertise, and institutional trust—remain highly differentiated and difficult to replicate. Consequently, GenAI becomes strategically valuable not as a standalone capability, but as a mechanism for activating and amplifying these underlying resources.
More recent extensions of RBV in digital contexts further reinforce this interpretation. Scholars argue that in data-driven industries, competitive advantage is increasingly derived from “data-enabled capabilities” rather than traditional physical or technological assets (Mikalef et al., 2020). In banking, this implies that GenAI systems achieve strategic value when they are tightly coupled with high-quality, structured, and context-rich internal data, enabling more accurate reasoning, better decision support, and more consistent customer engagement outcomes.
The Migros Bank case also demonstrates the operational implications of human–AI collaboration in service environments. Huang and Rust (2018) show that AI can enhance service productivity and customer satisfaction when it is designed to complement rather than replace human employees. Their framework highlights that AI is particularly effective in analytical and mechanical service tasks, while human agents remain essential for empathic, complex, and high-stakes interactions. In practice, this suggests that GenAI systems are most effective when they operate as “co-pilots” within service workflows—supporting human agents by generating responses, summarising case histories, and recommending actions, while leaving final judgement and escalation authority to human staff.
This hybrid model was implicitly reflected in Migros Bank’s approach, where GenAI supported case handling but remained embedded within human-supervised processes. Such designs also align with emerging regulatory expectations in financial services, where explainability, traceability, and human oversight are increasingly required in AI-mediated decision systems. As a result, GenAI adoption in banking is not simply a technological upgrade but a redesign of socio-technical systems that redistribute tasks, responsibilities, and decision rights between humans and machines.
Overall, the conference highlighted that generative AI is reshaping customer support in banking not by replacing existing systems, but by reconfiguring them into more integrated, data-driven, and adaptive service architectures. The strategic implication is clear: the competitive advantage of GenAI in banking lies less in model access and more in the depth of organisational integration, the quality of proprietary data, and the design of human–AI collaboration frameworks.
4. Agentic Commerce and Autonomous Payments
A particularly forward-looking theme discussed at the conference was “agentic commerce,” presented by representatives from Mastercard. Agentic commerce refers to the use of AI agents capable of autonomously initiating, negotiating, and executing purchase and payment transactions on behalf of users. In this emerging paradigm, AI systems move beyond recommendation or assistance roles and begin to function as delegated economic actors operating within user-defined constraints such as budget limits, merchant preferences, risk thresholds, and approval rules.
This shift represents a structural change in digital commerce architectures. Traditional payment systems are built around explicit, human-initiated actions—where purchase intent, authentication, and authorisation are sequential and user-driven. In contrast, agentic systems introduce continuous, autonomous decision-making layers that can evaluate options, compare merchants, and execute transactions without direct human intervention at the point of sale. Industry discussions at the conference highlighted that this evolution is being driven by the increasing integration of large language models with payment infrastructure, identity systems, and API-based commerce ecosystems.
The development can be conceptualised as a progression through increasingly autonomous transaction modes. First, single-merchant autonomous purchases involve AI agents executing predefined purchases from approved vendors under strict constraints. Second, multi-merchant orchestrated purchases allow agents to compare and select between suppliers dynamically, optimising for price, availability, or delivery conditions. Third, delayed and condition-based transactions introduce temporal and contextual autonomy, where payments are executed only when certain conditions are met (e.g., price thresholds, inventory availability, or user activity patterns). Together, these stages illustrate a shift from reactive payment systems toward anticipatory and self-executing financial agents.
From a theoretical perspective, this evolution aligns with broader research on autonomous intelligent agents in socio-technical systems. Shrestha, Ben-Menahem and von Krogh (2019) argue that the adoption of AI agents in organisational and economic contexts fundamentally depends on the development of trust, explainability, and governance mechanisms. Their work emphasises that as autonomy increases, so too does the need for transparent decision logic and robust accountability structures, particularly in high-stakes domains such as finance. In agentic commerce, this implies that users and regulators must be able to understand not only what transactions occurred, but why and under what constraints they were executed.
The conference discussions also underscored the regulatory complexity introduced by autonomous payment systems. Key concerns include liability attribution (who is responsible when an AI agent makes a suboptimal or fraudulent purchase), fraud prevention in agent-mediated environments, and ensuring meaningful user consent when decisions are delegated over time. These issues extend beyond traditional payment regulation because decision authority is partially decoupled from real-time human control, raising questions about how consent should be defined, recorded, and verified in continuously operating systems.
In response to these challenges, Mastercard’s proposed “Agent Pay Framework” represents an attempt to establish foundational standards for secure and accountable AI-driven transactions. The framework focuses on embedding security controls, identity verification protocols, and transaction-level transparency into agentic payment flows. It also seeks to define governance mechanisms for delegating financial authority to AI systems while maintaining user control boundaries. Such initiatives reflect a broader industry trend toward standardisation of AI governance in financial infrastructure, particularly as autonomous systems begin to interact directly with regulated payment rails.
More broadly, the emergence of agentic commerce aligns with ongoing academic debates about algorithmic decision-making in economic systems. Recent literature highlights that as AI systems gain autonomy in financial contexts, they increasingly blur the boundaries between user intent, system inference, and machine-initiated action (de Bruin, 2023). This raises fundamental questions about agency, accountability, and the nature of economic decision-making in AI-mediated markets. In this context, payment systems are no longer passive infrastructures but active computational environments in which decisions are continuously generated, evaluated, and executed.
Overall, the conference highlighted that agentic commerce represents a significant inflection point in the evolution of digital payments. Rather than simply digitising transactions, AI agents are beginning to reshape the structure of economic participation itself. The key strategic challenge for financial institutions will be to design systems that enable autonomy while preserving trust, transparency, and regulatory compliance. In this sense, the future of payments is not only automated but increasingly agent-mediated, requiring new governance frameworks that reconcile machine autonomy with human accountability.
5. Deepfakes and AI-Driven Fraud Risks
Another critical theme emerging from the conference was the escalating threat posed by deepfakes and synthetic identity fraud within financial services. Representatives from PXL Vision presented data indicating a marked increase in AI-enabled fraud attempts in 2025, alongside global losses from synthetic identity fraud estimated at between USD 20 and 40 billion annually. These figures reflect a rapidly evolving fraud landscape in which generative AI is lowering the technical barrier for producing highly convincing synthetic identities, including forged documents, cloned voices, and manipulated video evidence used to bypass traditional verification systems.
Deepfake technologies significantly challenge core banking processes such as customer onboarding (Know Your Customer, KYC), identity verification, anti-money laundering (AML) compliance, and payment authentication. Unlike earlier forms of digital fraud, which often relied on static image manipulation or simple document forgery, contemporary generative models enable real-time or near-real-time synthesis of multimodal identity signals. This includes voice imitation for telephone banking authentication, video-based “liveness” spoofing, and synthetic identity construction that blends real and fabricated personal data to evade detection systems.
Recent academic literature strongly supports the view that deepfakes represent a structural threat to trust in digital systems. Westerlund (2019) argues that deepfake technologies undermine epistemic trust in mediated communication by eroding users’ ability to distinguish authentic from synthetic content. In financial services, this erosion of trust is particularly critical because authentication processes depend heavily on the assumption that digital representations correspond to real individuals. As synthetic media becomes more realistic, the reliability of biometric and behavioural authentication systems is increasingly contested.
Building on this, Kietzmann, Lee and McCarthy (2020) highlight that organisations must respond to generative AI-driven manipulation by developing robust detection systems, multi-layered verification architectures, and AI-based countermeasures. Their work emphasises that the challenge is not merely technological but organisational, requiring continuous adaptation as adversarial AI systems evolve. In the context of banking, this implies that fraud detection systems must move beyond static rule-based models toward adaptive machine learning systems capable of identifying subtle anomalies across identity signals, behavioural patterns, and transactional metadata.
More recent research has further advanced this perspective by framing deepfake detection as an arms race between generative and discriminative AI systems. Mirsky and Lee (2021) describe deepfakes as part of a broader “AI-synthetic media ecosystem” in which detection models must constantly adapt to increasingly sophisticated generative techniques. This dynamic is particularly relevant for financial institutions, where adversarial actors can rapidly iterate on fraud strategies, exploiting weaknesses in identity verification pipelines.
Conference discussions also highlighted that synthetic identity fraud is especially problematic because it does not rely on a single point of failure. Instead, it exploits systemic vulnerabilities across fragmented identity ecosystems, including credit bureaus, digital onboarding platforms, and third-party verification services. This aligns with findings in the broader cybersecurity literature, which show that identity fraud increasingly operates as a distributed process rather than a single transactional event (Jain et al., 2022). In banking, this necessitates end-to-end verification frameworks that integrate data across multiple institutional layers.
Importantly, speakers emphasised that cybersecurity can no longer be treated as a peripheral or standalone function within AI strategy. Instead, it must be embedded directly into AI governance and digital transformation architectures. This reflects a growing consensus in both industry and academia that AI systems are inherently dual-use: they simultaneously enhance defensive capabilities (e.g. anomaly detection, behavioural analytics) and offensive capabilities (e.g. fraud generation, impersonation at scale). Consequently, the net security impact of AI depends heavily on governance design, model oversight, and organisational readiness.
Recent research supports this integrated perspective. Ransbotham et al. (2022) argue that organisations adopting AI at scale must simultaneously invest in “trustworthy AI ecosystems” that combine technical safeguards, governance structures, and human oversight mechanisms. In financial services, this includes explainability requirements, audit trails for AI-driven decisions, and continuous monitoring of model behaviour under adversarial conditions. Without such integration, AI adoption can inadvertently increase systemic risk exposure rather than reduce it.
Overall, the conference underscored that deepfakes and AI-driven fraud represent not just an incremental increase in cybersecurity risk, but a qualitative shift in the nature of digital trust. As synthetic media becomes indistinguishable from authentic content, financial institutions must fundamentally rethink identity verification, authentication, and fraud prevention architectures. The strategic imperative is therefore to embed adversarial resilience directly into AI-enabled banking systems, ensuring that innovation in generative AI is matched by equally sophisticated developments in detection, governance, and institutional trust frameworks.
6. Customer-Centric Operating Models and Organisational Readiness
The conference further emphasised the importance of developing customer-centric target operating models (TOMs) as a foundational element of successful digital transformation in banking. A key contribution was the presentation by Prof. Dr. Nils Hafner, who highlighted that despite significant investment in digital channels and analytics tools, many banks continue to struggle with systematically embedding customer feedback into organisational decision-making processes. Rather than being fully integrated into operational and strategic workflows, customer insights often remain fragmented across departments, limiting their impact on product development, service design, and strategic prioritisation.
Several structural barriers to organisational readiness were identified. These included insufficient data infrastructure, weak data literacy and analytical competencies, limited end-to-end automation capabilities, and underdeveloped change management capacity. Taken together, these constraints suggest that the challenge is not merely technological adoption, but the ability of institutions to translate data into actionable organisational intelligence. In many cases, banks possess substantial volumes of customer data but lack the integrated systems and organisational practices required to convert this data into consistent, real-time decision-making input.
These observations align closely with established digital transformation research. Kane et al. (2015) argue that digital maturity is driven less by the deployment of advanced technologies and more by organisational factors such as leadership commitment, cultural adaptability, and strategic coherence. Their empirical findings suggest that digitally mature organisations are distinguished by their ability to embed digital technologies into core processes and decision-making structures, rather than treating them as peripheral enhancements. In the context of banking, this implies that even highly advanced AI and analytics systems will fail to deliver value unless accompanied by corresponding organisational redesign and capability development.
More recent literature reinforces this perspective by highlighting the importance of data-driven organisational capabilities as a prerequisite for customer-centric transformation. Mikalef et al. (2020) argue that big data analytics capabilities must be understood as a combination of technological infrastructure, managerial competencies, and organisational culture. Without alignment across these dimensions, firms are unable to effectively leverage customer data for strategic decision-making or service innovation. This is particularly relevant in banking, where legacy systems, regulatory constraints, and siloed organisational structures often impede the integration of customer insights across the enterprise.
The conference discussions also placed strong emphasis on the centrality of customer journeys in modern banking operating models. This reflects a broader theoretical shift towards service-dominant logic (SDL), which conceptualises value as co-created through interactions between organisations and customers rather than being embedded in products or services alone. Vargo and Lusch (2004; 2008) argue that value emerges through ongoing service exchange processes, positioning customers as active participants in value creation rather than passive recipients. In financial services, this perspective has significant implications for operating model design, as it requires banks to focus on end-to-end customer experiences rather than isolated product transactions.
Recent extensions of service-dominant logic in digital contexts further highlight the increasing importance of customer journeys as dynamic, data-rich ecosystems. Lemon and Verhoef (2016) demonstrate that customer journeys are non-linear, multi-channel, and continuously evolving, requiring organisations to integrate data across touchpoints to understand and shape customer experiences effectively. In banking, this necessitates the development of integrated CRM systems, real-time analytics capabilities, and cross-functional teams capable of responding to customer needs in an agile and coordinated manner.
Importantly, the challenges identified at the conference also reflect broader issues of organisational change management in digital transformation initiatives. Successful implementation of customer-centric TOMs requires not only technological investment but also significant shifts in organisational structure, incentives, and employee capabilities. Recent research by Singh and Hess (2020) highlights that digital transformation success depends on the establishment of clear digital leadership roles and governance structures that align business and IT functions around shared customer outcomes.
Overall, the conference underscored that customer-centric operating models are not simply a design choice but a strategic necessity in digitally transformed banking environments. However, achieving genuine customer centricity requires more than the deployment of analytics tools or CRM systems. It demands deep organisational change, including the development of data capabilities, cultural alignment around customer value creation, and the integration of customer insights into core decision-making processes. In this sense, organisational readiness emerges as a critical determinant of whether banks can successfully translate digital and AI investments into sustained customer and business value.
7. AI Visibility and the Future of Banking Marketing
A novel and increasingly strategic topic discussed at the conference was “AI visibility” — the idea that in AI-mediated environments, customer discovery and selection of financial services are increasingly shaped by generative AI systems such as ChatGPT, Gemini, and Perplexity. Researchers from the Lucerne University of Applied Sciences argued that these systems are beginning to act as intermediaries of choice, often presenting users with a small number of curated recommendations rather than long lists of search results. As a result, banks risk becoming “invisible” within AI-mediated customer journeys if they are not explicitly surfaced, ranked, or interpreted by these systems.
This development represents a structural shift in digital marketing dynamics. Traditional search engine optimisation (SEO) has historically focused on ranking within keyword-based search engines. However, generative AI systems rely on probabilistic language modelling, retrieval-augmented generation, and structured knowledge synthesis, meaning that visibility is increasingly determined by data accessibility, semantic structure, and model interpretability rather than backlink ecosystems or keyword density. In this emerging environment, financial institutions must rethink how they design and publish digital content so that it is machine-readable, semantically structured, and contextually relevant for AI systems that mediate customer decisions.
From a theoretical perspective, this shift aligns with broader research on algorithmic mediation in digital markets. Pasquale (2015) argues in The Black Box Society that algorithmic systems increasingly shape visibility, access, and power in digital environments, often in ways that are opaque to both users and firms. In the context of banking, this implies that competitive positioning is no longer determined solely by brand strength or traditional marketing channels, but also by how effectively firms are represented within algorithmic recommendation systems that mediate consumer attention and choice.
More recent academic work on platform and AI-mediated markets reinforces this concern. Kleinberg et al. (2018) highlight that algorithmic decision systems can systematically reshape market outcomes by influencing which options are surfaced, ranked, or suppressed. In financial services, this creates a scenario in which AI systems effectively function as gatekeepers of customer attention, potentially concentrating demand around a small number of highly visible providers while marginalising others. This introduces new forms of competitive asymmetry that are not fully captured by traditional marketing or competition theory.
The implications for banking marketing strategy are significant. If AI systems increasingly mediate customer decision-making, then visibility becomes dependent on “AI readability” — the extent to which an organisation’s products, services, and attributes can be correctly interpreted, retrieved, and prioritised by generative models. This requires banks to move beyond conventional SEO strategies toward structured data ecosystems, including schema markup, API-accessible product information, standardised financial product descriptors, and consistent cross-platform metadata.
This transformation also intersects with emerging research on explainable and transparent recommendation systems. Ricci et al. (2022) argue that modern recommender systems are evolving toward hybrid architectures that combine machine learning, knowledge graphs, and contextual signals. In such systems, data quality, semantic consistency, and ontological structure become key determinants of visibility. For banks, this implies that marketing and data architecture functions must become increasingly integrated, ensuring that product information is not only accurate for human users but also optimally interpretable by machine systems.
The conference discussions further highlighted that AI visibility may produce asymmetric competitive effects across the banking sector. Large incumbent banks may benefit from strong data infrastructure, brand recognition, and extensive digital footprints that make them more likely to be surfaced by AI systems. However, smaller niche banks could also gain disproportionate advantages if they successfully optimise for specific customer segments or specialised queries, thereby achieving high relevance in narrow but valuable recommendation contexts. This reflects a broader trend toward “long-tail visibility” in algorithmic markets, where relevance within specific contexts can outweigh general market dominance.
Importantly, AI visibility also introduces new risks related to informational control and market transparency. If AI systems increasingly act as intermediaries of financial choice, then the criteria used for recommendation become critical determinants of market fairness and consumer autonomy. Recent scholarship on algorithmic gatekeeping emphasises that such systems may unintentionally encode biases, prioritise commercially incentivised content, or obscure the rationale behind recommendations (Gillespie, 2018). In regulated industries such as banking, this raises important questions about accountability, disclosure, and regulatory oversight of AI-mediated customer journeys.
Overall, the conference underscored that AI visibility is emerging as a foundational concept in the future of banking marketing. It extends traditional notions of digital presence into a new paradigm where visibility is no longer primarily determined by search engines or human browsing behaviour, but by how effectively organisations are represented within generative AI systems. For banks, the strategic challenge is therefore twofold: to ensure that their offerings remain discoverable in AI-mediated environments, and to adapt their marketing, data, and content strategies to the structural logic of algorithmic intermediaries shaping customer decision-making.
8. Trust, Human Advice, and Pension Services
Beyond technological innovation, the conference strongly reinforced the continued strategic importance of human trust and advisory services in banking. A particularly illustrative case was the presentation by Thurgauer Kantonalbank on its TKB Pension Centre, which demonstrates how traditional financial institutions continue to differentiate themselves through credible, transparent financial advice that customers perceive as independent. Despite rapid advances in AI-driven advisory tools, the case highlighted that trust-based human interaction remains central in high-stakes financial domains such as pensions, retirement planning, and long-term wealth management.
An interesting behavioural insight from the presentation concerned the bank’s deliberate choice of terminology: positioning “pension” rather than “retirement planning” as its core customer-facing concept. This decision reflects a nuanced understanding of cognitive framing effects in financial decision-making. From a behavioural economics perspective, the choice of wording significantly shapes how individuals perceive complexity, urgency, and emotional relevance. Kahneman and Tversky (1979) demonstrate through prospect theory that individuals do not evaluate outcomes purely rationally, but instead rely on mental shortcuts and framing cues that systematically influence preferences and risk perception. In this context, “pension” functions as a more concrete and emotionally accessible frame than the more abstract notion of “retirement planning,” thereby reducing perceived cognitive load and increasing engagement.
This insight is strongly supported by subsequent behavioural finance literature, which shows that framing effects, salience, and mental accounting play a critical role in financial decision-making, particularly in long-term planning contexts (Thaler, 2015). In pension services specifically, where decisions involve uncertainty, time discounting, and complex trade-offs, the way information is presented can significantly influence uptake, contribution levels, and long-term saving behaviour.
The conference further emphasised that despite the rapid expansion of AI-powered advisory systems, human expertise continues to play a critical role in financial services, particularly in complex, emotionally sensitive, or high-value decision contexts. This aligns with empirical research by Belanche, Casaló and Flavián (2019), which shows that while customers increasingly accept automated service technologies, they still place high value on human interaction when dealing with complex financial products or when trust and reassurance are required. Their findings suggest that rather than fully replacing human advisors, digital technologies are more likely to reshape advisory roles into hybrid models combining automation with human oversight.
More recent research in service and financial technology adoption further supports this complementary view of human–AI interaction. Wirtz et al. (2021) argue that AI-based service systems are most effective when designed to augment rather than replace human employees, particularly in contexts requiring empathy, ethical judgement, and contextual understanding. In financial advisory services, this implies that AI can support tasks such as data analysis, scenario modelling, and portfolio simulation, while human advisors remain responsible for relational engagement, trust-building, and final decision validation.
From a theoretical standpoint, the continued importance of human advisory services can also be interpreted through the lens of trust theory in financial intermediation. Trust is a foundational component of financial decision-making, particularly in environments characterised by complexity, information asymmetry, and long-term commitment. As AI systems become more prevalent in financial advice, the nature of trust itself becomes more distributed, shifting from interpersonal trust between advisor and client toward hybrid trust relationships involving institutions, algorithms, and human oversight structures.
The conference discussions also highlighted that the increasing automation of financial advice may paradoxically increase the value of human advisors in certain segments of the market. As algorithmic tools become standardised and widely accessible, differentiation increasingly depends on relational, contextual, and interpretive capabilities that are more difficult to automate. This suggests a potential bifurcation in advisory services: routine and low-complexity tasks becoming increasingly automated, while high-complexity, trust-intensive advisory roles become more specialised and human-centric.
Overall, the case of the TKB Pension Centre illustrates that even in a rapidly digitising and AI-enhanced financial landscape, human trust, framing, and interpersonal advisory relationships remain essential components of value creation. Rather than being displaced by technology, human advisory services are being reconfigured into more targeted, high-trust roles that complement AI-driven analytics and automation. In this sense, the future of financial advice is not purely digital or human, but increasingly hybrid—integrating behavioural insights, technological augmentation, and relational trust mechanisms into a unified service model.
9. Conclusion
The analysis of the 2026 “Innovationen im Banking” conference demonstrates that artificial intelligence is reshaping the banking industry across technological, organisational, and strategic dimensions. Rather than functioning as an incremental improvement to existing systems, AI is emerging as a foundational infrastructure that redefines how banks operate, interact with customers, and compete in digital ecosystems.
Across the conference themes, a consistent pattern emerges: AI is simultaneously a source of value creation and systemic complexity. In areas such as generative AI and customer service, AI enhances efficiency and scalability, but only when deeply embedded in organisational workflows and supported by proprietary data assets. In agentic commerce, AI introduces new forms of autonomous economic behaviour that challenge traditional notions of payment, consent, and accountability. In cybersecurity, particularly deepfake-driven fraud, AI amplifies both defensive and offensive capabilities, requiring continuous adaptation of verification and governance systems.
At the organisational level, the findings highlight that successful AI adoption depends less on technological deployment and more on institutional readiness. Customer-centric operating models, data-driven capabilities, and ambidextrous governance structures emerge as critical enablers of sustainable transformation. Banks that fail to integrate customer insights, data infrastructure, and organisational change management risk underutilising even the most advanced AI systems.
At the same time, the emergence of AI-mediated markets introduces new competitive dynamics. Concepts such as AI visibility illustrate how generative systems increasingly shape customer discovery and choice, effectively acting as intermediaries in financial decision-making. This shifts competitive advantage toward firms that can ensure machine interpretability of their products and services, fundamentally altering traditional marketing and branding strategies.
Despite these technological shifts, the conference also underscores the enduring importance of human trust and advisory relationships. In high-stakes domains such as pensions and long-term financial planning, human expertise, framing effects, and relational trust remain essential. Rather than being displaced, human advisors are being repositioned within hybrid systems that combine AI-driven analytics with human judgement, empathy, and accountability.
Overall, the paper concludes that the future of banking will be defined by the integration of artificial intelligence into socio-technical systems that combine algorithms, data, governance, and human expertise. Competitive advantage will increasingly depend on the ability of banks to orchestrate these elements into coherent, trustworthy, and adaptable organisational architectures. In this sense, AI does not replace the banking system—it fundamentally restructures it.
10. References
Aziz, N.A. and Long, F. (2023) ‘Examining the relationship between big data analytics capabilities and organizational ambidexterity in the Malaysian banking sector’, Frontiers in Big Data, 6, 1036174.
Barney, J. (1991) ‘Firm resources and sustained competitive advantage’, Journal of Management, 17(1), pp. 99–120.
Belanche, D., Casaló, L.V. and Flavián, C. (2019) ‘Artificial intelligence in FinTech: understanding robo-advisors adoption among customers’, Industrial Management & Data Systems, 119(7), pp. 1411–1430.
Brynjolfsson, E. and McAfee, A. (2017) Machine, Platform, Crowd: Harnessing Our Digital Future. New York: W.W. Norton.
Brynjolfsson, E., Rock, D. and Syverson, C. (2019) ‘Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics’, NBER Macroeconomics Annual, 33(1), pp. 1–72.
Davenport, T.H. and Ronanki, R. (2018) ‘Artificial intelligence for the real world’, Harvard Business Review, 96(1), pp. 108–116.
de Bruin, B. (2023) ‘Co-regulation and AI-innovation: principles for a sustainable framework fostering innovation and acceptance of AI’.
Dwivedi, Y.K. et al. (2023) ‘So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI’, International Journal of Information Management, 71, 102642.
Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T. and Walther, A. (2022) ‘Predictably unequal? The effects of machine learning on credit markets’, The Journal of Finance, 77(1), pp. 5–47.
Gillespie, T. (2018) Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.
Gomber, P., Koch, J.-A. and Siering, M. (2017) ‘Digital finance and FinTech: current research and future research directions’, Journal of Business Economics, 87(5), pp. 537–580.
Huang, M.-H. and Rust, R.T. (2018) ‘Artificial intelligence in service’, Journal of Service Research, 21(2), pp. 155–172.
Huang, M.-H. and Rust, R.T. (2021) ‘A strategic framework for artificial intelligence in marketing’, Journal of the Academy of Marketing Science, 49(1), pp. 30–50.
Jagtiani, J. and Lemieux, C. (2019) ‘The roles of alternative data and machine learning in fintech lending: Evidence from the LendingClub consumer platform’, Financial Management, 48(4), pp. 1009–1029.
Jain, A.K., Flynn, P. and Ross, A.A. (2022) Handbook of Biometrics. 2nd edn. Cham: Springer.
Kahneman, D. and Tversky, A. (1979) ‘Prospect theory: an analysis of decision under risk’, Econometrica, 47(2), pp. 263–291.
Kane, G.C., Palmer, D., Phillips, A.N., Kiron, D. and Buckley, N. (2015) Strategy, Not Technology, Drives Digital Transformation. MIT Sloan Management Review.
Kaplan, A. and Haenlein, M. (2020) ‘Rulers of the world, unite! The challenges and opportunities of artificial intelligence’, Business Horizons, 63(1), pp. 37–50.
Kietzmann, J., Lee, L.W., McCarthy, I.P. and Kietzmann, T.C. (2020) ‘Deepfakes: Trick or treat?’, Business Horizons, 63(2), pp. 135–146.
Kleinberg, J., Ludwig, J., Mullainathan, S. and Rambachan, A. (2018) ‘Algorithmic fairness’, AEA Papers and Proceedings, 108, pp. 22–27.
LeCun, Y., Bengio, Y. and Hinton, G. (2015) ‘Deep learning’, Nature, 521(7553), pp. 436–444.
Lemon, K.N. and Verhoef, P.C. (2016) ‘Understanding customer experience throughout the customer journey’, Journal of Marketing, 80(6), pp. 69–96.
Makridakis, S. (2017) ‘The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms’, Futures, 90, pp. 46–60.
Mikalef, P., Krogstie, J., Pappas, I.O. and Pavlou, P.A. (2020) ‘Big data analytics capabilities: A systematic literature review and research agenda’, Information Systems Frontiers, 22, pp. 1–21.
Mirsky, Y. and Lee, W. (2021) ‘The creation and detection of deepfakes: A Survey’, ACM Computing Surveys, 54(1), pp. 1–41.
Mulyana, R., Rusu, L. and Perjons, E. (2024) ‘Key ambidextrous IT governance mechanisms for successful digital transformation: A case study of Bank Rakyat Indonesia (BRI)’, Digital Business, 4(2), 100083.
Nassirtoussi, A.K., Aghabozorgi, S., Wah, T.Y. and Ngo, D.C.L. (2014) ‘Text mining for market prediction: A systematic review’, Expert Systems with Applications, 41(16), pp. 7653–7670.
O’Reilly, C.A. and Tushman, M.L. (2013) ‘Organizational ambidexterity: past, present, and future’, Academy of Management Perspectives, 27(4), pp. 324–338.
Pasquale, F. (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Ratzan, J. and Rahman, N. (2024) ‘Measuring responsible artificial intelligence (RAI) in banking: A valid and reliable instrument’, AI and Ethics, 4, pp. 1279–1297.
Ricci, F., Rokach, L. and Shapira, B. (eds.) (2022) Recommender Systems Handbook. 3rd edn. Cham: Springer.
Rossi, F. (2018) ‘Building trust in artificial intelligence’, Journal of Intelligent & Fuzzy Systems, 34(4), pp. 1–12.
Rudko, I., Bashirpour Bonab, A., Fedele, M. and Formisano, A.V. (2024) ‘New institutional theory and AI: Toward rethinking artificial intelligence in organizations’, Journal of Management History.
Shrestha, Y.R., Ben-Menahem, S.M. and von Krogh, G. (2019) ‘Organizational decision-making structures in the age of artificial intelligence’, California Management Review, 61(4), pp. 66–83.
Siau, K. and Yang, Y. (2017) ‘Impact of artificial intelligence, robotics, and machine learning on sales and marketing’, Journal of Database Management, 28(1), pp. 1–10.
Singh, A. and Hess, T. (2020) ‘How chief digital officers promote the digital transformation of their companies’, MIS Quarterly Executive, 19(2), pp. 1–1
Tchamyou, V.S. (2020) ‘The role of knowledge economy in African economic development’, Journal of the Knowledge Economy, 11(4), pp. 1455–1473.
Thaler, R.H. (2015) Misbehaving: The Making of Behavioral Economics. New York: W.W. Norton.
Vargo, S.L. and Lusch, R.F. (2004) ‘Evolving to a new dominant logic for marketing’, Journal of Marketing, 68(1), pp. 1–17.
Vargo, S.L. and Lusch, R.F. (2008) ‘Service-dominant logic: Continuing the evolution’, Journal of the Academy of Marketing Science, 36(1), pp. 1–10.
Verhoef, P.C., Broekhuizen, T., Bart, Y., Bhattacharya, A., Dong, J.Q., Fabian, N. and Haenlein, M. (2021) ‘Digital transformation: A multidisciplinary reflection and research agenda’, Journal of Business Research, 122, pp. 889–901.
Vial, G. (2019) ‘Understanding digital transformation: A review and a research agenda’, The Journal of Strategic Information Systems, 28(2), pp. 118–144.
Westerlund, M. (2019) ‘The emergence of deepfake technology: A review’, Technology Innovation Management Review, 9(11), pp. 39–52.
Wirtz, J., Patterson, P.G., Kunz, W.H., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2021) ‘Brave new world: Service robots in the frontline’, Journal of Service Management, 32(1), pp. 8–32.
Zetzsche, D.A., Buckley, R.P., Arner, D.W. and Barberis, J.N. (2018) ‘From FinTech to TechFin: The regulatory challenges of data-driven finance’, New York University Journal of Law & Business, 14(2), pp. 393–446.