Artificial intelligence (AI) is becoming a structural component of modern bank management. It enhances speed, scalability, and cost and process efficiency, but simultaneously changes how existing financial and non-financial risks materialise and correlate, and how they can be controlled and accounted for. Against this backdrop, the CRO function gains significantly in strategic importance, becoming the central integration hub. The paper argues that AI is not a standalone risk type, but rather expands the core of risk management. What is required is not a parallel AI framework, but systematic embedding in existing risk management, governance and control structures. The CRO takes on an architectural leadership role: designing robust decision-making architectures, enabling speed through clear guardrails and securing trust with supervisory authorities, clients and capital markets.
On this basis, the paper develops an integrated blueprint for embedding AI in bank-wide risk management and derives implications for the CRO role, working methods and organization until 2030-35.
1. Introduction
1.1 Context and Problem Statement
Artificial intelligence is currently being implemented at high speed within banks and is increasingly penetrating strategic, operational and governance functions. Institutions use AI to accelerate decision-making processes, increase cost and process efficiency, and unlock large, heterogeneous data pools more systematically for business and risk management. Particularly in credit-related decision-making processes, in customer service, in treasury functions and in analysis and control processes, AI enables higher scalability, consistency and responsiveness of organizations.
These efficiency and productivity potentials make AI a central lever for competitiveness in the banking sector. At the same time, AI enables finer segmentation of risks, more dynamic resource allocation and a closer alignment of business and risk perspectives. Particularly against the backdrop of increasing regulatory, technological, economic and geopolitical complexity, AI is thus becoming a structural component of modern bank management.
Against this backdrop, the diffusion of AI in the banking sector marks a turning point that goes beyond incremental process optimization. AI acts as an orthogonal or cross-cutting risk dimension: it does not merely intervene in individual models or processes, but reconfigures the decision logic, governance architecture, and competitive dynamics of institutions. In this dual function – as a multiplier of established risk types (financial and non-financial risks) and as an amplifier of exogenous shocks – risk profiles emerge that are substantially shaped by model dependencies, data risks, and concentrations in technology ecosystems, along with resulting loss risks.
In parallel, the market and competitive environment is shifting. Value creation is migrating into platform and ecosystem structures, in which credit decisions, payments and financing are embedded in highly integrated digital journeys. Competition is exercised less over individual products than over the ability to consistently and scalably govern complex decision processes. Speed – in data collection, model adaptation and decision throughput – becomes a strategic variable; however, it creates synchronized failure modes, particularly when institutions draw on similar data pools, model families and infrastructures, and implicit biases in the underlying data are systematically propagated into decisions.
This development poses new demands on the trust anchor of banking. When decisions are increasingly automated and adaptive, trust can no longer be taken for granted but must be actively shaped, explained, and protected. The ability to combine speed, efficiency and innovation with fairness, transparency and reliability thus becomes a central success factor of modern bank management. Accordingly, European banking supervision also places outstanding importance on this topic and defines the effective embedding of digital and AI-related strategies, governance approaches, and risk management structures as a central supervisory priority for the next three years.
Against this backdrop, the risk function undergoes a structural upgrade. It is becoming a natural interface where technological possibilities, business ambitions and regulatory and social expectations converge, and where institutions are supported in realizing the potential of AI – particularly cost and process efficiency – without inappropriately elevating their risk profile.
1.2 Implications for the Risk Function and the CRO role
The increasing penetration of AI fundamentally changes the expectations placed on the risk function. Risks no longer arise primarily from individual business decisions or external shocks, but increasingly from the design, coupling and scaling of internal decision structures. This shifts the focus of the risk function from a downstream control authority to a formative control function that integrates business strategy, technology deployment and risk appetite.
The central management task of the CRO is to assume responsibility for this new decision architecture and to secure trust as an explicit control metric. Governance evolves into a framework of clearly defined guardrails: the CRO translates the risk appetite into operational requirements, creates transparency about decision logic and defines a consistent governance structure within which AI can be deployed and scaled responsibly. The goal is not the control of individual cases, but the establishment of a reliable decision-making environment that enables speed while simultaneously ensuring fairness, traceability and regulatory compliance.
In this role, the CRO becomes the strategic integrator, bringing together technological opportunities, business ambitions and regulatory and societal expectations.
1.3 Transformation of the Second Line of Defence: Role, Working Mode and Efficiency
The strategic repositioning of the risk function requires a corresponding transformation of the 2nd LoD. Classic, largely ex-post-oriented review and control mechanisms are only of limited suitability for AI-supported, adaptive decision-making. When risks arise faster, propagate across system and process boundaries, and can amplify exogenous shocks, risk management must intervene earlier, more continuously and closer to operational value creation.
Close interlinking between 1st and 2nd LoD becomes imperative. In this context, the CRO is to be understood as a strategic sparring partner, particularly for the CDO/COO, in order to align risk, data and operations coherently.
In this context, the four central roles of the CRO take concrete shape.

Operationally, the role of 2nd LoD shifts from pure documentation and model review towards continuous, end-to-end system assurance across the entire life cycle of AI applications. Risk is no longer assessed downstream, but integrated early into design, build, deployment, and change processes. Simultaneously, working modes between 1st and 2nd LoD are changing: the risk function is evolving from a controlling authority to an active enabler of responsible innovation.
Process efficiency becomes an imperative within the risk function itself. Standardization, automation, and risk-based prioritization are prerequisites for the 2nd LoD to concentrate on value-adding activities and to ensure the necessary speed in responding to new risks.
2. The AI Risk Framework – Blueprint for Embedding AI into Bank-Wide Risk Management
This chapter does not develop a standalone, parallel AI framework, but rather an integrated blueprint for the systematic embedding of AI into existing bank-wide risk management. The starting point is the recognition that AI is a transversal risk. This perspective is also held by the European Central Bank, which views the transversal embedding of AI in governance and risk management structures as a central prerequisite for the stability and steerability of the financial sector.
The blueprint follows a clear structure. The existing risk management framework is broken down into three central components – (i) strategic anchoring in business strategy and risk appetite, (ii) integration in the management of existing risk types and (iii) operating model and enablers – each supplemented by AI-specific governance. The goal is not a parallel structure, but a coherent, auditable, and decision-enabling further development of the existing frameworks.
The blueprint follows three guiding principles:
- End-to-end accountability and transparency: Clear responsibilities and traceability across the entire life cycle of AI-supported decision-making systems – from data sources and models to agentic behaviour and human intervention.
- Build on existing frameworks: AI is integrated into existing risk types, ICAAP/ILAAP processes and governance and control structures.
- Risk-based proportionality: The intensity of control is determined by the materiality, degree of automation, and depth of impact of AI applications.
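The proportionality principle can be illustrated as a simple tiering rule. The following sketch assigns a control intensity tier to an AI use case from the three drivers named above; the 1-3 rating scales, the additive score and the tier thresholds are illustrative assumptions, not prescriptions from the framework:

```python
# Illustrative sketch of risk-based proportionality: the control intensity
# of the 2nd LoD is derived from materiality, degree of automation and
# depth of impact. Scales and thresholds are assumptions for illustration.

def control_tier(materiality: int, automation: int, impact_depth: int) -> str:
    """Map three 1-3 ratings (1 = low, 3 = high) to a control intensity tier."""
    for value in (materiality, automation, impact_depth):
        if value not in (1, 2, 3):
            raise ValueError("ratings must be 1, 2 or 3")
    score = materiality + automation + impact_depth  # simple additive score, 3..9
    if score >= 8:
        return "intensive"   # full 2nd-LoD review, continuous monitoring
    if score >= 5:
        return "standard"    # periodic review, standard guardrails
    return "light"           # register entry and spot checks

# Example: a fully automated, highly material credit decision model
print(control_tier(materiality=3, automation=3, impact_depth=2))  # -> intensive
```

In practice an institution would calibrate such a rule against its own risk taxonomy and risk appetite; the point here is only that proportionality can be made explicit and auditable rather than judgemental.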
2.1 Strategic Anchoring of AI in Strategy and Risk Appetite
The effective management of AI begins at board level and in the explicit articulation of the business strategy. The deployment of AI is to be understood as a conscious risk and management decision, and anchored accordingly in strategy, risk appetite, and multi-year planning. Central elements include the prioritization of strategic AI use cases along the value chain, a multi-year scenario analysis of technological, regulatory and business developments, and an explicit AI risk appetite that defines degrees of automation, error tolerances, intervention rights and non-negotiable guardrails.
The AI anchoring creates orientation for investment decisions, governance design and operational implementation and forms the basis for consistent scaling of AI-supported decisions in line with the institution’s risk profile.
2.2 Extension of the Risk Management Core: Risk Types × Risk Management Components
The management of AI follows existing risk types. The innovation lies not in the taxonomy itself, but in the systematic expansion of classic risk management components with AI-specific requirements. The blueprint follows a matrix logic that links risk types with central risk management components.
Risk types per the institution’s own risk taxonomy:
Credit, market, liquidity, model, business, operational risks as well as reputational, legal & compliance, cyber, third-party, data privacy and HR risks.
Risk Management Components:
- Risk Identification: Systematic capture of AI use cases, degrees of automation, and data and model dependencies.
- Materiality Assessment: Evaluation of financial, non-financial, and systemic effects, including concentration effects.
- Risk Appetite & Limits: Definition of AI-specific guardrails on depth of automation, error tolerances, and escalation and shutdown criteria.
- Governance & Oversight: Clear RACI structures, specific committees, independent challenge functions and transparent decision-making documentation.
- Capital, Liquidity & Resilience: Consideration of AI-related risks in ICAAP, ILAAP and business resilience considerations.
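The matrix logic described above can be sketched as a simple coverage register that crosses risk types with risk management components and reports which cells have not yet been assessed. The risk type and component names follow the text; the data structure and the shortened lists are assumptions for illustration:

```python
# Illustrative sketch of the matrix logic: every AI use case is assessed
# per (risk type x risk management component) cell. The lists below are
# abbreviated; a real register would use the institution's full taxonomy.

RISK_TYPES = ["credit", "market", "liquidity", "model", "operational"]
COMPONENTS = [
    "identification", "materiality", "appetite_limits",
    "governance", "capital_liquidity_resilience",
]

def coverage_gaps(assessments: dict) -> list:
    """Return all (risk type, component) cells not yet assessed."""
    return [
        (rt, comp)
        for rt in RISK_TYPES
        for comp in COMPONENTS
        if not assessments.get((rt, comp), False)
    ]

# Example: only credit risk identification has been assessed so far,
# leaving 24 of the 25 cells open
done = {("credit", "identification"): True}
print(len(coverage_gaps(done)))  # -> 24
```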
For prioritisation, a condensed heatmap is used that classifies risk types by disruption intensity, steering complexity and materialization over time. It serves as a central management tool for the CRO to align risk appetite, governance priorities and resource allocation across risk types.

Key: Priority action areas are risk types with high disruption intensity and high complexity. The time horizon shows whether institutions need to ensure stability and trust in the short term or anchor governance and control mechanisms in the risk management framework in the medium term.
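Such a heatmap can be reduced to a small scoring table that orders risk types by combined disruption intensity and steering complexity. The example scores and the additive ranking below are assumptions for illustration, not an assessment taken from the text:

```python
# Illustrative heatmap prioritisation: risk types are scored on 1-3
# scales for disruption intensity and steering complexity and sorted so
# that high-disruption, high-complexity types surface first. All scores
# are hypothetical examples, not an actual institutional assessment.

heatmap = {
    # risk type: (disruption intensity, steering complexity, horizon)
    "model":        (3, 3, "short-term"),
    "third-party":  (3, 2, "short-term"),
    "reputational": (2, 3, "medium-term"),
    "credit":       (2, 2, "medium-term"),
    "liquidity":    (1, 2, "medium-term"),
}

def prioritise(entries: dict) -> list:
    """Sort risk types by combined disruption + complexity score, descending."""
    return sorted(entries, key=lambda rt: entries[rt][0] + entries[rt][1],
                  reverse=True)

print(prioritise(heatmap)[0])  # -> model
```

The third tuple element (time horizon) is carried along so the same table can answer the key's second question: whether stabilisation is needed in the short term or governance anchoring in the medium term.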
AI is thus integrated consistently into the existing core of risk management.
2.3 Operating Model and Enablers
The effectiveness of the framework depends on a further development of the operating model that extends beyond the risk function. A prerequisite for scalable and governable AI applications is a transversal data aggregation capability across all functions and business lines. This gives the principles of BCBS 239 organisation-wide significance – not merely as a regulatory minimum requirement for financial and risk data, but as a structural enabler for AI-supported management.
Central enablers are:
- Data & Technology: End-to-end data aggregation, lineage, monitoring and standardised transparency artefacts along the entire value chain. AI can itself contribute to improving data quality by identifying anomalies, patterns and inconsistencies and pinpointing specific areas of action. This shifts the focus from a sequential “data quality first, then AI” to an integrated approach in which AI actively contributes to quality assurance.
- Processes: End-to-end lifecycle management of AI systems, including red teaming, incident and kill switch playbooks, and scalable change management. The goal is a sustainable increase in process efficiency and scalability.
- Capabilities: Development of hybrid profiles at the intersection of risk, data, AI and business, addressing both control and scaling requirements.
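The "AI actively contributes to quality assurance" pattern from the Data &amp; Technology enabler can be illustrated with a deliberately minimal statistical outlier check. A production setup would use richer anomaly detection models; the z-score rule, threshold and example values here are assumptions chosen only to show the integrated approach:

```python
# Minimal illustration of AI-assisted data quality assurance: flag
# records whose value deviates strongly from the series mean (z-score).
# Real implementations would use richer anomaly detection; this sketch
# only demonstrates the integrated "AI checks the data" pattern.
from statistics import mean, stdev

def flag_anomalies(values: list, threshold: float = 2.0) -> list:
    """Return indices of values more than `threshold` std deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Example: one implausible exposure value in an otherwise stable series
exposures = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0, 5000.0]
print(flag_anomalies(exposures))  # -> [6]
```

Flagged indices would then feed back into targeted remediation, shifting the sequence from "data quality first, then AI" to the integrated loop the text describes.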
Implementing this target operating model creates the foundation for the effectiveness of the entire framework. Transparent decision-making architectures, clear responsibilities and scalable data and process structures enable the risk function to keep pace with the dynamics of AI-supported systems without losing depth, consistency or steerability.
3. Outlook – the CRO Function 2030-2035
While Chapter 2 describes the integrated blueprint and the underlying target operating model as a prerequisite for effective AI management, the focus now turns to the future. Chapter 3 outlines how the CRO function will develop by 2030-35 in this context – from a primarily controlling body to the designer of responsible decision-making architectures in the AI age.
Current global risk analyses, in particular the World Economic Forum Global Risks Report 2026, show that AI-related risks are gaining significantly in importance and rank among the defining risk drivers in both the short and long term. In addition, adverse outcomes from AI technologies are expected, whose effects will only become visible in the medium to long term, but could then lead to considerable systemic effects.
The time dimension of this development is central. Experience with regulatory requirements such as BCBS 239 illustrates the high structural complexity and implementation intensity at institutions: between publication, binding application and actual operational implementation, nearly ten years elapsed at many banks, and after more than a decade, full implementation has not been achieved at all institutions even for the reduced focus on risk and finance functions. The causes are the complexity of historically grown legacy systems, fragmented data architectures, and the considerable investment required to technologically transform institutions.
AI is currently gathering momentum: its integration into risk management and business processes is accelerating rapidly. This dynamic forces organizations to resolve the temporal asymmetry between technological development and structural adaptability, and to act immediately.
Against this backdrop, the years up to 2030 – and beyond to 2035 – will become a transformation phase. This requires immediate action as well as a clear transformation vision and strategy for the coming ten years. Unlike BCBS 239, this is not primarily a regulatory requirement, but a matter of competitiveness and business resilience. The transformation will only succeed through close collaboration among CRO, CDO and COO, with the CRO acting as a strategic sparring partner and the risk function consistently positioned as a cooperation and enablement partner. The aim is to break down siloed thinking and to connect risk, data and operations into a consistent decision-making architecture.
Strategic Implications and Call to Action for CROs
From Control to Guardrails and Enablement
Ex-post controls are replaced by pre-anchored guardrails. Risk-based classifications of AI use cases, standardized approval pathways and technically implemented guardrails enable speed without compromising controllability and trust.
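A technically implemented guardrail of this kind can be sketched as a runtime rule that maps an observed error rate to a pre-anchored action, up to an automatic halt (the kill switch mentioned in Chapter 2), instead of relying on ex-post review. The threshold values and action names are assumptions for illustration:

```python
# Illustrative sketch of a pre-anchored, technically implemented
# guardrail: a running error rate is checked against tolerances agreed
# ex ante, triggering escalation or an automatic halt ("kill switch").
# Thresholds and action labels are assumptions for illustration.

def guardrail_action(error_rate: float, escalate_at: float = 0.02,
                     halt_at: float = 0.05) -> str:
    """Map an observed error rate to a guardrail action."""
    if error_rate >= halt_at:
        return "halt"       # kill switch: stop automated decisions
    if error_rate >= escalate_at:
        return "escalate"   # route cases to human review, notify 2nd LoD
    return "proceed"        # within risk appetite, continue

print(guardrail_action(0.01))  # -> proceed
print(guardrail_action(0.03))  # -> escalate
print(guardrail_action(0.07))  # -> halt
```

Because the thresholds encode the risk appetite in advance, the AI application can run at full speed within them, which is precisely the "speed without compromising controllability" trade-off the text describes.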
From Model Validation to System Responsibility
AI-supported decision-making architectures require end-to-end responsibility for data sources, model chains, agentic behaviour and human intervention. Management is based on a small number of decision-relevant key metrics and clear ownership – not through point-in-time reviews of individual models.
From Compliance to Strategic Modernization and Business Resilience
Regulatory requirements are not seen as an isolated mandatory exercise, but as levers for coupling risk appetite, AI governance and business strategy. The CRO uses regulation deliberately to address structural weaknesses in data, processes and governance structures and to strengthen the resilience of the business model.
From Ex-Post Reactions to Sustainable Capability Buildup
Short-term measures are necessary but insufficient. The bottleneck lies in building sustainable capabilities at the intersection of risk, data, AI and business. Leading CROs act as talent architects and develop organizational models that systematically combine experiential knowledge and algorithmic thinking.
Resource Management, Upskilling and Change Management as a Leadership Task
The transformation requires targeted resource management as well as consistent upskilling and change management across the entire organization. The central prerequisite is the systematic identification of skills that are currently absent or only insufficiently present, but will be needed in the future for steering AI-supported decision-making.
At the same time, knowledge development must be organized as a continuous transfer process – through new learning formats, institutionalised communities of practice, targeted rotation between functions and the deliberate combination of expert knowledge and algorithmic thinking. AI transformation is not a project, but an ongoing process of change that equally affects culture, skills and management models.
Leading CROs do not address the open questions of the AI age abstractly, but operationally. They design decision-making architectures that enable speed without sacrificing resilience and trust – and anchor governance, technology, and strategy in a robust risk management model that remains viable even under uncertainty.
References
Bank of England (2024) Safe and responsible AI in financial services. Available at: https://www.bankofengland.co.uk/report/2024/safe-and-responsible-ai-in-financial-services (Accessed: 07 January 2026).
BIS – Bank for International Settlements (2023) BIS Working Paper 1132: Artificial intelligence and financial stability. Available at: https://www.bis.org/publ/work1132.pdf (Accessed: 07 January 2026).
EBA – European Banking Authority (2022) Discussion paper on machine learning for IRB models. Available at: https://www.eba.europa.eu/sites/default/files/document_library/Publications/Discussions/2022/Discussion%20on%20machine%20learning%20for%20IRB%20models/1023883/Discussion%20paper%20on%20machine%20learning%20for%20IRB%20models.pdf (Accessed: 07 January 2026).
ECB Banking Supervision (2025) SSM Supervisory Priorities 2026-2028. Available at: https://www.bankingsupervision.europa.eu/framework/priorities/html/ssm.supervisory_priorities202511.en.html (Accessed: 07 January 2026).
European Union (2024) Artificial Intelligence Act (AI Act) – consolidated text. Available at: https://artificial-intelligence-act.eu/ai-act-text/ (Accessed: 07 January 2026).
FSB – Financial Stability Board (2025) Artificial intelligence in financial services: adoption, risks and regulatory/supervisory implications. Available at: https://www.fsb.org/wp-content/uploads/P101025.pdf (Accessed: 07 January 2026).
NIST – National Institute of Standards and Technology (2023) AI Risk Management Framework (AI RMF 1.0). NIST.AI.100-1. Available at: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (Accessed: 07 January 2026).
OECD (n.d.) OECD.AI Policy Initiatives Dashboard. Available at: https://oecd.ai/en/dashboards/policy-initiatives/AI-Central (Accessed: 07 January 2026).
Oliver Wyman (2025) Success factors for generative AI in banks. Available at: https://www.oliverwyman.de/unsere-expertise/publikationen/2024/oct/die-zukunft-des-kundenservice-in-banken.html (Accessed: 07 January 2026).
Oliver Wyman (2025) Credit Risk Assistant – AI-driven solution. Available at: https://www.oliverwyman.com/our-expertise/insights/2025/mar/credit-risk-assistant-ai-driven-solution.html (Accessed: 07 January 2026).