The disruptive potential of agentic artificial intelligence
Does agentic AI have the potential to trigger a paradigm shift in risk management? The premise is promising: intelligent software agents that analyze, decide and control processes independently could become instruments of far-reaching change. But what does this look like in practice? Around 80 experts discussed the question at the FIRM Artificial Intelligence workshop on May 19, 2025.

More than 80 participants took part in the workshop, some of whom were present on site.
AI applications are increasingly finding their way into many areas of life and business, including risk management. FIRM therefore invited all of its specialist round tables to a cross-cutting exchange on current trends and the potential of AI applications. The AI experts Dr. Jochen Papenbrock (NVIDIA), Jan Jelovsek (ING Germany), Dr. Christoph Anders (KPMG) and Dr. Til Bünder (BCG) provided insights into development and banking practice. The workshop was moderated by Dr. Sebastian Fritz-Morgenthal (advisense).
Agentic AI: From the idea to industrial implementation
Dr. Jochen Papenbrock kicked off the event. He presented the concept of the AI Factory, a new operating model that lets companies build a scalable, secure and productive AI infrastructure. Agent-based systems (“Agentic AI”), which not only respond to requests but can also plan, execute and evaluate tasks independently, play a central role here. In the medium term, they could largely replace the development of traditional software applications and effectively become digital employees within the company.
With the help of the AI Factory, a company’s own data can be efficiently converted into usable intelligence, for example to automate risk analyses, regulatory documentation or customer interactions. NVIDIA not only provides the necessary computing power for this (e.g. via GPU-optimized systems and inference microservices), but also specialized toolkits for model-based decision-making. Papenbrock emphasized that the ability to reason, i.e. to draw comprehensible, context-based conclusions, is becoming the key to trustworthy AI systems in the financial sector.
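The plan, execute and evaluate cycle that distinguishes agentic systems from purely reactive ones can be sketched in a few lines. The planner, executor and critic below are simple stand-ins (in a real deployment each step would be driven by a language model and domain tools); this is an illustrative sketch, not any vendor's actual toolkit API.

```python
# Minimal sketch of an agentic plan/execute/evaluate loop.
# Planner, executor and critic are illustrative stand-ins,
# not a real LLM-backed implementation.

def plan(goal):
    """Break a goal into ordered sub-tasks (hard-coded here)."""
    return [f"gather data for: {goal}",
            f"analyze data for: {goal}",
            f"draft report on: {goal}"]

def execute(task):
    """Carry out one sub-task; here we only simulate a result."""
    return f"result({task})"

def evaluate(results):
    """Simple critic: did every sub-task produce a result?"""
    return all(r.startswith("result(") for r in results)

def run_agent(goal):
    """Plan the goal, execute each step, then self-evaluate."""
    tasks = plan(goal)
    results = [execute(t) for t in tasks]
    return results if evaluate(results) else None

outputs = run_agent("quarterly credit risk summary")
```

The key design point is the evaluation step: unlike a traditional script, the agent checks its own intermediate results before handing them on, which is what makes autonomous multi-step execution plausible.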
Strengthening credit risk models with AI
The use of AI is also advancing in the area of regulatory credit risk models (IRB). Based on ING’s experience, Jan Jelovsek showed how modern machine learning methods such as XGBoost, LightGBM or Explainable Boosting Machines (EBMs) can help to improve the forecasting quality and interpretability of PD and LGD models.
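Why models such as Explainable Boosting Machines appeal to credit risk practitioners can be illustrated with a toy example: an EBM's score is a sum of per-feature shape functions, so every PD prediction can be decomposed into attributable contributions. The shape functions and numbers below are invented for illustration and do not represent a fitted model or ING's methodology.

```python
# Illustrative sketch of an additive (EBM-style) PD score:
# log-odds = intercept + sum of per-feature shape functions,
# so each prediction decomposes into explainable contributions.
# All coefficients here are made up for demonstration.
import math

def f_ltv(loan_to_value):       # higher loan-to-value -> higher risk
    return 0.03 * (loan_to_value - 60)

def f_dti(debt_to_income):      # higher debt-to-income -> higher risk
    return 0.05 * (debt_to_income - 30)

def f_history(months_on_book):  # longer track record -> lower risk
    return -0.01 * months_on_book

BASE_SCORE = -3.0  # intercept on the log-odds scale

def pd_score(ltv, dti, months):
    """Return (PD, per-feature contributions on the log-odds scale)."""
    contributions = {
        "ltv": f_ltv(ltv),
        "dti": f_dti(dti),
        "history": f_history(months),
    }
    logit = BASE_SCORE + sum(contributions.values())
    pd = 1.0 / (1.0 + math.exp(-logit))
    return pd, contributions

prob, parts = pd_score(ltv=80, dti=40, months=24)
```

In a real EBM the shape functions are curves learned by cycling gradient boosting over one feature at a time, but the additive structure, and hence the interpretability regulators ask for, is exactly as shown.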
Jelovsek also highlighted ING’s involvement in the BSI project AICRIV-Finanz, in which concrete test criteria, requirements and methods for AI systems in the financial sector were developed and tested together with other institutions, an important contribution to the implementation of the EU AI Act and to the promotion of trustworthy AI in the credit risk context. Ulf Menzeler from d-fine showed what these use cases look like in practical application.
Rethinking AI risk management
Dr. Christoph Anders (KPMG) contributed the lessons learned from the KPMG RiskTech conference to the discussion. Under the motto “Rethinking risk management in banks”, it became clear that technology, organization and people must be considered even more closely together in the future. The integration of GenAI, particularly in the form of agent-based systems, requires new governance structures, clear responsibilities and the targeted promotion of AI literacy throughout the company.
Another key topic was the classification of AI applications under the EU AI Act. Although many use cases in the financial sector are not formally considered “high-risk systems”, the breadth of the definition and the practical implementation raise questions – particularly in the context of automated credit decisions. KPMG therefore advises proactive AI risk management that goes beyond traditional model validation and includes aspects such as cyber risks, bias prevention and control architecture.
The need for AI literacy
The subsequent discussion focused on AI literacy, i.e. the ability to understand, critically assess and competently apply artificial intelligence, and on possible applications in risk management. The use cases mentioned, many of which are still being designed or developed, show that AI is currently used mainly to increase efficiency and effectiveness. A paradigm shift has not yet taken place: the applications are still too error-prone, and the traceability required by supervisory authorities remains a particular challenge. Most experts nevertheless expect increasingly advanced applications to offer enormous potential for new workflows and new approaches, for example in scenario analysis.
The workshop provided a good basis for delving even deeper into the possible applications of generative and agentic AI in risk management as part of the Artificial Intelligence Round Table.