Does AI create new systemic risks in finance? A new report urges a careful policy response

Artificial intelligence is rapidly transforming finance, but its growing use by investors and financial institutions may also create new sources of systemic risk. A new report by Robin Lumsdaine, as part of the Advisory Scientific Committee of the European Systemic Risk Board (ESRB), asks a pressing question: does this growing use of AI create, or amplify, systemic risks in the financial system?

The report, “Artificial intelligence and systemic risk”, is co-authored by Robin Lumsdaine, Professor of Applied Econometrics at Erasmus School of Economics, together with Stephen Cecchetti, Tuomas Peltonen and Antonio Sánchez Serrano. Rather than focusing on AI algorithms themselves, the authors examine how AI is used by investors and financial intermediaries, and how these choices can generate externalities that matter for financial stability.

AI already plays a significant role in corporate and financial sectors. Its ability to process vast amounts of unstructured data can improve risk management, fraud detection and decision-making. More broadly, AI may boost productivity through automation, task complementarity and the creation of new tasks. 

In finance, AI is already used for anti-money laundering, fraud detection and risk assessment, while asset managers typically deploy it to support human-led investment decisions. Yet Lumsdaine and co-authors stress that these same capabilities can also introduce new vulnerabilities.

Potential ways AI influences systemic risk

A central contribution of the report is its analysis of 11 specific features of AI that influence systemic risk. These include monitoring challenges caused by the complexity of AI systems, concentration among a small number of AI providers, and model uniformity, which can lead to highly correlated behaviour across institutions. 

Other risk-enhancing features include excessive trust in AI outputs, which can lead to increased risk-taking, and greater transaction speeds, which can intensify procyclicality. Opacity, which reduces transparency, also plays a role. Finally, malicious uses of AI and the spread of misinformation contribute to systemic risk.

Crucially, the report stresses that these risks arise from how AI is used, not from AI as an inherently destabilising technology.

Adaptation of existing policies

For policymakers, this creates a delicate balancing act. Regulating too early or too stringently may stifle innovation, while acting too late or being too lax could mean losing control. The authors argue that existing regulatory tools are largely sufficient, but need recalibration to account for the speed, scale and scope introduced by AI. Proposed measures include greater transparency about AI use in financial products, ‘skin-in-the-game’ requirements, and strengthened supervisory capacity. Given the global nature of AI, close international cooperation, particularly within the EU, is essential.

If authorities can keep pace with AI’s rapid development, Lumsdaine and her co-authors conclude, it may be possible to harness its benefits while reducing the likelihood of future episodes of financial instability.

About the European Systemic Risk Board

The European Systemic Risk Board (ESRB) is an independent EU body responsible for monitoring and assessing risks to the stability of the financial system across the European Union. It issues warnings and policy recommendations to prevent or mitigate systemic financial crises, working closely with national and European supervisory authorities. The 15-member Advisory Scientific Committee (ASC), of which Lumsdaine is a member, conducts research to inform the ESRB’s macroprudential decisions.

More information

For more information, please contact Ronald de Groot, Media and Public Relations Officer at Erasmus School of Economics, rdegroot@ese.eur.nl, or +31 6 53 641 846.
