[Image: The AI MAPS consortium partners brainstorming around a colorful, rectangular table.]

What is ethical and responsible artificial intelligence ("AI")? The COVID-19 pandemic and unrest around forced migration and farmers' protests illustrate the significance of public safety. Ensuring public safety is a constant challenge in the pursuit of a delicate balance between freedom and security.

AI MAPS adopts a freedom and social well-being perspective and focuses on three themes to address key security challenges:

  • Social disorder and public nuisances in neighbourhoods;

  • High-impact crime;

  • Crowds and events.

AI algorithms can support institutions in making decisions and taking public safety measures. For instance, pattern recognition in videos can automatically detect anomalous patterns in neighbourhood gatherings, and semantic pattern recognition in texts can detect rising tensions in groups on social media. AI algorithms can thus guide local authorities in scaling up intervention activities. AI MAPS focuses on the ethical, legal and social aspects (ELSA) of AI development and application, so that the solution does not become worse than the problem.


AI MAPS will create a mutual-learning ecosystem of quadruple helix agents to responsibly guide the growing use of AI applications. The quadruple helix consists of 20 partners from academic research, government, business and civil society. Together, they will combine their knowledge, experiences and perspectives to produce ELSA guidelines that best meet diverse citizen needs, as well as an investment framework offering guidance on which kinds of AI applications are worth investing in.

We approach AI from the perspective of an AI-cology: we study AI systems, methods and applications through the lens of Hybrid (human and artificial) Intelligence. We do not try to replace humans but focus on human-AI collaboration for better human well-being. In addition, we plan to include nature as a stakeholder within our use cases to enhance cross-species justice.


We believe that AI MAPS can make a real difference. Our hope is to jointly create powerful tools and insights with hybrid intelligence solutions for public safety. We work towards innovative, truly inclusive community approaches that target the root causes of social unrest. In this, we aim to contribute to a free and safe society for all.


AI MAPS stands for "AI Multi-Agency Public Safety Issues" and is financed by NWO as part of the Call for Proposals Synergy theme "Artificial Intelligence: Human-centred AI for an inclusive society – towards an ecosystem of trust". The project consists of 6 implementation partners, a Consortium Advisory Board (dedicated stakeholders with decision-making power within the project) and a Sounding Board (interested stakeholders without decision-making power).

AI MAPS is part of the ELSA (ethical, legal and societal aspects of AI) Network. 

Get in touch

The AI MAPS consortium would like to hear from you! If you have any questions, thoughts, or opportunities to connect that you would like to share with us, please contact project manager Ekaterina Voynova at ekaterina.voynova[at]eur.nl.
