Predictive policing, risk assessment, and artificial intelligence in law enforcement: Marc Schuilenburg, Professor of Digital Surveillance at Erasmus School of Law, has long studied the role of technology in criminal investigation. In this two-part series, we look both forward and backward with him: how new are AI applications in policing, what makes them ethically and practically problematic, and can they also be put to positive use? In this second part, we explore with Schuilenburg what AI can bring to the police.
(Or read part 1 first: What are the limits of predictive policing?)
After years of critical publications about predictive policing, Schuilenburg has recently chosen a different approach. “I realised that criticising from a distance, after the fact, had little impact,” he says. “That’s why I now try to influence the development phase of technology, so that public values like privacy, non-discrimination, and accountability are considered from the start.”
The shift from critical evaluation to active participation is not an obvious one for an academic. “It takes courage as a researcher to take on a guiding role, because you lose some academic distance. But if you dare to take that step, you can really make a difference,” says Schuilenburg. “And what I’m seeing now gives me hope. The police and other organisations are becoming increasingly open to this kind of collaboration.”
Current pilots – such as in Zaandam, where 1,000 asylum seekers are housed on two large boats – explore how data-driven tools can strengthen the social legitimacy of the police. Not as instruments of control, but as bridges between citizens and community-based policing.
From evaluating to co-creating
Another interesting initiative within the police is the so-called ethics table: a consultation forum where the design of new technological tools is critically discussed and different types of knowledge come together. Alongside technicians, ethicists, people with lived experience, and policymakers sit at the table. “Previously, it was all about technical expertise. Now we recognise that experiential knowledge and social insights are also necessary to develop technology responsibly.”
The ethics table marks a broader shift: away from the idea that technology is a neutral tool, and towards the awareness that design choices have societal consequences. “If you only think from a technical perspective, you miss the context in which AI is applied. And that context is crucial if you want technology to fit reality. That makes AI a socio-technical challenge.”
Policing inspired by the Japanese model
Since September 2024, Schuilenburg has been involved in the international KOBAN project, funded by the European Commission. This three-year research project focuses on the digitalisation of community policing, inspired by the Japanese KOBAN model: a form of policing in which small, decentralised teams work closely with the community and provide local, tailored solutions. Various AI tools are being developed and tested in practice, with the aim of strengthening the relationship between police and citizens. The focus is not on risk prediction or control, but on enhancing legitimacy and trust. “The question is not just what AI can do for fighting crime, but also: how can AI contribute to a better relationship between police and citizens, and the trust built on that?” says Schuilenburg.

AI used to promote participation
A striking example of how AI can be used positively, Schuilenburg says, is the possibility of visualising citizens’ lived experiences. In neighbourhoods where residents feel underrepresented by authorities and policymakers, AI can help them express their perception of safety. “Many people have valuable knowledge about their neighbourhood, but find it hard to articulate it in conversations with the municipality or police. Through AI-generated images, they can still make visible what safety means to them.”
This is being researched in the AI Maps project, in which Schuilenburg and PhD candidate Majsa Storbeck from Erasmus School of Law are participating. “These visual expressions are then discussed in sessions with officials and other stakeholders, to reach new shared insights. Here, AI doesn’t help to fight, but to imagine,” Schuilenburg emphasises. “It’s about enabling people who are usually less heard to participate in a different way in the conversation about their living environment.”
This approach aligns closely with the principle behind the ethics table: technology should not exclude people, but enable them to actively participate. Imagining safety is an example of how AI not only analyses, but also facilitates. And that fits within a broader vision of community policing: local, engaged, and focused on collaboration. Schuilenburg: “Technology should never be disconnected from local reality. We are looking for ways to use AI as a tool for connection, not exclusion.”
Technology designed with connection in mind
For Schuilenburg, it is clear: AI is not a neutral instrument, but a product of societal choices. “Good technology doesn’t just happen. It requires thinking from the start about the desirable and undesirable effects of technology.” These choices should not be made by technicians alone, but in dialogue with citizens, community officers, and scientists – especially the voices that are less heard in an AI debate that now mainly revolves around calls for more safety and efficiency.
Where part one of this series exposed the risks of predictive policing, part two shows that AI can also follow a different path. Not as a silver bullet, but as a tool for collaboration, rebuilding trust, and creating connection. “Technology can help explore new forms of cooperation,” Schuilenburg concludes. “But only if we recognise that AI never stands alone, but is always part of a broader social context.”