Walking and talking about AI and public safety in Lombardijen

Blogpost for the AI-MAPS project

What happens when technology marketed to “keep us safe” starts quietly shaping how we live together?

From facial recognition to violence detection software, AI systems are finding their way into policing and public spaces, sometimes as decision-making tools, sometimes just as “advisory” systems. But even when their role is described as minimal, their influence is anything but. And outside of official settings, the eyes of smart doorbells and smartphones are already everywhere. Together, these technologies are reshaping what it means to be and feel safe in public, sometimes fostering security and sometimes deepening division or distrust.

That’s what Marlon Kruizinga’s research as part of the AI-MAPS project tries to understand. AI-MAPS brings together researchers, policymakers, and citizens to explore the Ethical, Legal, and Societal Aspects of AI - not in theory, but in practice. For Marlon, that means asking: how do people think ethically about the AI systems entering their lives, often marketed as tools to improve safety? And how can we bring different perspectives, from residents to policymakers, into real, constructive dialogue about what “responsible use” should look like?

Grappling with ethics on the ground

It’s not enough to study AI as if we were neutral observers. Technologies like these don’t just affect our behaviour; they shape our moral and political sensibilities too. They change what we notice, what we tolerate, and even what we consider fair.

That’s why Marlon’s research aims to develop practical, normative tools: ways for communities and institutions to think and act ethically in the very specific contexts where AI for public safety is being used.

Listening to Lombardijen

Over the past year, he has been working in Lombardijen, a neighbourhood in Rotterdam-South that has already become something of a case study for AI-MAPS. Through interviews and weekly walks with the local neighbourhood watch group, he’s heard a mix of curiosity, fatigue, and scepticism about smart cameras and other AI systems.

Several residents said they “could have a stronger opinion” about these technologies, but what would be the point? “They’ll do it anyway,” one person said, “whether we agree or not.” That kind of moral apathy, if it grows, could make it harder later on to have meaningful conversations about ethics or responsibility. It points to something deeper: a lack of trust, and a sense that decisions about technology are made elsewhere, by others.

Earning engagement

According to Marlon, if we want civil society to engage seriously with the ethics of AI, that engagement has to be earned. It means showing that those in government, academia, and industry are willing to share the steering wheel and to let community perspectives genuinely influence what gets built and how it’s used.

This is challenging work, but it’s also deeply rewarding. Marlon says that the trust he’s built by simply showing up - walking the streets, listening, joining community meetings - has been invaluable. Over time, he has become a familiar face at the local community centre. People wave when he arrives, and sometimes they insist he stay for a hot meal or a sweet treat.

It’s those moments that keep him grounded. Marlon says, “They remind me what this research is really about: not the technology itself, but the people it touches, and the kind of community where safety is something we build together, not something imposed from above.”
