Governments face growing pressure to use AI responsibly. Following rules is not enough. True trust is built when technology serves people. In this blog, Prof. Dr. Evert Stamhuis from Erasmus University Rotterdam shares insights on how AI can strengthen public trust while keeping citizens at the centre.
AI is changing how governments deliver public services. It can predict patterns, manage crowds, and support decision-making. But it also raises a key question: how can governments use AI responsibly and earn trust from citizens?
This question guided a recent session hosted by ECP | Platform for the Information Society, where experts from academia, government, and industry gathered to discuss how AI can support public safety while remaining ethical and people-centred.
People first, technology second
Prof. Dr. Evert Stamhuis, from Erasmus University Rotterdam and one of the researchers on AI MAPS, shared a central idea: technology should not be the starting point. Solutions need to come from the people who experience the problems.
Evert explained: “I described how choices about the use of data-driven technology affect many more stakeholders than just the users. A conversation with all stakeholders is desirable, but not simple to realise in practice. Inclusion is more than just sending out a questionnaire.”
Evert warned against tech solutionism: the idea that technology alone can solve complex social problems. While AI can highlight patterns in digital data, it cannot solve real-world problems that are underrepresented in that data. Innovation starts with understanding human needs.
Inclusion, transparency, and trust
The session highlighted the importance of inclusion and transparency. Citizens should be part of design decisions. Experts from different fields should collaborate. Decisions should be understandable and visible.
As Evert emphasised: “It may be true that not every resident understands the ins and outs of technology, but they do understand their own problems. Experts surely have great value, but should not define problem and solution on their own terms.”
These principles help governments move beyond compliance. Trust is not a checklist. It grows through engagement, reflection, and collaboration.
AI MAPS in action
The ideas discussed at the session reflect the work of AI MAPS. The project combines ethical, legal, and social research with AI development. It focuses on complex public safety challenges.
Evert shared a concrete example: “In the Lombardijen use case, we have tried to give a voice to the residents. We did not deploy traditional social science tools but used art and AI. This could work because we had a good entry there and invested in contacts with people in this neighbourhood.”
Responsible AI is not just about rules or technology. It is about understanding people and respecting societal values.
From compliance to trust
The main lesson is clear. Following rules is necessary, but it is not enough. Governments need to listen to the people affected by AI and include them in decisions. Collaboration and critical reflection are essential.
Reflecting on the idea of trust, Evert said: “Trust in AI involves so many diverse trust relations that I cannot say that I generally find AI trustworthy. We can aim for the optimum in concrete contexts with people and planet in mind.”
Responsible AI is not just a technical or legal challenge. It is a societal one. Keeping people at the centre is the key to building trust.