What Trust Looks Like When Everyone Is Watching

Armoured police vehicle during the A12 climate protest in 2023.

During the A12 climate protests in 2023, cameras were impossible to ignore. Drones circled overhead, bodycams blinked red, and livestreams rolled from every angle. And it wasn’t just the police recording. Protesters filmed constantly too - turning the whole scene into a kind of mutual surveillance. Everyone watching everyone.

Amidst all that visible tension, a quieter, more unusual collaboration was taking shape: a team of researchers and police officers working together to understand what surveillance does to people - emotionally and socially. In particular, they wanted to know how digital surveillance shapes the balance between two core interests: protecting the right to demonstrate and maintaining public order. Their findings became a scientific article. But the story behind the article, the collaboration itself, offers a different kind of insight.

This article looks back on that process, through the perspectives of the researchers and police officers who made it possible. 

A12 Climate Protest in 2023.
Gabriele Jacobs

Why collaborate at all?

For Bas Testerink, an AI expert within the Dutch police, the reason is straightforward: the police need critical friends. 

The police as an organization reflects internally, he says, but the ELSA method - looking at questions through an ethical, legal, and social lens - can't be captured in standard metrics. Independent researchers bring new angles, new questions, and sometimes the uncomfortable mirrors institutions need. "At the same time," he explains, "we're always under resource constraints, and being transparent and absorbing research results takes time and effort."

This carefulness means access can be difficult. But long-term commitments - like working with the AI MAPS research project - help bridge that gap. They build the social networks, trust, and shared understanding needed to make collaboration work.

Researchers in the middle 

For the researchers, working with the police wasn’t just an academic exercise. It meant stepping into a space of tension between activists and police, between public perception and institutional reality, between emotional stakes and methodological rigor. 

Police officers during the climate protest in 2023.
Gabriele Jacobs

"We use an ELSA methodology," says PhD researcher Majsa Storbeck. "That means proximity to stakeholders. We work with them, not on them: both for government organizations (police) and civil society (activists)." That closeness helped the research move forward, but it also made things more complicated. 

Some activists distrusted the police. Some officers were wary of activists. And in the crossfire, researchers were sometimes asked whose "side" they were on. Participation was withheld at times. Curiosity was mistaken for alignment. Over time, though, something shifted. 

The research team learned to navigate the sensitivities, explain their role, and earn trust. Eventually, they were even seen as bridge-builders: neither police nor protesters, but something in between, with their own expertise.

"It required patience and transparency," Majsa reflects. "But it taught us what legitimacy really looks like in practice, and what we - as researchers - can bring to the table."

Unexpected insights from the field 

One of the most striking lessons emerged from how differently people interpreted the same technologies. Inside the police, Bas explains, most AI-related requests are actually about reducing data. Triage. Filtering. Summaries. Colleagues want to see less, not more. And yet, protesters assumed the opposite - that police had expanding AI surveillance powers. "The psychological impact of that assumption hit me," Bas says. "Even when the capabilities don't exist, the idea alone can shape someone's behaviour or sense of safety."

His colleague Jan Spijkerman adds a surprising detail: many activists, often highly educated, believed the police were using AI-enabled surveillance systems that they simply do not have.

But the misunderstandings didn’t come out of nowhere. The researchers noticed how hard it is for the public to get a clear picture of what tools the police actually use. News stories sometimes blur different police authorities together. Legal frameworks are patchy. And in a tense political climate, worries about "function creep" don’t feel far-fetched to many people. All of this made one thing very clear: transparency isn’t optional. People need to know what technologies are being used during protests, why, and on what legal basis. Without that, fear fills in the blanks.

On the ground, the researchers saw a picture that didn’t quite match what policies or press releases suggest. Many protesters didn’t see officers as people there to safeguard their right to demonstrate; they saw them as the face of a system they felt had let them down. The police, meanwhile, were simply carrying out orders from mayors and prosecutors. They had little say in how the protests were handled, yet they were often blamed for everything that happened. The tension grew even sharper around the protests' safety risks, which Jan describes as the "very real physical dangers" of blocking a busy entrance to a highway. 

Police officers during the climate protest in 2023.
Gabriele Jacobs

AI, in this space, became not just technology but a symbol. A screen onto which fears, hopes, and power dynamics were projected. 

Ethics beyond checklists 

AI MAPS lead researcher Prof. Gabriele Jacobs describes the collaboration as a "trust-building exercise on many levels."

Researchers learned from police culture: buddy systems, safety checks, clear roles, physical preparedness, regular debriefs. A commitment to independence, confidentiality, and reflexivity was common ground between the researchers and the police officers; both sides realized they respected each other's positions on these topics.

On the advice of the police, the research team even adopted yellow vests during fieldwork, not for visibility, but for transparency. People needed to know who they were. And once they did, both protesters and officers began approaching them to talk.

One of Gabriele’s key insights was how differently AI is perceived depending on who you are and the role you’re in. AI isn’t just a system - it’s a social construct. Protesters read it one way. Officers read it another. Sometimes even officers and protesters disagreed among themselves: should they emphasise transparency? Or is it strategically helpful that people assume the police have more AI capabilities than they do? 

Researchers join the A12 Climate Protest in 2023.
Gabriele Jacobs

These questions don’t appear in compliance documents. They emerge only in the messiness of real-world encounters. 

What each side wishes the other knew

When asked what he wanted researchers to understand, Bas was clear: limiting access isn’t about distrust. "It’s about safeguarding operations, avoiding miscommunication, and managing limited resources. Being transparent takes time. It creates follow-up questions."

Researchers during the A12 Climate Protest in 2023.
Gabriele Jacobs

Majsa’s message to police partners was different: Academic work moves slowly. Cultures differ. Misunderstandings happen. But researchers also adapt, learning from police practices about structure, safety, and teamwork. 

Gabriele offered advice to both sides: "Be very open and transparent about your needs and your position. Connect as humans. Define clearly what your standards and boundaries are and respect the other side’s."

Trust, in other words, is iterative. Built in conversations, emails, missteps, and clarifications. In yellow vests and buddy systems. In the willingness to learn across professional lines. 

Closing reflections

This collaboration didn’t erase differences. It didn’t make protests less tense or technology less controversial. But it showed that even in polarised spaces, researchers, police and protesters can learn from one another. 

It showed that ethical AI isn’t made through checklists alone. It grows from relationships, from reflection, from people willing to be honest about their blind spots and boundaries. It’s shaped in the space between watching and being watched. As Gabriele noted, "even when societal tensions or opposing professional roles create distance, human connection can still sustain dialogue and joint action."

More information

Want to read the findings of the research? The article has been published in Big Data & Society.

