From sceptic to scholar, a PhD researcher navigates AI's accountability problem

Blogpost for the AI-MAPS project

If you want to understand the real-world impact of Artificial Intelligence (AI), don’t just look at the technology. Look at the context. This is the core philosophy driving Nanou van Iersel, a PhD researcher with the AI MAPS project at Erasmus University Rotterdam.

AI MAPS is part of the Dutch network of ELSA labs - collaborative spaces where researchers, government, industry and citizens together explore the Ethical, Legal, and Societal Aspects of AI in practice. The goal is to learn from one another about the potential and limitations of AI, and about whether and how it affects accountability. Nanou's work digs into the messy middle ground between negative and positive effects, the specific, everyday contexts where AI is deployed, and the complicated question of who is accountable when things go right or wrong.

Accountability in action

“On a generic level, it’s easy to say we need accountability for AI,” Nanou explains. “But it's only on the concrete, contextual level that you see the complexity.” What do we actually mean by accountability? In the face of failures due to AI, can one person or organization reasonably be held responsible? How does existing law structure accountability, and does it match the reality of how these new technologies work?

This focus is vital because AI is no longer futuristic. “AI and digital technologies are affecting everyone,” she notes. “Public space is filled with it, from cameras with automatic number-plate recognition on highways to smart doorbells in neighbourhoods. We need to actively shape this future, rather than having it happen to us, led by tech companies projecting their ideals onto our society.”

From scepticism to nuance

One of the most surprising findings of her work has been her own evolving perspective.

“I started this project mostly anti-AI,” she admits. However, through countless stakeholder interactions, interviews, and observations, her view became more nuanced. “After seeing many real-life examples, I see now that there are also cases where AI and other digital technologies are desirable. I am still highly critical (and still kind of wishing we lived in a world with less surveillance), but I can understand the potential in particular contexts, like using AI to serve environmental protection.”

Accountability and "function creep"

Her research on camera surveillance shows how messy accountability becomes in practice.

Cameras are often installed for a narrow purpose, such as monitoring traffic. But once in place, they inevitably record much more. This phenomenon, known as “function creep,” creates legal and ethical grey zones.

“Cameras film everything within the scope of their lenses, not just the thing they were originally meant to record,” Nanou points out. “May authorities act on a crime recorded by a camera that was not installed for that purpose? This is a problem inherent to cameras worldwide.”

Her work maps how function creep unfolds in practice, challenging law enforcement and policymakers to face the accountability gap between a camera’s original justification and its actual capabilities.

The value of “multiple ways of knowing”

What drew Nanou to AI MAPS wasn’t just technology, but the project’s commitment to epistemic inclusion - the idea that there are multiple, equally valid ways of knowing.

“There is knowledge from experience, from profession, from art, or Indigenous knowledge,” she says. “I believe in the equality of different knowledge practices. I like that AI MAPS takes this as a starting point by trying to learn from other stakeholders and having them shape our research questions.”

This philosophy ensures her research on accountability is grounded not just in law and theory, but in the lived experiences of those affected by AI.

A grounded perspective

When she’s not untangling the complexities of AI and the law, Nanou is likely untangling herself from a different kind of complexity: the jungle. She recently returned from a long holiday in Suriname and French Guiana, a stark contrast to the world of digital technology. “It was a shock after spending so much time in a tropical climate, away from my laptop,” she says, “but never away from books.”

This balance between deep critical thought and a connection to the wider world is perhaps what makes her approach so valuable. In a field dominated by either uncritical hype or blanket condemnation, Nanou offers a rare and essential perspective: that of the critical engager, asking "why" and "for whom" to ensure AI's future is accountable to us all.
