From killer whales to conversational agents: a PhD researcher explores AI for better dialogue

Blogpost for the AI-MAPS project

What if Artificial Intelligence (AI) could help people talk to each other, not past each other? That’s the question driving Michaël Grauwde, a PhD researcher with the AI-MAPS project at Erasmus University Rotterdam.

AI-MAPS is part of the Dutch network of ELSA labs, where researchers explore the Ethical, Legal, and Societal Aspects (ELSA) of AI in practice. Michaël focuses on how AI can support reflection and deliberation between different stakeholders, making dialogue more productive, inclusive, and fair.

AI as a conversation partner

“My research is about making dialogue between different stakeholders more productive,” Michaël explains. “Imagine a meeting where residents, policymakers, and companies discuss changes in a neighbourhood. Too often, residents feel excluded because they don’t speak the technical or policy language. I’m exploring how AI could act as a mediator and help people frame their points, understand key ethical concerns, and actually listen to each other.”

Instead of endless debates or shouting matches, AI systems could help participants reflect, consider multiple perspectives, and engage constructively. It’s a vision of AI that doesn’t replace human judgement but creates space for different types of knowledge, from lived experience to professional insight, to meet on more equal terms.

Steering AI

One of the most exciting aspects of Michaël’s work is its potential for real-world application. But it comes with surprises.

“People might assume our AI systems can just be plugged into any scenario,” he says. “But they need careful guidance. That raises ethical questions: who decides how the system frames information? And how do you make sure it amplifies, rather than distorts, voices?”

These questions keep the work grounded. It’s not just about designing AI tools, but making sure they are usable, accountable, and ethically responsible in practice.

From whales to AI

Michaël’s interest in communication goes way back. “When I was younger, I wanted to be a marine biologist because I was obsessed with killer whales,” he recalls. “I wanted to know how they communicate with each other.”

He may not be tracking orcas today, but the fascination with communication has stayed with him. “Now I’m in a field where researchers are even looking at how whales communicate using AI and at human–animal communication more broadly. These can be scary times in AI, but if we build systems correctly, they can allow for wonderful breakthroughs.”

Making ethics actionable

At its core, Michaël’s work is about turning ethical principles into practical tools. “As AI systems become ubiquitous, I didn’t just want to work on the philosophical side,” he says. “I wanted to make the ethical principles we propose more than words on a page. To make them actionable, and to have the knowledge to deploy them in real-world environments.”

By combining critical reflection with hands-on engineering, Michaël hopes to ensure that AI doesn’t just disrupt conversations but rather improves them. Ultimately this will help communities, policymakers, and other stakeholders communicate better, understand each other, and make more informed, inclusive decisions.
