Responsible Engagement with AI in Organisations

First cross-cluster meeting of REACT and ROCCS
AI in Organisations

AI-powered healthcare virtual assistants, AI-generated music compositions and automated customer service via chatbots: these are examples of how Artificial Intelligence (AI) has increasingly been adopted in the professional sphere. But how can an organisation engage responsibly with AI? This question was central to the first cross-cluster meeting of REACT and ROCCS last month, where members interested in or working on AI met to open the discussion on this topic. “As scholars, and especially when investigating responsibility, we need to ask critical questions,” said Dr. João Gonçalves.

The meeting, organised by Dr. Sarah Young and Phuong Hoan Le, co-coordinators of REACT and ROCCS, kicks off with an introductory presentation by discussion leader Dr. João Gonçalves.

What is (responsible) AI?

The first point of discussion is the concept of AI. “If you look for definitions, everyone will come up with a different one”, says Gonçalves. “Machine learning is the technical part, but the meaning of AI is socially constructed, depending on the interaction of that learning with society”. For example, in the 1900s we might have called a calculator AI, because it performed tasks we thought only humans could do, whereas now we think of AI as something different. The process is mathematical, but humans direct the input on the one hand and decide what they, as a society, want as output on the other.

One of the participants wonders whether questions around responsible AI aren’t similar to the questions about how to engage responsibly with employees. The discussion that follows revolves around the differences between how humans learn and the mechanisms of AI. Gonçalves: “The only risk with that debate is the discourse that AI is an agent, which can lead to the oversight of the humans behind it and might displace responsibility from people to AI, for example to ChatGPT.” 

“The risk is the discourse that AI is an agent, which can lead to the oversight of the humans behind it and might displace responsibility from people to AI”

- Dr. João Gonçalves 

“It’s also interesting to understand the organisational management part”, says Gonçalves. Crisis communication plays a role in the attribution of responsibility. A colleague carries the dialogue forward: “If you try to find a solution as a company, should the focus be on the developer of AI, or also on its users and their literacy?” An excerpt of Article 4 of the EU AI Act is cited: ‘Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf’.

Joint conversation

As the last agenda point, participants addressed how artificial intelligence plays a role in their research and how they, as social scientists, can contribute to the ongoing debate on responsible engagement with AI. Topics that come up include the relevance of AI in research on platform workers, the framing of AI crises, the use of the image-generation software Midjourney to create experimental designs, and the AI tool in Atlas.ti.

The meeting ends with a round of key takeaways. “This is mostly a discussion starter”, says Le in her concluding remarks. It’s only the first step in a joint conversation on responsible AI within ERMeCHS. 

More information

Responsible Engagement with AI, Culture and Technology (REACT)
Responsible Organisations: Communication, Change and Society (ROCCS)
