Making AI “less artificial”

João Gonçalves received a Veni grant for his research on using social science in AI
Photo: João Fernando Ferreira Gonçalves

Artificial Intelligence (AI) has become an integral part of the modern world. The possibilities seem endless with ChatGPT, virtual assistants like Siri, and self-driving cars. But AI also has a downside, warns João Fernando Ferreira Gonçalves (media scientist and lecturer in statistics at ESHCC). He received a €280,000 Veni grant from the NWO (Netherlands Organisation for Scientific Research) this year for his research on making AI 'less artificial', which starts this November. We spoke to João to get an insight into his research and what he wants to achieve with it.

What is AI and how did you come across it?

AI is an algorithm that teaches itself to perform a task from datasets, an approach we also call machine learning. The algorithm arrives at a set of instructions that leads to a result, usually something that would normally require human intelligence. The advantage is that a computer can naturally do this much faster.
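As a rough illustration of that idea (invented toy data, not from the interview; a minimal sketch assuming scikit-learn): the model derives its own decision rule from labelled examples instead of being programmed by hand.

```python
# Minimal sketch of machine learning: the model infers its own rule
# from labelled examples. Toy data, for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row is an example; the label records what a human decided.
X = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
y = [1, 0, 1, 0]  # 1 = "requires action", 0 = "fine" (toy labels)

model = LogisticRegression()
model.fit(X, y)  # the "learning": derive instructions from the data
print(model.predict([[0.15, 0.85]]))  # apply the learned rule, instantly
```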

During my PhD research, I analysed news comments for incivility and hate speech. Then I learned about machine learning and used AI to go through the data faster. After that, I started delving more and more into the technical aspects of AI and saw the possibilities it offers, not just for researchers or statisticians. But when I became aware of the downside of AI, it occurred to me that things could be different.

What is the downside of AI?

Algorithms are often based on datasets disconnected from the real world. This creates a 'disconnect' from human behaviour and context. Datasets are not always representative in terms of gender, cultural background, language use, social context, unexpected situations, you name it. And if an AI then has to produce results that inform decisions about people, that creates problems. A typical example is the "toeslagenschandaal" (childcare benefits scandal) in the Netherlands.

An AI cannot judge whether someone has accidentally made a mistake or what someone's background is. Machine learning works on the basis of a loss function; the loss function measures how good your model is at predicting the expected outcome. Say an algorithm detects fraud with 96% accuracy; that seems very good, and the remaining 4% is a margin of error. But what is sometimes overlooked is that this is about people. If that 4% always falls on a certain population group, for example, that is discrimination. The algorithm does what it is supposed to do, but judged by societal standards or human values, it is not correct.
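To make that concrete, here is a small hypothetical illustration (all numbers invented, not from the interview) of how a detector that is 96% accurate overall can still concentrate every error on one group:

```python
# Hypothetical fraud detector: 96% accurate overall, yet every error
# falls on one minority group. All numbers are invented.
groups = {
    "majority group": {"people": 960, "errors": 0},
    "minority group": {"people": 40, "errors": 40},
}

total = sum(g["people"] for g in groups.values())
errors = sum(g["errors"] for g in groups.values())
print(f"overall accuracy: {1 - errors / total:.0%}")  # 96%

for name, g in groups.items():
    print(f"{name}: {1 - g['errors'] / g['people']:.0%} accurate")
# majority group: 100% accurate
# minority group: 0% accurate -- the 4% "margin of error" lands
# entirely on one population group.
```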

"I want to mix my knowledge of the technology behind AI with something relevant to society. So why not apply social science to computer science?"

Photo: João Gonçalves using a computer

How will your research change that?

I believe AI lacks the 'human touch'. I want to mix my knowledge of the technology behind AI with something relevant to society. So why not apply social science to computer science? That could be as simple as labelling news commentary: the programmer gives an instruction, the AI gives a label. Only we know that, for example, a person's background, language use, and culture matter in determining what constitutes hate speech or an insult, and in putting that into context.
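For instance (an invented toy example, not data from the project), the 'ground truth' label for a single comment can itself depend on who was asked to label it, and a naive majority vote silently discards the minority perspective:

```python
# Toy annotations for one news comment, labelled by annotators with
# different backgrounds. Invented data, for illustration only.
from collections import Counter

annotations = [
    {"annotator": "A", "background": "in-group",  "label": "acceptable"},
    {"annotator": "B", "background": "out-group", "label": "hate speech"},
    {"annotator": "C", "background": "out-group", "label": "hate speech"},
]

# Majority voting erases the in-group perspective entirely.
majority = Counter(a["label"] for a in annotations).most_common(1)[0][0]
print(majority)  # "hate speech" -- annotator background and context are lost
```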

That is what we do in the social sciences: we take into account the possibility of bias. The possibility of a small group deviating is not penalised in the social sciences. If we find methods to integrate that into an algorithm, we can be better prepared and, I think, mitigate a lot of problems with these statistics.
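One hypothetical way such an idea could be encoded in a loss function (a sketch under assumptions, assuming NumPy; not the method of the actual research project) is to average the loss per group before averaging across groups, so a small deviating group is not drowned out by the majority:

```python
# Hypothetical group-balanced loss: every group counts equally,
# regardless of its size. A sketch, not the project's actual method.
import numpy as np

def group_balanced_loss(losses, group_ids):
    """Mean of per-group mean losses, so each group contributes equally."""
    losses, group_ids = np.asarray(losses), np.asarray(group_ids)
    per_group = [losses[group_ids == g].mean() for g in np.unique(group_ids)]
    return float(np.mean(per_group))

# Ten examples: nine from group 0 (no error), one from group 1 (error 1.0).
losses = [0.0] * 9 + [1.0]
groups = [0] * 9 + [1]
print(np.mean(losses))                      # plain average: 0.10
print(group_balanced_loss(losses, groups))  # group-balanced: 0.50
```

Under a plain average, the minority group's error is diluted to 0.10; under the group-balanced version it keeps its full weight, which is one way of expressing "a small deviating group is not penalised".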

What are the challenges?

There are plenty, of course, but they mainly concern technical capabilities and people. On the technical side, we need to look closely at whether social science methods work well with large datasets. They work well for us with smaller samples and smaller groups, but how do they translate to big data? And what about resources, costs, ethics, privacy and the ecological footprint?

In addition, we need people to use the algorithms, which is a real challenge of course. It could be the perfect method, but maybe nobody wants to use it. For example, do computer scientists think it takes too much time, or do people actually see the social value?

How can AI be 'less artificial'?

Most of the problems in machine learning are actually not even technical. AI can already do so many incredibly complex things, but it is all based on human behaviour, and social biases in data often come from biases in humans. It makes a difference where an AI comes from. If it comes from people building a dataset in a lab, detached from reality, it will be more artificial than if you ask a large group of people what they want to use an algorithm for and how humanely they want the data to be used. They will still be machine learning models with loss functions, but those will be determined and assessed according to social science methods. That makes for a more open, humane development, and because of that, I would say AI can be less artificial and more human.

"The potential is very big. If this gets off the ground, it could be used by governments, banks, healthcare systems and tech companies."

Photo: João Gonçalves

What concrete results do you expect the research to deliver, and what impact do you hope to make?

The research consists of two phases. The first is a specific case study on online hate speech and foul language. I hope that by using social science methods in algorithms, we can improve content moderation on platforms such as Facebook and Instagram. That could make an immediate difference in terms of the content people see online. So it would be great if we can see impact and involve companies like Meta and Google in our research.

Phase two is to develop a machine-learning package that anyone can download and use. In theory, these methods can be applied to any AI. So, for example, you could also tell ChatGPT, the most popular AI at the moment, which content is good and which is not. If you feed an AI during the learning phase with a social consciousness, you might end up with a ChatGPT that is better suited to society's needs.

The potential is very big. If this gets off the ground, it could be used by governments, banks, healthcare systems and tech companies like OpenAI, Meta and Google. However, we have to be realistic and start small. Hopefully, we can grow step by step, get more and more parties to join, and convince them to make the change from artificial AI to human-centric AI.
