Computers against racism

Sophie van der Zee

Ever been stopped at the airport without any good reason? It happens all the time, and to some people more than others. Humans have proved notoriously incapable of detecting who is lying and who is not. That's why Sophie van der Zee of Erasmus School of Economics develops machines that can do a better job. New Scientist has already nominated the assistant professor as Science Talent 2018. Now she needs votes to actually win the title.

Imagine a lie-detecting suit. It exists - Sophie developed and tested it. By measuring body activity and the extent to which people subconsciously mimic the behaviour of others when trying to convince them, the suit proved quite able to tell who is lying and who is not. With an accuracy of 82%, the suit comfortably beat human professionals, who detect lies with an accuracy of only about 53%: hardly better than flipping a coin.

Nothing to hide, but super suspicious

Still, people, including trained interrogators, believe they can tell when others are lying - for instance by looking at the way people handle eye contact. That is a very subjective, unreliable way to detect dishonesty, Van der Zee emphasises, and one of many incorrect beliefs about cues to deception. Such beliefs are especially problematic in cross-cultural contexts, because people from different cultural backgrounds tend to exhibit different types of behaviour, whether they are lying or not.

Some of these culture-specific behaviours may actually make interviewers feel they are dealing with a deceiver. This may be the case when Surinamese suspects avoid eye contact out of respect, Van der Zee notes, or when people express themselves in a language other than the one they grew up with. Regardless of lying, using a different language comes with anxiety and an increased cognitive load. As a result, people from a different cultural background who use a second language are far more at risk of being misjudged than suspects with a background similar to the interviewer's own.

Program without prejudice

If we manage to program without prejudice, computer algorithms may deliver justice with better accuracy when screening people, Van der Zee hopes. Rather than gathering subjective data and single observations, they can process a wide range of objective data points over a longer period of time. A human watching a video may just write down whether somebody touched their arm or not, whereas a machine may note for how long, what type of gesture, at what speed, and so on.
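As a rough illustration of that difference, consider what a machine could extract from a single recorded gesture. The following is a minimal, hypothetical Python sketch; the sensor format, sampling rate and feature set are assumptions for illustration, not Van der Zee's actual pipeline.

import numpy as np

def gesture_features(positions: np.ndarray, hz: float = 60.0) -> dict:
    """Summarise one gesture from an (n_samples, 3) array of wrist
    positions in metres, sampled at hz frames per second."""
    steps = np.diff(positions, axis=0)              # displacement per frame
    speeds = np.linalg.norm(steps, axis=1) * hz     # speed in m/s
    return {
        "duration_s": len(positions) / hz,          # how long the gesture lasted
        "mean_speed_ms": float(speeds.mean()),      # how fast the hand moved
        "peak_speed_ms": float(speeds.max()),
        "path_length_m": float(np.linalg.norm(steps, axis=1).sum()),
    }

# Example: a one-second gesture recorded at 60 Hz.
rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(scale=0.005, size=(60, 3)), axis=0)
print(gesture_features(trace))

Where a human annotator records one yes/no observation, a pipeline like this yields several continuous measurements per gesture, for every gesture, over the whole interview.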

In addition, passenger-screening technology like the AVATAR ('Automated Virtual Agent for Truth Assessments in Real-Time') combines input from about twenty sensors, looking not merely at movements but also at things like body temperature, vocabulary, and tone of voice. This multimodal approach will be essential, Van der Zee expects - if you train very hard, you may be able to mislead the machine on one of these aspects, but nobody can control language, tone of voice and physical reactions all at the same time - unless they are absolutely convinced they are speaking the truth.
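In machine-learning terms, this amounts to feature-level fusion: each channel contributes a feature vector, and a single classifier judges the combined picture. The sketch below is again a hedged illustration on synthetic data - the channel names, dimensions and classifier choice are assumptions, not the AVATAR design.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200                          # synthetic interviews
y = rng.integers(0, 2, size=n)   # 1 = deceptive (made-up labels)

def channel(dim, signal=0.3):
    """Synthetic sensor channel: noise plus a weak label-dependent signal."""
    return rng.normal(size=(n, dim)) + signal * y[:, None]

movement = channel(6)     # e.g. gesture duration, speed, mimicry score
voice = channel(4)        # e.g. pitch and tone-of-voice statistics
physiology = channel(3)   # e.g. body-temperature deviations
language = channel(5)     # e.g. vocabulary and word-choice features

# Fusion: concatenate all channels into one feature vector per interview.
X = np.hstack([movement, voice, physiology, language])
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

Because the classifier weighs all channels at once, suppressing the signal in one channel (say, keeping your hands still) still leaves the others to give you away.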

Machines may deliver justice with better accuracy than people, but we will have to be careful how we program them, and keep understanding how they arrive at their conclusions. If you put in racist data or algorithms, you will get racist output. For now, Van der Zee is glad that lie-detector results are not allowed as evidence in Dutch courts, but this may well change in the future if progress continues. The AVATAR screening robots are already being used at several European and US airports, whilst innovations like her lie-detecting suit may help police to assess statements.
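The "racist data in, racist output" point is easy to demonstrate. In the hypothetical sketch below, a classifier is trained on flagging decisions made by a prejudiced screener; the group labels and flag rates are synthetic assumptions, but the mechanism is general.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)    # 0 = majority, 1 = minority
behaviour = rng.normal(size=(n, 4))   # behaviour is identical across groups

# Biased historical labels: minority travellers were flagged far more
# often, regardless of behaviour.
flagged = (rng.random(n) < np.where(group == 1, 0.4, 0.1)).astype(int)

X = np.column_stack([behaviour, group])  # group membership leaks into features
clf = LogisticRegression(max_iter=1000).fit(X, flagged)

for g in (0, 1):
    rate = clf.predict_proba(X[group == g])[:, 1].mean()
    print(f"predicted flag probability, group {g}: {rate:.2f}")

The model dutifully reproduces the screener's prejudice (roughly 0.10 versus 0.40), which is why auditing the training data matters as much as the algorithm itself.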

Ramses Singeling

Sophie van der Zee in action during King's Day, convincing the crowd to vote for her and make her New Scientist Wetenschapstalent 2018.

You can still vote until 6 May.
