AI: A new era of (in)visible digital threats

Portrait photo of Marc Schuilenburg

Although the public debate on AI and the role of algorithms in our society has erupted, several topics remain invisible and underexposed. These include the impact of AI on the nature and extent of crime: what new forms of crime does AI lead to? Who are the victims of AI crime? And what measures can be taken to counter it? AI turns out to be not only a powerful tool for innovation, but also a potential threat with major societal implications. We discussed this with Marc Schuilenburg, Professor of Digital Surveillance at Erasmus School of Law, who recently published a book on the subject.

In his book Making Surveillance Public - Why You Should Be More Woke About AI and Algorithms, Schuilenburg shares his research on the deployment of AI applications, the rise of AI crime and how AI is changing the issue of security. He also explains why other forms of public accountability are necessary. In a forthcoming scientific article, Schuilenburg dives deeper into the issues surrounding the development of AI crime.

As society digitises, so does crime

Today's society is in the midst of a digital revolution: not only is daily life digitising, but crime is changing as well. In a world where technological developments are the norm, their downsides are also growing. While the discussion on AI and algorithms is in full swing, a crucial aspect remains underexposed: the impact of AI on crime. “It is important to distinguish between cybercrime, such as online fraud and cyberbullying, and AI crime, because the use of AI will significantly broaden and facilitate the playing field of cybercrime. AI can also lead to new forms of crime that can cause more social harm than cybercrime”, Schuilenburg explains.

Three forms of AI crime

In his paper, Schuilenburg distinguishes three forms of AI crime: crime with AI, crime targeting AI and crime committed by AI. In the first form, AI is used as a tool for traditional forms of crime. This includes threshold-lowering chatbots that make crime accessible to people without technical knowledge, but also the use of deepfakes and voice cloning for criminal offences such as spreading disinformation, and for pornographic and fraudulent purposes. In the second form, an AI system is the target of crime; an example is the hacking of autonomous vehicles for terrorist purposes. The third form refers to crime made possible only by AI, with human actions taking a back seat. This raises important questions of liability and criminal responsibility, as AI independently makes decisions that may be considered criminal under the law.

In addition, Schuilenburg stresses that much has changed in the frequency of online crime, and he expects a further increase: “The latest figures show that recorded crime in Western countries has fallen by more than a quarter since 2002. But that decline does not extend to cybercrime. In 2022, fifteen per cent of Dutch people aged fifteen or older fell victim to one or more forms of online crime. That is over two million people. The extent of cybercrime is expected to increase further in the coming years, and AI crime is now adding to that.”

The future of (online) safety

Although the impact of AI crime on society will be significant, Schuilenburg argues that we should not thwart developments but rather guide them in the right direction: “Technological innovation is inevitable”. In this regard, he sees a key role for government, industry and knowledge centres in developing AI- and algorithm-based innovations for security issues, provided they emphasise three sets of public values: driving, anchoring and process values. “Only by considering all public values can we increase security for everyone in society”, Schuilenburg explains.

More information

Symposium on AI Experiences
On 8 and 9 April, a private symposium will take place at which experts from different countries and domains will share their views on 'AI Experiences'. Much has been said about the technical and legal sides of AI, but remarkably little about its human factor. When humans and technology are intertwined, it is necessary to examine how individuals relate to AI in practice and what their personal experiences are. Keynote speakers include the Belgian philosopher Antoinette Rouvroy and parents affected by the Dutch childcare benefits scandal.

Related links
Interview with Vrij Nederland: 'Surveillance has become a permanent part of our lives, and that will only increase'
Interview with Follow the Money
Interview with TNO: AI is surveilling with abandon. Does that make our society safer?
Interview with De Volkskrant: 1.2 million front doors in the Netherlands have a doorbell with a camera
