Regulation of Artificial Intelligence remains a challenge

Evert Stamhuis

From production robots and chatbots to personalised suggestions from Netflix and Spotify, artificial intelligence has a growing influence on our lives. In April 2021, the European Commission drafted a proposal to regulate artificial intelligence (AI). This proposal, the AI Act, is a global frontrunner, but fundamental challenges remain. Evert Stamhuis, Professor of Law & Innovation at Erasmus School of Law, discusses these in an article in Science | Business. The article follows his recent participation in a webinar on worldwide AI regulation.

The AI Act is a step forward in reining in artificial intelligence. However, given growing global interconnectedness, geopolitical tensions, and rising polarisation, it is essential that a global consensus is reached on the use of AI.

Although many other countries have already developed AI laws, those laws currently diverge in their primary goals and in the types of artificial intelligence they regulate. China, for example, also regulates the use of algorithms, while the United States aims to regulate as little as possible at the federal level. Differing regulatory frameworks across countries will make it harder to develop technology globally and for businesses to adapt.

EU regulation

The EU AI Act sorts artificial intelligence into three risk categories: unacceptable risk, high risk, and a residual category. The first category will be prohibited, the second regulated, and the last left unregulated. Stamhuis considers such a classification insufficient for the future: “You cannot reach certainty if you use such simple categories.”

The dynamic nature of AI makes the high-risk category hard to define in the long term. “A definition that holds for more than five years is not viable”, explains the professor. Changing EU legislation is an overly complicated process. “That is why there is such strong resistance to changing these rules quickly, which makes them inflexible. Normally, flexibility is built in by setting up an expert procedure that periodically revises the categories. That is also what the European Commission proposes in the AI Act. This subject, however, requires fundamental political debate and cannot be delegated to an expert procedure, in which interested parties no longer have a say”, concludes Stamhuis.

Stamhuis also has his doubts about the part of the AI Act that concerns the certification and registration of AI systems: “What do we actually certify? The models? The systems? Moreover, what happens when the procedure changes or the model improves? Do we keep a trustworthy register?”

More information

Read the entire article in Science | Business here.
