Composite systems in public safety and the AI Act

Blogpost for the AI-MAPS project by Evert Stamhuis

March 2024 saw the final approval of the AI Act in the European legislative procedure, which attracted celebrations and congratulatory responses. The EU had managed to launch the first and most comprehensive regulation of AI in the world. Optimists predict the dissemination of these standards across the globe, comparable to the spread of personal data protection standards after their re-enactment in the GDPR in 2018. Whether or not that comes true, who can reliably predict the future? What is unmistakable is that AI production and deployment will be affected by these new rules: for every developer who targets the European market and for every user within the EU territory. One important novelty is the introduction of a CE marking for certain categories of AI systems, for example in public administration.

In the field of AI for public safety, the deployers of the technology are first of all public authorities, but not exclusively. The presence of AI in smart cameras in private safety systems, in access systems for apartment blocks and in smart doorbells cannot be ignored. All these devices will need to be tested for compatibility with the AI Act; some will need CE marking, others will not. For such systems, the coming two years are reserved for drafting industry standards and launching the CE marking process.

Public authorities that are considering the development and deployment of such AI-powered practices face an important normative challenge. It is unlikely that only fully developed, CE-marked systems will be purchased from industry. We may also see the procurement of a trained model that is subsequently retrained with data relevant to the context of deployment. On top of that comes integration into the workflow of the office, which introduces a number of interfaces. It is such composite systems that will ultimately be released to impact the practices of individual public officers and the persons in the streets.

Because of this, the normative challenge will be to reliably answer a number of legal and ethical questions. Is it right to provide the data for further training to an outside industry partner? If yes, what needs to happen with those data afterwards, and how can this be covered satisfactorily in contract clauses? May the industry partner sell the retrained model to a next user, and how does that translate into the contract with the first buyer, whose data were used for retraining? Would it be ethical for the public authority to avoid such endeavors altogether, practicing only one-off purchases of products and keeping the development of further components in-house? Such normative questions will be on our AI-MAPS agenda in the coming months, to be discussed with our consortium partners, with the aim of arriving at sensible recommendations.
