By Marc Steen.
An overall ambition of the AI-MAPS project is to develop and utilize a framework, and associated methods, to support people in integrating ethical, legal, and societal aspects during the development and deployment of AI systems—with a focus on applications in public safety.
That was a long sentence. Lots of jargon, too. Indeed. But it is our ambition. What we can do is make it a bit more attractive. A bit more practical, also for ourselves. So that we have a practical starting point. What would be nice is one page, with a framework. Almost like a checklist. Let’s call it ‘ELSA in a box’. Here’s a first draft, a paper prototype:
Although the T of Technology is not in the abbreviation (‘ELSA’), we can very well start with a description of the technology or application. What are we talking about? Possibly, you are (still) working on the application; it is not (yet) finished. No worries. On the contrary, that’s great! It means that you can take what you learn from the ELSA exercise into further development. So, let’s start by clarifying the technology or application at hand. What’s on the table? When that is clear, you can start to discuss technology aspects like: robustness, safety, type of machine learning, network architecture, effectiveness, efficiency, reliability, explainability, etc.
For ethical aspects, we can start by looking at what values are at stake. You can invite diverse stakeholders, who may look at different values differently. Furthermore, we can turn to different ethical traditions. Consequentialism, to look at positive and negative impacts. Duty ethics, which looks at duties, e.g., legal obligations, and rights, e.g., human rights—yes, there is overlap with a legal perspective. Relational ethics, to understand how technologies can affect interactions and the distribution of power between people. And virtue ethics, which has the overall ambition to enable people to cultivate virtues that help them live well together—and which looks at technologies as tools. Examples of aspects that can be discussed are: human dignity, human autonomy, fairness, no-harm, freedom, agency, meaningful control, responsibility, etc.
With regard to legal aspects, we can first clarify which legislation is relevant for the application at hand. We can turn to international law, e.g., to treaties that codify human rights, or national law, e.g., constitutional law or criminal law. Typically, in the domain of public safety, we can start by clarifying what government agencies can do. Such a question deals first with legality (is there a legal basis?), and then with necessity, which can be evaluated in terms of subsidiarity and proportionality (can the application of this technology be justified?). Furthermore, we need to look at specific legal aspects, like the right to privacy and non-discrimination, and at how processes of accountability are organized, to promote procedural justice—do citizens have effective ways to seek remedy or repair?
Societal aspects can overlap with ethical and legal aspects. However, there are topics that need to be looked at on the level of society. For example, if we look at a specific application, how can democratic control and government oversight be organized best? That will depend, e.g., on how it is implemented and deployed. Furthermore, there are topics that are of interest to the larger public, like surveillance. For such a topic, we can discuss whether the values that are associated with the technology at hand are (more or less) aligned with values that many people in society espouse. Moreover, we can look at the two-way interaction between society and technology, e.g., probe support from citizens, or look at the technology’s impact on people and on the environment.
Visually, all these aspects fit into one box. This fitting into a box does require a bit of simplification. We can say much more about each aspect. Entire university departments are dedicated to each aspect. In AI-MAPS, we have four PhD students, one each for technology, ethical, legal, and societal aspects.
We are currently trying out various ways to ‘do ELSA’, one of which is this ‘ELSA in a Box’. Please feel free to follow our research project. And please reach out if you want to join our exploration.