Ethical & legal challenges of AI: from principles to practices

Health AI comes with high hopes for tailored care, early risk detection and information integration. These developments, however, also raise various ethical and legal challenges, for example concerning bias, discrimination, privacy, responsibility and transparency. Solutions to such issues are predominantly sought in ethical principles for AI, the development of technical solutions, and better regulatory frameworks (e.g. the GDPR and the MDR). What these approaches have in common is the idea that the ethical and legal issues surrounding AI can be solved upfront.

These current approaches in law and ethics are, however, not fully equipped to address the challenges raised by AI in health and welfare. Although such approaches are undoubtedly valuable, embedding AI responsibly in health and welfare requires more than technical solutions, formal rules and regulations, and the establishment of ethical principles. In this empirical research program, we study how ethical decisions are made and negotiated in the daily practices of health practitioners, data scientists and other stakeholders (such as patient groups) in various AI initiatives. By studying how ethical arguments are negotiated between different groups in medical practice, how ethical considerations are made and justified, and how data scientists, technologists and medical practitioners jointly work on establishing norms and shared values in concrete AI initiatives, we learn how ethical and legal challenges play out in practice.

We shift the perspective from devising suitable legal-ethical frameworks and principles for AI in healthcare towards ethnographic and comparative approaches that study ethics-in-practice and responsibility-in-the-making. In this research line we focus on the following questions:

  1. Responsible knowledge practices: how do AI initiatives reconfigure responsible knowledge practices in various healthcare domains? How can we find productive ways to combine the domain knowledge of medical experts, the experiential knowledge that patients have gathered through their lived experiences, and the technical knowledge produced by data scientists to facilitate human-centred forms of AI that enrich our understanding of health and illness?
  2. Ethical work in situ: how are ethical decisions made within the mundane work of health practitioners, data scientists and other stakeholders (such as patient groups) in machine learning initiatives? How do medical professionals, civil servants, healthcare managers and regulators make ethical decisions in real-life contexts, where information is always partial and incomplete?
  3. From rules to resilience: how can we make AI initiatives in healthcare more resilient? Instead of fixing responsibilities in rules, how can we explore the ‘texture’ of responsible AI practices in healthcare? An important dimension here is the need to balance formal regulation with discretionary space for professionals to make decisions on the ground, as the layering of multiple, conflicting rules can obscure healthcare work and increase regulatory pressure, which may reduce the space to provide good care.
  4. Legitimacy in practice: the uptake of AI depends in large part on how the technology is able to project legitimacy to potential users and other stakeholders. This raises questions about what makes AI legitimate to various audiences, how legitimacy is constructed, and what legitimacy entails for a technology like AI in healthcare.
