Artificial intelligence (AI) is slowly but steadily being introduced into medical and healthcare settings. From basic research to applied systems already deployed in hospitals, AI-based systems (AIS) are now used in applications that go far beyond the roles played by the first expert systems of the 1970s and 1980s. AIS now assist not only with diagnosis, but also with risk stratification, support during surgical procedures, and the monitoring of biomarkers in chronically ill patients. This list is far from exhaustive: almost any decision made in healthcare can now be supported by AI.

However, AIS cannot be considered mere tools that leave medical practice and biomedical ethics unaffected. By their nature, and particularly when they are based on machine learning, AIS can be opaque to users or even to their own programmers, becoming so-called black boxes. More generally, AIS can help to compute decision models that are sometimes difficult to explain to non-experts. The use of such AIS raises many ethical questions: how can we trust opaque medical systems, whether black or grey boxes? How should responsibility be shared when these systems are used? How can we define reasonable trade-offs between the opacity, safety, and efficiency of these systems? What is the role of explanation in medicine, and how could AIS disrupt it? Finally, once we understand the normative expectations we direct towards AIS, how can we foster design practices that actually produce the ethical AI-based systems we envision?

These questions are only a sample of the ethical stakes raised by the introduction of AIS in healthcare. These stakes are neither purely theoretical nor purely practical. Instead, they are intertwined in a conundrum, a wicked problem, that we need to tackle collectively, drawing on the expertise of every stakeholder. Through this workshop, we hope to facilitate an interdisciplinary dialogue between technologists, medical practitioners, and ethicists.

Link to workshop programme and registration