Abstract

The literature on the ethics of machine learning in healthcare contains a great deal of work on algorithmic fairness. But this focus on fairness has not been matched with sufficient attention to the relationship between machine learning and distributive justice in healthcare. A significant number of clinical prediction models have been developed that could be used to inform the allocation of scarce healthcare resources. As such, philosophical theories of distributive justice are relevant when considering the ethics of their design and implementation. This paper considers the relationship between machine learning in healthcare and distributive justice, focusing on four aspects of algorithmic design and deployment: the choice of target variable, the model's socio-technical context, the choice of input variables, and the membership of the datasets on which models are trained and validated. The paper concludes with procedural recommendations for how these considerations should be accounted for in the design and implementation of such models.

Original publication

DOI: 10.5406/21521123.62.1.03
Type: Journal article
Journal: American Philosophical Quarterly
Publisher: University of Illinois Press
Publication Date: 01/01/2025
Volume: 62
Pages: 33–52