Technological developments have undoubtedly changed our world and, as a result, questions concerning moral responsibility are increasingly perplexing. How should we think of responsibility when humans collaborate with technology, for example, where our devices remind us of important appointments, or where clinics employ artificially intelligent decision support systems? Who (or what) can be held responsible when harms or benefits appear to result from technology alone? Some authors suggest a problematic “responsibility gap”, while others promote various “bridging” strategies, from pinning responsibility onto proxy individuals or corporations, to locating novel sorts of group agency in human-machine composites. Few, however, have seriously considered more direct strategies, namely how we might hold machines responsible. Although it may sound far-fetched, seeing machines as responsible can depend less on seeking sophisticated capacities, like consciousness, and more on rethinking the nature of responsibility itself. Focusing on the latter approach, I put forward a pragmatic account of moral responsibility for the technological world. In short, we stand only to gain by rethinking responsibility in ways that accommodate sophisticated technologies and that help us adjust to one another amid our increasing reliance upon technology.
