
What is a good use of digital mental health technologies, including AI therapy chatbots (AITCs)? In my commentary on Amitabha Palmer and David Schwan's article "Digital Mental Health Tools and AI Therapy Chatbots: A Balanced Approach to Regulation," I analyze the challenge of characterizing an AITC, given its simulation of human characteristics and abilities. The core challenge posed by this type of technology is an ethical gap: AITCs simulate therapeutic conversations, or even relationships, but cannot fulfill the ethical requirements connected with them. This poses risks for individuals seeking mental health support. My central recommendations are to establish standards for AITCs' interaction and their degree of humanlikeness; to refrain from promoting these chatbots as capable of forming therapeutic relationships until there is more evidence about the technology's long-term effects and the ethical gap is meaningfully addressed; and to approach AITCs not as humanlike agents but as systems that can create conditions in which human values, and the values of mental health care, are embedded and embodied.

Original publication

DOI: 10.1002/hast.5010
Type: Journal article
Publication Date: 2025-01-01
Volume: 55
Pages: 33–35
Total pages: 2
Keywords: agent, bioethics, ethical requirements, human values, humanlike, mental health care, therapy chatbot, Humans, Artificial Intelligence, Generative Artificial Intelligence