
Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape, such as therapeutic support for people with mental health problems who lack access to care. At the same time, the adoption of CAI poses many risks that need in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is best understood as a tool or as an agent. This question frames the subsequent ethical analysis of CAI, which centres on topics of (self-)knowledge, (self-)understanding, and relationships. Second, we propose further conceptual and ethical analysis of human-AI interaction and argue that CAI cannot be considered an equal partner in a conversation, as a human therapist can. Instead, CAI's role in a conversation should be restricted to specific functions.

Original publication

DOI: 10.1080/15265161.2022.2048739

Type: Journal article

Publication Date: 1 May 2023

Volume: 23

Pages: 4-13

Total pages: 9

Keywords: Artificial intelligence, agency, ethics, psychotherapy, therapeutic alliance, communication, ethical analysis, mental health, humans