Close up of a wearable camera © Abram Schonfeldt, 2022

Continuous first-person point-of-view images – a major leap forward in behavioural understanding or an unethical step too far?

Blog post by Federica Lucivero and Abram Schonfeldt

We have been running the Big Data Ethics Forum since 2017 (with a long interruption due to the pandemic and my maternity leave (FL)). The goal of the forum is to create a safe space where researchers can discuss ethical problems and share best practices: a sort of ethics clinic. We try to keep meetings interactive and informal, and lately we have experimented with having a panel of presenters instead of a single speaker, which has worked very well.

In the most recent forum, held on 10 November at the Big Data Institute, a group of researchers came together to discuss the ethical issues that arise in their research involving wearable cameras. Typically, these matchbox-sized cameras are worn on lanyards around participants' necks and capture images of daily living roughly every twenty seconds. They provide a new means of capturing lifestyle choices and environmental exposures that could transform our understanding of the health consequences of people's physical activity behaviours, pollution exposure, or even eating habits. Researchers using wearable cameras face both technical issues (such as the devices' limited battery life and the difficulty of annotating images at scale) and ethical ones. While technological advances and increasingly accurate deep learning methods promise to lower the technical barriers, this forum was a space to explore the ethical issues of using these cameras in research settings.

Six panellists presented during the Forum, each sharing ethical problems encountered at different stages of their research. The panel consisted of: Aiden Doherty (Professor of Biomedical Informatics, Big Data Institute), who brought us up to speed on the use of wearable cameras in health research; Laura Brocklebank (Research Fellow in Epidemiology / Statistics / Accelerometry, University College London), who discussed takeaways from recent data collection in a study involving 85 British older adults; Sally Fenton (Associate Professor in Lifestyle Behaviour Change, University of Birmingham), who reflected on the usefulness of involving participants in developing wearable camera protocols and on securing NHS approval; Goylette Chami (Associate Professor and Robertson Fellow in Infectious Disease Epidemiology, Big Data Institute), who explored the challenge of justifying the public benefit of using wearable cameras in low-income settings (Uganda) and infectious disease research; Scott Small (Researcher in Wearable Sensors, Big Data Institute), who provided insights into the type of data the cameras produce and the information that can be drawn from it; and Ronald Clark (Associate Professor, Department of Computer Science, University of Oxford), who reflected on the computer vision aspects of processing image data.

Two issues that sparked lively discussion concerned the role of third parties inadvertently filmed by research participants and the complexity of handling illegal activities captured by the cameras. On the first issue, cameras may film bystanders, and there is a sense that although bystanders are not research participants, researchers and study participants have some responsibilities towards them. Several questions were raised: Should participants ask third parties for consent, or would this place an excessive burden on participants? How could technologies such as face blurring help protect some third parties (see the sketch below)? Who is the participant in the study, and what is the role of other people captured in the images? Can the concept of relational autonomy help to address these issues at a practical level?
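
To make the face-blurring question concrete, here is a minimal sketch of how such redaction might look in practice, using OpenCV's bundled Haar cascade face detector. The function name and parameters are illustrative rather than part of any pipeline discussed at the forum, and a real study would need a far more robust detector, since every missed face is an unredacted bystander.

```python
# Minimal face-blurring sketch (illustrative only, not a study pipeline).
# Requires the opencv-python package; uses the Haar cascade shipped with it.
import cv2

def blur_faces(image_path: str, output_path: str) -> int:
    """Blur detected faces in an image and save the result; return the count."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = image[y:y + h, x:x + w]
        # Scale the blur kernel with the face size (kernel sides must be odd)
        # so that small and large faces end up equally unrecognisable.
        k = max(31, (w // 3) | 1)
        image[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (k, k), 0)
    cv2.imwrite(output_path, image)
    return len(faces)
```

Even with a stronger detector, the failure modes matter ethically: profiles, occluded faces, and poor lighting are common in free-living wearable camera images, so automated blurring reduces, but does not eliminate, the exposure of bystanders.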

The second issue examined the ambiguities and difficulties raised when cameras inadvertently capture illegal and sensitive activities. The discussion explored the conflict researchers face in balancing their responsibilities to report, or even provide data to, law enforcement authorities against their responsibility to preserve the autonomy and privacy of research participants. It also raised the need for safeguarding procedures to prevent emotional distress in annotators who may encounter upsetting images.

However, we also saw that certain sensitive activities are unlikely to be caught on camera, and that this too needs to be considered during study design. Participants can remove their cameras at any time, so when an exposure of interest overlaps with activities that participants feel uneasy having captured, that exposure becomes difficult to study. During the discussion, this was illustrated by the difficulty of studying fishing as a high-risk exposure for particular infections in settings where fishing is highly regulated, and by the potential difficulty of studying social interactions when anecdotal evidence suggests that participants prefer to wear cameras on days when they have no social events arranged.

Though this forum raised many questions, it also yielded practical suggestions for carrying out research with wearable cameras: fitting privacy shutters so that recording can be blocked, referring to existing protocols and precedents when seeking ethics approval, allowing participants to request data removal without question, and distributing help cards that participants can pass on to third parties curious about the study. The discussion remained lively throughout, and we appreciated the many questions and comments from researchers doing similar studies and encountering similar problems, as well as the reflections from colleagues at the Ethox Centre, who provided advice on important concepts, such as relational autonomy, that helped frame the discussion. There are plans to turn these discussions into more concrete guidelines to support research practice and into educational tools, but we will talk about those in another post!

The core idea of the forum is to raise questions and engage in discussion with people from diverse backgrounds, brought together by a shared interest in a specific topic and its ethical dimensions. This format ensures that discussions are both research-driven and ethically complex.

The next Big Data Ethics Forum will focus on 'Automated analyses of images of people: practical ethical issues'. It will take place at the Big Data Institute on 8 March 2023. If you are interested in attending the next forum, or presenting in the future, please email federica.lucivero@ethox.ox.ac.uk.