ABSTRACT
Technology-facilitated abuse in relationships (TAR) is a widespread social problem that has a significant impact on victim-survivors. Most contemporary evidence on TAR focuses on victim-survivor and practitioner perspectives rather than on those of the perpetrators who choose to enact this form of harm. Addressing this deficit, this study explored perpetrators' discourses on the emotions and motivations associated with engaging in TAR. Using the story completion method, 35 self-identified perpetrators of TAR completed story stems describing scenarios that may precede the use of abusive online behaviors. Reflexive thematic analysis generated three themes. Abusive behaviors and negative emotions speaks to maladaptive experiences of anger and/or sadness that can precede a decision to use TAR. A loss of trust, a desire for control describes potential motives for using TAR. Finally, inhibitors of abusive behavior examines the rationales perpetrators give for avoiding TAR behaviors, suggesting avenues for working with perpetrators to refrain from using TAR. We conclude by discussing policy, practice, and research recommendations, including strategies for technology designers and suggestions for primary prevention of and response to TAR.
Subject(s)
Emotions, Motivation, Humans, Emotions/physiology, Anger/physiology, Aggression, Trust

ABSTRACT
This paper presents a critical review of key ethical issues raised by the emergence of mental health chatbots. Chatbots use varying degrees of artificial intelligence and are increasingly deployed in many different domains, including mental health. The technology may sometimes be beneficial, such as when it promotes access to mental health information and services. Yet chatbots raise a variety of ethical concerns that are often magnified in people experiencing mental ill-health. These ethical challenges need to be appreciated and addressed throughout the technology pipeline. After identifying and examining four important ethical issues by means of a recognised ethical framework comprising five key principles, the paper offers recommendations to guide chatbot designers, purveyors, researchers, and mental health practitioners in the ethical creation and deployment of chatbots for mental health.