Results 1 - 3 of 3
1.
JMIR Ment Health ; 11: e62679, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39321450

ABSTRACT

BACKGROUND: Empathy is a driving force in our connection to others, our mental well-being, and our resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and how transparency plays a role in user emotions. OBJECTIVE: We aim to understand how empathy shifts between human-written and AI-written stories, and how these findings inform the ethical implications and human-centered design of using mental health chatbots as objects of empathy. METHODS: We conducted crowdsourced studies with 985 participants who each wrote a personal story and then rated their empathy toward 2 retrieved stories, one written by a language model and the other written by a human. Our studies varied whether we disclosed that a story was written by a human or an AI system, to see how transparency about authorship affects empathy toward the narrator. We conducted mixed-methods analyses: through statistical tests, we compared participants' self-reported state empathy toward the stories across conditions. In addition, we qualitatively coded open-ended feedback about reactions to the stories to understand how and why transparency affects empathy toward human versus AI storytellers. RESULTS: We found that participants empathized significantly more with human-written than with AI-written stories in almost all conditions, regardless of whether they were aware (t196=7.07, P<.001, Cohen d=0.60) or not aware (t298=3.46, P<.001, Cohen d=0.24) that an AI system wrote the story. We also found that participants reported greater willingness to empathize with AI-written stories when there was transparency about the story author (t494=-5.49, P<.001, Cohen d=0.36). CONCLUSIONS: Our work sheds light on how empathy toward AI or human narrators is tied to the way the text is presented, thus informing ethical considerations for empathetic artificial social support or mental health chatbots.
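The comparisons above are reported as independent-samples t tests with Cohen d effect sizes. As a rough illustrative sketch only (not the authors' analysis code), such a comparison could be computed as follows; the ratings, group sizes, and scale are invented for demonstration.

    # Sketch: independent-samples t test plus Cohen's d (pooled SD)
    # on hypothetical state-empathy ratings for two conditions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    empathy_human = rng.normal(5.4, 1.0, 100)  # ratings for human-written stories (hypothetical)
    empathy_ai = rng.normal(4.8, 1.0, 100)     # ratings for AI-written stories (hypothetical)

    t_stat, p_value = stats.ttest_ind(empathy_human, empathy_ai)

    n1, n2 = len(empathy_human), len(empathy_ai)
    pooled_sd = np.sqrt(((n1 - 1) * empathy_human.var(ddof=1)
                         + (n2 - 1) * empathy_ai.var(ddof=1)) / (n1 + n2 - 2))
    cohens_d = (empathy_human.mean() - empathy_ai.mean()) / pooled_sd

    print(f"t({n1 + n2 - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")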


Subject(s)
Artificial Intelligence, Empathy, Social Support, Humans, Artificial Intelligence/ethics, Female, Male, Adult, Young Adult, Mental Health, Middle Aged, Narration
2.
Int J Artif Intell Educ ; : 1-59, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-35935456

ABSTRACT

Artificial Intelligence (AI) is revolutionizing many industries and becoming increasingly ubiquitous in everyday life. To empower children growing up with AI to navigate society's evolving sociotechnical context, we developed three middle school AI literacy curricula: Creative AI, Dancing with AI, and How to Train Your Robot. In this paper, we discuss how we leveraged three design principles (active learning, embedded ethics, and low barriers to access) to engage students effectively in learning to create and critique AI artifacts. During the summer of 2020, we recruited and trained in-service middle school teachers from across the United States to co-instruct online workshops with students from their schools. In the workshops, a combination of hands-on unplugged activities and programming activities facilitated students' understanding of AI. As students explored technical concepts in tandem with ethical ones, they developed a critical lens to better grasp how AI systems work and how they impact society. We sought to meet the needs of students from a range of backgrounds by minimizing the prerequisite knowledge and technology resources needed to participate. We conclude with lessons learned and design recommendations for future AI curricula, especially for K-12 in-person and virtual learning.

3.
Int J Artif Intell Educ ; : 1-35, 2022 May 09.
Article in English | MEDLINE | ID: mdl-35573722

ABSTRACT

The rapid expansion of artificial intelligence (AI) necessitates promoting AI education at the K-12 level. However, educating young learners to become AI-literate citizens poses several challenges. The components of AI literacy are ill-defined, and it is unclear to what extent middle school students can engage in learning about AI as a sociotechnical system with socio-political implications. In this paper, we posit that students must learn three core domains of AI: technical concepts and processes, ethical and societal implications, and career futures in the AI era. This paper describes the design and implementation of the Developing AI Literacy (DAILy) workshop, which aimed to integrate middle school students' learning across the three domains. We found that after the workshop, most students developed a general understanding of AI concepts and processes (e.g., supervised learning and logic systems). More importantly, they were able to identify bias, describe ways to mitigate bias in machine learning, and begin to consider how AI may affect their future lives and careers. At exit, nearly half of the students described AI as not just a technical subject but one with personal, career, and societal implications. Overall, this finding suggests that incorporating ethics and career futures into AI education is age-appropriate and effective for developing AI literacy among middle school students. This study contributes to the field of AI education by presenting a model for integrating ethics into the teaching of AI that is appropriate for middle school students.
