1.
JMIR Hum Factors; 10: e40533, 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36409300

ABSTRACT

BACKGROUND: The COVID-19 pandemic raised novel challenges in communicating reliable, continually changing health information to a broad and sometimes skeptical public, particularly around COVID-19 vaccines, which, despite being comprehensively studied, were the subject of viral misinformation. Chatbots are a promising technology to reach and engage populations during the pandemic. To inform and communicate effectively with users, chatbots must be highly usable and credible. OBJECTIVE: We sought to understand how young adults and health workers in the United States assessed the usability and credibility of a web-based chatbot called Vira, created by the Johns Hopkins Bloomberg School of Public Health and IBM Research using natural language processing technology. Using a mixed method approach, we sought to rapidly improve Vira's user experience to support vaccine decision-making during the peak of the COVID-19 pandemic. METHODS: We recruited racially and ethnically diverse young people and health workers, with both groups from urban areas of the United States. We used the validated Chatbot Usability Questionnaire to understand the tool's navigation, precision, and persona. We also conducted 11 interviews with health workers and young people to understand the user experience, whether they perceived the chatbot as confidential and trustworthy, and how they would use the chatbot. We coded and categorized emerging themes to understand the determining factors for participants' assessment of chatbot usability and credibility. RESULTS: In all, 58 participants completed a web-based usability questionnaire and 11 completed in-depth interviews. Most questionnaire respondents said the chatbot was "easy to navigate" (51/58, 88%) and "very easy to use" (50/58, 86%), and many (45/58, 78%) said its responses were relevant. The mean Chatbot Usability Questionnaire score was 70.2 (SD 12.1) and scores ranged from 40.6 to 95.3. 
Interview participants felt the chatbot achieved high usability due to its strong functionality, performance, and perceived confidentiality and that the chatbot could attain high credibility with a redesign of its cartoonish visual persona. Young people said they would use the chatbot to discuss vaccination with hesitant friends or family members, whereas health workers used or anticipated using the chatbot to support community outreach, save time, and stay up to date. CONCLUSIONS: This formative study conducted during the pandemic's peak provided user feedback for an iterative redesign of Vira. Using a mixed method approach provided multidimensional feedback, identifying how the chatbot worked well (being easy to use, answering questions appropriately, and using credible branding) while offering tangible steps to improve the product's visual design. Future studies should evaluate how chatbots support personal health decision-making, particularly in the context of a public health emergency, and whether such outreach tools can reduce staff burnout. Randomized studies should also be conducted to measure how chatbots countering health misinformation affect user knowledge, attitudes, and behavior.
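The Chatbot Usability Questionnaire used in this study yields a 0-100 score (here, mean 70.2, range 40.6-95.3). As a rough illustration of how such a score is derived, the sketch below assumes the standard published CUQ scoring: 16 items rated on a 5-point Likert scale, with odd-numbered items worded positively and even-numbered items worded negatively (reverse-scored), each half normalized so the total falls between 0 and 100.

```python
def cuq_score(responses):
    """Compute a Chatbot Usability Questionnaire (CUQ) score on a 0-100 scale.

    `responses` is a list of 16 Likert ratings (1-5), in questionnaire order.
    Odd-numbered items (indices 0, 2, ...) are positively worded; even-numbered
    items (indices 1, 3, ...) are negatively worded and reverse-scored.
    """
    if len(responses) != 16 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 16 ratings between 1 and 5")
    pos = sum(responses[0::2])  # positive items: raw sum ranges 8-40
    neg = sum(responses[1::2])  # negative items: raw sum ranges 8-40
    # Each half contributes 0-32 points; normalize the 0-64 total to 0-100.
    return ((pos - 8) + (40 - neg)) / 64 * 100
```

For example, a respondent who strongly agrees with every positive item and strongly disagrees with every negative item scores 100, while uniformly neutral ratings score 50. The per-respondent scores are then averaged to produce a group mean like the 70.2 reported above.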

2.
J Med Internet Res; 24(7): e38418, 2022 Jul 6.
Article in English | MEDLINE | ID: mdl-35737898

ABSTRACT

BACKGROUND: Automated conversational agents, or chatbots, have a role in reinforcing evidence-based guidance delivered through other media and offer an accessible, individually tailored channel for public engagement. In early-to-mid 2021, young adults and minority populations disproportionately affected by COVID-19 in the United States were more likely to be hesitant toward COVID-19 vaccines, citing concerns regarding vaccine safety and effectiveness. Successful chatbot communication requires purposive understanding of user needs. OBJECTIVE: We aimed to review the acceptability of messages to be delivered by a chatbot named VIRA from Johns Hopkins University. The study investigated which message styles were preferred by young, urban-dwelling Americans as well as public health workers, since we anticipated that the chatbot would be used by the latter as a job aid. METHODS: We conducted 4 web-based focus groups with 20 racially and ethnically diverse young adults aged 18-28 years and public health workers aged 25-61 years living in or near eastern-US cities. We tested 6 message styles, asking participants to select a preferred response style for a chatbot answering common questions about COVID-19 vaccines. We transcribed, coded, and categorized emerging themes within the discussions of message content, style, and framing. RESULTS: Participants preferred messages that began with an empathetic reflection of a user concern and concluded with a straightforward, fact-supported response. Most participants disapproved of moralistic or reasoning-based appeals to get vaccinated, although public health workers felt that such strong statements appealing to communal responsibility were warranted. Responses tested with humor and testimonials did not appeal to the participants. 
CONCLUSIONS: To foster credibility, chatbots targeting young people with vaccine-related messaging should aim to build rapport with users by deploying empathic, reflective statements, followed by direct and comprehensive responses to user queries. Further studies are needed to inform the appropriate use of user-customized testimonials and humor in the context of chatbot communication.


Subjects
COVID-19 Vaccines, COVID-19, Adolescent, COVID-19/prevention & control, COVID-19 Vaccines/therapeutic use, Communication, Humans, Public Health, Qualitative Research, United States, Young Adult
3.
Nature; 591(7850): 379-384, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33731946

ABSTRACT

Artificial intelligence (AI) is defined as the ability of machines to perform tasks that are usually associated with intelligent beings. Argument and debate are fundamental capabilities of human intelligence, essential for a wide range of human activities, and common to all human societies. The development of computational argumentation technologies is therefore an important emerging discipline in AI research [1]. Here we present Project Debater, an autonomous debating system that can engage in a competitive debate with humans. We provide a complete description of the system's architecture, a thorough and systematic evaluation of its operation across a wide range of debate topics, and a detailed account of the system's performance in its public debut against three expert human debaters. We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical 'grand challenges' pursued by the AI research community over the past few decades. We suggest that such challenges lie in the 'comfort zone' of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.


Subjects
Artificial Intelligence, Competitive Behavior, Dissent and Disputes, Human Activities, Artificial Intelligence/standards, Humans, Natural Language Processing