Patient and Consumer Safety Risks When Using Conversational Assistants for Medical Information: An Observational Study of Siri, Alexa, and Google Assistant.
Bickmore, Timothy W; Trinh, Ha; Olafsson, Stefan; O'Leary, Teresa K; Asadi, Reza; Rickles, Nathaniel M; Cruz, Ricardo.
Affiliations
  • Bickmore TW; College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  • Trinh H; College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  • Olafsson S; College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  • O'Leary TK; College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  • Asadi R; College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  • Rickles NM; School of Pharmacy, University of Connecticut, Storrs, CT, United States.
  • Cruz R; General Internal Medicine, Boston Medical Center, Boston, MA, United States.
J Med Internet Res; 20(9): e11510, 2018 Sep 04.
Article in English | MEDLINE | ID: mdl-30181110
BACKGROUND: Conversational assistants, such as Siri, Alexa, and Google Assistant, are ubiquitous and are beginning to be used as portals for medical services. However, the potential safety issues that arise when patients and consumers use conversational assistants for medical information are not understood.

OBJECTIVE: To determine the prevalence and nature of the harm that could result from patients or consumers using conversational assistants for medical information.

METHODS: Participants were given medical problems to pose to Siri, Alexa, or Google Assistant and asked to determine an action to take based on information from the system. Assignment of tasks and systems was randomized across participants, and participants queried the conversational assistants in their own words, making as many attempts as needed until they either reported an action to take or gave up. Participant-reported actions for each medical task were rated for patient harm using an Agency for Healthcare Research and Quality (AHRQ) harm scale.

RESULTS: Fifty-four subjects completed the study, with a mean age of 42 years (SD 18). Twenty-nine (54%) were female, 31 (57%) were Caucasian, and 26 (50%) were college educated. Only 8 (15%) reported using a conversational assistant regularly, while 22 (41%) had never used one and 24 (44%) had tried one "a few times." Forty-four (82%) used computers regularly. Subjects were able to complete only 168 (43%) of their 394 tasks. Of these, 49 (29%) reported actions that could have resulted in some degree of patient harm, including 27 (16%) that could have resulted in death.

CONCLUSIONS: Reliance on conversational assistants for actionable medical information represents a safety risk for patients and consumers. Patients should be cautioned not to use these technologies for answers to medical questions they intend to act on without further consultation from a health care provider.
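As a quick arithmetic check of the proportions in the RESULTS section, the following minimal Python sketch recomputes the reported percentages from the published counts. The denominators (394 attempted tasks for the completion rate, 168 completed tasks for the harm figures) follow from the abstract's wording; the function and variable names are illustrative only.

    # Recompute the abstract's reported percentages from its raw counts.
    def pct(numerator: int, denominator: int) -> str:
        """Format a count as 'n (p%)' using whole-percent rounding."""
        return f"{numerator} ({numerator / denominator:.0%})"

    attempted = 394  # total tasks assigned across all subjects
    completed = 168  # tasks for which subjects reported an action to take
    harmful = 49     # completed tasks whose reported action was rated as potentially harmful
    fatal = 27       # completed tasks whose reported action could have resulted in death

    print("Completed:", pct(completed, attempted))     # -> 168 (43%)
    print("Potential harm:", pct(harmful, completed))  # -> 49 (29%)
    print("Potential death:", pct(fatal, completed))   # -> 27 (16%)

Each recomputed value matches the abstract, confirming that the harm percentages are expressed relative to the 168 completed tasks rather than the 394 attempted ones.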

Full text: 1 | Collections: 01-international | Database: MEDLINE | Main subject: Social Media / Health Information Exchange | Study type: Clinical_trials / Etiology_studies / Observational_studies / Risk_factors_studies | Limits: Adult / Female / Humans / Male | Language: En | Journal: J Med Internet Res | Journal subject: MEDICAL INFORMATICS | Publication year: 2018 | Document type: Article | Country of affiliation: United States
