Addressing 6 challenges in generative AI for digital health: A scoping review.
Templin, Tara; Perez, Monika W; Sylvia, Sean; Leek, Jeff; Sinnott-Armstrong, Nasa.
Affiliation
  • Templin T; Department of Health Policy and Management, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America.
  • Perez MW; Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America.
  • Sylvia S; Department of Genome Sciences, University of Washington, Seattle, Washington, United States of America.
  • Leek J; Carolina Population Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America.
  • Sinnott-Armstrong N; Department of Health Policy and Management, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America.
PLOS Digit Health ; 3(5): e0000503, 2024 May.
Article in English | MEDLINE | ID: mdl-38781686
ABSTRACT
Generative artificial intelligence (AI) can exhibit biases, compromise data privacy, misinterpret prompts crafted as adversarial attacks, and produce hallucinations. Despite the potential of generative AI for many applications in digital health, practitioners must understand these tools and their limitations. This scoping review pays particular attention to the challenges posed by generative AI technologies in medical settings and surveys potential solutions. Using PubMed, we identified 120 articles published by March 2024 that reference and evaluate generative AI in medicine, from which we synthesized themes and suggestions for future work. After first discussing general background on generative AI, we focus on collecting and presenting 6 key challenges for digital health practitioners and specific measures that can be taken to mitigate them. Overall, bias, privacy, hallucination, and regulatory compliance were frequently considered, while other concerns around generative AI, such as overreliance on text models, adversarial misprompting, and jailbreaking, were not commonly evaluated in the current literature.
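The PubMed-based literature search described in the abstract can be approximated programmatically. The sketch below uses the NCBI E-utilities esearch endpoint to retrieve PMIDs for articles published through March 2024; the query string, date range, and result limit are illustrative assumptions, not the authors' actual search strategy.

```python
# Illustrative sketch: approximating a scoping-review literature search
# against PubMed via the NCBI E-utilities REST API.
# The search terms and date bounds below are assumptions for demonstration,
# NOT the exact query used by the review's authors.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": '("generative artificial intelligence" OR "large language model") AND medicine',
    "datetype": "pdat",        # filter on publication date
    "mindate": "2000/01/01",   # illustrative lower bound (mindate/maxdate must be used together)
    "maxdate": "2024/03/31",   # articles published by March 2024
    "retmode": "json",
    "retmax": 200,             # number of PMIDs returned per request
}

response = requests.get(ESEARCH_URL, params=params, timeout=30)
response.raise_for_status()
result = response.json()["esearchresult"]

print(f"Total matching records: {result['count']}")
print("First PMIDs:", result["idlist"][:10])
```

The returned PMIDs could then be passed to the efetch endpoint to pull abstracts for screening, which is how a record count such as the 120 included articles would typically be assembled and deduplicated in practice.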

Full text: 1 | Collection: 01-international | Database: MEDLINE | Language: English | Journal: PLOS Digit Health | Year: 2024 | Document type: Article | Country of affiliation: United States
