1.
Article in English | MEDLINE | ID: mdl-39216786

ABSTRACT

OBJECTIVE: To identify and quantify ability bias in generative artificial intelligence large language model chatbots, specifically OpenAI's ChatGPT and Google's Gemini.
DESIGN: Observational study of language usage in generative artificial intelligence models.
SETTING: Investigation-only browser profile restricted to ChatGPT and Gemini.
PARTICIPANTS: Each chatbot generated 60 descriptions of people prompted without a specified functional status, 30 descriptions of people with a disability, 30 descriptions of patients with a disability, and 30 descriptions of athletes with a disability (N=300).
INTERVENTIONS: Not applicable.
MAIN OUTCOME MEASURES: Descriptions produced by the models were parsed into words, which were linguistically classified as conveying favorable qualities or limiting qualities.
RESULTS: Both large language models significantly underestimated the prevalence of disability in a population of people, and linguistic analysis showed that, in both ChatGPT and Gemini, descriptions of people, patients, and athletes with a disability contained significantly fewer favorable qualities and significantly more limitations than descriptions of people without a disability.
CONCLUSIONS: Generative artificial intelligence chatbots demonstrate quantifiable ability bias and often exclude people with disabilities from their responses. Ethical use of these generative large language model chatbots in medical systems should recognize this limitation, and further consideration should be given to developing equitable artificial intelligence technologies.
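The abstract's word-level analysis can be pictured as tallying favorable versus limiting descriptors in each generated description. A minimal sketch follows, assuming placeholder word lists; the authors' actual lexicons, parsing rules, and statistical tests are not given in the abstract, and the sample texts below are illustrative, not model output.

```python
# Illustrative tally of favorable vs. limiting descriptors in generated text.
# FAVORABLE and LIMITING are assumed placeholder lexicons, not the study's.
import re

FAVORABLE = {"strong", "independent", "capable", "resilient", "active"}  # assumed
LIMITING = {"struggles", "dependent", "unable", "confined", "suffers"}   # assumed

def tally(description: str) -> dict:
    """Split a generated description into words and count favorable
    versus limiting descriptors."""
    words = re.findall(r"[a-z']+", description.lower())
    return {
        "favorable": sum(w in FAVORABLE for w in words),
        "limiting": sum(w in LIMITING for w in words),
    }

# Pool counts per prompt group (placeholder descriptions).
groups = {
    "no disability specified": ["An active, independent person, capable at work."],
    "person with a disability": ["A person who struggles and is dependent on others."],
}
for group, descriptions in groups.items():
    totals = {"favorable": 0, "limiting": 0}
    for d in descriptions:
        for key, count in tally(d).items():
            totals[key] += count
    print(group, totals)
```

Pooled counts like these would then feed the group comparisons reported in the RESULTS section.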

2.
Ann Plast Surg; 92(5): 491-498, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38563555

ABSTRACT

BACKGROUND: YouTube is a platform for many topics, including plastic surgery. Previous studies have shown poor educational value in YouTube videos of plastic surgery procedures. The purpose of this study was to evaluate the quality and accuracy of YouTube videos concerning gynecomastia surgery (GS).
METHODS: The phrases "gynecomastia surgery" (GS) and "man boobs surgery" (MB) were queried on YouTube, and the first 50 videos for each search term were examined. The videos were rated using our novel Gynecomastia Surgery Specific Score to measure gynecomastia-specific information, the Patient Education Materials Assessment Tool (PEMAT) to measure understandability and actionability, and the Global Quality Scale to measure general quality.
RESULTS: The most common upload source was a board-certified plastic surgeon (35%), and the most common content category was surgical techniques and consultations (51%). Average scores for the Global Quality Scale (x̄ = 2.25), the Gynecomastia Surgery Specific Score (x̄ = 3.50), and PEMAT Actionability (x̄ = 44.8%) were low, whereas PEMAT Understandability (x̄ = 77.4%) was moderate to high. There was no difference between the GS and MB groups on any scoring modality. Internationally uploaded MB videos tended to originate from Asian countries, whereas GS videos tended to originate from non-US Western countries. Patient uploaders had higher PEMAT Actionability scores than plastic surgeon uploaders.
CONCLUSIONS: The quality and amount of gynecomastia-specific information in GS videos on YouTube are low, and the videos contain few practical take-home points for patients. However, understandability is adequate. Plastic surgeons and professional societies should strive to create high-quality medical media on platforms such as YouTube.
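The per-video ratings described above (Global Quality Scale, the study's Gynecomastia Surgery Specific Score, and PEMAT percentages) are aggregated as group means in the RESULTS. The sketch below shows one way such aggregation could look; the item-level rubrics and data are not provided in the abstract, so the field names and values are placeholders.

```python
# Sketch of aggregating per-video rating scores into group means.
# Values below are placeholder data, not the study's ratings.
from dataclasses import dataclass
from statistics import mean

@dataclass
class VideoRating:
    search_term: str   # "GS" (gynecomastia surgery) or "MB" (man boobs surgery)
    gqs: int           # Global Quality Scale, 1-5
    gsss: int          # Gynecomastia Surgery Specific Score (rubric assumed)
    pemat_u: float     # PEMAT Understandability, percent
    pemat_a: float     # PEMAT Actionability, percent

ratings = [
    VideoRating("GS", 2, 4, 80.0, 40.0),  # placeholder
    VideoRating("MB", 3, 3, 75.0, 50.0),  # placeholder
]

def group_means(rows, term):
    """Mean of each rating scale for one search-term group."""
    sub = [r for r in rows if r.search_term == term]
    return {
        "GQS": mean(r.gqs for r in sub),
        "GSSS": mean(r.gsss for r in sub),
        "PEMAT-U %": mean(r.pemat_u for r in sub),
        "PEMAT-A %": mean(r.pemat_a for r in sub),
    }

for term in ("GS", "MB"):
    print(term, group_means(ratings, term))
```

Comparing the two dictionaries printed here mirrors the GS-versus-MB comparison reported in the abstract, which found no difference between the groups on any scoring modality.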


Subjects
Gynecomastia, Patient Education as Topic, Social Media, Video Recording, Humans, Gynecomastia/surgery, Patient Education as Topic/standards, Patient Education as Topic/methods, Social Media/standards, Male