Results 1 - 5 of 5
1.
Eur J Investig Health Psychol Educ ; 14(3): 657-668, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38534904

ABSTRACT

(1) Background: As the field of artificial intelligence (AI) evolves, tools like ChatGPT are increasingly integrated into various domains of medicine, including medical education and research. Given the critical nature of medicine, it is of paramount importance that AI tools offer a high degree of reliability in the information they provide. (2) Methods: A total of n = 450 medical examination questions were manually entered three times each into both ChatGPT 3.5 and ChatGPT 4. The responses were collected, and their accuracy and consistency were statistically analyzed across the series of entries. (3) Results: ChatGPT 4 displayed a statistically significantly improved accuracy of 85.7%, compared to 57.7% for ChatGPT 3.5 (p < 0.001). Furthermore, ChatGPT 4 was more consistent, answering 77.8% of questions correctly across all rounds, a significant increase from the 44.9% observed for ChatGPT 3.5 (p < 0.001). (4) Conclusions: The findings underscore the increased accuracy and dependability of ChatGPT 4 in the context of medical education and potential clinical decision making. Nonetheless, the research emphasizes the indispensable nature of human-delivered healthcare and the vital role of continuous assessment in leveraging AI in medicine.
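The accuracy comparison reported above (85.7% vs. 57.7%, p < 0.001) is the kind of result a pooled two-proportion z-test produces. A minimal stdlib sketch; the counts below are illustrative reconstructions from the reported percentages on n = 450 questions, not the study's raw data:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference of two binomial proportions,
    using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: roughly 85.7% vs. 57.7% of 450 questions
z, p = two_proportion_z(386, 450, 260, 450)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With a difference this large on samples this size, the test statistic is far into the tail, consistent with the reported p < 0.001.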

2.
Biomed Phys Eng Express ; 10(2)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38350124

ABSTRACT

The human body's vascular system is a finely regulated network: blood vessels can change in shape (i.e. constrict or dilate), their elastic response may shift, and they may undergo temporary and partial blockages due to pressure applied by skeletal muscles in their immediate vicinity. Simultaneous measurement of muscle activation and the corresponding changes in vessel diameter, in particular at anatomical regions such as the face, is challenging, and how muscle activation constricts blood vessels has been largely overlooked experimentally. Here we report on a new electronic skin technology for facial investigations to address this challenge. The technology consists of screen-printed dry carbon electrodes on a soft polyurethane substrate. Two dry electrode arrays were placed on the face: one array for bio-potential measurements to capture muscle activity and a second array for bio-impedance. For the bio-potential signals, independent component analysis (ICA) was used to differentiate different muscle activations. Four-contact bio-impedance measurements were used to extract impedance changes (related to artery volume change) as well as beats per minute (BPM). We performed concurrent bio-potential and bio-impedance measurements in the face. From the simultaneous measurements we successfully captured fluctuations in the superficial temporal artery diameter in response to facial muscle activity, which ultimately changes blood flow. The observed changes in the face following muscle activation were consistent with measurements in the forearm, though notably more intricate. Both at the arm and the face, a clear increase in the baseline impedance was recorded during muscle activation (artery narrowing), while the impedance changes signifying the pulse had a clear repetitive trend only at the forearm. These results reveal the direct connection between muscle activation and the blood vessels in their vicinity and start to unveil the complex mechanisms through which facial muscles might modulate blood flow and possibly affect human physiology.


Subject(s)
Skeletal Muscle, Wearable Electronic Devices, Humans, Electric Impedance, Electrodes, Arteries
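The bio-impedance signal model described in this abstract — a baseline impedance plus a small pulsatile component at the cardiac frequency, from which BPM is extracted — can be illustrated with a minimal synthetic sketch. All signal parameters below (baseline, amplitude, sampling rate) are invented for illustration and are not taken from the study:

```python
import math

def simulate_impedance(duration_s=10.0, fs=100, bpm=72,
                       baseline=50.0, pulse_amp=0.2, phase=0.1):
    """Synthetic tetrapolar impedance trace (ohms): a constant baseline
    plus a small pulsatile component at the cardiac frequency."""
    f_heart = bpm / 60.0
    n = int(duration_s * fs)
    return [baseline +
            pulse_amp * math.sin(2 * math.pi * f_heart * i / fs + phase)
            for i in range(n)]

def estimate_bpm(trace, fs=100):
    """Estimate beats per minute by counting upward zero crossings
    of the mean-removed (baseline-removed) signal."""
    mean = sum(trace) / len(trace)
    centered = [x - mean for x in trace]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    return crossings * 60.0 * fs / len(trace)

trace = simulate_impedance(bpm=72)
print(f"estimated BPM: {estimate_bpm(trace):.0f}")
```

In the study's terms, a shift in `baseline` corresponds to artery narrowing under muscle activation, while the small oscillation is the pulse component whose repetitive trend was clear only at the forearm.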
3.
JMIR Med Educ ; 10: e51148, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38180782

ABSTRACT

BACKGROUND: The United States Medical Licensing Examination (USMLE) has been critical in medical education since 1992, testing various aspects of a medical student's knowledge and skills through different steps, based on their training level. Artificial intelligence (AI) tools, including chatbots like ChatGPT, are emerging technologies with potential applications in medicine. However, comprehensive studies analyzing ChatGPT's performance on USMLE Step 3 in large-scale scenarios and comparing different versions of ChatGPT are limited. OBJECTIVE: This paper aimed to analyze ChatGPT's performance on USMLE Step 3 practice test questions to better elucidate the strengths and weaknesses of AI use in medical education and deduce evidence-based strategies to counteract AI cheating. METHODS: A total of 2069 USMLE Step 3 practice questions were extracted from the AMBOSS study platform. After excluding 229 image-based questions, the remaining 1840 text-based questions were categorized and entered into ChatGPT 3.5, while a subset of 229 questions was entered into ChatGPT 4. Responses were recorded, and the accuracy of ChatGPT answers as well as its performance in different test question categories and for different difficulty levels were compared between both versions. RESULTS: Overall, ChatGPT 4 demonstrated statistically significantly superior performance compared to ChatGPT 3.5, achieving an accuracy of 84.7% (194/229) versus 56.9% (1047/1840). A noteworthy correlation was observed between the length of test questions and the performance of ChatGPT 3.5 (ρ=-0.069; P=.003), which was absent in ChatGPT 4 (P=.87). Additionally, the difficulty of test questions, as categorized by AMBOSS hammer ratings, showed a statistically significant correlation with performance for both ChatGPT versions, with ρ=-0.289 for ChatGPT 3.5 and ρ=-0.344 for ChatGPT 4. ChatGPT 4 surpassed ChatGPT 3.5 at all levels of test question difficulty, except for the 2 highest difficulty tiers (4 and 5 hammers), where statistical significance was not reached. CONCLUSIONS: In this study, ChatGPT 4 demonstrated remarkable proficiency in taking the USMLE Step 3, with an accuracy rate of 84.7% (194/229), outshining ChatGPT 3.5 with an accuracy rate of 56.9% (1047/1840). Although ChatGPT 4 performed exceptionally, it encountered difficulties in questions requiring the application of theoretical concepts, particularly in cardiology and neurology. These insights are pivotal for the development of examination strategies that are resilient to AI and underline the promising role of AI in the realm of medical education and diagnostics.


Subject(s)
Cardiology, Medical Education, Medicine, Humans, Artificial Intelligence, Educational Status
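The question-length and difficulty correlations reported in this abstract (e.g. ρ=-0.069 for length, ρ=-0.289 and ρ=-0.344 for hammer difficulty) are Spearman rank correlations: a Pearson correlation computed on ranks, which handles ties and monotone-but-nonlinear relationships. A minimal stdlib implementation; the question-length and correctness data below are hypothetical, for illustration only:

```python
def ranks(values):
    """Average ranks (1-based); tied values share the mean of their rank
    positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: question length (characters) vs. correctness (0/1)
lengths = [120, 250, 380, 510, 640, 760, 890, 1020]
correct = [1, 1, 1, 0, 1, 0, 0, 0]
print(f"rho = {spearman_rho(lengths, correct):.3f}")
```

A negative rho, as in the study, indicates that accuracy tends to fall as questions get longer or harder.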
4.
Ann Biomed Eng ; 2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37553555

ABSTRACT

PURPOSE: The use of AI-powered technology, particularly OpenAI's ChatGPT, holds significant potential to reshape healthcare and medical education. Despite existing studies on the performance of ChatGPT in medical licensing examinations across different nations, a comprehensive, multinational analysis using rigorous methodology is currently lacking. Our study sought to address this gap by evaluating the performance of ChatGPT on six different national medical licensing exams and investigating the relationship between test question length and ChatGPT's accuracy. METHODS: We manually entered a total of 1,800 test questions (300 each from the US, Italian, French, Spanish, UK, and Indian medical licensing examinations) into ChatGPT and recorded the accuracy of its responses. RESULTS: We found significant variance in ChatGPT's test accuracy across different countries, with the highest accuracy seen in the Italian examination (73% correct answers) and the lowest in the French examination (22% correct answers). Interestingly, question length correlated with ChatGPT's performance in the Italian and French state examinations only. In addition, the study revealed that questions requiring multiple correct answers, as seen in the French examination, posed a greater challenge to ChatGPT. CONCLUSION: Our findings underscore the need for future research to further delineate ChatGPT's strengths and limitations in medical test-taking across additional countries and to develop guidelines to prevent AI-assisted cheating in medical examinations.
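One way to see that the cross-country variance reported above is not sampling noise is to attach 95% Wilson score intervals to each exam's accuracy. A sketch using only the percentages and the n = 300 per exam stated in the abstract (the exact correct-answer counts are back-calculated and therefore approximate):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# Approximate counts from the reported accuracies: 73% and 22% of 300
exams = {"Italian": 219, "French": 66}
for name, correct in exams.items():
    lo, hi = wilson_ci(correct, 300)
    print(f"{name}: {correct / 300:.0%} (95% CI {lo:.1%}-{hi:.1%})")
```

The two intervals are far apart, consistent with the abstract's claim of significant variance between the best- and worst-scoring examinations.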

5.
Plast Reconstr Surg Glob Open ; 11(6): e5086, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37396838

ABSTRACT

Prominent ears are the most frequently observed congenital deformity of the head and neck. Various techniques have been proposed for their aesthetic correction. Typically, surgical treatment for protruding ears involves a combination of suture, cutting, and scoring techniques. Herein, we present the clinical case of an 11-year-old child who developed bilateral keloid formations 12 months after otoplasty. Keloids and hypertrophic scars can result from extensive retroauricular skin excisions that do not allow for tension-free wound closure. In addition, skin tension and friction on immature surgical scars are common risk factors for keloid formation. To comply with school guidelines aimed at reducing the transmission of SARS-CoV-2, the patient had consistently worn FFP2 masks with ear loops positioned behind the concha. Although masks play a critical role in preventing the spread of infectious diseases, they can lead to friction in the postauricular area. In light of the presented case, it is important to examine potential cofactors that may contribute to keloid formation after otoplasty, as well as suggest a strategy to safeguard the retroauricular scar.
