1.
Adv Health Sci Educ Theory Pract ; 28(5): 1441-1465, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37097483

ABSTRACT

Automatic Item Generation (AIG) refers to the process of using cognitive models to generate test items with computer modules. It is a new but rapidly evolving research area in which cognitive and psychometric theory are combined into a digital framework. However, the item quality, usability, and validity of AIG relative to traditional item development methods need clarification. This paper takes a top-down, strong-theory approach to evaluating AIG in medical education. Two studies were conducted. In Study I, participants with different levels of clinical knowledge and item-writing experience developed medical test items both manually and through AIG, and the two item types were compared in terms of quality and usability (efficiency and learnability). In Study II, automatically generated items were included in a summative exam in the content area of surgery, and a psychometric analysis based on Item Response Theory inspected the validity and quality of the AIG items. Items generated by AIG showed good quality and evidence of validity and were adequate for testing students' knowledge. The time spent developing the content for item generation (cognitive models) and the number of items generated did not vary with the participants' item-writing experience or clinical knowledge. AIG produces numerous high-quality items through a fast, economical, and easy-to-learn process, even for item writers without experience or clinical training. Medical schools may benefit from a substantial improvement in the cost-efficiency of developing test items by using AIG. Item-writing flaws can be significantly reduced through AIG's cognitive models, yielding test items capable of accurately gauging students' knowledge.
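The psychometric analysis in Study II rests on Item Response Theory. As a minimal illustration only (not the authors' actual analysis, and with invented parameter values), the two-parameter logistic (2PL) model gives the probability of a correct response from an item's discrimination and difficulty:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT model: probability that an examinee with ability theta
    answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee of average ability (theta = 0) facing an item of average
# difficulty (b = 0) has a 50% chance of answering correctly; higher
# ability raises that probability.
p_avg = p_correct(0.0, 1.2, 0.0)   # 0.5
p_high = p_correct(2.0, 1.2, 0.0)  # > 0.5
```

Fitting such a model to exam responses is how item quality (discrimination, difficulty) can be inspected for AIG-generated items.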


Subject(s)
Education, Medical, Undergraduate; Education, Medical; Humans; Educational Measurement/methods; Education, Medical, Undergraduate/methods; Psychometrics; Students
2.
Adv Health Sci Educ Theory Pract ; 27(2): 405-425, 2022 05.
Article in English | MEDLINE | ID: mdl-35230589

ABSTRACT

BACKGROUND: Current demand for multiple-choice questions (MCQs) in medical assessment is greater than the supply, so an urgent need for new item development methods arises. Automatic Item Generation (AIG) promises to overcome this burden by generating calibrated items through computer algorithms. Despite this promising scenario, there is still no evidence to encourage a general application of AIG in medical assessment. It is therefore important to evaluate AIG regarding its feasibility, validity, and item quality. OBJECTIVE: To provide a narrative review of the feasibility, validity, and item quality of AIG in medical assessment. METHODS: Electronic databases were searched for peer-reviewed, English-language articles published between 2000 and 2021 using the terms 'Automatic Item Generation', 'Automated Item Generation', 'AIG', 'medical assessment', and 'medical education'. Reviewers screened 119 records, and 13 full texts were checked against the inclusion criteria. A validity framework was applied to the included studies to draw conclusions regarding the validity of AIG. RESULTS: A total of 10 articles were included in the review. Synthesized data suggest that AIG is a valid and feasible method capable of generating high-quality items. CONCLUSIONS: AIG can solve current problems related to item development. It reveals itself as an auspicious next-generation technique for the future of medical assessment, promising numerous quality items both quickly and economically.


Subject(s)
Research Design; Feasibility Studies; Humans
3.
BMC Psychol ; 11(1): 93, 2023 Mar 31.
Article in English | MEDLINE | ID: mdl-37004114

ABSTRACT

BACKGROUND AND OBJECTIVE: Recent developments in Europe and Portugal provide fertile ground for the rise of populism. Despite growing interest in the topic, no reliable tool to gauge Portuguese citizens' populist attitudes exists to date. The Populist Attitudes Scale (POP-AS), developed by Akkerman et al. [1], is one of the best-known instruments for measuring populist attitudes, but no version for the Portuguese population is available. This paper describes the psychometric validation of the POP-AS for the Portuguese population. METHODS: We followed the validity procedures suggested by Boateng et al. [2] to address the psychometric features of the POP-AS. A robust psychometric pipeline evaluated the reliability, construct validity, cross-national/educational validity, and internal validity of the POP-AS. RESULTS: The Portuguese version of the POP-AS exhibited sound internal consistency and demonstrated adequate validity properties: a one-factor model was obtained, revealing evidence of construct validity; invariance was ensured for education and partially ensured for country; and all items of the POP-AS showed relatively good discrimination values and contributed adequately to the total scale score, ensuring evidence of internal validity. CONCLUSION: The psychometric analysis supports the POP-AS as a valid and reliable instrument for measuring populist attitudes among Portuguese citizens. A validation framework for measurement instruments in political science is proposed. Implications of the findings are discussed.


Subject(s)
Attitude; Humans; Portugal; Psychometrics; Reproducibility of Results; Surveys and Questionnaires
4.
Med Educ Online ; 28(1): 2228550, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37347808

ABSTRACT

With advances in AI technology and chatbots becoming more intertwined with our daily lives, new pedagogical challenges are arising. While chatbots can be used across disciplines, they play a particularly significant role in medical education. We present the development process of OSCEBot®, a chatbot that trains medical students in the clinical interview. The chatbot was built with the SentenceTransformers (SBERT) framework: to enable semantic search over varied phrasings, SBERT uses Siamese and triplet networks to build an embedding for each sentence, and embeddings are then compared using cosine similarity. Three clinical cases were developed, with symptoms following the SOCRATES approach. Optimal cutoffs were determined and each case's performance metrics were calculated; questions were divided into categories based on their content. Across cases, case 3 presented the highest average confidence values, explained by the continuous improvement of the cases following feedback acquired after sessions with the students. Across categories, mean confidence values were highest for previous medical history. Since this study was conducted early in the chatbot's deployment, the results are expected to improve. More clinical scenarios must be created to broaden the options available to students.
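The matching step described above can be sketched in a few lines. This is a toy illustration, not the OSCEBot® implementation: the vectors and responses below are invented, whereas the real system compares SBERT sentence embeddings.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def best_match(query_vec, answer_bank):
    """Return the stored case response whose embedding is most similar
    to the query embedding, plus the similarity (confidence) score."""
    return max(
        ((resp, cosine_similarity(query_vec, vec)) for resp, vec in answer_bank),
        key=lambda pair: pair[1],
    )

# Hypothetical 3-dimensional embeddings standing in for SBERT vectors.
bank = [
    ("The pain started two days ago.", [0.9, 0.1, 0.0]),
    ("I take no regular medication.", [0.0, 0.2, 0.9]),
]
query = [0.8, 0.2, 0.1]  # embedding of a student question about pain onset
response, score = best_match(query, bank)
```

A confidence cutoff on `score` then decides whether the chatbot answers or asks the student to rephrase.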


Subject(s)
Education, Medical; Students, Medical; Humans; Software; Feedback
5.
BMC Psychol ; 9(1): 166, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34706783

ABSTRACT

BACKGROUND: Test anxiety is a crucial factor in academic outcomes: it may lead to poor cognitive performance, academic underachievement, and psychological distress, interfering specifically with students' ability to think and perform during tests. The main objective of this study was to explore the applicability and psychometric properties of a Portuguese version of the Reactions to Tests scale (RTT) in a sample of medical students. METHOD: A sample of 672 medical students completed the RTT. The sample was randomly split in half to allow an independent Exploratory Factor Analysis (EFA) and a test of the best-fitting model through Confirmatory Factor Analysis (CFA). CFA tested both the first-order factor structure (four subscales) and a second-order structure in which the four subscales relate to a general factor, Test Anxiety. The internal consistency of the RTT was assessed through Cronbach's alpha, composite reliability (CR), and average variance extracted (AVE) for the total scale and each of the four subscales. Convergent validity was evaluated through the correlation between the RTT and the State-Trait Anxiety Inventory (STAI-Y). To explore the comparability of measured attributes across subgroups of respondents, measurement invariance was also studied. RESULTS: Exploratory and confirmatory factor analyses showed acceptable fits for the Portuguese RTT version. The RTT was found to be reliable for measuring test anxiety in this sample, and convergent validity with both the state and trait STAI-Y subscales was shown. Moreover, multigroup analyses showed metric invariance across gender and curriculum phase. CONCLUSION: Our results suggest that the RTT is a valid and reliable instrument for measuring test anxiety among Portuguese medical students.
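Internal consistency above is assessed with Cronbach's alpha, which compares the sum of item variances to the variance of respondents' total scores. A minimal sketch of the computation (with invented toy Likert responses, not the study's data) is:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.
    item_scores[i][j] = score of respondent j on item i."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(item_scores)  # number of items
    sum_item_var = sum(variance(item) for item in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_var / variance(totals))

# Hypothetical data: four respondents answering three Likert-type items.
items = [
    [4, 3, 5, 2],
    [4, 2, 5, 3],
    [3, 3, 4, 2],
]
alpha = cronbach_alpha(items)  # 0.9 for this toy data
```

Values around 0.7 or higher are conventionally read as acceptable internal consistency.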


Subject(s)
Students, Medical; Factor Analysis, Statistical; Humans; Portugal; Psychometrics; Reproducibility of Results; Surveys and Questionnaires