Results 1 - 20 of 29
1.
Article in English | MEDLINE | ID: mdl-38649529

ABSTRACT

INTRODUCTION: Research in various areas indicates that expert judgment can be highly inconsistent. However, expert judgment is indispensable in many contexts. In medical education, experts often function as examiners in rater-based assessments, where disagreement between examiners can have far-reaching consequences. The literature suggests that inconsistencies in ratings depend on the level of performance a to-be-evaluated candidate shows. This possibility has not yet been addressed deliberately and with appropriate statistical methods. Adopting the theoretical lens of ecological rationality, we evaluate whether easily implementable strategies can enhance decision making in real-world assessment contexts. METHODS: We address two objectives. First, we investigate the dependence of rater consistency on performance levels. We recorded videos of mock exams, had examiners (N=10) evaluate four students' performances, and compared inconsistencies in performance ratings between examiner pairs using a bootstrapping procedure. Our second objective is to provide an approach that aids decision making by implementing simple heuristics. RESULTS: We found that discrepancies were largely a function of the level of performance the candidates showed. Lower performances were rated more inconsistently than excellent performances. Furthermore, our analyses indicated that the use of simple heuristics might improve decisions in examiner pairs. DISCUSSION: Inconsistencies in performance judgments continue to be a matter of concern, and we provide empirical evidence that they are related to candidate performance. We discuss implications for research and the advantages of adopting the perspective of ecological rationality. We point to directions both for further research and for the development of assessment practices.
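The pairwise-disagreement comparison via bootstrapping described in this abstract can be sketched in a few lines. This is a minimal illustration with invented toy ratings; the 1-10 scale, the data, and the mean-absolute-difference disagreement measure are assumptions, not details taken from the study:

```python
import random
import statistics

def pair_disagreement(ratings_a, ratings_b):
    """Mean absolute difference between two examiners' ratings of the same performances."""
    return statistics.mean(abs(a - b) for a, b in zip(ratings_a, ratings_b))

def bootstrap_ci(ratings_a, ratings_b, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for one examiner pair's disagreement."""
    rng = random.Random(seed)
    n = len(ratings_a)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        stats.append(pair_disagreement([ratings_a[i] for i in idx],
                                       [ratings_b[i] for i in idx]))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy data: two examiners rating the same weak and strong performances (1-10 scale)
weak   = ([2, 4, 3, 5, 2, 4], [5, 2, 5, 2, 4, 1])   # weak candidates: noisy ratings
strong = ([9, 9, 8, 9, 9, 8], [9, 8, 9, 9, 8, 9])   # strong candidates: close ratings
print(pair_disagreement(*weak), pair_disagreement(*strong))
print(bootstrap_ci(*weak))
```

Comparing bootstrap intervals for pairs rating weak versus strong candidates is one simple way to see whether disagreement depends on performance level.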

2.
BMC Med Educ ; 24(1): 1016, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285419

ABSTRACT

BACKGROUND: The ability of experts' item difficulty ratings to predict test-takers' actual performance is an important aspect of licensure examinations. Expert judgment serves as a primary source of information for decisions that determine the pass rate of test takers, and the nature of the raters involved in predicting item difficulty is central to setting credible standards. Therefore, this study aimed to assess and compare raters' predicted and actual difficulty of Multiple-Choice Questions on the undergraduate medicine licensure examination (UGMLE) in Ethiopia. METHOD: 815 examinees' responses to 200 Multiple-Choice Questions (MCQs) were used in this study. The study also included item difficulty ratings from seven physicians who participated in the standard setting of the UGMLE. Analyses were then conducted to understand variation in experts' ratings when predicting examinees' actual difficulty levels. Descriptive statistics were used to profile the mean rated and actual difficulty values for the MCQs, and ANOVA was used to compare mean differences between raters' predictions of item difficulty. Additionally, regression analysis was used to understand interrater variation in item difficulty predictions compared to the actual difficulty, and to compute the proportion of variance in actual difficulty explained by rater predictions. RESULTS: The mean difference between raters' predictions and examinees' actual performance was inconsistent across exam domains. The study revealed a statistically significant strong positive correlation between actual and predicted item difficulty in exam domains eight and eleven, but a non-significant, very weak positive correlation in exam domains seven and twelve. The multiple comparison analysis showed significant differences in mean item difficulty ratings between raters.
In the regression analysis, experts' item difficulty ratings explained 33% of the variance in the actual difficulty level of the UGMLE. The regression model also showed a moderate positive correlation (R = 0.57) that was statistically significant, F(6, 193) = 15.58, P = 0.001. CONCLUSION: This study demonstrated the complexity of assessing the difficulty level of MCQs on the UGMLE and emphasized the benefits of using experts' ratings in advance. To ensure the exam maintains reliable and valid scores, raters' accuracy on the UGMLE must be improved. To achieve this, techniques that align with evolving assessment methodologies must be developed.
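As a rough illustration of the reported analysis: actual item difficulty can be taken as the proportion of examinees answering correctly, and the variance explained by rater predictions as the squared correlation. All numbers below are invented, and the study's actual model was a multiple regression over seven raters rather than this simple one-predictor version:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def actual_difficulty(item_responses):
    """Classical item difficulty: proportion of examinees answering correctly (1/0)."""
    return sum(item_responses) / len(item_responses)

# Toy example: mean rater prediction vs. actual p-values for five items
predicted = [0.30, 0.45, 0.55, 0.70, 0.85]
actual    = [0.35, 0.40, 0.60, 0.65, 0.90]
r = pearson_r(predicted, actual)
print(round(r, 2), round(r * r, 2))  # correlation R and variance explained R^2
```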


Subjects
Undergraduate Medical Education , Educational Measurement , Medical Licensure , Humans , Ethiopia , Educational Measurement/methods , Educational Measurement/standards , Undergraduate Medical Education/standards , Medical Licensure/standards , Male , Female , Clinical Competence/standards , Medical Students , Adult
3.
BMC Med Educ ; 23(1): 976, 2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38115062

ABSTRACT

The COVID-19 pandemic had a disruptive effect on higher education. A critical question is whether these changes affected students' learning outcomes. Knowledge gaps have consequences for future learning and may, in health professionals' education, also pose a threat to patient safety. Current research has shortcomings and does not allow for clear-cut interpretation. Our context is instruction in human physiology in an undergraduate medical program, with performance assessed by high-stakes end-of-term examinations. The sequence of measures imposed to slow the COVID-19 pandemic created a natural experiment, allowing for comparisons of performance during in-person versus remote instruction. In a two-factorial design, mode of instruction (in-person vs. remote) and mode of assessment (in-person vs. remote) were analyzed using both basic (non-parametric statistics, t-tests) and advanced statistical methods (linear mixed-effects models; resampling techniques). Test results from a total of N = 1095 second-year medical students were included in the study. We did not find empirical evidence of knowledge gaps; rather, students received comparable or higher scores during remote teaching. We interpret these findings as empirical evidence that both students and teachers adapted to the pandemic disruption in a way that did not lead to knowledge gaps. We conclude that highly motivated students had no reduction in academic achievement. Moreover, we have developed an accessible digital exam system for secure, fair, and effective assessments that is sufficiently defensible for making pass/fail decisions.


Subjects
Academic Success , COVID-19 , Medical Students , Humans , Pandemics , COVID-19/epidemiology , Educational Status
4.
Med Educ ; 55(10): 1172-1182, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34291481

ABSTRACT

INTRODUCTION: Wrong and missed diagnoses contribute substantially to medical error. Can a prompt to generate alternative diagnoses (prompt) or a differential diagnosis checklist (DDXC) increase diagnostic accuracy? How do these interventions affect the diagnostic process and self-monitoring? METHODS: Advanced medical students (N = 90) were randomly assigned to one of four conditions to complete six computer-based patient cases: group 1 (prompt) was instructed to write down all diagnoses they considered while acquiring diagnostic test results and to finally rank them. Groups 2 and 3 received the same instruction plus a list of 17 differential diagnoses for the chief complaint of the patient. For half of the cases, the DDXC contained the correct diagnosis (DDXC+), and for the other half, it did not (DDXC-; counterbalanced). Group 4 (control) was only instructed to indicate their final diagnosis. Mixed-effects models were used to analyse results. RESULTS: Students using a DDXC that contained the correct diagnosis had better diagnostic accuracy, mean (standard deviation), 0.75 (0.44), compared to controls without a checklist, 0.49 (0.50), P < 0.001, but those using a DDXC that did not contain the correct diagnosis did slightly worse, 0.43 (0.50), P = 0.602. The number and relevance of diagnostic tests acquired were not affected by condition, nor was self-monitoring. However, participants spent more time on a case in the DDXC-, 4:20 min (2:36), P ≤ 0.001, and DDXC+ condition, 3:52 min (2:09), than in the control condition, 2:59 min (1:44), P ≤ 0.001. DISCUSSION: Being provided a list of possible diagnoses improves diagnostic accuracy compared with a prompt to create a differential diagnosis list, if the provided list contains the correct diagnosis. However, being provided a diagnosis list without the correct diagnosis did not improve and might have slightly reduced diagnostic accuracy. Interventions neither affected information gathering nor self-monitoring.


Subjects
Checklist , Medical Students , Differential Diagnosis , Diagnostic Errors , Humans
5.
Adv Health Sci Educ Theory Pract ; 26(4): 1339-1354, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33977409

ABSTRACT

The use of response formats in assessments of medical knowledge and clinical reasoning continues to be the focus of both research and debate. In this article, we report on an experimental study addressing the extent to which list-type selected-response formats and short-essay-type constructed-response formats are related to differences in how test takers approach clinical reasoning tasks. The design of this study was informed by a framework developed within cognitive psychology which stresses the importance of the interplay between two components of reasoning, self-monitoring and response inhibition, while solving a task or case. The results presented support the argument that different response formats are related to different processing behavior. Importantly, the pattern of how different factors are related to a correct response in both situations seems to be well in line with contemporary accounts of reasoning. Consequently, we argue that when designing assessments of clinical reasoning, it is crucial to tap into the different facets of this complex and important medical process.


Subjects
Clinical Reasoning , Problem Solving , Humans
6.
Med Teach ; 42(12): 1374-1384, 2020 12.
Article in English | MEDLINE | ID: mdl-32857621

ABSTRACT

BACKGROUND: In high-stakes assessments in medical education, the decision to let a particular participant pass or fail has far-reaching consequences. Reliability coefficients are usually used to support the trustworthiness of assessments and their accompanying decisions. However, coefficients such as Cronbach's Alpha do not indicate the precision with which an individual's performance was measured. OBJECTIVE: Since estimates of precision need to be aligned with the level on which inferences are made, we illustrate how to adequately report the precision of pass-fail decisions for single individuals. METHOD: We show how to calculate the precision of individual pass-fail decisions using Item Response Theory and illustrate that approach using a real exam. In total, 70 students sat this exam (110 items). Reliability coefficients were above recommendations for high-stakes tests (> 0.80). At the same time, pass-fail decisions around the cut score were expected to show low accuracy. CONCLUSIONS: Our results illustrate that the most important decisions, i.e. those based on scores near the pass-fail cut score, are often ambiguous, and that reporting a traditional reliability coefficient is not an adequate description of the uncertainty encountered at the individual level.


Subjects
Medical Education , Educational Measurement , Clinical Competence , Humans , Reproducibility of Results , Students
7.
Med Teach ; 42(10): 1154-1162, 2020 10.
Article in English | MEDLINE | ID: mdl-32767902

ABSTRACT

BACKGROUND: The widespread use of mobile devices among students favors the use of mobile learning scenarios at universities. In this study, we explore whether a time- and location-independent variant of a formative progress test has an impact on students' acceptance, its validity and reliability, and whether response processes differ between the two exam conditions. METHODS: Students were randomly assigned to two groups, of which one took the test free of location and time constraints, while the other took the test at the local testing center under usual examination conditions. Besides the generated test data, such as test score, time-on-test, and semester status, students also evaluated the settings. RESULTS: While there was no significant effect on test scores between the two groups, students in the mobile group spent more time on the test and were more likely to use the help of books or online resources. The results of the evaluation show that a mobile version of the formative progress test increases acceptability among students. CONCLUSIONS: The results suggest that lifting local and temporal restrictions enhances the acceptance of, and motivation to participate in, formative tests. The mobile version nonetheless does not have an impact on students' performance.


Subjects
Undergraduate Medical Education , Educational Measurement , Curriculum , Humans , Learning , Reproducibility of Results
8.
Emerg Med J ; 37(9): 546-551, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32647026

ABSTRACT

OBJECTIVES: A major cause for concern about increasing ED visits is that ED care is expensive. Recent research suggests that ED resource consumption is affected by patients' health status, varies between physicians, and is context dependent. The aim of this study is to determine the relative contribution of characteristics of the patient, the physician, and the context to ED resource consumption. METHODS: Data on patients, physicians and the context were obtained in a prospective observational cohort study of patients admitted to an internal medicine ward through the ED of the University Hospital Bern, Switzerland, between August and December 2015. Diagnostic resource consumption in the ED was modelled through a multilevel mixed-effects linear regression. RESULTS: In total, 473 eligible patients seen by one of 38 physicians were included in the study. Diagnostic resource consumption heavily depends on physicians' ratings of case difficulty (p<0.001, z-standardised regression coefficient: 147.5, 95% CI 87.3 to 207.7) and, less surprisingly, on patients' acuity (p<0.001, 126.0, 95% CI 65.5 to 186.6). Neither the physician per se, nor their experience, the patients' chronic health status, or the context seems to have a measurable impact (all p>0.05). CONCLUSIONS: Diagnostic resource consumption in the ED is heavily affected by physicians' situational confidence. Whether we should aim at altering physician confidence ultimately depends on its calibration with accuracy.


Subjects
Diagnostic Imaging/economics , Routine Diagnostic Tests/economics , Hospital Emergency Service/economics , Physicians' Practice Patterns/economics , Resource Allocation/economics , Humans , Internal Medicine , Prospective Studies , Severity of Illness Index , Surveys and Questionnaires , Switzerland
9.
Med Educ ; 53(7): 735-744, 2019 07.
Article in English | MEDLINE | ID: mdl-30761597

ABSTRACT

CONTEXT: The ability to self-monitor one's performance in clinical settings is a critical determinant of safe and effective practice. Various studies have shown this form of self-regulation to be more trustworthy than aggregate judgements (i.e. self-assessments) of one's capacity in a given domain. However, little is known regarding what cues inform learners' self-monitoring, which limits an informed exploration of interventions that might facilitate improvements in self-monitoring capacity. The purpose of this study is to understand the influence of characteristics of the individual (e.g. ability) and characteristics of the problem (e.g. case difficulty) on the accuracy of self-monitoring by medical students. METHODS: In a cross-sectional study, 283 medical students across 5 years of study completed a computer-based clinical reasoning exercise. Confidence ratings were collected after completion of each of six cases, and the accuracy of self-monitoring was considered to be a function of confidence when the eventual answer was correct relative to when it was incorrect. The magnitude of that difference was then explored as a function of year of seniority, gender, case difficulty and overall aptitude. RESULTS: Students demonstrated accurate self-monitoring by virtue of giving higher confidence ratings (57.3%) and taking less time to work through cases (25.6 seconds) when their answers were correct relative to when they were wrong (41.8% and 52.0 seconds, respectively; p < 0.001 and d > 0.5 in both instances). Self-monitoring indices were related to student seniority and case difficulty, but not to overall ability or student gender. CONCLUSIONS: This study suggests that the accuracy of self-monitoring is context specific, being heavily influenced by the struggles students experience with a particular case rather than reflecting a generic ability to know when one is right or wrong. That said, the apparent capacity to self-monitor increases developmentally because increasing experience provides a greater likelihood of success with presented problems.


Subjects
Aptitude , Clinical Competence , Cues (Psychology) , Self-Assessment (Psychology) , Adult , Cross-Sectional Studies , Female , Humans , Male , Sex Factors , Simulation Training , Medical Students/psychology , Young Adult
10.
Adv Health Sci Educ Theory Pract ; 23(1): 217-232, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28303398

ABSTRACT

Despite the frequent use of state-of-the-art psychometric models in the field of medical education, there is a growing body of literature that questions their usefulness in the assessment of medical competence. Essentially, a number of authors have raised doubts about the appropriateness of psychometric models as a guiding framework to secure and refine current approaches to the assessment of medical competence. In addition, an intriguing phenomenon known as case specificity is central to the controversy over the use of psychometric models for the assessment of medical competence. Broadly speaking, case specificity is the finding that performances are unstable across clinical cases, tasks, or problems. As stability of performances is, generally speaking, a central assumption in psychometric models, case specificity may limit their applicability, and it has probably fueled critiques of the field of psychometrics with a substantial amount of potential empirical evidence. This article aims to explain the fundamental ideas employed in psychometric theory and how they might be problematic in the context of assessing medical competence. We further aim to show why and how some critiques do not hold for the field of psychometrics as a whole, but rather only for specific psychometric approaches. Hence, we highlight approaches that, from our perspective, seem to offer promising possibilities when applied in the assessment of medical competence. In conclusion, we advocate a more differentiated view of psychometric models and their usage.


Subjects
Academic Performance/standards , Clinical Competence/standards , Medical Education/standards , Educational Measurement/standards , Psychometrics/standards , Adult , Female , Humans , Male , Young Adult
11.
Med Teach ; 40(11): 1123-1129, 2018 11.
Article in English | MEDLINE | ID: mdl-29950124

ABSTRACT

Background: Progress testing is a longitudinal assessment approach that aims at tracking students' development of knowledge, and it is used in many medical schools internationally. Although progress tests are longitudinal in nature, and their focus on developmental aspects is a key advantage, individual students' learning trajectories themselves have played, to date, only a minor role in the use of the information obtained through progress testing. Methods: We investigate the extent to which between-person differences in initial levels of performance and within-person rates of growth can be regarded as distinct components of students' development, and we analyze the extent to which these two components are related to performance on national licensing examinations using a latent growth curve model. Results: Both higher initial levels of performance and steeper growth are positively related to long-term outcomes as measured by performance on national licensing examinations. We interpret these findings as evidence of progress tests' suitability for monitoring students' growth of knowledge across the course of medical training. Conclusions: This study indicates that individual development as captured by formative progress tests is related to performance in high-stakes assessments. Future studies may put more focus on the use of between-person differences in growth of knowledge.
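A full latent growth curve model requires dedicated SEM software; as a simplified, hypothetical stand-in, each student's initial level (intercept) and rate of growth (slope) can be estimated by least squares over progress-test occasions. The data below are toy values, not the study's:

```python
import statistics

def growth_components(scores):
    """Least-squares intercept and slope over equally spaced test occasions 0..k-1."""
    t = list(range(len(scores)))
    mt, ms = statistics.mean(t), statistics.mean(scores)
    slope = (sum((a - mt) * (b - ms) for a, b in zip(t, scores))
             / sum((a - mt) ** 2 for a in t))
    intercept = ms - slope * mt  # estimated level at the first occasion
    return intercept, slope

# Toy trajectories over four progress tests (percent-correct scores)
students = {
    "s1": [30, 38, 47, 55],   # low start, steep growth
    "s2": [50, 52, 55, 57],   # high start, shallow growth
}
for sid, scores in students.items():
    intercept, slope = growth_components(scores)
    print(sid, round(intercept, 1), round(slope, 2))
```

These per-student intercepts and slopes are the two components whose relation to a long-term outcome (e.g. a licensing-exam score) one would then examine.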


Subjects
Educational Measurement/methods , Licensure/statistics & numerical data , Medical Students/statistics & numerical data , Germany , Humans , Statistical Models
12.
Med Teach ; 43(5): 608-609, 2021 05.
Article in English | MEDLINE | ID: mdl-33119998

Subjects
Students , Humans
13.
Adv Health Sci Educ Theory Pract ; 20(4): 1033-52, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25616720

ABSTRACT

In medical education, the effect of the educational environment on student achievement has primarily been investigated through comparisons between traditional and problem-based learning (PBL) curricula. As many of these studies have reached no clear conclusion on the superiority of the PBL approach, the effect of curricular reform on student performance remains an open issue. We employed a theoretical framework that integrates antecedents of student achievement from various psychosocial domains to examine how students interact with their curricular environment. In a longitudinal study with N = 1,646 participants, we assessed students in a traditional and a PBL-centered curriculum. The measures administered included students' perception of the learning environment, self-efficacy beliefs, positive study-related affect, social support, indicators of self-regulated learning, and academic achievement assessed through progress tests. We compared the relations between these characteristics in the two curricular environments. The results are two-fold. First, substantial relations among various psychosocial domains and their associations with achievement were identified. Second, our analyses indicated that there are no substantial differences between traditional and PBL-based curricula concerning the relational structure of psychosocial variables and achievement. Drawing definite conclusions on the role of curricular-level interventions in the development of students' academic achievement is constrained by the quasi-experimental design as well as the selection of variables included. However, in the specific context described here, our results may still support the view of student activity as the key ingredient in the acquisition of achievement and performance.


Subjects
Achievement , Curriculum , Undergraduate Medical Education/methods , Problem-Based Learning , Social Environment , Educational Measurement , Female , Germany , Humans , Longitudinal Studies , Male , Educational Models , Psychometrics , Self Efficacy , Social Support
14.
Teach Learn Med ; 27(1): 57-62, 2015.
Article in English | MEDLINE | ID: mdl-25584472

ABSTRACT

UNLABELLED: CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. It is the first study in this context that allows controlling for students' prior performance. BACKGROUND: Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises whether computer-based tests influence students' test performance. APPROACH: A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th year of the North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room and seating arrangements, as well as the order of questions and answers, were identical in both groups. RESULTS: The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. Both groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior: low performers using the computer version guessed significantly more than low-performing students using the paper-pencil version. CONCLUSIONS: Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The longer processing time for the paper-pencil version might be due to the time needed to write the answer down and to check that it was transferred correctly. It is still not known why students using the computer version (particularly low-performing students) guess at a higher rate. Further studies are necessary to understand this finding.
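The randomized matched-pair allocation described under APPROACH can be sketched as follows. The student IDs, scores, and pairing-by-adjacent-ranks scheme are illustrative assumptions, not the study's exact procedure:

```python
import random

def matched_pair_allocation(prior_scores, seed=7):
    """Sort students by prior test score, then randomly split each adjacent
    pair between the paper and computer conditions."""
    rng = random.Random(seed)
    ranked = sorted(prior_scores, key=prior_scores.get, reverse=True)
    paper, computer = [], []
    for i in range(0, len(ranked) - 1, 2):
        pair = [ranked[i], ranked[i + 1]]  # two students with adjacent ranks
        rng.shuffle(pair)                  # random assignment within the pair
        paper.append(pair[0])
        computer.append(pair[1])
    return paper, computer

scores = {"a": 91, "b": 88, "c": 84, "d": 80, "e": 77, "f": 70}
paper, computer = matched_pair_allocation(scores)
print(paper, computer)
```

Because each pair contributes one student to each condition, the two groups end up balanced on prior performance by construction.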


Subjects
Computers , Educational Measurement/methods , Paper , Medical Students/psychology , Adult , Female , Germany , Humans , Male
15.
Adv Simul (Lond) ; 9(1): 38, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39261889

ABSTRACT

BACKGROUND: Inadequate collaboration in healthcare can lead to medical errors, highlighting the importance of interdisciplinary teamwork training. Virtual reality (VR) simulation-based training presents a promising, cost-effective approach. This study evaluates the effectiveness of the Team Emergency Assessment Measure (TEAM) for assessing healthcare student teams in VR environments to improve training methodologies. METHODS: Forty-two medical and nursing students participated in a VR-based neurological emergency scenario as part of an interprofessional team training program. Their performances were assessed using a modified TEAM tool by two trained coders. Reliability, internal consistency, and concurrent validity of the tool were evaluated using intraclass correlation coefficients (ICC) and Cronbach's alpha. RESULTS: Rater agreement on TEAM's leadership, teamwork, and task management domains was high, with ICC values between 0.75 and 0.90. Leadership demonstrated strong internal consistency (Cronbach's alpha = 0.90), while teamwork and task management showed moderate to acceptable consistency (alpha = 0.78 and 0.72, respectively). Overall, the TEAM tool exhibited high internal consistency (alpha = 0.89) and strong concurrent validity with significant correlations to global performance ratings. CONCLUSION: The TEAM tool proved to be a reliable and valid instrument for evaluating team dynamics in VR-based training scenarios. This study highlights VR's potential in enhancing medical education, especially in remote or distanced learning contexts. It demonstrates a dependable approach for team performance assessment, adding value to VR-based medical training. These findings pave the way for more effective, accessible interdisciplinary team assessments, contributing significantly to the advancement of medical education.
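Cronbach's alpha, reported above for internal consistency, is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with invented ratings (four items scored for five teams; not the study's data):

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding scores for the same rated units.
    Alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores)
    totals = [sum(unit) for unit in zip(*item_scores)]  # total score per rated unit
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Toy ratings: four leadership items scored for five teams
items = [
    [3, 4, 4, 2, 5],
    [3, 4, 5, 2, 4],
    [2, 4, 4, 3, 5],
    [3, 5, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))
```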

16.
Allergy Asthma Clin Immunol ; 20(1): 35, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822425

ABSTRACT

BACKGROUND: Anaphylaxis is the most severe form of acute systemic, potentially life-threatening reactions triggered by mast cells and basophils. Recent studies show a worldwide incidence between 50 and 112 occurrences per 100,000 person-years. The most commonly identified triggers are food, medications, and insect venoms. We aimed to analyze triggers and clinical symptoms of patients presenting to a Swiss university emergency department for adults. METHODS: Six-year retrospective analysis (01/2013 to 12/2018) of all patients (> 16 years of age) admitted with moderate or severe anaphylaxis (classification of Ring and Messmer ≥ 2) to the emergency department. Patient and clinical data were extracted from the electronic medical database of the emergency department. RESULTS: Of the 531 included patients, 53.3% were female; the median age was 38 [IQR 26-51] years. The most common suspected triggers were medications (31.8%), food (25.6%), and insect stings (17.1%). Organ manifestations varied among the different suspected triggers: for medications, 90.5% of the patients had skin symptoms, followed by respiratory (62.7%), cardiovascular (44.4%) and gastrointestinal symptoms (33.7%); for food, gastrointestinal symptoms (39.7%) were more frequent than cardiovascular symptoms (36.8%); and for insect stings, cardiovascular symptoms were apparent in 63.8% of the cases. CONCLUSIONS: The average annual incidence of moderate to severe anaphylaxis during the 6-year period in subjects > 16 years of age was 10.67 per 100,000 inhabitants. Medications (antibiotics, NSAIDs and radiocontrast agents) were the most frequently suspected triggers. Anaphylaxis due to insect stings was more frequent than in other studies. Regarding clinical symptoms, gastrointestinal symptoms need to be considered more carefully, especially so that initial treatment with epinephrine is not delayed.

17.
Med Educ ; 47(12): 1223-35, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24206156

ABSTRACT

CONTEXT: Basic science teaching in undergraduate medical education faces several challenges. One prominent discussion is focused on the relevance of biomedical knowledge to the development and integration of clinical knowledge. Although the value of basic science knowledge is generally emphasised, theoretical positions on the relative role of this knowledge and the optimal approach to its instruction differ. The present paper addresses whether and to what extent biomedical knowledge is related to the development of clinical knowledge. METHODS: We analysed repeated-measures data for performances on basic science and clinical knowledge assessments. A sample of 598 medical students on a traditional curriculum participated in the study. The entire study covered a developmental phase of 2 years of medical education. Structural equation modelling was used to analyse the temporal relationship between biomedical knowledge and the acquisition of clinical knowledge. RESULTS: At the point at which formal basic science education ends and clinical training begins, students show the highest levels of biomedical knowledge. The present data suggest a decline in basic science knowledge that is complemented by a growth in clinical knowledge. Statistical comparison of several structural equation models revealed that the model to best explain the data specified unidirectional relationships between earlier states of biomedical knowledge and subsequent changes in clinical knowledge. However, the parameter estimates indicate that this association is negative. DISCUSSION: Our analysis suggests a negative relationship between earlier levels of basic science knowledge and subsequent gains in clinical knowledge. We discuss the limitations of the present study, such as the educational context in which it was conducted and its non-experimental nature. 
Although the present results do not necessarily contradict the relevance of basic sciences, we speculate on mechanisms that might be related to our findings. We conclude that our results hint at possibly critical issues in basic science education that have been rarely addressed thus far.


Subjects
Biological Science Disciplines/education, Clinical Competence, Education, Medical, Undergraduate/methods, Health Knowledge, Attitudes, Practice, Learning, Students, Medical/psychology, Berlin, Curriculum, Decision Making, Educational Measurement, Germany, Humans, Teaching/methods
18.
J Speech Lang Hear Res ; 66(10): 3988-4008, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37708514

ABSTRACT

PURPOSE: The purpose of this study was to examine quality of life (QOL) and its relation to language skills in children with developmental language disorder (DLD). This was examined by comparing QOL to a control group of children with typical development (TD), as well as children with cochlear implants (CIs), who potentially struggle with language, although for a different reason than children with DLD. METHOD: Two groups of children, a group with TD (n = 29) and a group of children with CIs (n = 29), were matched to the DLD group (n = 29) on chronological age, gender, nonverbal IQ, and parental educational level through a propensity matching procedure. A third group consisting of children with CIs was also matched to the DLD group but additionally matched on language abilities. QOL scores were compared across groups, and the association between language skills and QOL was examined in the DLD group. RESULT: The DLD group was reported by parents to have statistically significantly poorer QOL scores than peers with TD or CIs. When controlling for language skills, either statistically or through an additional CI group matched on language abilities, there were no statistically significant differences in QOL scores across groups. In the DLD group, language skills explained 16% of the variation in QOL. CONCLUSION: DLD is associated with the children's overall QOL, and the degree of reduced QOL relates to the severity of the language impairment.
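The "language skills explained 16% of the variation in QOL" finding above is a variance-explained (R²) statistic; for a single predictor, R² is simply the squared Pearson correlation between predictor and outcome. A minimal sketch with simulated, purely illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 29  # DLD group size in the study

# Hypothetical standardized scores (illustrative only): language
# skill modestly predicting parent-rated quality of life.
language = rng.normal(0, 1, n)
qol = 0.4 * language + rng.normal(0, 1, n)

# With one predictor, "variance explained" equals the squared
# Pearson correlation between predictor and outcome.
r = np.corrcoef(language, qol)[0, 1]
r_squared = r ** 2
print(f"R^2 = {r_squared:.2f}")  # e.g. ~0.16 would mean 16% explained
```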

19.
Front Psychol ; 14: 1232628, 2023.
Article in English | MEDLINE | ID: mdl-37941756

ABSTRACT

Introduction: Effective teamwork plays a critical role in achieving high-performance outcomes in healthcare. Consequently, conducting a comprehensive assessment of team performance is essential for providing meaningful feedback during team training and for enabling comparisons in scientific studies. However, traditional methods such as self-reports or behavior observations have limitations, including susceptibility to bias and being resource-intensive. To overcome these limitations and gain a more comprehensive understanding of team processes and performance, objective measures, such as physiological parameters, can be valuable. These objective measures can complement traditional methods and provide a more holistic view of team performance. The aim of this study was to explore the potential of objective measures for evaluating team performance for research and training purposes. To this end, experts in the fields of research and medical simulation training were interviewed to gather their opinions, ideas, and concerns regarding this novel approach. Methods: A total of 34 medical and research experts participated in this exploratory qualitative study, engaging in semi-structured interviews. During the interviews, experts were asked for (a) their opinion on measuring team performance with objective measures, (b) their ideas concerning potential objective measures suitable for measuring the team performance of healthcare teams, and (c) their concerns regarding the use of objective measures for evaluating team performance. During data analysis, responses were categorized by question. Results: The findings from the 34 interviews revealed a predominantly positive reception of the idea of utilizing objective measures for evaluating team performance. However, the experts reported limited experience in actively incorporating objective measures into their training and research. Nevertheless, they identified various potential objective measures, including acoustic, visual, physiological, and endocrinological measures as well as a time layer. Concerns were raised regarding feasibility, complexity, cost, and privacy issues associated with the use of objective measures. Discussion: The study highlights the opportunities and challenges associated with employing objective measures to assess healthcare team performance. It particularly emphasizes the concerns expressed by medical simulation experts and team researchers, providing valuable insights for developers, trainers, researchers, and healthcare professionals involved in the design, planning, or utilization of objective measures in team training or research.

20.
Diagnosis (Berl) ; 10(4): 398-405, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37480571

ABSTRACT

OBJECTIVES: Existing computerized diagnostic decision support tools (CDDS) accurately return possible differential diagnoses (DDx) based on the clinical information provided. The German versions of the CDDS tools for clinicians (Isabel Pro) and patients (Isabel Symptom Checker) from ISABEL Healthcare have not yet been validated. METHODS: We entered the clinical features of 50 patient vignettes taken from an emergency medicine textbook and of 50 real cases with a confirmed diagnosis derived from the electronic health record (EHR) of a large academic Swiss emergency room into the German versions of Isabel Pro and Isabel Symptom Checker. We analysed the proportion of DDx lists that included the correct diagnosis. RESULTS: Isabel Pro and Symptom Checker provided the correct diagnosis in 82 and 71% of the cases, respectively. Overall, when using Isabel Pro, the correct diagnosis was ranked within the top 20, 10, and 3 of the provided DDx in 71, 61, and 37% of the cases, respectively. In general, accuracy was higher with vignettes than with ED cases: the correct diagnosis was listed more often (non-significant) and was ranked significantly more often within the top 20, 10, and 3. On average, 38 ± 4.5 DDx were provided by Isabel Pro and Symptom Checker. CONCLUSIONS: The German versions of Isabel achieved somewhat lower accuracy than previous studies of the English version. Accuracy decreases substantially when the position in the suggested DDx list is taken into account. Whether Isabel Pro is accurate enough to improve diagnostic quality in routine clinical ED practice needs further investigation.
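The ranked results above (top 20, top 10, top 3) are top-k accuracies: a case counts as a hit when the confirmed diagnosis appears within the first k suggestions of the DDx list. A minimal sketch of this metric; the cases below are made up for illustration and are not from the study:

```python
# Each case pairs the confirmed diagnosis with the ranked DDx list
# a tool returned (all entries here are invented examples).
cases = [
    ("pulmonary embolism", ["pneumonia", "pulmonary embolism", "pleuritis"]),
    ("appendicitis", ["gastroenteritis", "ovarian torsion", "appendicitis"]),
    ("migraine", ["tension headache", "sinusitis", "cluster headache"]),
]

def top_k_accuracy(cases, k):
    """Fraction of cases whose confirmed diagnosis is in the top k suggestions."""
    hits = sum(1 for truth, ddx in cases if truth in ddx[:k])
    return hits / len(cases)

print(top_k_accuracy(cases, 3))  # 2 of the 3 confirmed diagnoses rank in the top 3
```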


Subjects
Dichlorodiphenyl Dichloroethylene, Research Design, Humans, Diagnosis, Differential, Electronic Health Records, Language