Results 1 - 20 of 60
1.
BMC Med Educ ; 24(1): 842, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39107735

ABSTRACT

BACKGROUND: The integration of Health System Science (HSS) in medical education emphasizes mastery of competencies beyond mere knowledge acquisition. With the shift to online platforms during the COVID-19 pandemic, there is an increased emphasis on Technology Enhanced Assessment (TEA) methods, such as video assessments, to evaluate these competencies. This study investigates the efficacy of online video assessments in evaluating medical students' competency in HSS. METHODS: A comprehensive assessment was conducted on first-year medical students (n = 10) enrolled in a newly developed curriculum integrating HSS into the Bachelor of Medicine program in 2021. Students undertook three exams focusing on HSS competency. Their video responses were evaluated by a panel of seven expert assessors using a detailed rubric. Spearman rank correlation and the Intraclass Correlation Coefficient (ICC) were used to determine correlations and reliability among assessor scores, while a mixed-effects model was employed to assess the relationship between foundational HSS competencies (C) and presentation skills (P). RESULTS: Positive correlations were observed in inter-rater reliability, with ICC values indicating reliability ranging from poor to moderate. A positive correlation between C and P scores was identified in the mixed-effects model. The study also highlighted variations in reliability and correlation, which might be attributed to differences in content, grading criteria, and the nature of individual exams. CONCLUSION: Our findings indicate that effective presentation enhances the perceived competency of medical students, emphasizing the need for standardized assessment criteria and consistent assessor training in online environments. This study highlights the critical roles of comprehensive competency assessments and refined presentation skills in online medical education, ensuring accurate and reliable evaluations.
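
As an aside for readers new to these reliability statistics, the following minimal sketch (synthetic scores only; the 10-by-7 student-by-assessor matrix and all values are illustrative assumptions, not the study's data) shows how a pairwise Spearman rank correlation and a two-way random-effects ICC(2,1) can be computed.

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic scores: rows = students, columns = assessors (illustrative only).
rng = np.random.default_rng(0)
true_ability = rng.normal(70, 10, size=10)
scores = true_ability[:, None] + rng.normal(0, 8, size=(10, 7))

# Pairwise Spearman rank correlation between two assessors.
rho_01, _ = spearmanr(scores[:, 0], scores[:, 1])
print(f"Spearman rho (assessor 1 vs 2): {rho_01:.2f}")

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                      # targets (students)
    ms_c = ss_cols / (k - 1)                      # raters (assessors)
    ms_e = ss_err / ((n - 1) * (k - 1))           # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

print(f"ICC(2,1): {icc2_1(scores):.2f}")
```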


Subjects
COVID-19, Clinical Competence, Curriculum, Educational Measurement, Video Recording, Humans, Educational Measurement/methods, Clinical Competence/standards, Undergraduate Medical Education/standards, Medical Students, Reproducibility of Results, Distance Education, SARS-CoV-2, Male
2.
Adv Health Sci Educ Theory Pract ; 28(3): 827-845, 2023 08.
Article in English | MEDLINE | ID: mdl-36469231

ABSTRACT

Competency-based assessment is undergoing an evolution with the popularisation of programmatic assessment. Fundamental to programmatic assessment are the attributes and buy-in of the people participating in the system. Our previous research revealed unspoken, yet influential, cultural and relationship dynamics that interact with programmatic assessment to influence success. Pulling at this thread, we conducted a secondary analysis of focus groups and interviews (n = 44 supervisors) using the critical lens of Positioning Theory to explore how workplace supervisors experienced and perceived their positioning within programmatic assessment. We found that supervisors positioned themselves in two of three ways. First, supervisors universally positioned themselves as a Teacher, describing an inherent duty to educate students. Enactment of this position was dichotomous, with some supervisors ascribing a passive and disempowered position onto students while others empowered students by cultivating an egalitarian teaching relationship. Second, two mutually exclusive positions were described: either Gatekeeper or Team Member. Supervisors positioning themselves as Gatekeepers had a duty to protect the community and were vigilant to the detection of inadequate student performance. Programmatic assessment challenged this positioning by reorienting supervisor rights and duties, which diminished their perceived authority and led to frustration and resistance. In contrast, Team Members enacted a right to make a valuable contribution to programmatic assessment and felt liberated from the burden of assessment, enabling them to assent to power shifts towards students and the university. Identifying supervisor positions revealed how programmatic assessment challenged traditional structures and ideologies, impeding success, and provided insights into supporting supervisors in programmatic assessment.


Subjects
Students, Workplace, Humans, Focus Groups, Emotions, Working Conditions
3.
Int J Psychol ; 58(3): 237-246, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36720650

ABSTRACT

Objective structured clinical examinations (OSCEs) have been widely used in health care education to simultaneously assess knowledge, skills and attitudes. Due to the high cost of running an OSCE, its application in professional psychology is still limited. To address this problem, virtual standardised patient (VSP) implementations offer a cost-effective way to administer psychology OSCEs regularly. This study aimed to develop and examine the psychometric properties of the VSP version of the Intake OSCE (VSP-Intake OSCE) in measuring psychologists' psychological assessment competencies (PACs) from entry to early practice. The initial version of the VSP-Intake OSCE contains a VSP station and a follow-up written station to measure PACs when conducting an intake assessment. To administer the VSP station, we built a new VSP system that allows psychologists to interact with a VSP verbally. A sample of 36 participants, including 27 graduate students and nine psychologists, was recruited to examine the psychometric properties of the VSP-Intake OSCE. As a newly developed instrument, the VSP-Intake OSCE showed good inter-rater reliability and construct validity. We believe using VSP implementations to develop psychology OSCEs will be essential in promoting OSCE applications in professional psychology.


Subjects
Clinical Competence, Educational Measurement, Humans, Reproducibility of Results, Psychometrics/methods, Educational Measurement/methods
4.
Respiration ; 101(11): 990-1005, 2022.
Article in English | MEDLINE | ID: mdl-36088910

ABSTRACT

BACKGROUND: Competency using radiologic images for bronchoscopic navigation is presumed during subspecialty training, but no assessments objectively measure combined knowledge of radiologic interpretation and ability to maneuver a bronchoscope into peripheral airways. OBJECTIVES: The objectives of this study were (i) to determine whether the Bronchoscopy-Radiology Skills and Tasks Assessment Tool (BRadSTAT) discriminates between bronchoscopists of various levels of experience and (ii) to improve construct validity using study findings. METHODS: BRadSTAT contains 10 questions that assess chest X-ray and CT scan interpretation using multiple images per question and 2 technical skill assessments. After administration to 33 bronchoscopists (5 Beginners, 9 Intermediates, 10 Experienced, and 9 Experts), discriminative power was strengthened using differential weighting on CT-related questions, producing the BRadSTAT-CT score. Cut points for both scores were determined via cross-validation. RESULTS: Mean BRadSTAT scores for Beginner, Intermediate, Experienced, and Expert were 74 (±13 SD), 78 (±14), 86 (±9), and 88 (±8), respectively. Statistically significant differences were noted between Expert and Beginner, Expert and Intermediate, and Experienced and Beginner (all p ≤ 0.05). Mean BRadSTAT-CT scores for Beginner, Intermediate, Experienced, and Expert were 63 (±14), 74 (±15), 82 (±13), and 90 (±9), respectively, all statistically significant (p ≤ 0.03). Cut points for BRadSTAT-CT had lower sensitivity but greater specificity and accuracy than for BRadSTAT. CONCLUSION: BRadSTAT represents the first validated assessment tool measuring knowledge and skills for bronchoscopic access to peripheral airways, which discriminates between bronchoscopists of various experience levels. Refining BRadSTAT produced the BRadSTAT-CT, which had higher discriminative power. Future studies should focus on their usefulness in competency-based bronchoscopy programs.
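
To make the cut-point and sensitivity/specificity figures above concrete, here is a hypothetical sketch of choosing a total-score threshold that best separates two experience groups; the scores and group labels are invented for illustration and are not BRadSTAT data.

```python
import numpy as np

# Hypothetical total scores (0-100); 1 = experienced/expert, 0 = beginner/intermediate.
scores = np.array([63, 70, 74, 78, 80, 55, 85, 82, 90, 88, 92, 79, 86, 95])
expert = np.array([0,  0,  0,  0,  1,  0,  1,  1,  1,  1,  1,  0,  1,  1])

best = None
for cut in np.unique(scores):
    pred = (scores >= cut).astype(int)
    tp = np.sum((pred == 1) & (expert == 1))
    tn = np.sum((pred == 0) & (expert == 0))
    fp = np.sum((pred == 1) & (expert == 0))
    fn = np.sum((pred == 0) & (expert == 1))
    sens = tp / (tp + fn)            # proportion of experts above the cut
    spec = tn / (tn + fp)            # proportion of non-experts below the cut
    acc = (tp + tn) / len(scores)
    if best is None or acc > best[1]:
        best = (cut, acc, sens, spec)

cut, acc, sens, spec = best
print(f"cut point {cut}: accuracy {acc:.2f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```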


Subjects
Bronchoscopy, Radiology, Humans, Bronchoscopy/methods, Clinical Competence
5.
Teach Learn Med ; 34(2): 155-166, 2022.
Article in English | MEDLINE | ID: mdl-34238091

ABSTRACT

Phenomenon: Ensuring that future physicians are competent to practice medicine is necessary for high quality patient care and safety. The shift toward competency-based education has placed renewed emphasis on direct observation via workplace-based assessments in authentic patient care contexts. Despite this interest and multiple studies focused on improving direct observation, challenges regarding the objectivity of this assessment approach remain underexplored and unresolved. Approach: We conducted a literature review of direct observation in authentic patient contexts by systematically searching the PubMed, Embase, Web of Science, and ERIC databases. Included studies comprised original research conducted in the patient care context with authentic patients, either as a live encounter or a video recording of an actual encounter, which focused on factors affecting the direct observation of undergraduate medical education (UME) or graduate medical education (GME) trainees. Because the patient care context adds factors that contribute to the cognitive load of the learner and of the clinician-observer, we focused our question on such contexts, which are most useful in judgments about advancement to the next level of training or practice. We excluded articles or published abstracts not conducted in the patient care context (e.g., OSCEs) or those involving simulation, allied health professionals, or non-UME/GME trainees. We also excluded studies focused on end-of-rotation evaluations and in-training evaluation reports. We extracted key data from the studies and used Activity Theory as a lens to identify factors affecting these observations and the interactions between them. Activity Theory provides a framework to understand and analyze complex human activities, the systems in which people work, and the interactions or tensions between multiple associated factors. Findings: Nineteen articles were included in the analysis; 13 involved GME learners and 6 UME learners. Of the 19, six studies were set in the operating room and four in the emergency department. Using Activity Theory, we discovered that while numerous studies focus on rater and tool influences, very few study the impact of social elements: the rules that govern how the activity happens, the environment and members of the community involved in the activity, and how completion of the activity is divided up among the members of the community. Insights: Viewing direct observation via workplace-based assessment through the lens of Activity Theory may enable educators to implement curricular changes that improve direct observation for assessment. Activity Theory may allow researchers to design studies that focus on the identified underexplored interactions and influences in relation to direct observation.


Subjects
Graduate Medical Education, Undergraduate Medical Education, Competency-Based Education, Humans
6.
BMC Med Educ ; 22(1): 76, 2022 Feb 04.
Article in English | MEDLINE | ID: mdl-35114990

ABSTRACT

BACKGROUND: This study aimed to validate ASK-SEAT, a competency-based assessment scale for students majoring in clinical medicine. Students' competency growth across grade years was also examined for trends and gaps. METHODS: Questionnaires were distributed online from May through August 2018 to Year-2 to Year-6 students who majored in clinical medicine at the Shantou University Medical College (China). Cronbach's alpha values were calculated for the reliability of the scale, and exploratory factor analysis was employed for structural validity. Predictive validity was explored by correlating Year-4 students' self-assessed competency ratings with their licensing examination scores (based on Kendall's tau-b values). All students' competency development over time was examined using the Mann-Whitney U test. RESULTS: A total of 760 questionnaires meeting the inclusion criteria were analyzed. The overall Cronbach's alpha value was 0.964, and the item-total correlations were all greater than 0.520. The overall KMO measure was 0.966 and the KMO measure for each item was greater than 0.930 (P < 0.001). The eigenvalues of the top 3 components extracted were all greater than 1, explaining 55.351%, 7.382%, and 5.316% of the data variance respectively, and 68.048% cumulatively. These components were aligned with the competency dimensions of skills (S), knowledge (K), and attitude (A). Significant and positive correlations (0.135 < Kendall's tau-b < 0.276, p < 0.05) were found between Year-4 students' self-rated competency levels and their scores on the licensing examination. Steady competency growth was associated with almost all indicators, with the most pronounced growth in the domain of skills. A lack of steady growth was seen in the indicators of "applying the English language" and "conducting scientific research & innovating". CONCLUSIONS: The ASK-SEAT, a competency-based assessment scale developed to measure medical students' competency development, shows good reliability and structural validity. For predictive validity, weak-to-moderate correlations are found between Year-4 students' self-assessment and their performance on the national licensing examination (Year-4 students start their clinical clerkship during the 2nd semester of their 4th year of study). Year-2 to Year-6 students demonstrate steady improvement in the great majority of clinical competency indicators, except in "applying the English language" and "conducting scientific research & innovating".
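
The reliability and correlation statistics named above (Cronbach's alpha, Kendall's tau-b, Mann-Whitney U) can be illustrated with the short sketch below; all data are synthetic and the item counts are assumptions, not the ASK-SEAT responses.

```python
import numpy as np
from scipy.stats import kendalltau, mannwhitneyu

rng = np.random.default_rng(1)

# Synthetic Likert responses: 200 respondents x 20 items (illustrative only).
items = rng.integers(1, 6, size=(200, 20)) + rng.integers(0, 2, size=(200, 1))
items = np.clip(items, 1, 5)

def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")

# Kendall's tau-b between a self-rated competency score and a hypothetical exam score.
self_rating = items.mean(axis=1)
exam_score = self_rating * 10 + rng.normal(0, 8, size=200)
tau, p = kendalltau(self_rating, exam_score)
print(f"Kendall's tau-b: {tau:.3f} (p = {p:.3g})")

# Mann-Whitney U comparing two grade-year groups on a competency indicator.
year2, year6 = items[:100].mean(axis=1), items[100:].mean(axis=1) + 0.3
u, p = mannwhitneyu(year2, year6, alternative="two-sided")
print(f"Mann-Whitney U: {u:.1f} (p = {p:.3g})")
```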


Subjects
Clinical Clerkship, Clinical Medicine, Medical Students, Clinical Competence, Educational Measurement, Humans, Reproducibility of Results, Surveys and Questionnaires
7.
BMC Med Educ ; 22(1): 347, 2022 May 06.
Article in English | MEDLINE | ID: mdl-35524304

ABSTRACT

BACKGROUND: Entrustable Professional Activities (EPAs) assessments measure learners' competence with an entrustment or supervisory scale. Designed for workplace-based assessment, EPA assessments have also been proposed for undergraduate medical education (UME), where assessments frequently occur outside the workplace and may be less intuitive, raising validity concerns. This study explored how assessors make entrustment determinations in UME, with an additional comparison based on familiarity with prior performance in the context of longitudinal student-assessor relationships. METHODS: A qualitative approach using think-alouds was employed. Assessors assessed two students (one familiar and one unfamiliar) completing a history and physical examination using a supervisory scale and then thought aloud after each assessment. We conducted a thematic analysis of assessors' response processes and compared them based on their familiarity with a student. RESULTS: Four themes and fifteen subthemes were identified. The most prevalent theme related to "student performance." The other three themes were "frame of reference," "assessor uncertainty," and "the patient." "Previous student performance" and "affective reactions" were subthemes more likely to inform scoring when faculty were familiar with a student, while unfamiliar faculty were more likely to reference "self" and "lack confidence in their ability to assess." CONCLUSIONS: Student performance appears to be assessors' main consideration for all students, providing some validity evidence for the response process in EPA assessments. Several problematic themes could be addressed with faculty development, while others appear to be inherent to entrustment and may be more challenging to mitigate. Differences based on assessor familiarity with a student merit further research on how trust develops over time.


Subjects
Competency-Based Education, Undergraduate Medical Education, Clinical Competence, Cognition, Faculty, Humans
8.
J Postgrad Med ; 67(1): 18-23, 2021.
Article in English | MEDLINE | ID: mdl-33533748

ABSTRACT

The uncertainty in all spheres of higher education due to the COVID-19 pandemic has had an unprecedented impact on teaching-learning and assessments in medical colleges across the globe. The conventional ways of assessment are now neither possible nor practical for certifying medical graduates. This has necessitated thoughtful adjustments to the assessment system, with most institutions transitioning to online assessments that have so far remained underutilized. Programmatic assessment encourages the deliberate and longitudinal use of diverse assessment methods to maximize learning and assessment. It can be utilized optimally at present because it ensures the collection of multiple low-stakes assessment data points, which can be aggregated for high-stakes pass/fail decisions while making use of every opportunity for formative feedback to improve performance. Although efforts have been made to introduce programmatic assessment in the competency-based undergraduate curriculum, the transition to online assessment presents an opportunity if the basic tenets of programmatic assessment, the choice of online assessment tools, strategies, good practices, and challenges of online assessment are understood and explored explicitly when designing and implementing online assessments. This paper explores the possibility of combining online assessment with face-to-face assessment and structuring a blended programmatic assessment in competency-based medical education.


Subjects
Competency-Based Education/methods, Curriculum, Distance Education/methods, Medical Education/methods, Educational Measurement/methods, Humans, India
9.
Med Teach ; 43(2): 168-173, 2021 02.
Article in English | MEDLINE | ID: mdl-33073665

ABSTRACT

BACKGROUND: Assessing learners' competence in diagnostic reasoning is challenging and unstandardized in medical education. We developed a theory-informed, behaviorally anchored rubric, the Assessment of Reasoning Tool (ART), with content and response process validity. This study gathered evidence to support the internal structure and the interpretation of measurements derived from this tool. METHODS: We derived a reconstructed version of ART (ART-R) as a 15-item, 5-point Likert scale using the ART domains and descriptors. A psychometric evaluation was performed. We created 18 video variations of learner oral presentations, portraying different performance levels of the ART-R. RESULTS: 152 faculty viewed two videos and rated the learner globally and then using the ART-R. The confirmatory factor analysis showed a favorable comparative fit index = 0.99, root mean square error of approximation = 0.097, and standardized root mean square residual = 0.026. The five domains, hypothesis-directed information gathering, problem representation, prioritized differential diagnosis, diagnostic evaluation, and awareness of cognitive tendencies/emotional factors, had high internal consistency. The total score for each domain had a positive association with the global assessment of diagnostic reasoning. CONCLUSIONS: Our findings provide validity evidence for the ART-R as an assessment tool with five theoretical domains, internal consistency, and association with global assessment.
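
A full confirmatory factor analysis is usually run in dedicated SEM software; the sketch below illustrates only the simpler checks mentioned above (per-domain internal consistency and the association of domain scores with a global rating), using synthetic ratings and assumed item groupings rather than the ART-R data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)

# Synthetic ratings: 150 raters x 15 items (3 items per assumed domain), 5-point Likert.
latent = rng.normal(3, 0.7, size=(150, 5))                     # one latent score per domain
items = np.repeat(latent, 3, axis=1) + rng.normal(0, 0.5, size=(150, 15))
items = np.clip(np.round(items), 1, 5)
global_rating = latent.mean(axis=1) + rng.normal(0, 0.3, size=150)

def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

domains = ["information gathering", "problem representation", "differential diagnosis",
           "diagnostic evaluation", "cognitive tendencies"]
for d, name in enumerate(domains):
    block = items[:, d * 3:(d + 1) * 3]
    rho, _ = spearmanr(block.sum(axis=1), global_rating)
    print(f"{name}: alpha = {cronbach_alpha(block):.2f}, "
          f"correlation with global rating = {rho:.2f}")
```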


Subjects
Medical Education, Problem Solving, Differential Diagnosis, Factor Analysis, Humans, Psychometrics, Reproducibility of Results
10.
Clin Otolaryngol ; 46(5): 961-968, 2021 09.
Article in English | MEDLINE | ID: mdl-33779051

ABSTRACT

INTRODUCTION: Cortical mastoidectomy is a core skill in which otolaryngology trainees must gain competency. Automated competency assessments have the potential to reduce assessment subjectivity and bias, as well as the workload for surgical trainers. OBJECTIVES: This study aimed to develop and validate an automated competency assessment system for cortical mastoidectomy. PARTICIPANTS: Data from 60 participants (Group 1) were used to develop and validate an automated competency assessment system for cortical mastoidectomy. Data from 14 other participants (Group 2) were used to test the generalisability of the automated assessment. DESIGN: Participants drilled cortical mastoidectomies on a virtual reality temporal bone simulator. Procedures were graded by a blinded expert using the previously validated Melbourne Mastoidectomy Scale; procedures from Groups 1 and 2 were assessed by different experts. Using data from Group 1, simulator metrics were developed to map directly to the individual items of this scale. Metric value thresholds were calculated by comparing automated simulator metric values to expert scores. Binary scores per item were allocated using these thresholds. Validation was performed using random sub-sampling. The generalisability of the method was investigated by performing the automated assessment on mastoidectomies performed by Group 2 and correlating these with the scores of a second blinded expert. RESULTS: The automated binary score compared with the expert score per item had an accuracy, sensitivity and specificity of 0.9450, 0.9547 and 0.9343, respectively, for Group 1; and 0.8614, 0.8579 and 0.8654, respectively, for Group 2. There was a strong correlation between the total scores per participant assigned by the expert and those calculated by the automatic assessment method for both Group 1 (r = .9144, P < .0001) and Group 2 (r = .7224, P < .0001). CONCLUSION: This study outlines a virtual reality-based method of automated assessment of competency in cortical mastoidectomy, which proved comparable to the assessment provided by human experts.
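
One plausible way to map continuous simulator metrics to binary item scores and compare the resulting totals with an expert's scores is sketched below; the metrics, thresholds and scores are invented for illustration and are not the Melbourne Mastoidectomy Scale data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical data: 60 procedures, 5 simulator metrics, expert binary item scores.
metrics = rng.normal(0, 1, size=(60, 5))
expert = (metrics + rng.normal(0, 0.6, size=(60, 5)) > 0).astype(int)

# Learn one threshold per item: the value that best reproduces the expert's 0/1 score.
thresholds = []
for j in range(5):
    candidates = np.unique(metrics[:, j])
    accs = [np.mean((metrics[:, j] >= t).astype(int) == expert[:, j]) for t in candidates]
    thresholds.append(candidates[int(np.argmax(accs))])

auto = (metrics >= np.array(thresholds)).astype(int)
agreement = np.mean(auto == expert)
r, p = pearsonr(auto.sum(axis=1), expert.sum(axis=1))
print(f"per-item agreement: {agreement:.3f}")
print(f"correlation of total scores (automated vs expert): r = {r:.2f}, p = {p:.3g}")
```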


Subjects
Clinical Competence, Medical Education/methods, Mastoidectomy/education, Simulation Training/methods, Virtual Reality, Adult, Female, Humans, Male
11.
Nurs Health Sci ; 23(1): 148-156, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32896047

ABSTRACT

The concept of Entrustable Professional Activities, recently pioneered in medical education, has emerged to support the implementation of competency-based education. Although competency-based frameworks are widely used in healthcare professional education to develop outcomes-based curricula, assessment of student competency in professional placement settings remains challenging. The novel concept of Entrustable Professional Activities, together with established methods of competency assessment, namely e-portfolios and self-assessment, was implemented in the La Trobe University Dietetic program in 2015-2016. This study aimed to appraise the e-portfolio and evaluate the use of Entrustable Professional Activities to assess competence. A mixed-methods evaluation, using qualitative and quantitative surveys with follow-up structured consultations, was conducted with final-year dietetics students and their supervisors. Dietetics students were comfortable with Entrustable Professional Activities and competency-based assessment, whereas supervisors preferred Entrustable Professional Activity-based assessment. All stakeholders valued student self-assessment and the ongoing use of structured e-portfolios to develop and document competency. The use of structured e-portfolios, student self-assessment, and the emerging concept of Entrustable Professional Activities are useful tools to support dietetics student education in professional placement settings.


Subjects
Clinical Competence, Competency-Based Education, Dietetics/education, Educational Measurement, Curriculum, Humans, Self-Assessment (Psychology)
12.
Med J Armed Forces India ; 77(Suppl 2): S466-S474, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34393331

ABSTRACT

BACKGROUND: There is an urgent need for more diverse methods for student evaluation, given the sudden shift to online learning necessitated by the coronavirus disease 2019 (COVID-19) pandemic. Innovative assessment tools will need to cover the required competencies and should be used to drive self-learning. Self-assessments and peer assessments may be added to the traditional classroom-based evaluations to identify individual insecurities or overconfidence. Identification of these factors is essential to medical education and is a focus of current research. METHODS: A modified operational assessment was introduced for the evaluation of third-year medical students. This intervention has facilitated sustained education and has promoted interactive student learning. Members of the entering class of 2017 participated in an integrated team and a competency-based online project that involved innovative item creation and case presentation methods. RESULTS: The new assessment process has been implemented successfully with positive feedback from all the participants; a usable product has been generated. CONCLUSIONS: We created new assessment tools in response to the COVID-19 pandemic that have been used successfully at our institution. These tools have provided a framework for integrated and interactive evaluations that can be used to facilitate the modification of traditional assessment methods.

13.
Adv Health Sci Educ Theory Pract ; 25(1): 189-193, 2020 03.
Article in English | MEDLINE | ID: mdl-32030572

ABSTRACT

When educators are developing an effective and workable assessment program in graduate medical education by employing action research and stakeholder mapping to identify core competency domains and directives, the multi-stage process can be guided and informed by utilizing the story of designing, building and sea-testing sailing ships as a metaphor. However, the current challenge of physician burnout demands additional attention when formulating medical training frameworks, assessment guidelines and mentoring programs in 2020. The possibility of job-crafting is raised for consideration by designers of core competency frameworks in the health professions.


Subjects
Physicians, Ships, Graduate Medical Education, Humans, Program Evaluation
14.
Teach Learn Med ; 32(5): 541-551, 2020.
Article in English | MEDLINE | ID: mdl-32529844

ABSTRACT

Problem: Prior studies have reported significant negative attitudes amongst both faculty and residents toward direct observation and feedback. Numerous contributing factors have been identified, including insufficient time for direct observation and feedback, poorly understood purpose, inadequate training, disbelief in the formative intent, inauthentic resident-patient clinical interactions, undermining of resident autonomy, lack of trust between the faculty-resident dyad, and low-quality feedback information that lacks credibility. Strategies are urgently needed to overcome these challenges and more effectively engage faculty and residents in direct observation and feedback. Otherwise, the primary goals of supporting both formative and summative assessment will not be realized, and the viability of competency-based medical education will be threatened. Intervention: Toward this end, recent studies have recommended numerous strategies to overcome these barriers: protected time for direct observation and feedback; ongoing faculty and resident training on goals and bidirectional, co-constructed feedback; repeated direct observations and feedback within a longitudinal resident-supervisor relationship; utilization of assessment tools with evidence for validity; and monitoring for engagement. Given the complexity of the problem, it is likely that bundling multiple strategies together will be necessary to overcome the challenges. The Direct Observation Structured Feedback Program (DOSFP) incorporated many of the recommended features, including protected time for direct observation and feedback within longitudinal faculty-resident relationships. Using a qualitative thematic approach, the authors conducted semi-structured interviews during February and March 2019 with 10 supervisors and 10 residents. Participants were asked to reflect on their experiences. Interview guide questions explored key themes from the literature on direct observation and feedback. Transcripts were anonymized. Two authors independently and iteratively coded the transcripts. Coding was theory-driven and differences were discussed until consensus was reached. The authors then explored the relationships between the codes and used a semantic approach to construct themes. Context: The DOSFP was implemented in a psychiatry continuity clinic for second- and third-year residents. Impact: Faculty and residents were aligned around the goals. They both perceived the DOSFP as focused on growth rather than judgment, even though residents understood that the feedback had both formative and summative purposes. The DOSFP facilitated educational alliances characterized by trust and respect. With repeated practice within a longitudinal relationship, trainees dropped the performance orientation and described their interactions with patients as authentic. Residents generally perceived the feedback as credible, described feedback quality as high, and valued the two-way conversation. However, when receiving feedback with which they did not agree, residents demurred or, at most, would ask a clarifying question, but then internally discounted the feedback. Lessons Learned: Direct observation and structured feedback programs that bundle recent recommendations may overcome many of the challenges identified by previous research. Yet, residents discounted disagreeable feedback, illustrating a significant limitation and the need for other strategies that help residents reconcile conflict between external data and their own self-appraisal.


Subjects
Formative Feedback, Internship and Residency, Observation, Medical Students/psychology, Competency-Based Education, Graduate Medical Education, Humans, Interviews as Topic, Qualitative Research, Staff Development
15.
J Vet Med Educ ; 47(2): 239-247, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31194627

ABSTRACT

Video- versus handout-based instructions may influence student outcomes during simulation training and competency-based assessments. Forty-five third-year veterinary students voluntarily participated in a simulation module on canine endotracheal intubation. A prospective, randomized, double-blinded study investigated the impact of video (n = 23) versus handout (n = 22) instructions on student confidence, anxiety, and task performance. Students self-scored their confidence and anxiety before and after the simulation. During the simulation laboratory, three raters independently evaluated student performance using a 20-item formal assessment tool with a 5-point global rating scale. No significant between- or within-group differences (p > .05) were found for either confidence or anxiety scores. Video-based instructions were associated with significantly higher (p < .05) total formal assessment scores compared with handout-based instructions. The video group had significantly higher scores than the handout group on 3 of the 20 individual skills (items) assessed: placement of tie to the adaptor-endotracheal tube complex (p < .05), using the anesthetic machine (p < .01), and pop-off valve management (p < .001). Inter-rater reliability, as assessed by Cronbach's α (.92) and Kendall's W (.89), was excellent and almost perfect, respectively. A two-faceted crossed-design generalizability analysis yielded G coefficients for both the handout (Eρ² = .68) and video (Eρ² = .72) groups. Video instructions may be associated with higher performance scores than handout instructions during endotracheal intubation simulation training. Further research into skill retention and learning styles is warranted.
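
For reference, Kendall's W and a raters-as-items Cronbach's alpha can be computed as in the sketch below; the three-rater scores are synthetic, not the study's assessment data.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(4)

# Synthetic scores: 45 students rated by 3 raters on a checklist total (illustrative only).
ability = rng.normal(70, 10, size=45)
scores = ability[:, None] + rng.normal(0, 5, size=(45, 3))

def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def kendalls_w(x):
    """Kendall's W for n subjects (rows) ranked by m raters (columns), no tie correction."""
    n, m = x.shape
    ranks = np.column_stack([rankdata(x[:, j]) for j in range(m)])
    rank_sums = ranks.sum(axis=1)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

print(f"Cronbach's alpha across raters: {cronbach_alpha(scores):.2f}")
print(f"Kendall's W: {kendalls_w(scores):.2f}")
```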


Subjects
Veterinary Education, Intratracheal Intubation, Simulation Training, Students, Task Performance and Analysis, Animals, Clinical Competence, Dogs, Veterinary Education/methods, Veterinary Education/standards, Educational Measurement, Humans, Intratracheal Intubation/veterinary, Prospective Studies, Reproducibility of Results, Students/psychology, Students/statistics & numerical data
16.
Adv Health Sci Educ Theory Pract ; 24(2): 413-421, 2019 05.
Article in English | MEDLINE | ID: mdl-29777463

ABSTRACT

Educational assessment for the health professions has seen a major attempt to introduce competency-based frameworks. As high-level policy developments, the changes were intended to improve outcomes by supporting learning and skills development. However, we argue that previous experiences with major innovations in assessment offer an important road map for developing and refining assessment innovations, including careful piloting and analyses of their measurement qualities and impacts. Based on the literature, numerous assessment workshops, personal interactions with potential users, and our 40 years of experience in implementing assessment change, we lament the lack of a coordinated approach to clarify and improve the measurement qualities and functionality of competency-based assessment (CBA). To address this worrisome situation, we offer two roadmaps to guide CBA's further development. First, reframe and address CBA as a measurement development opportunity. Second, using a roadmap adapted from the management literature on sustainable innovation, the medical assessment community needs to initiate an integrated plan to implement CBA as a sustainable innovation within existing educational programs and self-regulatory enterprises. Further examples of downstream opportunities to refocus CBA at the implementation level within faculties and within the regulatory framework of the profession are offered. In closing, we challenge the broader assessment community in medicine to step forward and own the challenge and opportunities to reframe CBA as an innovation to improve the quality of the clinical educational experience. The goal is to optimize assessment in health education and ultimately improve the public's health.


Subjects
Competency-Based Education/methods, Educational Measurement/methods, Health Occupations/education, Clinical Competence, Competency-Based Education/standards, Health Occupations/standards, Humans, Learning, Reproducibility of Results
17.
J Vet Med Educ ; 46(4): 423-428, 2019.
Article in English | MEDLINE | ID: mdl-30806563

ABSTRACT

A retrospective review of the first-year surgical skills competency-based assessment was performed at the Western College of Veterinary Medicine (WCVM) using 6 years of data from 475 students. The cumulative pass rate was 88.2% on first attempt and 99.2% upon remediation. Student gender did not influence overall pass/fail rates, with a failure rate of 11.1% for female students and 10.5% for male students (p = 0.88). Significantly decreased pass rates were associated with identification of the Mayo scissors (p = 0.03), explanation of using Allis tissue forceps (p = 0.002), and performance of a Lembert suture pattern (p < 0.01). An increased pass rate was observed for the cruciate pattern (p < 0.01). No differences were found in pass/fail rates for hand ties (p = 0.80) or instrument ties (p = 0.60). The most common errors occurred with half hitch ties: hand ties (53%) and instrument ties (38%). The most common errors were also recognized for instrument handling (31%) and needle management (20%) during the suture pattern section. The veterinary medical education community may benefit from the evidence-based findings of this research, in terms of understanding student performance across competencies, identifying areas requiring additional mentoring, and determining appropriate competencies for first-year veterinary students.
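
A pass/fail comparison between two groups, such as the gender comparison reported above, is typically run on a 2x2 contingency table; the sketch below uses invented counts purely for illustration.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = gender, columns = (pass, fail). Counts are invented.
table = [[240, 30],   # female: pass, fail
         [180, 21]]   # male:   pass, fail
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2f}")
```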


Subjects
Clinical Competence, Veterinary Education, Educational Measurement, Students/psychology, Animals, Female, Humans, Laboratories, Male, Retrospective Studies
18.
Article in German | MEDLINE | ID: mdl-29230515

ABSTRACT

In Germany, future physicians have to pass a national licensing examination at the end of their medical studies. Passing this examination is the requirement for the license to practice medicine. The Masterplan Medizinstudium 2020, with its 41 measures, aims to shift the paradigm in medical education and medical licensing examinations. The main goals of the Masterplan include the development towards competency-based and practically oriented medical education and examination, as well as the strengthening of general medicine. The healthcare policy takes into account social developments that are very important for medical education and the licensing examination. Seven measures of the Masterplan relate to the realignment of the licensing examinations. Their learning-steering function should better support students in achieving the study goal defined in the German Medical Licensure Act: to educate a scientifically and practically trained physician who is qualified for autonomous and independent professional practice, postgraduate education and continuing education.


Subjects
Clinical Competence/legislation & jurisprudence, Competency-Based Education/legislation & jurisprudence, Medical Education/legislation & jurisprudence, Medical Licensure/legislation & jurisprudence, Clinical Competence/standards, Competency-Based Education/standards, Competency-Based Education/trends, Curriculum/standards, Curriculum/trends, Medical Education/standards, Medical Education/trends, Continuing Medical Education/legislation & jurisprudence, Continuing Medical Education/standards, Continuing Medical Education/trends, Graduate Medical Education/legislation & jurisprudence, Graduate Medical Education/standards, Graduate Medical Education/trends, Forecasting, Germany, Goals, Humans, Medical Licensure/trends
19.
Adv Exp Med Biol ; 989: 217-233, 2017.
Article in English | MEDLINE | ID: mdl-28971430

ABSTRACT

Programmatic assessment is being adopted as a preferred method of assessment in postgraduate medical education in Australia. Programmatic assessment of professionalism is likely to receive increasing attention. This paper reviews the literature regarding the assessment of professionalism in psychiatry. A search using the terms 'professionalism AND psychiatry' was conducted in the ERIC database. Only original articles relevant to professionalism education and assessment in psychiatry were selected, rather than theoretical or review papers that applied research from other fields of medicine to psychiatry. Articles regarding the need for professionalism education in psychiatry were included as they provided a rationale for curriculum development in this field as a precursor to assessment. Key findings from the literature were summarised in light of the author's own experience as an educator and assessor of both medical students and trainees in psychiatry, and incorporated into a guide to implementing programmatic assessment of professionalism in psychiatry. Within psychiatry, the specific evidence base for use of particular tools in assessing professionalism is limited. However, used in conjunction with psychiatrists' views about what is important in professionalism education, as well as knowledge from other medical disciplines regarding professionalism assessment tools, this evidence can inform implementation of programmatic assessment of professionalism in undergraduate, postgraduate and continuing professional development settings. Given the emergent nature of such assessment initiatives, they should be subjected to rigorous evaluation.


Subjects
Medical Education, Professionalism, Psychiatry, Australia
20.
Adv Exp Med Biol ; 988: 159-180, 2017.
Article in English | MEDLINE | ID: mdl-28971397

ABSTRACT

Over the last two decades, Objective Structured Clinical Examination (OSCE) has become an increasingly important part of psychiatry education and assessment in the Australian context. A reappraisal of the evidence base regarding the use of OSCE in psychiatry is therefore timely. This paper reviews the literature regarding the use of OSCE as an assessment tool in both undergraduate and postgraduate psychiatry training settings. Suitable articles were identified using the search terms 'psychiatry AND OSCE' in the ERIC (educational) and PubMed (healthcare) databases and grouped according to their predominant focus: (1) the validity of OSCEs in psychiatry; (2) candidate preparation and other factors impacting on performance; and (3) special topics. The literature suggests that the OSCE has been widely adopted in psychiatry education, as a valid and reliable method of assessing psychiatric competencies that is acceptable to both learners and teachers alike. The limited evidence base regarding its validity for postgraduate psychiatry examinations suggests that more research is needed in this domain. Despite any shortcomings, OSCEs are currently ubiquitous in all areas of undergraduate and postgraduate medicine and proposing a better alternative for competency-based assessment is difficult. A critical question is whether OSCE is sufficient on its own to assess high-level consultancy skills, and aspects of professionalism and ethical practice, that are essential for effective specialist practice, or whether it needs to be supplemented by additional testing modalities.


Subjects
Clinical Competence, Medical Education, Educational Measurement, Psychiatry/education, Australia