Results 1 - 5 of 5
1.
Med Teach; 1-9, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635469

ABSTRACT

INTRODUCTION: Whilst rarely researched, the authenticity with which Objective Structured Clinical Exams (OSCEs) simulate practice is arguably critical to making valid judgements about candidates' preparedness to progress in their training. We studied how and why an OSCE gave rise to different experiences of authenticity for different participants under different circumstances. METHODS: We used realist evaluation, collecting data through interviews and focus groups with participants from four UK medical schools who took part in an OSCE designed to enhance authenticity. RESULTS: Several features of OSCE stations (realistic, complex, complete cases; sufficient time; autonomy; props; guidelines; limited examiner interaction, etc.) combined to enable students to project into their future roles, judge and integrate information, consider their actions and act naturally. When this occurred, their performances felt like an authentic representation of their clinical practice. This did not always happen: focusing on unavoidable differences from practice, incongruous features, anxiety and preoccupation with examiners' expectations sometimes disrupted immersion, producing inauthenticity. CONCLUSIONS: The perception of authenticity in OSCEs appears to originate from an interaction of station design with individual preferences and contextual expectations. Whilst we tentatively suggest ways to promote authenticity, more understanding is needed of candidates' interaction with simulation and scenario immersion in summative assessment.

2.
Med Teach; 44(8): 836-850, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35771684

ABSTRACT

INTRODUCTION: In 2011, a consensus report was produced on technology-enhanced assessment (TEA), its good practices, and future perspectives. Since then, technological advances have enabled innovative practices and tools that have revolutionised how learners are assessed. In this updated consensus, we bring together the potential of technology and the ultimate goals of assessment on learner attainment, faculty development, and improved healthcare practices. METHODS: As material for the report, we used scholarly publications on TEA in both health professions education (HPE) and general higher education, feedback from the 2020 Ottawa Conference workshops, and scholarly publications on assessment technology practices during the COVID-19 pandemic. RESULTS AND CONCLUSION: The group identified areas of consensus that remained to be resolved and issues that arose in the evolution of TEA. We adopted a three-stage approach (readiness to adopt technology, application of assessment technology, and evaluation/dissemination). The application stage adopted an assessment 'lifecycle' approach and targeted five key foci: (1) advancing authenticity of assessment, (2) engaging learners with assessment, (3) enhancing design and scheduling, (4) optimising assessment delivery and recording learner achievement, and (5) tracking learner progress and faculty activity, thereby supporting longitudinal learning and continuous assessment.


Subjects
COVID-19, Pandemics, Curriculum, Humans, Learning, Technology
3.
Med Teach; 42(11): 1250-1260, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32749915

ABSTRACT

INTRODUCTION: Novel uses of video aim to enhance assessment in health professions education. Whilst these uses presume equivalence between video and live scoring, some research suggests that poorly understood variations could challenge validity. We aimed to understand examiners' and students' interaction with video whilst developing procedures to promote its optimal use. METHODS: Using design-based research, we developed theory and procedures for video use in assessment, iteratively adapting conditions across simulated OSCE stations. We explored examiners' and students' perceptions using think-aloud protocols, interviews and a focus group. Data were analysed using constructivist grounded-theory methods. RESULTS: Video-based assessment produced detachment and reduced volitional control for examiners. Examiners' ability to make valid video-based judgements was mediated by the interaction of station content and specifically selected filming parameters. Examiners displayed several judgemental tendencies which helped them manage video's limitations but could also bias judgements in some circumstances. Students rarely found carefully placed cameras intrusive and considered filming acceptable if adequately justified. DISCUSSION: Successful use of video-based assessment relies on balancing the need to ensure station-specific information adequacy, the avoidance of disruptive intrusion, and the degree of justification provided by video's educational purpose. Video has the potential to enhance assessment validity and students' learning when an appropriate balance is achieved.


Subjects
Clinical Competence, Medical Education, Educational Measurement, Humans, Judgment
4.
MedEdPublish (2016); 9: 18, 2020.
Article in English | MEDLINE | ID: mdl-38073781

ABSTRACT

BACKGROUND: Within assessment of physical examination skills, two approaches are common: "Describing Findings" (students comment throughout) and examining as "Usual Practice" (students only report findings at the end). Despite numerous potential influences on both students' performances and assessors' judgements, no prior studies have investigated the influence of either approach on assessments. METHODS: Two-group, randomised, crossover design. Within a two-station simulated physical examination OSCE, we manipulated whether students "described findings" or examined as "usual practice", collecting (1) performance scores; (2) students'/examiners' cognitive load ratings; ratings of the (3) fluency and (4) completeness of students' presentations; and (5) students' task-finishing, comparing all five end-points across conditions. RESULTS: Neither students' performance scores nor examiners' cognitive load was influenced by experimental condition. Students reported higher cognitive load when "describing findings" (7/9) than when examining as "usual practice" (6/9, p = 0.002), and were less likely to finish (4 vs 12, p = 0.007). Presentation completeness was higher for "describing findings" (mean = 2.40, 95% CI = 2.05-2.74) than "usual practice" (mean = 1.92, 95% CI = 1.65-2.18; p = 0.016), whilst fluency ratings showed a similar trend. CONCLUSIONS: The decision to "Describe Findings" or examine as "Usual Practice" does not appear neutral, potentially influencing students' efficiency, recall and (by inference) learning. Institutions should explicitly select one option based on assessment goals.
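
To make the reported comparison concrete, the sketch below shows how summary statistics of this kind (condition means with 95% confidence intervals and a p-value) might be computed. It is a hypothetical illustration only: the rating values are invented, and the choice of an independent-samples t-test is an assumption, not the analysis reported by the authors.

```python
# Hypothetical sketch of comparing completeness ratings between the
# "Describing Findings" and "Usual Practice" conditions.
# The ratings below are invented; the study reports only summary
# statistics (means, 95% CIs, p = 0.016), and its exact test is not
# reproduced here.
import numpy as np
from scipy import stats

describing = np.array([3, 2, 3, 2, 3, 2, 2, 3, 2, 3, 2, 2])  # invented
usual = np.array([2, 2, 1, 2, 2, 2, 1, 2, 2, 2, 2, 2])       # invented

def mean_ci(sample, level=0.95):
    """Return the sample mean and a t-based confidence interval."""
    m = sample.mean()
    half_width = stats.t.ppf((1 + level) / 2, df=len(sample) - 1) * stats.sem(sample)
    return m, (m - half_width, m + half_width)

for name, sample in [("Describing Findings", describing), ("Usual Practice", usual)]:
    m, (lo, hi) = mean_ci(sample)
    print(f"{name}: mean = {m:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")

# Independent-samples t-test as a stand-in for the study's comparison.
t_stat, p_value = stats.ttest_ind(describing, usual)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```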

5.
Adv Health Sci Educ Theory Pract; 23(5): 937-959, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29980956

ABSTRACT

Recent literature places greater emphasis on assessment comments rather than relying solely on scores. Both, however, are variable, as they emanate from assessment judgements. One established source of variability is "contrast effects": scores are shifted away from the depicted level of competence in a preceding encounter. The shift could arise from an effect on the range-frequency of assessors' internal scales or on the salience of performance aspects within assessment judgements. As these suggest different potential interventions, we investigated assessors' cognition, using the insight provided by "clusters of consensus" to determine whether contrast effects induced any change in the salience of performance aspects. A dataset from a previous experiment contained scores and comments for three encounters: two with significant contrast effects and one without. Clusters of consensus were identified using F-sort and latent partition analysis, both when contrast effects were significant and when they were not. The proportion of assessors making similar comments differed significantly only when contrast effects were significant, with assessors more frequently commenting on aspects that were dissimilar to the standard of competence demonstrated in the preceding performance. Rather than simply influencing the range-frequency of assessors' scales, preceding performances may affect the salience of performance aspects through comparative distinctiveness: when juxtaposed with the context, some aspects are more distinct and selectively draw attention. Research is needed to determine whether changes in salience indicate biased or improved assessment information. The potential to augment existing benchmarking procedures in assessor training, by cueing assessors' attention through observation of reference performances immediately prior to assessment, should be explored.
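
As an illustration of the "clusters of consensus" idea (grouping assessors by the overlap in which performance aspects their comments mention), the sketch below clusters invented binary comment profiles with ordinary hierarchical clustering. It is a simplified stand-in: the study's F-sort and latent partition analysis procedure is not reproduced here, and the aspect labels and data are assumptions for demonstration only.

```python
# Simplified stand-in for identifying "clusters of consensus": group
# assessors whose comments mention overlapping performance aspects.
# Uses hierarchical clustering on invented binary profiles; the study's
# F-sort and latent partition analysis are not reproduced here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

aspects = ["rapport", "structure", "examination technique", "safety netting"]
# Rows = assessors; columns = whether their comments mentioned each aspect.
profiles = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
], dtype=bool)

# Jaccard distance captures disagreement in which aspects were mentioned.
distances = pdist(profiles, metric="jaccard")
tree = linkage(distances, method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")  # ask for two clusters

for assessor, label in enumerate(clusters, start=1):
    print(f"Assessor {assessor} -> consensus cluster {label}")
```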


Subjects
Educational Measurement/standards, Health Occupations/education, Observer Variation, Clinical Competence, Cognition, Communication, Educational Measurement/methods, Humans, Judgment, Medical History Taking, Professional-Patient Relations, Single-Blind Method, United Kingdom