1.
MedEdPublish (2016) ; 13: 37, 2023.
Article in English | MEDLINE | ID: mdl-37868340

ABSTRACT

In this paper, the authors offer perspectives on uses of technology and assessment that support learning. Using validity (from the field of assessment) as a framework, they discuss four aspects of an interconnected technology, learning and assessment space that represent theory-informed, authentic practice. The four are: 1) integrated coherence for learning, assessment and technology; 2) responsibilities for equity, diversity, inclusion and wellbeing; 3) sustainability; and 4) balancing resources in global contexts. The authors propose steps and considerations for medical and health professions educators who need to contextualise applications of technology, learning and assessment for positive impact on learners, faculty, institutions and patient care.

2.
Med Teach ; 45(9): 978-983, 2023 09.
Article in English | MEDLINE | ID: mdl-36786837

ABSTRACT

INTRODUCTION: The Ottawa Conference on the Assessment of Competence in Medicine and the Healthcare Professions was first convened in 1985 in Ottawa. Since then, what has become known as the Ottawa conference has been held in various locations around the world every 2 years. It has become an important conference for the assessment community - including researchers, educators, administrators and leaders - to share contemporary knowledge and develop international standards for assessment in medical and health professions education. METHODS: The Ottawa 2022 conference was held in Lyon, France, in conjunction with the AMEE 2022 conference. A diverse group of international assessment experts was invited to present a symposium at the AMEE conference to summarise key concepts from the Ottawa conference. This paper was developed from that symposium. RESULTS AND DISCUSSION: This paper summarises key themes and issues that emerged from the Ottawa 2022 conference. It highlights the importance of the consensus statements and discusses challenges for assessment, such as issues of equity, diversity and inclusion, shifts in emphasis to systems of assessment, implications of 'big data' and analytics, and challenges to ensure published research and practice are based on contemporary theories and concepts.


Subjects
Medicine, Professional Competence, Humans
3.
BMC Med Educ ; 22(1): 6, 2022 Jan 03.
Article in English | MEDLINE | ID: mdl-34980099

ABSTRACT

INTRODUCTION: This study aimed to explore the decision-making processes of raters during objective structured clinical examinations (OSCEs), in particular the tacit assumptions and beliefs of raters as well as rater idiosyncrasies. METHODS: Thinking-aloud protocol interviews were used to gather data on examiners' thoughts during their decision-making while they watched trigger OSCE videos and rated candidates. A purposeful recruiting strategy was taken, with a view to interviewing both examiners with many years of experience (greater than six years) and those with less experience examining at final medical examination level. RESULTS: Thirty-one interviews were conducted in three centres in three different countries. Three themes were identified during data analysis, entitled 'OSCEs are inauthentic', 'looking for glimpses of truth' and 'evolution with experience'. CONCLUSION: Raters perceive that the shortcomings of OSCEs can have unwanted effects on student behaviour. Some examiners, more likely those in the more experienced group, may deviate from an organisation's directions because of perceived shortcomings of the assessment. No method of assessment is without flaw, and it is important to be aware of the effects that the limitations and shortcomings of assessment methods have on student performance and examiner perception. Further study of assessor and student perceptions of OSCE performance would be helpful.


Subjects
Clinical Competence, Educational Measurement, Cognition, Humans, Physical Examination, Videotape Recording
4.
Med Teach ; 43(1): 58-67, 2021 01.
Article in English | MEDLINE | ID: mdl-33054524

ABSTRACT

INTRODUCTION: In 2011 the Consensus Statement on Performance Assessment was published in Medical Teacher. That paper was commissioned by AMEE (Association for Medical Education in Europe) as part of the series of Consensus Statements following the 2010 Ottawa Conference. In 2019, it was recommended that a working group be reconvened to review and consider developments in performance assessment since the 2011 publication. METHODS: Following review of the original recommendations in the 2011 paper and shifts in the field over the past 10 years, the group identified areas of consensus and as-yet-unresolved issues for performance assessment. RESULTS AND DISCUSSION: This paper addresses developments in performance assessment since 2011, reiterates relevant aspects of the 2011 paper, and summarises contemporary best-practice recommendations for OSCEs and workplace-based assessments (WBAs), fit-for-purpose methods for performance assessment in the health professions.


Subjects
Medical Education, Consensus, Europe, Humans
5.
J Med Educ Curric Dev ; 7: 2382120520970894, 2020.
Article in English | MEDLINE | ID: mdl-33283046

ABSTRACT

A preparatory framework called EASI (Evaluate, Align, Student-centred, Implement and Improve) was developed with the aim of creating awareness about interim options and implementation opportunities for online Clinical and Communication Skills (CCS) learning. The framework, when applied, requires faculty to evaluate current resources, align sessions to learning outcomes using student-centred approaches, and continuously improve based on implementation experiences. Using the framework, we were able to generate various types of online CCS learning sessions for implementation in a short period of time in response to the COVID-19 pandemic. Importantly, we learnt lessons post-implementation from both student and faculty perspectives that will be used for planning and delivery of future sessions. In summary, the framework was useful for creating or redesigning CCS sessions that were disrupted during the pandemic; post-implementation experience suggests, however, that the framework can also be used for future solutions in online CCS learning as healthcare systems and delivery become increasingly decentralised and widely distributed.

7.
MedEdPublish (2016) ; 9: 54, 2020.
Article in English | MEDLINE | ID: mdl-38058921

ABSTRACT

The COVID-19 pandemic has presented significant challenges for medical schools. It is critical to ensure final-year medical students are not delayed in their entry to the clinical workforce in times of healthcare crisis. However, proceeding with assessment to determine competency for graduation from medical school, and maintaining performance standards for graduating doctors, is an unprecedented challenge under pandemic conditions. This challenge is hitherto uncharted territory for medical schools, and there is scant guidance for medical educators. In early March 2020, Duke-National University of Singapore Medical School embraced the challenge of ensuring competent final-year medical students could complete their final year of studies and graduate on time, to enter the medical workforce in Singapore without delay. This paper provides details of how the final-year clinical performance examinations were planned and conducted during the COVID-19 pandemic. The aim of the paper is to provide guidance to other medical schools in similar circumstances that need to plan and make suitable adjustments to clinical skills examinations under current pandemic conditions. The paper illustrates how it is possible to design and implement clinical skills examinations (OSCEs) that ensure the validity and reliability of high-stakes performance assessments whilst protecting the safety of all participants, minimising risk and maintaining defensibility to key stakeholders.

8.
MedEdPublish (2016) ; 9: 173, 2020.
Article in English | MEDLINE | ID: mdl-38073850

ABSTRACT

The impact of the COVID-19 pandemic on the teaching and assessment of clinical skills continues to pose significant challenges for healthcare education providers worldwide. In March 2020 Duke-National University of Singapore (Duke-NUS) Medical School demonstrated how to design and implement clinical skills examinations (OSCEs) in the early phase of COVID-19. As governing bodies continue to revise restrictions to help 'flatten the curve', educational institutions have to undertake rapid reviews of assessment practices and adapt to this ever-changing environment. This case study describes the risk assessments and challenges faced when delivering high-stakes OSCEs during the COVID-19 pandemic. We also describe the successful mitigation strategies implemented to combat these risks, and how we embraced and leveraged technology creatively despite the restrictions and constrained environment. We share practical guidance that may be of help or interest to healthcare education providers across all disciplines on how to effectively deliver clinical and procedural skills examinations that are authentic, valid and compliant with the strict national COVID-19 restrictions implemented during this time.

10.
Br J Hosp Med (Lond) ; 71(6): 342-4, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20551875

ABSTRACT

Clinical teachers are often involved in assessing clinical competence in the workplace, in universities and in colleges. Commonly used formal assessments include long and short cases and the objective structured clinical examination, which, if well designed, is a fair and reliable method of assessing clinical competence.


Subjects
Clinical Competence/standards, Graduate Medical Education, Teaching/methods, Educational Measurement/methods, England
11.
Med Teach ; 32(3): e111-4, 2010.
Article in English | MEDLINE | ID: mdl-20218825

ABSTRACT

BACKGROUND: The UK General Medical Council (GMC) in its regulatory capacity conducts formal tests of competence (TOCs) on doctors whose performance is of concern. TOCs are individually tailored to each doctor's specialty and grade. AIMS: To describe the development and implementation of an electronic blueprinting system that supports the delivery of TOCs. METHOD: A case study that describes the evolution of the GMC electronic blueprint, including the derivation of its content and its functionality. RESULTS: A question bank has been created, with all items classified according to the competencies defined by Good Medical Practice. This database aids test assembly and ensures that each assessment maps across the breadth of the blueprint (a simplified illustration follows this entry). CONCLUSIONS: The blueprint described was easy to construct and is easy to use. It reflects the knowledge, skills and behaviours (learning outcomes) to be assessed. It guides commissioning of test material and enables the systematic and faithful sampling of common and important problems. The principles described have potential for wider application to blueprinting in undergraduate or clinical training programmes. Such a blueprint can provide the essential link between a curriculum and its assessment system and ensure that assessment content is stable over time.


Subjects
Benchmarking/standards, Clinical Competence/standards, Medical Audit/standards, Health Care Peer Review/standards, Physicians/standards, Software, Competency-Based Education, Curriculum, Factual Databases, Educational Measurement, Humans, Health Care Quality Assurance, United Kingdom
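To make the blueprinting idea in the entry above concrete, here is a minimal sketch of a blueprint-driven question bank and test-assembly step, written as illustrative Python. The schema and domain names are assumptions for illustration only, not the GMC's actual database or the Good Medical Practice competency domains.

```python
# Illustrative sketch: a question bank whose items are tagged with the
# competency domain they assess, plus a test-assembly step that samples
# from every domain so each test maps across the breadth of the blueprint.
# Domain names and schema are hypothetical, not the GMC's actual system.
import random
from collections import defaultdict

ITEM_BANK = [
    {"id": "Q01", "domain": "Knowledge and skills"},
    {"id": "Q02", "domain": "Safety and quality"},
    {"id": "Q03", "domain": "Communication"},
    {"id": "Q04", "domain": "Knowledge and skills"},
    {"id": "Q05", "domain": "Maintaining trust"},
    {"id": "Q06", "domain": "Communication"},
]

def assemble_test(bank, per_domain=1, seed=None):
    """Sample `per_domain` items from every domain in the blueprint."""
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for item in bank:
        by_domain[item["domain"]].append(item)
    test = []
    for domain in sorted(by_domain):
        items = by_domain[domain]
        test.extend(rng.sample(items, min(per_domain, len(items))))
    return test

print([item["id"] for item in assemble_test(ITEM_BANK, per_domain=1, seed=42)])
```

Because assembly iterates over domains rather than over the raw bank, no competency can be skipped, which is the sense in which such a blueprint keeps assessment content stable over time.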
12.
Med Teach ; 31(3): 223-9, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19288309

ABSTRACT

BACKGROUND: While all graduates from medical schools in the UK are granted the same licence to practise by the medical professional regulatory body, the General Medical Council, individual institutions set their own graduating examination systems. Previous studies have suggested that the equivalence of passing standards across different medical schools cannot be guaranteed. AIMS: To explore and formally document the graduating examinations used in UK medical schools and to evaluate whether it is possible to make plausible comparisons in relation to the standard of clinical competence of graduates. METHODS: A questionnaire survey of all the UK medical schools was conducted, asking for details of graduating examination systems, including the format and content of tests, testing time and standard-setting procedures. RESULTS: Graduating assessment systems vary widely across institutions in the UK, in terms of format, length, content and standard-setting procedures. CONCLUSIONS: We question whether it is possible to make plausible comparisons in relation to the equivalence of standards of graduates from the different UK medical schools, as current quality assurance systems do not allow for formal quantitative comparisons of the clinical competence of graduates from different schools. We suggest that national qualifying-level examinations should be considered in the UK.


Subjects
Clinical Competence/standards, Educational Measurement/standards, Medical Schools, Surveys and Questionnaires, United Kingdom
13.
Med Educ ; 41(11): 1024-31, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17973762

ABSTRACT

CONTEXT: Medical schools in the UK set their own graduating examinations and pass marks. In a previous study we examined the equivalence of passing standards using the Angoff standard-setting method. To address the limitation this imposed on that work, we undertook further research using a standard-setting method specifically designed for objective structured clinical examinations (OSCEs). METHODS: Six OSCE stations were incorporated into the graduating examinations of 3 of the medical schools that took part in the previous study. The borderline group method (BGM) or borderline regression method (BRM) was used to derive the pass marks for all stations in the OSCE (a worked sketch of the BRM follows this entry). We compared passing standards at the 3 schools. We also compared the results within the schools with their previously generated Angoff pass marks. RESULTS: The pass marks derived using the BGM or BRM were consistent across 2 of the 3 schools, whereas the third school generated pass marks that were (with a single exception) much lower. Within-school comparisons revealed that in 2 schools the pass marks generally did not differ significantly between methods, but in 1 school the Angoff mark was consistently and significantly lower than the BRM mark. DISCUSSION: The pass marks set using the BGM or BRM were more consistent across 2 of the 3 medical schools than pass marks set using the Angoff method. However, 1 medical school set significantly different pass marks from the other 2 schools. Although this study is small, we conclude that passing standards at different medical schools cannot be guaranteed to be equivalent.


Subjects
Clinical Competence/standards, Undergraduate Medical Education, Medical Schools, Medical Students/statistics & numerical data, United Kingdom
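For readers unfamiliar with the borderline regression method used in the study above, the sketch below shows the core calculation under common assumptions: station checklist scores are regressed on examiners' global ratings, and the predicted checklist score at the "borderline" rating becomes the station pass mark. The rating scale and data are illustrative.

```python
# Borderline regression method (BRM), minimal sketch: fit a linear
# regression of candidates' station checklist scores on examiners'
# global ratings, then take the predicted checklist score at the
# "borderline" rating as the station pass mark. Data are illustrative.
import numpy as np

def brm_pass_mark(checklist_scores, global_ratings, borderline_rating=2):
    """Return the predicted checklist score at the borderline rating.

    global_ratings uses an ordinal scale such as
    1 = fail, 2 = borderline, 3 = pass, 4 = good, 5 = excellent.
    """
    slope, intercept = np.polyfit(global_ratings, checklist_scores, deg=1)
    return intercept + slope * borderline_rating

# Six hypothetical candidates on one station (scores out of 100).
scores = [48, 55, 62, 70, 78, 90]
ratings = [1, 2, 2, 3, 4, 5]
print(round(brm_pass_mark(scores, ratings), 1))  # ~58.5
```

The borderline group method (BGM) mentioned alongside it is simpler still: the pass mark is the mean checklist score of the candidates who received a "borderline" global rating.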
14.
Emerg Med J ; 24(3): 180-4, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17351222

ABSTRACT

BACKGROUND: In our region, it was acknowledged that the process of assessment needed to be improved, but before developing a system for this, there was a need to define the "competent or satisfactory trainee". OBJECTIVE: To outline the process by which a consensus was achieved on this standard, and how a system for formally assessing competency across a wide range of knowledge, skills and attitudes was subsequently agreed on, thus enabling increased opportunities for training and feedback and improving the accuracy of assessment in the region. METHODS: The opinions of trainees and trainers from across the region were collated, and a consensus was achieved with regard to the minimum acceptable standard for a trainee in emergency medicine, thus defining a competent trainee. The group that set the standard then focused on identifying the assessment methods most appropriate for the evaluation of the knowledge, skills and attitudes required of an emergency medicine trainee. The tool was subsequently trialled for a period of 6 months, and opinion was evaluated by use of a questionnaire. RESULTS: The use of the tool was reviewed from both the trainers' and trainees' perspectives. There were 26 trainers and 26 trainees in the region; 42% (n = 11) of trainers and 31% (n = 8) of trainees responded to the questionnaire. Five trainees and nine trainers had used the tool. 93% (14/15) of respondents thought that the descriptors used to describe the satisfactory trainee were acceptable; 89% (8/9) of trainers thought that it helped them assess trainees more accurately; 60% (3/5) of trainees thought that, as a result, they had a better understanding of their weak areas. CONCLUSION: We believe that we achieved a consensus across our region as to what defined a satisfactory trainee and set the standard against which all our trainees would subsequently be evaluated. Use of the tool to assess trainees during the pilot period was disappointing; however, we were encouraged that most of those using the tool thought that it allowed an objective assessment of trainees and feedback on areas requiring further work. Those who used the tool identified important reasons that may have hindered its widespread use.


Subjects
Graduate Medical Education/standards, Educational Measurement/methods, Emergency Medicine/education, Attitude of Health Personnel, Clinical Competence, Educational Measurement/standards, Female, Humans, Male, Hospital Medical Staff/education, Pilot Projects
15.
Adv Health Sci Educ Theory Pract ; 11(2): 173-83, 2006 May.
Article in English | MEDLINE | ID: mdl-16729244

ABSTRACT

While Objective Structured Clinical Examinations (OSCEs) have become widely used to assess clinical competence at the end of undergraduate medical courses, the method of setting the passing score varies greatly, and there is no agreed best methodology. While there is an assumption that the passing standard at graduation is the same at all medical schools, there is very little quantitative evidence in the field. In the United Kingdom, there is no national licensing examination; each medical school sets its own graduating assessment, and successful completion by candidates leads to the licensed right to practise granted by the General Medical Council. Academics at five UK medical schools were asked to set passing scores for six OSCE stations using the Angoff method (see the sketch after this entry), following a briefing session on this technique. The results were collated and analysed. The passing scores set for each of the stations varied widely across the five medical schools. The implication is that a student with the same level of competency may pass at one medical school but fail at another, even when the test is identical. Postulated reasons for this difference include different conceptions of the minimal level of competence acceptable for graduating students and the possible unsuitability of the Angoff method for performance-based clinical tests.


Subjects
Educational Measurement/methods, Professional Competence/standards, Medical Schools, Humans, United Kingdom
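As context for the Angoff procedure examined in the entry above, here is a minimal sketch of the usual calculation: each judge estimates the probability that a minimally competent (borderline) candidate would succeed on each item, and the pass mark is the mean across judges of each judge's summed estimates. The judges and values shown are illustrative.

```python
# Angoff method, minimal sketch: judges estimate, per item, the probability
# that a borderline candidate would succeed; each judge's estimates are
# summed to an expected score, and the pass mark is the mean of those
# totals across judges. All values are illustrative.
import numpy as np

def angoff_pass_mark(judgements):
    """judgements: rows = judges, columns = items, entries = estimated
    probability (0-1) that a borderline candidate scores the item."""
    per_judge_totals = np.asarray(judgements).sum(axis=1)
    return per_judge_totals.mean()

# Three hypothetical judges rating a five-item station.
judges = [
    [0.6, 0.7, 0.5, 0.8, 0.4],
    [0.5, 0.6, 0.6, 0.7, 0.5],
    [0.7, 0.8, 0.4, 0.9, 0.5],
]
print(round(angoff_pass_mark(judges), 2))  # ~3.07 out of 5
```

Because the estimates are subjective, different panels can anchor "borderline" at different levels, which is consistent with the wide variation in pass marks reported in the study above.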