Results 1 - 20 of 54
1.
Med Teach ; 46(4): 471-485, 2024 04.
Article in English | MEDLINE | ID: mdl-38306211

ABSTRACT

Changes in digital technology, the increasing volume of data collection, and advances in methods have the potential to unleash the value of big data generated through the education of health professionals. Coupled with this potential are legitimate concerns about how data can be used or misused in ways that limit autonomy or equity, or that harm stakeholders. This consensus statement is intended to address these issues by foregrounding the ethical imperatives for engaging with big data as well as the potential risks and challenges. Recognizing the wide and ever-evolving scope of big data scholarship, we focus on foundational issues for framing and engaging in research. We ground our recommendations in the context of big data created through data sharing across and within the stages of the continuum of the education and training of health professionals. Ultimately, the goal of this statement is to support a culture of trust and quality for big data research to deliver on its promises for health professions education (HPE) and the health of society. Based on expert consensus and review of the literature, we report 19 recommendations covering (1) framing scholarship and research, (2) considering unique ethical practices, (3) governance of data sharing collaborations that engage stakeholders, (4) best practices for data sharing processes, (5) the importance of knowledge translation, and (6) advancing the quality of scholarship through multidisciplinary collaboration. The recommendations were modified and refined based on feedback from the 2022 Ottawa Conference attendees and subsequent public engagement. Adoption of these recommendations can help HPE scholars share data ethically and engage in high-impact big data scholarship, which in turn can help the field meet the ultimate goal: high-quality education that leads to high-quality healthcare.


Subjects
Big Data, Health Occupations, Information Dissemination, Humans, Health Occupations/education, Consensus
2.
Article in English | MEDLINE | ID: mdl-38010576

ABSTRACT

First impressions can influence rater-based judgments, but their contribution to rater bias is unclear. Research suggests raters can overcome first impressions in experimental exam contexts with explicit first impressions, but these findings may not generalize to a workplace context with implicit first impressions. The study had two aims: first, to assess whether first impressions affect raters' judgments when workplace performance changes; second, to assess whether explicitly stating these impressions affects subsequent ratings compared to implicitly formed first impressions. Physician raters viewed six videos in which learner performance either changed (Strong to Weak or Weak to Strong) or remained consistent. Raters were assigned to two groups. Group one (n = 23, Explicit) made a first impression global rating (FIGR), then scored learners using the Mini-CEX. Group two (n = 22, Implicit) scored learners at the end of the video solely with the Mini-CEX. For the Explicit group, in the Strong to Weak condition, the FIGR (M = 5.94) was higher than the Mini-CEX Global rating (GR) (M = 3.02, p < .001). In the Weak to Strong condition, the FIGR (M = 2.44) was lower than the Mini-CEX GR (M = 3.96, p < .001). There was no difference between the FIGR and the Mini-CEX GR in the consistent condition (M = 6.61 and M = 6.65, respectively, p = .84). There were no statistically significant differences in any of the conditions when comparing the two groups' Mini-CEX GRs. Therefore, raters adjusted their judgments based on the learners' performances. Furthermore, raters who made their first impressions explicit showed similar rater bias to raters who followed a more naturalistic process.

3.
Med Teach ; 45(9): 1054-1060, 2023 09.
Article in English | MEDLINE | ID: mdl-37262177

ABSTRACT

PURPOSE: The transition towards Competency-Based Medical Education at the Cumming School of Medicine was accelerated by the reduced clinical time caused by the COVID-19 pandemic. The purpose of this study was to define a standard protocol for setting Entrustable Professional Activity (EPA) achievement thresholds and to examine their feasibility within the clinical clerkship. METHODS: Achievement thresholds for each of the 12 AFMC EPAs for graduating Canadian medical students were set through sequential rounds of revision by three consecutive groups of stakeholders and evaluation experts. Structured communication was guided by a modified Delphi technique. The feasibility and consequences of these thresholds were then assessed by tracking EPA completion by the graduating class of 2021. RESULTS: The threshold-setting process resulted in EPA achievement levels ranging from 1 to 8 across the 12 AFMC EPAs. Estimates were stable after the first round for 9 of 12 EPAs. Overall, 96.27% of EPAs were successfully completed by clerkship students despite the shortened clinical period. Feasibility was predicted by the slowing rate of EPA accumulation over time during the clerkship. CONCLUSION: The process described led to consensus on EPA achievement thresholds. Successful completion of the assigned thresholds was feasible within the shortened clerkship.


Subjects
COVID-19, Internship and Residency, Medical Students, Humans, Pandemics, Canada, Clinical Competence, COVID-19/epidemiology, Competency-Based Education/methods
4.
Med Teach ; 44(6): 672-678, 2022 06.
Article in English | MEDLINE | ID: mdl-35021934

ABSTRACT

INTRODUCTION: As competency-based curricula receive increasing attention in postgraduate medical education, Entrustable Professional Activities (EPAs) are gaining popularity. The aim of this survey was to determine the use of EPAs in anesthesiology training programs across Europe and North America. METHODS: A survey was developed and distributed to anesthesiology residency training program directors in Switzerland, Germany, Austria, the Netherlands, the USA and Canada. A convergent-design mixed-methods approach was used to analyze both quantitative and qualitative data. RESULTS: The survey response rate was 38% (108 of 284). Seven percent of respondents used EPAs for making entrustment decisions. Fifty-three percent of institutions had not implemented any specific system to make such decisions. The majority of respondents agreed that EPAs should become an integral part of the training of residents in anesthesiology, as they are universal and easy to use. CONCLUSION: Although recommended by several national societies, EPAs are used in few anesthesiology training programs, and over half of responding programs have no specific system for making entrustment decisions. Even though several countries are adopting or planning to adopt EPAs, and national societies recommend EPAs as a framework for their competency-based programs, few programs yet use them to make "competence" decisions.


Subjects
Anesthesiology, Internship and Residency, Anesthesiology/education, Clinical Competence, Competency-Based Education/methods, Curriculum, Humans, Surveys and Questionnaires
5.
Adv Health Sci Educ Theory Pract ; 26(1): 199-214, 2021 03.
Article in English | MEDLINE | ID: mdl-32577927

ABSTRACT

Learner handover (LH), the process of sharing information about learners between faculty supervisors, allows for the longitudinal assessment fundamental to the competency-based education model. However, its potential to bias future assessments has been raised as a concern. The purpose of this study was to determine whether prior performance information such as LH influences the assessment of learners in the clinical context. Between December 2017 and June 2018, forty-two faculty members and final-year residents from the Department of Medicine at the University of Ottawa were assigned to one of three study groups through quasi-randomisation, taking into account gender, speciality and rater experience. In a counter-balanced design, each group received either positive, negative or no LH prior to watching six simulated learner-patient encounter videos. Participants rated each video using the mini-CEX and completed a questionnaire on their general impressions of LH. A significant difference in the mean mini-CEX competency scale scores between the negative (M = 5.29) and positive (M = 5.97) LH groups (p < .001, d = 0.81) was noted. Similar findings were found for the single overall clinical competence ratings. In the post-study questionnaire, 22/28 (78%) of participants had correctly deduced the purpose of the study and 14/28 (50%) felt LH did not influence their assessment. LH influenced mini-CEX scores despite raters' awareness of the potential for bias. These results suggest that LH could influence a rater's performance assessment, and careful consideration of the potential implications of LH is required.
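For readers less familiar with the statistics reported above, the sketch below shows how a two-group mean comparison and the accompanying Cohen's d effect size are typically computed. This is a minimal illustration with made-up scores, not the study's data or analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical mini-CEX competency ratings for two handover groups
# (placeholder values for illustration; not the study's data).
negative_lh = np.array([5.1, 5.4, 4.9, 5.5, 5.3, 5.2, 5.6, 5.0])
positive_lh = np.array([6.0, 5.8, 6.2, 5.9, 6.1, 5.7, 6.3, 5.8])

# Independent-samples t-test for the difference in group means.
t_stat, p_value = stats.ttest_ind(negative_lh, positive_lh)

# Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(negative_lh), len(positive_lh)
pooled_sd = np.sqrt(((n1 - 1) * negative_lh.var(ddof=1)
                     + (n2 - 1) * positive_lh.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (positive_lh.mean() - negative_lh.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```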


Subjects
Clinical Competence/standards, Educational Measurement/standards, Internship and Residency/organization & administration, Observer Variation, Adult, Canada, Competency-Based Education, Educational Measurement/methods, Female, Humans, Internship and Residency/standards, Male, Middle Aged, Sex Factors
6.
Adv Health Sci Educ Theory Pract ; 26(3): 1133-1156, 2021 08.
Article in English | MEDLINE | ID: mdl-33566199

ABSTRACT

Understanding which factors can impact rater judgments in assessments is important to ensure quality ratings. One such factor is whether prior performance information (PPI) about learners influences subsequent decision making. The information can be acquired directly, when the rater sees the same learner or different learners over multiple performances, or indirectly, when the rater is provided with external information about the same learner prior to rating a performance (i.e., learner handover). The purpose of this narrative review was to summarize and highlight key concepts from multiple disciplines regarding the influence of PPI on subsequent ratings, discuss implications for assessment and provide a common conceptualization to inform research. Key findings include: (a) assimilation (rater judgments are biased towards the PPI) occurs with indirect PPI, and contrast (rater judgments are biased away from the PPI) with direct PPI; (b) negative PPI appears to have a greater effect than positive PPI; (c) when viewing multiple performances, context effects of indirect PPI appear to diminish over time; and (d) context effects may occur with any level of target performance. Furthermore, some raters are not susceptible to context effects, but it is unclear which factors are predictive. Rater expertise and training do not consistently reduce these effects. Making raters more accountable, providing specific standards and reducing rater cognitive load may reduce context effects. Theoretical explanations for these findings are also discussed.


Subjects
Clinical Competence, Educational Measurement, Humans, Judgment, Observer Variation, Research Personnel
7.
Med Teach ; 43(7): 737-744, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33989100

ABSTRACT

With the rapid uptake of entrustable professional activities and entrustment decision-making as an approach in undergraduate and graduate education in medicine and other health professions, there is a risk of confusion in the use of new terminologies. The authors seek to clarify the use of many words related to the concept of entrustment, based on the existing literature, with the aim of establishing logical consistency in their use. The list of proposed definitions includes independence, autonomy, supervision, unsupervised practice, oversight, general and task-specific trustworthiness, trust, entrust(ment), entrustable professional activity, entrustment decision, entrustability, entrustment-supervision scale, retrospective and prospective entrustment-supervision scales, and entrustment-based discussion. The authors conclude that a shared understanding of the language around entrustment is critical to strengthen bridges among stages of training and practice, such as undergraduate medical education, graduate medical education, and continuing professional development. Shared language and understanding provide the foundation for consistency in interpretation and implementation across the educational continuum.


Subjects
Undergraduate Medical Education, Internship and Residency, Clinical Competence, Competency-Based Education, Graduate Medical Education, Prospective Studies, Retrospective Studies
8.
Med Teach ; 43(7): 780-787, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34020576

ABSTRACT

Health care revolves around trust. Patients are often in a position that gives them no other choice than to trust the people taking care of them. Educational programs thus have the responsibility to develop physicians who can be trusted to deliver safe and effective care, ultimately making a final decision to entrust trainees to graduate to unsupervised practice. Such entrustment decisions deserve to be scrutinized for their validity. The end-of-training decision is arguably the most important one, although earlier entrustment decisions, for smaller units of professional practice, warrant the same scrutiny. Validity of entrustment decisions implies a defensible argument that can be analyzed in components that together support the decision. According to Kane, building a validity argument is a process designed to support inferences of scoring, generalization across observations, extrapolation to new instances, and implications of the decision. A lack of validity can be caused by inadequate evidence in terms of, according to Messick, content, response process, internal structure (coherence) and relationship to other variables, and by misinterpreted consequences. These two leading frameworks (Kane and Messick) in educational and psychological testing can be well applied to summative entrustment decision-making. The authors elaborate on the types of questions that need to be answered to arrive at defensible, well-argued summative decisions regarding performance, to provide a grounding for high-quality, safe patient care.


Subjects
Internship and Residency, Physicians, Clinical Competence, Competency-Based Education, Decision Making, Humans, Trust
9.
Med Teach ; 41(5): 569-577, 2019 05.
Article in English | MEDLINE | ID: mdl-30299196

ABSTRACT

Despite the increased emphasis on workplace-based assessment in competency-based education models, there is still an important role for multiple-choice questions (MCQs) in the assessment of health professionals. The challenge, however, is to ensure that MCQs are developed in a way that allows educators to derive meaningful information about examinees' abilities. As educators' needs for high-quality test items have evolved, so has our approach to developing MCQs. This evolution has been reflected in a number of ways, including the use of different stimulus formats; the creation of novel response formats; the development of new approaches to problem conceptualization; and the incorporation of technology. The purpose of this narrative review is to provide the reader with an overview of how our understanding of the use of MCQs in the assessment of health professionals has evolved to better measure clinical reasoning and to improve both efficiency and item quality.


Subjects
Undergraduate Medical Education, Educational Measurement/methods, Cognition, Competency-Based Education, Computer-Assisted Instruction/methods, Humans
10.
Adv Health Sci Educ Theory Pract ; 23(4): 721-732, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29556923

ABSTRACT

There is an increasing focus on factors that influence the variability of rater-based judgments. First impressions are one such factor. First impressions are judgments about people that are made quickly and are based on little information. Under some circumstances, these judgments can be predictive of subsequent decisions. A concern for both examinees and test administrators is whether the relationship remains stable when the performance of the examinee changes. That is, once a first impression is formed, to what degree will an examiner be willing to modify it? The purpose of this study is to determine the degree to which first impressions influence final ratings when the performance of examinees changes within the context of an objective structured clinical examination (OSCE). Physician examiners (n = 29) viewed seven videos of examinees (i.e., actors) performing a physical exam on a single OSCE station. They rated the examinees' clinical abilities on a six-point global rating scale after 60 s (first impression or FIGR). They then observed the examinee for the remainder of the station and provided a final global rating (GRS). For three of the videos, the examinees' performance remained consistent throughout. For two videos, examinee performance changed from initially strong to weak, and for two videos, performance changed from initially weak to strong. The mean FIGR ratings for the Consistent condition (M = 4.80) and the Strong to Weak condition (M = 4.87) were higher than their respective GRS ratings (M = 3.93, M = 2.73), with a greater decline for the Strong to Weak condition. The mean FIGR rating for the Weak to Strong condition (M = 3.60) was lower than the corresponding mean GRS (M = 4.81). This pattern of findings suggests that raters were willing to change their judgments based on examinee performance. Future work should explore the impact of making a first impression judgment explicit versus implicit, and the role of context in the relationship between a first impression and a subsequent judgment.


Subjects
Clinical Competence/standards, Educational Measurement/methods, Educational Measurement/standards, Observer Variation, Adult, Female, Humans, Judgment, Male, Middle Aged, Socioeconomic Factors
11.
BMC Med Educ ; 18(1): 302, 2018 Dec 11.
Article in English | MEDLINE | ID: mdl-30537960

ABSTRACT

BACKGROUND: Physicians in training must achieve a high degree of proficiency in performing physical examinations and must strive to become experts in the field. Concerns are emerging about physicians' abilities to perform these basic skills, which are essential for clinical decision making. Learning at the bedside has the potential to support skill acquisition through deliberate practice. Previous skills improvement programs targeted at teaching physical examination have been successful at increasing the frequency of performing and teaching physical examinations. It remains unclear what barriers might persist after such programs are implemented. This study explores residents' and physicians' perceptions of physical examination teaching at the bedside following the implementation of a new structured bedside curriculum: what are the potentially persisting barriers and proposed solutions for improvement? METHODS: The study took a constructivist approach, using qualitative inductive thematic analysis oriented to constructing an understanding of the barriers to and facilitators of physical examination teaching in the context of a new bedside curriculum. Participants took part in individual interviews and subsequently in focus groups. Transcripts were coded and themes were identified. RESULTS: Data analyses yielded three main themes: (1) the culture of teaching physical examination at the bedside is shaped and threatened by the lack of hospital support, physicians' motivation and expertise, and residents' attitudes and dependence on technology; (2) the hospital environment makes bedside teaching difficult because of its chaotic nature, time constraints and conflicting responsibilities; and (3) structured physical examination curricula create missed opportunities by being restrictive and pose difficulties in identifying patients with findings. CONCLUSIONS: Despite the implementation of a structured bedside curriculum for physical examination teaching, our study suggests that cultural, environmental and curriculum-related barriers remain important issues to be addressed. Institutions wishing to develop and implement similar bedside curricula should prioritize the recruitment of expert clinical teachers, recognizing their time and efforts. Teaching should be delivered in a protected environment, away from clinical duties, and with patients with real findings. Physicians must value the teaching and learning of physical examination skills, with multiple hands-on opportunities for direct role modeling, coaching, observation and deliberate practice. Ideally, clinical teachers should master the art of combining patient care and educational activities.


Subjects
Clinical Competence/standards, Curriculum, Graduate Medical Education, Internship and Residency, Physical Examination/standards, Point-of-Care Testing/standards, Adult, Attitude of Health Personnel, Female, Focus Groups, Humans, Male, Qualitative Research
12.
Adv Health Sci Educ Theory Pract ; 22(4): 969-983, 2017 Oct.
Article in English | MEDLINE | ID: mdl-27848171

ABSTRACT

Competency-based assessment is placing increasing emphasis on the direct observation of learners. For this process to produce valid results, it is important that raters provide quality judgments that are accurate. Unfortunately, the quality of these judgments is variable, and the roles of the factors that influence their accuracy are not clearly understood. One such factor is first impressions: that is, judgments about people we do not know, made quickly and based on very little information. This study explores the influence of first impressions in an OSCE. Specifically, the purpose is to begin to examine the accuracy of a first impression and its influence on subsequent ratings. We created six videotapes of history-taking performances, each scripted from a real performance by one of six examinee residents within a single OSCE station. Each performance was re-enacted and videotaped, with six different actors playing the examinees and one actor playing the patient. A total of 23 raters (i.e., physician examiners) reviewed each video and were asked to make a global judgment of the examinee's clinical abilities after 60 s on a six-point global rating scale (First Impression GR), and then to rate their confidence in the accuracy of that judgment on a five-point scale (Confidence GR). After making these ratings, raters watched the remainder of the examinee's performance and made another global rating of performance (Final GR) before moving on to the next video. First impression ratings of ability varied across examinees and were moderately correlated with expert ratings (r = .59, 95% CI [-.13, .90]). There were significant differences between mean First Impression and Final ratings for three examinees. Correlations between the two ratings ranged from .05 to .56 but were significant for only three examinees. Raters' confidence in their first impression was not related to the likelihood of changing their rating between the first impression and the subsequent rating. The findings suggest that first impressions could play a role in explaining variability in judgments, but their importance was determined by the videotaped performance of the examinees. More work is needed to clarify the conditions that support or discourage the use of first impressions.


Subjects
Medical Education/methods, Educational Measurement/methods, Educational Measurement/standards, Medical Faculty/psychology, Clinical Competence/standards, Medical Education/standards, Medical Faculty/standards, Humans, Medical History Taking/standards, Observer Variation, Reproducibility of Results, Videotape Recording
13.
Med Teach ; 39(6): 609-616, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28598746

ABSTRACT

The meaningful assessment of competence is critical for the implementation of effective competency-based medical education (CBME). Timely ongoing assessments are needed along with comprehensive periodic reviews to ensure that trainees continue to progress. New approaches are needed to optimize the use of multiple assessors and assessments; to synthesize the data collected from multiple assessors and multiple types of assessments; to develop faculty competence in assessment; and to ensure that relationships between the givers and receivers of feedback are appropriate. This paper describes the core principles of assessment for learning and assessment of learning. It addresses several ways to ensure the effectiveness of assessment programs, including using the right combination of assessment methods and conducting careful assessor selection and training. It provides a reconceptualization of the role of psychometrics and articulates the importance of a group process in determining trainees' progress. In addition, it notes that, to reach its potential as a driver in trainee development, quality care, and patient safety, CBME requires effective information management and documentation as well as ongoing consideration of ways to improve the assessment system.


Subjects
Clinical Competence, Competency-Based Education, Medical Education/methods, Educational Measurement/methods, Learning, Medical Education/standards, Educational Measurement/standards, Feedback, Humans, Psychometrics
14.
Med Educ ; 50(1): 93-100, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26695469

ABSTRACT

CONTEXT: Competency-based medical education (CBME) is being adopted wholeheartedly by organisations worldwide in the hope of meeting today's expectations for training a competent doctor. But are we, as medical educators, fulfilling this promise? METHODS: The authors explore, through a personal viewpoint, the problems identified with CBME and the progress made through the development of milestones and entrustable professional activities (EPAs). RESULTS: Proponents of CBME have strong reasons to keep developing and supporting this broad movement in medical education. Critics, however, have legitimate reservations. The authors observe that the recent increase in use of milestones and EPAs can strengthen the purpose of CBME and counter some of the concerns voiced, if properly implemented. CONCLUSIONS: The authors conclude with suggestions for the future and how using EPAs could lead us one step closer to the goals of not only competency-based medical education but also competency-based medical practice.


Subjects
Clinical Competence, Competency-Based Education, Medical Education/methods, Educational Measurement, Internship and Residency
15.
Med Educ ; 50(3): 351-8, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26896020

ABSTRACT

CONTEXT: Progress tests, in which learners are repeatedly assessed on equivalent content at different times in their training and provided with feedback, would seem to lend themselves well to a competency-based framework, which requires more frequent formative assessments. The objective structured clinical examination (OSCE) progress test is a relatively new form of assessment that is used to assess the progression of clinical skills. The purpose of this study was to establish further evidence for the use of an OSCE progress test by demonstrating an association between scores from this assessment method and those from a national high-stakes examination. METHODS: Eight years of data from an Internal Medicine Residency OSCE (IM-OSCE) progress test were compared with scores on the Royal College of Physicians and Surgeons of Canada Comprehensive Objective Examination in Internal Medicine (RCPSC IM examination), which comprises both a written and a performance-based component (n = 180). Correlations between scores on the two examinations were calculated. Logistic regression analyses were performed comparing IM-OSCE progress test scores with an 'elevated risk of failure' on either component of the RCPSC IM examination. RESULTS: Correlations between scores from the IM-OSCE (PGY-1 to PGY-4 residents) and those from the RCPSC IM examination ranged from 0.316 (p = 0.001) to 0.554 (p < 0.001) for the performance-based component and from 0.305 (p = 0.002) to 0.516 (p < 0.001) for the written component. Logistic regression models demonstrated that PGY-2 and PGY-4 scores from the IM-OSCE were predictive of an 'elevated risk of failure' on both components of the RCPSC IM examination. CONCLUSIONS: This study provides further evidence for the use of OSCE progress testing by demonstrating a correlation between scores from an OSCE progress test and a national high-stakes examination. Furthermore, there is evidence that OSCE progress test scores are predictive of future performance on a national high-stakes examination.
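As a rough illustration of the logistic regression analysis described above, the sketch below models a binary 'elevated risk of failure' outcome as a function of progress-test scores. The data are simulated and the variable names are hypothetical; this is not the study's dataset or code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated IM-OSCE progress-test scores on a scaled metric
# (mean 500, SD 100) for 180 hypothetical residents.
scores = rng.normal(500, 100, size=180)

# Simulated binary outcome: lower scores carry a higher
# probability of an 'elevated risk of failure' flag.
prob_risk = 1 / (1 + np.exp((scores - 450) / 40))
risk = (rng.random(180) < prob_risk).astype(int)

# Logistic regression of failure risk on progress-test score.
X = sm.add_constant(scores)
result = sm.Logit(risk, X).fit(disp=False)
print(result.summary())
```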


Subjects
Clinical Competence/standards, Educational Measurement/methods, Internship and Residency/standards, Medical Licensure, Canada, Internal Medicine/education
16.
Med Teach ; 38(2): 168-73, 2016.
Article in English | MEDLINE | ID: mdl-25909896

ABSTRACT

PURPOSE: The purpose of this study was to explore the use of an objective structured clinical examination for Internal Medicine residents (IM-OSCE) as a progress test for clinical skills. METHODS: Data from eight administrations of an IM-OSCE were analyzed retrospectively. Scores were scaled to a mean of 500 and a standard deviation (SD) of 100. A time-based comparison, treating post-graduate year (PGY) as a repeated-measures factor, was used to determine how residents' performance progressed over time. RESULTS: Residents' total IM-OSCE scores (n = 244) increased over training from a mean of 445 (SD = 84) in PGY-1 to 534 (SD = 71) in PGY-3 (p < 0.001). In an analysis of sub-scores including only residents who participated in the IM-OSCE in all three years of training (n = 46), mean structured oral scores increased from 464 (SD = 92) to 533 (SD = 83) (p < 0.001), physical examination scores increased from 464 (SD = 82) to 520 (SD = 75) (p < 0.001), and procedural skills scores increased from 495 (SD = 99) to 555 (SD = 67) (p = 0.033). There was no significant change in communication scores (p = 0.97). CONCLUSIONS: The IM-OSCE can be used to demonstrate the progression of clinical skills throughout residency training. Although most of the clinical skills assessed improved as residents progressed through their training, communication skills did not appear to change.
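The rescaling described in the methods (mean 500, SD 100) is a standard linear transformation of standardized scores. A minimal sketch, using placeholder raw scores rather than study data:

```python
import numpy as np

# Placeholder raw OSCE totals (illustrative values only).
raw_scores = np.array([62.0, 71.5, 55.0, 80.0, 68.5, 74.0])

# Convert to z-scores, then rescale to mean 500 and SD 100.
z_scores = (raw_scores - raw_scores.mean()) / raw_scores.std(ddof=1)
scaled_scores = 500 + 100 * z_scores

print(scaled_scores.round(1))
```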


Subjects
Clinical Competence/standards, Educational Measurement/methods, Internal Medicine/education, Internship and Residency, Humans, Ontario, Retrospective Studies
17.
Med Teach ; 38(8): 838-43, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26998566

ABSTRACT

With the recent interest in competency-based education, educators are being challenged to develop more assessment opportunities. As such, there is increased demand for exam content development, which can be a very labor-intensive process. An innovative solution to this challenge has been the use of automatic item generation (AIG) to develop multiple-choice questions (MCQs). In AIG, computer technology is used to generate test items from cognitive models (i.e. representations of the knowledge and skills that are required to solve a problem). The main advantage yielded by AIG is the efficiency in generating items. Although the technology for AIG relies on a linear programming approach, the same principles can also be used to improve traditional committee-based processes for developing MCQs. Using this approach, content experts deconstruct their clinical reasoning process to develop a cognitive model which, in turn, is used to create MCQs. This approach is appealing because it: (1) is efficient; (2) has been shown to produce items with psychometric properties comparable to those generated using a traditional approach; and (3) can be used to assess higher order skills (i.e. application of knowledge). The purpose of this article is to provide a novel framework for the development of high-quality MCQs using cognitive models.
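To make the idea of generating items from a cognitive model concrete, here is a toy sketch in the spirit of template-based AIG: a stem template whose slots are filled from a small model of problem features. The clinical content, template wording, and structure are invented for illustration and are not taken from the article or from any AIG software.

```python
from itertools import product

# Toy 'cognitive model': problem features that can be substituted
# into an item template. All content is invented for illustration.
ages = [34, 67]
presentations = {
    "painless jaundice and weight loss": "pancreatic cancer",
    "jaundice with fever and right upper quadrant pain": "ascending cholangitis",
    "jaundice after starting a new medication": "drug-induced liver injury",
}

stem_template = ("A {age}-year-old patient presents with {presentation}. "
                 "What is the most likely diagnosis?")

# Generate one MCQ per combination of feature values; the other
# diagnoses in the model serve as plausible distractors.
items = []
for age, (presentation, key) in product(ages, presentations.items()):
    distractors = [dx for dx in presentations.values() if dx != key]
    items.append({
        "stem": stem_template.format(age=age, presentation=presentation),
        "key": key,
        "distractors": distractors,
    })

print(f"Generated {len(items)} items")
print(items[0]["stem"])
print("Key:", items[0]["key"], "| Distractors:", items[0]["distractors"])
```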


Subjects
Undergraduate Medical Education, Educational Measurement/methods, Educational Measurement/standards, Psychological Models, Competency-Based Education, Humans
18.
Teach Learn Med ; 28(4): 385-394, 2016.
Article in English | MEDLINE | ID: mdl-27285377

ABSTRACT

Construct: This article describes the development of, and validity evidence behind, a new rating scale to assess feedback quality in the clinical workplace. BACKGROUND: Competency-based medical education has mandated a shift to learner-centeredness, authentic observation, and frequent formative assessments with a focus on the delivery of effective feedback. Because feedback has been shown to be of variable quality and effectiveness, an assessment of feedback quality in the workplace is important to ensure we are providing trainees with optimal learning opportunities. The purposes of this project were to develop a rating scale for the quality of verbal feedback in the workplace (the Direct Observation of Clinical Skills Feedback Scale [DOCS-FBS]) and to gather validity evidence for its use. APPROACH: Two panels of experts (local and national) took part in a nominal group technique to identify features of high-quality feedback. Through multiple iterations and reviews, 9 features were developed into the DOCS-FBS. Four rater types (residents n = 21, medical students n = 8, faculty n = 12, and educators n = 12) used the DOCS-FBS to rate videotaped feedback encounters of variable quality. The psychometric properties of the scale were determined using a generalizability analysis. Participants also completed a survey, rating on a 5-point Likert scale the ease of use, clarity, knowledge acquisition, and acceptability of the scale. RESULTS: Mean video ratings ranged from 1.38 to 2.96 out of 3 and followed the intended pattern, suggesting that the tool allowed raters to distinguish between examples of higher and lower quality feedback. There were no significant differences between rater types (range = 2.36-2.49), suggesting that all groups of raters used the tool in the same way. The generalizability coefficients for the scale ranged from 0.97 to 0.99. Item-total correlations were all above 0.80, suggesting some redundancy in items. Participants found the scale easy to use (M = 4.31/5) and clear (M = 4.23/5), and most would recommend its use (M = 4.15/5). Use of the DOCS-FBS was acceptable to both trainees (M = 4.34/5) and supervisors (M = 4.22/5). CONCLUSIONS: The DOCS-FBS can reliably differentiate between feedback encounters of higher and lower quality. The scale has been shown to have excellent internal consistency. We foresee the DOCS-FBS being used to provide objective evidence, through formal assessment of feedback quality, that faculty development efforts aimed at improving feedback skills can yield results.


Subjects
Competency-Based Education, Graduate Medical Education, Feedback, Clinical Competence, Humans, Medical Students
19.
Teach Learn Med ; 28(1): 52-60, 2016.
Article in English | MEDLINE | ID: mdl-26787085

ABSTRACT

THEORY: The move to competency-based education has heightened the importance of direct observation of clinical skills and effective feedback. The Objective Structured Clinical Examination (OSCE) is widely used for assessment and affords an opportunity for both direct observation and feedback to occur simultaneously. For feedback to be effective, it should include direct observation, assessment of performance, provision of feedback, reflection, decision making, and the use of feedback for learning and change. HYPOTHESES: If one of the goals of feedback is to engage students in thinking about their performance (i.e., reflection), it would seem imperative that they can recall this feedback both immediately and into the future. This study explores the recall of feedback in the context of an OSCE. Specifically, the purpose of this study was to (a) determine the amount and the accuracy of feedback that trainees remember immediately after an OSCE, as well as 1 month later, and (b) assess whether prompting immediate recall improved delayed recall. METHODS: Internal medicine residents received 2 minutes of verbal feedback from physician examiners in the context of an OSCE. The feedback was audio-recorded and later transcribed. Residents were randomly allocated to the immediate recall group (immediate-RG; n = 10) or the delayed recall group (delayed-RG; n = 8). The immediate-RG completed a questionnaire prompting recall of the feedback received immediately after the OSCE, and then again 1 month later. The delayed-RG completed a questionnaire only 1 month after the OSCE. The total number and accuracy of feedback points provided by examiners were compared to the points recalled by residents. Recall at 1 month was also compared between the immediate-RG and the delayed-RG. RESULTS: Physician examiners provided considerably more feedback points (M = 16.3) than the residents recalled immediately after the OSCE (M = 2.61, p < .001). There was no significant difference between the number of feedback points recalled upon completion of the OSCE (M = 2.61) and 1 month later (M = 1.96, p = .06, Cohen's d = .70). Prompting immediate recall did not improve later recall. The mean accuracy score for feedback recall immediately after the OSCE was 4.3/9, or "somewhat representative"; at 1 month the score dropped to 3.5/9, or "not representative" (ns). CONCLUSION: Residents recall very few feedback points immediately after the OSCE and 1 month later. The feedback points that are recalled are neither very accurate nor representative of the feedback actually provided.


Subjects
Feedback, Internal Medicine/education, Internship and Residency, Mental Recall, Educational Measurement, Humans, Physicians, Surveys and Questionnaires, Tape Recording
20.
Teach Learn Med ; 28(2): 166-73, 2016.
Article in English | MEDLINE | ID: mdl-26849247

ABSTRACT

CONSTRUCT: Automatic item generation (AIG) is an alternative method for producing large numbers of test items that integrates cognitive modeling with computer technology to systematically generate multiple-choice questions (MCQs). The purpose of our study is to describe and validate a method of generating plausible but incorrect distractors. Initial applications of AIG demonstrated its effectiveness in producing test items. However, expert review of the initial items identified a key limitation: the generation of implausible incorrect options, or distractors, might limit the applicability of items in real testing situations. BACKGROUND: Medical educators require the development of test items in large quantities to facilitate the continual assessment of student knowledge. Traditional item development processes are time-consuming and resource intensive. Studies have validated the quality of generated items through content expert review. However, no study has yet documented how generated items perform in a test administration, and no study has yet validated AIG through student responses to generated test items. APPROACH: To validate our refined AIG method for generating plausible distractors, we collected psychometric evidence from a field test of the generated test items. A three-step process was used to generate test items in the area of jaundice. At least 455 Canadian and international medical graduates responded to each of the 13 generated items embedded in a high-stakes exam administration. Item difficulty, discrimination, and index of discrimination estimates were calculated for the correct option as well as for each distractor. RESULTS: Item analysis results for the correct options suggest that the generated items measured candidate performance across a range of ability levels while providing a consistent level of discrimination for each item. Results for the distractors reveal that the generated items differentiated the low-performing from the high-performing candidates. CONCLUSIONS: Previous research on AIG highlighted how this item development method can be used to produce high-quality stems and correct options for MCQ exams. The purpose of the current study was to describe, illustrate, and evaluate a method for modeling plausible but incorrect options. The evidence provided in this study demonstrates that AIG can produce psychometrically sound test items. More importantly, by adapting the distractors to match the unique features presented in the stem and correct option, the generation of MCQs using an automated procedure has the potential to produce plausible distractors and yield large numbers of high-quality items for medical education.
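The classical item-analysis statistics named above have standard definitions: difficulty is the proportion of candidates answering an item correctly, and discrimination is often computed as a point-biserial correlation between the item score and the rest of the test. A minimal sketch on a simulated 0/1 response matrix (hypothetical data, not the study's responses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 0/1 response matrix: 455 candidates x 13 items
# (placeholder data for illustration, not the study's responses).
responses = (rng.random((455, 13)) < 0.7).astype(float)
totals = responses.sum(axis=1)

for j in range(responses.shape[1]):
    item = responses[:, j]
    # Difficulty (classical p-value): proportion answering correctly.
    difficulty = item.mean()
    # Discrimination: point-biserial correlation of the item score
    # with the rest-of-test total (item removed to avoid inflation).
    rest_total = totals - item
    discrimination = np.corrcoef(item, rest_total)[0, 1]
    print(f"Item {j + 1:2d}: p = {difficulty:.2f}, r_pb = {discrimination:.2f}")
```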


Subjects
Computer-Assisted Instruction/methods, Undergraduate Medical Education/methods, Educational Measurement/methods, Quality Improvement, Automation, Humans, Jaundice/diagnosis, Jaundice/therapy, Educational Models, Psychometrics