Results 1 - 20 of 80
1.
BMC Med Res Methodol ; 20(1): 293, 2020 12 03.
Article in English | MEDLINE | ID: mdl-33267819

ABSTRACT

BACKGROUND: Scores on an outcome measurement instrument depend on the type and settings of the instrument used, how instructions are given to patients, how professionals administer and score the instrument, and so on. The impact of all these sources of variation on scores can be assessed in studies of reliability and measurement error, if properly designed and analyzed. The aim of this study was to develop standards to assess the quality of studies on reliability and measurement error of clinician-reported outcome measurement instruments, performance-based outcome measurement instruments, and laboratory values. METHODS: We conducted a 3-round Delphi study involving 52 panelists. RESULTS: Consensus was reached on how a comprehensive research question can be deduced from the design of a reliability study to determine how the results of a study inform us about the quality of the outcome measurement instrument at issue. Consensus was reached on the components of outcome measurement instruments, i.e. the potential sources of variation. Next, we reached consensus on standards for design requirements (n = 5), standards for preferred statistical methods for reliability (n = 3) and measurement error (n = 2), and their ratings on a four-point scale. There was one term for a component and one rating of one standard on which no consensus was reached; these therefore required a decision by the steering committee. CONCLUSION: We developed a tool that enables researchers with and without thorough knowledge of measurement properties to assess the quality of a study on reliability and measurement error of outcome measurement instruments.


Subjects
Delphi Technique, Bias, Consensus, Humans, Reproducibility of Results
2.
Med Teach ; 42(2): 213-220, 2020 02.
Article in English | MEDLINE | ID: mdl-31622126

ABSTRACT

Introduction: Programmatic assessment (PA) is an approach to assessment aimed at optimizing learning which continues to gain educational momentum. However, the theoretical underpinnings of PA have not been clearly described. An explanation of the theoretical underpinnings of PA will allow educators to gain a better understanding of this approach and, perhaps, facilitate its use and effective implementation. The purpose of this article is twofold: first, to describe salient theoretical perspectives on PA; second, to examine how theory may help educators to develop effective PA programs, helping to overcome challenges around PA. Results: We outline a number of learning theories that underpin key educational principles of PA: constructivist and social constructivist theory supporting meaning making and longitudinality; cognitivist and cognitive development orientation scaffolding the practice of a continuous feedback process; theory of instructional design underpinning assessment as learning; and self-determination theory (SDT), self-regulated learning theory (SRL), and principles of deliberate practice providing theoretical tenets for student agency and accountability. Conclusion: The construction of a plausible and coherent link between key educational principles of PA and learning theories should enable educators to pose new and important inquiries, reflect on their assessment practices, and help overcome future challenges in the development and implementation of PA in their programs.


Subjects
Educational Measurement, Formative Feedback, Learning, Cognition, Humans, Students
3.
Eur J Dent Educ ; 22 Suppl 1: 21-27, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29601682

ABSTRACT

Assessments are widely used in dental education to record the academic progress of students and ultimately determine whether they are ready to begin independent dental practice. Whilst some would consider this a "rite of passage" of learning, the concept of assessment in education is being challenged to allow the evolution of "assessment for learning." This serves as an economical use of learning resources whilst allowing our learners to prove their knowledge and skills and demonstrate competence. The Association for Dental Education in Europe and the American Dental Education Association held a joint international meeting in London in May 2017, allowing experts in dental education to come together for the purposes of Shaping the Future of Dental Education. Assessment in a Global Context was one topic in which international leaders could discuss different methods of assessment, identifying the positives and the pitfalls, and critiquing methods of implementation, to determine the optimum assessment for a learner studying to be a healthcare professional. A post-workshop survey identified that educators were thinking differently about assessment: instead of working as individuals providing isolated assessments, the general consensus was that a longitudinally orientated, systematic and programmatic approach to assessment provides greater reliability and improves the ability to demonstrate learning.


Subjects
Education, Dental/standards, Educational Measurement, International Cooperation, Clinical Competence/standards, Congresses as Topic, Education, Education, Dental/methods, Education, Dental/trends, Educational Measurement/methods, Educational Measurement/standards, Forecasting, Humans
4.
Med Teach ; 39(11): 1174-1181, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28784026

ABSTRACT

BACKGROUND: In clerkships, students are expected to self-regulate their learning. How clinical departments and their routine approaches during clerkships influence students' self-regulated learning (SRL) is unknown. AIM: This study explores how characteristic routines of clinical departments influence medical students' SRL. METHODS: Six focus groups including 39 purposively sampled participants from one Dutch university were organized to study, from a constructivist paradigm using grounded theory methodology, how characteristic routines of clinical departments influenced medical students' SRL. The focus groups were audio recorded, transcribed verbatim and analyzed iteratively using constant comparison and open, axial and interpretive coding. RESULTS: Students described that clinical departments influenced their SRL through routines which affected the professional relationships they could engage in and their perception of a department's invested effort in them. Students' SRL in a clerkship can be supported by enabling them to engage others in their SRL and by having them feel that effort is invested in their learning. CONCLUSIONS: Our study gives practical insight into how clinical departments influenced students' SRL. Clinical departments can affect students' motivation to engage in SRL, the variety of SRL strategies that students can use, and how meaningful students perceive their SRL experiences to be.


Subjects
Clinical Clerkship/organization & administration, Self-Control/psychology, Students, Medical/psychology, Workplace/psychology, Adult, Clinical Competence, Cooperative Behavior, Environment, Female, Focus Groups, Grounded Theory, Humans, Interpersonal Relations, Learning, Male, Motivation, Netherlands, Patient Care Team/organization & administration, Self Efficacy, Young Adult
5.
Med Teach ; 37(7): 641-646, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25410481

ABSTRACT

Programmatic assessment is an integral approach to the design of an assessment program with the intent to optimise its learning function, its decision-making function and its curriculum quality-assurance function. Individual methods of assessment, purposefully chosen for their alignment with the curriculum outcomes and their information value for the learner, the teacher and the organisation, are seen as individual data points. The information value of these individual data points is maximised by giving feedback to the learner. There is a decoupling of assessment moment and decision moment. Intermediate and high-stakes decisions are based on multiple data points after a meaningful aggregation of information and supported by rigorous organisational procedures to ensure their dependability. Self-regulation of learning, through analysis of the assessment information and the attainment of the ensuing learning goals, is scaffolded by a mentoring system. Programmatic assessment-for-learning can be applied to any part of the training continuum, provided that the underlying learning conception is constructivist. This paper provides concrete recommendations for implementation of programmatic assessment.

6.
BMC Med Educ ; 15: 237, 2015 Dec 30.
Article in English | MEDLINE | ID: mdl-26715145

ABSTRACT

BACKGROUND: Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to enhance that validity evidence by evaluating the internal validity and reliability of competency constructs from supervisors' end-of-term summative assessments of prevocational medical trainees. METHODS: The populations were medical trainees preparing for full registration as a medical practitioner (n = 74) and supervisors who undertook ≥2 end-of-term summative assessments (n = 349), from a single institution. Confirmatory factor analysis was used to evaluate the internal construct validity of the assessment. The hypothesised competency construct model to be tested, identified by exploratory factor analysis, had a theoretical basis established in the workplace-psychology literature. Comparisons were made with competing models of potential competency constructs, including the competency construct model of the original assessment. The optimal model for the competency constructs was identified using model fit and measurement invariance analysis. Construct homogeneity was assessed by Cronbach's α. Reliability measures were the variance components of individual competency items and of the identified competency constructs, and the number of assessments needed to achieve adequate reliability of R > 0.80. RESULTS: The hypothesised competency constructs of "general professional job performance", "clinical skills" and "professional abilities" provide a good model fit to the data, and a better fit than all alternative models (model fit indices: χ2/df = 2.8; RMSEA = 0.073, CI 0.057-0.088; CFI = 0.93; TLI = 0.95; SRMR = 0.039; WRMR = 0.93; AIC = 3879; BIC = 4018). The optimal model had adequate measurement invariance, with nested analysis of important population subgroups supporting the presence of full metric invariance. Reliability estimates for the competency construct "general professional job performance" indicated a resource-efficient and reliable assessment for such a construct (6 assessments for R > 0.80). Item homogeneity was good (Cronbach's α = 0.899). The other competency constructs are resource-intensive, requiring ≥11 assessments for a reliable assessment score. CONCLUSION: Internal validity and reliability of clinical competence assessments using judgement-based methods are acceptable when the actual competency constructs used by assessors are adequately identified. Validation of the interpretation and use of supervisors' assessments in local training schemes is feasible using standard methods for gathering validity evidence.
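The item-homogeneity figure quoted in this abstract (Cronbach's α = 0.899) can be reproduced from raw item scores in a few lines. A minimal sketch, using hypothetical ratings rather than the study's data:

```python
# Cronbach's alpha: internal consistency of a set of items that are
# meant to measure one construct. The ratings below are hypothetical
# illustration data, not the study's (which reported alpha = 0.899).
def cronbach_alpha(scores):
    """scores: one row per ratee, one column per competency item."""
    n_items = len(scores[0])

    def variance(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(variance([row[i] for row in scores]) for i in range(n_items))
    total_var = variance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - item_vars / total_var)

ratings = [
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 3],
]
print(round(cronbach_alpha(ratings), 3))  # 0.964 for this toy sample
```

Values around 0.9 or above, as reported for the "general professional job performance" construct, indicate highly homogeneous items.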


Subjects
Clinical Competence/standards, Educational Measurement/standards, Medical Staff, Hospital/standards, Administrative Personnel/standards, Australia, Certification/standards, Educational Measurement/methods, Factor Analysis, Female, Humans, Judgment, Male, Psychometrics, Reproducibility of Results
7.
BMC Med Educ ; 15: 140, 2015 Aug 26.
Article in English | MEDLINE | ID: mdl-26306762

ABSTRACT

BACKGROUND: Problem-based learning (PBL) is a powerful learning activity, but fidelity to intended models may slip and student engagement wane, negatively impacting learning processes and outcomes. One potential solution to this degradation is to encourage self-assessment in the PBL tutorial. Self-assessment is a central component of the self-regulation of student learning behaviours. There are few measures to investigate self-assessment relevant to PBL processes. We developed a Self-assessment Scale on Active Learning and Critical Thinking (SSACT) to address this gap. We wished to demonstrate evidence of its validity in the context of PBL by exploring its internal structure. METHODS: We used a mixed-methods approach to scale development. We developed scale items from a qualitative investigation, a literature review, and consideration of existing tools previously used to study the PBL process. Expert review panels evaluated its content; a process of validation subsequently reduced the pool of items. We used structural equation modelling to undertake a confirmatory factor analysis (CFA) of the SSACT and calculated coefficient alpha. RESULTS: The 14-item SSACT consisted of two domains, "active learning" and "critical thinking." The factorial validity of the SSACT was evidenced by all items loading significantly on their expected factors, a good model fit for the data, and good stability across two independent samples. Each subscale had good internal reliability (>0.8) and the subscales were strongly correlated with each other. CONCLUSIONS: The SSACT has sufficient evidence of its validity to support its use in the PBL process to encourage students to self-assess. The implementation of the SSACT may assist students to improve the quality of their learning in achieving PBL goals such as critical thinking and self-directed learning.


Subjects
Educational Measurement/methods, Problem-Based Learning/methods, Students, Medical/psychology, Humans, Learning, Reproducibility of Results, Self-Assessment, Thinking
8.
Med Teach ; 36(7): 602-7, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24787531

ABSTRACT

BACKGROUND: The development of reflective learning skills is a continuous process that needs scaffolding. It can be described as a continuum, with the focus of reflection differing in granularity from recent, concrete activities to global competency development. AIM: To explore learners' perceptions regarding the effects of two reflective writing activities designed to stimulate reflection at different degrees of granularity during clinical training. METHODS: In total, 142 respondents (students and recent graduates) completed a questionnaire. Quantitative and qualitative data were triangulated. RESULTS: Immediate reflection-on-action was perceived to be more valuable than delayed reflection-on-competency-development because it facilitated day-to-day improvement. Delayed reflection was perceived to facilitate overall self-assessment, self-confidence and continuous improvement, but this perception was mainly found among graduates. Detailed reflection immediately after a challenging learning experience and broad reflection on progress appeared to serve different learning goals and consequently require different arrangements regarding feedback and timing. CONCLUSIONS: Granularity of focus has consequences for scaffolding reflective learning, with immediate reflection on concrete events and reflection on long-term progress requiring different approaches. Learners appeared to prefer immediate reflection-on-action.


Subjects
Clinical Competence/standards, Midwifery/education, Problem-Based Learning/standards, Self-Assessment, Students, Health Occupations/psychology, Belgium, Humans, Problem-Based Learning/methods, Program Evaluation, Surveys and Questionnaires, Time Factors
9.
Adv Health Sci Educ Theory Pract ; 18(4): 701-25, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23053869

ABSTRACT

Supervisor assessments are critical for both formative and summative assessment in the workplace. Supervisor ratings remain an important source of such assessment in many educational jurisdictions even though there is ambiguity about their validity and reliability. The aims of this evaluation are to explore: (1) the construct validity of ward-based supervisor competency assessments; (2) the reliability of supervisors for observing any overarching domain constructs identified (factors); (3) the stability of factors across subgroups of contexts, supervisors and trainees; and (4) the position of the observations compared to the established literature. The evaluated assessments were all those used to judge intern (trainee) suitability to become an unconditionally registered medical practitioner in the Australian Capital Territory, Australia, in 2007-2008. Initial construct identification was by traditional exploratory factor analysis (EFA) using principal component analysis with varimax rotation. Factor stability was explored by EFA of subgroups in different contexts, such as hospital type, and of different types of supervisors and trainees. The unit of analysis was each assessment, and all available assessments were included without aggregation of any scores to obtain the factors. Reliability of the identified constructs was assessed by variance components analysis of the summed trainee scores for each factor and the number of assessments needed to provide an acceptably reliable assessment using the construct, the reliability unit of analysis being the score for each factor for every assessment. For the 374 assessments from 74 trainees and 73 supervisors, the EFA resulted in 3 factors identified from the scree plot, accounting for only 68% of the variance, with factor 1 having features of a "general professional job performance" competency (eigenvalue 7.630; variance 54.5%); factor 2, "clinical skills" (eigenvalue 1.036; variance 7.4%); and factor 3, a "professional and personal" competency (eigenvalue 0.867; variance 6.2%). The percentages of trainee score variance for the summed competency item scores for factors 1, 2 and 3 were 40.4, 27.4 and 22.9% respectively. The numbers of assessments needed to give a reliability coefficient of 0.80 were 6, 11 and 13 respectively. The factor structure remained stable for subgroups of female trainees, Australian graduate trainees, the central hospital, surgeons, staff specialists, visiting medical officers and the separation into single years. Physicians as supervisors, male trainees, and male supervisors all had a different grouping of items within 3 factors, which all had competency items that collapsed into the predefined "face value" constructs of competence. These observations add new insights compared to the established literature. For this setting, most supervisors appear to be assessing a dominant construct domain which is similar to a general professional job performance competency. This global construct consists of individual competency items that supervisors spontaneously align, and it has acceptable assessment reliability. However, factor structure instability between different populations of supervisors and trainees means that subpopulations of trainees may be assessed differently and that some subpopulations of supervisors are assessing the same trainees with different constructs than other supervisors. The lack of competency criterion standardisation of supervisors' assessments brings into question the validity of this assessment method as currently used.
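The step from trainee variance proportions (40.4, 27.4, 22.9%) to the number of assessments needed for a reliability of 0.80 follows Spearman-Brown logic, the same logic used in generalisability D-studies. A sketch (the third value comes out as 14 rather than the reported 13, presumably because the published variance percentages are rounded):

```python
import math

# Number of assessments needed to reach a target reliability, given the
# proportion p of single-assessment score variance attributable to the
# trainee: R(n) = n*p / (n*p + (1 - p)), solved for n.
def assessments_needed(p, target_r=0.80):
    return math.ceil(target_r * (1 - p) / ((1 - target_r) * p))

for p in (0.404, 0.274, 0.229):  # factors 1-3 from the abstract
    print(p, assessments_needed(p))
```

The formula makes the trade-off explicit: the smaller the trainee's share of score variance in a single assessment, the more assessments must be aggregated before a pass/fail decision is dependable.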


Subjects
Clinical Competence/standards, Employee Performance Appraisal/standards, Medical Staff, Hospital, Australian Capital Territory, Factor Analysis, Female, Humans, Male, Reproducibility of Results
10.
Adv Health Sci Educ Theory Pract ; 18(5): 1087-102, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23494202

ABSTRACT

In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and certification as a specialist are based on the composite results of different WBAs collected in a portfolio, to our knowledge, the reliability of such a WBA toolbox has never been investigated. Using generalisability theory, we analysed the separate and composite reliability of three WBA tools [mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), and multisource feedback (MSF)] included in a resident portfolio. G-studies and D-studies of 12,779 WBAs from a total of 953 residents showed that a reliability coefficient of 0.80 was obtained for eight mini-CEXs, nine DOPS, and nine MSF rounds, whilst the same reliability was found for seven mini-CEXs, eight DOPS, and one MSF when combined in a portfolio. At the end of the first year of residency a portfolio with five mini-CEXs, six DOPS, and one MSF afforded reliable judgement. The results support the conclusion that several WBA tools combined in a portfolio can be a feasible and reliable method for high-stakes judgements.
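The decision-study (D-study) step in this kind of generalisability analysis asks how reliability grows as observations are added. A minimal single-facet sketch with hypothetical variance components (the study estimated its own components from 12,779 WBAs):

```python
# Single-facet D-study: reliability of a mean of n observations,
# R(n) = s2_person / (s2_person + s2_error / n).
# The variance components below are hypothetical placeholders, not the
# study's estimates.
def d_study_reliability(var_person, var_error, n_obs):
    return var_person / (var_person + var_error / n_obs)

for n in (1, 4, 8, 12):
    print(n, round(d_study_reliability(0.30, 0.70, n), 2))
```

With these placeholder components, reliability climbs from 0.30 for a single observation to about 0.84 at twelve, crossing 0.80 at around ten observations, the same order of magnitude as the eight to nine occasions per tool the study reports.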


Subjects
Education, Medical, Graduate, Educational Measurement/methods, Medicine/standards, Workplace, Female, Humans, Internship and Residency, Male, Netherlands, Reproducibility of Results
11.
Adv Health Sci Educ Theory Pract ; 18(3): 375-96, 2013 Aug.
Article in English | MEDLINE | ID: mdl-22592323

ABSTRACT

Weaknesses in the nature of rater judgments are generally considered to compromise the utility of workplace-based assessment (WBA). In order to gain insight into the underpinnings of rater behaviours, we investigated how raters form impressions of and make judgments on trainee performance. Using theoretical frameworks of social cognition and person perception, we explored raters' implicit performance theories, use of task-specific performance schemas and the formation of person schemas during WBA. We used think-aloud procedures and verbal protocol analysis to investigate schema-based processing by experienced (N = 18) and inexperienced (N = 16) raters (supervisor-raters in general practice residency training). Qualitative data analysis was used to explore schema content and usage. We quantitatively assessed rater idiosyncrasy in the use of performance schemas and we investigated effects of rater expertise on the use of (task-specific) performance schemas. Raters used different schemas in judging trainee performance. We developed a normative performance theory comprising seventeen inter-related performance dimensions. Levels of rater idiosyncrasy were substantial and unrelated to rater expertise. Experienced raters made significantly more use of task-specific performance schemas compared to inexperienced raters, suggesting more differentiated performance schemas in experienced raters. Most raters started to develop person schemas the moment they began to observe trainee performance. The findings further our understanding of processes underpinning judgment and decision making in WBA. Raters make and justify judgments based on personal theories and performance constructs. Raters' information processing seems to be affected by differences in rater expertise. The results of this study can help to improve rater training, the design of assessment instruments and decision making in WBA.


Subjects
Clinical Competence/standards, Educational Measurement/methods, Educational Measurement/standards, Humans, Internship and Residency/standards, Physicians/standards, Video Recording
12.
Med Teach ; 35(9): 772-8, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23808841

ABSTRACT

BACKGROUND: Although the literature suggests that reflection has a positive impact on learning, there is a paucity of evidence to support this notion. AIM: We investigated feedback and reflection in relation to the likelihood that feedback will be used to inform action plans. We hypothesised that feedback and reflection form a cumulative sequence (i.e. trainers only pay attention to trainees' reflections when they have provided specific feedback) and we hypothesised a supplementary effect of reflection. METHOD: We analysed copies of assessment forms containing trainees' reflections and trainers' feedback on observed clinical performance. We determined whether the response patterns revealed cumulative sequences in line with the Guttman scale. We further examined the relationship between reflection, feedback and the mean number of specific comments related to an action plan (ANOVA), and we calculated two effect sizes. RESULTS: Both hypotheses were confirmed by the results. The response pattern found showed an almost perfect fit with the Guttman scale (0.99), and reflection appears to have a supplementary effect on the action-plan variable. CONCLUSIONS: Reflection only occurs when a trainer has provided specific feedback; trainees who reflect on their performance are more likely to make use of feedback. These results confirm findings and suggestions reported in the literature.
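The Guttman-scale fit reported here (0.99) is a coefficient of reproducibility: the proportion of responses consistent with the hypothesised cumulative order (specific feedback first, reflection only on top of it). A simplified sketch with hypothetical form counts, using a basic error count rather than the fuller Goodenough-Edwards procedure:

```python
# Coefficient of reproducibility for a two-item Guttman scale.
# Item order: (specific feedback given, trainee reflected).
# Under a perfect cumulative pattern, (0, 1) -- reflection without
# specific feedback -- never occurs; each such case counts as an error.
# The counts below are hypothetical, not the study's data.
def coefficient_of_reproducibility(patterns):
    errors = sum(1 for feedback, reflection in patterns if reflection > feedback)
    return 1 - errors / (2 * len(patterns))  # 2 responses per form

forms = [(1, 1)] * 40 + [(1, 0)] * 30 + [(0, 0)] * 28 + [(0, 1)] * 2
print(round(coefficient_of_reproducibility(forms), 2))  # 0.99 here
```

Values of 0.90 or above are conventionally read as supporting a cumulative (Guttman) structure.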


Subjects
Education, Medical, Graduate/methods, Educational Measurement, Feedback, General Practice/education, Self-Assessment, Cross-Sectional Studies, Female, Humans, Male, Netherlands
13.
Med Teach ; 34 Suppl 1: S32-6, 2012.
Article in English | MEDLINE | ID: mdl-22409188

ABSTRACT

It has been shown that medical students have a higher rate of depressive symptoms than the general population and age- and sex-matched peers. This study aimed to estimate the prevalence of depressive symptoms among the medical students of a large school following a traditional curriculum, and its relation to personal background variables. A descriptive-analytic, cross-sectional study was conducted in a medical school in Riyadh, Saudi Arabia. The medical students of King Saud University in Riyadh were screened for depressive symptoms using the 21-item Beck Depression Inventory. A high prevalence of depressive symptoms (48.2%) was found; symptoms were mild (21%), moderate (17%), or severe (11%). The presence and severity of depressive symptoms had a statistically significant association with early academic years (p < 0.001) and female gender (p < 0.002). The high prevalence of depressive symptoms is an alarming sign and calls for remedial action, particularly for junior and female students.


Subjects
Depression/epidemiology, Depressive Disorder/epidemiology, Education, Medical, Undergraduate/methods, Stress, Psychological/psychology, Students, Medical/psychology, Cross-Sectional Studies, Education, Medical, Undergraduate/standards, Educational Status, Female, Humans, Male, Prevalence, Psychiatric Status Rating Scales, Saudi Arabia/epidemiology, Sex Factors, Stress, Psychological/complications, Stress, Psychological/etiology, Young Adult
14.
Med Teach ; 34(3): 205-14, 2012.
Article in English | MEDLINE | ID: mdl-22364452

ABSTRACT

We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.


Subjects
Educational Measurement/methods, Program Evaluation/methods, Decision Making, Humans, Models, Educational
15.
Ned Tijdschr Tandheelkd ; 119(6): 302-5, 2012 Jun.
Article in Dutch | MEDLINE | ID: mdl-22812268

ABSTRACT

Educational research has shown not only that student characteristics are of major importance for study success, but also that education does make a difference. Essentially, teaching is about stimulating students to invest time in learning and to use that time as effectively as possible. Assessment, goal-orientated work, and feedback have a major effect. The teacher is the key figure. With the aim of better understanding teaching and learning, educational researchers increasingly use findings from other disciplines. A pitfall is to apply the findings of educational research without taking into consideration the context and the specific characteristics of students and teachers. Because of the large number of factors that influence the results of education, educational science has been referred to as "the hardest science of all".


Subjects
Education, Dental, Psychology, Educational, Students, Dental/psychology, Teaching/methods, Humans, Learning, Motivation
16.
Adv Health Sci Educ Theory Pract ; 16(3): 405-25, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21607744

ABSTRACT

Even though rater-based judgements of clinical competence are widely used, they are context sensitive and vary between individuals and institutions. To deal adequately with rater-judgement unreliability, evaluating the reliability of workplace rater-based assessments in the local context is essential. Using such an approach, the primary intention of this study was to identify the trainee score variation around supervisor ratings, identify the number of workplace assessments that need to be sampled for certification of competence, and position the findings within the known literature. This reliability study of workplace-based supervisors' assessments of trainees has a rater-nested-within-trainee design. Score variation attributable to the trainee for each competency item assessed (the variance component) was estimated by the minimum-norm quadratic unbiased estimator. Score variance was used to estimate the number of assessments needed for a reliability value of 0.80. The trainee score variance for each of 14 competency items varied between 2.3% for emergency skills and 35.6% for communication skills, with an average across all competency items of 20.3%; the trainee variance for the "Overall rating" competency item was 28.8%. These variance components translated into 169, 7, 17 and 28 assessments needed for a reliability of 0.80, respectively. Most variation in assessment scores was due to measurement error, ranging from 97.7% for emergency skills to 63.4% for communication skills. Similar results have been demonstrated in previously published studies. In summary, supervisors' overall workplace-based assessments have poor reliability and are not suitable for use in certification processes in their current form. The marked variation in supervisors' reliability in assessing different competencies indicates that supervisors may be able to assess some with acceptable reproducibility; in this case communication and possibly overall competence. However, any continued use of this format for assessment of trainee competencies necessitates identifying what supervisors in different institutions can reliably assess, rather than continuing to impose false expectations from unreliable assessments.
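For a balanced version of the rater-nested-within-trainee design described above, the trainee variance component can be estimated with a standard one-way random-effects ANOVA decomposition (the study itself used the minimum-norm quadratic unbiased estimator, which also handles unbalanced data). A sketch with hypothetical scores:

```python
# One-way random-effects variance components, balanced design:
# k raters nested within each trainee. Estimates the trainee (between)
# component as (MS_between - MS_within) / k. Scores are hypothetical.
def variance_components(scores_by_trainee):
    """scores_by_trainee: equal-length lists of scores, one per trainee."""
    t = len(scores_by_trainee)
    k = len(scores_by_trainee[0])
    grand = sum(s for row in scores_by_trainee for s in row) / (t * k)
    means = [sum(row) / k for row in scores_by_trainee]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (t - 1)
    ms_within = sum(
        (s - m) ** 2 for row, m in zip(scores_by_trainee, means) for s in row
    ) / (t * (k - 1))
    var_trainee = max((ms_between - ms_within) / k, 0.0)
    return var_trainee, ms_within  # (trainee component, residual component)

vt, ve = variance_components([[4, 5], [2, 3], [5, 5]])
print(round(vt / (vt + ve), 2))  # proportion of variance due to trainee
```

It is the low trainee proportions for some items (e.g. 2.3% for emergency skills) that drive the very large sampling requirements reported above.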


Subjects
Clinical Competence/statistics & numerical data, Education, Medical, Graduate/statistics & numerical data, Educational Measurement/methods, Analysis of Variance, Educational Measurement/statistics & numerical data, Educational Status, Female, Health Knowledge, Attitudes, Practice, Humans, Male, Reproducibility of Results, Sensitivity and Specificity, United States, Workplace/psychology
17.
Adv Health Sci Educ Theory Pract ; 16(2): 151-65, 2011 May.
Article in English | MEDLINE | ID: mdl-20882335

ABSTRACT

Traditional psychometric approaches to assessment tend to focus exclusively on quantitative properties of assessment outcomes. This may limit more meaningful educational approaches to workplace-based assessment (WBA). Cognition-based models of WBA argue that assessment outcomes are determined by raters' cognitive processes, which closely resemble reasoning, judgment and decision making in professional domains such as medicine. The present study explores the cognitive processes that underlie raters' judgments and decisions when they observe performance in the clinical workplace, focusing specifically on how differences in rating experience influence raters' information processing. Verbal protocol analysis was used to investigate how experienced and non-experienced raters select and use observational data to arrive at judgments and decisions about trainees' performance in the clinical workplace. The two groups were compared, using a qualitative-based quantitative analysis of the verbal data, on time spent on information analysis and on representing trainee performance, on performance scores, and on information processing. Results showed expert-novice differences in the time needed to represent trainee performance, depending on the complexity of the rating task. Experts paid more attention to situation-specific cues in the assessment context and generated significantly more interpretations and fewer literal descriptions of observed behaviors. There were no significant differences in rating scores. Overall, our findings are consistent with other findings from expertise research, supporting the theories underlying cognition-based models of assessment in the clinical workplace. Implications for WBA are discussed.


Subjects
Clinical Competence , Cognition , Educational Measurement/methods , General Practitioners/education , Health Knowledge, Attitudes, Practice , Decision Making , Educational Status , Humans , Judgment , Statistics, Nonparametric , Task Performance and Analysis , Verbal Learning , Workplace
18.
Adv Health Sci Educ Theory Pract ; 16(3): 359-73, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21188514

ABSTRACT

Workplace learning in undergraduate medical education has predominantly been studied from a cognitive perspective, despite its complex contextual characteristics, which influence medical students' learning experiences in such a way that explanations in terms of knowledge, skills, attitudes and single determinants of instructiveness are unlikely to suffice. There is also a paucity of research that investigates student learning in general practice settings, often characterised as powerful learning environments, from any perspective other than the cognitive or descriptive one. In this study we took a socio-cultural perspective to clarify how students learn during a general practice clerkship and to construct a conceptual framework that captures this type of learning. Our analysis of group interviews with 44 fifth-year undergraduate medical students about their learning experiences in general practice showed that students needed developmental space to be able to learn and to develop their professional identity. This space results from the intertwinement of workplace context, personal and professional interactions, and emotions such as feeling respected and self-confident. These forces framed students' participation in patient consultations, conversations with supervisors about consultations, and students' observation of supervisors, thereby determining the opportunities afforded to students to mind their learning. These findings resonate with other conceptual frameworks and learning theories. In order to refine our interpretation, we recommend that further research from a socio-cultural perspective also explore other aspects of workplace learning in medical education.


Subjects
Clinical Clerkship/methods , Culture , Education, Medical, Continuing , Interpersonal Relations , Learning , Social Perception , Adult , Attitude of Health Personnel , Education, Medical, Undergraduate , Emotions , Female , General Practitioners/education , Health Knowledge, Attitudes, Practice , Humans , Male , Models, Educational , Qualitative Research , Students, Medical , Workplace , Young Adult
19.
Adv Health Sci Educ Theory Pract ; 16(1): 131-42, 2011 Mar.
Article in English | MEDLINE | ID: mdl-20559868

ABSTRACT

We reviewed the literature on instruments for work-based assessment in single clinical encounters, such as the mini-clinical evaluation exercise (mini-CEX), and examined differences between these instruments in characteristics and feasibility, reliability, validity and educational effect. A PubMed search of the literature published before 8 January 2009 yielded 39 articles dealing with 18 different assessment instruments. One researcher extracted data on the characteristics of the instruments and two researchers extracted data on feasibility, reliability, validity and educational effect. Instruments are predominantly formative. Feasibility is generally deemed good and assessor training occurs sparsely but is considered crucial for successful implementation. Acceptable reliability can be achieved with 10 encounters. The validity of many instruments is not investigated, but the validity of the mini-CEX and the 'clinical evaluation exercise' is supported by strong and significant correlations with other valid assessment instruments. The evidence from the few studies on educational effects is not very convincing. The reports on clinical assessment instruments for single work-based encounters are generally positive, but supporting evidence is sparse. Feasibility of instruments seems to be good and reliability requires a minimum of 10 encounters, but no clear conclusions emerge on other aspects. Studies on assessor and learner training and studies examining effects beyond 'happiness data' are badly needed.


Subjects
Clinical Clerkship , Educational Measurement/methods , Physician-Patient Relations , Students, Medical , Educational Status , Feedback , Humans , Workplace
20.
Eur J Dent Educ ; 15(3): 159-64, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21762320

ABSTRACT

INTRODUCTION: For health professionals, the development of insight into their performance is vital for safe practice, professional development and self-regulation. This study investigates whether the development of dental trainees' insight, when provided with external feedback on performance, can be assessed using a single criterion on a simple global ratings form such as the Longitudinal Evaluation of Performance or Mini Clinical Evaluation Exercise. METHODS: Postgraduate dental trainees (N = 139) were assessed using this tool on a weekly basis for 6 months. Regression analysis of the data was carried out using SPSS, and a short trainer questionnaire was implemented to investigate feasibility. RESULTS: Ratings for insight were shown to increase with time in a similar manner to the growth observed in other essential skills. The gradient of the slope for growth of insight was slightly less than that of the other observed skills. Trainers were mostly positive about the new criterion assessing trainees' insight, although the importance of training for trainers in this process was highlighted. DISCUSSION: Our data suggest that practitioners' insight into their performance can be developed with experience and regular feedback. However, this is most likely a complex skill dependent on a number of intrinsic and external factors. CONCLUSION: The development of trainees' insight into their performance can be assessed using a single criterion on a simple global ratings form. The process involves no additional burden on evaluators in terms of their time or cost, and promotes best practice in the provision of feedback for trainees.


Subjects
Clinical Competence/standards , Cognition , Dentists/psychology , Education, Dental, Graduate , Employee Performance Appraisal/methods , Preceptorship , Self-Assessment (Psychology) , Attitude of Health Personnel , Feedback , Humans , Linear Models , Scotland , Self-Evaluation Programs , Workplace