Results 1 - 11 of 11
1.
J Contin Educ Health Prof; 38(4): 235-243, 2018.
Article in English | MEDLINE | ID: mdl-30169379

ABSTRACT

INTRODUCTION: Fellows of the Royal College of Physicians and Surgeons of Canada are required to participate in assessment activities in all 5-year cycles beginning on or after January 2014 to meet the maintenance of certification program requirements. This study examined the assessment activities that psychiatrists reported in their maintenance of certification e-portfolios to determine the types and frequency of activities reported; the resulting learning, planned learning, and/or changes to practice that were planned or implemented; and the interrelationship between the types of assessment activities, the learning that was affirmed or planned, and the changes planned or implemented. METHODS: A total of 5000 entries from 2195 psychiatrists were examined. A thematic analysis drawing on framework analysis was undertaken of the 2016 entries. RESULTS: There were 3841 entries for analysis; 1159 entries did not meet the criteria for assessment. The most commonly reported activities were self-assessment programs, feedback on teaching, regular performance reviews, and chart reviews. Less frequent were direct observation, peer supervision, and reviews by provincial medical regulatory authorities. In response to the data, psychiatrists affirmed that their practices were appropriate, identified gaps they intended to address, planned future learning, and/or planned or implemented changes. The assessment activities were internally or externally initiated and resulted in no change, small changes (accommodations and adjustments), or redirections. DISCUSSION: Psychiatrists reported participating in a variety of assessment activities whose impact on learning and change varied. The study underscores the need to ensure that the assessments being undertaken are purposeful, relevant, and designed to enable identification of outcomes that affect practice.


Subjects
Documentation/trends; Psychiatry/methods; Canada; Certification/methods; Clinical Competence/standards; Documentation/methods; Documentation/standards; Education, Medical, Continuing/trends; Humans; Outcome Assessment (Health Care)/methods; Outcome Assessment (Health Care)/statistics & numerical data
2.
J Vet Med Educ; 43(1): 104-10, 2016.
Article in English | MEDLINE | ID: mdl-26983054

ABSTRACT

Effective faculty development for veterinary preceptors requires knowledge of their learning needs and delivery preferences. Veterinary preceptors at community practice locations in Alberta, Canada, were surveyed to determine their confidence in their teaching ability and their interest in nine faculty development topics. The study included 101 veterinarians (48.5% female). Of these, 43 (42.6%) practiced veterinary medicine in a rural location and 54 (53.5%) worked in mixed-animal or food-animal practice. Participants reported they were more likely to attend an in-person faculty development event than to participate in an online presentation, and the likelihood of attending an in-person event varied with respondent demographics. Teaching clinical reasoning, assessing student performance, engaging and motivating students, and providing constructive feedback were topics in which preceptors had great interest and high confidence. Preceptors were least confident in the areas of student learning styles, balancing clinical workload with teaching, and resolving conflict involving the student. Disparities between preceptors' interest and confidence in faculty development topics exist, in that the topics with the lowest confidence scores were not rated as those of greatest interest. While the content and format of clinical teaching faculty development events should be informed by the interests of preceptors, consideration of preceptors' confidence in their teaching ability may be warranted when developing a faculty development curriculum.


Subjects
Education, Veterinary; Needs Assessment; Preceptorship; Teaching; Adult; Aged; Alberta; Faculty; Female; Humans; Learning; Male; Middle Aged; Models, Theoretical; Young Adult
3.
Resuscitation; 83(7): 887-93, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22286047

ABSTRACT

INTRODUCTION: It is critical that competency in pediatric resuscitation is achieved and assessed during residency or postgraduate medical training. The purpose of this study was to create and evaluate a tool to measure all elements of pediatric resuscitation team leadership competence. METHODS: An initial set of items, derived from a literature review and a brainstorming session, was refined into a 26-item assessment tool through the use of Delphi methodology. The tool was tested using videos of standardized resuscitations, and a psychometric assessment of the evidence for instrument validity and reliability was undertaken. RESULTS: The performance of 30 residents on two videotaped scenarios was assessed by 4 pediatricians using the tool, with 12 items assessing 'leadership and communication skills' (LCS) and 14 items assessing 'knowledge and clinical skills' (KCS). The instrument showed evidence of reliability; the Cronbach's alpha and generalizability coefficients were α=0.818 and Ep(2)=0.76 for the overall instrument, α=0.827 and Ep(2)=0.844 for LCS, and α=0.673 and Ep(2)=0.482 for KCS. While validity was initially established through the literature review and brainstorming by the panel of experts, it was further supported by the strong correlations between global scores and scores for overall performance (r=0.733), LCS (r=0.718), and KCS (r=0.662), as well as by a factor analysis that accounted for 40.2% of the variance. CONCLUSION: The results of the study demonstrate that the instrument is a valid and reliable tool to evaluate pediatric resuscitation team leader competence.
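The Cronbach's alpha values reported in this and several later records measure internal consistency across instrument items. As a rough illustration only, here is a minimal sketch of the computation; the function and the 30 x 12 score matrix are hypothetical, not taken from the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a ratings-by-items score matrix.

    scores: 2D array; rows are rated performances, columns are items.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 30 performances scored on 12 items (1-5 scale),
# sharing one common signal so the items correlate
rng = np.random.default_rng(0)
base = rng.integers(2, 5, size=(30, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(30, 12)), 1, 5)
print(round(cronbach_alpha(scores), 3))
```

Because the simulated items share a single strong signal, alpha should come out high; with weakly related items it drops toward zero.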


Assuntos
Competência Clínica/normas , Avaliação Educacional/métodos , Internato e Residência/normas , Pediatria/educação , Ressuscitação/educação , Humanos , Internato e Residência/métodos , Simulação de Paciente , Reprodutibilidade dos Testes , Ressuscitação/normas
4.
Med Educ; 45(6): 636-47, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21564201

ABSTRACT

CONTEXT: Conceptualisations of self-assessment are changing as its role in professional development comes to be viewed more broadly: self-assessment needs to be both externally and internally informed, through activities that enable access to, and the interpretation and integration of, data from external sources. Education programmes use various activities to promote learners' reflection and self-direction, yet we know little about how effective these activities are in 'informing' learners' self-assessments. OBJECTIVES: This study aimed to increase understanding of the specific ways in which undergraduate and postgraduate learners used learning and assessment activities to inform self-assessments of their clinical performance. METHODS: We conducted an international qualitative study using focus groups and drawing on principles of grounded theory. We recruited volunteer participants from three undergraduate and two postgraduate programmes that used structured self-assessment activities (e.g. portfolios). We asked learners to describe their perceptions of and experiences with formal and informal activities intended to inform self-assessment. We conducted analysis as a team using a constant comparative process. RESULTS: Eighty-five learners (53 undergraduate, 32 postgraduate) participated in 10 focus groups. Two main findings emerged. Firstly, the perceived effectiveness of formal and informal assessment activities in informing self-assessment appeared to be both person- and context-specific; no curricular activities were considered to be generally effective or ineffective. However, the availability of high-quality performance data and standards was thought to increase the effectiveness of an activity in informing self-assessment. Secondly, the fostering and informing of self-assessment was believed to require credible and engaged supervisors. CONCLUSIONS: Several contextual and personal conditions consistently influenced learners' perceptions of the extent to which assessment activities were useful in informing self-assessments of performance. Although learners' perceptions of which factors influence their efforts to improve performance are not guaranteed to be accurate, those perceptions must be taken into account: assessment strategies that are perceived as providing untrustworthy information can be anticipated to have negligible impact.


Subjects
Clinical Competence/standards; Education, Medical, Graduate/methods; Education, Medical, Undergraduate/methods; Educational Measurement/methods; Self-Assessment (Psychology); Students, Medical/psychology; Belgium; Curriculum; Education, Medical, Graduate/standards; Education, Medical, Undergraduate/standards; Educational Measurement/standards; Humans; Netherlands; Self-Evaluation Programs; United Kingdom
5.
Arch Pathol Lab Med; 133(8): 1301-8, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19653730

ABSTRACT

CONTEXT: There is increasing interest in ensuring that physicians demonstrate the full range of Accreditation Council for Graduate Medical Education competencies. OBJECTIVE: To determine whether it is possible to develop a feasible and reliable multisource feedback instrument for pathologists and laboratory medicine physicians. DESIGN: Surveys with 39, 30, and 22 items were developed to assess individual physicians by 8 peers, 8 referring physicians, and 8 coworkers (e.g., technologists, secretaries), respectively, using 5-point scales and an unable-to-assess category. Physicians completed a self-assessment survey. Items addressed key competencies related to clinical competence, collaboration, professionalism, and communication. RESULTS: Data from 101 pathologists and laboratory medicine physicians were analyzed. The mean number of respondents per physician was 7.6, 7.4, and 7.6 for peers, referring physicians, and coworkers, respectively. Internal consistency reliability, measured by Cronbach alpha, was ≥ .95 for the full scale of all instruments. Analysis indicated that the medical peer, referring physician, and coworker instruments achieved generalizability coefficients of .78, .81, and .81, respectively. Factor analysis showed that 4 factors on the peer questionnaire accounted for 68.8% of the total variance: reports and clinical competency, collaboration, educational leadership, and professional behavior. For the referring physician survey, 3 factors accounted for 66.9% of the variance: professionalism, reports, and clinical competency. Two factors on the coworker questionnaire accounted for 59.9% of the total variance: communication and professionalism. CONCLUSIONS: It is feasible to assess this group of physicians using multisource feedback with reliable instruments.
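The generalizability (Ep(2)) coefficients quoted here and in neighboring records come from generalizability theory. In multisource feedback each physician is rated by their own set of raters, so a one-way model with raters nested within physicians is a reasonable first approximation. The sketch below is illustrative only; the function and the 101 x 8 rating matrix are hypothetical, not the study's data or code.

```python
import numpy as np

def g_coefficient(ratings):
    """Generalizability (Ep^2) coefficient for raters nested within
    physicians, via one-way ANOVA variance components.

    ratings: 2D array; rows are physicians, columns are their raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_p, n_r = ratings.shape
    person_means = ratings.mean(axis=1)
    ms_between = n_r * ((person_means - ratings.mean()) ** 2).sum() / (n_p - 1)
    ms_within = ((ratings - person_means[:, None]) ** 2).sum() / (n_p * (n_r - 1))
    var_person = max((ms_between - ms_within) / n_r, 0.0)
    # Reliability of a mean score based on n_r raters
    return var_person / (var_person + ms_within / n_r)

# Hypothetical data: 101 physicians, each rated by 8 raters on a 1-5 scale
rng = np.random.default_rng(1)
skill = rng.normal(4.0, 0.4, size=(101, 1))
ratings = np.clip(skill + rng.normal(0.0, 0.5, size=(101, 8)), 1.0, 5.0)
print(round(g_coefficient(ratings), 2))
```

Averaging over more raters shrinks the error term (ms_within / n_r), which is why designs with 8 raters can reach coefficients near .8 even when individual ratings are noisy.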


Assuntos
Competência Clínica/normas , Retroalimentação , Pessoal de Laboratório Médico , Patologia Clínica , Padrões de Prática Médica/normas , Pessoal Técnico de Saúde/estatística & dados numéricos , Estudos de Viabilidade , Humanos , Revisão dos Cuidados de Saúde por Pares , Garantia da Qualidade dos Cuidados de Saúde , Reprodutibilidade dos Testes , Autoavaliação (Psicologia) , Inquéritos e Questionários , Recursos Humanos
6.
Can J Psychiatry; 53(8): 525-33, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18801214

ABSTRACT

OBJECTIVE: To assess the feasibility of, and the evidence for the reliability and validity of, a set of questionnaires for psychiatrists, given that multisource feedback (MSF), or 360-degree evaluation, allows medical colleagues, coworkers, and patients to provide feedback about competencies to enhance physician improvement in intended directions. METHOD: Surveys with 40, 22, 38, and 37 items were developed to assess psychiatrists by 25 patients, 8 coworkers, 8 psychiatrist colleagues, and self, respectively, using a 5-point agreement scale with an unable-to-assess category. Items addressed key competencies related to communication skills, professionalism, collegiality, and self-management. Feasibility was assessed with response rates for each instrument. Validity was assessed with a table of specifications, the percentage of participants unable to assess the psychiatrist for each item, and exploratory factor analyses to determine which items grouped together into scales. Reliability was assessed with Cronbach's alpha and generalizability coefficients. RESULTS: A sample of 101 psychiatrists provided data. A total of 2456 patient forms (24.32 of 25.00 per psychiatrist), 744 coworker forms (7.37 of 8.00 per psychiatrist), 764 colleague forms (7.56 of 8.00 per psychiatrist), and 101 self forms were analyzed. The overall internal consistency reliability (Cronbach's alpha) was 0.98, 0.96, and 0.98 for the patient, coworker, and medical colleague surveys, respectively. The generalizability coefficients for the patient, coworker, and medical colleague instruments were 0.78, 0.82, and 0.81, respectively. CONCLUSION: It is possible to develop a feasible MSF program for psychiatrists, with evidence of reliability and validity, that can provide feedback about key clinical competencies.


Subjects
Feedback; Patient Satisfaction; Practice Patterns, Physicians'/standards; Psychiatry; Humans; Surveys and Questionnaires; Workforce
7.
Radiology; 247(3): 771-8, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18375839

ABSTRACT

PURPOSE: To determine whether it is possible to develop a feasible, valid, and reliable multisource feedback program for radiologists. MATERIALS AND METHODS: Surveys with 38, 29, and 20 items were developed to assess individual radiologists by eight radiologic colleagues (peers), eight referring physicians, and eight co-workers (e.g., technicians), respectively, using five-point scales along with an "unable to assess" category. Radiologists completed a self-assessment on the basis of the peer questionnaire. Items addressed key competencies related to clinical competence, collegiality, professionalism, workplace behavior, and self-management. The study was approved by the University of Calgary Conjoint Health Ethics Research Board. RESULTS: Data from 190 radiologists were available. The mean numbers of respondents per physician were 7.5 of eight (1259 of 1520, 83%) for peers, 7.15 of eight (1337 of 1520, 88%) for referring physicians, and 7.5 of eight (1420 of 1520, 93%) for co-workers. Internal consistency reliability was high: all instruments had a Cronbach alpha of more than 0.95. Generalizability coefficient analysis indicated that the peer, referring physician, and co-worker instruments achieved coefficients of 0.88, 0.79, and 0.87, respectively. Factor analysis indicated that four factors on the colleague questionnaire accounted for 70% of the total variance: clinical competence, collegiality, professional development, and workplace behavior. For the referring physician survey, three factors accounted for 64.1% of the variance: professional development, professional consultation, and professional responsibility. Two factors on the co-worker questionnaire accounted for 63.2% of the total variance: professional responsibility and patient interaction. CONCLUSION: Psychometric examination of the data suggests that the instruments developed to assess radiologists are a feasible way to assess radiology practice and provide evidence for validity and reliability.
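The percentage-of-total-variance figures attached to the factor solutions above are standard outputs of exploratory factor analysis. As a simplified illustration (using principal components of the item correlation matrix rather than a full factor extraction), variance explained can be sketched as follows; the data shape and the numbers are hypothetical, not the study's.

```python
import numpy as np

def variance_explained(items, n_factors):
    """Fraction of total variance captured by the leading n_factors
    principal components of the item correlation matrix, a simplified
    stand-in for the variance-explained figures from a full EFA.

    items: 2D array; rows are respondents, columns are survey items.
    """
    corr = np.corrcoef(np.asarray(items, dtype=float), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order
    return eigvals[:n_factors].sum() / eigvals.sum()

# Hypothetical data: 190 raters x 38 items driven by a few latent traits
rng = np.random.default_rng(2)
latent = rng.normal(size=(190, 4))
loadings = rng.normal(size=(4, 38))
items = latent @ loadings + rng.normal(scale=0.8, size=(190, 38))
print(f"{variance_explained(items, 4):.1%}")
```

The trace of a correlation matrix equals the number of items, so the denominator here is simply the item count; the leading eigenvalues measure how much of that total the retained factors absorb.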


Subjects
Clinical Competence; Peer Review, Health Care; Radiology/standards; Self-Assessment (Psychology); Communication; Factor Analysis, Statistical; Feasibility Studies; Humans; Physicians; Psychometrics; Reproducibility of Results; Surveys and Questionnaires
8.
Med Educ; 41(6): 573-9, 2007 Jun.
Article in English | MEDLINE | ID: mdl-17518837

ABSTRACT

CONTEXT: Contemporary studies have shown that traditional medical school admissions interviews have strong face validity but provide evidence for only low reliability and validity. As a result, they do not provide a standardised, defensible, and fair process for all applicants. METHODS: In 2006, applicants to the University of Calgary Medical School were interviewed using the multiple mini-interview (MMI). This interview process consisted of nine 8-minute stations at which applicants were presented with scenarios they were then asked to discuss, followed by a single 8-minute station that allowed each applicant to discuss why he or she should be admitted to our medical school. Sociodemographic and station assessment data provided for each applicant were analysed to determine whether the MMI was a valid and reliable assessment of non-cognitive attributes, distinguished between the non-cognitive attributes, and discriminated between those accepted and those placed on the waiting list. We also assessed whether applicant sociodemographic characteristics were associated with acceptance or waiting list status. RESULTS: Cronbach's alpha for each station ranged from 0.97 to 0.98. Low correlations between stations and the factor analysis suggest each station assessed different attributes. There were significant differences in scores between those accepted and those on the waiting list. Sociodemographic differences were not associated with acceptance or waiting list status. DISCUSSION: The MMI is able to assess different non-cognitive attributes, and our study provides additional evidence for its reliability and validity. The MMI offers a fairer and more defensible assessment of applicants to medical school than the traditional interview.
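The abstract reports significant score differences between accepted and waitlisted applicants without naming the test used; a between-groups comparison such as a two-sample t-test is one conventional choice. The sketch below uses invented group sizes, means, and SDs purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical total MMI scores for two applicant groups
rng = np.random.default_rng(3)
accepted = rng.normal(7.4, 0.9, size=120)    # invented group parameters
waitlisted = rng.normal(6.8, 0.9, size=80)

# Two-sample t-test comparing the group means
t_stat, p_value = stats.ttest_ind(accepted, waitlisted)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```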


Subjects
Cognition; Education, Medical, Undergraduate; Interviews as Topic; School Admission Criteria; Schools, Medical; Students, Medical/psychology; Adult; Alberta; Analysis of Variance; Female; Humans; Male
9.
Acad Emerg Med; 13(12): 1296-303, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17099191

ABSTRACT

OBJECTIVES: To determine whether it is possible to develop a feasible, valid, and reliable multisource feedback (360-degree evaluation) program for emergency physicians. METHODS: Surveys with 16, 20, 30, and 31 items were developed to assess emergency physicians by 25 patients, eight coworkers, eight medical colleagues, and self, respectively, using five-point scales along with an "unable to assess" category. Items addressed key competencies related to communication skills, professionalism, collegiality, and self-management. RESULTS: Data from 187 physicians who identified themselves as emergency physicians were available. The mean number of respondents per physician was 21.6 (SD 3.87; 93%) for patients, 7.6 (SD 0.89; 96%) for coworkers, and 7.7 (SD 0.61; 95%) for medical colleagues, suggesting it was a feasible tool. Only the patient survey had four items with "unable to assess" percentages ≥ 15%. Factor analysis indicated there were two factors on the patient questionnaire (communication/professionalism and patient education), two on the coworker survey (communication/collegiality and professionalism), and four on the medical colleague questionnaire (clinical performance, professionalism, self-management, and record management), accounting for 80.0%, 62.5%, and 71.9% of the variance on the respective surveys. The factors were consistent with the intent of the instruments, providing empirical evidence of validity. Reliability was established for the instruments (Cronbach's alpha > 0.94) and for each physician (generalizability coefficients of 0.68 for patients, 0.85 for coworkers, and 0.84 for medical colleagues). CONCLUSIONS: The psychometric examination of the data suggests that the instruments developed to assess emergency physicians were feasible and provide evidence for validity and reliability.


Subjects
Clinical Competence; Emergency Medicine; Patient Satisfaction; Peer Review, Health Care; Self-Assessment (Psychology); Communication; Factor Analysis, Statistical; Feasibility Studies; Humans; Patient Education as Topic; Physicians; Surveys and Questionnaires
10.
Pediatrics; 117(3): 796-802, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16510660

ABSTRACT

OBJECTIVE: To determine whether it is possible to develop feasible, valid, and reliable multisource feedback data for pediatricians. METHODS: Surveys with 40, 22, 38, and 37 items were developed for assessment of pediatricians by patients, co-workers, medical colleagues, and themselves, respectively, using 5-point scales with an "unable to assess" category. Items addressed key competencies related to communication skills, professionalism, collegiality, continuing professional development, and collaboration. Each pediatrician was assessed by 25 patients, 8 medical colleagues, and 8 co-workers. Feasibility was assessed with response rates for each instrument. Validity was assessed with rating profiles, the percentage of participants unable to assess the physician for each item, and exploratory factor analyses to determine which items grouped together into scales. Cronbach's alpha and generalizability coefficient analyses assessed reliability. RESULTS: One hundred pediatricians participated. The mean number of respondents per physician was 23.4 (93.6%) for patients, 7.6 (94.8%) for co-workers, and 7.6 (95.5%) for medical colleagues. The mean ratings ranged from 4 to 5 for each item on each scale. Few items had high percentages of "unable to assess" responses. The factor analyses revealed a 4-factor solution for the patient survey, a 3-factor solution for the co-worker survey, and a 4-factor solution for the medical colleague survey, accounting for at least 64% of the variance. All instruments had high internal consistency. The generalizability coefficients were .85 for patients, .87 for co-workers, and .78 for medical colleagues. CONCLUSION: Surveys can be developed to provide feedback data on key competencies.


Subjects
Data Collection; Pediatrics/standards; Canada; Health Personnel/psychology; Humans; Interprofessional Relations; Patients/psychology; Physician-Patient Relations; Self-Assessment (Psychology); Societies, Medical; Surveys and Questionnaires
11.
Acad Med; 79(10 Suppl): S5-8, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15383375

ABSTRACT

PROBLEM STATEMENT: To determine whether a common peer assessment instrument can assess competencies across the internal medicine, pediatrics, and psychiatry specialties. METHOD: A common 36-item peer survey was used to assess psychiatry (n = 101), pediatrics (n = 100), and internal medicine (n = 103) specialists. Cronbach's alpha and generalizability analysis were used to assess reliability, and factor analysis was used to address validity. RESULTS: A total of 2,306 surveys (94.8% response rate) were analyzed. The Cronbach's alpha coefficient was .98. The generalizability coefficient (mean of 7.6 raters) produced an Ep(2) = .83. Four factors emerged, with a similar pattern of relative importance for pediatricians and internal medicine specialists, whose first factor was patient management; communication was the first factor for psychiatrists. CONCLUSIONS: The reliability and generalizability coefficient data suggest that using the instrument across specialties is appropriate, and the differences in factors confirm the instrument's ability to discriminate between specialties, providing evidence of validity.


Subjects
Clinical Competence/standards; Medicine/standards; Peer Review, Health Care/methods; Specialization; Communication; Humans; Internal Medicine/standards; Patient Care/standards; Pediatrics/standards; Psychiatry/standards; Reproducibility of Results