Assessment-based health informatics curriculum improvement.
Berner, Eta S; Dorsey, Amanda D; Garrie, Robert L; Qu, Haiyan.
Affiliation
  • Berner ES; Professor and Director, Center for Health Informatics for Patient Safety/Quality, Department of Health Services Administration, University of Alabama at Birmingham, Birmingham, AL, USA eberner@uab.edu.
  • Dorsey AD; Department of Health Services Administration, University of Alabama at Birmingham, Birmingham, AL, USA.
  • Garrie RL; Department of Health Services Administration, University of Alabama at Birmingham, Birmingham, AL, USA.
  • Qu H; Department of Health Services Administration, University of Alabama at Birmingham, Birmingham, AL, USA.
J Am Med Inform Assoc; 23(4): 813-8, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27274021
OBJECTIVE: Informatics programs need assurance that their curricula prepare students for intended roles, as well as that students have mastered the appropriate competencies. The objective of this study is to describe a method for using assessment data to identify areas for curriculum, student selection, and assessment improvement.

MATERIALS AND METHODS: A multiple-choice examination covering the content in the Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) curricular facets/elements was developed and administered to 2 cohorts of entering students prior to the beginning of the program and to the first cohort after completion of the first year's courses. The reliability of the examination was assessed using Cronbach's alpha. Content validity was assessed by having 2 raters judge the match of the items to the CAHIIM requirements. Construct validation included comparison of exam performance of instructed vs uninstructed students. Criterion-related validity was assessed by examining the relationship of background characteristics to exam performance and by comparing examination performance to graduate Grade Point Average (GPA).

RESULTS: Reliability of the examination was 0.91 and 0.82 (Cohort 1 pre/post-tests) and 0.43 (Cohort 2 pretest). Both raters judged 76% of the test items as appropriate. There were statistically significant differences between the instructed (Cohort 1 post-test) and uninstructed (Cohort 2 pretest) students (t = 2.95, P < .01), as well as between the Cohort 1 pre/post-tests (t = 6.52, P < .001). Neither the background variables nor the graduate GPA was significantly correlated with the examination scores.

CONCLUSION: We found that the examination had generally good psychometric properties, and the exceptions could be used to identify areas for curriculum and assessment improvement.
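The two core statistics in the abstract, Cronbach's alpha for reliability and a t-test comparing instructed vs uninstructed students, can be sketched as follows. This is an illustrative reimplementation, not the authors' analysis code; the item-score matrices and their dimensions here are hypothetical stand-ins for the cohort data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def welch_t(a, b):
    """Welch's t-statistic for two independent samples of total scores."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Hypothetical dichotomous (0/1) item scores; shape = (students, items).
rng = np.random.default_rng(0)
post_test = rng.integers(0, 2, size=(30, 50))  # stand-in for Cohort 1 post-test
pretest = rng.integers(0, 2, size=(30, 50))    # stand-in for Cohort 2 pretest

alpha = cronbach_alpha(post_test)              # reliability of the instrument
t = welch_t(post_test.sum(axis=1), pretest.sum(axis=1))  # instructed vs uninstructed
```

With real data, a low alpha on a pretest (such as the 0.43 reported for Cohort 2) or items both raters reject would flag specific facets for curriculum or assessment revision.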

Full text: 1 Topics: ECOS / General aspects Databases: MEDLINE Main subject: Medical Informatics / Curriculum / Graduate Education / Educational Measurement Study type: Prognostic studies Country/Region as subject: North America Language: English Journal: J Am Med Inform Assoc Journal subject: Medical Informatics Publication year: 2016 Document type: Article Affiliation country: United States
