Results 1 - 5 of 5
1.
Surg Endosc; 34(8): 3633-3643, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32519273

ABSTRACT

BACKGROUND: The Fundamentals of Endoscopic Surgery (FES) program became required for American Board of Surgery certification as part of the Flexible Endoscopy Curriculum (FEC) for residents graduating in 2018. This study expands prior psychometric investigation of the FES skills test.

METHODS: We analyzed de-identified first-attempt skills test scores and self-reported demographic characteristics of 2023 general surgery residents who were required to pass FES.

RESULTS: The overall pass rate was 83%. "Loop Reduction" was the most difficult subtask. Subtasks correlated with one another only modestly (Spearman's ρ ranging from 0.11 to 0.42; coefficient α = .55). Both upper and lower endoscopic procedural experience had modest positive associations with scores (ρ = 0.14 and 0.15) and with passing. Examinees who tested on the GI Mentor Express simulator had lower total scores and a lower pass rate than those tested on the GI Mentor II (pass rates = 73% vs. 85%); removing an Express-specific scoring rule that had been applied eliminated these differences. Gender, glove size, and height were closely related to performance. Women scored lower than men (408- vs. 489-point averages) and had a lower first-attempt pass rate (71% vs. 92%). Glove size correlated positively with score (ρ = 0.31) and pass rate, as did height (r = 0.27). Statistically controlling for glove size and height did not eliminate gender differences: men still had 3.2 times greater odds of passing.

CONCLUSIONS: FES skills test scores show both consistencies with the assessment's validity argument and several notable findings. Subtasks reflect distinct skills, so passing standards should perhaps be set for each subtask. The Express simulator-specific scoring penalty should be removed. The gender differences are concerning; we argue they do not reflect measurement bias, but rather highlight equity concerns in surgical technology, training, and practice.


Subjects
Clinical Competence , Endoscopy , Educational Measurement , Educational Status , Endoscopy/education , Endoscopy/standards , Endoscopy/statistics & numerical data , Female , Humans , Male
2.
Adv Health Sci Educ Theory Pract; 24(1): 45-63, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30171512

ABSTRACT

Learning curves can support a competency-based approach to assessment for learning. When interpreting repeated assessment data displayed as learning curves, a key assessment question is: "How well is each learner learning?" We outline the validity argument and investigation relevant to this question, for a computer-based repeated assessment of competence in electrocardiogram (ECG) interpretation. We developed an online ECG learning program based on 292 anonymized ECGs collected from an electronic patient database. After diagnosing each ECG, participants received feedback including the computer interpretation, cardiologist's annotation, and correct diagnosis. In 2015, participants from a single institution, across a range of ECG skill levels, diagnosed at least 60 ECGs. We planned, collected, and evaluated validity evidence under each inference of Kane's validity framework. For Scoring, three cardiologists' kappa for agreement on correct diagnosis was 0.92. There was a range of ECG difficulty across and within each diagnostic category. For Generalization, appropriate sampling was reflected in the inclusion of a typical clinical base rate of 39% normal ECGs. Applying generalizability theory presented unique challenges. Under the Extrapolation inference, group learning curves demonstrated expert-novice differences, performance increased with practice, and the incremental phase of the learning curve reflected ongoing, effortful learning. A minority of learners had atypical learning curves. We did not collect Implications evidence. Our results support a preliminary validity argument for a learning curve assessment approach for repeated ECG interpretation with deliberate and mixed practice. This approach holds promise for providing educators and researchers, in collaboration with their learners, with deeper insights into how well each learner is learning.


Subjects
Undergraduate Medical Education/methods , Educational Measurement/methods , Electrocardiography/methods , Learning Curve , Clinical Competence , Competency-Based Education , Distance Education , Undergraduate Medical Education/standards , Educational Measurement/standards , Electrocardiography/standards , Formative Feedback , Humans , Internet , Reproducibility of Results
3.
J Grad Med Educ; 10(6): 657-664, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30619523

ABSTRACT

BACKGROUND: Geriatric patients account for a growing proportion of dermatology clinic visits. Although their biopsychosocial needs differ from those of younger adults, there are no geriatrics training requirements for dermatology residency programs.

OBJECTIVE: This study explored the state of geriatrics education in dermatology programs in 2016.

METHODS: This constructivist study employed cross-sectional, mixed-methods analysis with triangulation of semistructured interviews, surveys, and commonly used curricular materials. We used purposive sampling of 5 US academic allopathic dermatology programs of different sizes, geographic locations, and institutional resources. Participants were interviewed about informal curricula, barriers, and suggestions for improving geriatrics education, and they also completed a survey about the geriatrics topics that should be taught. The constant comparative method with grounded theory was used for qualitative analysis. We identified formal geriatrics curricular content by electronically searching and counting relevant key texts.

RESULTS: Fourteen of 17 participants (82%) agreed to be interviewed, and 10 of 14 (71%) responded to the survey. Themes of what should be taught included diagnosing and managing skin diseases common in older adults, holistic treatment, cosmetic dermatology, benign skin aging, and the basic science of aging. Topics currently covered that could be expanded included communication, systems-based challenges, ethical issues, safe prescribing, quality improvement, and elder abuse. Cosmetic dermatology was the most commonly taught formal geriatrics curricular topic.

CONCLUSIONS: There were discrepancies between the topics participants felt were important to teach about geriatric dermatology and the curricular coverage of those areas. We identified challenges for expanding geriatrics curricula and potential solutions.


Subjects
Curriculum/standards , Dermatology/education , Geriatrics/education , Internship and Residency/standards , Cross-Sectional Studies , Graduate Medical Education/methods , Humans , Needs Assessment , Surveys and Questionnaires , United States
4.
Simul Healthc; 10(4): 249-55, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26098494

ABSTRACT

INTRODUCTION: There is a need for reliable and practical interprofessional simulations that measure collaborative practice in outpatient/community scenarios, where most health care takes place. The authors applied generalizability theory to examine reliability in an ambulatory care scenario using two trained observer groups: standardized patient (SP, actor) raters and raters who received rater training alone (non-SPs).

METHODS: Twenty-one graduate health professions students participated as health care providers in an interprofessional care simulation involving an SP, caregiver, and clinicians. Six observers in each group received frame-of-reference training and rated aspects of collaborative care using a behavioral observation checklist. The authors examined sources of measurement variance using generalizability theory and extended this technique to statistically compare the rater types and compute reliability for subsets of raters.

RESULTS: Standardized patient ratings were significantly more reliable than non-SPs' ratings, despite both groups receiving extensive rater training. A single SP was predicted to generate scores with a reliability of 0.74, whereas a single non-SP rater's scores were predicted at a reliability of 0.40. Removing each rater one by one from the full 6-member SP sample reduced reliability similarly for all raters (reliability, 0.86-0.89). However, removing individual raters from the full 6-member non-SP sample led to more variable reductions in reliability (0.58-0.72).

CONCLUSIONS: Ongoing experience rating performance from within a particular simulation-based assessment may be a valuable rater characteristic and more effective than rater training alone. The extensions of reliability estimation introduced here can also be used to support more insightful reliability research and subsequent improvement of rater training and assessment protocols.


Subjects
Clinical Competence , Cooperative Behavior , Educational Measurement/standards , Health Occupations/education , Simulation Training/standards , Humans , Interprofessional Relations , Observer Variation , Patient Care Team , Reproducibility of Results