Results 1 - 8 of 8
1.
Med Teach ; 44(8): 878-885, 2022 08.
Article in English | MEDLINE | ID: mdl-35234562

ABSTRACT

Finding a reliable, practical, and low-cost criterion-referenced standard-setting method for performance-based assessments has proved challenging. The borderline regression method of standard setting for OSCEs has been shown to estimate reliable scores in studies using faculty as raters. Standardized patients (SPs) have been shown to be reliable OSCE raters but have not been evaluated as raters under this standard-setting method. Our study sought to determine whether SPs could reliably serve as sole raters in an OSCE of clinical encounters using the borderline regression standard-setting method. SPs were trained on a five-point global rating scale. In an OSCE for medical students, SPs completed skills checklists and the global rating scale. The borderline regression method was used to create case passing scores. We estimated the dependability of the final pass/fail decisions and the absolute dependability coefficients for global ratings, checklist scores, and case pass-score decisions using generalizability theory. The overall dependability estimate was 0.92 for pass/fail decisions on the complete OSCE. Dependability coefficients of individual case passing scores ranged from 0.70 to 0.86, demonstrating high dependability. Based on our findings, the borderline regression method of standard setting can be used with SPs as sole raters in a medical student OSCE to produce a dependable passing score. For those already using SPs as raters, this provides a practical criterion-referenced standard-setting method at no additional cost or faculty time.
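The borderline regression method lends itself to a compact illustration: regress each examinee's checklist score on the SP's global rating, then read off the predicted checklist score at the "borderline" point of the rating scale. The sketch below uses entirely made-up scores and assumes "borderline" sits at point 2 of the five-point scale; it is not the study's data or code.

```python
import numpy as np

# Hypothetical data for one OSCE case: each examinee's SP-rated
# checklist score (%) and the SP's 1-5 global rating.
checklist = np.array([45, 52, 60, 58, 70, 75, 82, 88, 90, 95], dtype=float)
rating = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 5], dtype=float)

# Least-squares regression of checklist score on global rating.
slope, intercept = np.polyfit(rating, checklist, 1)

# The case passing score is the predicted checklist score at the
# "borderline" point of the global rating scale (assumed to be 2).
BORDERLINE = 2.0
passing_score = slope * BORDERLINE + intercept
```

With these illustrative numbers the passing score falls near the middle of the checklist range; in practice the regression is fitted per case and the resulting cut scores are then checked for dependability.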


Subjects
Medical Students, Checklist, Clinical Competence, Educational Measurement/methods, Humans, Regression Analysis
2.
BMC Med Educ ; 21(1): 205, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33845830

ABSTRACT

BACKGROUND: Implicit bias instruction is becoming more prevalent in health professions education, with calls for skills-based curricula moving from awareness and recognition to management of implicit bias. Evidence suggests that health professionals and students learning about implicit bias ("learners") have varying attitudes about instruction in implicit bias, including the concept of implicit bias itself. Assessing learner attitudes could inform curriculum development and enable instructional designs that optimize learner engagement. To date, there are no instruments with evidence for construct validity that assess learner attitudes about implicit bias instruction and its relevance to clinical care. METHODS: The authors developed a novel instrument, the Attitude Towards Implicit Bias Instrument (ATIBI), and gathered evidence for three types of construct validity: content, internal consistency, and relationship to other variables. RESULTS: The authors used a modified Delphi technique with an interprofessional team of experts, as well as cognitive interviews with medical students, leading to item refinement that improved content validity. Seven cohorts of medical students (N = 1072) completed the ATIBI. Psychometric analysis demonstrated high internal consistency (α = 0.90). Exploratory factor analysis resulted in five factors. Analysis of a subset of 100 medical students demonstrated moderate correlations with similar instruments, the Integrative Medicine Attitude Questionnaire (r = 0.63, 95% CI: [0.59, 0.66]) and the Internal Motivation to Respond Without Prejudice Scale (r = 0.36, 95% CI: [0.32, 0.40]), providing evidence for convergent validity. Scores on our instrument had low correlations with the External Motivation to Respond Without Prejudice Scale (r = 0.15, 95% CI: [0.09, 0.19]) and the Groningen Reflection Ability Scale (r = 0.12, 95% CI: [0.06, 0.17]), providing evidence for discriminant validity. Analysis resulted in eighteen items in the final instrument; it is easy to administer, both in paper form and online. CONCLUSION: The Attitudes Toward Implicit Bias Instrument is a novel instrument that produces reliable and valid scores and may be used to measure medical student attitudes related to implicit bias recognition and management, including attitudes toward acceptance of bias in oneself, implicit bias instruction, and its relevance to clinical care.
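The internal-consistency coefficient reported here (α = 0.90) is Cronbach's alpha, which can be computed directly from an examinee-by-item score matrix. A minimal sketch with fabricated ratings (not the ATIBI data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # per-item variance
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up 5-examinee x 4-item Likert responses, for illustration only.
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
], dtype=float)
alpha = cronbach_alpha(scores)
```

Alpha approaches 1 as items covary more strongly relative to their individual variances.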


Subjects
Medical Students, Attitude of Health Personnel, Humans, Prejudice, Psychometrics, Reproducibility of Results, Surveys and Questionnaires
3.
Assessment ; : 10731911231204832, 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37902042

ABSTRACT

The Child Depression Inventory (CDI) is often used to assess change in depression over time, but no studies have estimated the reliability of CDI change scores or of its five subscores. Our study investigated the reliability of change scores for both the total score on the CDI and its five subscores. We examined CDI responses from 186 maltreated children and estimated change score reliability for relative (e.g., comparison) and absolute (e.g., diagnosis) purposes. We also conducted a subscore utility analysis, which determines whether subscores have adequate reliability and provide information beyond the total score. We found that the total change score had acceptable reliability of .70 in our sample for both relative and absolute interpretations. In addition, the total score was a better predictor of true subscore values than the observed subscores, suggesting the subscores did not add value over the total score, and the reliability of changes in subscores was too low to be useful for any purpose. In summary, we found that total CDI change scores were useful for assessing change in studies that examine relative or absolute change, and we advise caution when interpreting CDI subscores based on our analysis.
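The study's estimates come from generalizability theory, but the familiar classical-test-theory formula for difference-score reliability conveys the same intuition: when pre and post scores correlate substantially, much of the reliable variance cancels in the difference. A sketch with invented values, not the CDI data:

```python
import math

def change_score_reliability(var_pre: float, var_post: float,
                             rel_pre: float, rel_post: float,
                             r_pre_post: float) -> float:
    """Classical-test-theory reliability of a difference (change) score,
    assuming independent errors across the two administrations."""
    cov = r_pre_post * math.sqrt(var_pre * var_post)
    true_change_var = var_pre * rel_pre + var_post * rel_post - 2 * cov
    observed_change_var = var_pre + var_post - 2 * cov
    return true_change_var / observed_change_var

# Two administrations, each with reliability 0.85, correlated 0.60:
# the change score's reliability drops to 0.625.
rel_change = change_score_reliability(1.0, 1.0, 0.85, 0.85, 0.60)
```

The higher the pre-post correlation, the lower the change-score reliability, which is why change scores routinely need separate reliability evidence.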

4.
Psychol Methods ; 28(3): 651-663, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35007106

ABSTRACT

We introduce a new method for estimating the degree of nonadditivity in a one-facet generalizability theory design. One-facet G-theory designs have only one observation per cell, such as persons answering items on a test, and assume that there is no interaction between facets. When there is interaction, the model becomes nonadditive, and G-theory variance estimates and reliability coefficients are likely biased. We introduce a multidimensional method for detecting interaction and nonadditivity in G-theory that has less bias and smaller error variance than the one-degree-of-freedom approach based on Tukey's test for nonadditivity. The proposed method is more flexible and detects a greater variety of interactions than the formulation based on Tukey's test. Further, it is descriptive and illustrates the nature of the facet interaction using profile analysis, giving insight into potential sources of interaction such as rater bias, differential item functioning (DIF), threats to test security, and other possible sources of systematic construct-irrelevant variance. We demonstrate the accuracy of our method in a simulation study and illustrate its descriptive profile features with a real-data analysis of neurocognitive test scores. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
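The paper's multidimensional method is not reproduced here, but the one-degree-of-freedom baseline it improves on, Tukey's test for nonadditivity, is straightforward to sketch for a persons-by-items table with one observation per cell (simulated data, illustration only):

```python
import numpy as np

def tukey_nonadditivity_F(y: np.ndarray) -> float:
    """Tukey's one-degree-of-freedom F statistic for nonadditivity in a
    two-way (e.g., persons x items) table with one observation per cell."""
    r, c = y.shape
    grand = y.mean()
    a = y.mean(axis=1) - grand   # row (person) effects
    b = y.mean(axis=0) - grand   # column (item) effects
    # Sum of squares captured by the multiplicative a_i * b_j term (1 df).
    ss_nonadd = (a @ y @ b) ** 2 / (np.sum(a ** 2) * np.sum(b ** 2))
    resid = y - grand - a[:, None] - b[None, :]
    ss_resid = np.sum(resid ** 2)
    df_err = (r - 1) * (c - 1) - 1
    return float(ss_nonadd / ((ss_resid - ss_nonadd) / df_err))

# Illustrative call on seeded random (interaction-free) data.
rng = np.random.default_rng(0)
f_stat = tukey_nonadditivity_F(rng.normal(size=(8, 5)))
```

A large F relative to an F(1, df_err) reference distribution suggests a multiplicative person-by-item interaction, exactly the situation in which ordinary one-facet G-theory variance estimates become biased.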


Subjects
Reproducibility of Results, Humans, Computer Simulation
5.
Patient Educ Couns ; 105(7): 2264-2269, 2022 07.
Article in English | MEDLINE | ID: mdl-34716052

ABSTRACT

OBJECTIVE: To evaluate medical students' communication skills with a standardized patient (SP) requesting a low-value test and to describe the challenges students identify in addressing the request. METHODS: In this mixed-methods study, third-year students from two medical schools obtained a history, performed a physical examination, and counseled an SP presenting with uncomplicated low back pain (LBP) who requested an MRI that was not indicated. SP raters evaluated student communication skills using a 14-item checklist. Post-encounter, students reported whether they ordered an MRI and the challenges they faced. RESULTS: Students who discussed practice guidelines and the risks of unnecessary testing with the SP were less likely to order an MRI. Students cited several challenges in responding to the SP's request, including patient characteristics and circumstances, lack of knowledge about MRI indications and alternatives, and lack of communication skills to address the patient's request. CONCLUSIONS: Most students did not order an MRI for uncomplicated LBP, but only a small number educated the patient about the evidence for avoiding unnecessary imaging or the harms of unnecessary testing. PRACTICE IMPLICATIONS: Knowledge about unnecessary imaging in uncomplicated LBP may be insufficient for adherence to best practices, and longitudinal training in challenging conversations is needed.


Subjects
Medical Students, Clinical Competence, Communication, Diagnostic Imaging, Educational Measurement/methods, Humans, Physical Examination
6.
J Natl Med Assoc ; 113(5): 566-575, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34140145

ABSTRACT

BACKGROUND: Implicit bias instruction is becoming more prevalent across the continuum of medical education. Little guidance exists for faculty on recognizing and debriefing about implicit bias during routine clinical encounters. OBJECTIVE: To assess the impact and feasibility of single seminars on implicit bias and on approaches to its management in clinical settings. METHODS: Between September 2016 and November 2017, the authors delivered five departmental/divisional grand rounds across three different academic medical centers in New York, USA. Instruction provided background information on implicit bias, highlighted its relevance to clinical care, and discussed proposed interventions. To evaluate the impact of instruction, participants completed a twelve-item retrospective pre-intervention/post-intervention survey. Questions related to comfort and confidence in recognizing and managing implicit bias, debriefing with learners, and role-modeling behaviors. Participants identified strategies for recognizing and managing potentially biased events through free-text prompts, which the authors analyzed qualitatively. RESULTS: We received 116 completed surveys from 203 participants (57% response rate). Participants' self-reported confidence and comfort increased for all questions. Qualitative analysis resulted in three themes: looking inward, looking outward, and taking action at individual and institutional levels. CONCLUSION: After a single session, respondents reported increased confidence and comfort with the topic. They identified strategies relevant to their professional contexts, which can inform future skills-based interventions. For healthcare organizations responding to calls for implicit bias training, this approach holds promise: it is feasible, can reach a wide audience through usual grand rounds programming, and can serve as an effective early step in such training.


Subjects
Medical Students, Curriculum, Faculty, Humans, Prejudice, Retrospective Studies
7.
Fam Med ; 52(5): 357-360, 2020 05.
Article in English | MEDLINE | ID: mdl-32401328

ABSTRACT

BACKGROUND AND OBJECTIVES: Antibiotic misuse contributes to antibiotic resistance and is a growing public health threat in the United States and globally. Professional medical societies promote antibiotic stewardship education for medical students, ideally before inappropriate practice habits form. To our knowledge, no tools exist to assess medical student competency in antibiotic stewardship and the communication skills necessary to engage patients in this endeavor. The aim of this study was to develop a novel instrument to measure medical students' communication skills and competency in antibiotic stewardship and patient counseling. METHODS: We created and pilot tested a novel instrument to assess student competencies in contextual knowledge and communication skills about antibiotic stewardship with standardized patients (SPs). Students from two institutions (N=178; Albert Einstein College of Medicine and Warren Alpert Medical School of Brown University) participated in an observed, structured clinical encounter during which SPs trained in its use assessed student performance with the instrument. RESULTS: In ranking examinee instrument scores, Cronbach's α was 0.64 (95% CI: 0.53 to 0.74) at Einstein and 0.71 (95% CI: 0.60 to 0.79) at Brown, both within a commonly accepted range for estimating reliability. Global ratings and instrument scores were positively correlated (r=0.52, F [3, 174]=30.71, P<.001), providing evidence of concurrent validity. CONCLUSIONS: Similar results at both schools supported external validity. The instrument performed reliably at both institutions under different examination conditions, providing evidence for the validity and utility of this instrument in assessing medical students' skills related to antibiotic stewardship.


Assuntos
Gestão de Antimicrobianos , Estudantes de Medicina , Competência Clínica , Aconselhamento , Avaliação Educacional , Humanos , Reprodutibilidade dos Testes
8.
Article in English | MEDLINE | ID: mdl-26708116

ABSTRACT

The purpose of this study was to estimate, and to examine ways to improve, the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and at 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability theory. Two methods for increasing change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores in the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability even with reweighting and additional subtests. Overall, the ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.
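The "lengthening" analysis above has a classical-test-theory counterpart in the Spearman-Brown prophecy formula, which predicts reliability after a test is lengthened by a given factor (the study itself used multivariate generalizability theory, so this is only an intuition aid):

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability after lengthening a test by `length_factor`
    with parallel material (Spearman-Brown prophecy formula)."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Doubling a test whose score reliability is 0.53 (the low end of the
# AD-sample change-score estimates) predicts reliability of about 0.69.
predicted = spearman_brown(0.53, 2.0)
```

The formula shows why lengthening helps little when the starting reliability is very low: large length factors are needed to reach conventional thresholds.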


Subjects
Alzheimer Disease/complications, Alzheimer Disease/diagnosis, Cognition Disorders/diagnosis, Cognition Disorders/etiology, Neuropsychological Tests, Psychiatric Status Rating Scales, Aged, Aged, 80 and over, Alzheimer Disease/psychology, Canada, Female, Humans, Male, Middle Aged, Psychometrics, Reproducibility of Results, Sensitivity and Specificity, United States