Results 1 - 14 of 14
1.
J Electrocardiol; 80: 166-173, 2023.
Article in English | MEDLINE | ID: mdl-37467573

ABSTRACT

BACKGROUND: Electrocardiogram (ECG) interpretation training is a fundamental component of medical education across disciplines. However, the skill of interpreting ECGs is not universal among medical graduates, and numerous barriers and challenges exist in medical training and clinical practice. An evidence-based and widely accessible learning solution is needed. DESIGN: The EDUcation Curriculum Assessment for Teaching Electrocardiography (EDUCATE) Trial is a prospective, international, investigator-initiated, open-label, randomized controlled trial designed to determine the efficacy of self-directed and active-learning approaches of a web-based educational platform for improving ECG interpretation proficiency. Target enrollment is 1000 medical professionals from a variety of medical disciplines and training levels. Participants will complete a pre-intervention baseline survey and an ECG interpretation proficiency test. After completion, participants will be randomized into one of four groups in a 1:1:1:1 fashion: (i) an online, question-based learning resource, (ii) an online, lecture-based learning resource, (iii) an online, hybrid question- and lecture-based learning resource, or (iv) a control group with no ECG learning resources. The primary endpoint will be the change in overall ECG interpretation performance according to pre- and post-intervention tests, and it will be measured within and compared between medical professional groups. Secondary endpoints will include changes in ECG interpretation time, self-reported confidence, and interpretation accuracy for specific ECG findings. CONCLUSIONS: The EDUCATE Trial is a pioneering initiative aiming to establish a practical, widely available, evidence-based solution to enhance ECG interpretation proficiency among medical professionals. Through its innovative study design, it tackles the currently unaddressed challenges of ECG interpretation education in the modern era. 
The trial seeks to pinpoint performance gaps across medical professions, compare the effectiveness of different web-based ECG content delivery methods, and create initial evidence for competency-based standards. If successful, the EDUCATE Trial will represent a significant stride towards data-driven solutions for improving ECG interpretation skills in the medical community.
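The 1:1:1:1 allocation into four study arms described above can be sketched as a permuted-block randomization. The arm names, block scheme, and seed below are illustrative assumptions, not details from the trial protocol:

```python
import random
from collections import Counter

ARMS = ["question", "lecture", "hybrid", "control"]  # hypothetical labels

def block_randomize(participants, arms=ARMS, seed=7):
    """Permuted-block 1:1:1:1 randomization sketch: each block of
    len(arms) consecutive participants receives every arm exactly once."""
    random.seed(seed)
    assignments, block = {}, []
    for p in participants:
        if not block:                # start a new shuffled block
            block = list(arms)
            random.shuffle(block)
        assignments[p] = block.pop()
    return assignments

# 1000 hypothetical enrollees, matching the trial's target enrollment
alloc = block_randomize(range(1000))
print(sorted(Counter(alloc.values()).items()))
# → [('control', 250), ('hybrid', 250), ('lecture', 250), ('question', 250)]
```

Blocked (rather than simple) randomization guarantees the arms stay balanced even if enrollment stops early.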


Subjects
Curriculum, Electrocardiography, Humans, Prospective Studies, Electrocardiography/methods, Learning, Educational Assessment, Clinical Competence, Teaching
2.
BMC Med Educ; 22(1): 177, 2022 Mar 15.
Article in English | MEDLINE | ID: mdl-35291995

ABSTRACT

BACKGROUND: Most work on the validity of clinical assessments for measuring learner performance in graduate medical education has occurred at the residency level. Minimal research exists on the validity of clinical assessments for measuring learner performance in advanced subspecialties. We sought to determine validity characteristics of cardiology fellows' assessment scores during subspecialty training, which represents the largest subspecialty of internal medicine. Validity evidence included item content, internal consistency reliability, and associations between faculty-of-fellow clinical assessments and other pertinent variables. METHODS: This was a retrospective validation study exploring the domains of content, internal structure, and relations to other variables validity evidence for scores on faculty-of-fellow clinical assessments that include the 10-item Mayo Cardiology Fellows Assessment (MCFA-10). Participants included 7 cardiology fellowship classes. The MCFA-10 item content included questions previously validated in the assessment of internal medicine residents. Internal structure evidence was assessed through Cronbach's α. The outcome for relations to other variables evidence was overall mean of faculty-of-fellow assessment score (scale 1-5). Independent variables included common measures of fellow performance. FINDINGS: Participants included 65 cardiology fellows. The overall mean ± standard deviation faculty-of-fellow assessment score was 4.07 ± 0.18. Content evidence for the MCFA-10 scores was based on published literature and core competencies. Cronbach's α was 0.98, suggesting high internal consistency reliability and offering evidence for internal structure validity. 
In multivariable analysis to provide relations to other variables evidence, mean assessment scores were independently associated with in-training examination scores (beta = 0.088 per 10-point increase; p = 0.05) and receiving a departmental or institutional award (beta = 0.152; p = 0.001). Assessment scores were not associated with educational conference attendance, compliance with completion of required evaluations, faculty appointment upon completion of training, or performance on the board certification exam. R2 for the multivariable model was 0.25. CONCLUSIONS: These findings provide sound validity evidence establishing item content, internal consistency reliability, and associations with other variables for faculty-of-fellow clinical assessment scores that include MCFA-10 items during cardiology fellowship. Relations to other variables evidence included associations of assessment scores with performance on the in-training examination and receipt of competitive awards. These data support the utility of the MCFA-10 as a measure of performance during cardiology training and could serve as the foundation for future research on the assessment of subspecialty learners.
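The internal consistency statistic reported above (Cronbach's α = 0.98) can be illustrated with a minimal sketch; the 10-item score matrix below is invented for illustration and is not study data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 10-item assessment scores for 5 fellows (1-5 scale)
scores = np.array([
    [4, 4, 5, 4, 4, 5, 4, 4, 5, 4],
    [3, 3, 3, 4, 3, 3, 3, 4, 3, 3],
    [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
    [4, 4, 4, 4, 4, 4, 4, 5, 4, 4],
    [2, 3, 2, 2, 3, 2, 2, 2, 2, 3],
])
print(round(cronbach_alpha(scores), 2))  # → 0.98
```

When items move together across respondents, as here, total-score variance dwarfs the summed item variances and α approaches 1.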


Subjects
Awards and Prizes, Cardiology, Clinical Competence, Educational Assessment, Humans, Reproducibility of Results, Retrospective Studies
3.
BMC Med Educ; 20(1): 403, 2020 Nov 04.
Article in English | MEDLINE | ID: mdl-33148231

ABSTRACT

BACKGROUND: Continuing medical education (CME) often uses passive educational models including lectures. However, numerous studies have questioned the effectiveness of these less engaging educational strategies. Studies outside of CME suggest that engaged learning is associated with improved educational outcomes. However, measuring participants' engagement can be challenging. We developed and determined the validity evidence for a novel instrument to assess learner engagement in CME. METHODS: We conducted a cross-sectional validation study at a large, didactic-style CME conference. Content validity evidence was established through review of literature and previously published engagement scales and conceptual frameworks on engagement, along with an iterative process involving experts in the field, to develop an eight-item Learner Engagement Instrument (LEI). Response process validity was established by vetting LEI items on item clarity and perceived meaning prior to implementation, as well as using a well-developed online platform with clear instructions. Internal structure validity evidence was based on factor analysis and calculating internal consistency reliability. Relations to other variables validity evidence was determined by examining associations between LEI and previously validated CME Teaching Effectiveness (CMETE) instrument scores. Following each presentation, all participants were invited to complete the LEI and the CMETE. RESULTS: 51 out of 206 participants completed the LEI and CMETE (response rate, 25%). Correlations between the LEI and the CMETE overall scores were strong (r = 0.80). Internal consistency reliability for the LEI was excellent (Cronbach's alpha = 0.96). To support internal structure validity, a factor analysis was performed and revealed a two-dimensional instrument consisting of internal and external engagement domains. 
The internal consistency reliabilities were 0.96 for the internal engagement domain and 0.95 for the external engagement domain. CONCLUSION: Engagement, as measured by the LEI, is strongly related to teaching effectiveness. The LEI is supported by robust validity evidence including content, response process, internal structure, and relations to other variables. Given the relationship between learner engagement and teaching effectiveness, identifying more engaging and interactive methods for teaching in CME is recommended.


Subjects
Continuing Medical Education, Students, Cross-Sectional Studies, Humans, Learning, Reproducibility of Results
4.
BMC Med Educ; 20(1): 238, 2020 Jul 28.
Article in English | MEDLINE | ID: mdl-32723355

ABSTRACT

BACKGROUND: The unique traits of residents who matriculate into subspecialty fellowships are poorly understood. We sought to identify characteristics of internal medicine (IM) residents who match into cardiovascular (CV) fellowships. METHODS: We conducted a retrospective cohort study of 8 classes of IM residents who matriculated into residency from 2007 to 2014. The primary outcome was successful match to a CV fellowship within 1 year of completing IM residency. Independent variables included residents' licensing exam scores, research publications, medical school reputation, Alpha Omega Alpha (AOA) membership, declaration of intent to pursue CV in the residency application personal statement, clinical evaluation scores, mini-clinical evaluation exercise scores, in-training examination (ITE) performance, and exposure to CV during residency. RESULTS: Of the 339 included residents (59% male; mean age 27) from 120 medical schools, 73 (22%) matched to CV fellowship. At the time of residency application, 104 (31%) had ≥1 publication, 38 (11%) declared intention to pursue CV in their residency application personal statement, and 104 (31%) were members of AOA. Prior to fellowship application, 111 (33%) completed a CV elective rotation. At the completion of residency training, 108 (32%) had ≥3 publications. In an adjusted logistic regression analysis, declaration of intention to pursue CV (OR 6.4, 99% CI 1.7-23.4; p < 0.001), completion of a CV elective (OR 7.3, 99% CI 2.8-19.0; p < 0.001), score on the CV portion of the PGY-2 ITE (OR 1.05, 99% CI 1.02-1.08; p < 0.001), and publication of ≥3 manuscripts (OR 4.7, 99% CI 1.1-20.5; p = 0.007) were positively associated with matching to a CV fellowship. Overall PGY-2 ITE score was negatively associated (OR 0.93, 99% CI 0.90-0.97; p < 0.001) with matching to a CV fellowship. 
CONCLUSIONS: Residents' matriculation into CV fellowships was associated with declaration of CV career intent, completion of a CV elective rotation, CV medical knowledge, and research publications during residency. These findings may be useful when advising residents about pursuing careers in CV. They may also help residents understand factors associated with a successful match to a CV fellowship. The negative association between matching into CV fellowship and overall ITE score may indicate excessive subspecialty focus during IM residency.
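The odds ratios and 99% confidence intervals above come from logistic regression. The conversion from a logit coefficient to an OR with a Wald interval can be sketched as follows; the beta and standard error are hypothetical values chosen only to land near the reported OR for declaring CV intent:

```python
import math

def odds_ratio_ci(beta: float, se: float):
    """Convert a logit coefficient and its standard error into an
    odds ratio with a two-sided 99% Wald confidence interval."""
    z = 2.5758  # standard normal quantile for 99% two-sided coverage
    lo, hi = beta - z * se, beta + z * se
    return math.exp(beta), math.exp(lo), math.exp(hi)

# Hypothetical coefficient for declaring CV intent: beta = 1.86, SE = 0.50
or_, lo, hi = odds_ratio_ci(1.86, 0.50)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # → 6.4 1.8 23.3
```

The wide interval shows why a large OR can still have a lower bound near 2: exponentiating magnifies the uncertainty in the coefficient.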


Subjects
Cardiology, Internship and Residency, Adult, Career Choice, Graduate Medical Education, Fellowships and Scholarships, Female, Humans, Internal Medicine/education, Male, Retrospective Studies
5.
J Physician Assist Educ; 31(1): 2-7, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32004252

ABSTRACT

PURPOSE: The purpose of this study was to describe participant characteristics and effective teaching methods at a national continuing medical education (CME) conference on hospital medicine for physician assistants (PAs) and nurse practitioners (NPs). METHODS: In this cross-sectional study, participants provided demographic information and teaching effectiveness scores for each presentation. Associations between teaching effectiveness scores and presentation characteristics were determined. RESULTS: In total, 163 of 253 participants (64.4%) completed evaluations of 28 presentations. Many of the participants were younger than 50 years (69.0%), had practiced for fewer than 5 years (41.5%), and worked in nonacademic settings (76.7%). Teaching effectiveness scores were significantly associated with the use of clinical cases (perfect scores for 68.8% of presentations with clinical cases vs. 59.8% without; P = .04). CONCLUSION: Many PAs and NPs at this hospital medicine CME conference were early-career clinicians working in nonacademic settings. Presenters at CME conferences in hospital medicine should consider using clinical cases to improve their teaching effectiveness among PA and NP learners.


Subjects
Continuing Education/organization & administration, Hospital Medicine/education, Nurse Practitioners/education, Physician Assistants/education, Teaching/organization & administration, Adult, Aged, Cross-Sectional Studies, Humans, Learning, Middle Aged, Socioeconomic Factors, Young Adult
6.
Acad Psychiatry; 42(4): 458-463, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28685348

ABSTRACT

OBJECTIVE: Little is known about factors associated with effective continuing medical education (CME) in psychiatry. The authors aimed to validate a method to assess psychiatry CME teaching effectiveness and to determine associations between teaching effectiveness scores and characteristics of presentations, presenters, and participants. METHODS: This cross-sectional study was conducted at the Mayo Clinic Psychiatry Clinical Reviews and Psychiatry in Medical Settings. Presentations were evaluated using an eight-item CME teaching effectiveness instrument, its content based on previously published instruments. Factor analysis, internal consistency and interrater reliabilities, and temporal stability reliability were calculated. Associations were determined between teaching effectiveness scores and characteristics of presentations, presenters, and participants. RESULTS: In total, 364 participants returned 246 completed surveys (response rate, 67.6%). Factor analysis revealed a unidimensional model of psychiatry CME teaching effectiveness. Cronbach α for the instrument was excellent at 0.94. Item mean score (SD) ranged from 4.33 (0.92) to 4.71 (0.59) on a 5-point scale. Overall interrater reliability was 0.84 (95% CI, 0.75-0.91), and temporal stability was 0.89 (95% CI, 0.77-0.97). No associations were found between teaching effectiveness scores and characteristics of presentations, presenters, and participants. CONCLUSIONS: This study provides a new, validated measure of CME teaching effectiveness that could be used to improve psychiatry CME. In contrast to prior research in other medical specialties, CME teaching effectiveness scores were not associated with use of case-based or interactive presentations. This outcome suggests the need for distinctive considerations regarding psychiatry CME; a singular approach to CME teaching may not apply to all medical specialties.


Subjects
Brachytherapy/standards, Continuing Medical Education/standards, Psychiatry/education, Teaching/standards, Cross-Sectional Studies, Continuing Medical Education/methods, Humans, Reproducibility of Results
8.
Acad Med; 86(6): 737-41, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21512373

ABSTRACT

PURPOSE: Residents' reflections on quality improvement (QI) opportunities are poorly understood. The authors used the Mayo Evaluation of Reflection on Improvement Tool (MERIT) to measure residents' reflection scores across three years and to determine associations between reflection scores and resident and adverse patient event characteristics. METHOD: From 2006 to 2009, 48 Mayo Clinic internal medicine residents completed biannual reflections on adverse events and classified event severity and preventability. Faculty assessed residents' reflections using MERIT, which contains 18 Likert-scaled items and measures three factors-personal reflection, systems reflection, and event merit. ANOVA was used to identify changes in MERIT scores across three years of training and among factors, paired t tests were used to identify differences between MERIT factor scores, and generalized estimating equations were used to examine associations between MERIT scores and resident and adverse event characteristics. RESULTS: The residents completed 240 reflections. MERIT reflection scores were stable over time. Individual factor scores differed significantly (P < .0001), with event merit being the highest and systems reflection the lowest. Event preventability was significantly associated with MERIT factor scores and overall scores (beta = 0.415; CI = 0.186-0.643; P = .0004). No significant associations between MERIT scores and resident characteristics or event severity were identified. CONCLUSIONS: Residents' reflections on adverse events remained constant over time, were lowest for systems factors, and were associated with adverse event preventability. Future research should explore learners' emphasis on systems aspects of QI and the relationship between QI and event preventability.


Subjects
Educational Assessment/methods, Internal Medicine/education, Internship and Residency, Quality Improvement, Risk Management, Self-Assessment (Psychology), Factor Analysis, Humans, Longitudinal Studies, Minnesota, Reproducibility of Results
10.
J Grad Med Educ; 2(2): 181-7, 2010 Jun.
Article in English | MEDLINE | ID: mdl-21975617

ABSTRACT

BACKGROUND: The financial success of academic medical centers depends largely on appropriate billing for resident-patient encounters. Objectives of this study were to develop an instrument for billing in internal medicine resident clinics, to compare billing practices among junior versus senior residents, and to estimate financial losses from inappropriate resident billing. METHODS: For this analysis, we randomly selected 100 patient visit notes from a resident outpatient practice. Three coding specialists used an instrument structured on Medicare billing standards to determine appropriate codes, and interrater reliability was assessed. Billing codes were converted to US dollars based on the national Medicare reimbursement list. Inappropriate billing, based on comparisons with coding specialists, was then determined for residents across years of training. RESULTS: Interrater reliability of Current Procedural Terminology components was excellent, with κ ranging from 0.76 for examination to 0.94 for diagnosis. Of the encounters in the study, 55% were underbilled by an average of $45.26 per encounter, and 18% were overbilled by an average of $51.29 per encounter. The percentages of appropriately coded notes were 16.1% for postgraduate year (PGY) 1, 26.8% for PGY-2, and 39.3% for PGY-3 residents (P < .05). Underbilling was 74.2% for PGY-1, 48.8% for PGY-2, and 42.9% for PGY-3 residents (P < .01). There was significantly less overbilling among PGY-1 residents compared with PGY-2 and PGY-3 residents (9.7% versus 24.4% and 17.9%, respectively; P < .05). CONCLUSIONS: Our study reports a reliable method for assessing billing in internal medicine resident clinics. It exposed large financial losses, which were attributable to junior residents more than senior residents. The findings highlight the need for educational interventions to improve resident coding and billing.
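Interrater agreement of the kind reported above (κ per CPT component) is commonly computed as Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with invented E/M codes rather than study data:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Unweighted Cohen's kappa for two raters' categorical codes."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] * c2[c] for c in set(r1) | set(r2)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical E/M codes assigned by two coding specialists to 8 notes
coder_a = ["99213", "99214", "99213", "99212", "99214", "99213", "99215", "99214"]
coder_b = ["99213", "99214", "99213", "99213", "99214", "99213", "99215", "99214"]
print(round(cohen_kappa(coder_a, coder_b), 2))  # → 0.81
```

Here the raters agree on 7 of 8 notes, but κ lands below 0.875 because some of that agreement is expected by chance.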

11.
Am J Surg; 198(3): 442-4, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19716888

ABSTRACT

BACKGROUND: This study examined the methodologic quality of medical education research published in The American Journal of Surgery (AJS) relative to other journals and in AJS itself over time. METHODS: Medical Education Research Study Quality Instrument (MERSQI) scores were determined for 198 education studies published in 2003 in 13 peer-reviewed journals including AJS and all 38 AJS education studies published in 2007. RESULTS: In 2003, the mean (standard deviation) MERSQI score of AJS studies was 11.03 (2.12) compared with 9.83 (2.37) for studies published in the other 12 journals (P = .03). AJS studies received higher scores for response rate (P < .001) and content validity (P = .03) than other journals. The mean MERSQI score among AJS studies remained constant between 2003 and 2007 (11.03 [2.12] vs 12.03 [2.35]; P = .13). CONCLUSIONS: Education studies published in AJS compared favorably with those published in other journals, and this quality was maintained over time. Nonetheless, there is room for improvement with respect to study designs and outcome assessment.


Subjects
Medical Education, Periodicals as Topic, Publishing, Research/standards, Bibliometrics, Humans, Research Design, Nonparametric Statistics, United States
12.
Teach Learn Med; 21(3): 188-94, 2009 Jul.
Article in English | MEDLINE | ID: mdl-20183337

ABSTRACT

BACKGROUND: Assessment score reliability is usually based on a single analysis. However, reliability is an essential component of validity and assessment validation and revision is a never-ending cycle. For ongoing assessments over extended time frames, real-time reliability computations may alert users to possible changes in the learning environment that are revealed by variations in reliability over time. PURPOSE: To develop software that calculates the reliability of clinical assessments in real time. METHODS: Over 2,400 assessment forms were analyzed. We developed software that calculates reliability in real time. Software accuracy was verified by comparing data from our software with a standard method. Factor analysis determined scale dimensionality. RESULTS: Correlation between our software and a standard method was excellent (ICC for kappas = 0.97; Cronbach's alphas differed by < 0.03). Cronbach's alpha ranged from 0.94 to 0.97 and weighted kappa ranged from 0.08 to 0.40. Factor analysis confirmed 3 teaching domains. CONCLUSIONS: We describe an accurate method for calculating reliability in real time. The benefit of real time computation is that it provides a mechanism for detecting possible changes (related to curriculum, teachers, and students) in the learning environment indicated by changes in reliability over time. This technique will enable investigators to monitor and detect changes in the reliability of assessment scores and, with future study, isolate aspects of the learning environment that impact on reliability.
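A real-time reliability computation of the kind described can be sketched as recomputing Cronbach's alpha each time a completed form arrives; this is an illustrative reconstruction, not the authors' software, and the 3-item forms below are invented:

```python
import numpy as np

def realtime_alpha(forms_stream, k):
    """Recompute Cronbach's alpha after each newly received k-item
    assessment form, yielding (n_forms_so_far, alpha) pairs."""
    received, history = [], []
    for form in forms_stream:
        received.append(form)
        if len(received) >= 3:  # variance estimates need a few forms first
            x = np.array(received)
            item_var = x.var(axis=0, ddof=1).sum()
            total_var = x.sum(axis=1).var(ddof=1)
            history.append((len(received), (k / (k - 1)) * (1 - item_var / total_var)))
    return history

# Hypothetical stream of 3-item teaching-assessment forms (1-5 scale)
stream = [[4, 4, 5], [3, 3, 3], [5, 5, 5], [4, 5, 4], [2, 2, 3]]
for n, a in realtime_alpha(stream, k=3):
    print(n, round(a, 2))
```

Plotting the `history` series over time is what would surface the drift in reliability that the paper argues can signal changes in the learning environment.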


Subjects
Clinical Competence, Graduate Medical Education/standards, Educational Assessment/standards, Internal Medicine/standards, Internship and Residency/standards, Software, Adult, Factor Analysis, Female, Humans, Internal Medicine/education, Male, Reproducibility of Results
13.
JAMA; 298(9): 1002-9, 2007 Sep 05.
Article in English | MEDLINE | ID: mdl-17785645

ABSTRACT

CONTEXT: Methodological shortcomings in medical education research are often attributed to insufficient funding, yet an association between funding and study quality has not been established. OBJECTIVES: To develop and evaluate an instrument for measuring the quality of education research studies and to assess the relationship between funding and study quality. DESIGN, SETTING, AND PARTICIPANTS: Internal consistency, interrater and intrarater reliability, and criterion validity were determined for a 10-item medical education research study quality instrument (MERSQI). This was applied to 210 medical education research studies published in 13 peer-reviewed journals between September 1, 2002, and December 31, 2003. The amount of funding obtained per study and the publication record of the first author were determined by survey. MAIN OUTCOME MEASURES: Study quality as measured by the MERSQI (potential maximum total score, 18; maximum domain score, 3), amount of funding per study, and previous publications by the first author. RESULTS: The mean MERSQI score was 9.95 (SD, 2.34; range, 5-16). Mean domain scores were highest for data analysis (2.58) and lowest for validity (0.69). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.72 to 0.98 and 0.78 to 0.998, respectively. Total MERSQI scores were associated with expert quality ratings (Spearman rho, 0.73; 95% confidence interval [CI], 0.56-0.84; P < .001), 3-year citation rate (0.8 increase in score per 10 citations; 95% CI, 0.03-1.30; P = .003), and journal impact factor (1.0 increase in score per 6-unit increase in impact factor; 95% CI, 0.34-1.56; P = .003). In multivariate analysis, MERSQI scores were independently associated with study funding of $20 000 or more (0.95 increase in score; 95% CI, 0.22-1.86; P = .045) and previous medical education publications by the first author (1.07 increase in score per 20 publications; 95% CI, 0.15-2.23; P = .047). 
CONCLUSION: The quality of published medical education research is associated with study funding.
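The Spearman rho reported above (between MERSQI scores and expert quality ratings) is a Pearson correlation computed on ranks. A small sketch, with six invented data points and assuming no tied values:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via Pearson on ranks (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical MERSQI totals vs. expert global quality ratings for 6 studies
mersqi = [9.5, 12.0, 7.5, 14.0, 10.5, 11.0]
expert = [3, 4, 2, 5, 3.5, 4.5]
print(round(spearman_rho(mersqi, expert), 2))  # → 0.94
```

Ranking first makes the statistic monotone-invariant, which suits an instrument score and an expert rating measured on different scales.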


Subjects
Medical Education, Evaluation Studies as Topic, Publishing, Research Support as Topic, Cross-Sectional Studies, Reproducibility of Results, Research, Research Design
14.
Med Educ; 40(12): 1209-16, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17118115

ABSTRACT

CONTEXT: We are unaware of studies examining the stability of teaching assessment scores across different medical specialties. A recent study showed that clinical teaching assessments of general internists reduced to interpersonal, clinical teaching and efficiency domains. We sought to determine the factor stability of this 3-dimensional model among cardiologists and to compare domain-specific scores between general internists and cardiologists. METHODS: A total of 2000 general internal medicine and cardiology hospital teaching assessments carried out from January 2000 to March 2004 were analysed using principal factor analysis. Internal consistency and inter-rater reliability were calculated. Mean item scores were compared between general internists and cardiologists. RESULTS: The interpersonal and clinical teaching domains previously demonstrated among general internists collapsed into 1 domain among cardiologists, whereas the efficiency domain remained stable. Internal consistency of domains (Cronbach's alpha range 0.89-0.93) and inter-rater reliability of items (range 0.65-0.87) were good to excellent for both specialties. General internists scored significantly higher (P<0.05) than cardiologists on most items except for 4 items that more accurately assessed the cardiology teaching environment. CONCLUSIONS: We observed factor instability of clinical teaching assessment scores from the same instrument administered to general internists and cardiologists. This finding was attributed to salient differences between these specialties' educational environments and highlights the importance of validating assessments for the specific contexts in which they are to be used. Future research should determine whether interpersonal domain scores identify superior teachers and study the reasons why interpersonal and clinical teaching domains are unstable across different educational settings.


Subjects
Cardiology/education, Internship and Residency, Teaching/standards, Factor Analysis, Minnesota, Observer Variation, Teaching/methods