1.
J Gen Intern Med; 38(14): 3093-3098, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37592118

ABSTRACT

BACKGROUND: Bedside incision and drainage (I&D) of skin abscesses is a common medical procedure performed in a variety of medical settings. Yet there is a paucity of published validated educational tools to teach and assess competency for this procedure. OBJECTIVE: To validate an educational tool to teach and assess competency for bedside I&D of skin abscesses via the Delphi consensus and Angoff standard-setting methods. DESIGN: Expert consensus on the importance of each procedural step in the educational tool was obtained using the Delphi method, consisting of four rounds of iterative revisions based on input from a panel of experts. The passing cut-off score for a proficient provider was determined using the modified dichotomous Angoff method. PARTICIPANTS: All participants met the minimum criteria of active involvement in resident education and performance of at least 20 skin abscess I&Ds within the past 5 years. Participant specialties included general surgery, emergency medicine, and internal medicine. MAIN MEASURES: The primary outcome was consensus on procedural steps and errors, defined as an interquartile range ≤ 2 on a 9-point Likert scale. A cut-off score was determined by averaging, across all respondents, the anticipated number of errors that would be committed by a provider with the level of proficiency defined in the survey. Qualitative input was incorporated into the educational tool. KEY RESULTS: After four rounds of review via the Delphi process, participants achieved consensus on 93% of items on the clinical checklist and 85% of errors on the assessment checklist. Via the modified dichotomous Angoff method, the passing cut-off for competency was 6 out of 22 errors. CONCLUSION: An educational and evaluation tool for bedside I&D of skin abscesses was validated via the Delphi and Angoff methods.
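Both quantitative criteria in this study reduce to simple computations. The following is a minimal Python sketch of the consensus rule (interquartile range ≤ 2 on the 9-point Likert scale) and the modified Angoff cut-off (the average, across raters, of the anticipated number of errors); the panel ratings below are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical 9-point Likert ratings of one procedural step by a Delphi panel.
ratings = np.array([8, 9, 7, 8, 9, 8, 7, 9])

# Consensus rule from the study: interquartile range <= 2.
q1, q3 = np.percentile(ratings, [25, 75])
print(f"IQR = {q3 - q1:.1f}, consensus reached: {q3 - q1 <= 2}")

# Modified dichotomous Angoff method: each rater estimates how many of the
# 22 listed errors a minimally proficient provider would commit; the passing
# cut-off is the average of those estimates.
anticipated_errors = np.array([5, 7, 6, 6, 8, 5, 6, 5])
print(f"Passing cut-off: {anticipated_errors.mean():.1f} of 22 errors")
```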


Subjects
Abscess, Checklist, Humans, Abscess/surgery, Educational Status, Surveys and Questionnaires, Drainage, Delphi Technique, Clinical Competence
2.
J Hosp Med; 19(11): 1019-1027, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38946024

ABSTRACT

BACKGROUND: In response to a decline in bedside procedures performed by hospitalists, some hospital medicine groups have created medical procedure services (MPSs) that concentrate procedures under the expertise of trained hospitalist-proceduralists. OBJECTIVES: To characterize the structure, breadth, and heterogeneity of academic medical center MPSs, and to compare the procedural landscape for groups with and without an MPS. METHODS: The Survey of Internal Medicine Providers' Limitations and Experiences with Procedures and MPSs is a cross-sectional study conducted in the United States and Canada through a web-based survey administered from October 2022 to March 2023. We used convenience and snowball sampling to identify eligible study participants. The survey explored the presence of an MPS, procedure volumes, patient safety, and educational practices. For MPSs, we explored onboarding, staffing, skill maintenance, funding, and barriers to growth. RESULTS: Forty institutions (response rate 97.5%), represented by members of the Procedural Research and Innovation for Medical Educators (PRIME) consortium, participated in the survey. MPSs were found in 75% of the surveyed institutions. Most MPSs (97%) involved trainees and were staffed by internists (100%) who often had additional clinical duties (70%). The majority (83%) of MPSs used checklists and procedural safety guidelines, but only 53% had a standardized process for tracking complications. There was significant variability in determining procedural competency and supervising trainees. Groups with an MPS reported higher procedure volume than those without. CONCLUSIONS: MPSs were highly prevalent among the participating institutions, offered a broad array of bedside procedures, and often included trainees. There was high variability in funding models, procedure volumes, patient safety practices, and skill maintenance requirements.


Subjects
Internal Medicine, Humans, Cross-Sectional Studies, Surveys and Questionnaires, Canada, United States, Academic Medical Centers, Patient Safety, Hospitalists, Clinical Competence
3.
JAMA Netw Open; 7(10): e2440969, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39466245

ABSTRACT

Importance: Large language models (LLMs) have shown promise in their performance on both multiple-choice and open-ended medical reasoning examinations, but it remains unknown whether the use of such tools improves physician diagnostic reasoning. Objective: To assess the effect of an LLM on physicians' diagnostic reasoning compared with conventional resources. Design, Setting, and Participants: A single-blind randomized clinical trial was conducted from November 29 to December 29, 2023. Using remote video conferencing and in-person participation across multiple academic medical institutions, physicians with training in family medicine, internal medicine, or emergency medicine were recruited. Intervention: Participants were randomized to either access the LLM in addition to conventional diagnostic resources or conventional resources only, stratified by career stage. Participants were allocated 60 minutes to review up to 6 clinical vignettes. Main Outcomes and Measures: The primary outcome was performance on a standardized rubric of diagnostic performance based on differential diagnosis accuracy, appropriateness of supporting and opposing factors, and next diagnostic evaluation steps, validated and graded via blinded expert consensus. Secondary outcomes included time spent per case (in seconds) and final diagnosis accuracy. All analyses followed the intention-to-treat principle. A secondary exploratory analysis evaluated the standalone performance of the LLM by comparing the primary outcomes between the LLM alone group and the conventional resource group. Results: Fifty physicians (26 attendings, 24 residents; median years in practice, 3 [IQR, 2-8]) participated virtually as well as at 1 in-person site. The median diagnostic reasoning score per case was 76% (IQR, 66%-87%) for the LLM group and 74% (IQR, 63%-84%) for the conventional resources-only group, with an adjusted difference of 2 percentage points (95% CI, -4 to 8 percentage points; P = .60). The median time spent per case for the LLM group was 519 (IQR, 371-668) seconds, compared with 565 (IQR, 456-788) seconds for the conventional resources group, with a time difference of -82 (95% CI, -195 to 31; P = .20) seconds. The LLM alone scored 16 percentage points (95% CI, 2-30 percentage points; P = .03) higher than the conventional resources group. Conclusions and Relevance: In this trial, the availability of an LLM to physicians as a diagnostic aid did not significantly improve clinical reasoning compared with conventional resources. The LLM alone demonstrated higher performance than both physician groups, indicating the need for technology and workforce development to realize the potential of physician-artificial intelligence collaboration in clinical practice. Trial Registration: ClinicalTrials.gov Identifier: NCT06157944.
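For readers who want to reproduce this kind of comparison, here is a minimal Python sketch of an unadjusted bootstrap confidence interval for a difference in median scores. The per-case scores are simulated; the trial itself reported a model-adjusted difference from blinded expert grading, so this is only an illustration of the general technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-case diagnostic reasoning scores (percent) for each arm.
llm_group = rng.normal(76, 12, size=150)
conventional = rng.normal(74, 12, size=150)

def median_diff(a, b):
    return np.median(a) - np.median(b)

# Percentile bootstrap: resample each arm with replacement 5,000 times.
boot = np.array([
    median_diff(rng.choice(llm_group, llm_group.size, replace=True),
                rng.choice(conventional, conventional.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Difference in medians: {median_diff(llm_group, conventional):.1f} "
      f"percentage points (95% CI {lo:.1f} to {hi:.1f})")
```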


Subjects
Clinical Competence, Clinical Reasoning, Humans, Female, Male, Single-Blind Method, Adult, Clinical Competence/statistics & numerical data, Language, Physicians/psychology, Physicians/statistics & numerical data
4.
medRxiv; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39148822

ABSTRACT

Importance: Large language model (LLM) artificial intelligence (AI) systems have shown promise in diagnostic reasoning, but their utility in management reasoning with no clear right answers is unknown. Objective: To determine whether LLM assistance improves physician performance on open-ended management reasoning tasks compared to conventional resources. Design: Prospective, randomized controlled trial conducted from 30 November 2023 to 21 April 2024. Setting: Multi-institutional study from Stanford University, Beth Israel Deaconess Medical Center, and the University of Virginia involving physicians from across the United States. Participants: 92 practicing attending physicians and residents with training in internal medicine, family medicine, or emergency medicine. Intervention: Five expert-developed clinical case vignettes were presented with multiple open-ended management questions and scoring rubrics created through a Delphi process. Physicians were randomized to use either GPT-4 via ChatGPT Plus in addition to conventional resources (e.g., UpToDate, Google), or conventional resources alone. Main Outcomes and Measures: The primary outcome was difference in total score between groups on expert-developed scoring rubrics. Secondary outcomes included domain-specific scores and time spent per case. Results: Physicians using the LLM scored higher compared to those using conventional resources (mean difference 6.5%, 95% CI 2.7-10.2, p<0.001). Significant improvements were seen in management decisions (6.1%, 95% CI 2.5-9.7, p=0.001), diagnostic decisions (12.1%, 95% CI 3.1-21.0, p=0.009), and case-specific (6.2%, 95% CI 2.4-9.9, p=0.002) domains. GPT-4 users spent more time per case (mean difference 119.3 seconds, 95% CI 17.4-221.2, p=0.02). There was no significant difference between GPT-4-augmented physicians and GPT-4 alone (-0.9%, 95% CI -9.0 to 7.2, p=0.8). Conclusions and Relevance: LLM assistance improved physician management reasoning compared to conventional resources, with particular gains in contextual and patient-specific decision-making. These findings indicate that LLMs can augment management decision-making in complex cases. Trial registration: ClinicalTrials.gov Identifier: NCT06208423; https://classic.clinicaltrials.gov/ct2/show/NCT06208423.
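As a companion to the abstract's headline result, the sketch below shows how a mean difference with a 95% CI can be computed from two arms' rubric scores using a Welch t-test. The scores are simulated to loosely echo the reported 6.5-point difference; the study's actual analysis may have adjusted for case and physician effects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated total rubric scores (percent of available points) per physician.
gpt4_arm = rng.normal(56.5, 10, size=46)
conventional_arm = rng.normal(50.0, 10, size=46)

# Welch two-sample t-test with a 95% CI for the mean difference
# (requires SciPy >= 1.10 for TtestResult.confidence_interval).
res = stats.ttest_ind(gpt4_arm, conventional_arm, equal_var=False)
ci = res.confidence_interval(confidence_level=0.95)
diff = gpt4_arm.mean() - conventional_arm.mean()
print(f"Mean difference: {diff:.1f} points "
      f"(95% CI {ci.low:.1f} to {ci.high:.1f}), p = {res.pvalue:.3g}")
```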

5.
J Hosp Med; 19(4): 259-266, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38472645

ABSTRACT

BACKGROUND: In-hospital consultation is essential for patient care. We previously proposed a framework of seven specific consultation types to classify consult requests in order to improve communication, workflow, and provider satisfaction. METHODS: The aim of this multimethods study was to evaluate the applicability of the consult classification framework to real internal medicine (IM) consults. We sought validity evidence using Kane's validity model through focus groups and the classification of consult requests from five IM specialties. Participants attended five 1-hour semi-structured focus groups that were recorded, transcribed, and coded to thematic saturation. For each specialty, three specialists and three hospitalists categorized 100 (total 500) random anonymized consult requests. The primary outcome was concordance in the classification of consult requests, defined as the sum of partial concordance and perfect concordance, where 4-5/6 and 6/6 participants, respectively, classified a consult in the same category. We used χ2 tests to compare concordance rates across specialties and between specialists and hospitalists. RESULTS: Five major themes were identified in the qualitative analysis of the focus groups: (1) the consult question, (2) interpersonal interactions, (3) value, (4) miscommunication, and (5) consult framework application, barriers, and iterative development. In the quantitative analysis, the overall concordance rate was 88.8% (95% confidence interval [CI]: 85.7-91.4), and perfect concordance was 46.6% (95% CI: 42.2-51.1). Concordance differed significantly between hospitalists and specialists overall (p = .01), with a higher proportion of hospitalists achieving perfect concordance compared with specialists (67.2% vs. 57.8%, p = .002). CONCLUSIONS: The consult classification framework was found to be applicable to consults from five different IM specialties and could improve communication and education.
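The concordance definition in this abstract is mechanical enough to compute directly. Below is a minimal Python sketch, on simulated classifications, of perfect concordance (all 6 reviewers choose the same category) and partial concordance (4-5 of 6 agree); the data and the 60/100 split are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated classifications: one row per consult request, one column per
# reviewer (three specialists, three hospitalists); values are the chosen
# category (0-6) from the seven-type consult framework.
labels = rng.integers(0, 7, size=(100, 6))
labels[:60] = labels[:60, :1]  # force the first 60 rows to perfect agreement

def largest_agreeing_group(row):
    """Size of the biggest set of reviewers who chose the same category."""
    _, counts = np.unique(row, return_counts=True)
    return counts.max()

agree = np.apply_along_axis(largest_agreeing_group, 1, labels)
perfect = np.mean(agree == 6)                    # 6/6 reviewers agree
partial = np.mean((agree >= 4) & (agree <= 5))   # 4-5/6 reviewers agree
print(f"Perfect: {perfect:.1%}, partial: {partial:.1%}, "
      f"overall concordance: {perfect + partial:.1%}")
```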


Subjects
Internal Medicine, Referral and Consultation, Humans, Focus Groups
6.
J Hosp Med; 16(4): 230-235, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33734979

ABSTRACT

BACKGROUND: As general internists practicing in the inpatient setting, hospitalists at many institutions are expected to perform invasive bedside procedures, as defined by professional standards. In reality, hospitalists are performing fewer procedures and increasingly referring them to specialists, which threatens their ability to maintain procedural skills. The discrepancy between expectations and reality, especially when hospitalists may be fully credentialed to perform procedures, poses significant risks to patients because of the morbidity and mortality associated with complications, some of which derive from practitioner inexperience. METHODS: We performed a structured search of the peer-reviewed literature to identify articles focused on hospitalists performing procedures. RESULTS: Our synthesis of the literature characterizes contributors to hospitalists' procedural competency and discusses: (1) temporal trends for procedures performed by hospitalists and their associated referral patterns; (2) data comparing the use and clinical outcomes of procedures performed by hospitalists versus specialists; (3) the lack of nationwide standardization of hospitalist procedural training and credentialing; and (4) the role of medical procedure services, although limited in supportive evidence, in concentrating procedural skill and mitigating risk in the hands of a few well-trained hospitalists. CONCLUSION: We conclude with recommendations for hospital medicine groups to ensure the safety of hospitalized patients undergoing bedside procedures.


Subjects
Hospital Medicine, Hospitalists, Credentialing, Hospitalization, Humans, Referral and Consultation