1.
Expert Rev Anti Infect Ther ; : 1-12, 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39225411

ABSTRACT

INTRODUCTION: Cytomegalovirus (CMV) is a classic opportunistic infection in transplant recipients. Treatment-refractory CMV infections are of concern, with growing identification of strains that have developed genetic mutations which confer resistance to standard antiviral therapy. Resistant and refractory CMV infections are associated with worse patient outcomes, prolonged hospitalization, and increased healthcare costs. AREAS COVERED: This article provides a comprehensive practical overview of resistant and refractory CMV infections in transplant recipients. We review the updated definitions for these infections, antiviral pharmacology, mechanisms of drug resistance, diagnostic workup, management strategies, and host-related factors including immune optimization. EXPERT OPINION: Resistant and refractory CMV infections are a significant contributor to post-transplant morbidity and mortality. This is likely the result of a combination of prolonged antiviral exposure and active viral replication in the setting of intensive pharmacologic immunosuppression. Successful control of resistant and refractory infections in transplant recipients requires a combination of immunomodulatory optimization and appropriate antiviral drug choice with sufficient treatment duration.

2.
Front Artif Intell ; 7: 1452469, 2024.
Article in English | MEDLINE | ID: mdl-39315245

ABSTRACT

Background: Efficient triage of patient communications is crucial for timely medical attention and improved care. This study evaluates ChatGPT's accuracy in categorizing nephrology patient inbox messages, assessing its potential in outpatient settings. Methods: One hundred and fifty simulated patient inbox messages were created based on cases typically encountered in everyday practice at a nephrology outpatient clinic. These messages were triaged as non-urgent, urgent, and emergent by two nephrologists. The messages were then submitted to ChatGPT-4 for independent triage into the same categories. The inquiry process was performed twice with a two-week period in between. ChatGPT responses were graded as correct (agreement with physicians), overestimation (higher priority), or underestimation (lower priority). Results: In the first trial, ChatGPT correctly triaged 140 (93%) messages, overestimated the priority of 4 messages (3%), and underestimated the priority of 6 messages (4%). In the second trial, it correctly triaged 140 (93%) messages, overestimated the priority of 9 (6%), and underestimated the priority of 1 (1%). The accuracy did not depend on the urgency level of the message (p = 0.19). The internal agreement of ChatGPT responses was 92% with an intra-rater Kappa score of 0.88. Conclusion: ChatGPT-4 demonstrated high accuracy in triaging nephrology patient messages, highlighting the potential for AI-driven triage systems to enhance operational efficiency and improve patient care in outpatient clinics.
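As a rough illustration of the grading scheme and consistency measure described in this abstract, the sketch below scores model triage labels against physician labels as correct, overestimation, or underestimation, and computes the intra-rater kappa between two rounds with scikit-learn. The labels are illustrative placeholders, not the study's messages or results.

```python
from sklearn.metrics import cohen_kappa_score

LEVELS = {"non-urgent": 0, "urgent": 1, "emergent": 2}  # assumed ordinal coding

# Illustrative placeholder labels (the study used 150 simulated messages).
physician = ["urgent", "non-urgent", "emergent", "urgent", "non-urgent"]
round_1   = ["urgent", "urgent",     "emergent", "urgent", "non-urgent"]
round_2   = ["urgent", "non-urgent", "emergent", "urgent", "urgent"]

def grade(model_label: str, reference_label: str) -> str:
    """Grade a model triage label against the physician triage."""
    diff = LEVELS[model_label] - LEVELS[reference_label]
    if diff == 0:
        return "correct"
    return "overestimation" if diff > 0 else "underestimation"

grades = [grade(m, p) for m, p in zip(round_1, physician)]
accuracy = grades.count("correct") / len(grades)

# Intra-rater consistency of the model between the two rounds.
kappa = cohen_kappa_score(round_1, round_2)
print(f"accuracy={accuracy:.2f}, intra-rater kappa={kappa:.2f}")
```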

3.
Ren Fail ; 46(2): 2402075, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39258385

ABSTRACT

INTRODUCTION: ChatGPT, a state-of-the-art large language model, has shown potential in analyzing images and providing accurate information. This study aimed to explore ChatGPT-4 as a tool for identifying commonly prescribed nephrology medications across different versions and testing dates. METHODS: Twenty-five nephrology medications were obtained from an institutional pharmacy. High-quality images of each medication were captured using an iPhone 13 Pro Max and uploaded to ChatGPT-4 with the query, 'What is this medication?' The accuracy of ChatGPT-4's responses was assessed for medication name, dosage, and imprint. The process was repeated after 2 weeks to evaluate consistency across different versions, including GPT-4, GPT-4 Legacy, and GPT-4o. RESULTS: ChatGPT-4 correctly identified 22 out of 25 (88%) medications across all versions. However, it misidentified Hydrochlorothiazide, Nifedipine, and Spironolactone due to misreading imprints. For instance, Nifedipine ER 90 mg was mistaken for Metformin Hydrochloride ER 500 mg because 'NF 06' was misread as 'NF 05'. Hydrochlorothiazide 50 mg was confused with the 25 mg version due to imprint errors, and Spironolactone 25 mg was misidentified as Naproxen Sodium or Diclofenac Sodium. Despite these errors, ChatGPT-4 showed 100% consistency when retested, correcting misidentifications after receiving feedback on the correct imprints. CONCLUSION: ChatGPT-4 shows strong potential in identifying nephrology medications from self-captured images, though challenges with difficult-to-read imprints remain. Providing feedback improved accuracy, suggesting ChatGPT-4 could be a valuable tool in digital health for medication identification. Future research should enhance the model's ability to distinguish similar imprints and explore broader integration into digital health platforms.


Subjects
Artificial Intelligence, Humans, Smartphone
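The study photographed pills with a phone and submitted them through ChatGPT-4's interface. Purely as a hypothetical sketch, the snippet below shows how the same 'What is this medication?' query could be issued programmatically against a vision-capable model via the OpenAI Python SDK; the model name, file name, and prompt handling here are assumptions, not the authors' actual workflow.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def identify_medication(image_path: str) -> str:
    """Send a pill photograph with the study's question to a vision-capable model."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; the study used ChatGPT-4 via its app
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is this medication?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# print(identify_medication("pill_photo.jpg"))  # hypothetical file name
```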
4.
Front Artif Intell ; 7: 1457586, 2024.
Article in English | MEDLINE | ID: mdl-39286549

ABSTRACT

Background: Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI tools such as ChatGPT could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing. Methods: Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024. Results: In the first round, the accuracy of ChatGPT in assigning correct diagnosis codes was 91% for version 3.5 and 99% for version 4.0. In the second round, the accuracy was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had higher accuracy than ChatGPT 3.5 (p = 0.02 and 0.002 for the first and second rounds, respectively). The accuracy did not differ significantly between the two rounds (p > 0.05). Conclusion: ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, even the small percentage of errors underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.
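The abstract does not state which statistical test produced the reported p-values. As one plausible approach, the sketch below compares the first-round accuracies of the two versions with Fisher's exact test on a 2x2 correct/incorrect table; the choice of test is an assumption, though it gives a p-value close to the reported 0.02.

```python
from scipy.stats import fisher_exact

n_cases = 100
correct_v35, correct_v40 = 91, 99  # first-round accuracies reported in the abstract

table = [
    [correct_v35, n_cases - correct_v35],  # ChatGPT 3.5: correct vs. incorrect
    [correct_v40, n_cases - correct_v40],  # ChatGPT 4.0: correct vs. incorrect
]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```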

5.
Digit Health ; 10: 20552076241277458, 2024.
Article in English | MEDLINE | ID: mdl-39221085

ABSTRACT

Background: Professional opinion polling has become a popular means of seeking advice for complex nephrology questions in the #AskRenal community on X. ChatGPT is a large language model with remarkable problem-solving capabilities, but its ability to provide solutions for real-world clinical scenarios remains unproven. This study seeks to evaluate how closely ChatGPT's responses align with current prevailing medical opinions in nephrology. Methods: Nephrology polls from X were submitted to ChatGPT-4, which generated answers without prior knowledge of the poll outcomes. Its responses were compared to the poll results (inter-rater) and a second set of responses given after a one-week interval (intra-rater) using Cohen's kappa statistic (κ). Subgroup analysis was performed based on question subject matter. Results: Our analysis comprised two rounds of testing ChatGPT on 271 nephrology-related questions. In the first round, ChatGPT's responses agreed with poll results for 163 of the 271 questions (60.2%; κ = 0.42, 95% CI: 0.38-0.46). In the second round, conducted to assess reproducibility, agreement improved slightly to 171 out of 271 questions (63.1%; κ = 0.46, 95% CI: 0.42-0.50). Comparison of ChatGPT's responses between the two rounds demonstrated high internal consistency, with agreement in 245 out of 271 responses (90.4%; κ = 0.86, 95% CI: 0.82-0.90). Subgroup analysis revealed stronger performance in the combined areas of homeostasis, nephrolithiasis, and pharmacology (κ = 0.53, 95% CI: 0.47-0.59 in both rounds), compared to other nephrology subfields. Conclusion: ChatGPT-4 demonstrates modest capability in replicating prevailing professional opinion in nephrology polls overall, with varying performance levels between question topics and excellent internal consistency. This study provides insights into the potential and limitations of using ChatGPT in medical decision making.
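The abstract reports kappa values with 95% confidence intervals but does not say how the intervals were obtained. The sketch below shows one common option, a percentile bootstrap over questions, applied to randomly generated placeholder answers rather than the actual poll data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 271  # number of poll questions in the study

# Placeholder data: coded poll majority answers and model answers that agree
# with the poll on a majority of questions (illustrative only).
poll = rng.integers(0, 4, size=n)
chatgpt = np.where(rng.random(n) < 0.6, poll, rng.integers(0, 4, size=n))

kappa = cohen_kappa_score(poll, chatgpt)

# Percentile bootstrap over questions for a 95% confidence interval.
boot = [
    cohen_kappa_score(poll[idx], chatgpt[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"kappa = {kappa:.2f} (95% CI {low:.2f} to {high:.2f})")
```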

6.
Front Cardiovasc Med ; 10: 1167256, 2023.
Article in English | MEDLINE | ID: mdl-37180798

ABSTRACT

Hypertrophic cardiomyopathy (HCM) is a heritable cardiomyopathy that is predominantly caused by pathogenic mutations in sarcomeric proteins. Here we report two individuals, a mother and her daughter, both heterozygous carriers of the same HCM-causing mutation in cardiac Troponin T (TNNT2). Despite sharing an identical pathogenic variant, the two individuals had very different manifestations of the disease. While one patient presented with sudden cardiac death, recurrent tachyarrhythmia, and findings of massive left ventricular hypertrophy, the other patient manifested with extensive abnormal myocardial delayed enhancement despite normal ventricular wall thickness and has remained relatively asymptomatic. Recognition of the marked incomplete penetrance and variable expressivity possible in a single TNNT2-positive family has potential to guide HCM patient care.
