1.
Cardiovasc Diabetol ; 23(1): 204, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38879473

ABSTRACT

BACKGROUND: Diabetic kidney disease is an established risk factor for heart failure. However, the impact of incident heart failure on the subsequent risk of renal failure has not been systematically assessed in the diabetic population. We sought to study the risk of progression to end-stage kidney disease (ESKD) after incident heart failure in Asian patients with type 2 diabetes. METHODS: In this prospective cohort study, 1985 outpatients with type 2 diabetes from a regional hospital and a primary care facility in Singapore were followed for a median of 8.6 (interquartile range 6.2-9.6) years. ESKD was defined as a composite of progression to sustained eGFR below 15 mL/min/1.73 m², maintenance dialysis or renal death, whichever occurred first. RESULTS: 180 incident heart failure events and 181 incident ESKD events were identified during follow-up. Of the 181 ESKD events, 38 (21%) occurred after incident heart failure. Compared to those who did not progress to ESKD after incident heart failure (n = 142), participants who progressed to ESKD after heart failure occurrence were younger and had higher HbA1c and higher urine albumin-to-creatinine ratios at baseline. The excess risk of ESKD manifested immediately after heart failure occurrence, persisted for two years and attenuated thereafter. Cox regression suggested that, compared to counterparts with no heart failure event, participants with incident heart failure had a 9.6-fold (95% CI 5.0-18.3) increased risk of incident ESKD after adjustment for baseline cardio-renal risk factors including eGFR and albuminuria. Heart failure with preserved ejection fraction appeared to carry a higher risk of ESKD than heart failure with reduced ejection fraction (adjusted HR 13.7 [6.3-29.5] versus 6.5 [2.3-18.6]). CONCLUSION: Incident heart failure confers a high risk of progression to ESKD in individuals with type 2 diabetes.
Our data highlight the need for intensive surveillance of kidney function after incident heart failure, especially within the first two years after heart failure diagnosis.
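The adjusted hazard ratio above comes with an asymmetric confidence interval because Cox model estimates are symmetric only on the log scale. A minimal sketch of that relationship, using the abstract's reported values (the standard error is back-calculated for illustration; it is not reported in the source):

```python
import math

# Reported: adjusted HR 9.6, 95% CI 5.0-18.3 (asymmetric on the natural scale).
hr, lo, hi = 9.6, 5.0, 18.3

# On the log scale the interval is symmetric: log HR +/- 1.96 * SE.
log_hr = math.log(hr)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # back-calculated SE

# Exponentiating the log-scale interval approximately recovers the
# reported bounds (small discrepancies reflect rounding in the source).
ci = (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se))
print(round(ci[0], 1), round(ci[1], 1))
```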


Subject(s)
Diabetes Mellitus, Type 2; Diabetic Nephropathies; Disease Progression; Glomerular Filtration Rate; Heart Failure; Kidney Failure, Chronic; Kidney; Humans; Heart Failure/epidemiology; Heart Failure/diagnosis; Heart Failure/physiopathology; Diabetes Mellitus, Type 2/diagnosis; Diabetes Mellitus, Type 2/epidemiology; Male; Female; Middle Aged; Risk Factors; Aged; Prospective Studies; Incidence; Time Factors; Diabetic Nephropathies/epidemiology; Diabetic Nephropathies/diagnosis; Diabetic Nephropathies/physiopathology; Kidney Failure, Chronic/epidemiology; Kidney Failure, Chronic/diagnosis; Kidney Failure, Chronic/physiopathology; Risk Assessment; Singapore/epidemiology; Kidney/physiopathology; Prognosis; Biomarkers/blood
2.
Eur Spine J ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489044

ABSTRACT

BACKGROUND CONTEXT: Clinical guidelines, developed in concordance with the literature, are often used to guide surgeons' clinical decision-making. Recent advances in large language models and artificial intelligence (AI) in the medical field come with exciting potential. OpenAI's generative AI model, known as ChatGPT, can quickly synthesize information and generate responses grounded in medical literature, which may prove to be a useful tool in clinical decision-making for spine care. The current literature has yet to investigate the ability of ChatGPT to assist clinical decision-making with regard to degenerative spondylolisthesis. PURPOSE: The study aimed to assess ChatGPT's concordance with the recommendations set forth by the North American Spine Society (NASS) Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis and to assess ChatGPT's accuracy within the context of the most recent literature. METHODS: ChatGPT-3.5 and 4.0 were prompted with questions from the NASS Clinical Guideline for the Diagnosis and Treatment of Degenerative Spondylolisthesis, and their recommendations were graded as "concordant" or "nonconcordant" relative to those put forth by NASS. A response was considered "concordant" when ChatGPT generated a recommendation that accurately reproduced all major points made in the NASS recommendation. Any responses graded "nonconcordant" were further stratified into two subcategories, "insufficient" or "over-conclusive," to provide further insight into the grading rationale. Responses from GPT-3.5 and 4.0 were compared using chi-squared tests. RESULTS: ChatGPT-3.5 answered 13 of NASS's 28 total clinical questions in concordance with NASS's guidelines (46.4%).
The categorical breakdown is as follows: Definitions and Natural History (1/1, 100%), Diagnosis and Imaging (1/4, 25%), Outcome Measures for Medical Intervention and Surgical Treatment (0/1, 0%), Medical and Interventional Treatment (4/6, 66.7%), Surgical Treatment (7/14, 50%), and Value of Spine Care (0/2, 0%). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-3.5 generated a concordant response 66.7% of the time (6/9). However, ChatGPT-3.5's concordance dropped to 36.8% for clinical questions on which NASS did not provide a clear recommendation (7/19). A further breakdown of ChatGPT-3.5's nonconcordance with the guidelines revealed that the vast majority of its inaccurate recommendations were "over-conclusive" (12/15, 80%) rather than "insufficient" (3/15, 20%). ChatGPT-4.0 answered 19 (67.9%) of the 28 total questions in concordance with NASS guidelines (P = 0.177). When NASS indicated there was sufficient evidence to offer a clear recommendation, ChatGPT-4.0 generated a concordant response 66.7% of the time (6/9). ChatGPT-4.0's concordance held at 68.4% for clinical questions on which NASS did not provide a clear recommendation (13/19, P = 0.104). CONCLUSIONS: This study sheds light on the duality of LLM applications within clinical settings: accuracy and utility in some contexts versus inaccuracy and risk in others. ChatGPT was concordant for most clinical questions for which NASS offered recommendations. However, for questions on which NASS did not offer best practices, ChatGPT generated answers that were either too general or inconsistent with the literature, and it even fabricated data and citations. Thus, clinicians should exercise extreme caution when consulting ChatGPT for clinical recommendations, taking care to verify its reliability within the context of recent literature.
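The reported P = 0.177 for 13/28 versus 19/28 concordant responses is consistent with a chi-squared test with Yates continuity correction on the 2×2 table (the correction choice is our inference; the abstract only names "Chi-squared tests"). A stdlib-only sketch:

```python
import math

def yates_chi2_2x2(a, b, c, d):
    """Chi-squared test with Yates continuity correction for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p-value) with 1 df."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n
        stat += (abs(obs - exp) - 0.5) ** 2 / exp
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# GPT-3.5: 13 concordant / 15 nonconcordant; GPT-4.0: 19 / 9 (from the abstract)
stat, p = yates_chi2_2x2(13, 15, 19, 9)
print(round(stat, 2), round(p, 3))  # p lands close to the reported 0.177
```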

3.
J Orthop ; 53: 27-33, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38450060

ABSTRACT

Background: Resident training programs in the US use the Orthopaedic In-Training Examination (OITE), developed by the American Academy of Orthopaedic Surgeons (AAOS), to assess the current knowledge of their residents and to identify residents at risk of failing the American Board of Orthopaedic Surgery (ABOS) examination. Optimal strategies for OITE preparation are constantly being explored. There may be a role for large language models (LLMs) in orthopaedic resident education. ChatGPT, an LLM launched in late 2022, has demonstrated the ability to produce accurate, detailed answers, potentially enabling it to aid in medical education and clinical decision-making. The purpose of this study is to evaluate the performance of ChatGPT on Orthopaedic In-Training Examinations, using Self-Assessment Examination (SAE) questions from the AAOS database and approved literature as a proxy for the Orthopaedic Board Examination. Methods: 301 SAE questions from the AAOS database and associated AAOS literature were input into ChatGPT's interface in a question and multiple-choice format, and the answers were analyzed to determine which choice was selected. A new chat was used for every question. All answers were recorded, categorized, and compared to the answers given by the OITE and SAE exams, noting whether each was right or wrong. Results: Of the 301 questions asked, ChatGPT answered 183 (60.8%) correctly. The subjects with the highest percentage of correct answers were basic science (81%), oncology (72.7%), shoulder and elbow (71.9%), and sports (71.4%). The questions were further subdivided into three groups: management, diagnosis, and knowledge recall. There were 86 management questions, of which 47 were correct (54.7%); 45 diagnosis questions, with 32 correct (71.1%); and 168 knowledge recall questions, with 102 correct (60.7%).
Conclusions: ChatGPT has the potential to provide orthopedic educators and trainees with accurate clinical conclusions for the majority of board-style questions, although its reasoning should be carefully analyzed for accuracy and clinical validity. As such, its usefulness in a clinical educational context is currently limited but rapidly evolving. Clinical relevance: ChatGPT can access a multitude of medical data and may help provide accurate answers to clinical questions.

4.
BJUI Compass ; 5(4): 405-425, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38633827

ABSTRACT

Background: Racial disparities in oncological outcomes resulting from differences in social determinants of health (SDOH) and tumour biology are well described in prostate cancer (PCa), but similar inequities exist in bladder cancer (BCa) and renal cell carcinoma (RCC). Precision medicine (PM) aims to provide personalized treatment based on individual patient characteristics and has the potential to reduce these inequities in GU cancers. Objective: This article aims to review the current evidence outlining racial disparities in GU cancers and to explore studies demonstrating improved oncological outcomes when PM is applied to racially diverse patient populations. Evidence acquisition: Evidence was obtained from PubMed and Web of Science using the keywords prostate, bladder and renal cancer, racial disparity, and precision medicine. Because limited studies were found, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were not applied; rather, related articles were studied to explore existing debates, identify the current status and speculate on future applications. Results: Evidence suggests that addressing SDOH for PCa can reverse racial inequities in oncological outcomes, although differences in incidence remain. Similar disparities are seen in BCa and RCC, and it would be reasonable to suggest that achieving parity in SDOH for all races would do the same. Research applying a PM approach to different ethnicities is lacking, although in African Americans (AAs) with metastatic castrate-resistant prostate cancer (mCRPCa), better outcomes have been shown with androgen receptor inhibitors, radium-223 and sipuleucel-T. Exploiting the abscopal effect with targeted radiation therapy (RT) and immunotherapy has promise but requires further study, as does defining actionable mutations in specific patient groups to tailor treatments as appropriate.
Conclusion: For all GU cancers, the historical underrepresentation of ethnic minorities in clinical trials persists, and there is an urgent need for recruitment strategies to address this. PM is a promising development with the potential to reduce inequities in GU cancers; however, both an improved understanding of race-specific tumour biology and enhanced recruitment of minority populations into clinical trials are required. Without these, the benefits of PM will be limited.

5.
BJUI Compass ; 5(3): 334-344, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38481668

ABSTRACT

Particle therapy and radiopharmaceuticals are emerging fields in the treatment of genitourinary cancers. With these novel techniques and the ever-growing immunotherapy options, the combinations of these therapies have the potential to improve current cancer cure rates. However, the most effective sequence and combination of these therapies is unknown and is a question that is actively being explored in multiple ongoing clinical trials. Here, we review the immunological effects of particle therapy and the available radiopharmaceuticals and discuss how best to combine these therapies.

6.
J Neurosurg Spine ; : 1-11, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38941643

ABSTRACT

OBJECTIVE: The objective of this study was to assess the safety and accuracy of ChatGPT recommendations in comparison to the evidence-based guidelines of the North American Spine Society (NASS) for the diagnosis and treatment of cervical radiculopathy. METHODS: ChatGPT was prompted with questions from the 2011 NASS clinical guidelines for cervical radiculopathy and evaluated for concordance. Selected key phrases within the NASS guidelines were identified. Completeness was measured as the number of overlapping key phrases between ChatGPT responses and NASS guidelines divided by the total number of key phrases. A senior spine surgeon evaluated the ChatGPT responses for safety and accuracy. ChatGPT responses were further evaluated for readability, similarity, and consistency. Flesch Reading Ease scores and Flesch-Kincaid reading levels were measured to assess readability. The Jaccard Similarity Index was used to assess agreement between ChatGPT responses and NASS clinical guidelines. RESULTS: A total of 100 key phrases were identified across 14 NASS clinical guidelines. The mean completeness of ChatGPT-4 was 46%, while ChatGPT-3.5 yielded a completeness of 34%; ChatGPT-4 outperformed ChatGPT-3.5 by 12 percentage points. ChatGPT-4.0 outputs had a mean Flesch Reading Ease score of 15.24, which is very difficult to read, requiring a college graduate education to understand. ChatGPT-3.5 outputs had a lower mean Flesch Reading Ease score of 8.73, indicating that they are even more difficult to read and require a professional education level. However, both versions of ChatGPT were more accessible than the NASS guidelines, which had a mean Flesch Reading Ease score of 4.58. Furthermore, with the NASS guidelines as a reference, ChatGPT-3.5 registered a mean ± SD Jaccard Similarity Index of 0.20 ± 0.078, while ChatGPT-4 had a mean of 0.18 ± 0.068. Based on physician evaluation, outputs from ChatGPT-3.5 and ChatGPT-4.0 were safe 100% of the time.
Thirteen of 14 (92.9%) ChatGPT-3.5 responses and 14 of 14 (100%) ChatGPT-4.0 responses were in agreement with current best clinical practices for cervical radiculopathy according to a senior spine surgeon. CONCLUSIONS: ChatGPT models were able to provide safe and accurate but incomplete responses to NASS clinical guideline questions about cervical radiculopathy. Although the authors' results suggest that improvements are required before ChatGPT can be reliably deployed in a clinical setting, future versions of the LLM hold promise as an updated reference for guidelines on cervical radiculopathy. Future versions must prioritize accessibility and comprehensibility for a diverse audience.
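The completeness and Jaccard metrics described in the methods above reduce to simple set operations on extracted key phrases. A minimal sketch with hypothetical phrase sets (the actual NASS key phrases are not reproduced in the abstract):

```python
def completeness(response_phrases, guideline_phrases):
    """Fraction of guideline key phrases that also appear in the response."""
    overlap = set(response_phrases) & set(guideline_phrases)
    return len(overlap) / len(set(guideline_phrases))

def jaccard(a, b):
    """Jaccard Similarity Index: |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical example: 4 guideline key phrases, 2 recovered by the model.
guideline = {"mri first-line", "selective nerve root block",
             "anterior discectomy", "physical therapy"}
response = {"mri first-line", "physical therapy", "opioid taper"}

print(completeness(response, guideline))          # 2 of 4 phrases -> 0.5
print(round(jaccard(response, guideline), 2))     # 2 shared of 5 total -> 0.4
```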

7.
J Orthop Res ; 42(8): 1696-1709, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38499500

ABSTRACT

Pannexin 3 (Panx3) is a glycoprotein that forms mechanosensitive channels expressed in chondrocytes and annulus fibrosus cells of the intervertebral disc (IVD). Evidence suggests that Panx3 plays contrasting roles in traumatic versus aging osteoarthritis (OA) and intervertebral disc degeneration (IDD). However, whether its deletion influences the response of joint tissue to forced use is unknown. The purpose of this study was to determine whether Panx3 deletion in mice causes increased knee joint OA and IDD after forced treadmill running. Male and female wildtype (WT) and Panx3 knockout (KO) mice were randomized to either a no-exercise group (sedentary; SED) or daily forced treadmill running (forced exercise; FEX) from 24 to 30 weeks of age. Knee cartilage and IVD histopathology were evaluated by histology, while tibial secondary ossification centers were analyzed using microcomputed tomography (µCT). Both male and female Panx3 KO mice developed larger superficial defects of the tibial cartilage after forced treadmill running compared with SED WT mice. Additionally, Panx3 KO mice developed reduced bone volume, and female Panx3 KO mice had lengthening of the lateral tubercle at the intercondylar eminence. In the lower lumbar spine, both male and female Panx3 KO mice developed histopathological features of IDD after running compared to SED WT mice. These findings suggest that the combination of Panx3 deletion and forced treadmill running induces OA and causes histopathological changes associated with degeneration of the IVDs in mice.


Subject(s)
Connexins; Intervertebral Disc Degeneration; Mice, Knockout; Osteoarthritis, Knee; Animals; Female; Osteoarthritis, Knee/etiology; Osteoarthritis, Knee/pathology; Connexins/genetics; Connexins/deficiency; Male; Intervertebral Disc Degeneration/pathology; Intervertebral Disc Degeneration/etiology; Intervertebral Disc Degeneration/genetics; Mice, Inbred C57BL; Running; Mice; Physical Conditioning, Animal; Cartilage, Articular/pathology; Nerve Tissue Proteins/genetics; Nerve Tissue Proteins/deficiency
8.
Radiat Oncol ; 19(1): 69, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822385

ABSTRACT

BACKGROUND: Multiple artificial intelligence (AI)-based autocontouring solutions have become available, each promising high accuracy and time savings compared with manual contouring. Before implementing AI-driven autocontouring into clinical practice, three commercially available CT-based solutions were evaluated. MATERIALS AND METHODS: The following solutions were evaluated in this work: MIM-ProtégéAI+ (MIM), Radformation-AutoContour (RAD), and Siemens-DirectORGANS (SIE). Sixteen organs were identified that could be contoured by all solutions. For each organ, ten patients that had manually generated contours approved by the treating physician (AP) were identified, totaling forty-seven different patients. CT scans in the supine position were acquired using a Siemens-SOMATOMgo 64-slice helical scanner and used to generate autocontours. Physician scoring of contour accuracy was performed by at least three physicians using a five-point Likert scale. Dice similarity coefficient (DSC), Hausdorff distance (HD) and mean distance to agreement (MDA) were calculated comparing AI contours to "ground truth" AP contours. RESULTS: The average physician score ranged from 1.00, indicating that all physicians rated the contour as clinically acceptable with no modifications necessary, to 3.70, indicating that changes were required and that the time taken to modify the structures would likely be as long as or longer than manually generating the contour. When averaged across all sixteen structures, the AP contours had a physician score of 2.02, MIM 2.07, RAD 1.96 and SIE 1.99. DSC ranged from 0.37 to 0.98, with 41/48 (85.4%) contours having an average DSC ≥ 0.7. Average HD ranged from 2.9 to 43.3 mm. Average MDA ranged from 0.6 to 26.1 mm. CONCLUSIONS: The results of our comparison demonstrate that each vendor's AI contouring solution exhibited capabilities similar to those of manual contouring.
There were a small number of cases where unusual anatomy led to poor scores with one or more of the solutions. The consistency and comparable performance of all three vendors' solutions suggest that radiation oncology centers can confidently choose any of the evaluated solutions based on individual preferences, resource availability, and compatibility with their existing clinical workflows. Although AI-based contouring may result in high-quality contours for the majority of patients, a minority of patients require manual contouring and more in-depth physician review.
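The Dice similarity coefficient used above compares each AI contour with the physician "ground truth" contour as overlapping voxel sets: DSC = 2|A∩B| / (|A| + |B|). A stdlib sketch on toy pixel coordinates (not the study's data):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets: 2|A&B| / (|A|+|B|).
    Returns 1.0 for identical sets and 0.0 for disjoint ones."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty contours agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Toy 2D "contours" as (row, col) pixels standing in for CT voxels:
# a 4x4 square versus the same square shifted by one pixel.
ground_truth = {(r, c) for r in range(4) for c in range(4)}       # 16 voxels
ai_contour = {(r, c) for r in range(1, 5) for c in range(1, 5)}   # 16 voxels

print(dice(ground_truth, ai_contour))  # overlap is the 3x3 block: 18/32 = 0.5625
```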


Subject(s)
Artificial Intelligence; Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed; Humans; Radiotherapy Planning, Computer-Assisted/methods; Organs at Risk/radiation effects; Algorithms; Image Processing, Computer-Assisted/methods
9.
Neurospine ; 21(1): 128-146, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38569639

ABSTRACT

OBJECTIVE: Large language models, such as chat generative pre-trained transformer (ChatGPT), have great potential for streamlining medical processes and assisting physicians in clinical decision-making. This study aimed to assess the potential of ChatGPT's 2 models (GPT-3.5 and GPT-4.0) to support clinical decision-making by comparing its responses for antibiotic prophylaxis in spine surgery to accepted clinical guidelines. METHODS: ChatGPT models were prompted with questions from the North American Spine Society (NASS) Evidence-based Clinical Guidelines for Multidisciplinary Spine Care for Antibiotic Prophylaxis in Spine Surgery (2013). Its responses were then compared and assessed for accuracy. RESULTS: Of the 16 NASS guideline questions concerning antibiotic prophylaxis, 10 responses (62.5%) were accurate in ChatGPT's GPT-3.5 model and 13 (81%) were accurate in GPT-4.0. Twenty-five percent of GPT-3.5 answers were deemed as overly confident while 62.5% of GPT-4.0 answers directly used the NASS guideline as evidence for its response. CONCLUSION: ChatGPT demonstrated an impressive ability to accurately answer clinical questions. GPT-3.5 model's performance was limited by its tendency to give overly confident responses and its inability to identify the most significant elements in its responses. GPT-4.0 model's responses had higher accuracy and cited the NASS guideline as direct evidence many times. While GPT-4.0 is still far from perfect, it has shown an exceptional ability to extract the most relevant research available compared to GPT-3.5. Thus, while ChatGPT has shown far-reaching potential, scrutiny should still be exercised regarding its clinical use at this time.

10.
Spine (Phila Pa 1976) ; 49(9): 640-651, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38213186

ABSTRACT

STUDY DESIGN: Comparative analysis. OBJECTIVE: To evaluate Chat Generative Pre-trained Transformer's (ChatGPT's) ability to predict appropriate clinical recommendations based on the most recent clinical guidelines for the diagnosis and treatment of low back pain. BACKGROUND: Low back pain is a very common and often debilitating condition that affects many people globally. ChatGPT is an artificial intelligence model that may be able to generate recommendations for low back pain. MATERIALS AND METHODS: Using the North American Spine Society Evidence-Based Clinical Guidelines as the gold standard, 82 clinical questions relating to low back pain were entered into ChatGPT (GPT-3.5) independently. For each question, we recorded ChatGPT's answer, then used a point-answer system (the point being the guideline recommendation and the answer being ChatGPT's response) and asked ChatGPT whether the point was mentioned in the answer to assess accuracy. This accuracy assessment was repeated with one caveat: a prior prompt instructing ChatGPT to answer as an experienced orthopedic surgeon was given for each question by guideline category. A two-sample proportion z test was used to assess differences between the preprompt and postprompt scenarios with alpha = 0.05. RESULTS: ChatGPT's response was accurate 65% of the time (72% postprompt, P = 0.41) for guidelines with clinical recommendations, 46% (58% postprompt, P = 0.11) for guidelines with insufficient or conflicting data, and 49% (16% postprompt, P = 0.003*) for guidelines with no adequate study to address the clinical question. For guidelines with insufficient or conflicting data, 44% (25% postprompt, P = 0.01*) of ChatGPT responses wrongly suggested that sufficient evidence existed. CONCLUSION: ChatGPT was able to produce sufficient clinical guideline recommendations for low back pain, with overall improvement when initially prompted.
However, it tended to wrongly suggest that evidence existed and, especially postprompt, often failed to mention when there was not enough evidence to give an accurate recommendation.
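The two-sample proportion z test used to compare preprompt and postprompt accuracy can be sketched with the stdlib alone. The counts below are hypothetical, since the abstract reports only percentages and P values:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-sample proportion z test with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value via normal tail
    return z, p

# Hypothetical counts: 40/100 accurate preprompt vs 25/100 postprompt.
z, p = two_prop_ztest(40, 100, 25, 100)
print(round(z, 2), round(p, 3))
```

With alpha = 0.05, a p-value below 0.05 (as in the example above) would be flagged significant, matching the asterisk convention used in the abstract.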


Subject(s)
Low Back Pain; Orthopedic Surgeons; Humans; Low Back Pain/diagnosis; Low Back Pain/therapy; Artificial Intelligence; Spine
11.
Ophthalmol Retina ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38719191

ABSTRACT

PURPOSE: To evaluate the impact of reduction in geographic atrophy (GA) lesion growth on visual acuity in the GATHER trials using categorical outcome measures. DESIGN: Randomized, double-masked, sham-controlled phase 3 trials. PARTICIPANTS: Patients aged ≥50 years with noncenter point-involving GA and best-corrected visual acuity (BCVA) of 25 to 80 ETDRS letters in the study eye. METHODS: GATHER1 consisted of 2 parts. In part 1, 77 patients were randomized 1:1:1 to avacincaptad pegol (ACP) 1 mg, ACP 2 mg, and sham. In part 2, 209 patients were randomized 1:2:2 to ACP 2 mg, ACP 4 mg, and sham. In GATHER2, patients were randomized 1:1 to ACP 2 mg (n = 225) and sham (n = 223). A post hoc analysis of 12-month data for the pooled ACP 2 mg and sham groups is reported. MAIN OUTCOME MEASURES: Proportion of study eyes that experienced ≥10-, ≥15-, or ≥20-BCVA ETDRS letter loss from baseline to month 12; time-to-event analysis of persistent vision loss of ≥10, ≥15, or ≥20 BCVA letters from baseline at ≥2 consecutive visits over 12 months; and proportion of study eyes with BCVA loss to a level below the driving eligibility threshold at month 12 among those eligible to drive at baseline. RESULTS: Lower proportions of study eyes experienced ≥10-, ≥15-, or ≥20-BCVA letter loss from baseline over 12 months with ACP 2 mg (11.6%, 4.0%, and 1.6%, respectively) versus sham (14.1%, 7.6%, and 4.5%, respectively). There was a reduction in the risk of persistent loss of ≥15 BCVA ETDRS letters with ACP 2 mg (3.4%) versus sham (7.8%) through 12 months. A lower proportion of study eyes treated with ACP 2 mg reached the threshold for driving ineligibility versus sham by 12 months. CONCLUSIONS: Treatment with ACP 2 mg delayed the risk of progression to persistent vision loss (i.e., ≥10-, ≥15-, and ≥20-BCVA letter loss or BCVA loss to a level below the driving eligibility threshold) versus sham over 12 months.
FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
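The time-to-event analysis of persistent vision loss described above is typically summarized with a Kaplan-Meier estimator. A minimal stdlib sketch on toy follow-up data (months to persistent ≥15-letter loss, with censoring; not the trial data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates. events[i] is True if the event
    (e.g., persistent >=15-letter loss) occurred at times[i], False if the
    eye was censored. Returns [(time, survival probability)] at event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)   # events at t
        total = sum(1 for tt, _ in data if tt == t)          # leaving risk set
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= total
        i += total
    return curve

# Toy cohort: 6 eyes; events at months 3 and 8, the rest censored at month 12.
times = [3, 8, 12, 12, 12, 12]
events = [True, True, False, False, False, False]
print(kaplan_meier(times, events))  # survival ~0.833 at month 3, ~0.667 at month 8
```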
