1.
Nat Med ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38702523

ABSTRACT

Few young people with type 1 diabetes (T1D) meet glucose targets. Continuous glucose monitoring improves glycemia, but access is not equitable. We prospectively assessed the impact of a systematic and equitable digital health, team-based care program implementing tighter glucose targets (HbA1c < 7%), early technology use (continuous glucose monitoring starting <1 month after diagnosis) and remote patient monitoring on glycemia in young people with newly diagnosed T1D enrolled in the Teamwork, Targets, Technology, and Tight Control study (4T Study 1). The primary outcome was HbA1c change from 4 to 12 months after diagnosis; the secondary outcome was achieving the HbA1c targets. The 4T Study 1 cohort (36.8% Hispanic and 35.3% publicly insured) had a mean HbA1c of 6.58%, with 64% achieving HbA1c < 7% and a mean time in range (70-180 mg/dL) of 68% at 1 year after diagnosis. Clinical implementation of the 4T Study 1 met the prespecified primary outcome and improved glycemia without unexpected serious adverse events. The strategies in the 4T Study 1 can be used to implement systematic and equitable care for individuals with T1D and may translate to care for other chronic diseases. ClinicalTrials.gov registration: NCT04336969.

3.
Diabetes Technol Ther ; 26(3): 176-183, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37955644

ABSTRACT

Introduction: Diabetic ketoacidosis (DKA) at diagnosis is associated with short- and long-term complications. We assessed the relationship between DKA status and hemoglobin A1c (A1c) levels in the first year following type 1 diabetes (T1D) diagnosis. Research Design and Methods: The Pilot Teamwork, Targets, Technology, and Tight Control (4T) study offered continuous glucose monitoring to youth with T1D within 1 month of diagnosis. A1c levels were compared between historical (n = 271) and Pilot 4T (n = 135) cohorts stratified by DKA status at diagnosis (with DKA: historical = 94, 4T = 67; without DKA: historical = 177, 4T = 68). A1c was evaluated using locally estimated scatterplot smoothing. Change in A1c from 4 to 12 months postdiagnosis was evaluated using a linear mixed model. Results: Median age was 9.7 (interquartile range [IQR]: 6.6, 12.7) versus 9.7 (IQR: 6.8, 12.7) years, 49% versus 47% female, and 44% versus 39% non-Hispanic White in the historical versus Pilot 4T cohorts. In both cohorts, youth with DKA at diagnosis had higher A1c at 6 months (0.5% [95% confidence interval (CI): 0.21-0.79; P < 0.01] and 0.38% [95% CI: 0.02-0.74; P = 0.04] in the historical and 4T cohorts, respectively) and at 12 months (0.62% [95% CI: -0.06 to 1.29; P = 0.07] and 0.39% [95% CI: -0.32 to 1.10; P = 0.29], respectively). The highest percentage of time in range (TIR; 70-180 mg/dL) was seen between weeks 15-20 (69%) versus weeks 25-30 (75%) postdiagnosis for youth with versus without DKA in Pilot 4T, respectively. Conclusions: Pilot 4T improved A1c outcomes versus the historical cohort, but those with DKA at diagnosis had persistently elevated A1c throughout the study, and intensive diabetes management did not mitigate this difference. DKA prevention at diagnosis may translate into better glycemic outcomes in the first year postdiagnosis. Clinical Trial Registration: clinicaltrials.gov: NCT04336969.


Subject(s)
Diabetes Mellitus, Type 1 , Diabetic Ketoacidosis , Adolescent , Female , Humans , Male , Blood Glucose , Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1/complications , Diabetes Mellitus, Type 1/drug therapy , Diabetic Ketoacidosis/etiology , Glycated Hemoglobin , Insulin/therapeutic use , Pilot Projects
4.
Cancer ; 130(5): 770-780, 2024 03 01.
Article in English | MEDLINE | ID: mdl-37877788

ABSTRACT

BACKGROUND: Recent therapeutic advances and screening technologies have improved survival among patients with lung cancer, who are now at high risk of developing second primary lung cancer (SPLC). Recently, an SPLC risk-prediction model (SPLC-RAT) was developed and validated using data from population-based epidemiological cohorts and clinical trials, but real-world validation has been lacking. The predictive performance of SPLC-RAT was evaluated in a hospital-based cohort of lung cancer survivors. METHODS: The authors analyzed data from 8448 ever-smoking patients diagnosed with initial primary lung cancer (IPLC) in 1997-2006 at Mayo Clinic, with each patient followed for SPLC through 2018. The authors evaluated the predictive performance of SPLC-RAT and further explored the potential of improving SPLC detection through risk model-based surveillance using SPLC-RAT versus existing clinical surveillance guidelines. RESULTS: Of 8448 IPLC patients, 483 (5.7%) developed SPLC over 26,470 person-years. The application of SPLC-RAT showed high discrimination (area under the receiver operating characteristic curve: 0.81). When the cohort was stratified by a 10-year risk threshold of ≥5.6% (i.e., the 80th percentile from the SPLC-RAT development cohort), the observed SPLC incidence was significantly elevated in the high-risk versus low-risk subgroup (13.1% vs. 1.1%, p < 1 × 10⁻⁶). Risk-based surveillance through SPLC-RAT (≥5.6% threshold) outperformed the National Comprehensive Cancer Network guidelines, with higher sensitivity (86.4% vs. 79.4%) and specificity (38.9% vs. 30.4%), and required 20% fewer computed tomography follow-ups to detect one SPLC (162 vs. 202). CONCLUSION: In a large, hospital-based cohort, the authors validated the predictive performance of SPLC-RAT in identifying survivors at high risk of SPLC and showed its potential to improve SPLC detection through risk-based surveillance.
PLAIN LANGUAGE SUMMARY: Lung cancer survivors have a high risk of developing second primary lung cancer (SPLC). However, no evidence-based guidelines for SPLC surveillance are available for lung cancer survivors. Recently, an SPLC risk-prediction model was developed and validated using data from population-based epidemiological cohorts and clinical trials, but real-world validation has been lacking. Using a large, real-world cohort of lung cancer survivors, we showed the high predictive accuracy and risk-stratification ability of the SPLC risk-prediction model. Furthermore, we demonstrated the potential to enhance efficiency in detecting SPLC using risk model-based surveillance strategies compared to the existing consensus-based clinical guidelines, including the National Comprehensive Cancer Network.
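The sensitivity, specificity, and scans-per-detection comparison above is simple confusion-matrix arithmetic; the sketch below illustrates it with hypothetical counts (chosen only so that 483 survivors develop SPLC, matching the cohort; the split into flagged/unflagged is assumed, and the one-scan-per-person simplification does not reproduce the study's 162-scan figure, which reflects repeated follow-up imaging).

```python
# Hypothetical confusion-matrix counts for a risk-based surveillance rule
# (illustrative only; not the study's patient-level data).
def surveillance_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # flagged cases among all SPLC cases
    specificity = tn / (tn + fp)          # unflagged among those without SPLC
    scans_per_detection = (tp + fp) / tp  # assumes one scan per flagged survivor
    return sensitivity, specificity, scans_per_detection

# 483 SPLC cases in total, mirroring the cohort; the split is assumed.
sens, spec, scans = surveillance_metrics(tp=417, fp=4867, fn=66, tn=3098)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} scans/detection={scans:.1f}")
```

A stricter risk threshold moves cases from the flagged to the unflagged column, trading sensitivity for fewer follow-up scans, which is the trade-off the abstract quantifies.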


Subject(s)
Cancer Survivors , Lung Neoplasms , Neoplasms, Second Primary , Humans , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Lung Neoplasms/therapy , Risk , Smoking , Lung
5.
Neuroradiol J ; 37(1): 74-83, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37921691

ABSTRACT

PURPOSE: We aimed to use machine learning (ML) algorithms with clinical, laboratory, and imaging data as input to predict various outcomes in traumatic brain injury (TBI) patients. METHODS: In this retrospective study, blood samples were analyzed for glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1). The non-contrast head CTs were reviewed by two neuroradiologists for TBI common data elements (CDE). The models were designed to predict three outcomes: discharged or admitted for further management (prediction 1), deceased or not deceased (prediction 2), and admission only, prolonged stay, or neurosurgery performed (prediction 3). Five ML models were trained. SHapley Additive exPlanations (SHAP) analyses were used to assess the relative significance of variables. RESULTS: Four hundred forty patients were used for predictions 1 and 2, while 271 patients were used for prediction 3; because prediction 3 required hospitalization, deceased and discharged patients could not be included. The Random Forest model achieved an average accuracy of 1.00 for prediction 1, 0.99 for prediction 2, and 0.93 for prediction 3. Per SHAP analysis, key features were extracranial injury, hemorrhage, and UCH-L1 for prediction 1; the Glasgow Coma Scale, age, and GFAP for prediction 2; and GFAP, subdural hemorrhage volume, and pneumocephalus for prediction 3. CONCLUSION: Combining clinical and laboratory parameters with non-contrast CT CDEs allowed our ML models to accurately predict the designed outcomes of TBI patients. GFAP and UCH-L1 were among the significant predictor variables, demonstrating the importance of these biomarkers.


Subject(s)
Brain Injuries, Traumatic , Ubiquitin Thiolesterase , Humans , Retrospective Studies , Brain Injuries, Traumatic/diagnostic imaging , Prognosis , Biomarkers , Hospitals
6.
JAMA Oncol ; 9(12): 1640-1648, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37883107

ABSTRACT

Importance: The revised 2021 US Preventive Services Task Force (USPSTF) guidelines for lung cancer screening have been shown to reduce disparities in screening eligibility and performance between African American and White individuals vs the 2013 guidelines. However, potential disparities across other racial and ethnic groups in the US remain unknown. Risk model-based screening may reduce racial and ethnic disparities and improve screening performance, but neither validation of key risk prediction models nor their screening performance has been examined by race and ethnicity. Objective: To validate and recalibrate the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial 2012 (PLCOm2012) model, a well-established risk prediction model based on a predominantly White population, across races and ethnicities in the US, and to evaluate racial and ethnic disparities and screening performance through risk-based screening using PLCOm2012 vs the USPSTF 2021 criteria. Design, Setting, and Participants: In a population-based cohort design, the Multiethnic Cohort Study enrolled participants in 1993-1996, followed up through December 31, 2018. Data analysis was conducted from April 1, 2022, to May 19, 2023. A total of 105 261 adults with a smoking history were included. Exposures: The 6-year lung cancer risk was calculated through recalibrated PLCOm2012 (ie, PLCOm2012-Update) and screening eligibility based on a 6-year risk threshold greater than or equal to 1.3%, yielding similar eligibility as the USPSTF 2021 guidelines. Outcomes: Predictive accuracy, screening eligibility-incidence (E-I) ratio (ie, ratio of the number of eligible to incident cases), and screening performance (sensitivity, specificity, and number needed to screen to detect 1 lung cancer).
Results: Of 105 261 participants (60 011 [57.0%] men; mean [SD] age, 59.8 [8.7] years), consisting of 19 258 (18.3%) African American, 27 227 (25.9%) Japanese American, 21 383 (20.3%) Latino, 8368 (7.9%) Native Hawaiian/Other Pacific Islander, and 29 025 (27.6%) White individuals, 1464 (1.4%) developed lung cancer within 6 years from enrollment. The PLCOm2012-Update showed good predictive accuracy across races and ethnicities (area under the curve, 0.72-0.82). The USPSTF 2021 criteria yielded a large disparity among African American individuals, whose E-I ratio was 53% lower vs White individuals (E-I ratio: 9.5 vs 20.3; P < .001). Under the risk-based screening (PLCOm2012-Update 6-year risk ≥1.3%), the disparity between African American and White individuals was substantially reduced (E-I ratio: 15.9 vs 18.4; P < .001), with minimal disparities observed in persons of other minoritized groups, including Japanese American, Latino, and Native Hawaiian/Other Pacific Islander individuals. Risk-based screening yielded superior overall and race- and ethnicity-specific performance vs the USPSTF 2021 criteria, with higher overall sensitivity (67.2% vs 57.7%) and a lower number needed to screen (26 vs 30) at similar specificity (76.6%). Conclusions: The findings of this cohort study suggest that risk-based lung cancer screening can reduce racial and ethnic disparities and improve screening performance across races and ethnicities vs the USPSTF 2021 criteria.
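The "53% lower" E-I ratio figure above follows directly from the two reported group-level ratios; a minimal sketch (the function name is ours, not the study's):

```python
def relative_disparity(ei_group, ei_reference):
    """Fractional shortfall of one group's screening eligibility-incidence
    (E-I) ratio relative to a reference group's E-I ratio."""
    return 1 - ei_group / ei_reference

# Reported under the USPSTF 2021 criteria: E-I ratio 9.5 (African American)
# vs 20.3 (White individuals), i.e. roughly a 53% lower ratio.
gap = relative_disparity(9.5, 20.3)
print(round(gap, 2))  # prints 0.53
```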


Subject(s)
Early Detection of Cancer , Lung Neoplasms , Male , Adult , Humans , Middle Aged , Female , Cohort Studies , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Ethnicity , Hispanic or Latino
7.
Int J Epidemiol ; 52(6): 1984-1989, 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-37670428

ABSTRACT

MOTIVATION: Providing a dynamic assessment of prognosis is essential for improved personalized medicine. The landmark model for survival data provides a potentially powerful solution to the dynamic prediction of disease progression. However, a general framework and a flexible implementation of the model that incorporates various outcomes, such as competing events, have been lacking. We present an R package, dynamicLM, a user-friendly tool for the landmark model for the dynamic prediction of survival data under competing risks, which includes various functions for data preparation, model development, prediction, and evaluation of predictive performance. IMPLEMENTATION: dynamicLM is implemented as an R package. GENERAL FEATURES: The package includes options for incorporating time-varying covariates, capturing time-dependent effects of predictors, and fitting a cause-specific landmark model for time-to-event data with or without competing risks. Tools for evaluating prediction performance include the time-dependent area under the ROC curve, the Brier score, and calibration. AVAILABILITY: Available on GitHub [https://github.com/thehanlab/dynamicLM].
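The core landmarking idea the package implements can be sketched conceptually: at each landmark time s, keep only subjects still at risk, freeze their time-varying covariates at s, and administratively censor follow-up at s + w. The snippet below is a language-agnostic Python illustration of that data construction, not dynamicLM's actual R API.

```python
# Conceptual sketch of building a stacked landmark dataset for dynamic
# prediction (illustrative; dynamicLM's real interface may differ).
def build_landmark_dataset(subjects, landmarks, horizon):
    """subjects: list of dicts with 'time' (event/censoring time),
    'event' (1 = event, 0 = censored), and 'covariate' (a function of time).
    Returns one row per subject per landmark at which they are still at risk."""
    rows = []
    for s in landmarks:
        for subj in subjects:
            if subj["time"] <= s:                 # failed/censored before s: not at risk
                continue
            t = min(subj["time"], s + horizon)    # administrative censoring at s + w
            e = subj["event"] if subj["time"] <= s + horizon else 0
            rows.append({"landmark": s,
                         "cov_at_s": subj["covariate"](s),  # covariate frozen at s
                         "time": t, "event": e})
    return rows

subjects = [
    {"time": 5.0, "event": 1, "covariate": lambda t: 1.0 + 0.2 * t},
    {"time": 2.0, "event": 0, "covariate": lambda t: 0.5},
]
data = build_landmark_dataset(subjects, landmarks=[0, 1, 3], horizon=2)
```

A cause-specific survival model fitted to this stacked dataset, with the landmark time as a covariate, yields the "super model" form of dynamic prediction; competing risks are handled by fitting one such model per cause.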


Subject(s)
Models, Statistical , Software , Humans , Prognosis , ROC Curve
8.
Circulation ; 148(12): 950-958, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37602376

ABSTRACT

BACKGROUND: Previous studies comparing percutaneous coronary intervention (PCI) with coronary artery bypass grafting (CABG) in patients with multivessel coronary disease not involving the left main have shown significantly lower rates of death, myocardial infarction (MI), or stroke after CABG. These studies did not routinely use current-generation drug-eluting stents or fractional flow reserve (FFR) to guide PCI. METHODS: FAME 3 (Fractional Flow Reserve versus Angiography for Multivessel Evaluation) is an investigator-initiated, multicenter, international, randomized trial involving patients with 3-vessel coronary artery disease (not involving the left main coronary artery) in 48 centers worldwide. Patients were randomly assigned to receive FFR-guided PCI using zotarolimus drug-eluting stents or CABG. The prespecified key secondary end point of the trial reported here is the 3-year incidence of the composite of death, MI, or stroke. RESULTS: A total of 1500 patients were randomized to FFR-guided PCI or CABG. Follow-up was achieved in >96% of patients in both groups. There was no difference in the incidence of the composite of death, MI, or stroke after FFR-guided PCI compared with CABG (12.0% versus 9.2%; hazard ratio [HR], 1.3 [95% CI, 0.98-1.83]; P=0.07). The rates of death (4.1% versus 3.9%; HR, 1.0 [95% CI, 0.6-1.7]; P=0.88) and stroke (1.6% versus 2.0%; HR, 0.8 [95% CI, 0.4-1.7]; P=0.56) were not different. MI occurred more frequently after PCI (7.0% versus 4.2%; HR, 1.7 [95% CI, 1.1-2.7]; P=0.02). CONCLUSIONS: At 3-year follow-up, there was no difference in the incidence of the composite of death, MI, or stroke after FFR-guided PCI with current-generation drug-eluting stents compared with CABG. There was a higher incidence of MI after PCI compared with CABG, with no difference in death or stroke. These results provide contemporary data to allow improved shared decision-making between physicians and patients with 3-vessel coronary artery disease. 
REGISTRATION: URL: https://www.clinicaltrials.gov; Unique identifier: NCT02100722.


Subject(s)
Coronary Artery Disease , Fractional Flow Reserve, Myocardial , Myocardial Infarction , Percutaneous Coronary Intervention , Stroke , Humans , Coronary Artery Disease/surgery , Follow-Up Studies , Percutaneous Coronary Intervention/adverse effects , Coronary Artery Bypass/adverse effects , Stroke/epidemiology , Stroke/etiology
9.
Neuroradiology ; 65(11): 1605-1617, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37269414

ABSTRACT

PURPOSE: This study aimed to assess and externally validate the performance of a deep learning (DL) model for the interpretation of non-contrast computed tomography (NCCT) scans of patients with suspicion of traumatic brain injury (TBI). METHODS: This retrospective, multi-reader study included patients with suspected TBI who were transported to the emergency department and underwent NCCT scans. Eight reviewers with varying levels of training and experience (two neuroradiology attendings, two neuroradiology fellows, two neuroradiology residents, one neurosurgery attending, and one neurosurgery resident) independently evaluated NCCT head scans. The same scans were evaluated using version 5.0 of the DL model icobrain tbi. Ground truth was established by consensus among the study reviewers, based on a thorough assessment of all accessible clinical and laboratory data as well as follow-up imaging studies, including NCCT and magnetic resonance imaging. The outcomes of interest included neuroimaging radiological interpretation system (NIRIS) scores; the presence of midline shift, mass effect, hemorrhagic lesions, hydrocephalus, and severe hydrocephalus; and measurements of midline shift and volumes of hemorrhagic lesions. Agreement was compared using the weighted Cohen's kappa coefficient. The McNemar test was used to compare diagnostic performance. Bland-Altman plots were used to compare measurements. RESULTS: One hundred patients were included, with the DL model successfully categorizing 77 scans. The median age for the total group was 48 years, with a median of 44.5 years in the omitted group and 48 years in the included group. The DL model demonstrated moderate agreement with the ground truth, trainees, and attendings. With the DL model's assistance, trainees' agreement with the ground truth improved. The DL model showed high specificity (0.88) and positive predictive value (0.96) in classifying NIRIS scores as 0-2 or 3-4.
Trainees and attendings had the highest accuracy (0.95). The DL model's performance in classifying various TBI CT imaging common data elements was comparable to that of trainees and attendings. The average difference for the DL model in quantifying the volume of hemorrhagic lesions was 6.0 mL with a wide 95% confidence interval (CI) of -68.32 to 80.22, and for midline shift, the average difference was 1.4 mm with a 95% CI of -3.4 to 6.2. CONCLUSION: While the DL model outperformed trainees in some aspects, attendings' assessments remained superior in most instances. Using the DL model as an assistive tool benefited trainees, improving their NIRIS score agreement with the ground truth. Although the DL model showed high potential in classifying some TBI CT imaging common data elements, further refinement and optimization are necessary to enhance its clinical utility.
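The agreement statistic used above, the weighted Cohen's kappa, can be sketched in a few lines. The study does not specify its weighting scheme, so linear weights over ordinal categories are assumed here for illustration.

```python
# Minimal weighted Cohen's kappa for two raters over ordinal categories
# 0..n_cat-1 (illustrative; linear weights assumed, not the study's code).
def weighted_kappa(r1, r2, n_cat, weight="linear"):
    n = len(r1)
    # Joint category proportions and each rater's marginals.
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(n_cat)) for i in range(n_cat)]
    p2 = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]

    def w(i, j):  # disagreement weight: 0 on the diagonal, grows with distance
        d = abs(i - j) / (n_cat - 1)
        return d if weight == "linear" else d ** 2

    disagree_obs = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    disagree_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - disagree_obs / disagree_exp

# Two raters scoring four scans on a 3-level ordinal scale (toy data).
kappa = weighted_kappa([0, 1, 2, 2], [0, 1, 2, 1], n_cat=3)
```

Unlike unweighted kappa, near-miss disagreements on an ordinal scale such as NIRIS (e.g. 2 vs 1) are penalized less than distant ones (e.g. 2 vs 0).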


Subject(s)
Brain Injuries, Traumatic , Deep Learning , Hydrocephalus , Humans , Retrospective Studies , Brain Injuries, Traumatic/diagnostic imaging , Tomography, X-Ray Computed/methods , Neuroimaging/methods
10.
Neuroradiol J ; 36(1): 68-75, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35588232

ABSTRACT

INTRODUCTION: Traumatic brain injury (TBI) is a major public health concern in the U.S. Recommendations for which patients admitted to the emergency department (ED) should receive a head computed tomography (CT) scan are currently guided by various clinical decision rules. OBJECTIVE: To assess how a blood biomarker approach compares with clinical decision rules in predicting a positive head CT in adult patients with suspected TBI. METHODS: We retrospectively identified patients who were transported to our emergency department, underwent a noncontrast head CT due to suspicion of TBI, and had blood samples available. Published thresholds for serum and plasma glial fibrillary acidic protein (GFAP), ubiquitin carboxyl-terminal hydrolase-L1 (UCH-L1), and serum S100β were used to make CT recommendations. These blood biomarker-based recommendations were compared to those achieved under widely used clinical head CT decision rules (Canadian, New Orleans, NEXUS II, and ACEP Clinical Policy). RESULTS: Our study included 463 patients, of whom 122 (26.3%) had one or more abnormalities on head CT. Individual blood biomarkers achieved high negative predictive value (NPV) for abnormal head CT findings (88%-98%), although positive predictive value (PPV) was consistently low (25%-42%). A composite biomarker-based decision rule (GFAP + UCH-L1) achieved an NPV of 100% and a PPV of 29%, comparable to or better than the clinical decision rules. CONCLUSION: Blood biomarkers perform at least as well as clinical rules in selecting TBI patients for head CT and may be easier to implement in the clinical setting. A prospective study is necessary to validate this approach.
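The composite GFAP + UCH-L1 rule and its NPV/PPV can be sketched as follows; the cutoffs, units, and the tiny cohort below are hypothetical stand-ins for illustration, not the published thresholds or study data.

```python
# Hypothetical composite biomarker rule of the kind described above:
# recommend head CT when GFAP OR UCH-L1 exceeds its cutoff.
GFAP_CUTOFF, UCHL1_CUTOFF = 30.0, 360.0  # pg/mL; assumed for illustration only

def recommend_ct(gfap, uchl1):
    return gfap >= GFAP_CUTOFF or uchl1 >= UCHL1_CUTOFF

def npv_ppv(patients):
    """patients: iterable of (gfap, uchl1, ct_abnormal) tuples.
    Returns (NPV, PPV) of the composite rule against the CT result."""
    tp = fp = tn = fn = 0
    for gfap, uchl1, abnormal in patients:
        flagged = recommend_ct(gfap, uchl1)
        if flagged and abnormal:
            tp += 1
        elif flagged:
            fp += 1
        elif abnormal:
            fn += 1
        else:
            tn += 1
    return tn / (tn + fn), tp / (tp + fp)

# Toy cohort: (GFAP, UCH-L1, abnormal head CT?)
cohort = [(10, 100, False), (50, 200, True), (40, 100, False), (5, 400, True)]
npv, ppv = npv_ppv(cohort)
```

The clinical appeal of a rule like this is the NPV term: if no flagged-negative patient has an abnormal CT (fn = 0), NPV is 100% and a double-negative biomarker result can safely obviate the scan, which is exactly the use case the conclusion describes.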


Subject(s)
Brain Injuries, Traumatic , Clinical Decision Rules , Adult , Humans , Prospective Studies , Retrospective Studies , Ubiquitin Thiolesterase , Canada , Biomarkers , Tomography, X-Ray Computed
11.
JNCI Cancer Spectr ; 6(3)2022 05 02.
Article in English | MEDLINE | ID: mdl-35642317

ABSTRACT

BACKGROUND: In 2021, the US Preventive Services Task Force (USPSTF) revised its lung cancer screening guidelines to expand screening eligibility. We evaluated screening sensitivities and racial and ethnic disparities under the 2021 USPSTF criteria vs alternative risk-based criteria in a racially and ethnically diverse population. METHODS: In the Multiethnic Cohort, we evaluated the proportion of ever-smoking lung cancer cases eligible for screening (ie, screening sensitivity) under the 2021 USPSTF criteria and under risk-based criteria through the PLCOm2012 model (6-year risk ≥1.51%). We also calculated the screening disparity (ie, absolute sensitivity difference) for each of 4 racial or ethnic groups (African American, Japanese American, Latino, Native Hawaiian) vs White cases. RESULTS: Among 5900 lung cancer cases, 43.3% were screen eligible under the 2021 USPSTF criteria. Screening sensitivities varied by race and ethnicity, with Native Hawaiian (56.7%) and White (49.6%) cases attaining the highest sensitivities and Latino (37.3%), African American (38.4%), and Japanese American (40.0%) cases attaining the lowest. Latino cases had the greatest screening disparity vs White cases at 12.4%, followed by African American (11.2%) and Japanese American (9.6%) cases. Under risk-based screening, the overall screening sensitivity increased to 75.7%, and all racial and ethnic groups had increased sensitivities (54.5%-91.9%). Whereas the screening disparity decreased to 5.1% for African American cases, it increased to 28.6% for Latino cases and 12.8% for Japanese American cases. CONCLUSIONS: In the Multiethnic Cohort, racial and ethnic disparities decreased but persisted under the 2021 USPSTF lung cancer screening guidelines. Risk-based screening through PLCOm2012 may increase screening sensitivities and help to reduce disparities in some, but not all, racial and ethnic groups. Further optimization of risk-based screening strategies across diverse populations is needed.


Subject(s)
Early Detection of Cancer , Lung Neoplasms , Cohort Studies , Ethnicity , Humans , Lung Neoplasms/diagnosis , Mass Screening
12.
J Neurotrauma ; 39(19-20): 1329-1338, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35546284

ABSTRACT

The objective of this work was to analyze the relationships between traumatic brain injury (TBI) on computed tomographic (CT) imaging and blood concentrations of glial fibrillary acidic protein (GFAP), ubiquitin C-terminal hydrolase-L1 (UCH-L1), and S100B. This prospective cohort study involved 644 TBI patients referred to Stanford Hospital's Emergency Department between November 2015 and April 2017. Plasma and serum samples of 462 patients were analyzed for levels of GFAP, UCH-L1, and S100B. The glial neuronal ratio (GNR) was calculated as the ratio between GFAP and UCH-L1 concentrations. Admission head CT scans were reviewed for TBI imaging common data elements, and the performance of biomarkers for identifying TBI was assessed via the area under the receiver operating characteristic (ROC) curve. We also dichotomized biomarkers at established thresholds and estimated standard measures of classification accuracy. We assessed the ability of GFAP, UCH-L1, and GNR to discriminate small and large/diffuse lesions on CT imaging using an ROC analysis. In our cohort of mostly mild TBI patients, GFAP was significantly more accurate in detecting all types of acute brain injuries than UCH-L1 in terms of area under the curve (AUC) values (p < 0.001), and also compared with S100B (p < 0.001). UCH-L1 and S100B had similar performance (comparable AUC values, p = 0.342). Sensitivity exceeded 0.8 for each biomarker across all different types of TBI injuries, and no significant differences were observed by type of injury. There was a significant difference between GFAP and GNR in distinguishing between small lesions and large/diffuse lesions in all injuries (p = 0.004 and p = 0.007, respectively). In conclusion, GFAP, UCH-L1, and S100B show high sensitivity and negative predictive values for all types of TBI lesions on head CT. A combination of negative blood biomarkers (GFAP and UCH-L1) in a patient suspected of TBI may be used to safely obviate the need for a head CT scan.
GFAP is a promising indicator to discriminate between small and large/diffuse TBI lesions.
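The AUC comparisons above rest on the rank identity behind the ROC curve: the AUC equals the probability that a randomly chosen positive case has a higher biomarker value than a randomly chosen negative case. A minimal sketch, with made-up biomarker values:

```python
# Rank-based (Mann-Whitney) computation of the ROC AUC; biomarker
# values and labels below are invented for illustration.
def auc(values, labels):
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    pairs = concordant = 0.0
    for p in pos:
        for n in neg:
            pairs += 1
            if p > n:
                concordant += 1       # positive case outranks negative case
            elif p == n:
                concordant += 0.5     # ties count half
    return concordant / pairs

gfap = [5, 8, 30, 55, 60, 90]   # hypothetical biomarker concentrations
injury = [0, 0, 0, 1, 0, 1]     # 1 = acute injury on head CT
print(auc(gfap, injury))        # prints 0.875
```

Comparing two biomarkers then amounts to comparing these AUC values on the same cases, as the study does for GFAP versus UCH-L1 and S100B.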


Subject(s)
Brain Concussion , Brain Injuries, Traumatic , Biomarkers , Brain Injuries, Traumatic/diagnosis , Cohort Studies , Glial Fibrillary Acidic Protein , Humans , Prospective Studies , Tomography, X-Ray Computed , Ubiquitin Thiolesterase
13.
Circulation ; 145(22): 1655-1662, 2022 05 31.
Article in English | MEDLINE | ID: mdl-35369704

ABSTRACT

BACKGROUND: Previous studies have shown that quality of life improves after coronary revascularization, more so after coronary artery bypass grafting (CABG) than after percutaneous coronary intervention (PCI). This study aimed to evaluate the effect of fractional flow reserve guidance and current-generation zotarolimus drug-eluting stents on quality of life after PCI compared with CABG. METHODS: The FAME 3 trial (Fractional Flow Reserve Versus Angiography for Multivessel Evaluation) is a multicenter, international trial including 1500 patients with 3-vessel coronary artery disease who were randomly assigned to either CABG or fractional flow reserve-guided PCI. Quality of life was measured using the European Quality of Life-5 Dimensions (EQ-5D) questionnaire at baseline and at 1 and 12 months. Canadian Cardiovascular Society (CCS) angina class and working status were assessed at the same time points and at 6 months. The primary objective was to compare the EQ-5D summary index at 12 months. Secondary end points included angina class and work status. RESULTS: The EQ-5D summary index at 12 months did not differ between the PCI and CABG groups (difference, 0.001 [95% CI, -0.016 to 0.017]; P=0.946). The trajectory of EQ-5D during the 12 months differed (P<0.001) between PCI and CABG: at 1 month, EQ-5D was 0.063 (95% CI, 0.047 to 0.079) higher in the PCI group. A similar trajectory was found for the EQ (EuroQol) visual analogue scale. The proportion of patients with CCS class 2 or greater angina at 12 months was 6.2% versus 3.1% (odds ratio, 2.5 [95% CI, 0.96-6.8]) in the PCI group compared with the CABG group. A greater percentage of younger patients (<65 years old) were working at 12 months in the PCI group compared with the CABG group (68% versus 57%; odds ratio, 3.9 [95% CI, 1.7-8.8]).
CONCLUSIONS: In the FAME 3 trial, quality of life at 1 year was similar after fractional flow reserve-guided PCI with current-generation drug-eluting stents compared with CABG. The rate of significant angina was low in both groups and not significantly different. The trajectory of improvement in quality of life was significantly better after PCI, as was working status in those <65 years old. REGISTRATION: URL: https://www.clinicaltrials.gov; Unique identifier: NCT02100722.


Subject(s)
Coronary Artery Disease , Fractional Flow Reserve, Myocardial , Percutaneous Coronary Intervention , Aged , Angina Pectoris , Canada , Coronary Artery Bypass/adverse effects , Coronary Artery Bypass/methods , Coronary Artery Disease/surgery , Humans , Percutaneous Coronary Intervention/adverse effects , Percutaneous Coronary Intervention/methods , Quality of Life , Treatment Outcome
14.
JPEN J Parenter Enteral Nutr ; 46(8): 1914-1922, 2022 11.
Article in English | MEDLINE | ID: mdl-35274342

ABSTRACT

BACKGROUND: Small bowel bacterial overgrowth (SBBO) is a common, but difficult to diagnose and treat, problem in pediatric short bowel syndrome (SBS). The lack of clinical consensus criteria and the unknown sensitivity and specificity of bedside diagnosis make research on this potential SBS disease modifier challenging. The objective of this research was to describe clinical care of SBBO among international intestinal rehabilitation and nutrition support (IR&NS) providers treating patients with SBS. METHODS: A secure, confidential, international, electronic survey of IR&NS practitioners was conducted between March 2021 and May 2021. All analyses were conducted in the R statistical computing framework, version 4.0. RESULTS: Sixty percent of respondents agreed, and 0% strongly disagreed, that abdominal pain, distension, emesis, diarrhea, and malodorous stool were attributable to SBBO. No more than 20% of respondents strongly agreed, and no more than 40% agreed, that any sign or symptom was specific for SBBO. For a first-time diagnosis, 31 practitioners agreed with use of a 7-day course of a single antibiotic, with a majority citing grade 5 evidence (case series, uncontrolled studies, or expert opinion) to inform their decisions. The most common first antibiotic used to treat new-onset SBBO was metronidazole; rifaximin was the second most commonly used. One hundred percent of respondents reported they would consider a consensus algorithm for SBBO, even if the algorithm diverged from their current practice. CONCLUSION: SBBO practice varies widely among experienced IR&NS providers. Development of a clinical consensus algorithm may help standardize care, improve research and care of this complex problem, and identify risks and benefits of chronic antibiotic use in SBS.


Subject(s)
Bacterial Infections , Short Bowel Syndrome , Humans , Child , Intestine, Small/microbiology , Practice Patterns, Physicians' , Short Bowel Syndrome/microbiology , Anti-Bacterial Agents/therapeutic use , Surveys and Questionnaires
15.
J Am Soc Echocardiogr ; 35(7): 752-761.e11, 2022 07.
Article in English | MEDLINE | ID: mdl-35257895

ABSTRACT

BACKGROUND: Fetal echocardiography is a major diagnostic imaging modality for prenatal detection of critical congenital heart disease. Diagnostic accuracy is essential for appropriate planning of delivery and neonatal care. The relationship between study comprehensiveness and diagnostic error is not well understood. The aim of this study was to test the hypothesis that high fetal echocardiographic study comprehensiveness would be associated with low diagnostic error. Diagnostic errors were defined as discordant fetal and postnatal diagnoses and were further characterized by potential causes, contributors, and clinical significance. METHODS: Fetal echocardiographic examinations performed at Lucile Packard Children's Hospital for fetuses with critical congenital heart disease anticipated to require postnatal surgical or catheter intervention in the first year of life were identified using the fetal cardiology program database. For this cohort, initial fetal echocardiographic images were reviewed and assigned a fetal echocardiography comprehensiveness score (FECS). Fetal diagnoses obtained from initial fetal echocardiographic images and reports were compared with postnatal diagnoses confirmed by transthoracic echocardiography and other imaging studies and/or surgery to determine diagnostic error. The relationship between FECS and diagnostic error was evaluated using multivariable logistic regression. RESULTS: Of the 304 initial fetal echocardiographic studies, diagnostic error (discrepant diagnosis, false negative, or false positive) occurred in 92 cases (30.3%). FECS was not associated with diagnostic error overall, but low FECS (≤80% complete) was associated with false negatives and with procedural/conditional (P < .001) and technical (P = .005) contributors compared with high FECS (>80% complete). Cognitive factors made up the largest proportion of contributors to error.
CONCLUSIONS: The comprehensiveness of fetal echocardiographic studies was not related to diagnostic error. The most common contributors to error were cognitive factors. Echocardiography laboratories can work to mitigate preventable cognitive error through quality improvement initiatives.


Subject(s)
Cardiology , Heart Defects, Congenital , Child , Echocardiography/methods , Female , Fetal Heart/diagnostic imaging , Fetus , Heart Defects, Congenital/diagnostic imaging , Humans , Infant, Newborn , Pregnancy , Prenatal Diagnosis/methods , Ultrasonography, Prenatal/methods
16.
N Engl J Med ; 386(2): 128-137, 2022 01 13.
Article in English | MEDLINE | ID: mdl-34735046

ABSTRACT

BACKGROUND: Patients with three-vessel coronary artery disease have been found to have better outcomes with coronary-artery bypass grafting (CABG) than with percutaneous coronary intervention (PCI), but studies in which PCI is guided by measurement of fractional flow reserve (FFR) have been lacking. METHODS: In this multicenter, international, noninferiority trial, patients with three-vessel coronary artery disease were randomly assigned to undergo CABG or FFR-guided PCI with current-generation zotarolimus-eluting stents. The primary end point was the occurrence within 1 year of a major adverse cardiac or cerebrovascular event, defined as death from any cause, myocardial infarction, stroke, or repeat revascularization. Noninferiority of FFR-guided PCI to CABG was prespecified as an upper boundary of less than 1.65 for the 95% confidence interval of the hazard ratio. Secondary end points included a composite of death, myocardial infarction, or stroke; safety was also assessed. RESULTS: A total of 1500 patients underwent randomization at 48 centers. Patients assigned to undergo PCI received a mean (±SD) of 3.7±1.9 stents, and those assigned to undergo CABG received 3.4±1.0 distal anastomoses. The 1-year incidence of the composite primary end point was 10.6% among patients randomly assigned to undergo FFR-guided PCI and 6.9% among those assigned to undergo CABG (hazard ratio, 1.5; 95% confidence interval [CI], 1.1 to 2.2), findings that were not consistent with noninferiority of FFR-guided PCI (P = 0.35 for noninferiority). The incidence of death, myocardial infarction, or stroke was 7.3% in the FFR-guided PCI group and 5.2% in the CABG group (hazard ratio, 1.4; 95% CI, 0.9 to 2.1). The incidences of major bleeding, arrhythmia, and acute kidney injury were higher in the CABG group than in the FFR-guided PCI group. 
CONCLUSIONS: In patients with three-vessel coronary artery disease, FFR-guided PCI was not found to be noninferior to CABG with respect to the incidence of a composite of death, myocardial infarction, stroke, or repeat revascularization at 1 year. (Funded by Medtronic and Abbott Vascular; FAME 3 ClinicalTrials.gov number, NCT02100722.).
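The prespecified noninferiority criterion above (upper bound of the hazard ratio's 95% CI below 1.65) can be sketched numerically. This is not the trial's analysis code; the standard error is back-solved from the reported interval purely for illustration:

```python
import math

def hazard_ratio_ci(log_hr, se, z=1.96):
    """95% confidence interval for a hazard ratio from its log estimate and SE."""
    return (math.exp(log_hr - z * se), math.exp(log_hr + z * se))

def is_noninferior(ci_upper, margin=1.65):
    """Noninferiority is declared only when the CI upper bound is below the margin."""
    return ci_upper < margin

# Back-solve an approximate SE from the reported 95% CI (1.1 to 2.2):
se = (math.log(2.2) - math.log(1.1)) / (2 * 1.96)
lo, hi = hazard_ratio_ci(math.log(1.5), se)
print(is_noninferior(hi))  # prints False: upper bound ~2.1 exceeds 1.65
```

With the reported hazard ratio of 1.5 and an upper bound above the 1.65 margin, the criterion fails, matching the trial's conclusion that noninferiority was not shown.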


Subject(s)
Coronary Artery Bypass , Coronary Stenosis/surgery , Fractional Flow Reserve, Myocardial , Percutaneous Coronary Intervention/methods , Aged , Cardiovascular Diseases/epidemiology , Coronary Artery Bypass/adverse effects , Coronary Stenosis/mortality , Female , Humans , Kaplan-Meier Estimate , Length of Stay , Male , Middle Aged , Operative Time , Percutaneous Coronary Intervention/adverse effects , Reoperation , Stents
17.
J Natl Cancer Inst ; 114(1): 87-96, 2022 01 11.
Article in English | MEDLINE | ID: mdl-34255071

ABSTRACT

BACKGROUND: With advancing therapeutics, lung cancer (LC) survivors are rapidly increasing in number. Although mounting evidence suggests LC survivors have a high risk of second primary lung cancer (SPLC), no validated prediction model is available for clinical use to identify high-risk LC survivors for SPLC. METHODS: Using data from 6325 ever-smokers in the Multiethnic Cohort (MEC) study diagnosed with initial primary lung cancer (IPLC) in 1993-2017, we developed a prediction model for 10-year SPLC risk after IPLC diagnosis using cause-specific Cox regression. We evaluated the model's clinical utility using decision curve analysis and externally validated it using 2 population-based datasets, the Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial (PLCO) and the National Lung Screening Trial (NLST), which included 2963 and 2844 IPLC cases (101 and 93 SPLC cases), respectively. RESULTS: Over 14 063 person-years, 145 (2.3%) ever-smoking IPLC patients developed SPLC in MEC. Our prediction model demonstrated high predictive accuracy (Brier score = 2.9, 95% confidence interval [CI] = 2.4 to 3.3) and discrimination (area under the receiver operating characteristic curve [AUC] = 81.9%, 95% CI = 78.2% to 85.5%) based on bootstrap validation in MEC. Stratification by the estimated risk quartiles showed that the observed SPLC incidence was statistically significantly higher in the 4th vs 1st quartile (9.5% vs 0.2%; P < .001). Decision curve analysis indicated that across a wide range of 10-year risk thresholds from 1% to 20%, the model yielded a larger net benefit than hypothetical all-screening or no-screening scenarios. External validation using PLCO and NLST showed an AUC of 78.8% (95% CI = 74.6% to 82.9%) and 72.7% (95% CI = 67.7% to 77.7%), respectively. CONCLUSIONS: We developed and validated an SPLC prediction model based on large population-based cohorts.
The proposed prediction model can help identify high-risk LC patients for SPLC and can be incorporated into clinical decision making for SPLC surveillance and screening.
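The decision curve analysis summarized above compares a model's net benefit against screen-everyone and screen-no-one strategies. A minimal sketch of the standard net-benefit calculation, using hypothetical counts rather than the study's data:

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit at a risk threshold: true positives per person,
    minus false positives per person weighted by the threshold odds."""
    return tp / n - (fp / n) * (threshold / (1 - threshold))

def net_benefit_treat_all(event_rate, threshold):
    """Net benefit of screening everyone regardless of predicted risk."""
    return event_rate - (1 - event_rate) * (threshold / (1 - threshold))

# Hypothetical counts at a 5% risk threshold: 20 true positives and
# 100 false positives among 1000 patients, with a 2.3% event rate.
nb_model = net_benefit(tp=20, fp=100, n=1000, threshold=0.05)
nb_all = net_benefit_treat_all(event_rate=0.023, threshold=0.05)
```

A model is clinically useful at a given threshold when its net benefit exceeds both the all-screening strategy and the no-screening strategy (net benefit 0); the abstract reports this held across thresholds from 1% to 20%.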


Subject(s)
Lung Neoplasms , Neoplasms, Second Primary , Early Detection of Cancer , Humans , Lung , Lung Neoplasms/diagnosis , Lung Neoplasms/epidemiology , Male , Neoplasms, Second Primary/diagnosis , Neoplasms, Second Primary/epidemiology , Neoplasms, Second Primary/etiology , Smoking/adverse effects , Smoking/epidemiology
18.
J Clin Endocrinol Metab ; 107(4): 998-1008, 2022 03 24.
Article in English | MEDLINE | ID: mdl-34850024

ABSTRACT

CONTEXT: Youth with type 1 diabetes (T1D) do not meet glycated hemoglobin A1c (HbA1c) targets. OBJECTIVE: This work aimed to assess HbA1c outcomes in children with new-onset T1D enrolled in the Teamwork, Targets, Technology and Tight Control (4T) Study. METHODS: HbA1c levels were compared between the 4T and historical cohorts. HbA1c differences between cohorts were estimated using locally estimated scatter plot smoothing (LOESS). The change from nadir HbA1c (month 4) to 12 months post diagnosis was estimated by cohort using a piecewise mixed-effects regression model accounting for age at diagnosis, sex, ethnicity, and insurance type. We recruited 135 youth with newly diagnosed T1D at Stanford Children's Health. Starting July 2018, all youth within the first month of T1D diagnosis were offered continuous glucose monitoring (CGM) initiation, and remote CGM data review was added in March 2019. The main outcome measure was HbA1c. RESULTS: HbA1c at 6, 9, and 12 months post diagnosis was lower in the 4T cohort than in the historical cohort (-0.54%, -0.52%, and -0.58%, respectively). Within the 4T cohort, HbA1c at 6, 9, and 12 months post diagnosis was lower in patients with remote monitoring than in those without (-0.14%, -0.18%, and -0.14%, respectively). Multivariable regression analysis showed that the 4T cohort experienced a significantly smaller increase in HbA1c between months 4 and 12 (P < .001). CONCLUSION: A technology-enabled, team-based approach to intensified new-onset education involving target setting, CGM initiation, and remote data review significantly decreased HbA1c in youth with T1D at 12 months post diagnosis.


Subject(s)
Diabetes Mellitus, Type 1 , Adolescent , Blood Glucose/analysis , Blood Glucose Self-Monitoring , Child , Diabetes Mellitus, Type 1/diagnosis , Glycated Hemoglobin/analysis , Humans , Technology
19.
Diabetology (Basel) ; 3(3): 494-501, 2022 Sep.
Article in English | MEDLINE | ID: mdl-37163187

ABSTRACT

During the COVID-19 pandemic, fewer in-person clinic visits resulted in fewer point-of-care (POC) HbA1c measurements. In this sub-study, we assessed the performance of alternative glycemic measures that can be obtained remotely, such as HbA1c home kits and Glucose Management Indicator (GMI) values from Dexcom Clarity. Home kit HbA1c (n = 99), GMI (n = 88), and POC HbA1c (n = 32) were collected from youth with T1D (age 9.7 ± 4.6 years). Bland-Altman analyses and Lin's concordance correlation coefficient (ρc) were used to characterize the agreement between paired HbA1c measures. Both the HbA1c home kit and GMI showed a slight positive bias (mean difference 0.18% and 0.34%, respectively) and strong concordance with POC HbA1c (ρc = 0.982 [0.965, 0.991] and 0.823 [0.686, 0.904], respectively). GMI showed a slight positive bias (mean difference 0.28%) and fair concordance (ρc = 0.750 [0.658, 0.820]) relative to the HbA1c home kit. In conclusion, the strong concordance of GMI and home kit values with POC HbA1c suggests their utility for glycemic assessment during telehealth visits. Although neither is a candidate to replace POC HbA1c, both measures can facilitate telehealth visits, particularly when interpreted alongside an individual's prior POC HbA1c measurements.
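The two agreement statistics used above, Bland-Altman bias with limits of agreement and Lin's concordance correlation coefficient, are standard and straightforward to compute. A minimal illustration (not the study's code or data):

```python
import statistics

def bland_altman(x, y):
    """Bland-Altman bias (mean difference) and 95% limits of agreement
    for paired measurements x and y."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: penalizes both poor
    correlation and systematic shifts between paired measurements."""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

A constant offset between two HbA1c measures appears as a nonzero Bland-Altman bias and pulls ρc below 1 even when the correlation is perfect, which is why ρc is preferred over Pearson's r for method-agreement questions like this one.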

20.
J Neurosurg ; 135(6): 1725-1741, 2021 Apr 02.
Article in English | MEDLINE | ID: mdl-33799297

ABSTRACT

OBJECTIVE: The CyberKnife (CK) has emerged as an effective frameless and noninvasive method for treating a myriad of neurosurgical conditions. Here, the authors conducted an extensive retrospective analysis and review of the literature to elucidate the trend for CK use in the management paradigm for common neurosurgical diseases at their institution. METHODS: A literature review (January 1990-June 2019) and clinical review (January 1999-December 2018) were performed using, respectively, online research databases and the Stanford Research Repository of patients with intracranial and spinal lesions treated with CK at Stanford. For each disease considered, the coefficient of determination (r2) was estimated as a measure of CK utilization over time. A change in treatment modality was assessed using a t-test, with statistical significance assessed at the 0.05 alpha level. RESULTS: In over 7000 patients treated with CK for various brain and spinal lesions over the past 20 years, a positive linear trend (r2 = 0.80) in the system's use was observed. CK gained prominence in the management of intracranial and spinal arteriovenous malformations (AVMs; r2 = 0.89 and 0.95, respectively); brain and spine metastases (r2 = 0.97 and 0.79, respectively); benign tumors such as meningioma (r2 = 0.85), vestibular schwannoma (r2 = 0.76), and glomus jugulare tumor (r2 = 0.89); glioblastoma (r2 = 0.54); and trigeminal neuralgia (r2 = 0.81). A statistically significant difference in the change in treatment modality to CK was observed in the management of intracranial and spinal AVMs (p < 0.05), and while the treatment of brain and spine metastases, meningioma, and glioblastoma trended toward the use of CK, the change in treatment modality for these lesions was not statistically significant. CONCLUSIONS: Evidence suggests the robust use of CK for treating a wide range of neurological conditions.
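The utilization-trend analysis above fits a least-squares line to annual case counts and reports the coefficient of determination (r²) as the measure of trend strength. A minimal sketch of that computation, with hypothetical counts rather than the Stanford registry data:

```python
def r_squared(xs, ys):
    """Coefficient of determination (r^2) of a simple least-squares linear fit."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical annual CK case counts over 1999-2018, rising steadily:
years = list(range(1999, 2019))
counts = [10 + 18 * (y - 1999) for y in years]
```

An r² near 1 (as reported for AVMs and metastases) indicates a nearly linear growth in utilization; lower values (such as the 0.54 for glioblastoma) indicate a noisier or less consistent trend.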
