Results 1 - 15 of 15
1.
J Am Med Inform Assoc ; 31(6): 1322-1330, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38679906

ABSTRACT

OBJECTIVES: To compare and externally validate popular deep learning model architectures and data transformation methods for variable-length time series data in 3 clinical tasks (clinical deterioration, severe acute kidney injury [AKI], and suspected infection). MATERIALS AND METHODS: This multicenter retrospective study included admissions at 2 medical centers that spanned 2007-2022. Distinct datasets were created for each clinical task, with 1 site used for training and the other for testing. Three feature engineering methods (normalization, standardization, and piece-wise linear encoding with decision trees [PLE-DTs]) and 3 architectures (long short-term memory/gated recurrent unit [LSTM/GRU], temporal convolutional network, and time-distributed wrapper with convolutional neural network [TDW-CNN]) were compared in each clinical task. Model discrimination was evaluated using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC). RESULTS: The study comprised 373 825 admissions for training and 256 128 admissions for testing. LSTM/GRU models tied with TDW-CNN models, each obtaining the highest mean AUPRC in 2 tasks, and LSTM/GRU had the highest mean AUROC across all tasks (deterioration: 0.81, AKI: 0.92, infection: 0.87). PLE-DT with LSTM/GRU achieved the highest AUPRC in all tasks. DISCUSSION: When externally validated in 3 clinical tasks, the LSTM/GRU model architecture with PLE-DT-transformed data demonstrated the highest AUPRC in all tasks. Multiple models achieved similar performance when evaluated using AUROC. CONCLUSION: The LSTM architecture performs as well as or better than some newer architectures, and PLE-DT may enhance the AUPRC in variable-length time series data for predicting clinical outcomes during external validation.
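The PLE-DT transformation named above can be illustrated with a short sketch: a shallow decision tree fit against the label supplies bin edges, and the numeric feature is then encoded as a vector that ramps linearly from 0 to 1 across each bin. This is a minimal illustration under assumed details (tree depth, edge handling), not the paper's implementation:

```python
# A minimal sketch of piece-wise linear encoding with decision-tree bins
# (PLE-DT); illustrative assumptions only, not the authors' pipeline.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ple_dt_encode(x_train, y_train, x, max_bins=16):
    """Encode one numeric feature as a piece-wise linear vector whose
    bin edges come from a shallow decision tree fit against the label."""
    tree = DecisionTreeClassifier(max_leaf_nodes=max_bins)
    tree.fit(x_train.reshape(-1, 1), y_train)
    # Thresholds of internal nodes become the bin edges (-2 marks leaves).
    edges = np.sort(tree.tree_.threshold[tree.tree_.threshold != -2])
    edges = np.concatenate(([x_train.min()], edges, [x_train.max()]))
    # PLE: each component ramps linearly from 0 to 1 across its bin.
    left, right = edges[:-1], edges[1:]
    ratio = (x[:, None] - left) / np.maximum(right - left, 1e-12)
    return np.clip(ratio, 0.0, 1.0)

# Toy example: encode heart rate against a deterioration label.
rng = np.random.default_rng(0)
hr = rng.normal(90, 20, 1000)
y = (hr > 110).astype(int)
print(ple_dt_encode(hr, y, hr[:5]).shape)  # (5, number_of_bins)
```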


Subject(s)
Deep Learning , Humans , Retrospective Studies , Acute Kidney Injury , Neural Networks, Computer , ROC Curve , Male , Datasets as Topic , Female , Middle Aged
2.
medRxiv ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370788

ABSTRACT

OBJECTIVE: Timely intervention for clinically deteriorating ward patients requires that care teams accurately diagnose and treat their underlying medical conditions. However, the most common diagnoses leading to deterioration and the relevant therapies provided are poorly characterized. Therefore, we aimed to determine the diagnoses responsible for clinical deterioration, the relevant diagnostic tests ordered, and the treatments administered among high-risk ward patients using manual chart review. DESIGN: Multicenter retrospective observational study. SETTING: Inpatient medical-surgical wards at four health systems from 2006-2020. PATIENTS: Randomly selected patients (1,000 from each health system) with clinical deterioration, defined by reaching the 95th percentile of a validated early warning score, electronic Cardiac Arrest Risk Triage (eCART), were included. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Clinical deterioration was confirmed by a trained reviewer or marked as a false alarm if no deterioration occurred for each patient. For true deterioration events, the condition causing deterioration, relevant diagnostic tests ordered, and treatments provided were collected. Of the 4,000 included patients, 2,484 (62%) had clinical deterioration confirmed by chart review. Sepsis was the most common cause of deterioration (41%; n=1,021), followed by arrhythmia (19%; n=473), while liver failure had the highest in-hospital mortality (41%). The most common diagnostic tests ordered were complete blood counts (47% of events), followed by chest x-rays (42%) and cultures (40%), while the most common medication orders were antimicrobials (46%), followed by fluid boluses (34%) and antiarrhythmics (19%). CONCLUSIONS: We found that sepsis was the most common cause of deterioration, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic tests ordered, and antimicrobials and fluid boluses were the most common medication interventions. These results provide important insights for clinical decision-making at the bedside, training of rapid response teams, and the development of institutional treatment pathways for clinical deterioration. KEY POINTS: Question: What are the most common diagnoses, diagnostic test orders, and treatments for ward patients experiencing clinical deterioration? Findings: In manual chart review of 2,484 encounters with deterioration across four health systems, we found that sepsis was the most common cause of clinical deterioration, followed by arrhythmias, while liver failure had the highest mortality. Complete blood counts and chest x-rays were the most common diagnostic test orders, while antimicrobials and fluid boluses were the most common treatments. Meaning: Our results provide new insights into clinical deterioration events, which can inform institutional treatment pathways, rapid response team training, and patient care.
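As a small illustration of the study's threshold-based inclusion rule (reaching the 95th percentile of an early warning score), a hedged pandas sketch; the column names are hypothetical stand-ins, and eCART itself is a proprietary score not reproduced here:

```python
# Illustrative only: flag observations at the 95th percentile of a
# generic early-warning score, mirroring the deterioration definition.
import pandas as pd

scores = pd.DataFrame({
    "encounter_id": [1, 1, 2, 2, 3],
    "ews_score":    [4.0, 12.5, 3.1, 2.2, 9.9],  # hypothetical values
})
threshold = scores["ews_score"].quantile(0.95)
scores["deterioration_alert"] = scores["ews_score"] >= threshold
eligible = scores.loc[scores["deterioration_alert"], "encounter_id"].unique()
print(threshold, eligible)
```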

3.
JAMIA Open ; 6(4): ooad109, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38144168

ABSTRACT

Objectives: To develop and externally validate machine learning models using structured and unstructured electronic health record data to predict postoperative acute kidney injury (AKI) across inpatient settings. Materials and Methods: Data for adult postoperative admissions to the Loyola University Medical Center (2009-2017) were used for model development and admissions to the University of Wisconsin-Madison (2009-2020) were used for validation. Structured features included demographics, vital signs, laboratory results, and nurse-documented scores. Unstructured text from clinical notes was converted into concept unique identifiers (CUIs) using the clinical Text Analysis and Knowledge Extraction System. The primary outcome was the development of Kidney Disease: Improving Global Outcomes stage 2 AKI within 7 days after leaving the operating room. We derived unimodal extreme gradient boosting machines (XGBoost) and elastic net logistic regression (GLMNET) models using structured-only data and multimodal models combining structured data with CUI features. Model comparison was performed using the area under the receiver operating characteristic curve (AUROC), with DeLong's test for statistical differences. Results: The study cohort included 138 389 adult patient admissions (mean [SD] age 58 [16] years; 11 506 [8%] African-American; and 70 826 [51%] female) across the 2 sites. Of those, 2959 (2.1%) developed stage 2 AKI or higher. Across all data types, XGBoost outperformed GLMNET (mean AUROC 0.81 [95% confidence interval (CI), 0.80-0.82] vs 0.78 [95% CI, 0.77-0.79]). The multimodal XGBoost model incorporating CUIs parameterized as term frequency-inverse document frequency (TF-IDF) showed the highest discrimination performance (AUROC 0.82 [95% CI, 0.81-0.83]) over unimodal models (AUROC 0.79 [95% CI, 0.78-0.80]). Discussion: A multimodal approach combining structured data with TF-IDF weighting of CUIs increased model performance over structured-data-only models. Conclusion: These findings highlight the predictive power of CUIs when merged with structured data for clinical prediction models, which may improve the detection of postoperative AKI.
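A minimal sketch of the multimodal idea described above: TF-IDF-weighted CUI documents stacked alongside structured features in one XGBoost design matrix. The data, column meanings, and model settings are illustrative assumptions (in the study, CUIs come from a cTAKES pipeline), assuming the scikit-learn and xgboost packages:

```python
# Not the authors' code: fusing structured features with TF-IDF CUIs.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

# Hypothetical data: one row per admission; CUIs as space-joined tokens.
cui_docs = ["C0022660 C0003232", "C0020538", "C0022660 C0042029"]
structured = np.array([[67, 1.4], [54, 0.9], [71, 2.1]])  # e.g., age, SCr
y = np.array([1, 0, 1])                                   # stage 2 AKI label

tfidf = TfidfVectorizer(token_pattern=r"C\d{7}")          # match CUI tokens
X_text = tfidf.fit_transform(cui_docs)
X = hstack([csr_matrix(structured), X_text]).tocsr()      # multimodal matrix

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)
print(model.predict_proba(X)[:, 1])
```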

4.
Front Pediatr ; 11: 1284672, 2023.
Article in English | MEDLINE | ID: mdl-38188917

ABSTRACT

Introduction: Critical deterioration in hospitalized children, defined as ward to pediatric intensive care unit (PICU) transfer followed by mechanical ventilation (MV) or vasoactive infusion (VI) within 12 h, has been used as a primary metric to evaluate the effectiveness of clinical interventions or quality improvement initiatives. We explore the association between critical events (CEs), i.e., MV or VI events, within the first 48 h of PICU transfer from the ward or emergency department (ED) and in-hospital mortality. Methods: We conducted a retrospective study of a cohort of PICU transfers from the ward or the ED at two tertiary-care academic hospitals. We determined the association between mortality and occurrence of CEs within 48 h of PICU transfer after adjusting for age, gender, hospital, and prior comorbidities. Results: Experiencing a CE within 48 h of PICU transfer was associated with an increased risk of mortality [OR 12.40 (95% CI: 8.12-19.23, P < 0.05)]. The increased risk of mortality was highest in the first 12 h [OR 11.32 (95% CI: 7.51-17.15, P < 0.05)] but persisted in the 12-48 h time interval [OR 2.84 (95% CI: 1.40-5.22, P < 0.05)]. Varying levels of risk were observed when considering ED or ward transfers only, when considering different age groups, and when considering individual 12-h time intervals. Discussion: We demonstrate that occurrence of a CE within 48 h of PICU transfer was associated with mortality after adjusting for confounders. Studies focusing on the impact of quality improvement efforts may benefit from using CEs within 48 h of PICU transfer as an additional evaluation metric, provided these events could have been influenced by the initiative.
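The adjusted odds ratios reported above come from logistic regression; a hedged sketch of that style of analysis on synthetic data, with hypothetical variable names standing in for the study's fields:

```python
# Sketch of an adjusted logistic model producing an OR with 95% CI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "critical_event_48h": rng.integers(0, 2, n),
    "age_years": rng.uniform(0, 18, n),
    "male": rng.integers(0, 2, n),
    "hospital_b": rng.integers(0, 2, n),
    "comorbidity": rng.integers(0, 2, n),
})
logit_p = -4 + 2.5 * df["critical_event_48h"] + 0.3 * df["comorbidity"]
df["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit(
    "died ~ critical_event_48h + age_years + male + hospital_b + comorbidity",
    data=df,
).fit(disp=0)
or_ci = np.exp(fit.conf_int().loc["critical_event_48h"])
print("OR:", np.exp(fit.params["critical_event_48h"]), "95% CI:", or_ci.values)
```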

5.
ACG Case Rep J ; 9(10): e00879, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36247380

ABSTRACT

Indolent T-cell lymphoproliferative disease of the gastrointestinal (GI) tract is an exceedingly rare benign proliferation of clonal and mature-appearing lymphoid cells originating from the GI tract. We discuss the case of a 52-year-old woman with indolent T-cell lymphoproliferative disease of the GI tract manifesting as chronic diarrhea and profound weight loss. Interestingly, the patient also had extra-GI involvement of her disease process, which has not been previously reported. Our patient was managed with steroids with improvement in symptoms and weight gain. We provide a review of the literature to highlight the importance of early recognition and intervention of this disease entity.

6.
J Am Med Inform Assoc ; 29(10): 1696-1704, 2022 09 12.
Article in English | MEDLINE | ID: mdl-35869954

ABSTRACT

OBJECTIVES: Early identification of infection improves outcomes, but developing models for early identification requires determining infection status with manual chart review, limiting sample size. Therefore, we aimed to compare semi-supervised and transfer learning algorithms with algorithms based solely on manual chart review for identifying infection in hospitalized patients. MATERIALS AND METHODS: This multicenter retrospective study of admissions to 6 hospitals included "gold-standard" labels of infection from manual chart review and "silver-standard" labels from non-chart-reviewed patients using the Sepsis-3 infection criteria based on antibiotic and culture orders. "Gold-standard" labeled admissions were randomly allocated to training (70%) and testing (30%) datasets. Using patient characteristics, vital signs, and laboratory data from the first 24 hours of admission, we derived deep learning and non-deep learning models using transfer learning and semi-supervised methods. Performance was compared in the gold-standard test set using discrimination and calibration metrics. RESULTS: The study comprised 432 965 admissions, of which 2724 underwent chart review. In the test set, deep learning and non-deep learning approaches had similar discrimination (area under the receiver operating characteristic curve of 0.82). Semi-supervised and transfer learning approaches did not improve discrimination over models fit using only silver- or gold-standard data. Transfer learning had the best calibration (unreliability index P value: .997, Brier score: 0.173), followed by the self-learning gradient boosted machine (P value: .67, Brier score: 0.170). DISCUSSION: Deep learning and non-deep learning models performed similarly for identifying infection, as did models developed using Sepsis-3 and manual chart review labels. CONCLUSION: In a multicenter study of almost 3000 chart-reviewed patients, semi-supervised and transfer learning models showed discrimination similar to baseline XGBoost, while transfer learning improved calibration.
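A minimal self-training sketch in the spirit of the semi-supervised comparison above, using scikit-learn's SelfTrainingClassifier with an unlabeled pool standing in for the silver-standard admissions; all data, features, and settings are illustrative assumptions:

```python
# Self-training sketch: fit on "gold" labels, pseudo-label the rest.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))            # toy first-24h features
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

y = y_true.copy()
y[300:] = -1                               # -1 marks the unlabeled pool

self_trained = SelfTrainingClassifier(GradientBoostingClassifier(),
                                      threshold=0.9)  # pseudo-label cut-off
self_trained.fit(X, y)
print(self_trained.predict_proba(X[:5])[:, 1])
```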


Subject(s)
Machine Learning , Sepsis , Humans , ROC Curve , Retrospective Studies , Sepsis/diagnosis
7.
Crit Care Med ; 49(10): 1694-1705, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938715

ABSTRACT

OBJECTIVES: Early antibiotic administration is a central component of sepsis guidelines, and delays may increase mortality. However, prior studies have examined the delay to first antibiotic administration as a single time period even though it contains two distinct processes: antibiotic ordering and antibiotic delivery, which can each be targeted for improvement through different interventions. The objective of this study was to characterize and compare patients who experienced order or delivery delays, investigate the association of each delay type with mortality, and identify novel patient subphenotypes with elevated risk of harm from delays. DESIGN: Retrospective analysis of multicenter inpatient data. SETTING: Two tertiary care medical centers (2008-2018, 2006-2017) and four community-based hospitals (2008-2017). PATIENTS: All patients admitted through the emergency department who met clinical criteria for infection. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patient demographics, vitals, laboratory values, medication order and administration times, and in-hospital survival data were obtained from the electronic health record. Order and delivery delays were calculated for each admission. Adjusted logistic regression models were used to examine the relationship between each delay and in-hospital mortality. Causal forests, a machine learning method, were used to identify a high-risk subgroup. A total of 60,817 admissions were included, and delays occurred in 58% of patients. Each additional hour of order delay (odds ratio, 1.04; 95% CI, 1.03-1.05) and delivery delay (odds ratio, 1.05; 95% CI, 1.02-1.08) was associated with increased mortality. A patient subgroup identified by causal forests with higher comorbidity burden, greater organ dysfunction, and abnormal initial lactate measurements had a higher risk of death associated with delays (odds ratio, 1.07; 95% CI, 1.06-1.09 vs odds ratio, 1.02; 95% CI, 1.01-1.03). CONCLUSIONS: Delays in antibiotic ordering and drug delivery are both associated with a similar increase in mortality. A distinct subgroup of high-risk patients exists who could be targeted for more timely therapy.
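Causal forests estimate per-patient treatment effects, which is how a delay-sensitive subgroup can be surfaced. A hedged sketch of that idea on synthetic data, assuming the econml package (the study's exact software is not stated here):

```python
# Not the authors' code: causal forest for heterogeneous delay effects.
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))          # comorbidity, organ dysfunction, lactate...
delay_hours = rng.exponential(2, n)  # "treatment": hours of order delay
logit = -3 + 0.05 * delay_hours * (1 + (X[:, 4] > 1))  # lactate modifies harm
died = rng.random(n) < 1 / (1 + np.exp(-logit))

cf = CausalForestDML(discrete_treatment=False, random_state=0)
cf.fit(died.astype(float), delay_hours, X=X)
cate = cf.effect(X)                  # per-patient effect of +1 hour of delay
high_risk = cate > np.quantile(cate, 0.9)
print("mean effect in high-risk subgroup:", cate[high_risk].mean())
```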


Subject(s)
Anti-Bacterial Agents/administration & dosage , Phenotype , Sepsis/genetics , Time-to-Treatment/statistics & numerical data , Aged , Aged, 80 and over , Anti-Bacterial Agents/therapeutic use , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/statistics & numerical data , Female , Hospitalization/statistics & numerical data , Humans , Illinois/epidemiology , Male , Middle Aged , Prospective Studies , Retrospective Studies , Sepsis/drug therapy , Sepsis/physiopathology , Time Factors
8.
Crit Care Med ; 49(7): e673-e682, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33861547

ABSTRACT

OBJECTIVES: Recent sepsis studies have defined patients as "infected" using a combination of culture and antibiotic orders rather than billing data. However, the accuracy of these definitions is unclear. We aimed to compare the accuracy of different established criteria for identifying infected patients using detailed chart review. DESIGN: Retrospective observational study. SETTING: Six hospitals from three health systems in Illinois. PATIENTS: Adult admissions with blood culture or antibiotic orders, or Angus International Classification of Diseases infection codes and death, were eligible for study inclusion as potentially infected patients. Nine hundred to 1,000 of these admissions were randomly selected from each health system for chart review, and a proportional number of patients who did not meet chart review eligibility criteria were also included and deemed not infected. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: The accuracy of published billing code criteria by Angus et al and electronic health record criteria by Rhee et al and Seymour et al (Sepsis-3) was determined using the manual chart review results as the gold standard. A total of 5,215 patients were included, with 2,874 encounters analyzed via chart review and a proportional 2,341 added who did not meet chart review eligibility criteria. In the study cohort, 27.5% of admissions had at least one infection. This was most similar to the percentage of admissions with blood culture orders (26.8%), Angus infection criteria (28.7%), and the Sepsis-3 criteria (30.4%). The Sepsis-3 criteria were the most sensitive (81%), followed by Angus (77%) and Rhee (52%), while Rhee (97%) and Angus (90%) were more specific than the Sepsis-3 criteria (89%). Results were similar for patients with organ dysfunction during their admission. CONCLUSIONS: Published criteria have a wide range of accuracy for identifying infected patients, with the Sepsis-3 criteria being the most sensitive and the Rhee criteria being the most specific. These findings have important implications for studies investigating the burden of sepsis on a local and national level.
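Sensitivity and specificity against chart review as the gold standard reduce to a confusion matrix; a brief illustrative computation with toy labels:

```python
# Sensitivity/specificity of an electronic infection definition against
# chart review (gold standard); labels here are toy values.
import numpy as np
from sklearn.metrics import confusion_matrix

gold = np.array([1, 1, 0, 0, 1, 0, 1, 0])      # chart-reviewed infection
criteria = np.array([1, 1, 0, 1, 1, 0, 0, 0])  # e.g., Sepsis-3-style label

tn, fp, fn, tp = confusion_matrix(gold, criteria).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```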


Subject(s)
Data Accuracy , Electronic Health Records/standards , Infections/epidemiology , Information Storage and Retrieval/methods , Adult , Aged , Anti-Bacterial Agents/therapeutic use , Antibiotic Prophylaxis/statistics & numerical data , Blood Culture , Chicago/epidemiology , False Positive Reactions , Female , Humans , Infections/diagnosis , International Classification of Diseases , Male , Middle Aged , Organ Dysfunction Scores , Patient Admission/statistics & numerical data , Prevalence , Retrospective Studies , Sensitivity and Specificity , Sepsis/diagnosis
9.
Crit Care Med ; 48(11): e1020-e1028, 2020 11.
Article in English | MEDLINE | ID: mdl-32796184

ABSTRACT

OBJECTIVES: Bacteremia and fungemia can cause life-threatening illness with high mortality rates, which increase with delays in antimicrobial therapy. The objective of this study was to develop machine learning models to predict blood culture results at the time of the blood culture order using routine data in the electronic health record. DESIGN: Retrospective analysis of a large, multicenter inpatient dataset. SETTING: Two academic tertiary medical centers between the years 2007 and 2018. SUBJECTS: All hospitalized patients who received a blood culture during hospitalization. INTERVENTIONS: The dataset was partitioned temporally into development and validation cohorts: the logistic regression and gradient boosting machine models were trained on the earliest 80% of hospital admissions and validated on the most recent 20%. MEASUREMENTS AND MAIN RESULTS: There were 252,569 blood culture days, defined as nonoverlapping 24-hour periods in which one or more blood cultures were ordered. In the validation cohort, there were 50,514 blood culture days, with 3,762 cases of bacteremia (7.5%) and 370 cases of fungemia (0.7%). The gradient boosting machine model for bacteremia had a significantly higher area under the receiver operating characteristic curve (0.78 [95% CI 0.77-0.78]) than the logistic regression model (0.73 [0.72-0.74]) (p < 0.001). The model identified a high-risk group with an occurrence rate of bacteremia more than 30 times that of the low-risk group (27.4% vs 0.9%; p < 0.001). Using the low-risk cut-off, the model identifies bacteremia with 98.7% sensitivity. The gradient boosting machine model for fungemia had high discrimination (area under the receiver operating characteristic curve 0.88 [95% CI 0.86-0.90]). The high-risk fungemia group had 252 fungemic cultures compared with one fungemic culture in the low-risk group (5.0% vs 0.02%; p < 0.001). Further, the high-risk group had a mortality rate 60 times higher than the low-risk group (28.2% vs 0.4%; p < 0.001). CONCLUSIONS: Our novel models identified patients at low and high risk for bacteremia and fungemia using routinely collected electronic health record data. Further research is needed to evaluate the cost-effectiveness and impact of model implementation in clinical practice.
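A sketch of the temporal 80/20 partition and gradient boosting setup described in the methods, on synthetic stand-in data (the real feature set and preprocessing are not shown here):

```python
# Temporal split + gradient boosting, mirroring the study design in miniature.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 20000
df = pd.DataFrame({
    "admission_time": np.sort(rng.uniform(0, 1, n)),  # stand-in timestamps
    "temperature": rng.normal(37.2, 0.8, n),
    "wbc": rng.normal(9, 3, n),
    "lactate": rng.normal(1.5, 0.8, n),
})
risk = 0.4 * (df["temperature"] - 37.2) + 0.3 * (df["lactate"] - 1.5)
df["bacteremia"] = rng.random(n) < 1 / (1 + np.exp(-(risk - 2.5)))

cut = int(n * 0.8)                        # earliest 80% for development
train, test = df.iloc[:cut], df.iloc[cut:]
features = ["temperature", "wbc", "lactate"]

gbm = GradientBoostingClassifier().fit(train[features], train["bacteremia"])
probs = gbm.predict_proba(test[features])[:, 1]
print("AUROC:", roc_auc_score(test["bacteremia"], probs))
# Risk strata would then be defined by low- and high-risk probability cut-offs.
```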


Subject(s)
Bacteremia/diagnosis , Electronic Health Records/statistics & numerical data , Fungemia/diagnosis , Machine Learning , Aged , Bacteremia/blood , Bacteremia/etiology , Bacteremia/microbiology , Blood Culture , Female , Fungemia/blood , Fungemia/etiology , Fungemia/microbiology , Hospitalization/statistics & numerical data , Humans , Male , Middle Aged , Models, Statistical , Reproducibility of Results , Retrospective Studies , Risk Factors
10.
JAMA Netw Open ; 3(8): e2012892, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32780123

ABSTRACT

Importance: Acute kidney injury (AKI) is associated with increased morbidity and mortality in hospitalized patients. Current methods to identify patients at high risk of AKI are limited, and few prediction models have been externally validated. Objective: To internally and externally validate a machine learning risk score to detect AKI in hospitalized patients. Design, Setting, and Participants: This diagnostic study included 495 971 adult hospital admissions at the University of Chicago (UC) from 2008 to 2016 (n = 48 463), at Loyola University Medical Center (LUMC) from 2007 to 2017 (n = 200 613), and at NorthShore University Health System (NUS) from 2006 to 2016 (n = 246 895) with serum creatinine (SCr) measurements. Patients with an SCr concentration at admission greater than 3.0 mg/dL, with a prior diagnostic code for chronic kidney disease stage 4 or higher, or who received kidney replacement therapy within 48 hours of admission were excluded. A simplified version of a previously published gradient boosted machine AKI prediction algorithm was used; it was validated internally among patients at UC and externally among patients at NUS and LUMC. Main Outcomes and Measures: Prediction of Kidney Disease Improving Global Outcomes SCr-defined stage 2 AKI within a 48-hour interval was the primary outcome. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC). Results: The study included 495 971 adult admissions (mean [SD] age, 63 [18] years; 87 689 [17.7%] African American; and 266 866 [53.8%] women) across 3 health systems. The development of stage 2 or higher AKI occurred in 15 664 of 48 463 patients (3.4%) in the UC cohort, 5711 of 200 613 (2.8%) in the LUMC cohort, and 3499 of 246 895 (1.4%) in the NUS cohort. In the UC cohort, 332 patients (0.7%) required kidney replacement therapy compared with 672 patients (0.3%) in the LUMC cohort and 440 patients (0.2%) in the NUS cohort. The AUCs for predicting at least stage 2 AKI in the next 48 hours were 0.86 (95% CI, 0.86-0.86) in the UC cohort, 0.85 (95% CI, 0.84-0.85) in the LUMC cohort, and 0.86 (95% CI, 0.86-0.86) in the NUS cohort. The AUCs for receipt of kidney replacement therapy within 48 hours were 0.96 (95% CI, 0.96-0.96) in the UC cohort, 0.95 (95% CI, 0.94-0.95) in the LUMC cohort, and 0.95 (95% CI, 0.94-0.95) in the NUS cohort. In time-to-event analysis, a probability cutoff of at least 0.057 predicted the onset of stage 2 AKI a median (IQR) of 27 (6.5-93) hours before the eventual doubling in SCr concentrations in the UC cohort, 34.5 (19-85) hours in the NUS cohort, and 39 (19-108) hours in the LUMC cohort. Conclusions and Relevance: In this study, the machine learning algorithm demonstrated excellent discrimination in both internal and external validation, supporting its generalizability and potential as a clinical decision support tool to improve AKI detection and outcomes.
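The reported lead times follow from finding, per patient, the first prediction that crosses the 0.057 cut-off before the event; an illustrative computation on a hypothetical patient-hour layout:

```python
# Lead-time sketch: hours between the first risk prediction crossing the
# published 0.057 cut-off and the eventual stage 2 AKI event.
import pandas as pd

preds = pd.DataFrame({           # hypothetical one-row-per-patient-hour data
    "patient": [1, 1, 1, 2, 2],
    "hours_before_aki": [50, 30, 10, 40, 5],
    "predicted_risk": [0.01, 0.08, 0.20, 0.03, 0.09],
})
alerts = preds[preds["predicted_risk"] >= 0.057]
# Earliest crossing per patient = largest remaining lead time.
lead = alerts.groupby("patient")["hours_before_aki"].max()
print("median lead time (h):", lead.median())
```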


Subject(s)
Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Machine Learning , Risk Assessment/methods , Adult , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Models, Statistical , ROC Curve , Retrospective Studies , Risk Factors
11.
JAMA Netw Open ; 3(5): e205191, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32427324

ABSTRACT

Importance: Risk scores used in early warning systems exist for general inpatients and patients with suspected infection outside the intensive care unit (ICU), but their relative performance is incompletely characterized. Objective: To compare the performance of tools used to determine points-based risk scores among all hospitalized patients, including those with and without suspected infection, for identifying those at risk for death and/or ICU transfer. Design, Setting, and Participants: In a cohort design, a retrospective analysis of prospectively collected data was conducted in 21 California and 7 Illinois hospitals between 2006 and 2018 among adult inpatients outside the ICU using points-based scores from 5 commonly used tools: National Early Warning Score (NEWS), Modified Early Warning Score (MEWS), Between the Flags (BTF), Quick Sequential Sepsis-Related Organ Failure Assessment (qSOFA), and Systemic Inflammatory Response Syndrome (SIRS). Data analysis was conducted from February 2019 to January 2020. Main Outcomes and Measures: Risk model discrimination was assessed in each state for predicting in-hospital mortality and the combined outcome of ICU transfer or mortality with area under the receiver operating characteristic curves (AUCs). Stratified analyses were also conducted based on suspected infection. Results: The study included 773 477 hospitalized patients in California (mean [SD] age, 65.1 [17.6] years; 416 605 women [53.9%]) and 713 786 hospitalized patients in Illinois (mean [SD] age, 61.3 [19.9] years; 384 830 women [53.9%]). The NEWS exhibited the highest discrimination for mortality (AUC, 0.87; 95% CI, 0.87-0.87 in California vs AUC, 0.86; 95% CI, 0.85-0.86 in Illinois), followed by the MEWS (AUC, 0.83; 95% CI, 0.83-0.84 in California vs AUC, 0.84; 95% CI, 0.84-0.85 in Illinois), qSOFA (AUC, 0.78; 95% CI, 0.78-0.79 in California vs AUC, 0.78; 95% CI, 0.77-0.78 in Illinois), SIRS (AUC, 0.76; 95% CI, 0.76-0.76 in California vs AUC, 0.76; 95% CI, 0.75-0.76 in Illinois), and BTF (AUC, 0.73; 95% CI, 0.73-0.73 in California vs AUC, 0.74; 95% CI, 0.73-0.74 in Illinois). At specific decision thresholds, the NEWS outperformed the SIRS and qSOFA at all 28 hospitals either by reducing the percentage of at-risk patients who need to be screened by 5% to 20% or increasing the percentage of adverse outcomes identified by 3% to 25%. Conclusions and Relevance: In all hospitalized patients evaluated in this study, including those meeting criteria for suspected infection, the NEWS appeared to display the highest discrimination. Our results suggest that, among commonly used points-based scoring systems, determining the NEWS for inpatient risk stratification could identify patients with and without infection at high risk of mortality.
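Among the compared tools, qSOFA is simple enough to compute directly from its published criteria: one point each for respiratory rate of 22/min or more, systolic blood pressure of 100 mm Hg or less, and altered mentation (GCS below 15), with a score of 2 or more conventionally treated as positive. A minimal sketch:

```python
# Direct implementation of the public qSOFA point score.
def qsofa(resp_rate: float, sys_bp: float, gcs: int) -> int:
    """One point each for RR >= 22/min, SBP <= 100 mm Hg, GCS < 15."""
    return int(resp_rate >= 22) + int(sys_bp <= 100) + int(gcs < 15)

# A score of 2 or more is the conventional "positive" threshold.
print(qsofa(resp_rate=24, sys_bp=95, gcs=15))  # -> 2
```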


Subject(s)
Early Warning Score , Hospital Mortality , Hospitalization/statistics & numerical data , Infections/mortality , Intensive Care Units/statistics & numerical data , Patient Transfer/statistics & numerical data , Aged , California/epidemiology , Female , Humans , Illinois/epidemiology , Infections/diagnosis , Infections/epidemiology , Length of Stay/statistics & numerical data , Male , Middle Aged , Retrospective Studies , Risk Assessment , Risk Factors , Sensitivity and Specificity
12.
Am J Respir Crit Care Med ; 200(3): 327-335, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30789749

ABSTRACT

Rationale: Sepsis is a heterogeneous syndrome, and identifying clinically relevant subphenotypes is essential. Objectives: To identify novel subphenotypes in hospitalized patients with infection using longitudinal temperature trajectories. Methods: In the model development cohort, inpatient admissions meeting criteria for infection in the emergency department and receiving antibiotics within 24 hours of presentation were included. Temperature measurements within the first 72 hours were compared between survivors and nonsurvivors. Group-based trajectory modeling was performed to identify temperature trajectory groups, and patient characteristics and outcomes were compared between the groups. The model was then externally validated at a second hospital using the same inclusion criteria. Measurements and Main Results: A total of 12,413 admissions were included in the development cohort, and 19,053 were included in the validation cohort. In the development cohort, four temperature trajectory groups were identified: "hyperthermic, slow resolvers" (n = 1,855; 14.9% of the cohort); "hyperthermic, fast resolvers" (n = 2,877; 23.2%); "normothermic" (n = 4,067; 32.8%); and "hypothermic" (n = 3,614; 29.1%). The hypothermic subjects were the oldest and had the most comorbidities, the lowest levels of inflammatory markers, and the highest in-hospital mortality rate (9.5%). The hyperthermic, slow resolvers were the youngest and had the fewest comorbidities, the highest levels of inflammatory markers, and a mortality rate of 5.1%. The hyperthermic, fast resolvers had the lowest mortality rate (2.9%). Similar trajectory groups, patient characteristics, and outcomes were found in the validation cohort. Conclusions: We identified and validated four novel subphenotypes of patients with infection, with significant variability in inflammatory markers and outcomes.
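Group-based trajectory modeling is typically fit with specialized mixture-model software; as a rough stand-in, clustering temperature curves resampled to a common 72-hour grid conveys the idea. This sketch uses a Gaussian mixture on synthetic data, an assumption rather than the authors' method:

```python
# Approximate trajectory grouping with a Gaussian mixture over curves.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
hours = np.arange(0, 72, 4)                    # common 72-hour grid
n = 500
base = rng.choice([36.0, 36.8, 37.5, 38.8], size=n)  # latent group means
curves = base[:, None] + rng.normal(0, 0.3, (n, hours.size))

gmm = GaussianMixture(n_components=4, random_state=0).fit(curves)
groups = gmm.predict(curves)
for g in range(4):
    mask = groups == g
    print(f"group {g}: mean temp {curves[mask].mean():.2f}, "
          f"share {mask.mean():.2f}")
```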


Subject(s)
Body Temperature , Fever/diagnosis , Fever/etiology , Sepsis/complications , Sepsis/mortality , Aged , Cohort Studies , Female , Fever/therapy , Hospital Mortality , Hospitalization , Humans , Male , Middle Aged , Sepsis/therapy , Time Factors
13.
Cancer Immunol Immunother ; 66(1): 63-75, 2017 01.
Article in English | MEDLINE | ID: mdl-27787577

ABSTRACT

We discuss an immunotherapeutic strategy supporting anti-tumor activity toward malignancies overexpressing ganglioside D3 (GD3). GD3 can be targeted by NKT cells when derived moieties are presented in the context of CD1d. NKT cells can support anti-tumor responses by secreting inflammatory cytokines and through cytotoxicity toward CD1d+GD3+ tumors. To overexpress GD3, we generated expression vector DNA and an adenoviral vector encoding the enzyme responsible for generating GD3 from its ubiquitous precursor GM3. We show that DNA encoding α-N-acetyl-neuraminide α-2,8-sialyltransferase 1 (SIAT8) introduced by gene gun vaccination in vivo leads to overexpression of GD3 and delays tumor growth. Delayed tumor growth is dependent on CD1d expression by host immune cells, as shown in experiments using CD1d knockout mice. A trend toward greater NKT cell populations among tumor-infiltrating lymphocytes is associated with SIAT8 vaccination. A single adenoviral vaccination induced anti-tumor activity similar to repeated vaccination with naked DNA. Here, greater NKT tumor infiltrates were accompanied by marked overexpression of IL-17 in the tumor, later switching to IL-4. Our results suggest that a single intramuscular adenoviral vaccination induces overexpression of GD3 by antigen-presenting cells at the injection site, recruiting NKT cells that provide an inflammatory anti-tumor environment. We propose that adenoviral SIAT8 (AdV-SIAT8) can slow the growth of GD3-expressing tumors in patients.


Subject(s)
Gangliosides/biosynthesis , Melanoma, Experimental/immunology , Melanoma/immunology , Sialyltransferases/immunology , Animals , Biolistics , Cell Line, Tumor , Gangliosides/immunology , HEK293 Cells , Humans , Melanoma/enzymology , Melanoma/therapy , Melanoma, Experimental/enzymology , Melanoma, Experimental/therapy , Mice , Mice, Inbred C57BL , Mice, Knockout , Sialyltransferases/genetics , Vaccines, DNA/immunology
14.
Am J Pathol ; 183(1): 226-34, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23665200

ABSTRACT

Tumors that develop in lymphangioleiomyomatosis (LAM) as a consequence of biallelic loss of TSC1 or TSC2 gene function express melanoma differentiation antigens. However, the percentage of LAM cells expressing these melanosomal antigens is limited. Here, we report the overexpression of ganglioside D3 (GD3) in LAM. GD3 is a tumor-associated antigen otherwise found in melanoma and neuroendocrine tumors; normal expression is largely restricted to neuronal cells in the brain. We also observed markedly reduced serum antibody titers to GD3, which may allow a population of GD3-expressing LAM cells to expand within patients. This is supported by the demonstrated sensitivity of cultured LAM cells to complement-mediated cytotoxicity via GD3 antibodies. GD3 can serve as a natural killer T (NKT) cell antigen when presented on CD1d molecules expressed on professional antigen-presenting cells. Although CD1d-expressing monocyte derivatives were present in situ, enhanced NKT-cell recruitment to LAM lung was not observed. Cultured LAM cells retained surface expression of GD3 over several passages and also expressed CD1d, implying that infiltrating NKT cells can be directly cytotoxic toward LAM lung lesions. Immunization with antibodies to GD3 may thus be therapeutic in LAM, and enhancement of existing NKT-cell infiltration may be effective to further improve antitumor responses. Overall, we hereby establish GD3 as a suitable target for immunotherapy of LAM.


Subject(s)
Biomarkers, Tumor/metabolism , Gangliosides/metabolism , Lung Neoplasms/metabolism , Lymphangioleiomyomatosis/metabolism , Animals , Antigens, CD1d/metabolism , Biomarkers, Tumor/immunology , Case-Control Studies , Enzyme-Linked Immunosorbent Assay , Gangliosides/immunology , Humans , Lung/immunology , Lung/metabolism , Lung/pathology , Lung Neoplasms/immunology , Lung Neoplasms/pathology , Lymphangioleiomyomatosis/immunology , Lymphangioleiomyomatosis/pathology , Mice , Natural Killer T-Cells/metabolism , Tumor Cells, Cultured
15.
Am J Respir Cell Mol Biol ; 46(1): 1-5, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21940815

ABSTRACT

Lymphangioleiomyomatosis (LAM) leads to hyperproliferation of abnormal smooth muscle cells in the lungs, associated with diffuse pulmonary parenchymal cyst formation and progressive dyspnea on exertion. The disease targets women of child-bearing age. Complications include pneumothoraces and chylous pleural effusions. Ten-year survival is estimated at 70%, and lung transplantation remains the only validated treatment. LAM cells have been observed to express markers associated with melanocytic differentiation, including gp100 and MART-1, as well as other melanocytic markers. The same proteins are targeted by T cells infiltrating melanoma tumors as well as by T cells infiltrating autoimmune vitiligo skin, and these antigens are regarded as relatively immunogenic. Consequently, vaccines targeting these and other immunogenic melanocyte differentiation proteins have been developed for melanoma. Preliminary data showing susceptibility of LAM cells to melanoma-derived T cells suggest that vaccines targeting melanosomal antigens can be successful in treating LAM.


Subject(s)
Lung Neoplasms/immunology , Lung Neoplasms/therapy , Lymphangioleiomyomatosis/immunology , Lymphangioleiomyomatosis/therapy , Biomarkers, Tumor/immunology , Humans , Immunotherapy/methods , Lung Neoplasms/complications , Lung Neoplasms/pathology , Lung Transplantation/methods , Lymphangioleiomyomatosis/complications , Lymphangioleiomyomatosis/pathology