1.
Br J Anaesth; 123(6): 877-886, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31627890

ABSTRACT

BACKGROUND: Rapid, preoperative identification of patients with the highest risk for medical complications is necessary to ensure that limited infrastructure and human resources are directed towards those most likely to benefit. Existing risk scores either lack specificity at the patient level or utilise the American Society of Anesthesiologists (ASA) physical status classification, which requires a clinician to review the chart. METHODS: We report on the use of machine learning algorithms, specifically random forests, to create a fully automated score that predicts postoperative in-hospital mortality based solely on structured data available at the time of surgery. Electronic health record data from 53 097 surgical patients (2.01% mortality rate) who underwent general anaesthesia between April 1, 2013 and December 10, 2018 in a large US academic medical centre were used to extract 58 preoperative features. RESULTS: Using a random forest classifier, we found that automatically obtained preoperative features (area under the curve [AUC] of 0.932, 95% confidence interval [CI] 0.910-0.951) outperform Preoperative Score to Predict Postoperative Mortality (POSPOM) scores (AUC of 0.660, 95% CI 0.598-0.722), Charlson comorbidity scores (AUC of 0.742, 95% CI 0.658-0.812), and ASA physical status (AUC of 0.866, 95% CI 0.829-0.897). Including the ASA physical status with the preoperative features achieves an AUC of 0.936 (95% CI 0.917-0.955). CONCLUSIONS: This automated score outperforms the ASA physical status score, the Charlson comorbidity score, and the POSPOM score for predicting in-hospital mortality. Additionally, we integrate this score with a previously published postoperative score to demonstrate the extent to which patient risk changes during the perioperative period.
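As a concrete illustration of the modelling setup described above, the following Python sketch trains a random forest on structured preoperative features and scores it by ROC AUC. The synthetic data, split, and hyperparameters are assumptions for illustration, not the authors' actual pipeline or results.

```python
# Hedged sketch: random forest on structured preoperative features predicting
# in-hospital mortality, evaluated by ROC AUC. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 53_097, 58            # cohort size and feature count from the abstract
X = rng.normal(size=(n_patients, n_features))  # stand-in for the 58 preoperative features
y = rng.binomial(1, 0.0201, size=n_patients)   # ~2.01% in-hospital mortality rate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(f"held-out ROC AUC: {roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]):.3f}")
```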


Subject(s)
Electronic Health Records/statistics & numerical data, Health Status, Hospital Mortality, Machine Learning, Postoperative Complications/diagnosis, Adolescent, Adult, Aged, Aged 80 and over, California, Comorbidity, Female, Humans, Male, Middle Aged, Preoperative Period, Risk Assessment, Risk Factors, Young Adult
2.
PLOS Digit Health; 2(2): e0000106, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36812608

ABSTRACT

Age-related Macular Degeneration (AMD) is a major cause of irreversible vision loss in individuals over 55 years old in the United States. One of the late-stage manifestations of AMD, and a major cause of vision loss, is the development of exudative macular neovascularization (MNV). Optical Coherence Tomography (OCT) is the gold standard to identify fluid at different levels within the retina. The presence of fluid is considered the hallmark of disease activity. Anti-vascular endothelial growth factor (anti-VEGF) injections can be used to treat exudative MNV. However, given the limitations of anti-VEGF treatment, such as the burdensome need for frequent visits and repeated injections to sustain efficacy, the limited durability of the treatment, and poor or no response in some patients, there is great interest in detecting early biomarkers associated with a higher risk for AMD progression to exudative forms, in order to optimize the design of early intervention clinical trials. The annotation of structural biomarkers on OCT B-scans is a laborious, complex and time-consuming process, and discrepancies between human graders can introduce variability into this assessment. To address this issue, a deep-learning model (SLIVER-net) was proposed, which could identify AMD biomarkers on structural OCT volumes with high precision and without human supervision. However, the validation was performed on a small dataset, and the true predictive power of these detected biomarkers in the context of a large cohort has not been evaluated. In this retrospective cohort study, we perform the largest-scale validation of these biomarkers to date. We also assess how these features, combined with other EHR data (demographics, comorbidities, etc.), affect and/or improve the prediction performance relative to known factors. Our hypothesis is that these biomarkers can be identified by a machine learning algorithm without human supervision, in a way that preserves their predictive nature. We test this hypothesis by building several machine learning models that utilize these machine-read biomarkers and assessing their added predictive power. We found not only that the machine-read OCT B-scan biomarkers are predictive of AMD progression, but also that our proposed combined OCT- and EHR-based algorithm outperforms the state-of-the-art solution on clinically relevant metrics and provides actionable information with the potential to improve patient care. In addition, it provides a framework for automated large-scale processing of OCT volumes, making it possible to analyze vast archives without human supervision.
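To illustrate how machine-read OCT biomarkers might be combined with EHR covariates in a single progression model, here is a hedged Python sketch. The column names, the synthetic data, and the choice of logistic regression are assumptions; the study's actual feature set and models are those described in the abstract above.

```python
# Hedged sketch: combine machine-read OCT biomarker scores with EHR covariates
# (demographics, comorbidities) to predict progression to exudative MNV.
# All column names and data are hypothetical placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
features = pd.DataFrame({
    "drusen_volume_score":  rng.random(n),          # machine-read OCT biomarker (hypothetical)
    "hyperreflective_foci": rng.integers(0, 2, n),  # machine-read OCT biomarker (hypothetical)
    "age":                  rng.normal(75, 8, n),   # EHR covariate
    "diabetes":             rng.integers(0, 2, n),  # EHR covariate
})
# synthetic outcome loosely tied to the features, for illustration only
logit = (2.0 * features["drusen_volume_score"] + 0.8 * features["hyperreflective_foci"]
         + 0.03 * (features["age"] - 75) - 1.5)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.25, stratify=y, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```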

3.
Ophthalmol Retina; 7(2): 118-126, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35995411

ABSTRACT

OBJECTIVE: To assess and validate a deep learning algorithm to automatically detect incomplete retinal pigment epithelial and outer retinal atrophy (iRORA) and complete retinal pigment epithelial and outer retinal atrophy (cRORA) in eyes with age-related macular degeneration. DESIGN: In a retrospective machine learning analysis, a deep learning model was trained to jointly classify the presence of iRORA and cRORA within a given B-scan. The algorithm was evaluated using 2 separate and independent datasets. PARTICIPANTS: OCT B-scan volumes from 71 patients with nonneovascular age-related macular degeneration captured at the Doheny-University of California Los Angeles Eye Centers, and the following 2 external OCT B-scan testing datasets: (1) University of Pennsylvania, University of Miami, and Case Western Reserve University and (2) Doheny Image Reading Research Laboratory. METHODS: The images were annotated by an experienced grader for the presence of iRORA and cRORA. A ResNet18 model was trained to classify these annotations for each B-scan using OCT volumes collected at the Doheny-University of California Los Angeles Eye Centers. The model was applied to the 2 testing datasets to assess out-of-sample model performance. MAIN OUTCOME MEASURES: Model performance was quantified in terms of area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). Sensitivity, specificity, and positive predictive value were also compared against additional clinician annotators. RESULTS: On an independently collected test set, consisting of 1117 volumes from the general population, the model predicted iRORA and cRORA presence within the entire volume with nearly perfect AUROC performance and AUPRC scores (iRORA, 0.61; 95% confidence interval [CI] [0.45, 0.82]; cRORA, 0.83; 95% CI [0.68, 0.95]). On another independently collected set, consisting of 60 OCT B-scans enriched for iRORA and cRORA lesions, the model achieved AUROC (iRORA: 0.68, 95% CI [0.54, 0.81]; cRORA: 0.84, 95% CI [0.75, 0.94]) and AUPRC (iRORA: 0.70, 95% CI [0.55, 0.86]; cRORA: 0.82, 95% CI [0.70, 0.93]) values. CONCLUSIONS: A deep learning model can accurately and precisely identify both iRORA and cRORA lesions within the OCT B-scan volume. The model can achieve sensitivity similar to that of human graders, which potentially obviates a laborious and time-consuming annotation process, and it could be developed into a diagnostic screening tool.
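The joint-classification setup described in METHODS can be sketched as follows: a ResNet18 backbone with a two-logit head (one logit each for iRORA and cRORA) trained as a multi-label problem. The input size, grayscale first layer, optimizer, and loss are illustrative assumptions rather than the study's exact configuration.

```python
# Hedged sketch: ResNet18 jointly classifying iRORA and cRORA presence per OCT B-scan.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                                   # no pretrained weights in this sketch
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # grayscale B-scans
model.fc = nn.Linear(model.fc.in_features, 2)                    # [iRORA logit, cRORA logit]

criterion = nn.BCEWithLogitsLoss()                               # independent sigmoid per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

b_scans = torch.randn(8, 1, 224, 224)                            # toy batch of 8 B-scans
labels = torch.randint(0, 2, (8, 2)).float()                     # per-scan iRORA/cRORA annotations

logits = model(b_scans)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(torch.sigmoid(logits[0]))                                  # predicted probabilities for one scan
```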


Subject(s)
Macular Degeneration, Retinal Degeneration, Humans, Retrospective Studies, Retinal Degeneration/pathology, Macular Degeneration/pathology, Retinal Pigment Epithelium/pathology, Machine Learning, Atrophy
4.
Res Sq; 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-38045283

ABSTRACT

We present SLIViT, a deep-learning framework that accurately measures disease-related risk factors in volumetric biomedical imaging, such as magnetic resonance imaging (MRI) scans, optical coherence tomography (OCT) scans, and ultrasound videos. To evaluate SLIViT, we applied it to five different datasets spanning these three data modalities and tackling seven learning tasks (including both classification and regression), and found that it consistently and significantly outperforms domain-specific state-of-the-art models, typically improving performance (ROC AUC or correlation) by 0.1-0.4. Notably, compared to existing approaches, SLIViT can be applied even when only a small number of annotated training samples is available, which is often a constraint in medical applications. When trained on fewer than 700 annotated volumes, SLIViT obtained accuracy comparable to that of trained clinical specialists while reducing annotation time by a factor of 5,000, demonstrating its utility for automating and expediting ongoing research and other practical clinical scenarios.

5.
NPJ Genom Med; 7(1): 50, 2022 Aug 25.
Article in English | MEDLINE | ID: mdl-36008412

ABSTRACT

Inference of clinical phenotypes is a fundamental task in precision medicine, and has therefore been heavily investigated in recent years in the context of electronic health records (EHR) using a large arsenal of machine learning techniques, as well as in the context of genetics using polygenic risk scores (PRS). In this work, we considered the epigenetic analog of PRS, methylation risk scores (MRS): linear combinations of methylation states. We measured methylation across a large cohort (n = 831) of diverse samples in the UCLA Health biobank, for which both genetic and complete EHR data are available. We constructed MRS for 607 phenotypes spanning diagnoses, clinical lab tests, and medication prescriptions. When added to a baseline set of predictive features, MRS significantly improved the imputation of 139 outcomes, whereas the PRS improved only 22 (median improvement for methylation of 10.74%, 141.52%, and 15.46% in medications, labs, and diagnosis codes, respectively, whereas genotypes improved only the labs, with a median increase of 18.42%). We added significant MRS to state-of-the-art EHR imputation methods that leverage the entire set of medical records, and found that including MRS as a medical feature in the algorithm significantly improves EHR imputation for 37% of the lab tests examined (median R² increase of 47.6%). Finally, we replicated several MRS in multiple external studies of methylation (minimum p-value of 2.72 × 10⁻⁷) and replicated 22 of 30 tested MRS internally in two separate cohorts of different ethnicities. Our publicly available results and weights show promise for methylation risk scores as clinical and scientific tools.
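Because an MRS is simply a linear combination of methylation states, the core computation can be shown in a few lines of Python. The CpG weights below are made up for illustration; in the study they are learned per phenotype, and the authors' weights are publicly available.

```python
# Hedged sketch: a methylation risk score (MRS) as a weighted sum of methylation
# beta values at selected CpG sites. Weights and data here are illustrative only.
import numpy as np

# methylation beta values for 3 individuals at 4 CpG sites (rows: individuals)
methylation = np.array([
    [0.82, 0.10, 0.55, 0.31],
    [0.75, 0.22, 0.60, 0.28],
    [0.40, 0.05, 0.20, 0.90],
])
weights = np.array([1.2, -0.7, 0.4, 2.1])   # hypothetical per-CpG effect sizes
intercept = -0.3

mrs = intercept + methylation @ weights     # one risk score per individual
print(mrs)
```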

6.
Genome Med; 14(1): 104, 2022 Sep 9.
Article in English | MEDLINE | ID: mdl-36085083

ABSTRACT

BACKGROUND: Large medical centers in urban areas, like Los Angeles, care for a diverse patient population and offer the potential to study the interplay between genetic ancestry and social determinants of health. Here, we explore the implications of genetic ancestry within the University of California, Los Angeles (UCLA) ATLAS Community Health Initiative, an ancestrally diverse biobank of genomic data linked with de-identified electronic health records (EHRs) of UCLA Health patients (N=36,736). METHODS: We quantify the extensive continental and subcontinental genetic diversity within the ATLAS data through principal component analysis, identity-by-descent, and genetic admixture. We assess the relationship between genetically inferred ancestry (GIA) and >1500 EHR-derived phenotypes (phecodes). Finally, we demonstrate the utility of genetic data linked with EHR to perform ancestry-specific and multi-ancestry genome- and phenome-wide scans across a broad set of disease phenotypes. RESULTS: We identify 5 continental-scale GIA clusters, including European American (EA), African American (AA), Hispanic Latino American (HL), South Asian American (SAA), and East Asian American (EAA) individuals, and 7 subcontinental GIA clusters within the EAA GIA corresponding to Chinese American, Vietnamese American, and Japanese American individuals. Although we broadly find that self-identified race/ethnicity (SIRE) is highly correlated with GIA, we still observe marked differences between the two, emphasizing that the populations defined by these two criteria are not analogous. We find a total of 259 significant associations between continental GIA and phecodes even after accounting for individuals' SIRE, demonstrating that for some phenotypes, GIA provides information not already captured by SIRE. GWAS identifies significant associations for liver disease in the 22q13.31 locus across the HL and EAA GIA groups (HL p-value=2.32×10⁻¹⁶, EAA p-value=6.73×10⁻¹¹). A subsequent PheWAS at the top SNP reveals significant associations with neurologic and neoplastic phenotypes specifically within the HL GIA group. CONCLUSIONS: Overall, our results explore the interplay between SIRE and GIA within a disease context and underscore the utility of studying the genomes of diverse individuals through biobank-scale genotyping linked with EHR-based phenotyping.
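The first step described in METHODS, principal component analysis of the genotype data to reveal continental-scale ancestry structure, can be sketched as follows. The matrix dimensions and standardization choices are assumptions for illustration, not the ATLAS pipeline itself.

```python
# Hedged sketch: PCA on a standardized genotype matrix (individuals x variants);
# the top principal components are what ancestry clusters are typically derived from.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
genotypes = rng.integers(0, 3, size=(500, 10_000)).astype(float)  # 0/1/2 allele counts (synthetic)

pcs = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(genotypes))
print(pcs.shape)  # (500, 10): top PCs used downstream to define genetically inferred ancestry clusters
```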


Subject(s)
Electronic Health Records, Public Health, Asian People, Biological Specimen Banks, Genomics, Humans
7.
Sci Rep; 11(1): 15755, 2021 Aug 3.
Article in English | MEDLINE | ID: mdl-34344934

ABSTRACT

In two-thirds of intensive care unit (ICU) patients and 90% of surgical patients, arterial blood pressure (ABP) is monitored non-invasively but intermittently using a blood pressure cuff. Since even a few minutes of hypotension increases the risk of mortality and morbidity, for the remaining (high-risk) patients ABP is measured continuously using invasive devices, and derived values are extracted from the recorded waveforms. However, since invasive monitoring is associated with major complications (infection, bleeding, thrombosis), the ideal ABP monitor should be both non-invasive and continuous. With large volumes of high-fidelity physiological waveforms, it may be possible today to impute a physiological waveform from other available signals. Currently, the state-of-the-art approaches for ABP imputation aim only at intermittent systolic and diastolic blood pressure imputation, and there is no method that imputes the continuous ABP waveform. Here, we developed a novel approach to impute the continuous ABP waveform non-invasively using two continuously monitored waveforms that are currently part of the standard of care, the electrocardiogram (ECG) and photoplethysmogram (PPG), by adapting a deep learning architecture designed for image segmentation. Using over 150,000 min of data collected at two separate health systems from 463 patients, we demonstrate that our model provides a highly accurate prediction of the continuous ABP waveform (root mean square error 5.823 mmHg, 95% CI 5.806-5.840), as well as the derived systolic (mean difference 2.398 ± 5.623 mmHg) and diastolic blood pressure (mean difference -2.497 ± 3.785 mmHg), compared with arterial line measurements. Our approach can potentially be used to measure blood pressure continuously and non-invasively for all patients in the acute care setting, without the need for any additional instrumentation beyond the current standard of care.
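As a rough illustration of the idea, the sketch below maps synchronized ECG and PPG windows to an ABP waveform with a small 1D encoder-decoder, in the spirit of image-segmentation architectures adapted to 1D signals. The layer sizes, window length, and loss are assumptions, not the authors' published architecture.

```python
# Hedged sketch: 1D encoder-decoder imputing the ABP waveform from ECG + PPG.
import torch
import torch.nn as nn

class WaveformImputer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="linear", align_corners=False),
            nn.Conv1d(64, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),             # one ABP sample per time step
        )

    def forward(self, x):                                # x: (batch, 2 channels [ECG, PPG], time)
        return self.decoder(self.encoder(x))

model = WaveformImputer()
ecg_ppg = torch.randn(4, 2, 1024)                        # 4 synthetic windows of 1024 samples
abp_pred = model(ecg_ppg)                                # (4, 1, 1024) predicted ABP waveform
abp_true = torch.randn(4, 1, 1024)                       # placeholder arterial-line reference
rmse = torch.sqrt(nn.functional.mse_loss(abp_pred, abp_true))
print(abp_pred.shape, rmse.item())
```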


Subject(s)
Arterial Pressure, Blood Pressure Determination/methods, Deep Learning, Hypertension/physiopathology, Hypotension/physiopathology, Intensive Care Units/statistics & numerical data, Pulse Wave Analysis, Cohort Studies, Female, Humans, Male, Middle Aged
8.
NPJ Digit Med; 4(1): 44, 2021 Mar 8.
Article in English | MEDLINE | ID: mdl-33686212

ABSTRACT

One of the core challenges in applying machine learning and artificial intelligence to medicine is the limited availability of annotated medical data. Unlike in other applications of machine learning, where an abundance of labeled data is available, the labeling and annotation of medical data and images require a major manual effort by expert clinicians, who do not have the time to annotate. In this work, we propose a new deep learning technique (SLIVER-net) to predict clinical features from 3-dimensional volumes using a limited number of manually annotated examples. SLIVER-net is based on transfer learning, where we borrow information about the structure and parameters of the network from publicly available large datasets. Since public volume data are scarce, we use 2D images and account for the 3-dimensional structure with a novel deep learning method that tiles the volume scans and then adds layers that leverage the 3D structure. To illustrate its utility, we apply SLIVER-net to predict risk factors for progression of age-related macular degeneration (AMD), a leading cause of blindness, from optical coherence tomography (OCT) volumes acquired from multiple sites. SLIVER-net successfully predicts these factors despite being trained with a relatively small number of annotated volumes (hundreds) and only dozens of positive training examples. Our empirical evaluation demonstrates that SLIVER-net significantly outperforms standard state-of-the-art deep learning techniques used for medical volumes, and that its performance generalizes, as it was validated on an external testing set. In a direct comparison with a clinician panel, we find that SLIVER-net also outperforms junior specialists and identifies AMD progression risk factors similarly to expert retina specialists.
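The tiling idea described above, arranging the 2D slices of a volume into a single montage so that a 2D backbone pretrained on natural images can be reused, can be sketched as follows. The grid shape and slice size are assumptions for illustration, not SLIVER-net's exact layout.

```python
# Hedged sketch: tile the B-scans of an OCT volume into one 2D montage for a 2D CNN backbone.
import numpy as np

def tile_volume(volume: np.ndarray, grid: tuple[int, int]) -> np.ndarray:
    """Arrange a (n_slices, H, W) volume into a (rows*H, cols*W) montage."""
    rows, cols = grid
    n, h, w = volume.shape
    assert n == rows * cols, "grid must cover every slice exactly once"
    return volume.reshape(rows, cols, h, w).transpose(0, 2, 1, 3).reshape(rows * h, cols * w)

volume = np.random.rand(16, 128, 128)   # toy OCT volume: 16 B-scans of 128x128 pixels
montage = tile_volume(volume, (4, 4))   # downstream: fed to a 2D backbone, plus 3D-aware layers
print(montage.shape)                    # (512, 512)
```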
