Results 1 - 20 of 56
1.
Anesthesiology; 137(5): 586-601, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-35950802

ABSTRACT

BACKGROUND: Postoperative hemodynamic deterioration among cardiac surgical patients can indicate or lead to adverse outcomes. Although prediction models for such events using electronic health records or physiologic waveform data have previously been described, their combined value remains incompletely defined. The authors hypothesized that models incorporating electronic health record and processed waveform signal data (electrocardiogram lead II, pulse plethysmography, arterial catheter tracing) would yield improved performance versus either modality alone. METHODS: Intensive care unit data were reviewed after elective adult cardiac surgical procedures at an academic center between 2013 and 2020. Model features included electronic health record features and physiologic waveforms. Tensor decomposition was used for waveform feature reduction. Machine learning-based prediction models included a 2013 to 2017 training set and a 2017 to 2020 temporal holdout test set. The primary outcome was a postoperative deterioration event, defined as a composite of low cardiac index of less than 2.0 L min-1 m-2, mean arterial pressure of less than 55 mmHg sustained for 120 min or longer, new or escalated inotrope/vasopressor infusion, epinephrine bolus of 1 mg or more, or intensive care unit mortality. Prediction models analyzed data 8 h before events. RESULTS: Among 1,555 cases, 185 (12%) experienced 276 deterioration events, most commonly including low cardiac index (7.0% of patients), new inotrope (1.9%), and sustained hypotension (1.4%). The best performing model on the 2013 to 2017 training set yielded a C-statistic of 0.803 (95% CI, 0.799 to 0.807), although performance was substantially lower in the 2017 to 2020 test set (0.709, 0.705 to 0.712). Test set performance of the combined model was greater than that of corresponding models limited to solely electronic health record features (0.641; 95% CI, 0.637 to 0.646) or waveform features (0.697; 95% CI, 0.693 to 0.701). CONCLUSIONS: Clinical deterioration prediction models combining electronic health record data and waveform data were superior to either modality alone, and performance of combined models was primarily driven by waveform data. Decreased performance of prediction models during temporal validation may be explained by data set shift, a core challenge of healthcare prediction modeling.
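As a rough illustration of how a composite deterioration label of this kind can be derived from ICU data, the sketch below assumes a per-case pandas DataFrame with hypothetical column names; the study's time-resolved criteria (for example, the 120-minute sustained-hypotension window) are reduced to precomputed summaries here.

```python
# Minimal sketch of deriving a composite deterioration label; column names are
# hypothetical and the sustained-hypotension duration is assumed precomputed.
import pandas as pd

def label_deterioration(df: pd.DataFrame) -> pd.Series:
    """Return True for cases meeting any component of the composite outcome."""
    low_ci = df["cardiac_index"] < 2.0                   # L min-1 m-2
    sustained_hypotension = df["map_below_55_minutes"] >= 120
    new_inotrope = df["new_or_escalated_inotrope"]       # boolean flag
    epi_bolus = df["epinephrine_bolus_mg"] >= 1.0
    icu_death = df["icu_mortality"]                      # boolean flag
    return low_ci | sustained_hypotension | new_inotrope | epi_bolus | icu_death

# Toy example
cases = pd.DataFrame({
    "cardiac_index": [2.4, 1.8],
    "map_below_55_minutes": [0, 130],
    "new_or_escalated_inotrope": [False, False],
    "epinephrine_bolus_mg": [0.0, 0.0],
    "icu_mortality": [False, False],
})
print(label_deterioration(cases))   # -> False, True
```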


Subjects
Cardiac Surgical Procedures, Hypotension, Humans, Adult, Electronic Health Records, Machine Learning, Epinephrine
2.
BMC Med Imaging; 22(1): 39, 2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35260105

ABSTRACT

BACKGROUND: Both early detection and severity assessment of liver trauma are critical for optimal triage and management of trauma patients. Current trauma protocols utilize computed tomography (CT) assessment of injuries in a subjective and qualitative (vs. quantitative) fashion, shortcomings which could both be addressed by automated computer-aided systems that are capable of generating real-time reproducible and quantitative information. This study outlines an end-to-end pipeline to calculate the percentage of the liver parenchyma disrupted by trauma, an important component of the American Association for the Surgery of Trauma (AAST) liver injury scale, the primary tool to assess liver trauma severity at CT. METHODS: This framework comprises deep convolutional neural networks that first generate initial masks of both liver parenchyma (including normal and affected liver) and regions affected by trauma using three-dimensional contrast-enhanced CT scans. Next, during the post-processing step, human domain knowledge about the location and intensity distribution of liver trauma is integrated into the model to avoid false positive regions. After generating the liver parenchyma and trauma masks, the corresponding volumes are calculated. Liver parenchymal disruption is then computed as the volume of the liver parenchyma that is disrupted by trauma. RESULTS: The proposed model was trained and validated on an internal dataset from the University of Michigan Health System (UMHS) including 77 CT scans (34 with and 43 without liver parenchymal trauma). The Dice/recall/precision coefficients of the proposed segmentation models are 96.13/96.00/96.35% and 51.21/53.20/56.76%, respectively, in segmenting liver parenchyma and liver trauma regions. In volume-based severity analysis, the proposed model yields a linear regression relation of 0.95 in estimating the percentage of liver parenchyma disrupted by trauma. The model shows accurate performance in avoiding false positives for patients without any liver parenchymal trauma. These results indicate that the model is generalizable to patients with pre-existing liver conditions, including fatty livers and congestive hepatopathy. CONCLUSION: The proposed algorithms are able to accurately segment the liver and the regions affected by trauma. This pipeline demonstrates accurate performance in estimating the percentage of liver parenchyma that is affected by trauma. Such a system can aid critical care medical personnel by providing a reproducible quantitative assessment of liver trauma as an alternative to the sometimes subjective AAST grading system that is used currently.
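A minimal sketch of the final volume-based step is shown below, assuming binary 3-D masks are already available; the CNN segmentation and the domain-knowledge post-processing described above are not reproduced.

```python
# Sketch: percentage of liver parenchyma volume disrupted by trauma, computed
# from binary 3-D masks (1 = voxel inside the region). Voxel spacing is assumed.
import numpy as np

def disruption_percentage(liver_mask: np.ndarray,
                          trauma_mask: np.ndarray,
                          voxel_volume_mm3: float = 1.0) -> float:
    liver_volume = liver_mask.sum() * voxel_volume_mm3
    trauma_volume = (trauma_mask & liver_mask).sum() * voxel_volume_mm3
    return 100.0 * trauma_volume / max(liver_volume, 1e-9)

# Toy example: a synthetic liver with a small traumatized slab
liver = np.zeros((32, 32, 32), dtype=bool); liver[8:24, 8:24, 8:24] = True
trauma = np.zeros_like(liver);              trauma[8:24, 8:24, 8:10] = True
print(f"{disruption_percentage(liver, trauma):.1f}% of parenchyma disrupted")
```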


Subjects
Deep Learning, Humans, Computer-Assisted Image Processing/methods, Liver/diagnostic imaging, Neural Networks (Computer), X-Ray Computed Tomography
3.
BMC Med Imaging; 22(1): 10, 2022 Jan 19.
Article in English | MEDLINE | ID: mdl-35045816

ABSTRACT

BACKGROUND: Automated segmentation of coronary arteries is a crucial step for computer-aided coronary artery disease (CAD) diagnosis and treatment planning. Correct delineation of the coronary artery is challenging in X-ray coronary angiography (XCA) due to the low signal-to-noise ratio and confounding background structures. METHODS: A novel ensemble framework for coronary artery segmentation in XCA images is proposed, which utilizes deep learning and filter-based features to construct models using the gradient boosting decision tree (GBDT) and deep forest classifiers. The proposed method was trained and tested on 130 XCA images. For each pixel of interest in the XCA images, a 37-dimensional feature vector was constructed based on (1) the statistics of multi-scale filtering responses in the morphological, spatial, and frequency domains; and (2) the feature maps obtained from trained deep neural networks. The performance of these models was compared with those of common deep neural networks on metrics including precision, sensitivity, specificity, F1 score, AUROC (the area under the receiver operating characteristic curve), and IoU (intersection over union). RESULTS: With hybrid under-sampling methods, the best performing GBDT model achieved a mean F1 score of 0.874, AUROC of 0.947, sensitivity of 0.902, and specificity of 0.992; while the best performing deep forest model obtained a mean F1 score of 0.867, AUROC of 0.95, sensitivity of 0.867, and specificity of 0.993. Compared with the evaluated deep neural networks, both models had better or comparable performance for all evaluated metrics with lower standard deviations over the test images. CONCLUSIONS: The proposed feature-based ensemble method outperformed common deep convolutional neural networks in most performance metrics while yielding more consistent results. Such a method can be used to facilitate the assessment of stenosis and improve the quality of care in patients with CAD.
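The following sketch illustrates the general shape of the per-pixel approach described above: multi-scale filter responses stacked into a feature vector and fed to a gradient boosting classifier. The filters, scales, and deep-feature maps here are simplified placeholders, not the paper's 37-dimensional feature set.

```python
# Per-pixel classification sketch: multi-scale filter responses as features,
# gradient boosting as the classifier. Image and labels are synthetic stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.ensemble import GradientBoostingClassifier

def pixel_features(image: np.ndarray, scales=(1, 2, 4)) -> np.ndarray:
    """Stack multi-scale smoothing and Laplacian-of-Gaussian responses per pixel."""
    maps = [image]
    for s in scales:
        maps.append(gaussian_filter(image, s))
        maps.append(gaussian_laplace(image, s))
    return np.stack([m.ravel() for m in maps], axis=1)

rng = np.random.default_rng(0)
image = rng.random((64, 64))                  # stand-in for an XCA frame
vessel_mask = np.zeros((64, 64), dtype=int)   # stand-in ground-truth vessel labels
vessel_mask[:, 30:34] = 1

X, y = pixel_features(image), vessel_mask.ravel()
clf = GradientBoostingClassifier(n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```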


Subjects
Coronary Angiography/methods, Coronary Disease/diagnostic imaging, Coronary Vessels/diagnostic imaging, Deep Learning, Computer-Assisted Image Processing/methods, Humans
4.
BMC Med Inform Decis Mak; 22(1): 203, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-35915430

ABSTRACT

BACKGROUND: Traumatic Brain Injury (TBI) is a common condition with potentially severe long-term complications, the prediction of which remains challenging. Machine learning (ML) methods have been used previously to help physicians predict long-term outcomes of TBI so that appropriate treatment plans can be adopted. However, many ML techniques are "black box": it is difficult for humans to understand the decisions made by the model, with post-hoc explanations only identifying isolated relevant factors rather than combinations of factors. Moreover, such models often rely on many variables, some of which might not be available at the time of hospitalization. METHODS: In this study, we apply an interpretable neural network model based on tropical geometry to predict unfavorable outcomes at six months from hospitalization in TBI patients, based on information available at the time of admission. RESULTS: The proposed method is compared to established machine learning methods (XGBoost, Random Forest, and SVM), achieving comparable performance in terms of area under the receiver operating characteristic curve (AUC): 0.799 for the proposed method vs. 0.810 for the best black-box model. Moreover, the proposed method allows for the extraction of simple, human-understandable rules that explain the model's predictions and can be used as general guidelines by clinicians to inform treatment decisions. CONCLUSIONS: The classification results for the proposed model are comparable with those of traditional ML methods. However, our model is interpretable, and it allows the extraction of intelligible rules. These rules can be used to determine relevant factors in assessing TBI outcomes and can be used in situations when not all necessary factors are known to inform the full model's decision.
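For readers unfamiliar with the tropical-geometry framing, the sketch below shows the basic max-plus ("tropical") operation on which such networks are built, where multiplication is replaced by addition and summation by a maximum. It is only illustrative of the algebra; the paper's actual rule-extracting architecture is not reproduced here.

```python
# Illustrative max-plus (tropical) layer: the tropical analogue of a
# matrix-vector product, y_k = max_j (x_j + W[k, j]).
import numpy as np

def tropical_layer(x: np.ndarray, W: np.ndarray) -> np.ndarray:
    return (x[None, :] + W).max(axis=1)

x = np.array([0.2, 1.5, -0.3])          # e.g., scaled admission variables
W = np.array([[0.0, -1.0, 2.0],
              [1.0,  0.5, 0.0]])
print(tropical_layer(x, W))             # two tropical "activations"
```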


Subjects
Traumatic Brain Injuries, Neural Networks (Computer), Traumatic Brain Injuries/diagnosis, Traumatic Brain Injuries/therapy, Humans, Machine Learning, Prognosis, ROC Curve
5.
Am Heart J; 241: 1-5, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34157300

ABSTRACT

Symptoms in atrial fibrillation are generally assumed to correspond to heart rhythm; however, patient affect (the experience of feelings, emotion, or mood) is known to frequently modulate how patients report symptoms, but this has not been studied in atrial fibrillation. In this study, we investigated the relationship between affect, symptoms, and heart rhythm in patients with paroxysmal or persistent atrial fibrillation. We found that the presence of negative affect portended reporting of more severe symptoms to the same or a greater extent than heart rhythm.


Subjects
Affective Symptoms, Atrial Fibrillation, Cost of Illness, Ambulatory Electrocardiography/methods, Quality of Life, Symptom Assessment, Affect/physiology, Affective Symptoms/diagnosis, Affective Symptoms/physiopathology, Aged, Atrial Fibrillation/physiopathology, Atrial Fibrillation/psychology, Chest Pain/etiology, Chest Pain/psychology, Correlation of Data, Dyspnea/etiology, Dyspnea/psychology, Emotions/physiology, Female, Health Behavior, Humans, Male, Symptom Assessment/methods, Symptom Assessment/statistics & numerical data
6.
Gastrointest Endosc; 93(3): 728-736.e1, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32810479

ABSTRACT

BACKGROUND AND AIMS: Endoscopy is essential for disease assessment in ulcerative colitis (UC), but subjectivity threatens accuracy and precision. We aimed to pilot a fully automated video analysis system for grading endoscopic disease in UC. METHODS: A developmental set of high-resolution UC endoscopic videos was assigned Mayo endoscopic scores (MESs) provided by 2 experienced reviewers. Video still-image stacks were annotated for image quality (informativeness) and MES. Models to predict still-image informativeness and disease severity were trained using convolutional neural networks. A template-matching grid search was used to estimate whole-video MESs provided by human reviewers using predicted still-image MES proportions. The automated whole-video MES workflow was tested using unaltered endoscopic videos from a multicenter UC clinical trial. RESULTS: The developmental high-resolution and testing multicenter clinical trial sets contained 51 and 264 videos, respectively. The still-image informativeness classifier had excellent performance with a sensitivity of 0.902 and specificity of 0.870. In high-resolution videos, fully automated methods correctly predicted MESs in 78% (41 of 50, κ = 0.84) of videos. In external clinical trial videos, reviewers agreed on MESs in 82.8% (140 of 169) of videos (κ = 0.78). Automated and central reviewer scoring agreement occurred in 57.1% of videos (κ = 0.59), but improved to 69.5% (107 of 169) when accounting for reviewer disagreement. Automated MES grading of clinical trial videos (often low resolution) correctly distinguished remission (MES 0,1) versus active disease (MES 2,3) in 83.7% (221 of 264) of videos. CONCLUSIONS: These early results support the potential for artificial intelligence to provide endoscopic disease grading in UC that approximates the scoring of experienced reviewers.
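A hedged sketch of the whole-video scoring idea follows: frame-level MES predictions are summarized as class proportions and matched against per-grade template proportion vectors. The template values below are invented for illustration; the study estimated them via a grid search that is not reproduced here.

```python
# Sketch: estimate a whole-video MES from per-frame predictions by matching
# observed class proportions to hypothetical per-grade template proportions.
import numpy as np

def video_mes(frame_predictions, templates):
    counts = np.bincount(frame_predictions, minlength=4).astype(float)
    observed = counts / counts.sum()
    distances = {mes: np.linalg.norm(observed - np.asarray(t))
                 for mes, t in templates.items()}
    return min(distances, key=distances.get)

templates = {                       # hypothetical proportion profiles for MES 0-3
    0: [0.90, 0.08, 0.02, 0.00],
    1: [0.55, 0.35, 0.08, 0.02],
    2: [0.20, 0.30, 0.40, 0.10],
    3: [0.05, 0.15, 0.30, 0.50],
}
frames = np.array([0, 1, 1, 2, 2, 2, 3, 2, 1, 2])   # per-frame predicted MES
print("estimated whole-video MES:", video_mes(frames, templates))
```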


Subjects
Ulcerative Colitis, Artificial Intelligence, Ulcerative Colitis/diagnostic imaging, Colonoscopy, Humans, Severity of Illness Index, Video Recording
7.
Orthod Craniofac Res; 24 Suppl 2: 26-36, 2021 Dec.
Article in English | MEDLINE | ID: mdl-33973362

ABSTRACT

Advancements in technology and data collection have generated immense amounts of information from various sources such as health records, clinical examinations, imaging, and medical devices, as well as experimental and biological data. Proper management and analysis of these data via high-end computing solutions, artificial intelligence, and machine learning approaches can assist in extracting meaningful information that enhances population health and well-being. Furthermore, the extracted knowledge can provide new avenues for modern healthcare delivery via clinical decision support systems. This manuscript presents a narrative review of data science approaches for clinical decision support systems in orthodontics. We describe the fundamental components of data science approaches, including (a) data collection, storage, and management; (b) data processing; (c) in-depth data analysis; and (d) data communication. Then, we introduce a web-based data management platform, the Data Storage for Computation and Integration, for temporomandibular joint and dental clinical decision support systems.


Subjects
Clinical Decision Support Systems, Orthodontics, Artificial Intelligence, Data Science, Machine Learning
8.
BMC Med Inform Decis Mak; 21(1): 364, 2021 Dec 28.
Article in English | MEDLINE | ID: mdl-34963444

ABSTRACT

BACKGROUND: Rapid and irregular ventricular rates (RVR) are an important consequence of atrial fibrillation (AF). Raw accelerometry data in combination with electrocardiogram (ECG) data have the potential to distinguish inappropriate from appropriate tachycardia in AF. This can allow for the development of a just-in-time intervention for clinical treatments of AF events. The objective of this study is to develop a machine learning algorithm that can distinguish episodes of AF with RVR that are associated with low levels of activity. METHODS: This study involves 45 patients with persistent or paroxysmal AF. The ECG and accelerometer data were recorded continuously for up to 3 weeks. The prediction of AF episodes with RVR and low activity was achieved using a deterministic probabilistic finite-state automata (DPFA)-based approach. RVR is defined as a heart rate (HR) greater than 110 beats per minute (BPM), and high activity is defined as activity above the 0.75 quantile of the activity level. The AF events were annotated using the FDA-cleared BeatLogic algorithm. Various time intervals prior to the events were used to determine the longest prediction interval for predicting AF with RVR episodes associated with low levels of activity. RESULTS: Among the 961 annotated AF events, 292 met the criterion for an RVR episode. There were 176 and 116 episodes with low and high activity levels, respectively. Of the 961 AF episodes, 770 (80.1%) were used in the training data set and the remaining 191 intervals were held out for testing. The model was able to predict AF with RVR and low activity up to 4.5 min before the events. The mean prediction performance gradually decreased as the time to events increased. The overall area under the ROC curve (AUC) for the model lies within the range of 0.67-0.78. CONCLUSION: The DPFA algorithm can predict AF with RVR associated with low levels of activity up to 4.5 min before the onset of the event. This would enable the development of just-in-time interventions that could reduce the morbidity and mortality associated with AF and other similar arrhythmias.
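A small sketch of the labeling criteria stated above (HR above 110 bpm for RVR; activity split at its 0.75 quantile) is shown below, assuming a pandas DataFrame of per-episode summaries with hypothetical column names. The DPFA prediction model itself is not shown.

```python
# Sketch: label AF episodes that combine RVR with low activity.
import pandas as pd

episodes = pd.DataFrame({
    "mean_hr_bpm": [95, 128, 132, 140],
    "activity":    [0.10, 0.05, 0.90, 0.20],
})

activity_threshold = episodes["activity"].quantile(0.75)
episodes["rvr"] = episodes["mean_hr_bpm"] > 110
episodes["low_activity"] = episodes["activity"] <= activity_threshold
episodes["target"] = episodes["rvr"] & episodes["low_activity"]  # AF with RVR at low activity
print(episodes)
```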


Subjects
Atrial Fibrillation, Algorithms, Atrial Fibrillation/diagnosis, Electrocardiography, Heart Rate, Heart Ventricles, Humans
9.
Semin Orthod; 27(2): 78-86, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34305383

ABSTRACT

With the exponential growth of computational systems and increased patient data acquisition, dental research faces new challenges in managing a large quantity of information. For this reason, data science approaches are needed for the integrative diagnosis of multifactorial diseases, such as temporomandibular joint (TMJ) osteoarthritis (OA). The data science spectrum includes data capture/acquisition, data processing with optimized web-based storage and management, data analytics involving in-depth statistical analysis, machine learning (ML) approaches, and data communication. Artificial intelligence (AI) plays a crucial role in this process. It consists of developing computational systems that can perform human intelligence tasks, such as disease diagnosis, using many features to support decision-making. Patients' clinical parameters, imaging exams, and molecular data are used as input in cross-validation tasks, and human annotation/diagnosis is used as the gold standard to train computational learning models and automatic disease classifiers. This paper aims to review and describe AI and ML techniques to diagnose TMJ OA and data science approaches for image processing. We used a web-based system for multi-center data communication, algorithm integration, statistics deployment, and processing of the computational machine learning models. We successfully show AI and data science applications that use patient data to improve TMJ OA diagnostic decision-making toward personalized medicine.

10.
Entropy (Basel); 23(4), 2021 Mar 24.
Article in English | MEDLINE | ID: mdl-33804831

ABSTRACT

The spleen is one of the most frequently injured organs in blunt abdominal trauma. Computed tomography (CT) is the imaging modality of choice to assess patients with blunt spleen trauma, which may include lacerations, subcapsular or parenchymal hematomas, active hemorrhage, and vascular injuries. While computer-assisted diagnosis systems exist for other conditions assessed using CT scans, the current method to detect spleen injuries involves the manual review of scans by radiologists, which is a time-consuming and repetitive process. In this study, we propose an automated spleen injury detection method using machine learning. CT scans from patients experiencing traumatic injuries were collected from Michigan Medicine and the Crash Injury Research Engineering Network (CIREN) dataset. Ninety-nine scans of healthy and lacerated spleens were split into disjoint training and test sets, with random forest (RF), naive Bayes, SVM, k-nearest neighbors (k-NN) ensemble, and subspace discriminant ensemble models trained via 5-fold cross validation. Of these models, random forest performed the best, achieving an Area Under the receiver operating characteristic Curve (AUC) of 0.91 and an F1 score of 0.80 on the test set. These results suggest that an automated, quantitative assessment of traumatic spleen injury has the potential to enable faster triage and improve patient outcomes.
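The model-selection step described above can be sketched as follows, using a random forest evaluated with 5-fold cross-validation on AUC and F1; synthetic features stand in for the CT-derived features used in the study.

```python
# Sketch: random forest with 5-fold cross-validation, reporting AUC and F1.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=99, n_features=20, random_state=0)  # stand-in for 99 scans
rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(rf, X, y, cv=5, scoring=["roc_auc", "f1"])
print("AUC %.2f +/- %.2f" % (scores["test_roc_auc"].mean(), scores["test_roc_auc"].std()))
print("F1  %.2f +/- %.2f" % (scores["test_f1"].mean(), scores["test_f1"].std()))
```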

11.
BMC Med Imaging; 20(1): 116, 2020 Oct 15.
Article in English | MEDLINE | ID: mdl-33059612

ABSTRACT

BACKGROUND: This study outlines an image processing algorithm for accurate and consistent lung segmentation in chest radiographs of critically ill adults and children typically obscured by medical equipment. In particular, this work focuses on applications in analysis of acute respiratory distress syndrome, a critical illness with a mortality rate of 40% that affects 200,000 patients in the United States and 3 million globally each year. METHODS: Chest radiographs were obtained from critically ill adults (n = 100), adults diagnosed with acute respiratory distress syndrome (ARDS) (n = 25), and children (n = 100) hospitalized at Michigan Medicine. Physicians annotated the lung field of each radiograph to establish the ground truth. A Total Variation-based Active Contour (TVAC) lung segmentation algorithm was developed and compared to multiple state-of-the-art methods including a deep learning model (U-Net), a random walker algorithm, and an active spline model, using the Sørensen-Dice coefficient to measure segmentation accuracy. RESULTS: The TVAC algorithm accurately segmented lung fields in all patients in the study. For the adult cohort, an average Dice coefficient of 0.86 ± 0.04 (min: 0.76) was reported for TVAC, 0.89 ± 0.12 (min: 0.01) for U-Net, 0.74 ± 0.19 (min: 0.15) for the random walker algorithm, and 0.64 ± 0.17 (min: 0.20) for the active spline model. For the pediatric cohort, a Dice coefficient of 0.85 ± 0.04 (min: 0.75) was reported for TVAC, 0.87 ± 0.09 (min: 0.56) for U-Net, 0.67 ± 0.18 (min: 0.18) for the random walker algorithm, and 0.61 ± 0.18 (min: 0.18) for the active spline model. CONCLUSION: The proposed algorithm demonstrates the most consistent performance of all segmentation methods tested. These results suggest that TVAC can accurately identify lung fields in chest radiographs of critically ill adults and children.
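The evaluation metric used throughout this comparison, the Sørensen-Dice coefficient between a predicted mask and a physician-annotated mask, can be computed as in the short sketch below.

```python
# Sketch: Sorensen-Dice coefficient between two binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

truth = np.zeros((128, 128), dtype=bool); truth[20:100, 10:60] = True
pred = np.zeros_like(truth);              pred[25:100, 10:55] = True
print(f"Dice = {dice(pred, truth):.3f}")
```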


Subjects
Computer-Assisted Radiographic Image Interpretation/methods, Thoracic Radiography/methods, Respiratory Distress Syndrome/diagnostic imaging, Adolescent, Adult, Aged, Algorithms, Child, Preschool Child, Deep Learning, Female, Hospitalization, Humans, Infant, Newborn Infant, Male, Middle Aged, Young Adult
12.
Entropy (Basel); 21(5), 2019 Apr 28.
Article in English | MEDLINE | ID: mdl-33267156

ABSTRACT

Fibromyalgia is a medical condition characterized by widespread muscle pain and tenderness and is often accompanied by fatigue and alteration in sleep, mood, and memory. Poor sleep quality and fatigue, as prominent characteristics of fibromyalgia, have a direct impact on patient behavior and quality of life. As such, the detection of extreme cases of sleep quality and fatigue level is a prerequisite for any intervention that can improve sleep quality and reduce fatigue level for people with fibromyalgia and enhance their daytime functionality. In this study, we propose a new supervised machine learning method called Learning Using Concave and Convex Kernels (LUCCK). This method employs similarity functions whose convexity or concavity can be configured so as to determine a model for each feature separately, and then uses this information to reweight the importance of each feature proportionally during classification. The data used for this study were collected from patients with fibromyalgia and consisted of blood volume pulse (BVP), 3-axis accelerometer, temperature, and electrodermal activity (EDA), recorded by an Empatica E4 wristband over the course of several days, as well as a self-reported survey. Experiments on this dataset demonstrate that the proposed machine learning method outperforms conventional machine learning approaches in detecting extreme cases of poor sleep and fatigue in people with fibromyalgia.
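To make the idea of a per-feature similarity with a configurable shape concrete, the sketch below uses one plausible parameterization in which per-feature scale and exponent parameters control how strongly dissimilarity is penalized; the exact functional form and fitting procedure used in LUCCK may differ.

```python
# Illustrative per-feature similarity in the spirit of LUCCK: each feature j
# contributes (lam_j / (lam_j + (x_j - y_j)^2))**theta_j, and the overall
# similarity is the product over features.
import numpy as np

def similarity(x: np.ndarray, y: np.ndarray,
               lam: np.ndarray, theta: np.ndarray) -> float:
    per_feature = (lam / (lam + (x - y) ** 2)) ** theta
    return float(np.prod(per_feature))

x = np.array([0.8, 36.6, 2.1])       # e.g., wearable-derived features
y = np.array([0.7, 36.9, 2.0])
lam = np.array([1.0, 0.5, 1.0])      # per-feature scale
theta = np.array([1.0, 2.0, 0.5])    # per-feature shape/weight
print(similarity(x, y, lam, theta))
```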

13.
Artif Intell Med; 156: 102947, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39208711

ABSTRACT

The advanced learning paradigm, learning using privileged information (LUPI), leverages information in training that is not present at the time of prediction. In this study, we developed privileged logistic regression (PLR) models under the LUPI paradigm to detect acute respiratory distress syndrome (ARDS), with mechanical ventilation variables or chest x-ray image features employed in the privileged domain and electronic health records in the base domain. In model training, the objective of privileged logistic regression was designed to incorporate data from the privileged domain and encourage knowledge transfer across the privileged and base domains. An asymptotic analysis was also performed, yielding sufficient conditions under which the addition of privileged information increases the rate of convergence in the proposed model. Results for ARDS detection show that PLR models achieve better classification performance than logistic regression models trained solely on the base domain, even when privileged information is partially available. Furthermore, PLR models demonstrate performance on par with or superior to state-of-the-art models under the LUPI paradigm. As the proposed models are effective, easy to interpret, and highly explainable, they are ideal for other clinical applications where privileged information is at least partially available.
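As background on how knowledge transfer under LUPI can be realized, the sketch below uses a generalized-distillation-style approach: a teacher logistic regression sees the privileged features at training time and its soft predictions guide a student trained on base features only. This is a common LUPI construction, not necessarily the exact PLR objective of the paper, and all data here are synthetic.

```python
# LUPI-style knowledge transfer via soft targets ("generalized distillation").
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
X_base = rng.normal(size=(n, 5))                          # e.g., EHR features
X_priv = X_base[:, :2] + 0.1 * rng.normal(size=(n, 2))    # e.g., ventilator/CXR features
y = (X_priv.sum(axis=1) > 0).astype(float)

teacher = LogisticRegression().fit(X_priv, y)
soft = teacher.predict_proba(X_priv)[:, 1]                # soft targets from privileged domain
t = 0.5                                                   # how much to trust the teacher
blended = t * soft + (1 - t) * y

# Student: logistic regression on base features, fit to blended targets by gradient descent.
w, b = np.zeros(X_base.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_base @ w + b)))
    w -= 0.5 * (X_base.T @ (p - blended) / n)
    b -= 0.5 * (p - blended).mean()

pred = 1.0 / (1.0 + np.exp(-(X_base @ w + b))) > 0.5
print("student accuracy on base features:", (pred == y).mean())
```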


Subjects
Respiratory Distress Syndrome, Respiratory Distress Syndrome/diagnostic imaging, Respiratory Distress Syndrome/therapy, Humans, Logistic Models, Artificial Respiration, Machine Learning, Electronic Health Records
14.
Diagnostics (Basel); 14(3), 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38337750

ABSTRACT

The aim of this research is to apply the learning using privileged information paradigm to sepsis prognosis. We used signal processing of electrocardiogram and electronic health record data to construct support vector machines with and without privileged information to predict an increase in a given patient's quick Sequential Organ Failure Assessment (qSOFA) score, using a retrospective dataset. We applied this to both a small, critically ill cohort and a broader cohort of patients in the intensive care unit. Within the smaller cohort, privileged information proved helpful in a signal-informed model, and across both cohorts, electrocardiogram data proved informative for the prediction. Although learning using privileged information did not significantly improve results in this study, it is a paradigm worth studying further in the context of using signal processing for sepsis prognosis.

15.
J Allergy Clin Immunol Glob; 3(3): 100252, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38745865

ABSTRACT

Background: Clinical testing, including food-specific skin and serum IgE level tests, provides limited accuracy to predict food allergy. Confirmatory oral food challenges (OFCs) are often required, but the associated risks, cost, and logistic difficulties comprise a barrier to proper diagnosis. Objective: We sought to utilize advanced machine learning methodologies to integrate clinical variables associated with peanut allergy to create a predictive model for OFCs to improve predictive performance over that of purely statistical methods. Methods: Machine learning was applied to the Learning Early about Peanut Allergy (LEAP) study of 463 peanut OFCs and associated clinical variables. Patient-wise cross-validation was used to create ensemble models that were evaluated on holdout test sets. These models were further evaluated by using 2 additional peanut allergy OFC cohorts: the IMPACT study cohort and a local University of Michigan cohort. Results: In the LEAP data set, the ensemble models achieved a maximum mean area under the curve of 0.997, with a sensitivity and specificity of 0.994 and 1.00, respectively. In the combined validation data sets, the top ensemble model achieved a maximum area under the curve of 0.871, with a sensitivity and specificity of 0.763 and 0.980, respectively. Conclusions: Machine learning models for predicting peanut OFC results have the potential to accurately predict OFC outcomes, potentially minimizing the need for OFCs while increasing confidence in food allergy diagnoses.
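The patient-wise cross-validation and ensembling strategy described above can be sketched as follows, with synthetic tabular data and a generic gradient boosting model standing in for the study's features and tuned models.

```python
# Sketch: patient-wise (grouped) cross-validation with an averaged ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

X, y = make_classification(n_samples=463, n_features=12, random_state=1)
patients = np.arange(463) // 2          # stand-in patient IDs (grouping variable)

fold_models, aucs = [], []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=patients):
    model = GradientBoostingClassifier(random_state=1).fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1]))
    fold_models.append(model)

# The per-fold models form an ensemble whose probabilities can be averaged on holdout data.
ensemble_prob = np.mean([m.predict_proba(X)[:, 1] for m in fold_models], axis=0)
print("mean cross-validated AUC:", np.mean(aucs))
```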

16.
Sci Rep; 14(1): 18155, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39103488

ABSTRACT

The quick Sequential Organ Failure Assessment (qSOFA) system identifies an individual's risk of progressing to poor sepsis-related outcomes using minimal variables. We used Support Vector Machine, Learning Using Concave and Convex Kernels, and Random Forest to predict an increase in qSOFA score using electronic health record (EHR) data, electrocardiograms (ECG), and arterial line signals. We structured physiological signals data in a tensor format and used Canonical Polyadic/Parallel Factors (CP) decomposition for feature reduction. Random Forests trained on ECG data show improved performance after tensor decomposition for predictions in a 6-h time frame (AUROC 0.67 ± 0.06 compared to 0.57 ± 0.08, p = 0.01). Adding arterial line features can also improve performance (AUROC 0.69 ± 0.07, p < 0.01), which benefits further from tensor decomposition (AUROC 0.71 ± 0.07, p = 0.01). Adding EHR data features to a tensor-reduced signal model further improves performance (AUROC 0.77 ± 0.06, p < 0.01). Despite the reduction in performance going from an EHR data-informed model to a tensor-reduced waveform data model, the signals-informed model offers distinct advantages. The first is that predictions can be made on a continuous basis in real time, and the second is that these predictions are not limited by the availability of EHR data. Additionally, structuring the waveform features as a tensor conserves structural and temporal information that would otherwise be lost if the data were presented as flat vectors.
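A hedged sketch of the CP (PARAFAC) feature-reduction step follows, using the tensorly library on a patients x channels x time tensor and then training a random forest on the patient-mode factors; tensor construction, rank selection, and the study's signal preprocessing are omitted, and all data are synthetic.

```python
# Sketch: CP decomposition of a signals tensor, then a classifier on the
# reduced patient-mode loadings.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
tensor = tl.tensor(rng.normal(size=(100, 4, 256)))   # patients x waveform channels x time
y = rng.integers(0, 2, size=100)                     # stand-in qSOFA-increase labels

weights, factors = parafac(tensor, rank=8, n_iter_max=100)
patient_features = factors[0]                        # 100 x 8 patient-mode loadings

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(patient_features, y)
print("training accuracy:", clf.score(patient_features, y))
```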


Subjects
Electrocardiography, Sepsis, Humans, Sepsis/physiopathology, Electrocardiography/methods, Electronic Health Records, Male, Female, Organ Dysfunction Scores, Support Vector Machine, Middle Aged, Aged
17.
PLOS Digit Health; 2(6): e0000281, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37384608

ABSTRACT

Missing data presents a challenge for machine learning applications, specifically when utilizing electronic health records to develop clinical decision support systems. The lack of these values is due in part to the complex nature of clinical data, in which the content is personalized to each patient. Several methods have been developed to handle this issue, such as imputation or complete case analysis, but their limitations restrict the robustness of findings. However, recent studies have explored how using some features as fully available privileged information can increase model performance, including in SVMs. Building on this insight, we propose a computationally efficient kernel SVM-based framework (l2-SVMp+) that leverages partially available privileged information to guide model construction. Our experiments validated the superiority of l2-SVMp+ over common approaches for handling missingness and over previous implementations of SVMp+ in digit recognition, disease classification, and patient readmission prediction tasks. The performance improves as the percentage of available privileged information increases. Our results showcase the capability of l2-SVMp+ to handle incomplete but important features in real-world medical applications, surpassing traditional SVMs that lack privileged information. Additionally, l2-SVMp+ achieves comparable or superior performance compared to models with imputed privileged features.

18.
PLoS One; 18(11): e0295016, 2023.
Article in English | MEDLINE | ID: mdl-38015947

ABSTRACT

BACKGROUND: Timely referral for advanced therapies (i.e., heart transplantation, left ventricular assist device) is critical for ensuring optimal outcomes for heart failure patients. Using electronic health records, our goal was to use data from a single hospitalization to develop an interpretable clinical decision-making system for predicting the need for advanced therapies at the subsequent hospitalization. METHODS: Michigan Medicine heart failure patients from 2013-2021 with a left ventricular ejection fraction ≤ 35% and at least two heart failure hospitalizations within one year were used to train an interpretable machine learning model constructed using fuzzy logic and tropical geometry. Clinical knowledge was used to initialize the model. The performance and robustness of the model were evaluated with the mean and standard deviation of the area under the receiver operating characteristic curve (AUC), the area under the precision-recall curve (AUPRC), and the F1 score of the ensemble. We inferred membership functions from the model for continuous clinical variables, extracted decision rules, and then evaluated their relative importance. RESULTS: The model was trained and validated using data from 557 heart failure hospitalizations from 300 patients, of whom 193 received advanced therapies. The mean (standard deviation) AUC, AUPRC, and F1 scores of the proposed model initialized with clinical knowledge were 0.747 (0.080), 0.642 (0.080), and 0.569 (0.067), respectively, showing superior predictive performance or increased interpretability over other machine learning methods. The model learned critical risk factors predicting the need for advanced therapies in the subsequent hospitalization. Furthermore, our model displayed transparent rule sets composed of these critical concepts to justify the prediction. CONCLUSION: These results demonstrate the ability to successfully predict the need for advanced heart failure therapies by generating transparent and accessible clinical rules, although further research is needed to prospectively validate the risk factors identified by the model.
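The ensemble evaluation reported above (mean and standard deviation of AUC, AUPRC, and F1 across ensemble members) can be computed as in the sketch below, with synthetic predictions standing in for the model outputs and average precision used as the AUPRC estimate.

```python
# Sketch: mean and standard deviation of AUC, AUPRC, and F1 over ensemble members.
import numpy as np
from sklearn.metrics import average_precision_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)               # stand-in hospitalization labels

aucs, auprcs, f1s = [], [], []
for member in range(10):                            # one score vector per ensemble member
    scores = 0.4 * y_true + 0.6 * rng.random(200)   # noisy but informative predictions
    aucs.append(roc_auc_score(y_true, scores))
    auprcs.append(average_precision_score(y_true, scores))
    f1s.append(f1_score(y_true, scores > 0.5))

for name, vals in [("AUC", aucs), ("AUPRC", auprcs), ("F1", f1s)]:
    print(f"{name}: {np.mean(vals):.3f} ({np.std(vals):.3f})")
```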


Subjects
Heart Failure, Left Ventricular Function, Humans, Stroke Volume, Hospitalization, Neural Networks (Computer), Heart Failure/therapy
19.
IEEE J Biomed Health Inform; 27(1): 239-250, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36194714

ABSTRACT

A model's interpretability is essential to many practical applications such as clinical decision support systems. In this article, a novel interpretable machine learning method is presented, which can model the relationship between input variables and responses as human-understandable rules. The method is built by applying tropical geometry to fuzzy inference systems, wherein variable encoding functions and salient rules can be discovered by supervised learning. Experiments using synthetic datasets were conducted to demonstrate the performance and capacity of the proposed algorithm in classification and rule discovery. Furthermore, we present a pilot application in identifying heart failure patients that are eligible for advanced therapies as proof of principle. On this application, the proposed network achieved the highest F1 score. The network is capable of learning rules that can be interpreted and used by clinical providers. In addition, existing fuzzy domain knowledge can be easily transferred into the network and can facilitate model training. In our application, incorporating this existing knowledge improved the F1 score by over 5%. The characteristics of the proposed network make it promising for applications requiring model reliability and justification.
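For orientation, the fragment below shows a conventional fuzzy inference step of the kind the method builds on: membership functions encode variables, a rule's firing strength is the minimum of its antecedent memberships, and rules for the same class are aggregated with a maximum. The membership parameters and rules here are invented for illustration; the paper learns the encoding functions and rules via tropical geometry.

```python
# Illustrative fuzzy inference fragment: Gaussian memberships, min as AND, max as OR.
import numpy as np

def gaussian_membership(x: float, center: float, width: float) -> float:
    return float(np.exp(-((x - center) ** 2) / (2 * width ** 2)))

def rule_strength(memberships):
    return min(memberships)                  # fuzzy AND

# Hypothetical rules for "eligible for advanced therapies"
ef, sbp = 20.0, 95.0                         # ejection fraction (%), systolic BP (mmHg)
low_ef = gaussian_membership(ef, center=15, width=10)
low_sbp = gaussian_membership(sbp, center=90, width=15)

rules = [
    rule_strength([low_ef, low_sbp]),        # IF EF is low AND SBP is low
    rule_strength([low_ef]),                 # IF EF is low
]
print("class score (max over rules):", max(rules))   # fuzzy OR
```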


Subjects
Fuzzy Logic, Heart Failure, Humans, Reproducibility of Results, Algorithms, Machine Learning
20.
J Heart Lung Transplant; 41(12): 1781-1789, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36192320

ABSTRACT

BACKGROUND: Systems-level barriers to heart failure (HF) care limit access to HF advanced therapies (heart transplantation, left ventricular assist devices). There is a need for automated systems that can help clinicians ensure patients with HF are evaluated for HF advanced therapies at the appropriate time to optimize outcomes. METHODS: We performed a retrospective study using the REVIVAL (Registry Evaluation of Vital Information for VADs in Ambulatory Life) and INTERMACS (Interagency Registry for Mechanically Assisted Circulatory Support) registries. We developed a novel machine learning model based on principles of tropical geometry and fuzzy logic that can accommodate clinician knowledge and provide recommendations, accessible to end users, regarding the need for advanced therapy evaluations. RESULTS: The model was trained and validated using data from 4,694 HF patients. When initiated with clinical knowledge from HF and transplant cardiologists, the model achieved an F1 score of 43.8%, recall of 51.1%, and precision of 46.9%. The model achieved performance comparable to other commonly used machine learning models. Importantly, our model was 1 of only 3 models providing transparent and parsimonious clinical rules, significantly outperforming the other 2 models. Eleven clinical rules that can be leveraged in clinical practice were extracted from the model. CONCLUSIONS: A machine learning model capable of accepting clinical knowledge and making accessible recommendations was trained to identify patients with advanced HF. While this model was developed for HF care, the methodology has multiple potential uses in other important clinical applications.


Subjects
Heart Failure, Heart-Assist Devices, Humans, Retrospective Studies, Heart Failure/surgery, Machine Learning, Algorithms