ABSTRACT
BACKGROUND & AIMS: Endoscopic assessment of ulcerative colitis (UC) typically reports only the maximum severity observed. Computer vision methods may better quantify the detail of mucosal injury, which varies among patients. METHODS: Endoscopic video from the UNIFI clinical trial (A Study to Evaluate the Safety and Efficacy of Ustekinumab Induction and Maintenance Therapy in Participants With Moderately to Severely Active Ulcerative Colitis), which compared ustekinumab and placebo for UC, was processed in a computer vision analysis that spatially mapped the Mayo Endoscopic Score (MES) to generate the Cumulative Disease Score (CDS). CDS was compared with the MES for differentiating ustekinumab vs placebo treatment response and for agreement with symptomatic remission at week 44. Statistical power, effect size, and estimated sample sizes for detecting endoscopic differences between treatments were calculated using both the CDS and MES measures. Endoscopic video from a separate phase 2 clinical trial replication cohort was analyzed to validate CDS performance. RESULTS: Among 748 induction and 348 maintenance patients, CDS was lower in ustekinumab vs placebo users at week 8 (141.9 vs 184.3; P < .0001) and week 44 (78.2 vs 151.5; P < .0001). CDS was correlated with the MES (P < .0001) and all clinical components of the partial Mayo score (P < .0001). Stratification by pretreatment CDS revealed that ustekinumab was more effective than placebo (P < .0001), with increasing effect in severe vs mild disease (-85.0 vs -55.4; P < .0001). Compared with the MES, CDS was more sensitive to change, requiring 50% fewer participants to demonstrate endoscopic differences between ustekinumab and placebo (Hedges' g = 0.743 vs 0.460). CDS performance in the JAK-UC replication cohort was similar to that in UNIFI.
CONCLUSIONS: As an automated and quantitative measure of global endoscopic disease severity, the CDS offers artificial intelligence enhancement of traditional MES capability to better evaluate UC in clinical trials and potentially practice.
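The sample-size comparison above follows directly from the reported effect sizes. As an illustrative sketch (not the trial's actual analysis code), the following computes Hedges' g from group summary statistics and a normal-approximation per-arm sample size; the function names and the two-sided α = 0.05 / 80% power defaults are assumptions for the example.

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d between two groups with Hedges' small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # Hedges' correction factor J
    return d * correction

def n_per_group(g, z_alpha=1.959964, z_beta=0.841621):
    """Normal-approximation sample size per arm for a two-sample comparison
    (defaults correspond to two-sided alpha = 0.05 and 80% power)."""
    return math.ceil(2 * ((z_alpha + z_beta) / g) ** 2)
```

With the reported effect sizes, `n_per_group(0.743)` is far smaller than `n_per_group(0.460)`, consistent with the claim that CDS requires substantially fewer participants than the MES.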
Subject(s)
Colitis, Ulcerative , Humans , Artificial Intelligence , Colitis, Ulcerative/diagnosis , Colitis, Ulcerative/drug therapy , Colonoscopy/methods , Computers , Remission Induction , Severity of Illness Index , Ustekinumab/adverse effects
ABSTRACT
Timely and accurate referral of end-stage heart failure patients for advanced therapies, including heart transplantation and mechanical circulatory support, plays an important role in improving patient outcomes and saving costs. However, the decision-making process is complex, nuanced, and time-consuming, requiring cardiologists with specialized expertise and training in heart failure and transplantation. In this study, we propose two logistic tensor regression (LTR)-based models to identify patients with heart failure warranting evaluation for advanced heart failure therapies, using irregularly spaced sequential electronic health records at the population and individual levels. The clinical features were collected at the previous visit and the predictions were made at the very beginning of the subsequent visit. Patient-wise ten-fold cross-validation experiments were performed. Standard LTR achieved an average F1 score of 0.708, AUC of 0.903, and AUPRC of 0.836. Personalized LTR obtained an F1 score of 0.670, an AUC of 0.869, and an AUPRC of 0.839. The two models not only outperformed all other machine learning models to which they were compared but also improved the performance and robustness of those models via weight transfer: the AUPRC scores of the support vector machine, random forest, and naive Bayes models improved by 8.87%, 7.24%, and 11.38%, respectively. The two models can also evaluate the importance of clinical features associated with advanced therapy referral. The five most important medical codes, including chronic kidney disease, hypotension, pulmonary heart disease, mitral regurgitation, and atherosclerotic heart disease, were reviewed and validated against the literature and by heart failure cardiologists. Our proposed models effectively utilize EHRs to identify a potential need for advanced therapies in heart failure patients while explaining the importance of comorbidities and other clinical events.
The information learned during model training could offer further insight into risk factors contributing to the progression of heart failure at both the population and individual levels.
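The F1 and AUC figures reported above can be computed from raw scores and labels without any ML library. A minimal sketch, assuming binary 0/1 labels and the higher-score-means-positive convention (function names are illustrative):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1_score(pred, labels):
    """F1 score for binary 0/1 predictions."""
    tp = sum(1 for p, y in zip(pred, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(pred, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(pred, labels) if p == 0 and y == 1)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```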
Subject(s)
Heart Failure , Machine Learning , Humans , Bayes Theorem , Risk Factors , Heart Failure/diagnosis , Heart Failure/therapy , Comorbidity
ABSTRACT
Monitoring blood pressure, a parameter closely related to cardiovascular activity, can help predict imminent cardiovascular events. In this paper, a novel method is proposed to customize an existing mechanistic model of the cardiovascular system through feature extraction from cardiopulmonary acoustic signals to estimate blood pressure using artificial intelligence. As various factors, such as drug consumption, can alter the biomechanical properties of the cardiovascular system, the proposed method seeks to personalize the mechanistic model using information extracted from vibroacoustic sensors. Simulation results for the proposed approach are evaluated by calculating the error in blood pressure estimates compared to ground truth arterial line measurements, with the results showing promise for this method.
Subject(s)
Artificial Intelligence , Cardiovascular System , Blood Pressure , Blood Pressure Determination , Acoustics
ABSTRACT
Predicting the interactions between drugs and targets plays an important role in new drug discovery and drug repurposing (also known as drug repositioning). There is a need to develop novel and efficient prediction approaches in order to avoid the costly and laborious process of determining drug-target interactions (DTIs) based on experiments alone. These computational prediction approaches should be capable of identifying potential DTIs in a timely manner. Matrix factorization methods have proven to be among the most reliable classes of methods. Here, we first propose a matrix factorization-based method termed 'Coupled Matrix-Matrix Completion' (CMMC). Next, in order to utilize the more comprehensive information provided in different databases and to incorporate multiple types of scores for drug-drug similarities and target-target relationships, we extend CMMC to 'Coupled Tensor-Matrix Completion' (CTMC) by considering drug-drug and target-target similarity/interaction tensors. Results: Evaluation on two benchmark datasets, DrugBank and TTD, shows that CMMC and CTMC outperform the matrix factorization-based methods GRMF, $L_{2,1}$-GRMF, NRLMF, and NRLMF$\beta $ in terms of area under the curve, F1 score, sensitivity, and specificity, in a considerably shorter run time.
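The completion step at the heart of CMMC/CTMC can be illustrated with plain matrix factorization: unknown entries of a drug-target interaction matrix are filled in from a low-rank model fit to the observed entries. This is a generic SGD sketch, not the CMMC/CTMC algorithm itself; the matrix contents, rank, and hyperparameters are invented for the example.

```python
import random

def predict(U, V, i, j):
    """Predicted interaction score for drug i and target j."""
    return sum(uf * vf for uf, vf in zip(U[i], V[j]))

def factorize(R, k=2, steps=6000, lr=0.05, reg=0.01, seed=0):
    """Fit a low-rank model U @ V^T to the observed entries of R
    (None marks an unknown drug-target pair) by stochastic gradient descent."""
    rng = random.Random(seed)
    n_drugs, n_targets = len(R), len(R[0])
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_drugs)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_targets)]
    observed = [(i, j, R[i][j]) for i in range(n_drugs)
                for j in range(n_targets) if R[i][j] is not None]
    for _ in range(steps):
        i, j, r = observed[rng.randrange(len(observed))]
        err = r - predict(U, V, i, j)
        for f in range(k):  # regularized gradient step on both factors
            u, v = U[i][f], V[j][f]
            U[i][f] += lr * (err * v - reg * u)
            V[j][f] += lr * (err * u - reg * v)
    return U, V
```

After fitting, `predict(U, V, i, j)` scores unobserved drug-target pairs, which is the completion step the abstract describes.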
Subject(s)
Computational Biology/methods , Drug Delivery Systems , Algorithms , Drug Development , Drug Interactions , Humans
ABSTRACT
The task of predicting the interactions between drugs and targets plays a key role in the process of drug discovery. There is a need to develop novel and efficient prediction approaches in order to avoid the costly, laborious, and not-always-conclusive experiments otherwise required to determine drug-target interactions (DTIs). These approaches should be capable of identifying potential DTIs in a timely manner. In this article, we describe the data required for the task of DTI prediction, followed by a comprehensive catalog of the machine learning methods and databases that have been proposed and utilized to predict DTIs. The advantages and disadvantages of each set of methods are also briefly discussed. Lastly, the challenges one may face in the prediction of DTIs using machine learning approaches are highlighted, and we conclude by shedding some light on important future research directions.
Subject(s)
Computational Biology/methods , Drug Discovery/methods , Machine Learning , Databases, Factual , Humans
ABSTRACT
BACKGROUND: Postoperative hemodynamic deterioration among cardiac surgical patients can indicate or lead to adverse outcomes. Whereas prediction models for such events using electronic health records or physiologic waveform data have been described previously, their combined value remains incompletely defined. The authors hypothesized that models incorporating electronic health record and processed waveform signal data (electrocardiogram lead II, pulse plethysmography, arterial catheter tracing) would yield improved performance versus either modality alone. METHODS: Intensive care unit data were reviewed after elective adult cardiac surgical procedures at an academic center between 2013 and 2020. Model features included electronic health record features and physiologic waveforms. Tensor decomposition was used for waveform feature reduction. Machine learning-based prediction models included a 2013 to 2017 training set and a 2017 to 2020 temporal holdout test set. The primary outcome was a postoperative deterioration event, defined as a composite of low cardiac index of less than 2.0 l·min-1·m-2, mean arterial pressure of less than 55 mmHg sustained for 120 min or longer, new or escalated inotrope/vasopressor infusion, epinephrine bolus of 1 mg or more, or intensive care unit mortality. Prediction models analyzed data 8 h before events. RESULTS: Among 1,555 cases, 185 (12%) experienced 276 deterioration events, most commonly low cardiac index (7.0% of patients), new inotrope use (1.9%), and sustained hypotension (1.4%). The best performing model on the 2013 to 2017 training set yielded a C-statistic of 0.803 (95% CI, 0.799 to 0.807), although performance was substantially lower in the 2017 to 2020 test set (0.709; 95% CI, 0.705 to 0.712). Test set performance of the combined model was greater than that of corresponding models limited to solely electronic health record features (0.641; 95% CI, 0.637 to 0.646) or waveform features (0.697; 95% CI, 0.693 to 0.701).
CONCLUSIONS: Clinical deterioration prediction models combining electronic health record data and waveform data were superior to either modality alone, and performance of combined models was primarily driven by waveform data. Decreased performance of prediction models during temporal validation may be explained by data set shift, a core challenge of healthcare prediction modeling.
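Tensor decomposition for waveform feature reduction, as used above, can be illustrated with the simplest case: a rank-1 CP approximation of a 3-way array (e.g., cases × channels × time) computed by the higher-order power method. This is a didactic sketch, not the study's pipeline; in practice a higher-rank decomposition computed with a dedicated library would be used.

```python
def rank1_cp(T, iters=50):
    """Best rank-1 CP approximation of a 3-way tensor given as nested lists,
    T[i][j][k] ~ lam * u[i] * v[j] * w[k], via the higher-order power method."""
    I, J, K = len(T), len(T[0]), len(T[0][0])

    def unit(x):
        s = sum(t * t for t in x) ** 0.5
        return [t / s for t in x]

    u, v, w = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(iters):
        # alternately refit each factor while holding the other two fixed
        u = unit([sum(T[i][j][k] * v[j] * w[k] for j in range(J) for k in range(K))
                  for i in range(I)])
        v = unit([sum(T[i][j][k] * u[i] * w[k] for i in range(I) for k in range(K))
                  for j in range(J)])
        w = [sum(T[i][j][k] * u[i] * v[j] for i in range(I) for j in range(J))
             for k in range(K)]
    lam = sum(t * t for t in w) ** 0.5  # scale absorbed into the last factor
    return lam, u, v, unit(w)
```

The factor vectors (here `u`, `v`, `w`) then serve as low-dimensional features summarizing the waveform tensor.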
Subject(s)
Cardiac Surgical Procedures , Hypotension , Humans , Adult , Electronic Health Records , Machine Learning , Epinephrine
ABSTRACT
OBJECTIVE: Although sleep difficulties are common after spinal cord injury (SCI), little is known about how day-to-day fluctuations in sleep quality affect health-related quality of life (HRQOL) among these individuals. We examined the effect of sleep quality on same-day HRQOL using ecological momentary assessment methods over a 7-day period. DESIGN: Repeated-measures study involving 7 days of home monitoring; participants completed HRQOL measures each night and ecological momentary assessment ratings 3 times throughout the day; multilevel models were used to analyze the data. SETTING: Two academic medical centers. PARTICIPANTS: A total of 170 individuals with SCI (N=170). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Daily sleep quality was rated on a scale of 0 (worst) to 10 (best) each morning. Participants completed end-of-day diaries each night that included several HRQOL measures (Sleep Disturbance, Sleep-related Impairment, Fatigue, Cognitive Abilities, Pain Intensity, Pain Interference, Ability to Participate in Social Roles and Activities, Depression, Anxiety) and ecological momentary assessment ratings of HRQOL (pain, fatigue, subjective thinking) 3 times throughout each day. RESULTS: Multilevel models indicated that fluctuations in sleep quality (as determined by end-of-day ratings) were significantly related to next-day ratings of HRQOL; sleep quality was related to other reports of sleep (Sleep Disturbance; Sleep-related Impairment; Fatigue) but not to other aspects of HRQOL. For ecological momentary assessment ratings, nights of poor sleep were related to worse pain, fatigue, and thinking. Generally, sleep quality showed consistent associations with fatigue and thinking across the day, but the association between sleep quality and these ecological momentary assessment ratings weakened over the course of the day. CONCLUSIONS: Findings highlight the important association between sleep and HRQOL for people with SCI.
Future work targeting sleep quality improvement may have positive downstream effects for improving HRQOL in people with SCI.
Subject(s)
Sleep Initiation and Maintenance Disorders , Spinal Cord Injuries , Fatigue/etiology , Humans , Pain/complications , Quality of Life , Sleep Quality , Spinal Cord Injuries/complications
ABSTRACT
BACKGROUND: Both early detection and severity assessment of liver trauma are critical for optimal triage and management of trauma patients. Current trauma protocols utilize computed tomography (CT) assessment of injuries in a subjective and qualitative (vs. quantitative) fashion, shortcomings that could both be addressed by automated computer-aided systems capable of generating real-time, reproducible, and quantitative information. This study outlines an end-to-end pipeline to calculate the percentage of the liver parenchyma disrupted by trauma, an important component of the American Association for the Surgery of Trauma (AAST) liver injury scale, the primary tool for assessing liver trauma severity at CT. METHODS: The framework comprises deep convolutional neural networks that first generate initial masks of both the liver parenchyma (including normal and affected liver) and the regions affected by trauma from three-dimensional contrast-enhanced CT scans. Next, during a post-processing step, human domain knowledge about the location and intensity distribution of liver trauma is integrated into the model to avoid false positive regions. After the liver parenchyma and trauma masks are generated, the corresponding volumes are calculated. Liver parenchymal disruption is then computed as the fraction of the liver parenchyma volume that is disrupted by trauma. RESULTS: The proposed model was trained and validated on an internal dataset from the University of Michigan Health System (UMHS) including 77 CT scans (34 with and 43 without liver parenchymal trauma). The Dice/recall/precision coefficients of the proposed segmentation models are 96.13/96.00/96.35% and 51.21/53.20/56.76% in segmenting liver parenchyma and liver trauma regions, respectively. In volume-based severity analysis, the proposed model yields a linear regression coefficient of 0.95 in estimating the percentage of liver parenchyma disrupted by trauma.
The model shows accurate performance in avoiding false positives for patients without liver parenchymal trauma. These results indicate that the model generalizes to patients with pre-existing liver conditions, including fatty liver and congestive hepatopathy. CONCLUSION: The proposed algorithms accurately segment the liver and the regions affected by trauma. The pipeline demonstrates accurate performance in estimating the percentage of liver parenchyma affected by trauma. Such a system could aid critical care medical personnel by providing a reproducible quantitative assessment of liver trauma as an alternative to the sometimes subjective AAST grading system in current use.
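The Dice, precision, and recall figures reported above are all computed from overlapping binary masks. A minimal sketch over flat 0/1 mask lists (function names illustrative):

```python
def dice(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    denom = sum(pred) + sum(truth)
    return 1.0 if denom == 0 else 2 * inter / denom

def precision_recall(pred, truth):
    """Precision and recall of a predicted mask against ground truth."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

In practice these would be evaluated on flattened 3D voxel masks; the convention that two empty masks score a Dice of 1.0 matters for the trauma-free cases described above.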
Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Liver/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
ABSTRACT
BACKGROUND: Automated segmentation of coronary arteries is a crucial step for computer-aided coronary artery disease (CAD) diagnosis and treatment planning. Correct delineation of the coronary artery is challenging in X-ray coronary angiography (XCA) due to the low signal-to-noise ratio and confounding background structures. METHODS: A novel ensemble framework for coronary artery segmentation in XCA images is proposed, which utilizes deep learning and filter-based features to construct models using the gradient boosting decision tree (GBDT) and deep forest classifiers. The proposed method was trained and tested on 130 XCA images. For each pixel of interest in the XCA images, a 37-dimensional feature vector was constructed based on (1) the statistics of multi-scale filtering responses in the morphological, spatial, and frequency domains; and (2) the feature maps obtained from trained deep neural networks. The performance of these models was compared with those of common deep neural networks on metrics including precision, sensitivity, specificity, F1 score, AUROC (the area under the receiver operating characteristic curve), and IoU (intersection over union). RESULTS: With hybrid under-sampling methods, the best performing GBDT model achieved a mean F1 score of 0.874, AUROC of 0.947, sensitivity of 0.902, and specificity of 0.992; while the best performing deep forest model obtained a mean F1 score of 0.867, AUROC of 0.95, sensitivity of 0.867, and specificity of 0.993. Compared with the evaluated deep neural networks, both models had better or comparable performance for all evaluated metrics with lower standard deviations over the test images. CONCLUSIONS: The proposed feature-based ensemble method outperformed common deep convolutional neural networks in most performance metrics while yielding more consistent results. Such a method can be used to facilitate the assessment of stenosis and improve the quality of care in patients with CAD.
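The per-pixel feature construction described above (statistics of filter responses at multiple scales) can be sketched as local window statistics around each pixel. The radii and feature layout here are illustrative assumptions, not the paper's exact 37-dimensional design:

```python
def window_stats(img, i, j, r):
    """Mean and standard deviation of a (2r+1)-sided window centred on
    pixel (i, j), clipped at the image border."""
    vals = [img[y][x]
            for y in range(max(0, i - r), min(len(img), i + r + 1))
            for x in range(max(0, j - r), min(len(img[0]), j + r + 1))]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var ** 0.5

def pixel_feature_vector(img, i, j, radii=(1, 2, 4)):
    """Per-pixel feature vector: raw intensity plus mean/std at each scale."""
    feats = [img[i][j]]
    for r in radii:
        feats.extend(window_stats(img, i, j, r))
    return feats
```

Each pixel's vector would then be fed to the classifier (GBDT or deep forest in the study), alongside features taken from trained network feature maps.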
Subject(s)
Coronary Angiography/methods , Coronary Disease/diagnostic imaging , Coronary Vessels/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Humans
ABSTRACT
The status of peripheral arteries is known to be a key physiological indicator of the body's response to both acute and chronic medical conditions. In this paper, peripheral artery deformation is tracked by wearable photoplethysmograph (PPG) and piezoelectric (polyvinylidene difluoride, PVDF) sensors under a pressure-varying cuff. A simple mechanical model of the local artery and intervening tissue captures broad features present in the PPG and PVDF signals from multiple swine subjects as cuff pressure varies. These behaviors provide insight into the robustness of cardiovascular property identification by noninvasive wearable sensing, and are found to help refine noninvasive blood pressure measurement and estimation of systemic vascular resistance (SVR) using selected features of sensor amplitude versus applied pressure.
Subject(s)
Photoplethysmography , Wearable Electronic Devices , Animals , Arteries , Hemodynamics , Humans , Photoplethysmography/methods , Swine , Vascular Resistance
ABSTRACT
BACKGROUND: Traumatic brain injury (TBI) is a common condition with potentially severe long-term complications, the prediction of which remains challenging. Machine learning (ML) methods have been used previously to help physicians predict long-term outcomes of TBI so that appropriate treatment plans can be adopted. However, many ML techniques are "black box": it is difficult for humans to understand the decisions made by the model, with post-hoc explanations only identifying isolated relevant factors rather than combinations of factors. Moreover, such models often rely on many variables, some of which might not be available at the time of hospitalization. METHODS: In this study, we apply an interpretable neural network model based on tropical geometry to predict unfavorable outcomes at six months after hospitalization in TBI patients, based on information available at the time of admission. RESULTS: The proposed method is compared to established machine learning methods (XGBoost, random forest, and SVM), achieving comparable performance in terms of area under the receiver operating characteristic curve (AUC): 0.799 for the proposed method vs. 0.810 for the best black box model. Moreover, the proposed method allows for the extraction of simple, human-understandable rules that explain the model's predictions and can be used as general guidelines by clinicians to inform treatment decisions. CONCLUSIONS: The classification results for the proposed model are comparable with those of traditional ML methods. However, our model is interpretable, and it allows the extraction of intelligible rules. These rules can be used to determine relevant factors in assessing TBI outcomes and can be used in situations when not all necessary factors are known to inform the full model's decision.
Subject(s)
Brain Injuries, Traumatic , Neural Networks, Computer , Brain Injuries, Traumatic/diagnosis , Brain Injuries, Traumatic/therapy , Humans , Machine Learning , Prognosis , ROC Curve
ABSTRACT
Symptoms in atrial fibrillation are generally assumed to correspond to heart rhythm; however, patient affect (the experience of feelings, emotion, or mood) is known to frequently modulate how patients report symptoms, but this has not been studied in atrial fibrillation. In this study, we investigated the relationship between affect, symptoms, and heart rhythm in patients with paroxysmal or persistent atrial fibrillation. We found that the presence of negative affect portended reporting of more severe symptoms to the same or a greater extent than heart rhythm did.
Subject(s)
Affective Symptoms , Atrial Fibrillation , Cost of Illness , Electrocardiography, Ambulatory/methods , Quality of Life , Symptom Assessment , Affect/physiology , Affective Symptoms/diagnosis , Affective Symptoms/physiopathology , Aged , Atrial Fibrillation/physiopathology , Atrial Fibrillation/psychology , Chest Pain/etiology , Chest Pain/psychology , Correlation of Data , Dyspnea/etiology , Dyspnea/psychology , Emotions/physiology , Female , Health Behavior , Humans , Male , Symptom Assessment/methods , Symptom Assessment/statistics & numerical data
ABSTRACT
BACKGROUND AND AIMS: Endoscopy is essential for disease assessment in ulcerative colitis (UC), but subjectivity threatens accuracy and precision. We aimed to pilot a fully automated video analysis system for grading endoscopic disease in UC. METHODS: A developmental set of high-resolution UC endoscopic videos was assigned Mayo endoscopic scores (MESs) by 2 experienced reviewers. Video still-image stacks were annotated for image quality (informativeness) and MES. Models to predict still-image informativeness and disease severity were trained using convolutional neural networks. A template-matching grid search was used to estimate the whole-video MESs provided by human reviewers from the predicted still-image MES proportions. The automated whole-video MES workflow was tested using unaltered endoscopic videos from a multicenter UC clinical trial. RESULTS: The developmental high-resolution and multicenter clinical trial test sets contained 51 and 264 videos, respectively. The still-image informativeness classifier had excellent performance, with a sensitivity of 0.902 and specificity of 0.870. In high-resolution videos, fully automated methods correctly predicted MESs in 78% (41 of 50, κ = 0.84) of videos. In external clinical trial videos, reviewers agreed on MESs in 82.8% (140 of 169) of videos (κ = 0.78). Automated and central reviewer scoring agreed in 57.1% of videos (κ = 0.59), improving to 69.5% (107 of 169) when accounting for reviewer disagreement. Automated MES grading of clinical trial videos (often low resolution) correctly distinguished remission (MES 0,1) versus active disease (MES 2,3) in 83.7% (221 of 264) of videos. CONCLUSIONS: These early results support the potential for artificial intelligence to provide endoscopic disease grading in UC that approximates the scoring of experienced reviewers.
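The κ values above measure chance-corrected agreement between raters (or between automated and human scores). A minimal Cohen's kappa sketch for two label lists of equal length:

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement (Cohen's kappa) between two raters'
    categorical label lists of equal length."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(1 for x, y in zip(a, b) if x == y) / n       # observed agreement
    p_exp = sum((a.count(l) / n) * (b.count(l) / n)          # agreement expected
                for l in labels)                             # by chance alone
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)
```

Here `a` and `b` could be, for example, per-video MES labels from the automated system and a central reviewer.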
Subject(s)
Colitis, Ulcerative , Artificial Intelligence , Colitis, Ulcerative/diagnostic imaging , Colonoscopy , Humans , Severity of Illness Index , Video Recording
ABSTRACT
Cationic amphiphilic polymers have been a platform to create new antimicrobial materials that act by disrupting bacterial cell membranes. While activity characterization and chemical optimization have been done in numerous studies, there remains a gap in our knowledge of the antimicrobial mechanisms of these polymers, which is needed to connect their chemical structures and biological activities. To that end, we used a single giant unilamellar vesicle (GUV) method to identify the membrane-disrupting mechanism of methacrylate random copolymers. The copolymers consist of random sequences of aminoethyl methacrylate and methyl (MMA) or butyl (BMA) methacrylate, with low molecular weights of 1600-2100 g·mol-1. GUVs consisting of an 8:2 mixture of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphoethanolamine (POPE) and 1-palmitoyl-2-oleoyl-sn-glycero-3-phospho-(1'-rac-glycerol), sodium salt (POPG), and GUVs of only 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) were prepared to mimic bacterial (Escherichia coli) and mammalian membranes, respectively. The disruption of the bacterial and mammalian cell membrane-mimetic lipid bilayers in GUVs reflected the antimicrobial and hemolytic activities of the copolymers, suggesting that the copolymers act by disrupting cell membranes. The copolymer with BMA formed pores in the lipid bilayer, while that with MMA caused GUVs to burst. Therefore, we propose that the mechanism is inherent to the chemical identity or properties of the hydrophobic groups. The copolymer with MMA showed characteristic sigmoidal time courses of GUV bursting. We propose a new kinetic model with a positive feedback loop in the insertion of the polymer chains into the lipid bilayer. The novel finding of alkyl-dependent membrane-disrupting mechanisms provides new insight into the role of hydrophobic groups in optimization strategies for antimicrobial activity and selectivity.
Subject(s)
Anti-Infective Agents , Phosphatidylcholines , Animals , Lipid Bilayers , Methacrylates , Polymers
ABSTRACT
Advancements in technology and data collection have generated immense amounts of information from various sources, such as health records, clinical examinations, imaging, medical devices, and experimental and biological data. Proper management and analysis of these data via high-end computing solutions, artificial intelligence, and machine learning approaches can assist in extracting meaningful information that enhances population health and well-being. Furthermore, the extracted knowledge can provide new avenues for modern healthcare delivery via clinical decision support systems. This manuscript presents a narrative review of data science approaches for clinical decision support systems in orthodontics. We describe the fundamental components of data science approaches, including (a) data collection, storage, and management; (b) data processing; (c) in-depth data analysis; and (d) data communication. Then, we introduce a web-based data management platform, the Data Storage for Computation and Integration, for temporomandibular joint and dental clinical decision support systems.
Subject(s)
Decision Support Systems, Clinical , Orthodontics , Artificial Intelligence , Data Science , Machine Learning
ABSTRACT
BACKGROUND: Rapid and irregular ventricular rates (RVR) are an important consequence of atrial fibrillation (AF). Raw accelerometry data in combination with electrocardiogram (ECG) data have the potential to distinguish inappropriate from appropriate tachycardia in AF. This could allow for the development of just-in-time interventions for clinical treatment of AF events. The objective of this study is to develop a machine learning algorithm that distinguishes episodes of AF with RVR that are associated with low levels of activity. METHODS: This study involved 45 patients with persistent or paroxysmal AF. The ECG and accelerometer data were recorded continuously for up to 3 weeks. The prediction of AF episodes with RVR and low activity was achieved using a deterministic probabilistic finite-state automata (DPFA)-based approach. RVR is defined as a heart rate (HR) greater than 110 beats per minute (BPM), and high activity is defined as activity greater than the 0.75 quantile of the activity level. The AF events were annotated using the FDA-cleared BeatLogic algorithm. Various time intervals prior to the events were used to determine the longest prediction interval for predicting AF with RVR episodes associated with low levels of activity. RESULTS: Among the 961 annotated AF events, 292 met the criterion for an RVR episode. Of these, 176 episodes had low activity levels and 116 had high activity levels. Out of the 961 AF episodes, 770 (80.1%) were used in the training data set and the remaining 191 were held out for testing. The model was able to predict AF with RVR and low activity up to 4.5 min before the events. The mean prediction performance gradually decreased as the time to the event increased. The overall area under the ROC curve (AUC) for the model lies within the range of 0.67-0.78. CONCLUSION: The DPFA algorithm can predict AF with RVR associated with low levels of activity up to 4.5 min before the onset of the event.
This would enable the development of just-in-time interventions that could reduce the morbidity and mortality associated with AF and other similar arrhythmias.
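The episode definition above (HR > 110 BPM with activity below the 0.75 quantile) can be expressed directly as a labeling rule. This sketch uses the sample 0.75 quantile of the whole recording as the activity cutoff, which is an assumption about how the threshold would be computed:

```python
import statistics

def flag_rvr_low_activity(hr, activity, hr_thresh=110):
    """Flag epochs whose heart rate (BPM) exceeds hr_thresh while activity
    is at or below the recording's 0.75 quantile, i.e., AF with RVR that is
    not explained by physical exertion."""
    cutoff = statistics.quantiles(activity, n=4)[2]  # 0.75 quantile (Q3)
    return [h > hr_thresh and a <= cutoff for h, a in zip(hr, activity)]
```

The flagged epochs would form the positive labels that the DPFA-based predictor is trained to anticipate.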
Subject(s)
Atrial Fibrillation , Algorithms , Atrial Fibrillation/diagnosis , Electrocardiography , Heart Rate , Heart Ventricles , Humans
ABSTRACT
This study investigated the use of a wearable ring made of polyvinylidene fluoride film to identify a low cardiac index (≤2 L/min). The waveform generated by the ring contains patterns that may be indicative of low blood pressure and/or high vascular resistance, both of which are markers of a low cardiac index. In particular, the waveform contains reflection waves whose timing and amplitude are correlated with pulse travel time and vascular resistance, respectively. Hence, the pattern of the waveform is expected to vary in response to changes in blood pressure and vascular resistance. By analyzing the morphology of the waveform, our aim was to create a tool to identify patients with low cardiac index. This was done using a convolutional neural network which was trained on data from animal models. The model was then tested on waveforms that were collected from patients undergoing pulmonary artery catheterization. The results indicate high accuracy in classifying patients with a low cardiac index, achieving an area under the receiver operating characteristic and precision-recall curves of 0.88 and 0.71, respectively.
ABSTRACT
With the exponential growth of computational systems and increased patient data acquisition, dental research faces new challenges in managing large quantities of information. For this reason, data science approaches are needed for the integrative diagnosis of multifactorial diseases, such as temporomandibular joint (TMJ) osteoarthritis (OA). The data science spectrum includes data capture/acquisition, data processing with optimized web-based storage and management, data analytics involving in-depth statistical analysis, machine learning (ML) approaches, and data communication. Artificial intelligence (AI) plays a crucial role in this process. It consists of developing computational systems that can perform human intelligence tasks, such as disease diagnosis, using many features to support decision-making. Patients' clinical parameters, imaging exams, and molecular data are used as the input in cross-validation tasks, and human annotation/diagnosis is used as the gold standard to train computational learning models and automatic disease classifiers. This paper aims to review and describe AI and ML techniques for diagnosing TMJ OA and data science approaches for image processing. We used a web-based system for multi-center data communication, algorithm integration, statistics deployment, and running the computational machine learning models. We successfully demonstrate AI and data science applications using patients' data to improve TMJ OA diagnostic decision-making toward personalized medicine.
ABSTRACT
The spleen is one of the most frequently injured organs in blunt abdominal trauma. Computed tomography (CT) is the imaging modality of choice to assess patients with blunt spleen trauma, which may include lacerations, subcapsular or parenchymal hematomas, active hemorrhage, and vascular injuries. While computer-assisted diagnosis systems exist for other conditions assessed using CT scans, the current method to detect spleen injuries involves the manual review of scans by radiologists, which is a time-consuming and repetitive process. In this study, we propose an automated spleen injury detection method using machine learning. CT scans from patients experiencing traumatic injuries were collected from Michigan Medicine and the Crash Injury Research Engineering Network (CIREN) dataset. Ninety-nine scans of healthy and lacerated spleens were split into disjoint training and test sets, with random forest (RF), naive Bayes, support vector machine (SVM), k-nearest neighbors (k-NN) ensemble, and subspace discriminant ensemble models trained via 5-fold cross-validation. Of these models, the random forest performed best, achieving an area under the receiver operating characteristic curve (AUC) of 0.91 and an F1 score of 0.80 on the test set. These results suggest that an automated, quantitative assessment of traumatic spleen injury has the potential to enable faster triage and improve patient outcomes.
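The evaluation scheme above rests on two standard pieces: disjoint k-fold splits and the F1 score. A minimal sketch of both, assuming a simple deterministic round-robin fold assignment (the study's actual splitting procedure and models are not shown here):

```python
def kfold_indices(n_samples, k=5):
    """Yield (train_idx, test_idx) pairs over k disjoint folds."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels (1 = positive)."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# With 99 scans (as in the study), every sample lands in exactly one test fold.
splits = list(kfold_indices(99))
assert sorted(i for _, test in splits for i in test) == list(range(99))
```

Because each scan appears in exactly one test fold, every model is scored only on data it never trained on, which is what makes the cross-validated AUC and F1 estimates honest.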
ABSTRACT
BACKGROUND: Heart failure with reduced ejection fraction (HFrEF) is a condition imposing a significant health care burden. Given its syndromic nature and often insidious onset, the diagnosis may not be made until clinical manifestations prompt further evaluation. Detecting HFrEF in precursor stages could allow for early initiation of treatments to modify disease progression. Granular data collected during the perioperative period may represent an underutilized method for improving the diagnosis of HFrEF. We hypothesized that patients ultimately diagnosed with HFrEF following surgery can be identified via machine learning approaches using pre- and intraoperative data. METHODS: Perioperative data were reviewed from adult patients undergoing general anesthesia for major surgical procedures at an academic quaternary care center between 2010 and 2016. Patients with known HFrEF, heart failure with preserved ejection fraction, preoperative critical illness, or undergoing cardiac, cardiology, or electrophysiologic procedures were excluded. Patients were classified as healthy controls or undiagnosed HFrEF. Undiagnosed HFrEF was defined as lacking a HFrEF diagnosis preoperatively but establishing a diagnosis within 730 days postoperatively. Undiagnosed HFrEF patients were adjudicated by expert clinician review, excluding cases for which HFrEF was secondary to a perioperative triggering event or to any event not associated with the natural disease progression of HFrEF. Machine learning models, including L1-regularized logistic regression, random forest, and extreme gradient boosting, were developed to detect undiagnosed HFrEF using perioperative data comprising 628 preoperative and 1195 intraoperative features. Training/validation and test datasets were used with parameter tuning. Test set model performance was evaluated using area under the receiver operating characteristic curve (AUROC), positive predictive value, and other standard metrics.
RESULTS: Among 67,697 cases analyzed, 279 (0.41%) patients had undiagnosed HFrEF. The AUROC for the logistic regression model was 0.869 (95% confidence interval, 0.829-0.911), 0.872 (0.836-0.909) for the random forest model, and 0.873 (0.833-0.913) for the extreme gradient boosting model. The corresponding positive predictive values were 1.69% (1.06%-2.32%), 1.42% (0.85%-1.98%), and 1.78% (1.15%-2.40%), respectively. CONCLUSIONS: Machine-learning models leveraging perioperative data can detect undiagnosed HFrEF with good performance. However, the low prevalence of the disease results in a low positive predictive value, and for clinically meaningful sensitivity thresholds to be actionable, confirmatory testing with high specificity (eg, echocardiography or cardiac biomarkers) would be required following model detection. Future studies are necessary to externally validate algorithm performance at additional centers and explore the feasibility of embedding algorithms into the perioperative electronic health record for clinician use in real time.
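The gap between good AUROC and low positive predictive value is a direct consequence of Bayes' rule at a prevalence of 0.41% (279/67,697): even a fairly specific model flags mostly false positives. A worked sketch, with an assumed illustrative operating point (the sensitivity and specificity below are not the study's reported values):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    tp = sensitivity * prevalence              # true-positive rate in population
    fp = (1 - specificity) * (1 - prevalence)  # false-positive rate in population
    return tp / (tp + fp)

prev = 279 / 67697                        # ~0.41% prevalence, as in the study
print(round(ppv(0.80, 0.80, prev), 4))   # → 0.0163, i.e. ~1.6% PPV
```

This is why the authors note that confirmatory testing with high specificity, such as echocardiography or cardiac biomarkers, would be required before a model flag becomes clinically actionable.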