ABSTRACT
Sleep apnea (SA) is a prevalent sleep disorder with multifaceted etiologies that can have severe consequences for patients. Diagnosing SA traditionally relies on the in-laboratory polysomnogram (PSG), which records various human physiological activities overnight, and diagnosis requires manual scoring by qualified physicians. Traditional machine learning methods for SA detection depend on hand-crafted features, making feature selection pivotal for downstream classification. In recent years, deep learning has gained popularity in SA detection due to its capability for automatic feature extraction and superior classification accuracy. This study introduces a Deep Attention Network with Multi-Temporal Information Fusion (DAN-MTIF) for SA detection using single-lead electrocardiogram (ECG) signals. The framework uses three 1D convolutional neural network (CNN) blocks to extract features from R-R intervals and R-peak amplitudes over segments of varying lengths. Recognizing that features derived from different temporal scales contribute unequally to classification, we integrate a multi-head attention module with a self-attention mechanism to learn a weight for each feature vector. Comprehensive experiments are conducted comparing two paradigms: classical machine learning approaches and deep learning approaches. The results demonstrate that (1) compared with benchmark methods, the proposed DAN-MTIF exhibits excellent performance, with 0.9106 accuracy, 0.9396 precision, 0.8470 sensitivity, 0.9588 specificity, and 0.8909 F1 score at the per-segment level; (2) DAN-MTIF extracts more discriminative features from ECG segments at multiple time scales than from a single time scale, ensuring better SA detection performance; and (3) the deep learning methods outperform the classical machine learning algorithms overall, highlighting the superiority of deep learning approaches for SA detection.
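To make the described architecture concrete, here is a minimal PyTorch-style sketch of the multi-scale CNN plus multi-head self-attention idea. All layer sizes, segment lengths, channel choices, and names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One 1D-CNN feature extractor for ECG-derived sequences at a given time scale."""
    def __init__(self, in_channels=2, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool variable-length segments to one vector
        )
    def forward(self, x):              # x: (batch, 2, length)
        return self.net(x).squeeze(-1) # (batch, feat_dim)

class MultiScaleAttentionNet(nn.Module):
    """Fuses features from short/medium/long segments with multi-head self-attention."""
    def __init__(self, feat_dim=64, heads=4, n_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList([ConvBlock(feat_dim=feat_dim) for _ in range(3)])
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)
    def forward(self, segments):       # list of 3 tensors, one per time scale
        feats = torch.stack([b(s) for b, s in zip(self.blocks, segments)], dim=1)
        fused, _ = self.attn(feats, feats, feats)   # learn per-scale weights
        return self.head(fused.mean(dim=1))

# Toy usage: batch of 8 samples; channels = (R-R interval, R-peak amplitude)
model = MultiScaleAttentionNet()
segs = [torch.randn(8, 2, n) for n in (60, 180, 300)]  # three illustrative lengths
logits = model(segs)
```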
ABSTRACT
Electroencephalography (EEG) or magnetoencephalography (MEG) source imaging aims to estimate the underlying activated brain sources that explain the observed EEG/MEG recordings. Solving the inverse problem of EEG/MEG Source Imaging (ESI) is challenging due to its ill-posed nature. To achieve a unique solution, sophisticated regularization constraints must be applied to restrict the solution space. Traditionally, regularization terms are designed from assumptions about the spatiotemporal structure of the underlying source dynamics. In this paper, we propose a novel paradigm for ESI via an explainable deep learning framework, termed XDL-ESI, which connects an iterative optimization algorithm with a deep learning architecture by unfolding the iterative updates into neural network modules. The proposed framework has three advantages: (1) it establishes a data-driven approach to model the source solution structure instead of using hand-crafted regularization terms; (2) it improves the robustness of source solutions by introducing a topological loss that leverages geometric spatial information, applying different penalties to distinct localization errors; (3) it improves reconstruction efficiency and interpretability, inheriting the interpretability of iterative optimization algorithms and the function-approximation power of deep learning. XDL-ESI provides an efficient, accurate, and interpretable paradigm for the ESI inverse problem, with satisfactory performance on both simulated and real clinical data. In particular, the approach is further validated using simultaneous EEG and intracranial EEG (iEEG) recordings.
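As a rough picture of algorithm unrolling in this setting, here is a minimal sketch that unfolds a proximal-gradient iteration for the linear forward model y = Ls, with the hand-crafted proximal operator replaced by a small learnable module. The lead field, sizes, and module names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class UnrolledESI(nn.Module):
    """Unrolls K proximal-gradient iterations for y = L s + noise into a network.

    The hand-crafted proximal operator (regularizer) is replaced by a small
    learnable module, following the algorithm-unrolling idea.
    """
    def __init__(self, leadfield, n_iters=10):
        super().__init__()
        self.register_buffer("L", leadfield)       # (n_sensors, n_sources), fixed physics
        self.step = nn.Parameter(torch.tensor(0.1))
        n_src = leadfield.shape[1]
        self.prox = nn.ModuleList(                 # one learned "denoiser" per iteration
            [nn.Sequential(nn.Linear(n_src, n_src), nn.ReLU(), nn.Linear(n_src, n_src))
             for _ in range(n_iters)]
        )
    def forward(self, y):                          # y: (batch, n_sensors)
        s = torch.zeros(y.shape[0], self.L.shape[1], device=y.device)
        for prox_k in self.prox:
            grad = (s @ self.L.T - y) @ self.L     # gradient of 0.5*||L s - y||^2
            s = prox_k(s - self.step * grad)       # gradient step + learned prox
        return s

# Toy usage with a random lead field (real ones come from a head model)
L = torch.randn(64, 500)                           # 64 sensors, 500 cortical sources
model = UnrolledESI(L)
sources = model(torch.randn(8, 64))
```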
Subjects
Deep Learning, Electroencephalography, Magnetoencephalography, Humans, Electroencephalography/methods, Magnetoencephalography/methods, Magnetoencephalography/standards, Brain/physiology, Brain/diagnostic imaging, Electrocorticography/methods, Electrocorticography/standards, Algorithms

ABSTRACT
The process of reconstructing underlying cortical and subcortical electrical activities from electroencephalography (EEG) or magnetoencephalography (MEG) recordings is called Electrophysiological Source Imaging (ESI). Given the complementarity of EEG and MEG in measuring radial and tangential cortical sources, combined EEG/MEG is considered beneficial for improving the reconstruction performance of ESI algorithms. Traditional algorithms mainly emphasize incorporating predesigned neurophysiological priors to solve the ESI problem. Deep learning frameworks instead aim to learn the mapping from scalp EEG/MEG measurements to the underlying brain source activities directly, in a data-driven manner, and have demonstrated superior performance compared to traditional methods. However, most existing deep learning approaches for ESI operate on a single modality, EEG or MEG, so the complementarity of the two modalities has not been fully utilized; how to fuse EEG and MEG in a principled manner under the deep learning paradigm remains a challenging question. This study develops a Multi-Modal Deep Fusion (MMDF) framework using attention neural networks (ANN) to fully leverage the complementary information between EEG and MEG for solving the ESI inverse problem, termed MMDF-ANN. Specifically, the proposed brain source imaging approach consists of four phases: feature extraction, weight generation, deep feature fusion, and source mapping. Experimental results on both synthetic and real datasets demonstrate that fusing EEG and MEG significantly improves source localization accuracy compared to using a single modality. Compared to benchmark algorithms, MMDF-ANN shows good stability when reconstructing sources with extended activation areas and when the EEG/MEG measurements have a low signal-to-noise ratio.
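The following minimal sketch illustrates the four phases named in the abstract (feature extraction, weight generation, deep feature fusion, source mapping) with a simple modality-weighting scheme. Channel counts, layer sizes, and the weighting mechanism are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class MMDFFusion(nn.Module):
    """Attention-weighted fusion of EEG and MEG features, then source mapping."""
    def __init__(self, n_eeg=64, n_meg=248, feat=128, n_sources=500):
        super().__init__()
        self.enc_eeg = nn.Sequential(nn.Linear(n_eeg, feat), nn.ReLU())
        self.enc_meg = nn.Sequential(nn.Linear(n_meg, feat), nn.ReLU())
        self.weight_gen = nn.Linear(2 * feat, 2)        # one scalar weight per modality
        self.mapper = nn.Sequential(nn.Linear(feat, 256), nn.ReLU(),
                                    nn.Linear(256, n_sources))
    def forward(self, eeg, meg):
        fe, fm = self.enc_eeg(eeg), self.enc_meg(meg)        # feature extraction
        w = torch.softmax(self.weight_gen(torch.cat([fe, fm], -1)), dim=-1)  # weights
        fused = w[..., :1] * fe + w[..., 1:] * fm            # deep feature fusion
        return self.mapper(fused)                            # source mapping

model = MMDFFusion()
src = model(torch.randn(8, 64), torch.randn(8, 248))         # batch of 8 samples
```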
Subjects
Algorithms, Deep Learning, Electroencephalography, Magnetoencephalography, Neural Networks, Computer, Magnetoencephalography/methods, Humans, Electroencephalography/methods, Adult, Male, Multimodal Imaging/methods, Female, Brain/physiology, Brain/diagnostic imaging, Young Adult

ABSTRACT
To avoid unexpected failures of units in manufacturing systems, failure mode recognition and prognostics are critically important in prognostics and health management (PHM). Most existing methods either ignore the effects of different failure modes on remaining useful lifetime (RUL) prediction or treat failure mode recognition and RUL prediction as two independent tasks, failing to exploit failure mode information for accurate RUL prediction. In fact, RUL depends strongly on the failure mode, because sensor signals under different failure modes usually exhibit different degradation patterns. To address this issue, this paper proposes a joint learning model for failure mode recognition and RUL prediction of degradation processes based on multiple sensor signals. The model first extracts features informed by the degradation mechanism, ensuring good interpretability for degradation modeling, and then feeds the extracted features into a deep neural network. By conducting failure mode recognition and RUL prediction as a collaborative task, the model fully characterizes the complex relationships among the extracted features, RUL, and failure modes, and outputs the recognized failure mode and the predicted RUL of a unit simultaneously. A case study on the degradation of aircraft gas turbine engines evaluates the model's performance.

Note to Practitioners: This paper develops a joint learning method for failure mode recognition and RUL prediction of operating units. Specifically, it addresses a challenging practical issue: how to conduct failure mode recognition and RUL prediction as a joint task based on interpretable degradation features extracted from multiple sensor signals. Implementing the method in practice involves four steps. First, collect multiple sensor signals, failure times, and failure modes of historical units. Second, construct the joint learning model based on features extracted from the sensor signals by considering the degradation mechanism. Third, estimate the model parameters using the historical data. Fourth, recognize the failure mode and predict the RUL of an in-service unit. Because the proposed method is a data-driven neural network with a flexible model structure that accommodates complex data relationships, it is expected to be applicable to many practical situations, especially manufacturing systems with complex structures and unknown failure thresholds.
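One common way to realize such a joint task is a shared encoder with two heads trained under a combined loss, sketched minimally below. The feature count, hidden sizes, and loss weight are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class JointFailureRUL(nn.Module):
    """Shared encoder with two heads: failure-mode classification and RUL regression."""
    def __init__(self, n_features=14, n_modes=2, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.mode_head = nn.Linear(hidden, n_modes)   # failure-mode logits
        self.rul_head = nn.Linear(hidden, 1)          # predicted RUL
    def forward(self, x):
        h = self.encoder(x)
        return self.mode_head(h), self.rul_head(h).squeeze(-1)

# A joint loss couples the two tasks so mode information can sharpen RUL prediction.
model = JointFailureRUL()
x = torch.randn(32, 14)                      # 32 units, 14 extracted degradation features
mode_true = torch.randint(0, 2, (32,))
rul_true = torch.rand(32) * 300
mode_logits, rul_pred = model(x)
loss = nn.functional.cross_entropy(mode_logits, mode_true) \
     + 0.1 * nn.functional.mse_loss(rul_pred, rul_true)     # 0.1: illustrative weight
loss.backward()
```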
ABSTRACT
Although degradation modeling has been widely applied to fuse multiple sensor signals for monitoring the degradation process and predicting the remaining useful lifetime (RUL) of operating machinery units, three challenging issues remain. First, units in engineering practice usually work under multiple operational conditions, causing the distribution of sensor signals to vary across conditions; characterizing time-varying conditions as a distribution shift problem remains unexplored. Second, most existing methods separate sensor signal fusion and degradation status modeling into two independent steps, ignoring the intrinsic correlation between the two parts. Third, it is unclear how to find an accurate health index (HI) of units using prior knowledge of degradation. To tackle these issues, this article proposes an adaptation-aware interactive learning (AAIL) approach for degradation modeling. First, a condition-invariant HI is developed to handle time-varying operating conditions. Second, an interactive framework based on the fusion and degradation models is constructed, naturally integrating a supervised learner and an unsupervised learner. To estimate the model parameters of AAIL, we propose an interactive training algorithm that shares learned degradation and fusion information during training. A case study on a degradation data set of aircraft engines demonstrates that AAIL outperforms related benchmark methods.
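As a loose intuition for "interactive" training that shares information between a fusion step and a degradation model, the toy numpy sketch below alternates between fusing sensors into an HI, fitting a trend to it, and refitting the fusion weights to that trend. This is only an assumed caricature of the alternating idea, not the AAIL algorithm.

```python
import numpy as np

# Toy alternating scheme: (a) fuse sensors into a health index (HI) with weights w;
# (b) fit a degradation trend to the HI; (c) refit w so the HI matches the trend.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
signals = np.stack([t**2 + 0.05 * rng.standard_normal(200) for _ in range(5)], axis=1)
w = np.ones(5) / 5                          # initial fusion weights

for _ in range(20):
    hi = signals @ w                        # step (a): sensor fusion
    coeffs = np.polyfit(t, hi, deg=2)       # step (b): quadratic degradation trend
    trend = np.polyval(coeffs, t)
    w, *_ = np.linalg.lstsq(signals, trend, rcond=None)  # step (c): refit weights
    w = w / np.abs(w).sum()                 # keep weights on a comparable scale

print("learned fusion weights:", np.round(w, 3))
```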
ABSTRACT
Metabolic engineering uses enzymes as parts to build biosystems for specified tasks. Although a part's working life and failure modes are key engineering performance indicators, this is not yet so in metabolic engineering because it is not known how long enzymes remain functional in vivo or whether cumulative deterioration (wear-out), sudden random failure, or other causes drive replacement. Consequently, enzymes cannot be engineered to extend life and cut the high energy costs of replacement. Guided by catalyst engineering, we adopted catalytic cycles until replacement (CCR) as a metric for enzyme functional life span in vivo. CCR is the number of catalytic cycles that an enzyme mediates in vivo before failure or replacement, i.e., metabolic flux rate/protein turnover rate. We used estimated fluxes and measured protein turnover rates to calculate CCRs for ~100-200 enzymes each from Lactococcus lactis, yeast, and Arabidopsis. CCRs in these organisms had similar ranges (<10³ to >10⁷) but different median values (3-4 × 10⁴ in L. lactis and yeast versus 4 × 10⁵ in Arabidopsis). In all organisms, enzymes whose substrates, products, or mechanisms can attack reactive amino acid residues had significantly lower median CCR values than other enzymes. Taken with the literature on mechanism-based inactivation, the latter finding supports the proposal that (1) random active-site damage by reaction chemistry is an important cause of enzyme failure, and (2) reactive noncatalytic residues in the active-site region are likely contributors to damage susceptibility. Enzyme engineering to raise CCRs and lower replacement costs may thus be both beneficial and feasible.
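The CCR definition reduces to simple arithmetic, shown below with made-up numbers chosen only to land in the reported range; they are not measured values from the study.

```python
# CCR = metabolic flux rate / protein turnover rate, both per enzyme molecule.
flux_per_enzyme = 10.0        # catalytic events per enzyme molecule per second (assumed)
protein_turnover = 2e-5       # replacement events per enzyme molecule per second (assumed)
ccr = flux_per_enzyme / protein_turnover
print(f"CCR = {ccr:.1e} catalytic cycles before replacement")  # 5.0e+05
```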
Subjects
Arabidopsis/enzymology, Biocatalysis, Enzymes/chemistry, Lactococcus lactis/enzymology, Metabolic Engineering, Saccharomyces cerevisiae/enzymology

ABSTRACT
Hospital emergency department (ED) operations are affected when critically ill or injured patients arrive. Such events often trigger specific protocols, referred to as Resuscitation-team Activation (RA), in the ED of Mayo Clinic, Rochester, MN, where this study was conducted. RA events divert resources from other patients in the ED to the care of the critically ill patient, and therefore affect the entire ED system. This paper presents a data-driven, flexible statistical learning model that quantifies the impact of RA on the ED. The model learns the pattern of ED operations from historical patient arrival and departure timestamps and quantifies the impact of an RA event by measuring how far patient departures during the event deviate from the normal departure process. The proposed method significantly outperforms baseline methods based on the average time patients spend in the ED.
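The toy numpy sketch below conveys the general idea of comparing departures during an RA window against a learned baseline; the paper's actual statistical model is more sophisticated, and all counts and window choices here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Historical departure timestamps (hour of day) collected over 30 days:
hist = rng.integers(0, 24, size=5000)
days = 30
baseline = np.bincount(hist, minlength=24) / days   # expected departures per hour

# Departures observed during a 2-hour RA event starting at 14:00 (illustrative counts):
observed = np.array([3, 2])
expected = baseline[14:16]
deviation = expected - observed                     # positive => slowdown vs. normal
print("per-hour departure deficit during RA:", np.round(deviation, 2))
```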
Subjects
Critical Illness/therapy, Emergency Service, Hospital/statistics & numerical data, Hospital Rapid Response Team/statistics & numerical data, Models, Statistical, Resuscitation, Humans, Time Factors

ABSTRACT
Obstructive sleep apnea (OSA) syndrome is a common sleep disorder affecting an increasing number of people worldwide. As an alternative to polysomnography (PSG) for OSA diagnosis, automatic OSA detection methods in current practice mainly concentrate on feature extraction and classifier selection based on collected physiological signals. A common limitation of these methods, however, is that the temporal dependence of the signals is usually ignored, which may discard information critical for OSA diagnosis. In this study, we propose a novel OSA detection approach based on ECG signals that accounts for the temporal dependence within segmented signals. A discriminative hidden Markov model (HMM) and corresponding parameter estimation algorithms are provided, and subject-specific transition probabilities within the model are employed to characterize subject-to-subject differences among potential OSA patients. To validate the approach, 70 recordings from the PhysioNet Apnea-ECG database were used. The method achieved accuracies of 97.1% for per-recording classification and 86.2% for per-segment OSA detection, with satisfactory sensitivity and specificity. Compared with existing methods that ignore the temporal dependence of the signals, the proposed HMM-based approach delivers better detection performance and could be extended to other disease diagnosis applications.
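To illustrate how subject-specific transition probabilities enter an HMM, here is a minimal numpy sketch of the scaled forward recursion for a two-state (normal/apnea) model. The emission log-likelihoods, transition matrix, and priors are all illustrative assumptions, and the paper's discriminative HMM and estimation algorithms are not reproduced here.

```python
import numpy as np

def forward_loglik(obs_ll, trans, init):
    """Log-likelihood of an observation sequence under a 2-state HMM.

    obs_ll: (T, 2) per-segment log-likelihoods of (normal, apnea) emissions;
    trans:  (2, 2) subject-specific transition matrix; init: (2,) state priors.
    """
    alpha = init * np.exp(obs_ll[0])
    scale = alpha.sum(); alpha /= scale
    loglik = np.log(scale)
    for t in range(1, len(obs_ll)):
        alpha = (alpha @ trans) * np.exp(obs_ll[t])  # propagate, then weight by emission
        scale = alpha.sum(); alpha /= scale          # rescale to avoid underflow
        loglik += np.log(scale)
    return loglik

# Toy usage: each subject gets their own transition matrix, capturing
# subject-to-subject differences in how apnea episodes persist.
rng = np.random.default_rng(0)
obs_ll = rng.normal(-1.0, 0.3, size=(60, 2))        # 60 one-minute ECG segments
trans_subject = np.array([[0.9, 0.1], [0.3, 0.7]])  # illustrative probabilities
print(forward_loglik(obs_ll, trans_subject, np.array([0.8, 0.2])))
```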