1.
Int J Med Inform ; 186: 105425, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554589

ABSTRACT

OBJECTIVE: For patients in the Intensive Care Unit (ICU), the timing of intubation has a significant association with patient outcomes. However, accurate prediction of the timing of intubation remains an unsolved challenge due to the noisy, sparse, heterogeneous, and unbalanced nature of ICU data. In this study, our objective is to develop a workflow for pre-processing ICU data and a customized deep learning model to predict the need for intubation. METHODS: To improve prediction accuracy, we transform the intubation prediction task into a time series classification task. We carefully design a sequence of data pre-processing steps to handle the multimodal noisy data. First, we discretize the sequential data and address missing data using interpolation. Next, we employ a sampling strategy to address data imbalance and standardize the data to facilitate faster model convergence. Furthermore, we employ feature selection and propose an ensemble model that combines features learned by different deep learning models. RESULTS: The performance is evaluated on the Medical Information Mart for Intensive Care (MIMIC)-III ICU dataset. Our proposed Deep Feature Fusion method achieves an area under the receiver operating characteristic (ROC) curve (AUC) of 0.8953, surpassing the performance of other deep learning and traditional machine learning models. CONCLUSION: The proposed Deep Feature Fusion method proves to be a viable approach for predicting intubation and outperforms other deep learning and classical machine learning models. The study confirms that high-frequency time-varying indicators, particularly Mean Blood Pressure (MeanBP) and peripheral oxygen saturation (SpO2), are significant risk factors for predicting intubation.
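The pre-processing steps described above (discretizing the irregular time series, interpolating missing values, and standardizing features) can be illustrated with a minimal Python/pandas sketch. The column names (e.g. "charttime", "MeanBP", "SpO2"), the 1-hour bin width, and the interpolation choices below are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of ICU time-series pre-processing: bin, interpolate, standardize.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

def preprocess_stay(df: pd.DataFrame, freq: str = "1h") -> np.ndarray:
    """Discretize one ICU stay into fixed time bins, fill gaps, and standardize."""
    # Assumes a "charttime" timestamp column and example vital-sign columns.
    df = df.set_index("charttime").sort_index()
    # Discretize irregular measurements into fixed-width bins (mean per bin).
    binned = df[["MeanBP", "SpO2"]].resample(freq).mean()
    # Address missing data with linear interpolation, then forward/backward fill.
    binned = binned.interpolate(method="linear").ffill().bfill()
    # Standardize each feature to zero mean and unit variance.
    return StandardScaler().fit_transform(binned.values)
```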


Subject(s)
Deep Learning , Humans , ROC Curve , Critical Care , Intensive Care Units , Machine Learning
2.
J Environ Manage ; 354: 120322, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38350279

ABSTRACT

The effects of different current intensities and voltage levels on nutrient removal performance and microbial community evolution in a Bio-Electrical Anammox (BEA) membrane bioreactor (MBR) were evaluated. Nitrogen removal efficiency increased with current intensity within the range of 64-83 mA, but this improvement became limited as the current was further increased. Phosphorus removal in the BEA MBR was attributed to the release of Fe2+, which was closely associated with the current applied to the electrodes. Heme c concentration, enzyme activities, and specific anammox activity exhibited a decreasing trend with rising voltage, while functional denitrification genes showed a positive correlation with it. The nitrogen removal efficiency of the BEA system first increased and then decreased as the voltage rose from 1.5 V to 3.5 V, peaking at 94.02% ± 1.19% at 2.0 V. Transmission electron microscopy and flow cytometry results indicated that accelerated cell apoptosis/lysis led to an irreversible collapse of the biological nitrogen removal system at 3.5 V. Candidatus Brocadia was the predominant anammox bacterium in the BEA system, whereas the closely related Candidatus Kuenenia and Chloroflexi bacteria were gradually eliminated in the electrolytic environment. The abundance of Proteobacteria-affiliated denitrifiers increased with rising voltage, since the release of organic matter through cell apoptosis/lysis was accelerated at high voltage levels.


Subject(s)
Anaerobic Ammonia Oxidation , Microbiota , Denitrification , Oxidation-Reduction , Bacteria/genetics , Bioreactors/microbiology , Nitrogen
3.
Article in English | MEDLINE | ID: mdl-38393839

ABSTRACT

Few-shot classification aims to adapt classifiers trained on base classes to novel classes given only a few shots. However, the limited amount of training data is often inadequate to represent the intraclass variation of the novel classes. This can result in biased estimation of the feature distribution and, in turn, inaccurate decision boundaries, especially when the support data are outliers. To address this issue, we propose a feature enhancement method called CORrelation-guided feature Enrichment (CORE) that generates improved features for novel classes using weak supervision from the base classes. CORE adopts an autoencoder (AE) architecture but incorporates classification information into its latent space. This design allows CORE to generate more discriminative features while discarding irrelevant content information. After being trained on base classes, CORE's generative ability can be transferred to novel classes that are similar to those in the base classes. By using these generated features, we can reduce the estimation bias of the class distribution, which makes few-shot learning (FSL) less sensitive to the selection of support data. Our method is generic and flexible: it can be used with any feature extractor and classifier, and it can be easily integrated into existing FSL approaches. Experiments with different backbones and classifiers show that the proposed method consistently outperforms existing methods on various widely used benchmarks.
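As a rough illustration of injecting classification information into an autoencoder's latent space, a minimal PyTorch sketch follows. The single-linear-layer encoder/decoder, the classifier head, and the loss weighting are assumptions for illustration, not the paper's actual CORE architecture.

```python
# Hedged sketch: an autoencoder whose latent code also feeds a classifier,
# so reconstruction and class-discriminative structure are trained jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoreLikeAE(nn.Module):
    def __init__(self, feat_dim: int, latent_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, feat_dim)
        # Classifier head pushes label information into the latent space.
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def core_like_loss(x, y, recon, logits, alpha: float = 0.5):
    # Reconstruction preserves content; cross-entropy keeps features discriminative.
    return F.mse_loss(recon, x) + alpha * F.cross_entropy(logits, y)
```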

4.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9562-9567, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35333722

ABSTRACT

ResNet and its variants have achieved remarkable success in various computer vision tasks. Despite their success in letting gradients flow through the building blocks, the information carried by the intermediate layers of the blocks is ignored. To address this issue, in this brief we propose to introduce a regulator module as a memory mechanism that extracts complementary features from the intermediate layers, which are then fed back to the ResNet. In particular, the regulator module is composed of convolutional recurrent neural networks (RNNs) [e.g., convolutional long short-term memories (LSTMs) or convolutional gated recurrent units (GRUs)], which are known to be good at extracting spatio-temporal information. We name the new regulated network the regulated residual network (RegNet). The regulator module can be easily implemented and appended to any ResNet architecture. Experimental results on three image classification datasets demonstrate the promising performance of the proposed architecture compared with the standard ResNet, the squeeze-and-excitation ResNet, and other state-of-the-art architectures.
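To make the regulator idea concrete, here is a hedged PyTorch sketch of a convolutional GRU cell whose hidden state is concatenated with a residual block's input. The channel and kernel sizes, and the assumption that the hidden state shares the block's spatial resolution, are illustrative choices rather than the paper's exact design.

```python
# Hedged sketch: a ConvGRU "regulator" state enriching a basic residual block.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """A small convolutional GRU cell acting as the regulator's memory."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_new = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_new

class RegulatedBlock(nn.Module):
    """A basic residual block whose input is concatenated with the regulator state."""
    def __init__(self, ch: int, hid_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(ch + hid_ch, ch, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn1, self.bn2 = nn.BatchNorm2d(ch), nn.BatchNorm2d(ch)

    def forward(self, x, h):
        out = torch.relu(self.bn1(self.conv1(torch.cat([x, h], dim=1))))
        return torch.relu(x + self.bn2(self.conv2(out)))
```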

5.
Neural Netw ; 155: 360-368, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36115162

ABSTRACT

Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, including image classification. Residual-like networks, such as ResNets, mainly rely on skip connections to avoid vanishing gradients. However, the skip-connection mechanism limits the utilization of intermediate features because of its simple iterative updates. To mitigate this redundancy of residual-like networks, we design Attentive Feature Integration (AFI) modules, which are widely applicable to most residual-like network architectures and lead to new architectures named AFI-Nets. AFI-Nets explicitly model the correlations among different levels of features and selectively transfer features with little overhead. AFI-ResNet-152 obtains a 1.24% relative improvement on the ImageNet dataset while decreasing FLOPs by about 10% and the number of parameters by about 9.2% compared to ResNet-152.
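One plausible reading of "attentive feature integration" is that earlier-level feature maps are pooled, scored, and combined into the current level by softmax-weighted summation. The sketch below follows that reading; the pooling and scoring choices are my assumptions, not the paper's published module.

```python
# Hedged sketch: softmax-weighted fusion of same-shaped feature maps from
# several levels, modeling cross-level correlations with a tiny scoring head.
import torch
import torch.nn as nn

class AttentiveIntegration(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)  # scores each level's pooled descriptor

    def forward(self, current, history):
        # history: list of earlier feature maps with the same shape as `current`.
        feats = torch.stack(history + [current], dim=1)       # (B, L, C, H, W)
        desc = feats.mean(dim=(-1, -2))                        # global average pool -> (B, L, C)
        weights = torch.softmax(self.score(desc), dim=1)       # attention over levels, (B, L, 1)
        fused = (weights.unsqueeze(-1).unsqueeze(-1) * feats).sum(dim=1)
        return fused                                           # (B, C, H, W)
```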


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
6.
Sensors (Basel) ; 20(15)2020 Aug 04.
Article in English | MEDLINE | ID: mdl-32759800

ABSTRACT

Standard convolutional filters usually capture unnecessary overlap among features, wasting computational cost. In this paper, we aim to solve this problem by proposing a novel Learned Depthwise Separable Convolution (LdsConv) operation that is smart but has a strong capacity for learning. It integrates a pruning technique into the design of convolutional filters and is formulated as a generic convolutional unit that can directly replace standard convolutions without any adjustments to the architecture. To show the effectiveness of the proposed method, experiments are carried out on state-of-the-art convolutional neural networks (CNNs), including ResNet, DenseNet, SE-ResNet, and MobileNet. The results show that simply replacing the original convolutions with LdsConv in these CNNs achieves significantly improved accuracy while reducing computational cost. For ResNet50, FLOPs are reduced by 40.9% while the accuracy on ImageNet increases.
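For reference, a plain depthwise separable convolution used as a drop-in replacement for a standard convolution looks like the sketch below. It shows only the separable structure; the learned pruning step that distinguishes LdsConv is not reproduced here.

```python
# Hedged sketch: a standard depthwise separable convolution (depthwise spatial
# filtering followed by a 1x1 pointwise channel mix) as a drop-in conv unit.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, stride: int = 1):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, stride=stride,
                                   padding=k // 2, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```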
