Results 1 - 11 of 11
1.
Brief Bioinform ; 23(1)2022 01 17.
Article in English | MEDLINE | ID: mdl-34962264

ABSTRACT

Transcription factors (TFs) are proteins specifically involved in the regulation of gene expression. It is generally accepted in epigenetics that methylated nucleotides can prevent TFs from binding to DNA fragments. However, recent studies have confirmed that some TFs are capable of interacting with methylated DNA fragments to further regulate gene expression. Although biochemical experiments can identify TFs that bind methylated DNA sequences, these wet-lab methods are time-consuming and expensive. Machine learning methods offer a way to identify such TFs quickly and without experimental materials. This study therefore aims to design a robust predictor to detect methylated-DNA-binding TFs. We first proposed using tripeptide word vector features to represent protein samples. Subsequently, a two-step computational model was designed based on a recurrent neural network with long short-term memory (LSTM). The first-step predictor discriminates transcription factors from non-transcription factors. Once a protein is predicted to be a TF, the second-step predictor judges whether that TF can bind methylated DNA. On the independent test dataset, the accuracies of the first and second steps are 86.63% and 73.59%, respectively. In addition, statistical analysis of the distribution of tripeptides in the training samples showed that the position and number of certain tripeptides in a sequence can affect the binding of TFs to methylated DNA. Finally, a free web server based on the proposed model was established, available at https://bioinfor.nefu.edu.cn/TFPM/.
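The tripeptide encoding and two-step LSTM scheme can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch implementation, not the authors' published code; the vocabulary construction, embedding size, hidden size, and example sequence are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): encode a protein as
# overlapping tripeptide tokens and classify it with a small LSTM.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TRIPEPTIDES = [a + b + c for a in AMINO_ACIDS for b in AMINO_ACIDS for c in AMINO_ACIDS]
TRI2ID = {t: i + 1 for i, t in enumerate(TRIPEPTIDES)}  # 0 is reserved for padding

def tripeptide_ids(seq: str) -> torch.Tensor:
    """Slide a length-3 window over the sequence and map each tripeptide to an ID."""
    ids = [TRI2ID.get(seq[i:i + 3], 0) for i in range(len(seq) - 2)]
    return torch.tensor(ids, dtype=torch.long)

class TripeptideLSTM(nn.Module):
    """Binary classifier; one instance per step of the two-step scheme
    (TF vs. non-TF, then methylated-DNA-binding TF vs. other TF)."""
    def __init__(self, vocab=len(TRIPEPTIDES) + 1, emb=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, seq_len) of tripeptide IDs
        _, (h, _) = self.lstm(self.embed(x))    # use the final hidden state
        return torch.sigmoid(self.head(h[-1]))  # probability of the positive class

# Example: score one sequence with an (untrained) first-step model.
model = TripeptideLSTM()
prob_tf = model(tripeptide_ids("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").unsqueeze(0))
```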


Subject(s)
DNA Methylation , Neural Networks, Computer , Transcription Factors/metabolism , Algorithms , Binding Sites , DNA/genetics , DNA-Binding Proteins , Deep Learning , Gene Expression Regulation , Humans , Protein Binding
2.
Network ; : 1-26, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829364

ABSTRACT

In the cloud, the dynamic workload is distributed evenly among all nodes (hosts or VMs) using load-balancing methods; load balancing in the cloud is also known as Load Balancing as a Service (LBaaS). In this research work, the load is balanced through Virtual Machine (VM) migration driven by the proposed Sail Jelly Fish Optimization (SJFO), which combines the Sail Fish Optimizer (SFO) with the Jellyfish Search (JS) optimizer. In the cloud model, there are many Physical Machines (PMs), each comprising many VMs. Each VM runs many tasks, and these tasks depend on parameters such as Central Processing Unit (CPU), memory, Million Instructions per Second (MIPS), capacity, total number of processing entities, and bandwidth. Here, the load is predicted by a Deep Recurrent Neural Network (DRNN), the predicted load is compared with a threshold value, and VM migration is performed based on the predicted values. Furthermore, the performance of SJFO-VM is analysed using metrics such as capacity, load, and resource utilization. The proposed method shows better performance, with a higher capacity of 0.598, a lower load of 0.089, and a lower resource utilization of 0.257.
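The predict-then-migrate step can be sketched as follows. This is an assumed illustration, not the paper's SJFO-VM implementation; the GRU-based predictor, the feature set, and the 0.8 threshold are hypothetical choices.

```python
# Minimal sketch: predict each VM's load with a small recurrent model and flag
# VMs for migration when the predicted load exceeds a threshold.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    """GRU over a window of past utilization readings -> next-step load estimate."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, history):             # history: (batch, window, n_features)
        _, h = self.rnn(history)
        return self.out(h[-1]).squeeze(-1)   # predicted load per VM

def select_vms_to_migrate(model, vm_histories, threshold=0.8):
    """Return indices of VMs whose predicted load exceeds the threshold."""
    with torch.no_grad():
        predicted = model(vm_histories)
    return (predicted > threshold).nonzero(as_tuple=True)[0].tolist()

# Example: 10 VMs, each with 12 past readings of [cpu, memory, mips, bandwidth].
model = LoadPredictor()
histories = torch.rand(10, 12, 4)
overloaded = select_vms_to_migrate(model, histories)
```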

3.
J Biomed Inform ; 147: 104511, 2023 11.
Article in English | MEDLINE | ID: mdl-37813326

ABSTRACT

Analyzing large EHR databases to predict cancer progression and treatments has become an active research area in recent years. A growing number of modern deep learning models have been proposed to identify milestones in patients' medical journeys, predict their disease status, and give healthcare professionals valuable insights. However, most existing methods do not consider the inter-relationships among different patients. We believe that more valuable information can be extracted, especially when patients with similar disease statuses visit the same doctors. To this end, a similar-patient-augmentation approach named SimPA is proposed to enhance the learning of patient representations and, in turn, predict transitions between lines of therapy. Experimental results on a real-world multiple myeloma dataset show that the proposed approach outperforms state-of-the-art baselines in terms of standard evaluation metrics for classification tasks.
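As a rough, hypothetical illustration of the similar-patient-augmentation idea (the paper does not publish this code, and its actual mechanism may differ), one could blend each patient's representation with those of its most similar peers:

```python
# Hypothetical sketch: augment each patient embedding with the mean of its
# k most similar patients by cosine similarity. k, alpha, and dimensions are assumptions.
import numpy as np

def augment_with_similar_patients(embeddings: np.ndarray, k: int = 5, alpha: float = 0.5):
    """embeddings: (n_patients, dim). Returns augmented embeddings of the same shape."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                      # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)               # exclude the patient itself
    neighbors = np.argsort(-sim, axis=1)[:, :k]  # indices of the k most similar patients
    neighbor_mean = embeddings[neighbors].mean(axis=1)
    return alpha * embeddings + (1 - alpha) * neighbor_mean

# Example: 100 patients with 64-dimensional representations.
patient_reprs = np.random.rand(100, 64)
augmented = augment_with_similar_patients(patient_reprs)
```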


Subject(s)
Electronic Health Records , Humans , Databases, Factual
4.
Environ Monit Assess ; 193(12): 798, 2021 Nov 13.
Article in English | MEDLINE | ID: mdl-34773156

ABSTRACT

Dissolved oxygen (DO) concentration in water is one of the key parameters for assessing river water quality. Artificial intelligence (AI) methods have previously proved to be accurate tools for DO concentration prediction. This study implements a deep learning approach based on a recurrent neural network (RNN) algorithm. The proposed deep recurrent neural network (DRNN) model is compared with support vector machine (SVM) and artificial neural network (ANN) models, previously shown to be robust AI algorithms. Fanno Creek in Oregon (USA) is selected as a case study, and daily values of water temperature, specific conductance, streamflow discharge, pH, and DO concentration are used as input variables to predict DO concentration at three different lead times ("t + 1," "t + 3," and "t + 7"). Based on Pearson's correlation coefficient, several input variable combinations are formed and used for prediction. Model prediction performance is evaluated using indices such as the correlation coefficient, Nash-Sutcliffe efficiency, root mean square error, and mean absolute error. The results identify the DRNN model ([Formula: see text]) as the most accurate among the three models considered, highlighting the potential of deep learning approaches for water quality parameter prediction.
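A sketch of the data preparation implied here, with assumed column and file names, might look like the following; it selects predictors by Pearson correlation and shifts DO forward by the chosen lead time.

```python
# Assumed workflow, not the authors' code: build a supervised dataset for a
# chosen lead time (t+1, t+3, or t+7 days) from daily water-quality series.
import pandas as pd

def make_forecast_frame(df: pd.DataFrame, target: str = "DO", lead: int = 1,
                        min_abs_corr: float = 0.3) -> pd.DataFrame:
    """df holds daily columns such as temperature, conductance, discharge, pH, DO."""
    corr = df.corr(method="pearson")[target].drop(target)
    predictors = corr[corr.abs() >= min_abs_corr].index.tolist()  # input combination
    frame = df[predictors].copy()
    frame[f"{target}_t+{lead}"] = df[target].shift(-lead)          # future DO as target
    return frame.dropna()

# Example with a hypothetical file containing the five daily variables.
daily = pd.read_csv("fanno_creek_daily.csv")
supervised = make_forecast_frame(daily, lead=3)
```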


Subject(s)
Artificial Intelligence , Rivers , Environmental Monitoring , Neural Networks, Computer , Oxygen/analysis
5.
Sensors (Basel) ; 20(20)2020 Oct 21.
Article in English | MEDLINE | ID: mdl-33096769

ABSTRACT

Automated lying-posture tracking is important for preventing bed-related disorders such as pressure injuries, sleep apnea, and lower-back pain. Prior research has studied in-bed lying posture tracking using sensors of different modalities (e.g., accelerometers and pressure sensors). However, significant gaps remain in how to design efficient in-bed lying posture tracking systems. These gaps can be articulated through several research questions. First, can we design a single-sensor, pervasive, and inexpensive system that accurately detects lying postures? Second, what computational models are most effective for accurate detection of lying postures? Finally, what physical configuration of the sensor system is most effective for lying posture tracking? To answer these questions, in this article we propose a comprehensive approach to designing a sensor system that uses a single accelerometer along with machine learning algorithms for in-bed lying posture classification. We design two categories of machine learning algorithms, based on deep learning and on traditional classification with handcrafted features, to detect lying postures. We also investigate which wearing sites are most effective for accurate detection of lying postures. We extensively evaluate the performance of the proposed algorithms on nine body locations and four human lying postures using two datasets. Our results show that a system with a single accelerometer can be used with either deep learning or traditional classifiers to accurately detect lying postures. The best models in our approach achieve an F1 score ranging from 95.2% to 97.8% with a coefficient of variation from 0.03 to 0.05. The results also identify the thighs and chest as the most salient body sites for lying posture tracking. These findings suggest that, because accelerometers are ubiquitous and inexpensive sensors, they can be a viable source of information for pervasive monitoring of in-bed postures.
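A minimal sketch of the traditional-classifier branch, with illustrative handcrafted features rather than the paper's exact feature set, could look like this:

```python
# Illustrative sketch: classify lying posture from a single tri-axial
# accelerometer using simple handcrafted features and a traditional classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) raw x/y/z acceleration for one time window."""
    mean = window.mean(axis=0)
    std = window.std(axis=0)
    magnitude = np.linalg.norm(window, axis=1)
    return np.concatenate([mean, std, [magnitude.mean(), magnitude.std()]])

# Example with synthetic data: 200 windows of 3 s at 50 Hz, 4 posture classes
# (e.g., supine, prone, left side, right side).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 150, 3))
labels = rng.integers(0, 4, size=200)
X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
predicted = clf.predict(X[:5])
```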

6.
BMC Bioinformatics ; 18(1): 417, 2017 Sep 18.
Article in English | MEDLINE | ID: mdl-28923002

ABSTRACT

BACKGROUND: Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to bioinformatics in 2012, it has achieved success in areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. RESULTS: We designed four deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to existing methods, while the MAE of the psi angle is 29°, 2° lower than existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. CONCLUSIONS: Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent architectures perform slightly better than the deep feed-forward architectures, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
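For concreteness, the MAE of torsion angles can be computed on wrapped angular differences; the wrapping step is an assumption about the evaluation, not a detail quoted from the paper.

```python
# Compute the mean absolute error of predicted torsion angles, wrapping the
# difference into [-180°, 180°] so that e.g. 179° vs. -179° counts as a 2° error.
import numpy as np

def torsion_mae(pred_deg: np.ndarray, true_deg: np.ndarray) -> float:
    diff = (pred_deg - true_deg + 180.0) % 360.0 - 180.0
    return float(np.abs(diff).mean())

# Example: per-residue phi predictions vs. ground truth.
pred = np.array([-60.0, 179.0, 45.0])
true = np.array([-75.0, -179.0, 60.0])
print(torsion_mae(pred, true))  # ~10.67°; the wrap keeps the 179/-179 pair at 2°
```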


Subject(s)
Machine Learning , Proteins/chemistry , Molecular Structure , Neural Networks, Computer , Protein Structure, Secondary
7.
Digit Health ; 10: 20552076241249874, 2024.
Article in English | MEDLINE | ID: mdl-38726217

ABSTRACT

Automated epileptic seizure detection from electroencephalogram (EEG) signals has attracted significant attention in health informatics in recent years. Epilepsy, a serious brain condition characterized by recurrent seizures, is typically described as a sudden change in behavior caused by momentary excessive electrical discharges in a group of brain cells, and the EEG signal is the one primarily used to identify seizures. The development of various deep learning (DL) algorithms for epileptic seizure diagnosis has been driven by the EEG's non-invasiveness and its capacity to capture repetitive patterns of seizure-related electrophysiological information. Existing DL models, however, struggle in clinical contexts: the irregular and unordered structure of physiological recordings makes it difficult to treat them as a matrix, and this, together with the EEG's low amplitude and nonstationary nature, has been a key obstacle to producing consistent and reliable diagnostic outcomes. Graph neural networks have achieved significant improvements by exploiting implicit information present in the brain's anatomical organization, in which interacting nodes are connected by edges whose weights can be determined by either temporal associations or anatomical connections. Considering all these aspects, a novel hybrid framework for epileptic seizure detection is proposed that combines a sequential graph convolutional network (SGCN) with a deep recurrent neural network (DeepRNN). Here, the DeepRNN is developed by fusing a gated recurrent unit (GRU) with a traditional RNN; its key benefit is that it alleviates the vanishing-gradient problem and gives the hybrid framework greater expressive power. Line length, auto-covariance, auto-correlation, and periodogram features are extracted from the raw EEG signal, and the resulting matrix is grouped into the time-frequency domain as input for the SGCN to use for seizure classification. This model extracts both spatial and temporal information, resulting in improved accuracy, precision, and recall for seizure detection. Extensive experiments conducted on the CHB-MIT and TUH datasets showed that the SGCN-DeepRNN model outperforms other deep learning models for seizure detection, achieving an accuracy of 99.007% with high sensitivity and specificity.
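The listed EEG features can be sketched as follows; the exact definitions (e.g., lag-1 auto-covariance, summary statistics of the periodogram) are assumptions, not the authors' code.

```python
# Assumed feature definitions: per-channel line length, auto-covariance,
# auto-correlation, and periodogram statistics from one EEG window.
import numpy as np
from scipy.signal import periodogram

def eeg_window_features(window: np.ndarray, fs: float = 256.0) -> np.ndarray:
    """window: (n_channels, n_samples). Returns one feature vector per channel, stacked."""
    feats = []
    for ch in window:
        line_length = np.abs(np.diff(ch)).sum()
        centered = ch - ch.mean()
        autocov = (centered[:-1] * centered[1:]).mean()           # lag-1 auto-covariance
        autocorr = autocov / centered.var() if centered.var() > 0 else 0.0
        _, psd = periodogram(ch, fs=fs)
        feats.append([line_length, autocov, autocorr, psd.mean(), psd.max()])
    return np.array(feats)

# Example: a 2-second window from 18 channels sampled at 256 Hz.
rng = np.random.default_rng(1)
features = eeg_window_features(rng.normal(size=(18, 512)))
```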

8.
Neural Netw ; 157: 240-256, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36399979

ABSTRACT

Time series forecasting models that use past information from exogenous or endogenous sequences to forecast future series play an important role in the real world, because most real-world time series datasets are rich in time-dependent information. Most conventional prediction models for time series datasets are time-consuming and fraught with complex limitations because they usually fail to adequately exploit the latent spatial dependence between pairs of variables. As a successful variant of recurrent neural networks, the long short-term memory network (LSTM) has been demonstrated to have stronger nonlinear dynamics for storing sequential data than traditional machine learning models. Nevertheless, the common shallow LSTM architecture has limited capacity to fully extract the transient characteristics of long-interval sequential datasets. In this study, a novel deep autoregression feature augmented bidirectional LSTM network (DAFA-BiLSTM) is proposed as a new deep BiLSTM architecture for time series prediction. Initially, the input vectors are fed into a vector autoregression (VA) transformation module to represent the time-delayed linear and nonlinear properties of the input signals in an unsupervised way. Then, the learned nonlinear combination vectors of the VA are progressively fed into different layers of the BiLSTM, and the output of the previous BiLSTM module is concatenated with the time-delayed linear vectors of the VA as an augmented feature to form new additional input signals for the next adjacent BiLSTM layer. Extensive real-world time series applications are addressed to demonstrate the superiority and robustness of the proposed DAFA-BiLSTM. Comparative experimental results and statistical analysis show that the proposed DAFA-BiLSTM has good adaptive performance as well as robustness, even in noisy environments.
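The layer-wise feature augmentation can be illustrated structurally; this sketch is loosely modeled on the description above and is not the published DAFA-BiLSTM code. Layer sizes and the one-step-ahead output head are assumptions.

```python
# Structural sketch: each BiLSTM layer receives the previous layer's output
# concatenated with time-delayed (augmented) features between layers.
import torch
import torch.nn as nn

class AugmentedBiLSTMStack(nn.Module):
    def __init__(self, n_features=8, n_aug=8, hidden=32, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        in_dim = n_features
        for _ in range(n_layers):
            self.layers.append(nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True))
            in_dim = 2 * hidden + n_aug          # next layer sees output + augmented features
        self.head = nn.Linear(2 * hidden, 1)     # one-step-ahead forecast

    def forward(self, x, aug):                   # x: (B, T, n_features), aug: (B, T, n_aug)
        out = x
        for i, lstm in enumerate(self.layers):
            out, _ = lstm(out)
            if i < len(self.layers) - 1:
                out = torch.cat([out, aug], dim=-1)  # feature augmentation between layers
        return self.head(out[:, -1])             # predict from the last time step

# Example: batch of 4 series, 24 time steps.
model = AugmentedBiLSTMStack()
y_hat = model(torch.rand(4, 24, 8), torch.rand(4, 24, 8))
```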


Subject(s)
Memory, Long-Term , Neural Networks, Computer , Time Factors , Forecasting
9.
Med Biol Eng Comput ; 59(5): 1005-1021, 2021 May.
Article in English | MEDLINE | ID: mdl-33851321

ABSTRACT

Cancer is one of the deadliest diseases worldwide, and patients can be rescued only when the cancer is detected at an early stage; in the final stage the chance of survival is limited, and early detection also prevents the damage caused to other organs. The symptoms of cancer are severe and should be studied thoroughly before diagnosis, so an automatic prediction system for classifying cancer as malignant or benign is necessary. Hence, this paper introduces a novel strategy based on a Jaya Ant Lion Optimization-based deep recurrent neural network (JayaALO-based DeepRNN) for cancer classification. The developed model involves four phases: data normalization, data transformation, feature dimension reduction, and classification. First, the input images are gathered and normalized; the goal of data normalization is to eliminate data redundancy and to avoid storing the same information in several places in a relational database. The data transformation is then carried out using a log transformation, which generates more interpretable patterns, helps satisfy modelling assumptions, and reduces skew. Next, non-negative matrix factorization is employed to reduce the feature dimension. Finally, the reduced features are fed into the DeepRNN, which is trained using the proposed JayaALO, designed by combining the Ant Lion Optimizer (ALO) with the Jaya algorithm, to classify cancer effectively. The proposed JayaALO-based DeepRNN showed improved results, with a maximal accuracy of 95.97%, a maximal sensitivity of 95.95%, and a maximal specificity of 96.96%.
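The preprocessing chain (normalization, log transformation, NMF) can be sketched as follows; the scaler choice, component count, and NMF initialization are assumptions, not the authors' settings.

```python
# Assumed preprocessing pipeline: normalize the data, apply a log transform to
# reduce skew, and shrink the feature dimension with non-negative matrix factorization.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import NMF

def preprocess(X: np.ndarray, n_components: int = 20) -> np.ndarray:
    """X: (n_samples, n_features), non-negative raw measurements."""
    X_norm = MinMaxScaler().fit_transform(X)          # data normalization to [0, 1]
    X_log = np.log1p(X_norm)                           # log transformation reduces skew
    reduced = NMF(n_components=n_components, init="nndsvda",
                  max_iter=500, random_state=0).fit_transform(X_log)
    return reduced                                     # features fed to the DeepRNN classifier

# Example: 150 samples with 1,000 non-negative features.
X = np.abs(np.random.default_rng(2).normal(size=(150, 1000)))
X_reduced = preprocess(X)
```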


Subject(s)
Neoplasms , Neural Networks, Computer , Algorithms , Databases, Factual , Gene Expression , Humans
10.
Trends Hear ; 25: 23312165211041475, 2021.
Article in English | MEDLINE | ID: mdl-34606381

ABSTRACT

A deep recurrent neural network (RNN) for reducing transient sounds was developed and its effects on subjective speech intelligibility and listening comfort were investigated. The RNN was trained using sentences spoken with different accents and corrupted by transient sounds, using the clean speech as the target. It was tested using sentences spoken by unseen talkers and corrupted by unseen transient sounds. A paired-comparison procedure was used to compare all possible combinations of three conditions for subjective speech intelligibility and listening comfort for two relative levels of the transients. The conditions were: no processing (NP); processing using the RNN; and processing using a multi-channel transient reduction method (MCTR). Ten participants with normal hearing and ten with mild-to-moderate hearing loss participated. For the latter, frequency-dependent linear amplification was applied to all stimuli to compensate for individual audibility losses. For the normal-hearing participants, processing using the RNN was significantly preferred over that for NP for subjective intelligibility and comfort, processing using the RNN was significantly preferred over that for MCTR for subjective intelligibility, and processing using the MCTR was significantly preferred over that for NP for comfort for the higher transient level only. For the hearing-impaired participants, processing using the RNN was significantly preferred over that for NP for both subjective intelligibility and comfort, processing using the RNN was significantly preferred over that for MCTR for comfort, and processing using the MCTR was significantly preferred over that for NP for comfort.
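A minimal sketch of the training setup described above (clean speech as the regression target) is given below; the log-mel front end, network sizes, and optimizer settings are assumptions, not the study's implementation.

```python
# Assumed training setup: map noisy spectrogram frames to clean speech frames,
# using the clean signal as the regression target.
import torch
import torch.nn as nn

class TransientReductionRNN(nn.Module):
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, noisy):              # noisy: (batch, frames, n_mels)
        h, _ = self.rnn(noisy)
        return self.out(h)                 # enhanced frames, same shape as input

model = TransientReductionRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on synthetic data.
noisy = torch.rand(8, 100, 64)             # speech corrupted by transient sounds
clean = torch.rand(8, 100, 64)             # corresponding clean-speech target
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```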


Subject(s)
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Humans , Neural Networks, Computer , Noise/adverse effects , Speech Intelligibility
11.
Int J Med Inform ; 129: 1-12, 2019 09.
Article in English | MEDLINE | ID: mdl-31445242

ABSTRACT

BACKGROUND: Patients with cleft palate are unable to produce adequate velopharyngeal closure, which results in hypernasal speech. In the clinic, hypernasal speech is assessed subjectively by speech-language pathologists. Automatic hypernasal speech detection can provide aided diagnoses for speech-language pathologists and clinicians. OBJECTIVES: This study aims to develop a Long Short-Term Memory (LSTM)-based Deep Recurrent Neural Network (DRNN) system to detect hypernasal speech from patients with cleft palate, and thus to provide aided diagnoses for clinical operations and speech therapy. The feature mining and classification abilities of the LSTM-DRNN system are also explored. METHODS: The speech recordings comprise 14,544 Mandarin vowels collected from 144 children (72 with hypernasality and 72 controls) aged 5-12 years. This work proposes an LSTM-based DRNN system for automatic hypernasal speech detection, since the LSTM-DRNN can learn the short-time dependencies of hypernasal speech. Vocal-tract-based features are fed into the LSTM-DRNN to achieve deep feature mining. To verify the feature mining ability of the LSTM-DRNN, the features projected by the LSTM-DRNN are fed into shallow classifiers instead of the subsequent two fully connected layers and softmax layer; as a comparison, the features without LSTM-DRNN projection are fed directly into the shallow classifiers. Hypernasality-sensitive vowels (/a/, /i/, and /u/) are analyzed for the first time. RESULTS: The LSTM-DRNN-based hypernasal speech detection method reaches higher detection accuracy than the shallow classifiers, since the LSTM-DRNN mines features along the time axis and the network depth simultaneously. The proposed LSTM-DRNN-based hypernasality detection system reaches a highest accuracy of 93.35%. The analysis of hypernasality-sensitive vowels indicates that /i/ and /u/ are the vowels most sensitive to hypernasal speech. CONCLUSIONS: The results show that the LSTM-DRNN has robust feature mining and classification abilities. This is the first work to apply the LSTM-DRNN technique to automatically detect hypernasality in cleft palate speech. The experimental results demonstrate the potential of deep learning for pathological speech detection.
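A minimal, illustrative LSTM-DRNN of the kind described, with a hook for exporting the projected features to a shallow classifier, might look like this; the feature dimensionality and layer sizes are assumptions, not the authors' configuration.

```python
# Illustrative architecture: an LSTM-based DRNN for binary hypernasality
# detection, with an option to export the projected features.
import torch
import torch.nn as nn

class HypernasalityLSTM(nn.Module):
    def __init__(self, n_features=13, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(hidden, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x, return_features=False):   # x: (batch, frames, n_features)
        _, (h, _) = self.lstm(x)
        feats = h[-1]                               # projected features from the last layer
        if return_features:
            return feats                            # e.g., input to a shallow classifier
        return self.fc(feats)                       # logits: hypernasal vs. control

# Example: a batch of vowel recordings as frame-level vocal-tract features.
model = HypernasalityLSTM()
vowels = torch.rand(16, 80, 13)
logits = model(vowels)
projected = model(vowels, return_features=True)
```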


Subject(s)
Neural Networks, Computer , Nose Diseases/diagnosis , Adolescent , Child , Child, Preschool , Cleft Palate/complications , Female , Humans , Male , Nose Diseases/etiology , Speech