Results 1 - 20 of 63
1.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38493338

ABSTRACT

In recent years, there has been a growing trend toward parallel clustering analysis of single-cell RNA-seq (scRNA) and single-cell Assay for Transposase-Accessible Chromatin (scATAC) data. However, prevailing methods often treat these two modalities as equals, neglecting the fact that the scRNA modality holds significantly richer information than the scATAC modality. This disregard prevents the model from benefiting from the insights derived from multiple modalities, compromising overall clustering performance. To this end, we propose scEMC, an effective multi-modal clustering model for parallel scRNA and scATAC data. Concretely, we devise a skip aggregation network to simultaneously learn global structural information among cells and integrate data from diverse modalities. To safeguard the quality of the integrated cell representation against the influence of sparse scATAC data, we connect the scRNA data with the aggregated representation via a skip connection. Moreover, to effectively fit the real distribution of cells, we introduce a Zero-Inflated Negative Binomial (ZINB)-based denoising autoencoder that accommodates corrupted data containing synthetic noise, together with a joint optimization module that employs multiple losses. Extensive experiments underscore the effectiveness of our model. This work contributes to the ongoing exploration of cell subpopulations and tumor microenvironments, and the code will be made public at https://github.com/DayuHuu/scEMC.
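The ZINB-based denoising autoencoder mentioned above fits dropout-heavy count data with a zero-inflated negative binomial likelihood. Below is a minimal sketch of the per-entry ZINB negative log-likelihood such a model would minimize; it is an illustration in plain Python, not the authors' implementation, and the function names are ours:

```python
from math import lgamma, log, exp

def nb_logpmf(x, mu, theta):
    """Log-pmf of a negative binomial with mean mu and dispersion theta."""
    return (lgamma(x + theta) - lgamma(theta) - lgamma(x + 1)
            + theta * log(theta / (theta + mu))
            + x * log(mu / (theta + mu)))

def zinb_nll(x, mu, theta, pi):
    """Negative log-likelihood of count x under ZINB(mu, theta, pi)."""
    if x == 0:
        # A zero comes either from the dropout component (probability pi)
        # or from the negative binomial itself.
        p0 = pi + (1.0 - pi) * exp(nb_logpmf(0, mu, theta))
        return -log(p0)
    return -(log(1.0 - pi) + nb_logpmf(x, mu, theta))
```

In a full model, mu, theta and pi would be decoder outputs per gene and cell, and the loss would sum `zinb_nll` over all entries of the expression matrix.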


Subjects
Chromatin; RNA, Small Cytoplasmic; Single-Cell Gene Expression Analysis; Cluster Analysis; Learning; RNA, Small Cytoplasmic/genetics; Transposases; Sequence Analysis, RNA; Gene Expression Profiling
2.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36631401

ABSTRACT

The advances in single-cell ribonucleic acid sequencing (scRNA-seq) allow researchers to explore cellular heterogeneity and human diseases at cell resolution. Cell clustering is a prerequisite in scRNA-seq analysis since it can recognize cell identities. However, the high dimensionality, noise and significant sparsity of scRNA-seq data make it a major challenge. Although many methods have emerged, they still fail to fully explore the intrinsic properties of cells and the relationships among cells, which seriously affects downstream clustering performance. Here, we propose a new deep contrastive clustering algorithm called scDCCA. It integrates a denoising autoencoder and a dual contrastive learning module into a deep clustering framework to extract valuable features and realize cell clustering. Specifically, to characterize the data and learn robust representations, scDCCA utilizes a denoising Zero-Inflated Negative Binomial model-based autoencoder to extract low-dimensional features. Meanwhile, scDCCA incorporates a dual contrastive learning module to capture the pairwise proximity of cells. By increasing the similarities between positive pairs and the differences between negative ones, the contrasts at both the instance and the cluster level help the model learn more discriminative features and achieve better cell segregation. Furthermore, scDCCA joins feature learning with clustering, realizing representation learning and cell clustering in an end-to-end manner. Experimental results on 14 real datasets validate that scDCCA outperforms eight state-of-the-art methods in terms of accuracy, generalizability, scalability and efficiency. Cell visualization and biological analysis demonstrate that scDCCA significantly improves clustering and facilitates downstream analysis of scRNA-seq data. The code is available at https://github.com/WJ319/scDCCA.
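The instance-level half of a dual contrastive module pulls two views of the same cell together while pushing other cells apart. A generic NT-Xent-style loss in NumPy is sketched below — one common formulation, assumed here for illustration, not necessarily the paper's exact loss:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Instance-level contrastive loss: row i of z1 and row i of z2 are
    two views of the same cell (a positive pair); all other rows in the
    batch act as negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    loss = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)                          # index of the positive
        logits = np.delete(sim[i], i)                  # drop self-similarity
        loss += -sim[i, j] + np.log(np.sum(np.exp(logits)))
    return loss / (2 * n)
```

Aligned views yield a lower loss than anti-aligned ones, which is the gradient signal that makes the learned features more discriminative.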


Subjects
Gene Expression Profiling; Single-Cell Gene Expression Analysis; Humans; Gene Expression Profiling/methods; Sequence Analysis, RNA/methods; Single-Cell Analysis/methods; Algorithms; Cluster Analysis
3.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-36971393

ABSTRACT

MOTIVATION: Numerous studies have shown that circular RNA (circRNA) affects biological processes by competitively binding miRNA, providing a new perspective for the diagnosis and treatment of human diseases. Therefore, exploring potential circRNA-miRNA interactions (CMIs) is an important and urgent task. Although some computational methods have been tried, their performance is limited by incomplete feature extraction in sparse networks and low computational efficiency on lengthy data. RESULTS: In this paper, we propose JSNDCMI, which combines a multi-structure feature extraction framework with a Denoising Autoencoder (DAE) to meet the challenge of CMI prediction in sparse networks. In detail, JSNDCMI integrates functional similarity and local topological structure similarity in the CMI network through the multi-structure feature extraction framework, then forces the neural network to learn robust feature representations through the DAE, and finally uses a Gradient Boosting Decision Tree classifier to predict potential CMIs. JSNDCMI achieves the best performance in 5-fold cross-validation on all datasets. In the case study, seven of the top 10 CMIs with the highest scores were verified in PubMed. AVAILABILITY: The data and source code can be found at https://github.com/1axin/JSNDCMI.


Subjects
MicroRNAs; Humans; MicroRNAs/genetics; RNA, Circular; Neural Networks, Computer; Software; Computational Biology/methods
4.
Sensors (Basel) ; 24(6)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38544221

ABSTRACT

The BeiDou Navigation Satellite System (BDS) provides real-time absolute location services to users around the world and plays a key role in the rapidly evolving field of autonomous driving. In complex urban environments, the positioning accuracy of BDS often suffers from large deviations due to non-line-of-sight (NLOS) signals. Deep learning (DL) methods have shown strong capabilities in detecting complex and variable NLOS signals, but they still suffer from the following limitations. On the one hand, supervised learning methods require labeled samples, which inevitably runs into the bottleneck of constructing databases with large numbers of labels. On the other hand, the collected data tend to contain varying degrees of noise, leading to low accuracy and poor generalization of the detection model, especially when the environment around the receiver changes. In this article, we propose a novel deep neural architecture, the convolutional denoising autoencoder network (CDAENet), to detect NLOS signals in urban forest environments. Specifically, we first design a denoising autoencoder based on unsupervised DL to reduce the dimensionality of long time-series signals and extract deep features from the data. Introducing a certain amount of noise into the input data also improves the model's robustness in identifying noisy data. Then, an MLP is used to capture the non-linearity of the BDS signal. Finally, the performance of the proposed CDAENet model is validated on a real urban forest dataset. The experimental results show that the satellite detection accuracy of our algorithm exceeds 95%, which is about an 8% improvement over existing machine-learning-based methods and about a 3% improvement over deep-learning-based approaches.
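The denoising-autoencoder idea that CDAENet builds on — corrupt the input, then reconstruct the clean signal — can be sketched with a single fully connected encoder/decoder pair in NumPy. The real model is convolutional; everything below (sizes, noise level, learning rate) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))              # toy "clean" signals

lr = 0.05
W1 = rng.normal(scale=0.1, size=(16, 8))    # encoder weights
W2 = rng.normal(scale=0.1, size=(8, 16))    # decoder weights

def mse(A, B):
    return float(np.mean((A - B) ** 2))

loss0 = None
for step in range(500):
    Xn = X + rng.normal(scale=0.3, size=X.shape)   # corrupt the input
    H = np.tanh(Xn @ W1)                           # encode
    Xr = H @ W2                                    # decode
    if loss0 is None:
        loss0 = mse(Xr, X)                         # error vs the CLEAN target
    err = Xr - X
    gW2 = H.T @ err / len(X)                       # gradient, decoder
    gH = err @ W2.T * (1 - H ** 2)                 # backprop through tanh
    gW1 = Xn.T @ gH / len(X)                       # gradient, encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

loss1 = mse(np.tanh(X @ W1) @ W2, X)               # final reconstruction error
```

The key detail is that the reconstruction target is the clean input while the network only ever sees the corrupted version, forcing the bottleneck `H` to encode noise-robust features.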

5.
Mol Divers ; 27(3): 1333-1343, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35871213

ABSTRACT

Drug-target interaction is crucial in the discovery of new drugs. Computational methods can identify new drug-target interactions at low cost and with reasonable accuracy. Recent studies pay more attention to machine-learning methods, ranging from matrix factorization to deep learning, for DTI prediction. Since the interaction matrix is often extremely sparse, DTI prediction performance is significantly decreased with matrix factorization-based methods. Therefore, some matrix factorization methods utilize side information to address both the sparsity issue of the interaction matrix and the cold-start issue. By combining matrix factorization and autoencoders, we propose a hybrid DTI prediction model that simultaneously learns the hidden factors of drugs and targets from their side information and the interaction matrix. The proposed method is composed of two steps: pre-processing of the interaction matrix, and the hybrid model. We leverage the similarity matrices of both drugs and targets to address the sparsity problem of the interaction matrix. Comparison of our approach against other algorithms on the same reference datasets shows good results for the area under the receiver operating characteristic curve and the area under the precision-recall curve. More specifically, experimental results achieve high accuracy on gold-standard datasets (e.g., Nuclear Receptors, GPCRs, Ion Channels, and Enzymes) under five repetitions of tenfold cross-validation. A graphical abstract depicts the hybrid model of matrix factorization with denoising autoencoders, using the side information of drugs and targets, for the prediction of drug-target interactions.
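The matrix-factorization half of such a hybrid can be sketched as plain gradient descent on a sparse 0/1 interaction matrix. The autoencoder branch and the similarity-matrix pre-processing are omitted here, and all names, sizes and rates are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
R = (rng.random((20, 15)) < 0.1).astype(float)   # sparse 0/1 drug-target matrix
k, lr = 4, 0.1
U = rng.normal(scale=0.1, size=(20, k))          # latent drug factors
V = rng.normal(scale=0.1, size=(15, k))          # latent target factors

def fit_error(U, V):
    """Mean squared reconstruction error of the interaction matrix."""
    return float(np.mean((R - U @ V.T) ** 2))

l0 = fit_error(U, V)
for _ in range(400):
    E = U @ V.T - R                               # residual
    # simultaneous averaged gradient steps on both factor matrices
    U, V = U - lr * (E @ V) / 15, V - lr * (E.T @ U) / 20
l1 = fit_error(U, V)
```

In the paper's setting, the factors would additionally be tied to autoencoder codes of the drug and target side information, which is what mitigates sparsity and cold-start.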


Subjects
Algorithms; Machine Learning; Drug Interactions; Research Design; ROC Curve
6.
Sensors (Basel) ; 23(12)2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37420709

ABSTRACT

In indoor environments, estimating location from a received signal strength indicator (RSSI) is difficult because of the noise from signals reflected and refracted by walls and obstacles. In this study, we used a denoising autoencoder (DAE) to remove noise from the RSSI of Bluetooth Low Energy (BLE) signals to improve localization performance. Moreover, the noise in an RSSI is known to grow roughly in proportion to the square of the distance, which aggravates the signal severely at range. To exploit this characteristic and remove the noise effectively, we propose adaptive noise generation schemes for training the DAE model that reflect how strongly the signal-to-noise ratio (SNR) degrades as the distance between the terminal and beacon increases. We compared the model's performance with that of a model trained with Gaussian noise and with other localization algorithms. The results showed an accuracy of 72.6%, a 10.2% improvement over the model with Gaussian noise. Furthermore, our model outperformed the Kalman filter in terms of denoising.
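The adaptive noise-generation idea — training noise whose magnitude grows with the square of the distance — might look like the following sketch. The log-distance path-loss model and every constant below are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rssi(d, tx=-40.0, n=2.0):
    """Ideal log-distance RSSI (dBm) at distance d metres."""
    return tx - 10.0 * n * np.log10(d)

def corrupt(d, base_sigma=0.5):
    """Adaptive corruption for DAE training: the noise std grows with
    d**2, so far-away beacons yield much noisier samples than near ones."""
    sigma = base_sigma * d ** 2
    return rssi(d) + rng.normal(scale=sigma)

near = [corrupt(1.0) - rssi(1.0) for _ in range(2000)]   # residual noise at 1 m
far = [corrupt(5.0) - rssi(5.0) for _ in range(2000)]    # residual noise at 5 m
```

Training pairs would then be (corrupted, clean) RSSI vectors, with the corruption level matched to each beacon's distance rather than a single global Gaussian.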


Subjects
Algorithms; Biological Phenomena; Signal-To-Noise Ratio; Normal Distribution
7.
Sensors (Basel) ; 23(14)2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37514597

ABSTRACT

Urban intersections are one of the most common sources of traffic congestion. Especially for multiple intersections, an appropriate control method should be able to regulate the traffic flow within the control area. The intersection signal-timing problem is crucial for ensuring efficient traffic operations, with the key issues being the determination of a traffic model and the design of an optimization algorithm. Accordingly, this paper establishes an optimization method for signalized intersections integrating a multi-objective model and an NSGAIII-DAE algorithm. Firstly, the multi-objective model is constructed, including the usual signal control delay and traffic capacity indices. In addition, the conflict delay caused by right-turning vehicles crossing straight-going non-motor vehicles is considered and combined with the proposed algorithm, enabling the traffic model to better balance the traffic efficiency of intersections without adding infrastructure. Secondly, to address the challenges of diversity and convergence faced by the classic NSGA-III algorithm in solving traffic models with high-dimensional search spaces, a denoising autoencoder (DAE) is adopted to learn a compact representation of the original high-dimensional search space. Some genetic operations are performed in the compressed space and then mapped back to the original search space through the DAE. As a result, an appropriate balance between local and global searching can be achieved in each iteration. To validate the proposed method, numerical experiments were conducted using actual traffic data from intersections in Jinzhou, China. The numerical results show that the signal control delay and conflict delay are significantly reduced compared with the existing algorithm, with maximum reductions of 33.7% and 31.3%, respectively. The capacity value obtained by the proposed method is lower than that of the compared algorithm, but it is still 11.5% higher than that of the current scheme. The comparisons and discussions demonstrate the effectiveness of the proposed method for improving the efficiency of signalized intersections.
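Performing genetic operations in a DAE's compressed space and mapping children back can be sketched as follows. A fixed random linear map stands in for the trained encoder/decoder — an assumption made purely for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 30, 5                                  # original and compressed dims
E = rng.normal(size=(D, d)) / np.sqrt(D)      # stand-in for a trained encoder
G = np.linalg.pinv(E)                         # stand-in for the decoder

def encode(x):
    return x @ E

def decode(z):
    return z @ G

def crossover_mutate(x1, x2, eta=0.1):
    """Blend crossover plus Gaussian mutation in the compressed space,
    then map the child back to the original search space."""
    z1, z2 = encode(x1), encode(x2)
    a = rng.random()
    child = a * z1 + (1 - a) * z2 + rng.normal(scale=eta, size=d)
    return decode(child)

p1, p2 = rng.normal(size=D), rng.normal(size=D)
c = crossover_mutate(p1, p2)
```

Because variation happens in a 5-dimensional space rather than a 30-dimensional one, the search concentrates on the manifold the DAE has learned, which is the intended local/global balance.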

8.
Sensors (Basel) ; 23(24)2023 Dec 08.
Article in English | MEDLINE | ID: mdl-38139543

ABSTRACT

Supervisory control and data acquisition (SCADA) systems are widely utilized in power equipment for condition monitoring. The collected data generally suffer from missing values of different types and patterns, which degrades data quality and hampers utilization. To address this problem, this paper proposes a method that combines an asymmetric denoising autoencoder (ADAE) and a moving average filter (MAF) to perform accurate missing-data imputation. First, convolution and gated recurrent units (GRUs) are applied in the encoder of the ADAE, while the decoder still utilizes fully connected layers, forming an asymmetric network structure. The ADAE extracts local periodic and temporal features from monitoring data and then decodes the features to impute the multiple types of missing data. On this basis, exploiting the temporal continuity of power data, the MAF fuses prior knowledge from the neighborhood of each missing point to further refine the imputed data. Case studies reveal that the developed method achieves greater accuracy than existing models. Experiments under different scenarios justify that the MAF-ADAE method applies to actual power equipment monitoring data imputation.
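The MAF refinement stage — blending each imputed point with the moving average of its temporal neighbours — can be sketched in a few lines. The window size and blending weight below are assumptions, not values from the paper:

```python
def maf_refine(series, imputed_idx, window=2, alpha=0.5):
    """Blend each imputed value with the moving average of its
    neighbours, exploiting the temporal continuity of power data."""
    out = list(series)
    for i in imputed_idx:
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        neigh = [series[j] for j in range(lo, hi) if j != i]
        out[i] = alpha * series[i] + (1 - alpha) * sum(neigh) / len(neigh)
    return out

s = [1.0, 2.0, 9.0, 4.0, 5.0]        # index 2 was imputed poorly by the ADAE
refined = maf_refine(s, [2])
```

Only positions flagged as imputed are touched, so observed measurements pass through unchanged.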

9.
Sensors (Basel) ; 23(14)2023 Jul 22.
Article in English | MEDLINE | ID: mdl-37514900

ABSTRACT

Recently, remarkable successes have been achieved in the quality assurance of automotive software systems (ASSs) through the utilization of real-time hardware-in-the-loop (HIL) simulation. The HIL platform enables safe, flexible and reliable realistic simulation during the system development process. However, notwithstanding the test automation capability, large amounts of recording data are generated by HIL test executions. Expert-knowledge-based approaches to analyzing the generated recordings, with the aim of detecting and identifying faults, are costly in terms of time, effort and difficulty. Therefore, in this study, a novel deep-learning-based methodology is proposed so that faults in automotive sensor signals can be efficiently and automatically detected and identified without human intervention. Concretely, a hybrid GRU-based denoising autoencoder (GRU-based DAE) model with the k-means algorithm is developed for the fault-detection and clustering problem in sequential data. By doing so, based on real-time historical data, not only individual faults but also unknown simultaneous faults under noisy conditions can be accurately detected and clustered. The applicability and advantages of the proposed method for the HIL testing process are demonstrated by two automotive case studies: a high-fidelity gasoline engine and vehicle dynamics system, and an entire vehicle model. The superiority of the proposed architecture over other autoencoder variants is shown by its reconstruction error under several noise levels. The validation results indicate that the proposed model achieves high detection and clustering accuracy on unknown faults compared to stand-alone techniques.

10.
Neuroimage ; 263: 119586, 2022 11.
Article in English | MEDLINE | ID: mdl-36031182

ABSTRACT

Electroencephalography (EEG) signals are often contaminated with artifacts. It is imperative to develop a practical and reliable artifact removal method to prevent the misinterpretation of neural signals and the underperformance of brain-computer interfaces. Based on the U-Net architecture, we developed a new artifact removal model, IC-U-Net, for removing pervasive EEG artifacts and reconstructing brain signals. IC-U-Net was trained using mixtures of brain and non-brain components decomposed by independent component analysis. It uses an ensemble of loss functions to model complex signal fluctuations in EEG recordings. The effectiveness of the proposed method in recovering brain activities and removing various artifacts (e.g., eye blinks/movements, muscle activities, and line/channel noise) was demonstrated in a simulation study and four real-world EEG experiments. IC-U-Net can reconstruct a multi-channel EEG signal and is applicable to most artifact types, offering a promising end-to-end solution for automatically removing artifacts from EEG recordings. It also meets the increasing need to image natural brain dynamics in a mobile setting. The code and pre-trained IC-U-Net model are available at https://github.com/roseDwayane/AIEEG.


Subjects
Artifacts; Signal Processing, Computer-Assisted; Humans; Eye Movements; Blinking; Electroencephalography/methods; Algorithms
11.
Sensors (Basel) ; 22(19)2022 Sep 24.
Article in English | MEDLINE | ID: mdl-36236349

ABSTRACT

Errors in microelectromechanical systems (MEMS) inertial measurement units (IMUs) are large, complex, nonlinear, and time varying, so noise-reduction and compensation methods based on conventional models are not applicable. This paper proposes a noise-reduction method based on multi-layer combined deep learning for the MEMS gyroscope in the static base state. In this method, a combined model of the MEMS gyroscope is constructed from a Convolutional Denoising Auto-Encoder (Conv-DAE) and a Multi-layer Temporal Convolutional Network with Attention Mechanism (MultiTCN-Attention). Based on the robust data-processing capability of deep learning, noise features are obtained from past gyroscope data, and optimizing the Kalman filter (KF) parameters with the Particle Swarm Optimization (PSO) algorithm significantly improves the filtering and noise-reduction accuracy. The experimental results show that, compared with the original data, the noise standard deviation of the proposed combined model's filtering decreases by 77.81% and 76.44% on the x and y axes, respectively; compared with the existing MEMS gyroscope noise compensation method based on the Autoregressive Moving Average with Kalman filter (ARMA-KF) model, it decreases by 44.00% and 46.66% on the x and y axes, respectively, reducing the noise impact by nearly three times.
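The Kalman filter whose parameters PSO tunes can be illustrated with a scalar filter on static gyroscope readings. The `q` and `r` values below (process and measurement noise) are exactly the kind of parameters a PSO search would optimize; the values and setup are illustrative, not the paper's:

```python
import numpy as np

def kalman_1d(zs, q, r):
    """Scalar Kalman filter for a near-constant signal (static-base
    gyroscope bias): q is process noise, r is measurement noise."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
true_bias = 0.3                                     # constant gyro bias
zs = true_bias + rng.normal(scale=0.5, size=500)    # noisy static readings
xs = kalman_1d(zs, q=1e-5, r=0.25)
```

A PSO wrapper would evaluate candidate (q, r) pairs by a noise metric on `xs` and keep the best particle; that outer loop is omitted here.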

12.
Sensors (Basel) ; 22(15)2022 Aug 07.
Article in English | MEDLINE | ID: mdl-35957446

ABSTRACT

The heterogeneity of wireless receiving devices, co-channel interference, and the multi-path effect make the received signal strength indication (RSSI) of Wi-Fi fluctuate greatly, which seriously degrades RSSI-based positioning accuracy. Signal strength difference (DIFF), a calibration-free solution for handling the received-signal-strength variance between diverse devices, can effectively reduce the negative impact of signal fluctuation. However, DIFF also causes the RSSI data dimension to explode, expanding the number of dimensions from m to C(m,2) = m(m-1)/2, which reduces positioning efficiency. To this end, we design a data hierarchical processing strategy based on building, floor, and specific location, which effectively improves the efficiency of high-dimensional data processing. Moreover, based on a deep neural network (DNN), we design three different positioning algorithms for multi-building, multi-floor, and specific-location positioning respectively, extending indoor positioning from a single plane to three dimensions. Specifically, in the data preprocessing stage, we first create the original RSSI database. Next, we create the optimized RSSI database by identifying and deleting the unavailable data in the RSSI database. Finally, we perform DIFF processing on the optimized RSSI database to create the DIFF database. In the positioning stage, we first design an improved multi-building positioning algorithm based on a denoising autoencoder (DAE). Secondly, we design an enhanced DNN for multi-floor positioning. Finally, a new deep denoising autoencoder (DDAE) for specific-location positioning is proposed. The experimental results show that the proposed algorithms have better positioning efficiency and accuracy than traditional machine-learning algorithms and current advanced deep-learning algorithms.
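The DIFF expansion from m RSSI readings to C(m,2) pairwise differences is easy to make concrete:

```python
from itertools import combinations

def diff_features(rssi):
    """Pairwise signal-strength differences: m readings expand to
    C(m, 2) = m*(m-1)/2 features — the dimension explosion the
    hierarchical processing strategy is designed to manage."""
    return [a - b for a, b in combinations(rssi, 2)]

sample = [-50.0, -60.0, -55.0, -70.0]   # RSSI from 4 access points (dBm)
feats = diff_features(sample)            # 4 readings -> 6 features
```

Because each feature is a difference, a constant per-device offset in the raw RSSI cancels out, which is what makes DIFF calibration-free.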

13.
Entropy (Basel) ; 24(5)2022 Apr 20.
Article in English | MEDLINE | ID: mdl-35626462

ABSTRACT

In recent decades, emotion recognition has received considerable attention. As enthusiasm has shifted toward physiological patterns, a wide range of elaborate physiological features for emotion data have emerged and been combined with various classification models to detect one's emotional state. To circumvent the labor of artificially designing features, we propose to acquire affective and robust representations automatically through a Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performance of different features and models through three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affection model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed for the deep-learning methods. It turns out that the fused data perform better than either modality alone. To take advantage of deep-learning algorithms, we augment the original data and feed it directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method's practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state of the art for hand-engineered features.

14.
Expert Syst Appl ; 192: 116366, 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-34937995

ABSTRACT

Chest imaging can represent a powerful tool for detecting the Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, it can be difficult to rapidly identify anomalous areas belonging to the COVID-19 disease in CT images by human inspection. Hence, suitable automatic algorithms able to quickly and precisely identify the disease become necessary, ideally using few labeled input data, because large amounts of CT scans are not usually available for the COVID-19 disease. The method proposed in this paper exploits the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on some target CT scans in an unsupervised way, is used to build a robust statistical representation in the form of a target histogram. A suitable statistical distance measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if the distance is greater than a threshold, the test image is labeled an anomaly, i.e. the scan belongs to a patient affected by the COVID-19 disease. Experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, reaching a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.
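The histogram-distance test can be sketched with synthetic stand-ins for the DDCAE's hidden features. The chi-square distance used below is one plausible choice for the unspecified "suitable statistical distance", and all distributions are synthetic:

```python
import numpy as np

def hist(x, bins, value_range):
    """Normalised histogram with a small floor to avoid empty bins."""
    h, _ = np.histogram(x, bins=bins, range=value_range, density=True)
    return h + 1e-12

def chi2_distance(p, q):
    """Chi-square distance between two normalised histograms."""
    return 0.5 * float(np.sum((p - q) ** 2 / (p + q)))

rng = np.random.default_rng(0)
# stand-ins for DDCAE hidden-feature values of target / normal / anomalous scans
target = hist(rng.normal(0, 1, 5000), 32, (-6, 6))
normal = hist(rng.normal(0, 1, 5000), 32, (-6, 6))
anomal = hist(rng.normal(2, 1.5, 5000), 32, (-6, 6))

d_norm = chi2_distance(target, normal)   # small: same distribution
d_anom = chi2_distance(target, anomal)   # large: flags an anomaly
```

A scan is flagged when its distance exceeds a threshold calibrated on held-out normal scans.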

15.
BMC Bioinformatics ; 22(Suppl 3): 415, 2021 Aug 24.
Article in English | MEDLINE | ID: mdl-34429059

ABSTRACT

BACKGROUND: Plant long non-coding RNAs (lncRNAs) play vital roles in many biological processes, mainly through interactions with RNA-binding proteins (RBPs). To understand the function of lncRNAs, a fundamental method is to identify which types of proteins interact with them. However, the models or rules of interaction are a major challenge when calculating and estimating the types of RBP. RESULTS: In this study, we propose an ensemble deep-learning model, named PRPI-SC, to predict plant lncRNA-protein interactions using a stacked denoising autoencoder and a convolutional neural network based on sequence and structural information. PRPI-SC predicts interactions between lncRNAs and proteins based on the k-mer features of the RNAs and proteins. Experiments showed good results on the Arabidopsis thaliana and Zea mays datasets (ATH948 and ZEA22133), with accuracy rates of 88.9% and 82.6%, respectively. PRPI-SC also performed well on some public RNA-protein interaction datasets. CONCLUSIONS: PRPI-SC accurately predicts interactions between plant lncRNAs and proteins, which plays a guiding role in studying the function and expression of plant lncRNAs. At the same time, PRPI-SC has strong generalization ability and good prediction performance on non-plant data.
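The k-mer features such models derive from sequences can be sketched as a normalised frequency vector. The RNA alphabet and k = 3 are assumptions for illustration:

```python
from itertools import product

def kmer_freq(seq, k=3, alphabet="ACGU"):
    """Normalised k-mer frequency vector for an RNA sequence: one entry
    per possible k-mer over the alphabet, in lexicographic order."""
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        window = seq[i:i + k]
        if window in counts:              # skip windows with unknown bases
            counts[window] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

v = kmer_freq("ACGUACGU", k=3)            # 4**3 = 64 features
```

The resulting fixed-length vectors are what a stacked denoising autoencoder can then compress into robust low-dimensional representations.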


Subjects
Deep Learning; RNA, Long Noncoding; Computational Biology; Neural Networks, Computer; RNA, Long Noncoding/genetics; RNA-Binding Proteins
16.
BMC Bioinformatics ; 22(1): 204, 2021 Apr 20.
Article in English | MEDLINE | ID: mdl-33879050

ABSTRACT

BACKGROUND: Drug-target interaction (DTI) plays a vital role in drug discovery. Identifying drug-target interactions through wet-lab experiments is costly, laborious, and time-consuming. Therefore, computational methods to predict drug-target interactions are an essential task in the drug-discovery process; they can also reduce the search space by proposing potential drugs for validation in wet-lab experiments. Recently, deep-learning-based methods for drug-target interaction prediction have received more attention. Traditionally, the performance of DTI prediction methods depends heavily on additional information, such as the protein sequence and the molecular structure of the drug, and on deep supervised learning. RESULTS: This paper proposes a method based on deep unsupervised learning for drug-target interaction prediction called AutoDTI++. The proposed method consists of three steps. The first step pre-processes the interaction matrix: since the matrix is sparse, we address its sparsity with drug fingerprints. In the second step, the AutoDTI approach is introduced. In the third step, we post-process the output of the AutoDTI model. CONCLUSIONS: Experimental results show that we were able to improve prediction performance. To this end, the proposed method was compared to other algorithms on the same reference datasets. Running five repetitions of tenfold cross-validation on gold-standard datasets (Nuclear Receptors, GPCRs, Ion Channels, and Enzymes) achieved good performance with high accuracy.


Subjects
Drug Development; Unsupervised Machine Learning; Algorithms; Amino Acid Sequence; Drug Discovery
17.
Genomics ; 112(4): 2833-2841, 2020 07.
Article in English | MEDLINE | ID: mdl-32234433

ABSTRACT

Gene expression analysis plays a significant role in providing molecular insights into cancer. Various genetic and epigenetic factors (dealt with under multi-omics) affect gene expression, giving rise to cancer phenotypes. Recent growth in the understanding of multi-omics provides a resource for interdisciplinary integration, since together these data can draw a comprehensive picture of an organism's developmental and disease biology in cancer. Such large-scale multi-omics data can be obtained from public consortia like The Cancer Genome Atlas (TCGA) and several other platforms. Integrating these multi-omics data from varied platforms is still challenging due to high noise and the sensitivity of the platforms used. Currently, a robust integrative predictive model to estimate gene expression from these genetic and epigenetic data is lacking. In this study, we have developed a deep-learning-based predictive model using a Deep Denoising Auto-encoder (DDAE) and a Multi-layer Perceptron (MLP) that can quantitatively capture how genetic and epigenetic alterations correlate with the directionality of gene expression for liver hepatocellular carcinoma (LIHC). The DDAE has been trained to extract significant features from the input omics data to estimate gene expression. These features have then been used for back-propagation learning by the multi-layer perceptron for the tasks of regression and classification. We have benchmarked the proposed model against state-of-the-art regression models. Finally, the deep-learning-based integration model has been evaluated for its disease-classification capability, where an accuracy of 95.1% was obtained.


Subjects
DNA Copy Number Variations; DNA Methylation; Deep Learning; RNA-Seq; Carcinoma, Hepatocellular/genetics; Epigenomics; Genomics; Linear Models; Liver Neoplasms/genetics; Transcriptome
18.
Sensors (Basel) ; 21(8)2021 Apr 13.
Article in English | MEDLINE | ID: mdl-33924305

ABSTRACT

Wi-Fi based localization has become one of the most practical methods for mobile users in location-based services. However, due to multipath interference and the high-dimensional sparseness of fingerprint data, it is hard for localization systems based on received signal strength (RSS) to obtain high accuracy. In this paper, we propose a novel indoor positioning method named JLGBMLoc (Joint denoising auto-encoder with LightGBM Localization). Firstly, because noise and outliers may influence dimensionality reduction on high-dimensional sparse fingerprint data, we propose a novel feature-extraction algorithm named joint denoising auto-encoder (JDAE), which reconstructs the sparse fingerprint data for a better feature representation and restores the fingerprint data. Then, LightGBM is introduced to Wi-Fi localization by scattering the processed fingerprint data into histograms and growing the decision tree leaf-wise with a depth limitation. Finally, we evaluated the proposed JLGBMLoc on the UJIIndoorLoc dataset and the Tampere dataset; the experimental results show that the proposed model increases positioning accuracy dramatically compared with other existing methods.

19.
Sensors (Basel) ; 21(4)2021 Feb 05.
Article in English | MEDLINE | ID: mdl-33562754

ABSTRACT

WiFi is widely used for indoor positioning because of its advantages such as long transmission distance and ease of use indoors. To improve the accuracy and robustness of indoor WiFi fingerprint localization technology, this paper proposes a positioning system, CCPos (CDAE-CNN Positioning), based on a convolutional denoising autoencoder (CDAE) and a convolutional neural network (CNN). In the offline stage, the system applies the K-means algorithm to extract a validation set from the full training set. In the online stage, the RSSI is first denoised and key features are extracted by the CDAE; the location estimate is then output by the CNN. The Alcala Tutorial 2017 and UJIIndoorLoc datasets are adopted to verify the performance of the CCPos system. The experimental results show that our system has excellent noise immunity and generalization performance. The mean positioning errors on the Alcala Tutorial 2017 dataset and UJIIndoorLoc are 1.05 m and 12.4 m, respectively.

20.
Sensors (Basel) ; 21(15)2021 Jul 23.
Article in English | MEDLINE | ID: mdl-34372256

ABSTRACT

For subjects with amyotrophic lateral sclerosis (ALS), verbal and nonverbal communication is greatly impaired. Steady-state visually evoked potential (SSVEP)-based brain-computer interfaces (BCIs) are one of the successful augmentative and alternative communication aids that help subjects with ALS communicate with others or with devices. For practical applications, the performance of SSVEP-based BCIs is severely reduced by the effects of noise, so developing robust SSVEP-based BCIs is very important. In this study, noise-suppression-based feature extraction and a deep neural network are proposed to develop a robust SSVEP-based BCI. To suppress the effects of noise, a denoising autoencoder is proposed to extract denoised features. To obtain recognition results acceptable for practical applications, the deep neural network is used to produce the decision results of the SSVEP-based BCI. The experimental results showed that the proposed approaches can effectively suppress the effects of noise and greatly improve the performance of SSVEP-based BCIs. In addition, the deep neural network outperformed other approaches. Therefore, the proposed robust SSVEP-based BCI is very useful for practical applications.


Subjects
Brain-Computer Interfaces; Electroencephalography; Evoked Potentials; Evoked Potentials, Visual; Humans; Photic Stimulation