Results 1 - 20 of 90
1.
BMC Bioinformatics; 25(1): 332, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39407120

ABSTRACT

BACKGROUND: Long non-coding RNAs (lncRNAs) are involved in the prevention, diagnosis, and treatment of a variety of complex human diseases, so it is crucial to establish a method to efficiently predict lncRNA-disease associations. RESULTS: In this paper, we propose a prediction method for lncRNA-disease associations, named LDAGM, which is based on the Graph Convolutional Autoencoder and Multilayer Perceptron model. The method first extracts the functional similarity and Gaussian interaction profile kernel similarity of lncRNAs and miRNAs, as well as the semantic similarity and Gaussian interaction profile kernel similarity of diseases. It then constructs six homogeneous networks and deeply fuses them using a deep topology feature extraction method. The fused networks facilitate feature complementation and deep mining of the original association relationships, capturing the deep connections between nodes. Next, by combining the obtained deep topological features with the similarity networks of lncRNA, disease, and miRNA interactions, we construct a multi-view heterogeneous network model. The Graph Convolutional Autoencoder is employed for nonlinear feature extraction. Finally, the extracted nonlinear features are combined with the deep topological features of the multi-view heterogeneous network to obtain the final feature representation of the lncRNA-disease pair. Prediction of lncRNA-disease associations is performed using the Multilayer Perceptron model. To enhance the performance and stability of the Multilayer Perceptron, we introduce a hidden layer called the aggregation layer. Through a gate mechanism, it controls the flow of information between the hidden layers, aiming to achieve optimal feature extraction from each one.
CONCLUSIONS: Parameter analysis, ablation studies, and comparison experiments verified the effectiveness of this method, and case studies verified the accuracy of this method in predicting lncRNA-disease association relationships.
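The aggregation layer's gate mechanism can be sketched roughly as below (a minimal NumPy sketch; the paper gives no implementation details, so the dimensions, the per-layer scalar gates, and all function names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_mlp_forward(x, layer_weights, gate_params):
    # Run the stacked hidden layers, keeping every intermediate representation.
    hidden_states = []
    h = x
    for W in layer_weights:
        h = relu(h @ W)
        hidden_states.append(h)
    # Aggregation layer: a sigmoid gate per hidden layer controls how much of
    # that layer's output flows into the final aggregated representation.
    gates = sigmoid(np.asarray(gate_params, dtype=float))
    return sum(g * h for g, h in zip(gates, hidden_states))

dim = 8  # toy feature dimension, not from the paper
layer_weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(3)]
x = rng.standard_normal(dim)
aggregated = gated_mlp_forward(x, layer_weights, gate_params=[0.2, -0.5, 1.0])
```

The gate values (here learnable scalars squashed through a sigmoid) would be trained jointly with the layer weights in the real model.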


Subjects
Neural Networks, Computer; RNA, Long Noncoding; RNA, Long Noncoding/genetics; Humans; Computational Biology/methods; MicroRNAs/genetics; Algorithms
2.
Comput Biol Med; 183: 109243, 2024 Oct 05.
Article in English | MEDLINE | ID: mdl-39369548

ABSTRACT

OBJECTIVE: Kidney failure manifests in various forms, from sudden occurrences such as Acute Kidney Injury (AKI) to progressive conditions such as Chronic Kidney Disease (CKD). Given its intricate nature, marked by overlapping comorbidities and clinical similarities, including treatment modalities like dialysis, we sought to design and validate an end-to-end framework for clustering kidney failure subtypes. MATERIALS AND METHODS: Our emphasis was on dialysis, utilizing a comprehensive dataset from the UK Biobank (UKB). We transformed raw Electronic Health Record (EHR) data into standardized matrices that incorporate patient demographics, clinical visit data, and the innovative feature of visit time-gaps. This matrix structure was achieved using a unique data cutting method. Latent space transformation was performed with a convolutional autoencoder (ConvAE) model, and the resulting latent space was clustered using Principal Component Analysis (PCA) and the K-means algorithm. RESULTS: Our transformation model effectively reduced data dimensionality, thereby accelerating computational processes. The derived latent space demonstrated remarkable clustering capacity. Through cluster analysis, two distinct groups were identified: a CKD-majority group (cluster 1) and a mixed group of non-CKD and some CKD subtypes (cluster 0). Cluster 1 exhibited notably low survival probability, suggesting it predominantly represented severe CKD. In contrast, cluster 0, with substantially higher survival probability, likely included milder CKD forms and severe AKI. Our end-to-end framework effectively differentiates kidney failure subtypes using the UKB dataset, offering potential for nuanced therapeutic interventions. CONCLUSIONS: This innovative approach integrates diverse data sources, providing a holistic understanding of kidney failure, which is imperative for patient management and targeted therapeutic interventions.
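The clustering stage (latent space, then PCA, then K-means) can be sketched as follows; the latent matrix here is synthetic stand-in data, and the deterministic K-means initialisation is a simplification for the toy example, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the ConvAE latent space: two well-separated groups,
# mimicking a CKD-majority cluster and a mixed non-CKD/AKI cluster.
latent = np.vstack([rng.normal(0.0, 0.3, size=(50, 10)),
                    rng.normal(3.0, 0.3, size=(50, 10))])

def pca_project(X, n_components):
    # PCA via SVD of the mean-centred data matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, init_centers, iters=20):
    # Plain Lloyd iterations: assign to nearest centre, recompute centres.
    centers = init_centers.copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels

proj = pca_project(latent, 2)
# Deterministic toy initialisation: one seed point from each end of the data.
labels = kmeans(proj, proj[[0, -1]])
```

With clearly separated latent groups, the pipeline recovers the two clusters exactly.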

3.
Appl Spectrosc; 37028241268279, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39091033

ABSTRACT

A new optical diagnostic method that predicts the global fuel-air equivalence ratio of a swirl combustor using absorption spectra from only three optical paths is proposed here. Under normal operation, the global equivalence ratio and total flow rate determine the temperature and concentration fields of the combustor, which subsequently determine the absorption spectra of any combustion species. Therefore, the spectra, as the fingerprint of a produced combustion field, were employed in this study to predict the global equivalence ratio, one of the key operational parameters. Specifically, absorption spectra of water vapor at wavenumbers around 7444.36, 7185.6, and 6805.6 cm⁻¹, measured at three different downstream locations of the combustor, were used to predict the global equivalence ratio. As it is difficult to find analytical relationships between the spectra and the produced combustion fields, the predictive model was obtained in a data-driven manner. The absorption spectra used as input were first feature-extracted through stacked convolutional autoencoders, and a dense neural network was then used for regression between the feature scores and the global equivalence ratio. The model could predict the equivalence ratio with an absolute error of ±0.025 with a probability of 96%, and a gradient-weighted regression activation mapping analysis revealed that the model leverages not only the peak intensities but also the variations in the shape of the absorption lines for its predictions.
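As a rough illustration of the idea that spectra fingerprint the combustion field, the sketch below recovers an equivalence ratio from a toy three-channel "spectrum" by nearest-neighbour lookup; the forward model and every number in it are invented, and the lookup merely stands in for the paper's autoencoder-plus-dense-network regressor:

```python
import numpy as np

rng = np.random.default_rng(8)

def toy_spectrum(phi, noise=0.0):
    # Invented forward model: three 'optical path' readings varying smoothly
    # with the equivalence ratio phi (nothing physical about these functions).
    base = np.array([np.sin(2 * phi), np.cos(3 * phi), phi ** 2])
    return base + rng.normal(0.0, noise, 3)

# Library of noise-free reference spectra over a plausible phi range.
train_phi = np.linspace(0.6, 1.2, 61)
train_spec = np.array([toy_spectrum(p) for p in train_phi])

def predict_phi(spec):
    # Nearest-neighbour regression as a stand-in for the learned model.
    d = np.linalg.norm(train_spec - spec, axis=1)
    return train_phi[d.argmin()]

phi_true = 0.87
phi_hat = predict_phi(toy_spectrum(phi_true, noise=0.005))
```

Because the toy spectrum varies monotonically enough with phi, even this crude lookup recovers the ratio to within the grid spacing.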

4.
Magn Reson Med; 92(6): 2404-2419, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39030953

ABSTRACT

PURPOSE: To develop a SNR enhancement method for CEST imaging using a denoising convolutional autoencoder (DCAE) and compare its performance with state-of-the-art denoising methods. METHOD: The DCAE-CEST model encompasses an encoder and a decoder network. The encoder learns features from the input CEST Z-spectrum via a series of one-dimensional convolutions, nonlinearity applications, and pooling. Subsequently, the decoder reconstructs an output denoised Z-spectrum using a series of up-sampling and convolution layers. The DCAE-CEST model underwent multistage training in an environment constrained by Kullback-Leibler divergence, while ensuring data adaptability through context learning using Principal Component Analysis-processed Z-spectrum as a reference. The model was trained using simulated Z-spectra, and its performance was evaluated using both simulated data and in vivo data from an animal tumor model. Maps of amide proton transfer (APT) and nuclear Overhauser enhancement (NOE) effects were quantified using the multiple-pool Lorentzian fit, along with an apparent exchange-dependent relaxation metric. RESULTS: In digital phantom experiments, the DCAE-CEST method exhibited superior performance, surpassing existing denoising techniques, as indicated by the peak SNR and Structural Similarity Index. Additionally, in vivo data further confirm the effectiveness of the DCAE-CEST in denoising the APT and NOE maps when compared with other methods. Although no significant difference was observed in APT between tumors and normal tissues, there was a significant difference in NOE, consistent with previous findings. CONCLUSION: The DCAE-CEST can learn the most important features of the CEST Z-spectrum and provide the most effective denoising solution compared with other methods.
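The denoising-autoencoder setup rests on training pairs of corrupted and clean Z-spectra. A minimal sketch of building one such pair, assuming a toy three-pool Lorentzian Z-spectrum (the pool positions, widths, and amplitudes are illustrative, not the paper's simulation parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

def lorentzian(offsets, center, width, amp):
    # Standard Lorentzian line shape used in multiple-pool CEST fitting.
    return amp / (1.0 + ((offsets - center) / (width / 2.0)) ** 2)

# Toy simulated Z-spectrum: water pool at 0 ppm, an APT-like pool near
# +3.5 ppm, and an NOE-like pool near -3.5 ppm (illustrative values only).
offsets = np.linspace(-6, 6, 128)
z_clean = 1.0 - (lorentzian(offsets, 0.0, 2.0, 0.8)
                 + lorentzian(offsets, 3.5, 1.5, 0.05)
                 + lorentzian(offsets, -3.5, 2.5, 0.1))

# Denoising-autoencoder training pair: corrupted input, clean target.
z_noisy = z_clean + rng.normal(0.0, 0.02, size=z_clean.shape)

def psnr(ref, est):
    # Peak SNR, the quality metric quoted in the abstract.
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

The network would be trained to map `z_noisy` back to `z_clean`, and PSNR of its output against the clean spectrum measures the denoising gain.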


Subjects
Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Phantoms, Imaging; Signal-To-Noise Ratio; Animals; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Mice; Algorithms; Principal Component Analysis
5.
Sensors (Basel); 24(14), 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066058

ABSTRACT

Deep learning (DL) models require enormous amounts of data to produce reliable diagnosis results. The superiority of DL models over traditional machine learning (ML) methods in terms of feature extraction, feature dimension reduction, and diagnosis performance has been shown in various studies of fault diagnosis systems. However, data acquisition can sometimes be compromised by sensor issues, resulting in limited data samples. In this study, we propose a novel DL model based on a stacked convolutional autoencoder (SCAE) to address the challenge of limited data. The innovation of the SCAE model lies in its ability to enhance gradient information flow and extract richer hierarchical features, leading to superior diagnostic performance even with limited and noisy data samples. This article describes the development of a fault diagnosis method for a hydraulic piston pump using time-frequency visual pattern recognition. The proposed SCAE model has been evaluated on limited data samples of a hydraulic piston pump. The findings of the experiment demonstrate that the suggested approach can achieve excellent diagnostic performance with over 99.5% accuracy. Additionally, the SCAE model has outperformed traditional DL models such as deep neural networks (DNN), standard stacked sparse autoencoders (SSAE), and convolutional neural networks (CNN) in terms of diagnosis performance. Furthermore, the proposed model demonstrates robust performance under noisy data conditions, further highlighting its effectiveness and reliability.

6.
Bioengineering (Basel); 11(6), 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38927822

ABSTRACT

Respiratory diseases are among the leading causes of death, with many individuals in a population frequently affected by various types of pulmonary disorders. Early diagnosis and patient monitoring (traditionally involving lung auscultation) are essential for the effective management of respiratory diseases. However, the interpretation of lung sounds is a subjective and labor-intensive process that demands considerable medical expertise and carries a substantial risk of misclassification. To address this problem, we propose a hybrid deep learning technique that incorporates signal processing techniques. Parallel transformation is applied to adventitious respiratory sounds, transforming lung sound signals into two distinct time-frequency scalograms: the continuous wavelet transform and the mel spectrogram. Furthermore, parallel convolutional autoencoders are employed to extract features from the scalograms, and the resulting latent-space features are fused into a hybrid feature pool. Finally, the fused latent-space features are fed to a long short-term memory model to classify various types of respiratory diseases. Our work is evaluated using the ICBHI-2017 lung sound dataset. The experimental findings indicate that our proposed method achieves promising predictive performance, with average values for accuracy, sensitivity, specificity, and F1-score of 94.16%, 89.56%, 99.10%, and 89.56%, respectively, for eight-class respiratory diseases; 79.61%, 78.55%, 92.49%, and 78.67%, respectively, for four-class diseases; and 85.61%, 83.44%, 83.44%, and 84.21%, respectively, for binary-class (normal vs. abnormal) lung sounds.
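A minimal stand-in for the time-frequency transformation step is sketched below using a short-time FFT magnitude spectrogram (the paper uses CWT and mel scalograms; the window and hop sizes here are arbitrary):

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    # Magnitude spectrogram via a Hann-windowed short-time FFT: a minimal
    # stand-in for the CWT / mel scalograms described in the abstract.
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time frames)

# 200 Hz tone sampled at 1 kHz: the energy should sit in one frequency band.
fs, f0 = 1000, 200
t = np.arange(fs) / fs
spec = stft_mag(np.sin(2 * np.pi * f0 * t))
peak_bin = spec.mean(axis=1).argmax()  # bin spacing is fs/win = 15.625 Hz
```

Each such 2-D scalogram would then be fed to one of the parallel convolutional autoencoders for latent feature extraction.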

7.
bioRxiv; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38895366

ABSTRACT

Purpose: To develop a SNR enhancement method for chemical exchange saturation transfer (CEST) imaging using a denoising convolutional autoencoder (DCAE), and compare its performance with state-of-the-art denoising methods. Method: The DCAE-CEST model encompasses an encoder and a decoder network. The encoder learns features from the input CEST Z-spectrum via a series of 1D convolutions, nonlinearity applications and pooling. Subsequently, the decoder reconstructs an output denoised Z-spectrum using a series of up-sampling and convolution layers. The DCAE-CEST model underwent multistage training in an environment constrained by Kullback-Leibler divergence, while ensuring data adaptability through context learning using Principal Component Analysis processed Z-spectrum as a reference. The model was trained using simulated Z-spectra, and its performance was evaluated using both simulated data and in-vivo data from an animal tumor model. Maps of amide proton transfer (APT) and nuclear Overhauser enhancement (NOE) effects were quantified using the multiple-pool Lorentzian fit, along with an apparent exchange-dependent relaxation metric. Results: In digital phantom experiments, the DCAE-CEST method exhibited superior performance, surpassing existing denoising techniques, as indicated by the peak SNR and Structural Similarity Index. Additionally, in vivo data further confirms the effectiveness of the DCAE-CEST in denoising the APT and NOE maps when compared to other methods. While no significant difference was observed in APT between tumors and normal tissues, there was a significant difference in NOE, consistent with previous findings. Conclusion: The DCAE-CEST can learn the most important features of the CEST Z-spectrum and provide the most effective denoising solution compared to other methods.

8.
Int J Neural Syst; 34(8): 2450040, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38753012

ABSTRACT

Neonatal epilepsy is a common emergency in neonatal intensive care units (NICUs), which requires timely attention, early identification, and treatment. Traditional detection methods mostly use supervised learning with enormous amounts of labeled data. Hence, this study offers a semi-supervised hybrid architecture for detecting seizures, which combines an extracted electroencephalogram (EEG) feature dataset with a convolutional autoencoder, called Fd-CAE. First, various features in the time domain and entropy domain are extracted to characterize the EEG signal, which helps distinguish epileptic seizures subsequently. Then, the unlabeled EEG features are fed into the convolutional autoencoder (CAE) for training, which effectively represents EEG features by optimizing the loss between the input and output features. This unsupervised feature learning process can better combine and optimize EEG features from unlabeled data. After that, the pre-trained encoder part of the model is used for further feature learning on labeled data to obtain a low-dimensional feature representation and achieve classification. The model was evaluated on the neonatal EEG dataset collected at the University of Helsinki Hospital and shows high discriminative ability for seizure detection, with an accuracy of 92.34%, precision of 93.61%, recall of 98.74%, and F1-score of 95.77%. The results show that unsupervised learning by the CAE is beneficial for the characterization of EEG signals, and the proposed Fd-CAE method significantly improves classification performance.
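The time-domain and entropy-domain feature extraction step might look roughly like this (a sketch with hypothetical feature choices; the paper does not list its exact feature set, and the white-noise signal stands in for a real EEG window):

```python
import numpy as np

rng = np.random.default_rng(3)

def shannon_entropy(signal, bins=16):
    # Histogram-based Shannon entropy: one candidate entropy-domain feature.
    hist, _ = np.histogram(signal, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def time_domain_features(signal):
    # Three simple time-domain descriptors: mean, spread, mean line length.
    return np.array([signal.mean(),
                     signal.std(),
                     np.abs(np.diff(signal)).mean()])

eeg = rng.standard_normal(1000)  # white-noise stand-in for one EEG window
feats = np.concatenate([time_domain_features(eeg),
                        [shannon_entropy(eeg)]])
```

In the Fd-CAE pipeline, such per-window feature vectors (rather than raw samples) are what the convolutional autoencoder learns to reconstruct.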


Subjects
Electroencephalography; Seizures; Humans; Electroencephalography/methods; Infant, Newborn; Seizures/diagnosis; Seizures/physiopathology; Signal Processing, Computer-Assisted; Deep Learning; Unsupervised Machine Learning; Neural Networks, Computer
9.
Brief Bioinform; 25(2), 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38324624

ABSTRACT

Connections between circular RNAs (circRNAs) and microRNAs (miRNAs) play a pivotal role in the onset, evolution, diagnosis, and treatment of diseases and tumors. Selecting the most promising circRNA-related miRNAs and using them as biomarkers or drug targets could help address complex human diseases through preventive strategies, diagnostic procedures, and therapeutic approaches. Compared to traditional biological experiments, leveraging computational models to integrate diverse biological data in order to infer potential associations proves to be a more efficient and cost-effective approach. This paper developed a model of Convolutional Autoencoder for CircRNA-MiRNA Associations (CA-CMA) prediction. Initially, this model merged the natural language characteristics of the circRNA and miRNA sequences with the features of circRNA-miRNA interactions. Subsequently, it utilized all circRNA-miRNA pairs to construct a molecular association network, which was then fine-tuned on labeled samples to optimize the network parameters. Finally, the prediction outcome is obtained by utilizing a deep neural network classifier. This model innovatively combines a likelihood objective that preserves the neighborhood through optimization, to learn continuous feature representations of words and preserve the spatial information of two-dimensional signals. During 5-fold cross-validation, CA-CMA exhibited exceptional performance compared to numerous prior computational approaches, as evidenced by its mean area under the receiver operating characteristic curve of 0.9138 and a minimal SD of 0.0024. Furthermore, recent literature has confirmed the accuracy of 25 out of the top 30 circRNA-miRNA pairs identified with the highest CA-CMA scores during case studies. The results of these experiments highlight the robustness and versatility of our model.
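The "natural language characteristics" of a sequence are typically obtained by splitting it into overlapping k-mer "words" before word-embedding-style learning; a sketch (the choice k = 3 is an assumption, not stated in the abstract):

```python
def kmer_tokens(seq, k=3):
    # Split an RNA sequence into overlapping k-mers, the usual way sequence
    # data is turned into 'words' for word-embedding-style feature learning.
    seq = seq.upper().replace("U", "T")  # normalise RNA to DNA alphabet
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

tokens = kmer_tokens("AUGGCUACGU")
```

Each token would then be mapped to a continuous vector by a skip-gram-style embedding, giving the sequence-level features the model merges with interaction features.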


Subjects
MicroRNAs; Neoplasms; Humans; MicroRNAs/genetics; RNA, Circular/genetics; Likelihood Functions; Neural Networks, Computer; Neoplasms/genetics; Computational Biology/methods
10.
J Imaging Inform Med; 37(1): 412-427, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38343221

ABSTRACT

This paper presents a fully automated pipeline using a sparse convolutional autoencoder for quality control (QC) of affine registrations in large-scale T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) studies. Here, a customized 3D convolutional encoder-decoder (autoencoder) framework is proposed, and the network is trained in a fully unsupervised manner. For cross-validating the proposed model, we used 1000 correctly aligned MRI images of the human connectome project young adult (HCP-YA) dataset. We propose that the quality of the registration is proportional to the reconstruction error of the autoencoder. Further, to make this method applicable to unseen datasets, we propose a dataset-specific optimal threshold calculation (using the reconstruction error) from ROC analysis, which requires a subset of correctly aligned images and artificially generated misalignments specific to that dataset. The calculated optimal threshold is then used for testing the quality of the remaining affine registrations from the corresponding dataset. The proposed framework was tested on four unseen datasets: autism brain imaging data exchange (ABIDE I, 215 subjects), information eXtraction from images (IXI, 577 subjects), Open Access Series of Imaging Studies (OASIS4, 646 subjects), and the "Food and Brain" study (77 subjects). The framework achieved excellent performance for T1w and T2w affine registrations, with an accuracy of 100% for HCP-YA. Further, we evaluated the generality of the model on the four unseen datasets and obtained accuracies of 81.81% for ABIDE I (only T1w), 93.45% (T1w) and 81.75% (T2w) for OASIS4, 92.59% for the "Food and Brain" study (only T1w), and in the range of 88-97% for IXI (for both T1w and T2w, stratified by scanner vendor and magnetic field strength). Moreover, the real failures from the "Food and Brain" and OASIS4 datasets were detected with sensitivities of 100% and 80% for T1w and T2w, respectively. In addition, AUCs of > 0.88 were obtained in all scenarios during threshold calculation on the four test sets.
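The dataset-specific threshold selection can be sketched as below: reconstruction errors for correctly aligned and artificially misaligned images are compared, and the cutoff maximizing Youden's J (TPR - FPR) is chosen. The error distributions here are synthetic, and Youden's J is one common ROC criterion, not necessarily the paper's exact choice:

```python
import numpy as np

rng = np.random.default_rng(4)

# Reconstruction errors: correctly aligned scans low, synthetic misalignments high.
err_aligned = rng.normal(0.10, 0.02, 200)
err_misaligned = rng.normal(0.30, 0.05, 200)

errors = np.concatenate([err_aligned, err_misaligned])
is_misaligned = np.concatenate([np.zeros(200, bool), np.ones(200, bool)])

def optimal_threshold(scores, labels):
    # Scan every observed score as a candidate cutoff and keep the one
    # maximizing Youden's J = TPR - FPR.
    best_t, best_j = None, -1.0
    for t in np.sort(scores):
        pred = scores >= t
        tpr = (pred & labels).sum() / labels.sum()
        fpr = (pred & ~labels).sum() / (~labels).sum()
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t

t = optimal_threshold(errors, is_misaligned)
```

Registrations whose reconstruction error exceeds `t` would be flagged for review.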

11.
Sci Rep; 14(1): 4154, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38378845

ABSTRACT

A key challenge in quantum photonics today is the efficient and on-demand generation of high-quality single photons and entangled photon pairs. In this regard, one of the most promising types of emitters are semiconductor quantum dots, fluorescent nanostructures also described as artificial atoms. The main technological challenge in upscaling to an industrial level is the typically random spatial and spectral distribution in their growth. Furthermore, depending on the intended application, different requirements are imposed on a quantum dot, which are reflected in its spectral properties. Given that an in-depth suitability analysis is lengthy and costly, it is common practice to pre-select promising candidate quantum dots using their emission spectra. Currently, this is done by hand. Therefore, to automate and expedite this process, in this paper, we propose a data-driven machine-learning-based method of evaluating the applicability of a semiconductor quantum dot as a single-photon source. For this, first, a minimally redundant but maximally relevant feature representation for quantum dot emission spectra is derived by combining conventional spectral analysis with an autoencoding convolutional neural network. The obtained feature vector is subsequently used as input to a neural network regression model, which is specifically designed to return not only a rating score, gauging the technical suitability of a quantum dot, but also a measure of confidence for its evaluation. For training and testing, a large dataset of self-assembled InAs/GaAs semiconductor quantum dot emission spectra is used, partially labelled by a team of experts in the field. Overall, highly convincing results are achieved, as quantum dots are reliably evaluated correctly. Note that the presented methodology can account for different spectral requirements and is applicable regardless of the underlying photonic structure, fabrication method, and material composition. We therefore consider it the first step towards a fully integrated evaluation framework for quantum dots, demonstrating the benefit of machine learning in the advancement of future quantum technologies.

12.
Sensors (Basel); 24(3), 2024 Jan 28.
Article in English | MEDLINE | ID: mdl-38339571

ABSTRACT

This paper proposes a new fault diagnosis method for centrifugal pumps by combining signal processing with deep learning techniques. Centrifugal pumps facilitate fluid transport through the energy generated by the impeller. Throughout the operation, variations in the fluid pressure at the pump's inlet may impact the generalization of traditional machine learning models trained on raw statistical features. To address this concern, first, vibration signals are collected from centrifugal pumps, followed by the application of a lowpass filter to isolate frequencies indicative of faults. These signals are then subjected to a continuous wavelet transform and Stockwell transform, generating two distinct time-frequency scalograms. The Sobel filter is employed to further highlight essential features within these scalograms. For feature extraction, this approach employs two parallel convolutional autoencoders, each tailored for a specific scalogram type. Subsequently, extracted features are merged into a unified feature pool, which forms the basis for training a two-layer artificial neural network, with the aim of achieving accurate fault classification. The proposed method is validated using three distinct datasets obtained from the centrifugal pump under varying inlet fluid pressures. The results demonstrate classification accuracies of 100%, 99.2%, and 98.8% for each dataset, surpassing the accuracies achieved by the reference comparison methods.
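The Sobel-filter step used to highlight scalogram features can be sketched directly (a naive valid-mode 2-D convolution applied to a toy image with one vertical edge; a real implementation would use an optimized convolution routine):

```python
import numpy as np

# Standard horizontal-gradient Sobel kernel; its transpose gives the vertical one.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d_valid(img, kernel):
    # Naive valid-mode 2-D correlation, enough for a 3x3 kernel demo.
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(scalogram):
    gx = conv2d_valid(scalogram, SOBEL_X)
    gy = conv2d_valid(scalogram, SOBEL_X.T)
    return np.hypot(gx, gy)  # gradient magnitude

# Toy 'scalogram' with a vertical intensity step at column 8.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = sobel_edges(img)
```

The gradient magnitude responds only along the step, which is exactly the edge-emphasis the pipeline feeds to the parallel convolutional autoencoders.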

13.
Sensors (Basel); 23(22), 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-38005598

ABSTRACT

Predictive maintenance is considered a proactive approach that capitalizes on advanced sensing technologies and data analytics to anticipate potential equipment malfunctions, enabling cost savings and improved operational efficiency. For journal bearings, predictive maintenance assumes critical significance due to the inherent complexity and vital role of these components in mechanical systems. The primary objective of this study is to develop a data-driven methodology for indirectly determining the wear condition by leveraging experimentally collected vibration data. To accomplish this goal, a novel experimental procedure was devised to expedite wear formation on journal bearings. Seventeen bearings were tested and the collected sensor data were employed to evaluate the predictive capabilities of various sensors and mounting configurations. The effects of different downsampling methods and sampling rates on the sensor data were also explored within the framework of feature engineering. The downsampled sensor data were further processed using convolutional autoencoders (CAEs) to extract a latent state vector, which was found to exhibit a strong correlation with the wear state of the bearing. Remarkably, the CAE, trained on unlabeled measurements, demonstrated an impressive performance in wear estimation, achieving an average Pearson coefficient of 91% in four different experimental configurations. In essence, the proposed methodology facilitated an accurate estimation of the wear of the journal bearings, even when working with a limited amount of labeled data.
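The reported correlation between the latent state and wear can be illustrated with a toy Pearson-coefficient computation (the wear and latent series below are synthetic; in the paper the latent value comes from the trained CAE and the wear from measurements):

```python
import numpy as np

rng = np.random.default_rng(5)

def pearson(a, b):
    # Pearson correlation coefficient of two 1-D series.
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Toy stand-in: a scalar latent state that tracks wear depth plus sensor noise.
wear = np.linspace(0.0, 1.0, 60)
latent = 2.0 * wear + rng.normal(0.0, 0.1, 60)

r = pearson(latent, wear)
```

A coefficient near 1, as here, is what makes the unlabeled latent state usable as an indirect wear indicator.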

14.
Sensors (Basel); 23(22), 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-38005599

ABSTRACT

Recently, security monitoring facilities have mainly adopted artificial intelligence (AI) technology to provide both increased security and improved performance. However, there are technical challenges in the pursuit of elevating system performance, automation, and security efficiency. In this paper, we propose intelligent anomaly detection and classification based on deep learning (DL) using multi-modal fusion. To verify the method, we combined two DL-based schemes: (i) the 3D Convolutional AutoEncoder (3D-AE) for anomaly detection and (ii) the SlowFast neural network for anomaly classification. The 3D-AE detects the occurrence points of abnormal events and generates regions of interest (ROI) from those points. The SlowFast model classifies abnormal events using the ROI. These multi-modal approaches can complement weaknesses and leverage strengths in the existing security system. To enhance anomaly learning effectiveness, we also created a new dataset using the virtual environment of Grand Theft Auto 5 (GTA5). The dataset consists of 400 abnormal-state clips and 78 normal-state clips, with clip lengths in the 8-20 s range. Virtual data collection can also supplement the original dataset, as replicating abnormal states in the real world is challenging. Consequently, the proposed method achieves a classification accuracy of 85%, higher than the 77.5% accuracy achieved when employing only the single classification model. Furthermore, we validated the model trained on the GTA5 dataset using a real-world assault-class dataset consisting of 1300 instances that we reproduced. As a result, 1100 instances were classified as assault, for an accuracy of 83.5%. This also shows that the proposed method can provide high performance in real-world environments.

15.
Bioengineering (Basel); 10(11), 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-38002417

ABSTRACT

The application of deep learning for taxonomic categorization of DNA sequences is investigated in this study. Two deep learning architectures, namely the Stacked Convolutional Autoencoder (SCAE) with Multilabel Extreme Learning Machine (MLELM) and the Variational Convolutional Autoencoder (VCAE) with MLELM, have been proposed. These designs provide precise feature maps for individual and inter-label interactions within DNA sequences, capturing their spatial and temporal properties. The collected features are subsequently fed into MLELM networks, which yield soft classification scores and hard labels. The proposed algorithms underwent thorough training and testing on unsupervised data, whereby one or more labels were concurrently taken into account. The introduction of the clade label resulted in improved accuracy for both models compared to the class or genus labels, probably owing to the occurrence of large clusters of similar nucleotides inside a DNA strand. In all circumstances, the VCAE-MLELM model consistently outperformed the SCAE-MLELM model. The best accuracy attained by the VCAE-MLELM model when the clade and family labels were combined was 94%. However, accuracy ratings for single-label categorization using either approach were less than 65%. The approach's effectiveness is based on MLELM networks, which record connected patterns across classes for accurate label categorization. This study advances deep learning in biological taxonomy by emphasizing the significance of combining numerous labels for increased classification accuracy.
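An extreme learning machine such as the ELM head used here trains only its output weights, in closed form; below is a bare-bones sketch on an invented two-label toy task (the hidden width is arbitrary, and this omits the multilabel refinements of MLELM):

```python
import numpy as np

rng = np.random.default_rng(6)

def elm_fit(X, Y, n_hidden=100):
    # Extreme learning machine: a random, frozen hidden layer, with output
    # weights solved in closed form by least squares (no backpropagation).
    W = rng.standard_normal((X.shape[1], n_hidden))
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, beta

def elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta

# Toy multilabel task: 2 inputs, 2 labels (same-sign and both-positive).
X = rng.uniform(-1, 1, (200, 2))
Y = np.stack([(X[:, 0] * X[:, 1] > 0).astype(float),
              ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(float)], axis=1)
W, beta = elm_fit(X, Y)
pred = (elm_predict(X, W, beta) > 0.5).astype(float)
acc = (pred == Y).mean()
```

In the paper's pipeline, `X` would be the feature maps produced by the SCAE or VCAE rather than raw inputs, and the soft scores `elm_predict` returns are thresholded into hard labels.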

16.
Bioengineering (Basel); 10(10), 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37892874

ABSTRACT

The paper proposes a federated content-based medical image retrieval (FedCBMIR) tool that utilizes federated learning (FL) to address the challenges of acquiring a diverse medical data set for training CBMIR models. CBMIR is a tool to find the most similar cases in a data set to assist pathologists. Training such a tool necessitates a pool of whole-slide images (WSIs) to train the feature extractor (FE) to extract an optimal embedding vector. The strict regulations surrounding data sharing in hospitals make it difficult to collect a rich data set. FedCBMIR distributes an unsupervised FE to collaborating centers for training without sharing the data set, resulting in shorter training times and higher performance. FedCBMIR was evaluated in two experiments: one with two clients holding two different breast cancer data sets, namely BreaKHis and Camelyon17 (CAM17), and one with four clients holding the BreaKHis data set at four different magnifications. FedCBMIR increases the F1 score (F1S) of each client from 96% to 98.1% on CAM17 and from 95% to 98.4% on BreaKHis, while requiring 11.44 fewer hours of training. FedCBMIR provides 98%, 96%, 94%, and 97% F1S in the BreaKHis experiment with a generalized model and accomplishes this in 25.53 fewer hours of training.
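Federated training of this kind typically aggregates client updates on a server without moving any data, e.g. via federated averaging; a sketch (FedAvg is the generic scheme, not necessarily FedCBMIR's exact aggregation rule, and the weight vectors below are stand-ins for real model parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

def fedavg(client_weights, client_sizes):
    # Federated averaging: the server combines client model parameters
    # weighted by local data set size; raw data never leaves the clients.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two hypothetical centers (e.g. one per data set) with different sample counts.
w_a = rng.standard_normal(5)
w_b = rng.standard_normal(5)
w_global = fedavg([w_a, w_b], [300, 100])
```

The aggregated `w_global` is then broadcast back to the centers for the next local training round.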

17.
Comput Biol Med; 166: 107534, 2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37801923

ABSTRACT

BACKGROUND: It remains hard to directly apply deep learning-based methods to assist in diagnosing essential tremor of voice (ETV) and abductor and adductor spasmodic dysphonia (ABSD and ADSD). One of the main challenges is that, because these are rare laryngeal movement disorders (LMDs), few databases are available for investigation. Another research question worth exploring is which of these sub-disorders benefits most from diagnosis based on sustained phonations. This question stems from the fact that sustained phonations can help distinguish pathological voice from healthy voice. METHOD: A transfer learning strategy is developed for LMD diagnosis with limited data, which consists of three fundamental parts. (1) An extra vocally healthy database from the International Dialects of English Archive (IDEA) is employed to pre-train a convolutional autoencoder. (2) The transferred proportion of the pre-trained encoder is explored, and its impact on LMD diagnosis is evaluated, yielding a two-stage transfer model. (3) A third stage is designed following the initial two stages to embed information from pathological sustained phonations into the model. This stage verifies the different effects of applying sustained phonation to diagnosing the three sub-disorders and helps boost the final diagnostic performance. RESULTS: The analysis in this study is based on clinician-labeled LMD data obtained from the Vanderbilt University Medical Center (VUMC). We find that diagnosing ETV shows sensitivity to sustained phonation within the current database. Meanwhile, the results show that the proposed multi-stage transfer learning strategy can produce (1) an accuracy of 65.3% in classifying normal and the other three sub-disorders all at once, (2) an accuracy of 85.3% in differentiating normal, ABSD, and ETV, and (3) an accuracy of 77.7% for normal, ADSD, and ETV. These findings demonstrate the effectiveness of the proposed approach.
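Step (2), transferring a proportion of the pre-trained encoder, can be sketched as copying the first n layers' weights and re-initialising the rest (the layer shapes and the re-initialisation scale are arbitrary here; the paper works with convolutional layers, not these toy dense weights):

```python
import numpy as np

rng = np.random.default_rng(9)

def transfer(pretrained_layers, n_transfer):
    # Initialise a new encoder by copying the first n_transfer layers from
    # the pre-trained model ('transferred proportion') and re-initialising
    # the remaining layers for task-specific training.
    new_layers = []
    for i, W in enumerate(pretrained_layers):
        if i < n_transfer:
            new_layers.append(W.copy())                       # transferred
        else:
            new_layers.append(rng.standard_normal(W.shape) * 0.01)  # fresh
    return new_layers

pretrained = [rng.standard_normal((4, 4)) for _ in range(3)]
student = transfer(pretrained, n_transfer=2)
```

Sweeping `n_transfer` over the encoder depth is how the transferred proportion's impact on diagnosis would be explored.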

18.
Front Pharmacol ; 14: 1257842, 2023.
Article in English | MEDLINE | ID: mdl-37731739

ABSTRACT

Background: Inferring drug-related side effects is beneficial for reducing drug development cost and time. Current computational prediction methods have concentrated on graph reasoning over heterogeneous graphs comprising drug and side effect nodes. However, the various topologies and node attributes within multiple drug-side effect heterogeneous graphs have not been fully exploited. Methods: We proposed a new drug-side effect association prediction method, GGSC, to deeply integrate the diverse topologies and attributes from multiple heterogeneous graphs with the self-calibration attributes of each drug-side effect node pair. First, we created two heterogeneous graphs comprising drug and side effect nodes and their related similarity and association connections. Since each heterogeneous graph has its own topology and node attributes, a node feature learning strategy was designed, and learning on each graph was enhanced from a graph generative-adversarial perspective. We constructed a generator based on a graph convolutional autoencoder to encode the topological structure and node attributes of the whole heterogeneous graph and then generate node features embedding the graph topology. A discriminator based on a multilayer perceptron was designed to distinguish the generated topological features from the original ones. We also designed representation-level attention to discriminate the contributions of topological representations from the multiple heterogeneous graphs and to fuse them adaptively. Finally, we constructed a self-calibration module based on convolutional neural networks to guide pairwise attribute learning through features of a small latent space. Results: Comparison experiments showed that GGSC achieved higher prediction performance than several state-of-the-art methods. Ablation experiments demonstrated the effectiveness of topological enhancement learning, representation-level attention, and self-calibrated pairwise attribute learning. In addition, case studies on five drugs demonstrated GGSC's ability to discover potential drug-related side effect candidates. Conclusion: We proposed a drug-side effect association prediction method that is beneficial for screening reliable association candidates for biologists to confirm as actual associations.
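The representation-level attention step can be illustrated with a small numpy sketch that fuses per-graph node representations via a softmax-weighted sum. The `query` vector here stands in for the learned attention parameters and is purely hypothetical, as are the toy dimensions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def representation_attention(reps, query):
    """Fuse per-graph node representations with attention.

    reps:  (G, N, D) one representation per heterogeneous graph
    query: (D,) attention parameter (hypothetical; learned in practice)
    Returns the fused (N, D) features and the per-graph weights (N, G).
    """
    scores = np.einsum('gnd,d->ng', reps, query)      # score per node, per graph
    weights = softmax(scores, axis=1)                 # attention over graphs
    fused = np.einsum('ng,gnd->nd', weights, reps)    # weighted sum across graphs
    return fused, weights

rng = np.random.default_rng(1)
reps = rng.standard_normal((2, 5, 8))   # 2 graphs, 5 node pairs, dim 8
fused, att = representation_attention(reps, rng.standard_normal(8))
print(fused.shape, np.allclose(att.sum(axis=1), 1.0))  # → (5, 8) True
```

Each node pair thus receives its own mixing weights, which is what lets the model discriminate the contribution of each heterogeneous graph adaptively rather than averaging them uniformly.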

19.
Entropy (Basel) ; 25(9)2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37761615

ABSTRACT

Contact fatigue is one of the most common failure modes of basic components such as bearings and gears. Accurate prediction of the contact fatigue performance degradation trend of a component supports the scientific formulation of maintenance strategies and equipment health management, which is of great significance for industrial production. In this paper, to predict the performance degradation trend accurately, a method based on multi-domain features and temporal convolutional networks (TCNs) is proposed. Firstly, a multi-domain, high-dimensional feature set of vibration signals was constructed, and performance degradation indexes with good sensitivity and strong trends were initially screened using comprehensive evaluation indexes. Secondly, the kernel principal component analysis (KPCA) method was used to eliminate redundant information among the multi-domain features, and health indexes (HIs) were constructed with a convolutional autoencoder (CAE) network. Then, a TCN-based performance degradation trend prediction model was built, and the degradation trend of the monitored object was predicted via direct multi-step prediction. On this basis, the effectiveness of the proposed method was verified on a common bearing data set, and the method was successfully applied to degradation trend prediction for rolling contact fatigue specimens. The results show that KPCA reduces the feature set from 14 dimensions to 4 while retaining 98.33% of the information in the original preferred feature set. The CAE-based HI construction is effective: the evolution of the constructed HI over time faithfully reflects the degradation of rolling contact fatigue specimen performance, with clear advantages over the two commonly used HI-construction methods, auto-encoder (AE) networks and Gaussian mixture models (GMMs). The TCN-based model accurately predicts the performance degradation of rolling contact fatigue specimens. Compared with prediction models based on long short-term memory (LSTM) networks and gated recurrent units (GRUs), the TCN-based model performs better and achieves higher prediction accuracy; the RMS error and mean absolute error for a prediction step of 3 are 0.0146 and 0.0105, respectively. Overall, the proposed method is broadly applicable and can be used to predict the performance degradation trend of other mechanical equipment and parts.
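The building block of a TCN is the causal dilated 1-D convolution: each output depends only on the current and past inputs, with dilation widening the receptive field. A minimal numpy version (illustrative only, without the residual blocks and learned kernels of a full TCN) looks like:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution with dilation, the core TCN operation.

    x: (T,) input series; w: (k,) kernel.
    Output y[t] depends only on x[t], x[t-d], ..., x[t-(k-1)d];
    left zero-padding keeps the output length equal to T.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(6, dtype=float)                       # toy health-index series
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2] → [0. 1. 2. 4. 6. 8.]
```

Stacking such layers with dilations 1, 2, 4, ... lets the network see long histories of the HI, and "direct multi-step prediction" then means the final layer emits all future steps of the horizon at once instead of feeding predictions back in recursively.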

20.
Sensors (Basel) ; 23(17)2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37687772

ABSTRACT

Hospitals generate a large amount of medical data every day, constituting a rich resource for research. Today, this resource remains largely unexploited because its valorization requires image annotation, which is a costly and difficult task. The use of an unsupervised segmentation method could therefore facilitate the process. In this article, we propose two approaches for the semantic segmentation of breast cancer histopathology images: on the one hand, an autoencoder architecture for unsupervised segmentation; on the other, an improved U-Net architecture for supervised segmentation. We evaluate these models on a public dataset of histological images of breast cancer. The performance of our segmentation methods is measured using several evaluation metrics, including accuracy, recall, precision, and F1 score. The results are competitive with those of other modern methods.
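The evaluation metrics named above (accuracy, precision, recall, F1) can all be computed pixel-wise from a confusion matrix over a binary mask. A small numpy sketch, using toy masks rather than real histopathology data:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise accuracy, precision, recall, and F1 for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)     # predicted positive, actually positive
    fp = np.sum(pred & ~target)    # predicted positive, actually negative
    fn = np.sum(~pred & target)    # predicted negative, actually positive
    tn = np.sum(~pred & ~target)   # predicted negative, actually negative
    acc = (tp + tn) / pred.size
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
gt   = np.array([[1, 0, 0], [0, 1, 1]])   # toy ground-truth mask
print(segmentation_metrics(pred, gt))
```

Note that with the heavy class imbalance typical of histopathology masks, accuracy alone can be misleading, which is why precision, recall, and F1 are reported alongside it.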


Subjects
Deep Learning, Neoplasms, Benchmarking, Databases, Factual, Hospitals, Semantics