Results 1-20 of 20,798
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38980369

ABSTRACT

Recent studies have extensively used deep learning algorithms to analyze gene expression for predicting disease diagnosis, treatment effectiveness, and survival outcomes. Survival analysis of diseases with high mortality rates, such as cancer, is indispensable. However, deep learning models are plagued by overfitting owing to the limited sample size relative to the large number of genes. Consequently, the latest style-transfer deep generative models have been implemented to generate gene expression data. However, these models are limited in their applicability for clinical purposes because they generate only transcriptomic data. Therefore, this study proposes ctGAN, which enables the combined transformation of gene expression and survival data using a generative adversarial network (GAN). ctGAN improves survival analysis by augmenting data through style transformations between breast cancer and 11 other cancer types. We evaluated the concordance index (C-index) enhancements relative to previous models to demonstrate its superiority. Performance improvements were observed in nine of the 11 cancer types. Moreover, ctGAN outperformed previous models in seven of the 11 cancer types, with colon adenocarcinoma (COAD) exhibiting the most significant improvement (median C-index increase of ~15.70%). Furthermore, augmenting the real COAD data with the generated COAD data improved the log-rank p-value to 0.041, compared with 0.797 when using only the real COAD data. Based on the data distribution, we demonstrated that the model generated highly plausible data. In the clustering evaluation, ctGAN exhibited the highest performance in most cases (89.62%). These findings suggest that ctGAN can be meaningfully utilized to predict disease progression and select personalized treatments in the medical field.
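Since the abstract's headline results are concordance-index (C-index) comparisons, a minimal sketch of how this metric is typically computed follows, using the lifelines library on synthetic survival times, censoring flags, and risk scores (none of this is the authors' code or data).

```python
# Illustrative C-index computation with lifelines; all inputs are synthetic.
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200
survival_times = rng.exponential(scale=365, size=n)          # days
event_observed = rng.integers(0, 2, size=n)                  # 1 = event, 0 = censored
risk_scores = -survival_times + rng.normal(0, 100, size=n)   # noisy inverse of time

# concordance_index expects predicted *survival* scores (higher = longer
# survival), so the risk scores are negated before scoring.
c_index = concordance_index(survival_times, -risk_scores, event_observed)
print(f"C-index: {c_index:.3f}")  # ~0.5 is random ranking, 1.0 is perfect
```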


Subjects
Deep Learning; Humans; Survival Analysis; Algorithms; Neoplasms/genetics; Neoplasms/mortality; Gene Expression Profiling/methods; Neural Networks, Computer; Computational Biology/methods; Breast Neoplasms/genetics; Breast Neoplasms/mortality; Female; Gene Expression Regulation, Neoplastic
2.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38980373

ABSTRACT

Inferring gene regulatory networks (GRNs) allows us to obtain a deeper understanding of cellular function and disease pathogenesis. Recent advances in single-cell RNA sequencing (scRNA-seq) technology have improved the accuracy of GRN inference. However, many methods for inferring individual GRNs from scRNA-seq data are limited because they overlook intercellular heterogeneity and the similarities between different cell subpopulations that are often present in the data. Here, we propose a deep learning-based framework, DeepGRNCS, for jointly inferring GRNs across cell subpopulations. We follow the commonly accepted hypothesis that the expression of a target gene can be predicted from the expression of transcription factors (TFs) owing to underlying regulatory relationships. We initially processed the scRNA-seq data by discretizing the scattered expression values using the equal-width binning method. Then, we trained deep learning models to predict target gene expression from TFs. By individually removing each TF from the expression matrix, we used the pre-trained models' predictions to infer regulatory relationships between TFs and genes, thereby constructing the GRN. Our method outperforms existing GRN inference methods on various simulated and real scRNA-seq datasets. Finally, we applied DeepGRNCS to non-small cell lung cancer scRNA-seq data to identify key genes in each cell subpopulation and analyzed their biological relevance. In conclusion, DeepGRNCS effectively predicts cell subpopulation-specific GRNs. The source code is available at https://github.com/Nastume777/DeepGRNCS.
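As a conceptual sketch of the two steps the abstract describes — equal-width discretization and leave-one-TF-out scoring — the snippet below uses synthetic data and a small scikit-learn MLP as a stand-in for the paper's deep model; it is not the DeepGRNCS implementation.

```python
# Conceptual leave-one-TF-out GRN scoring on synthetic data (not DeepGRNCS).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_cells, n_tfs = 300, 5
tf_expr = rng.random((n_cells, n_tfs))
# Target gene truly regulated by TF0 (strongly) and TF2 (weakly).
target = 2.0 * tf_expr[:, 0] + 0.5 * tf_expr[:, 2] + rng.normal(0, 0.1, n_cells)

# Equal-width discretization into 8 bins, as the abstract describes.
n_bins = 8
edges = np.linspace(tf_expr.min(), tf_expr.max(), n_bins + 1)
tf_binned = np.digitize(tf_expr, edges[1:-1]).astype(float)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
model.fit(tf_binned, target)
baseline = model.score(tf_binned, target)   # R^2 with all TFs present

# Ablate each TF in turn; the drop in fit quality serves as a proxy for
# that TF's regulatory influence on the target gene.
for tf in range(n_tfs):
    ablated = tf_binned.copy()
    ablated[:, tf] = 0.0
    drop = baseline - model.score(ablated, target)
    print(f"TF{tf}: importance ~ {drop:.3f}")
```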


Subjects
Deep Learning; Gene Regulatory Networks; Single-Cell Analysis; Humans; Single-Cell Analysis/methods; Transcription Factors/genetics; Transcription Factors/metabolism; Computational Biology/methods; Sequence Analysis, RNA/methods; RNA-Seq/methods
3.
Se Pu ; 42(7): 669-680, 2024 Jul.
Article in Chinese | MEDLINE | ID: mdl-38966975

ABSTRACT

Mass spectrometry imaging (MSI) is a promising method for characterizing the spatial distribution of compounds. Given the diversified development of acquisition methods and continuous improvements in the sensitivity of this technology, both the total amount of generated data and the complexity of analysis have increased exponentially, posing growing challenges for data postprocessing, such as large amounts of noise, background signal interference, and image registration deviations caused by sample position changes and scan deviations. Deep learning (DL) is a powerful tool widely used in data analysis and image reconstruction. It enables automatic feature extraction by building and training a neural network model, and achieves comprehensive and in-depth analysis of target data through transfer learning, giving it great potential for MSI data analysis. This paper reviews the current research status, application progress, and challenges of DL in MSI data analysis, focusing on four core stages: data preprocessing, image reconstruction, cluster analysis, and multimodal fusion. The application of combined DL and mass spectrometry imaging to tumor diagnosis and subtype classification is also illustrated. The review concludes with a discussion of future trends, aiming to promote a better combination of artificial intelligence and mass spectrometry technology.


Subjects
Deep Learning; Image Processing, Computer-Assisted; Mass Spectrometry; Mass Spectrometry/methods; Image Processing, Computer-Assisted/methods; Humans; Data Analysis
4.
J Cancer Res Clin Oncol ; 150(7): 346, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981916

ABSTRACT

PURPOSE: To develop a deep learning (DL) model for differentiating between benign and malignant ovarian tumors of Ovarian-Adnexal Reporting and Data System Ultrasound (O-RADS US) Category 4 lesions, and to validate its diagnostic performance. METHODS: A retrospective analysis of 1619 US images obtained from three centers from December 2014 to March 2023. DeepLabV3 and YOLOv8 were jointly used to segment, classify, and detect ovarian tumors. Precision, recall, and area under the receiver operating characteristic curve (AUC) were employed to assess model performance. RESULTS: A total of 519 patients (269 benign and 250 malignant masses) were enrolled in the study. The numbers of women included in the training, validation, and test cohorts were 426, 46, and 47, respectively. The detection models exhibited an average precision of 98.68% (95% CI: 0.95-0.99) for benign masses and 96.23% (95% CI: 0.92-0.98) for malignant masses. The AUC was 0.96 (95% CI: 0.94-0.97) in the training set, 0.93 (95% CI: 0.89-0.94) in the validation set, and 0.95 (95% CI: 0.91-0.96) in the test set. The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were 0.943, 0.957, 0.951, 0.966, and 0.936 for the training set; 0.905, 0.935, 0.935, 0.919, and 0.931 for the validation set; and 0.925, 0.955, 0.941, 0.956, and 0.927 for the test set, respectively. CONCLUSION: The constructed DL model exhibited high diagnostic performance in distinguishing benign from malignant ovarian tumors in O-RADS US Category 4 lesions.
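For reference, the reported metrics (sensitivity, specificity, accuracy, PPV, NPV, and AUC) can all be derived from a confusion matrix, as sketched below with scikit-learn; the labels and probabilities are made up for illustration.

```python
# Illustrative computation of the diagnostic metrics reported above.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = malignant (dummy)
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.1, 0.7, 0.4, 0.85, 0.15])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)          # recall for the malignant class
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
accuracy = (tp + tn) / len(y_true)
auc = roc_auc_score(y_true, y_prob)   # threshold-free ranking quality
print(sensitivity, specificity, accuracy, ppv, npv, auc)
```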


Subjects
Deep Learning; Ovarian Neoplasms; Ultrasonography; Humans; Female; Ovarian Neoplasms/diagnostic imaging; Ovarian Neoplasms/pathology; Ovarian Neoplasms/diagnosis; Retrospective Studies; Middle Aged; Ultrasonography/methods; Adult; Aged; Young Adult
5.
Sci Rep ; 14(1): 15775, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982238

ABSTRACT

A three-dimensional convolutional neural network model was developed to classify the severity of chronic kidney disease (CKD) using magnetic resonance imaging (MRI) Dixon-based T1-weighted in-phase (IP)/opposed-phase (OP)/water-only (WO) imaging. Seventy-three patients with severe renal dysfunction (estimated glomerular filtration rate [eGFR] < 30 mL/min/1.73 m², CKD stage G4-5); 172 with moderate renal dysfunction (30 ≤ eGFR < 60 mL/min/1.73 m², CKD stage G3a/b); and 76 with mild renal dysfunction (eGFR ≥ 60 mL/min/1.73 m², CKD stage G1-2) participated in this study. The model was applied to the right, left, and both kidneys, as well as to each imaging method (T1-weighted IP/OP/WO images). The best performance was obtained when using bilateral kidneys and IP images, with an accuracy of 0.862 ± 0.036. The overall accuracy was better for the bilateral kidney models than for the unilateral kidney models. Our deep learning approach using kidney MRI can be applied to classify patients with CKD based on the severity of kidney disease.
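A minimal PyTorch sketch of a three-class 3D CNN of the kind described (severity classification from volumetric MRI) is shown below; the layer sizes and input dimensions are illustrative assumptions, not the published architecture.

```python
# Minimal 3-class 3D CNN sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn

class CKD3DCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),           # global pooling -> (B, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, n_classes)  # mild / moderate / severe

    def forward(self, x):                      # x: (B, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

model = CKD3DCNN()
volume = torch.randn(2, 1, 32, 64, 64)         # two dummy MRI volumes
print(model(volume).shape)                     # torch.Size([2, 3])
```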


Subjects
Glomerular Filtration Rate; Kidney; Magnetic Resonance Imaging; Neural Networks, Computer; Renal Insufficiency, Chronic; Severity of Illness Index; Humans; Renal Insufficiency, Chronic/diagnostic imaging; Renal Insufficiency, Chronic/pathology; Magnetic Resonance Imaging/methods; Female; Male; Middle Aged; Kidney/diagnostic imaging; Kidney/pathology; Aged; Adult; Deep Learning; Imaging, Three-Dimensional/methods
6.
Sci Rep ; 14(1): 15877, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38982267

ABSTRACT

We aimed to develop a radiomics nomogram that integrates deep learning, radiomics, and clinical variables to predict epidermal growth factor receptor (EGFR) mutation status in patients with stage I non-small cell lung cancer (NSCLC). We retrospectively included 438 patients who underwent curative surgery and completed driver-gene mutation tests for stage I NSCLC at four academic medical centers. Predictive models were established by extracting and analyzing radiomic features in intratumoral, peritumoral, and habitat regions of CT images to identify EGFR mutation status. Additionally, three deep learning models based on the intratumoral region were constructed. A nomogram was developed by integrating representative radiomic signatures, deep learning features, and clinical features. Model performance was assessed by calculating the area under the receiver operating characteristic (ROC) curve. The habitat radiomics features demonstrated encouraging performance in discriminating between EGFR-mutant and wild-type tumors, with predictive ability superior to the other single models (AUC 0.886, 0.812, and 0.790 for the training, validation, and external test sets, respectively). The radiomics-based nomogram exhibited excellent performance, achieving the highest AUC values of 0.917, 0.837, and 0.809 in the training, validation, and external test sets, respectively. Decision curve analysis (DCA) indicated that the nomogram provided a higher net benefit than the other radiomics models, offering valuable information for treatment planning.


Subjects
Carcinoma, Non-Small-Cell Lung; Deep Learning; ErbB Receptors; Lung Neoplasms; Mutation; Nomograms; Humans; Carcinoma, Non-Small-Cell Lung/genetics; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/pathology; ErbB Receptors/genetics; Lung Neoplasms/genetics; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Male; Female; Middle Aged; Aged; Retrospective Studies; Tomography, X-Ray Computed/methods; Neoplasm Staging; Adult; ROC Curve; Aged, 80 and over; Radiomics
7.
Commun Biol ; 7(1): 835, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982288

ABSTRACT

Significant progress has been made in the field of plant genomics, as demonstrated by the increased use of high-throughput methodologies that enable the characterization of multiple genome-wide molecular phenotypes. These findings have provided valuable insights into plant traits and their underlying genetic mechanisms, particularly in model plant species. Nonetheless, effectively leveraging them to make accurate predictions represents a critical step in crop genomic improvement. We present AgroNT, a foundational large language model trained on genomes from 48 plant species, with a predominant focus on crop species. We show that AgroNT obtains state-of-the-art predictions for regulatory annotations, promoter/terminator strength, and tissue-specific gene expression, and can prioritize functional variants. We conduct a large-scale in silico saturation mutagenesis analysis on cassava to evaluate the regulatory impact of over 10 million mutations and provide their predicted effects as a resource for variant characterization. Finally, we propose the use of the diverse datasets compiled here as the Plants Genomic Benchmark (PGB), providing a comprehensive benchmark for deep learning-based methods in plant genomic research. The pre-trained AgroNT model is publicly available on HuggingFace at https://huggingface.co/InstaDeepAI/agro-nucleotide-transformer-1b for future research purposes.
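Since the abstract states that the pre-trained AgroNT weights are hosted on HuggingFace, a hedged loading sketch using the standard transformers masked-LM interface follows; the exact usage may differ, so consult the model card before relying on it.

```python
# Hedged sketch: loading the published AgroNT checkpoint via transformers.
# The masked-LM interface is an assumption; see the HuggingFace model card.
from transformers import AutoTokenizer, AutoModelForMaskedLM

name = "InstaDeepAI/agro-nucleotide-transformer-1b"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name)

# Embed a short dummy nucleotide sequence.
tokens = tokenizer("ATGGCGTACGATCGTAGCTAGCTAGCATCGA", return_tensors="pt")
out = model(**tokens, output_hidden_states=True)
print(out.hidden_states[-1].shape)   # (1, n_tokens, hidden_dim)
```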


Subjects
Genome, Plant; Plants, Edible/genetics; Genomics/methods; Deep Learning; Manihot/genetics
8.
BMC Med Imaging ; 24(1): 170, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982357

ABSTRACT

OBJECTIVES: To develop and validate a novel interpretable artificial intelligence (AI) model that integrates radiomic features, deep learning features, and imaging features at multiple semantic levels to predict the prognosis of intracerebral hemorrhage (ICH) patients at 6 months post-onset. MATERIALS AND METHODS: We retrospectively enrolled 222 patients with ICH with non-contrast computed tomography (NCCT) images and clinical data, divided into a training cohort (n = 186, medical center 1) and an external testing cohort (n = 36, medical center 2). Following image preprocessing, the entire hematoma region was segmented by two radiologists as the volume of interest (VOI). The PyRadiomics library was used to extract 1762 radiomic features, while a deep convolutional neural network (EfficientNetV2-L) was employed to extract 1000 deep learning features. Additionally, radiologists evaluated imaging features. Based on these three feature modalities, Random Forest (RF) models were trained, resulting in three models (Radiomics Model, Radiomics-Clinical Model, and DL-Radiomics-Clinical Model). The performance and clinical utility of the models were assessed using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis (DCA), with AUCs compared using the DeLong test. Furthermore, this study employed three methods, Shapley Additive Explanations (SHAP), Grad-CAM, and Guided Grad-CAM, to conduct a multidimensional interpretability analysis of model decisions. RESULTS: The Radiomics-Clinical Model and DL-Radiomics-Clinical Model exhibited relatively good predictive performance, with AUCs of 0.86 [95% confidence interval (CI): 0.71, 0.95; P < 0.01] and 0.89 (95% CI: 0.74, 0.97; P < 0.01), respectively, in the external testing cohort. CONCLUSION: The multimodal explainable AI model proposed in this study can accurately predict the prognosis of ICH. Interpretability methods such as SHAP, Grad-CAM, and Guided Grad-CAM partially address the interpretability limitations of AI models, and integrating multimodal imaging features can effectively improve model performance. CLINICAL RELEVANCE STATEMENT: Predicting the prognosis of patients with ICH is a key objective in emergency care. Accurate and efficient prognostic tools can effectively prevent, manage, and monitor adverse events in ICH patients, maximizing treatment outcomes.
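A sketch of the abstract's core pipeline — a Random Forest over concatenated features, explained with SHAP — is given below on synthetic data; it assumes the shap and scikit-learn packages and is not the study's code.

```python
# Random Forest + SHAP sketch on synthetic stand-in features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(186, 20))             # 186 training patients, 20 features
y = (X[:, 0] + X[:, 3] > 0).astype(int)    # stand-in for the 6-month outcome

rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X, y)

# TreeExplainer gives per-feature contributions for each patient's prediction;
# large mean |SHAP| identifies the most influential features.
explainer = shap.TreeExplainer(rf)
sv = np.asarray(explainer.shap_values(X))  # per class/sample/feature
print(sv.shape)
```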


Subjects
Artificial Intelligence; Cerebral Hemorrhage; Deep Learning; Tomography, X-Ray Computed; Humans; Cerebral Hemorrhage/diagnostic imaging; Prognosis; Tomography, X-Ray Computed/methods; Male; Female; Retrospective Studies; Middle Aged; Aged; ROC Curve; Neural Networks, Computer; Algorithms
9.
Radiat Oncol ; 19(1): 89, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982452

ABSTRACT

BACKGROUND AND PURPOSE: To investigate the feasibility of synthesizing computed tomography (CT) images from magnetic resonance (MR) images in multi-center datasets using generative adversarial networks (GANs) for rectal cancer MR-only radiotherapy. MATERIALS AND METHODS: Conventional T2-weighted MR and CT images were acquired from 90 rectal cancer patients at Peking University People's Hospital and from 19 patients in public datasets. This study proposed a new model combining a contrastive learning loss and a consistency regularization loss to enhance the generalization of the model for multi-center pelvic MRI-to-CT synthesis. The CT-to-sCT image similarity was evaluated by computing the mean absolute error (MAE), peak signal-to-noise ratio (SNRpeak), structural similarity index (SSIM), and generalization performance (GP). The dosimetric accuracy of the synthetic CT was verified against CT-based dose distributions for the photon plan. Relative dose differences in the planning target volume and organs at risk were computed. RESULTS: Our model presented excellent generalization, with a GP of 0.911 on unseen datasets, and outperformed the plain CycleGAN: MAE decreased from 47.129 to 42.344, SNRpeak improved from 25.167 to 26.979, and SSIM increased from 0.978 to 0.992. The dosimetric analysis demonstrated that most of the relative differences in dose-volume histogram (DVH) indicators between synthetic CT and real CT were less than 1%. CONCLUSION: The proposed model can generate accurate synthetic CT from T2w-MR images in multi-center datasets. Most dosimetric differences were within clinically acceptable criteria for photon radiotherapy, demonstrating the feasibility of an MRI-only workflow for patients with rectal cancer.
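The reported similarity metrics (MAE, peak SNR, SSIM) can be computed as sketched below with scikit-image; the arrays are dummies standing in for real and synthetic CT slices.

```python
# Illustrative MAE / PSNR / SSIM computation for CT-to-sCT comparison.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
real_ct = rng.uniform(-1000, 1000, size=(256, 256))        # dummy CT slice (HU)
synth_ct = real_ct + rng.normal(0, 40, size=(256, 256))    # dummy sCT + noise

mae = np.mean(np.abs(real_ct - synth_ct))
psnr = peak_signal_noise_ratio(real_ct, synth_ct, data_range=2000)
ssim = structural_similarity(real_ct, synth_ct, data_range=2000)
print(f"MAE={mae:.1f} HU, PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```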


Subjects
Deep Learning; Magnetic Resonance Imaging; Radiotherapy Planning, Computer-Assisted; Rectal Neoplasms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Magnetic Resonance Imaging/methods; Radiotherapy Planning, Computer-Assisted/methods; Rectal Neoplasms/radiotherapy; Rectal Neoplasms/diagnostic imaging; Female; Male; Middle Aged; Radiotherapy Dosage; Organs at Risk/radiation effects; Adult; Aged; Pelvis/diagnostic imaging; Image Processing, Computer-Assisted/methods; Feasibility Studies
10.
Transl Vis Sci Technol ; 13(7): 10, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38984914

ABSTRACT

Purpose: The purpose of this study was to establish and validate a deep learning model to screen for vascular aging using retinal fundus images. Although vascular aging is considered a novel cardiovascular risk factor, assessment methods are currently limited and often only available in developed regions. Methods: We used 8865 retinal fundus images and clinical parameters of 4376 patients from two independent datasets to train a deep learning algorithm. The gold standard for vascular aging was defined as a pulse wave velocity ≥1400 cm/s. The probability of the presence of vascular aging was defined as the deep learning retinal vascular aging (Reti-aging) score. We compared the performance of the deep learning model and clinical parameters by calculating the area under the receiver operating characteristic curve (AUC). We recruited clinical specialists, including ophthalmologists and geriatricians, to assess vascular aging in patients using retinal fundus images, aiming to compare the diagnostic performance of the deep learning model with that of clinical specialists. Finally, the potential of the Reti-aging score for identifying new-onset hypertension (NH) and new-onset carotid artery plaque (NCP) in the subsequent three years was examined. Results: The Reti-aging score model achieved an AUC of 0.826 (95% confidence interval [CI] = 0.793-0.855) in the internal dataset and 0.779 (95% CI = 0.765-0.794) in the external dataset. It showed better performance in predicting vascular aging than prediction based on clinical parameters. The average accuracy of ophthalmologists (66.3%) was lower than that of the Reti-aging score model, whereas geriatricians were unable to make predictions based on retinal fundus images. The Reti-aging score was associated with the risk of NH and NCP (P < 0.05). Conclusions: The Reti-aging score model might serve as a novel method to predict vascular aging through analysis of retinal fundus images, and the score provides a novel indicator for predicting new-onset cardiovascular diseases. Translational Relevance: Given the robust performance of our model, it provides a new and reliable method for screening vascular aging, especially in underdeveloped areas.


Subjects
Aging; Deep Learning; Fundus Oculi; Retinal Vessels; Humans; Female; Aged; Male; Middle Aged; Aging/physiology; Retinal Vessels/diagnostic imaging; Retinal Vessels/pathology; ROC Curve; Pulse Wave Analysis/methods; Risk Factors; Area Under Curve; Aged, 80 and over; Hypertension/physiopathology
11.
Sci Rep ; 14(1): 15516, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969651

ABSTRACT

An intelligent appearance quality classification method for Auricularia auricula is of great significance for promoting this industry. This paper proposes an appearance quality classification method for Auricularia auricula based on an improved Faster Region-based Convolutional Neural Network (Faster RCNN) framework. The original Faster RCNN is improved by establishing a multiscale feature fusion detection model that raises both the accuracy and the real-time performance of the model. The multiscale feature fusion detection model makes full use of shallow feature information to complete target detection, fusing shallow features rich in detailed information with deep features rich in strong semantic information. Since the fusion algorithm directly uses the existing information of the feature extraction network, no additional computation is incurred, and the fused features contain more of the original detailed feature information. Therefore, the improved Faster RCNN can raise the final detection rate without sacrificing speed. Compared with the original Faster RCNN model, the mean average precision (mAP) of the improved Faster RCNN is increased by 2.13%. The average precision (AP) for first-level Auricularia auricula remains almost unchanged at a high level, the AP for second-level Auricularia auricula is increased by nearly 5%, and the AP for third-level Auricularia auricula is increased by 1%. The improved Faster RCNN also raises the frame rate from 6.81 frames per second for the original Faster RCNN to 13.5. Finally, the influence of complex environments and image resolution on Auricularia auricula detection is explored.


Subjects
Deep Learning; Algorithms; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Humans
12.
Sci Rep ; 14(1): 15537, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969738

ABSTRACT

Crop yield could be enhanced for agricultural growth if various plant nutrition deficiencies and diseases were identified and detected at early stages; hence, continuous health monitoring of plants is crucial for handling plant stress. Deep learning methods have proven their superior performance in the automated detection of plant diseases and nutrition deficiencies from visual symptoms in leaves. This article proposes a new deep learning method for plant nutrition deficiency and disease classification using a graph convolutional network (GCN) added on top of a base convolutional neural network (CNN). A global feature descriptor may fail to capture the vital region of a diseased leaf, causing inaccurate classification; to address this issue, regional feature learning is crucial for holistic feature aggregation. In this work, region-based feature summarization at multiple scales is explored using spatial pyramid pooling for discriminative feature representation. Furthermore, the GCN is developed to enable learning of finer details for classifying plant diseases and nutrient insufficiencies. The proposed method, called Plant Nutrition Deficiency and Disease Network (PND-Net), has been evaluated on two public datasets for nutrition deficiency and two for disease classification using four backbone CNNs. The best classification performances of the proposed PND-Net are as follows: (a) 90.00% for Banana and 90.54% for Coffee nutrition deficiency; and (b) 96.18% for Potato diseases and 84.30% on the PlantDoc dataset using the Xception backbone. Furthermore, additional experiments were carried out for generalization, and the proposed method achieved state-of-the-art performance on two public datasets, namely Breast Cancer Histopathology Image Classification (BreakHis 40×: 95.50% and BreakHis 100×: 96.79% accuracy) and single cells in Pap smear images for cervical cancer classification (SIPaKMeD: 99.18% accuracy). The proposed method was also evaluated using five-fold cross-validation and achieved improved performance on these datasets. Clearly, the proposed PND-Net effectively boosts the performance of automated health analysis of various plants in real and intricate field environments, implying PND-Net's aptness for agricultural growth as well as human cancer classification.
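A minimal PyTorch sketch of the multi-scale spatial pyramid pooling the abstract attributes to PND-Net follows; the pyramid levels and feature-map sizes are illustrative assumptions, not the published configuration.

```python
# Spatial pyramid pooling sketch: fixed-length multi-scale region summary.
import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        # Each level pools the feature map to a k x k grid.
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(k) for k in levels)

    def forward(self, x):                        # x: (B, C, H, W)
        return torch.cat([p(x).flatten(1) for p in self.pools], dim=1)

features = torch.randn(8, 256, 14, 14)           # dummy CNN backbone output
spp = SpatialPyramidPooling()
print(spp(features).shape)                       # (8, 256*(1+4+16)) = (8, 5376)
```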


Subjects
Deep Learning; Neural Networks, Computer; Plant Diseases; Plant Leaves; Humans
13.
BMC Bioinformatics ; 25(1): 231, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38969970

ABSTRACT

PURPOSE: In this study, we present DeepVirusClassifier, a tool capable of accurately classifying Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) viral sequences among other subtypes of the Coronaviridae family. This classification is achieved through a deep neural network model based on convolutional neural networks (CNNs). Since viruses within the same family share similar genetic and structural characteristics, the classification process becomes more challenging, necessitating more robust models. With the rapid evolution of viral genomes and the increasing need for timely classification, we aimed to provide a robust and efficient tool that could increase the accuracy of viral identification and classification, contribute to advancing research in viral genomics, and assist in the surveillance of emerging viral strains. METHODS: Based on a one-dimensional deep CNN, the proposed tool can be trained and tested on the Coronaviridae family, including SARS-CoV-2. The model's performance was assessed using various metrics, including F1-score and AUROC. Additionally, artificial mutation tests were conducted to evaluate the model's generalization across sequence variations. We also used the BLAST algorithm and conducted comprehensive processing-time analyses for comparison. RESULTS: DeepVirusClassifier demonstrated exceptional performance across several evaluation metrics in the training and testing phases, indicating robust learning capacity. Notably, during testing on more than 10,000 viral sequences, the model exhibited more than 99% sensitivity for sequences with fewer than 2000 mutations. The tool achieves superior accuracy and significantly reduced processing times compared with the Basic Local Alignment Search Tool algorithm. Furthermore, the results appear more reliable than those of the related work discussed, indicating that the tool has great potential to revolutionize viral genomic research. CONCLUSION: DeepVirusClassifier is a powerful tool for accurately classifying viral sequences, specifically SARS-CoV-2 and other subtypes within the Coronaviridae family. The superiority of our model becomes evident through rigorous evaluation and comparison with existing methods. Introducing artificial mutations into the sequences demonstrates the tool's ability to identify variations and contributes significantly to viral classification and genomic research. As viral surveillance becomes increasingly critical, our model holds promise in aiding the rapid and accurate identification of emerging viral strains.
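A minimal sketch of the model family described — a 1D CNN over one-hot encoded nucleotide sequences — is shown below in PyTorch; the layer sizes are illustrative and do not reproduce the published architecture.

```python
# 1D CNN over one-hot DNA sketch (illustrative, not DeepVirusClassifier).
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> torch.Tensor:
    """Encode a DNA string as a (4, L) one-hot tensor; unknown bases stay zero."""
    t = torch.zeros(4, len(seq))
    for i, b in enumerate(seq):
        if b in BASES:
            t[BASES[b], i] = 1.0
    return t

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(64, 2),              # e.g. SARS-CoV-2 vs. other Coronaviridae
)

batch = torch.stack([one_hot("ATGC" * 250), one_hot("GGCA" * 250)])
print(model(batch).shape)          # torch.Size([2, 2])
```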


Subjects
COVID-19; Deep Learning; Genome, Viral; SARS-CoV-2; SARS-CoV-2/genetics; SARS-CoV-2/classification; Genome, Viral/genetics; COVID-19/virology; Coronaviridae/genetics; Coronaviridae/classification; Humans; Neural Networks, Computer
14.
Sci Data ; 11(1): 733, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971865

ABSTRACT

A simple and cheap way to recognize cervical cancer is light-microscopic analysis of Pap smear images. Training artificial intelligence-based systems becomes possible in this domain, e.g., to follow the European recommendation to screen negative smears to reduce false-negative cases. The first step in such a process is segmenting the cells, and a large, manually segmented dataset is required for this task, which can be used to train deep learning-based solutions. We describe a corresponding dataset with accurate manual segmentations for the enclosed cells. Altogether, the APACS23 (Annotated PAp smear images for Cell Segmentation 2023) dataset contains about 37,000 manually segmented cells and is separated into dedicated training and test parts, which can be used for an official benchmark in scientific investigations or a grand challenge.


Subjects
Papanicolaou Test; Uterine Cervical Neoplasms; Humans; Uterine Cervical Neoplasms/pathology; Female; Image Processing, Computer-Assisted/methods; Deep Learning; Vaginal Smears
15.
Sci Rep ; 14(1): 15580, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971875

ABSTRACT

A recent experiment probed how purposeful action emerges in early life by manipulating infants' functional connection to an object in the environment (i.e., tethering an infant's foot to a colorful mobile). Vicon motion capture data from multiple infant joints were used here to create Histograms of Joint Displacements (HJDs), generating pose-based descriptors for 3D infant spatial trajectories. Using HJDs as inputs, machine and deep learning systems were tasked with classifying the experimental state from which snippets of movement data were sampled. The architectures tested included k-Nearest Neighbour (kNN), Linear Discriminant Analysis (LDA), a fully connected network (FCNet), a 1D Convolutional Neural Network (1D-Conv), a 1D Capsule Network (1D-CapsNet), 2D-Conv, and 2D-CapsNet. Sliding-window scenarios were used for temporal analysis to search for topological changes in infant movement related to functional context. kNN and LDA achieved higher classification accuracy with single-joint features, while deep learning approaches, particularly 2D-CapsNet, achieved higher accuracy on full-body features. For every AI architecture tested, measures of foot activity displayed the most distinct and coherent pattern alterations across the experimental stages (reflected in the highest classification accuracy rates), indicating that interaction with the world impacts infant behaviour most at the site of organism~world connection.
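A simple sketch of how a Histogram of Joint Displacements descriptor can be built from a 3D joint trajectory follows; the bin count and displacement range are assumptions, and the trajectory is synthetic.

```python
# HJD descriptor sketch: bin frame-to-frame 3D displacement magnitudes.
import numpy as np

def hjd(trajectory: np.ndarray, n_bins: int = 16, max_disp: float = 50.0):
    """trajectory: (T, 3) array of joint positions (e.g. mm, from Vicon)."""
    disp = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)   # (T-1,)
    hist, _ = np.histogram(disp, bins=n_bins, range=(0, max_disp))
    return hist / max(hist.sum(), 1)       # normalized fixed-length descriptor

rng = np.random.default_rng(3)
foot = np.cumsum(rng.normal(0, 2.0, size=(500, 3)), axis=0)  # dummy foot path
print(hjd(foot))   # 16-bin pose descriptor usable as kNN/LDA input
```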


Subjects
Artificial Intelligence; Humans; Infant; Movement/physiology; Female; Male; Deep Learning; Awareness/physiology; Neural Networks, Computer; Environment
16.
Sci Rep ; 14(1): 15596, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971939

ABSTRACT

Common bean (CB), a vital source of high-protein food, plays a crucial role in ensuring both nutrition and economic stability in diverse communities, particularly in Africa and Latin America. However, CB cultivation faces significant threats from diseases that can drastically reduce yield and quality. Detecting these diseases solely from visual symptoms is challenging, owing to variability across pathogens and similar symptoms caused by distinct pathogens, which further complicate detection. Traditional methods relying solely on farmers' ability to detect diseases are inadequate, and while engaging expert pathologists and advanced laboratories is necessary, it can also be resource-intensive. To address this challenge, we present an AI-driven system for rapid and cost-effective CB disease detection, leveraging state-of-the-art deep learning and object detection technologies. We utilized an extensive image dataset collected from disease hotspots in Africa and Colombia, focusing on five major diseases: Angular Leaf Spot (ALS), Common Bacterial Blight (CBB), Common Bean Mosaic Virus (CBMV), Bean Rust, and Anthracnose, covering both leaf and pod samples in real-field settings; pod images were available only for Angular Leaf Spot. The study employed data augmentation techniques and annotation at both the whole and micro levels for comprehensive analysis. To train the models, we utilized three advanced YOLO architectures: YOLOv7, YOLOv8, and YOLO-NAS. For whole-leaf annotations in particular, the YOLO-NAS model achieved the highest mAP, 97.9%, and a recall of 98.8%, indicating superior detection accuracy. In contrast, for whole-pod disease detection, YOLOv7 and YOLOv8 outperformed YOLO-NAS, with mAP values exceeding 95% and recall above 93%. However, micro-level annotation consistently yielded lower performance than whole-level annotation across all disease classes and plant parts for all YOLO models, highlighting an unexpected discrepancy in detection accuracy. Furthermore, we successfully deployed the YOLO-NAS models in an Android app and validated their effectiveness on unseen data from disease hotspots, with high classification accuracy (90%). This accomplishment showcases the integration of deep learning into our production pipeline, a process known as DLOps. The approach significantly reduces diagnosis time, enabling farmers to take prompt management interventions, and its potential benefits extend beyond rapid diagnosis, serving as an early warning system to enhance common bean productivity and quality.
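For orientation, fine-tuning one of the YOLO variants mentioned (YOLOv8) with the ultralytics package looks roughly like the sketch below; the dataset YAML, image path, and hyperparameters are placeholders, not the study's configuration.

```python
# Hedged YOLOv8 fine-tuning sketch with the ultralytics package.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                    # pre-trained checkpoint
model.train(data="bean_diseases.yaml",        # hypothetical dataset config
            epochs=100, imgsz=640)
metrics = model.val()                         # reports mAP and recall
results = model.predict("leaf_sample.jpg")    # inference on a field image
```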


Subjects
Deep Learning; Phaseolus; Plant Diseases; Phaseolus/virology; Phaseolus/microbiology; Plant Diseases/virology; Plant Diseases/microbiology; Agriculture/methods; Plant Leaves/virology; Plant Leaves/microbiology; Africa; Colombia
17.
PLoS One ; 19(7): e0302497, 2024.
Article in English | MEDLINE | ID: mdl-38976700

ABSTRACT

This paper presents a deep-learning-based method to detect recreational vessels. The method takes advantage of existing underwater acoustic measurements from an Estuarine Soundscape Observatory Network based in the estuaries of South Carolina (SC), USA. The detection method is a two-step search method, called Deep Scanning (DS), which includes a time-domain energy analysis and a frequency-domain spectrum analysis. In the time domain, acoustic signals with higher energy, measured by sound pressure level (SPL), are labeled for the potential presence of moving vessels. In the frequency domain, the labeled acoustic signals are examined against a predefined training dataset using a neural network. This research builds the training data from diverse vessel sound features obtained from real measurements, with durations between 5.0 and 7.5 seconds and frequencies between 800 Hz and 10,000 Hz. The proposed method was then evaluated using all acoustic data from the years 2017, 2018, and 2021, a total of approximately 171,262 two-minute .wav files from three deployed locations in May River, SC. The DS detections were compared with human-observed detections for each audio file, and the results showed that the method was able to classify the existence of vessels with an average accuracy of around 99.0%.
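A conceptual sketch of the two-step Deep Scanning idea — an SPL energy gate followed by band-limited spectrum analysis — is given below; the threshold, window length, and dummy signal are illustrative, and the neural-network step is only stubbed.

```python
# Two-step vessel-detection sketch: SPL gate, then band-limited spectrum.
import numpy as np

def spl_db(frame: np.ndarray, ref: float = 1.0) -> float:
    """Sound pressure level of one frame, in dB relative to `ref`."""
    rms = np.sqrt(np.mean(frame ** 2))
    return 20 * np.log10(max(rms, 1e-12) / ref)

fs = 48_000
signal = np.random.default_rng(5).normal(0, 0.01, fs * 120)  # dummy 2-min file
win = fs * 5                                                 # 5 s windows

for start in range(0, len(signal) - win, win):
    frame = signal[start:start + win]
    if spl_db(frame) > -35.0:        # energy gate (threshold is illustrative)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(win, 1 / fs)
        band = spectrum[(freqs >= 800) & (freqs <= 10_000)]  # vessel band
        # ...the band spectrum would be fed to the trained network here...
        print(f"possible vessel at {start / fs:.0f}s, band energy {band.sum():.1f}")
```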


Subjects
Acoustics; Deep Learning; Estuaries; Rivers; South Carolina; Humans; Recreation; Sound; Ships
18.
PLoS One ; 19(7): e0302413, 2024.
Article in English | MEDLINE | ID: mdl-38976703

ABSTRACT

During the COVID-19 pandemic, pneumonia was the leading cause of respiratory failure and death. In addition to SARS-CoV-2, it can be caused by several other bacterial and viral agents. Even today, variants of SARS-CoV-2 are endemic, and COVID-19 cases are common in many places. The symptoms of COVID-19 are highly diverse, ranging from unnoticeable to severe respiratory failure. Current detection methods for the disease are time-consuming and expensive, with low accuracy and precision. To address this, we designed a framework for COVID-19 and pneumonia detection using multiple deep learning algorithms, accompanied by a deployment scheme. In this study, we utilized four prominent deep learning models, VGG-19, ResNet-50, Inception V3, and Xception, on two separate datasets of CT scan and X-ray images (COVID/non-COVID) to identify the best models for COVID-19 detection. We achieved accuracies ranging from 86% to 99%, depending on the model and dataset. To further validate our findings, we applied the four models to two supplementary datasets of X-ray images of bacterial and viral pneumonia. Additionally, we implemented a Flask app to visualize the outcome of our framework, showing the identified COVID and non-COVID images. The findings of this study will be helpful in developing an AI-driven automated tool for cost-effective, faster detection and better management of COVID-19 patients.
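The transfer-learning setup described can be sketched for one of the four models (VGG-19) in Keras as below; the input size and classification head are common defaults, not the study's exact configuration.

```python
# VGG-19 transfer-learning sketch for binary COVID/non-COVID classification.
import tensorflow as tf

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                       # freeze ImageNet features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```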


Subjects
COVID-19; Deep Learning; SARS-CoV-2; Tomography, X-Ray Computed; COVID-19/diagnostic imaging; Humans; Tomography, X-Ray Computed/methods; SARS-CoV-2/isolation & purification; Pneumonia, Viral/diagnostic imaging; Pandemics; Algorithms; Pneumonia/diagnostic imaging; Pneumonia/diagnosis; Coronavirus Infections/diagnostic imaging; Coronavirus Infections/diagnosis; Internet; Betacoronavirus
19.
Signal Transduct Target Ther ; 9(1): 183, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38972904

ABSTRACT

Helicobacter pylori (H. pylori) is currently recognized as the primary carcinogenic pathogen associated with gastric tumorigenesis, and its high prevalence and resistance make it difficult to tackle. A graph neural network-based deep learning model, employing different training sets of 13,638 molecules for pre-training and fine-tuning, was used to predict and explore novel molecules against H. pylori. A positively predicted novel berberine derivative, compound 8, bearing a 3,13-disubstituted alkene, exhibited potency against all tested drug-susceptible and drug-resistant H. pylori strains, with minimum inhibitory concentrations (MICs) of 0.25-0.5 µg/mL. Pharmacokinetic studies demonstrated ideal gastric retention of 8, with the stomach concentration significantly higher than its MIC at 24 h post-dose. Oral administration of 8 plus omeprazole (OPZ) showed a gastric bacterial reduction (2.2-log) comparable to that of the triple therapy, namely OPZ + amoxicillin (AMX) + clarithromycin (CLA), without obvious disturbance of the intestinal flora. A combination of OPZ, AMX, CLA, and 8 further decreased the bacterial load (2.8-log reduction). More importantly, monotherapy with 8 exhibited eradication comparable to both the triple-therapy (OPZ + AMX + CLA) and quadruple-therapy (OPZ + AMX + CLA + bismuth citrate) groups. SecA and BamD, which play major roles in outer membrane protein (OMP) transport and assembly, were identified and verified as the direct targets of 8 using chemoproteomics. In summary, by targeting the relatively conserved OMP transport and assembly system, 8 has the potential to be developed as a novel anti-H. pylori candidate, especially for the eradication of drug-resistant strains.


Subjects
Anti-Bacterial Agents; Berberine; Deep Learning; Helicobacter pylori; Helicobacter pylori/drug effects; Berberine/pharmacology; Berberine/chemistry; Berberine/pharmacokinetics; Anti-Bacterial Agents/pharmacology; Anti-Bacterial Agents/chemistry; Humans; Helicobacter Infections/drug therapy; Helicobacter Infections/microbiology; Microbial Sensitivity Tests; Drug Resistance, Multiple, Bacterial/drug effects; Drug Resistance, Multiple, Bacterial/genetics; Animals; Omeprazole/pharmacology; Clarithromycin/pharmacology; Amoxicillin/pharmacology
20.
BMC Med ; 22(1): 282, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38972973

ABSTRACT

BACKGROUND: Advances in deep learning-based pathological image analysis have yielded tremendous insights into cancer prognostication. Still, lack of interpretability remains a significant barrier to clinical application. METHODS: We established an integrative prognostic neural network for intrahepatic cholangiocarcinoma (iCCA) for a comprehensive evaluation of both architectural and fine-grained information from whole-slide images. Then, leveraging multi-modal data, we conducted extensive interrogative approaches to the models to extract and visualize the morphological features most correlated with clinical outcome and underlying molecular alterations. RESULTS: The models were developed and optimized on 373 iCCA patients from our center and demonstrated consistent accuracy and robustness in both internal (n = 213) and external (n = 168) cohorts. The occlusion sensitivity map revealed that the distribution of tertiary lymphoid structures, the geometric traits of the invasive margin, the relative composition of tumor parenchyma and stroma, the extent of necrosis, the presence of disseminated foci, and the tumor-adjacent micro-vessels were the determining architectural features that affected prognosis. Quantifiable morphological vectors extracted by CellProfiler demonstrated that tumor nuclei from high-risk patients exhibited significantly larger size and more distorted shape, with a less prominent nuclear envelope and less textural contrast. The multi-omics data (n = 187) further revealed that key molecular alterations, including glycolysis, hypoxia, apical junction, mTORC1 signaling, and immune infiltration, left morphological imprints that could be attended to by the network. CONCLUSIONS: We propose an interpretable deep-learning framework to gain insights into the biological behavior of iCCA. Most of the significant morphological prognosticators perceived by the network are comprehensible to human minds.
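An occlusion sensitivity map of the kind the abstract mentions can be produced by sliding a masking patch over the input and recording the drop in the predicted score, as in this stand-alone PyTorch sketch with a stand-in model.

```python
# Occlusion-sensitivity sketch: mask image regions, record prediction drops.
import torch

def occlusion_map(model, image, patch=32, stride=32, fill=0.0):
    """image: (1, C, H, W). Returns an (H//stride, W//stride) sensitivity map."""
    model.eval()
    with torch.no_grad():
        base = model(image).squeeze().item()       # unoccluded prediction
        _, _, H, W = image.shape
        heat = torch.zeros(H // stride, W // stride)
        for i in range(H // stride):
            for j in range(W // stride):
                occluded = image.clone()
                occluded[..., i*stride:i*stride+patch,
                              j*stride:j*stride+patch] = fill
                heat[i, j] = base - model(occluded).squeeze().item()
    return heat   # large values mark regions the prediction depends on

# Stand-in model, not the paper's network.
dummy_model = torch.nn.Sequential(torch.nn.Flatten(),
                                  torch.nn.Linear(3 * 128 * 128, 1))
heat = occlusion_map(dummy_model, torch.randn(1, 3, 128, 128))
print(heat.shape)   # torch.Size([4, 4])
```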


Subjects
Bile Duct Neoplasms; Cholangiocarcinoma; Deep Learning; Humans; Cholangiocarcinoma/pathology; Prognosis; Bile Duct Neoplasms/pathology; Male; Female; Middle Aged; Image Processing, Computer-Assisted/methods; Aged