Results 1 - 20 of 34,071
1.
Methods Mol Biol ; 2847: 121-135, 2025.
Article in English | MEDLINE | ID: mdl-39312140

ABSTRACT

Fundamental to the diverse biological functions of RNA are its 3D structure and conformational flexibility, which enable single sequences to adopt a variety of distinct 3D states. Currently, computational RNA design tasks are often posed as inverse problems, where sequences are designed to adopt a single desired secondary structure without considering 3D geometry and conformational diversity. In this tutorial, we present gRNAde, a geometric RNA design pipeline that operates on sets of 3D RNA backbone structures to design sequences that explicitly account for RNA 3D structure and dynamics. gRNAde is a graph neural network that uses an SE(3)-equivariant encoder-decoder framework to generate RNA sequences conditioned on backbone structures in which the identities of the bases are unknown. We demonstrate the utility of gRNAde for fixed-backbone re-design of existing RNA structures of interest from the PDB, including riboswitches, aptamers, and ribozymes. gRNAde achieves higher native sequence recovery than existing physics-based tools for 3D RNA inverse design, such as Rosetta, while being significantly faster.
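
As a toy illustration of the kind of input such geometric models consume (not gRNAde's actual code; all names here are hypothetical), a backbone can be turned into a k-nearest-neighbour graph over residue coordinates:

```python
import math

def knn_graph(coords, k=2):
    """Build a k-nearest-neighbour edge list from 3D backbone coordinates.

    coords: list of (x, y, z) tuples, one per nucleotide.
    Returns directed (i, j) edges, j among i's k nearest neighbours.
    """
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    edges = []
    for i, ci in enumerate(coords):
        # Sort the other residues by distance and keep the k closest.
        others = sorted(
            (j for j in range(len(coords)) if j != i),
            key=lambda j: dist(ci, coords[j]),
        )
        edges.extend((i, j) for j in others[:k])
    return edges

# Toy 4-nucleotide backbone laid out along a line.
backbone = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
print(knn_graph(backbone, k=1))  # each residue links to its nearest neighbour
```

A real pipeline would attach geometric edge features (distances, orientations) to these edges and feed the graph to the SE(3)-equivariant network.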


Subjects
Deep Learning; Nucleic Acid Conformation; RNA; Software; RNA/chemistry; RNA/genetics; Computational Biology/methods; RNA, Catalytic/chemistry; RNA, Catalytic/genetics; Models, Molecular; Neural Networks, Computer
2.
Methods Mol Biol ; 2847: 63-93, 2025.
Article in English | MEDLINE | ID: mdl-39312137

ABSTRACT

Machine learning algorithms, and in particular deep learning approaches, have recently garnered attention in the field of molecular biology due to their remarkable results. In this chapter, we describe machine learning approaches specifically developed for the design of RNAs, with a focus on the learna_tools Python package, a collection of automated deep reinforcement learning algorithms for secondary structure-based RNA design. We explain the basic concepts of reinforcement learning and its extension, automated reinforcement learning, and outline how these concepts can be successfully applied to the design of RNAs. The chapter is structured to guide the reader through the usage of the different programs with explicit examples, highlighting particular applications of the individual tools.
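
To make the design objective concrete (a minimal sketch, not the learna_tools API; all helper names are invented), secondary structure-based RNA design can be phrased as maximizing a reward that counts how many target base pairs a candidate sequence realizes. Here a simple mutate-and-accept loop stands in for the RL agent:

```python
import random

PAIRS = {("G", "C"), ("C", "G"), ("A", "U"), ("U", "A"), ("G", "U"), ("U", "G")}

def pair_table(dot_bracket):
    """Map each '(' position to its matching ')' position."""
    stack, pairs = [], {}
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            pairs[stack.pop()] = i
    return pairs

def reward(seq, pairs):
    """Fraction of target base pairs realised by the sequence."""
    if not pairs:
        return 1.0
    ok = sum((seq[i], seq[j]) in PAIRS for i, j in pairs.items())
    return ok / len(pairs)

def design(structure, steps=2000, seed=0):
    """Local-search 'agent': mutate one position, keep the move unless reward drops."""
    rng = random.Random(seed)
    pairs = pair_table(structure)
    seq = [rng.choice("ACGU") for _ in structure]
    best = reward(seq, pairs)
    for _ in range(steps):
        i = rng.randrange(len(seq))
        old = seq[i]
        seq[i] = rng.choice("ACGU")
        r = reward(seq, pairs)
        if r >= best:
            best = r
        else:
            seq[i] = old  # undo a harmful mutation
    return "".join(seq), best

seq, score = design("((...))")
print(seq, score)
```

A real RL formulation replaces the blind mutations with a learned policy (and learna_tools additionally automates the tuning of that learner).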


Subjects
Algorithms; Machine Learning; Nucleic Acid Conformation; RNA; Software; RNA/chemistry; RNA/genetics; Computational Biology/methods; Deep Learning
3.
Methods Mol Biol ; 2847: 153-161, 2025.
Article in English | MEDLINE | ID: mdl-39312142

ABSTRACT

Understanding the connection between complex structural features of RNA and biological function is a fundamental challenge in evolutionary studies and in RNA design. However, building datasets of RNA 3D structures and making appropriate modeling choices remain time-consuming and lack standardization. In this chapter, we describe the use of rnaglib to train supervised and unsupervised machine learning-based function prediction models on datasets of RNA 3D structures.


Subjects
Computational Biology; Nucleic Acid Conformation; RNA; Software; RNA/chemistry; RNA/genetics; Computational Biology/methods; Machine Learning; Models, Molecular
4.
Methods Mol Biol ; 2847: 241-300, 2025.
Article in English | MEDLINE | ID: mdl-39312149

ABSTRACT

Nucleic acid tests (NATs) are considered the gold standard in molecular diagnosis. To meet the demand for onsite, point-of-care, specific, and sensitive detection and genotyping of pathogens and pathogenic variants, including at trace levels, various types of NATs have been developed since the discovery of PCR. As alternatives to traditional NATs (e.g., PCR), isothermal nucleic acid amplification techniques (INAATs) such as LAMP, RPA, SDA, HDR, NASBA, and HCA have gradually been developed. PCR and most of these techniques depend strongly on efficient and optimal primer and probe design to deliver accurate and specific results. This chapter starts with a discussion of traditional NATs and INAATs alongside a description of the computational tools available to aid primer/probe design for NATs and INAATs. Besides briefly covering nanoparticle-assisted NATs, a more comprehensive presentation is given of the role CRISPR-based technologies have played in molecular diagnosis. Here we provide examples of a few groundbreaking CRISPR assays that have been developed to counter epidemics and pandemics and outline CRISPR biology, highlighting the role of the CRISPR guide RNA and its design in any successful CRISPR-based application. In this respect, we tabulate computational tools that are available to aid the design of guide RNAs in CRISPR-based applications. In the second part of the chapter, we discuss machine learning (ML)- and deep learning (DL)-based computational approaches that facilitate the design of efficient primers and probes for NATs/INAATs and guide RNAs for CRISPR-based applications. Given the role of microRNAs (miRNAs) as potential future biomarkers of disease diagnosis, we also discuss ML/DL-based computational approaches for miRNA-target prediction.
Our chapter presents the evolution of nucleic acid-based diagnostic techniques from PCR and INAATs to more advanced CRISPR/Cas-based methodologies, in concert with the evolution of ML- and DL-based computational tools in the most relevant application domains.
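
As a small, self-contained example of the kind of quantities primer-design tools compute (a sketch of standard textbook rules, not any specific tool's implementation), GC content and a Wallace-rule melting temperature can be estimated directly from the primer sequence:

```python
def gc_content(primer):
    """Fraction of G/C bases in a primer."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer):
    """Wallace rule melting temperature (degrees C): 2*(A+T) + 4*(G+C).

    A rough rule of thumb, reasonable only for short oligos.
    """
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

primer = "ACGTACGTACGT"
print(gc_content(primer))  # 0.5
print(wallace_tm(primer))  # 2*6 + 4*6 = 36
```

Production primer-design software goes well beyond this, using nearest-neighbour thermodynamic models, secondary-structure checks, and specificity screening against the target genome.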


Subjects
Deep Learning; Humans; CRISPR-Cas Systems; Molecular Diagnostic Techniques/methods; Nucleic Acid Amplification Techniques/methods; RNA/genetics; Machine Learning; Clustered Regularly Interspaced Short Palindromic Repeats/genetics
5.
Methods Mol Biol ; 2834: 3-39, 2025.
Article in English | MEDLINE | ID: mdl-39312158

ABSTRACT

Quantitative structure-activity relationships (QSAR) is a method for predicting the physical and biological properties of small molecules; it is in use in industry and public services. However, like any scientific method, it is being challenged by more and more requests, especially considering its possible role in assessing the safety of new chemicals. To answer the question of whether QSAR, by exploiting available knowledge, can build new knowledge, this chapter reviews QSAR methods in search of a QSAR epistemology. QSAR stands on three pillars: biological data, chemical knowledge, and modeling algorithms. Usually the biological data, resulting from good experimental practice, are taken as a true picture of the world, and chemical knowledge has scientific bases; so if a QSAR model is not working, the blame falls on the modeling. The role of modeling in developing scientific theories, and in producing knowledge, is therefore analyzed. QSAR is a mature technology and is part of a large body of in silico and other computational methods. An active debate about the acceptability of QSAR models, the way to communicate them, and the explanations to provide accompanies the development of today's QSAR models. An example of predicting possible endocrine-disrupting chemicals (EDCs) shows the many faces of modern QSAR methods.


Subjects
Quantitative Structure-Activity Relationship; Algorithms; Humans; Endocrine Disruptors/chemistry
6.
Methods Mol Biol ; 2856: 357-400, 2025.
Article in English | MEDLINE | ID: mdl-39283464

ABSTRACT

Three-dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, topologically associating domains (TADs), and A/B compartments, play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even in single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technologies and tools, and to low data resolution. Machine learning methods have emerged as an alternative for obtaining missing 3D interactions and/or improving resolution. Such methods frequently use genome annotation data (ChIP-seq, DNase-seq, etc.), DNA sequence information (k-mers and transcription factor binding site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, and TAD boundaries) and analyze their pros and cons. We also point out obstacles to the computational prediction of 3D interactions and suggest future research directions.
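
As an illustration of one of the sequence features mentioned above (a generic sketch, not any reviewed tool's code), overlapping k-mer counts can be extracted from a DNA sequence in a few lines:

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count overlapping k-mers in a DNA sequence.

    k-mer frequency vectors are a common input feature for models that
    associate sequence with chromatin interactions.
    """
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = kmer_counts("ACGACGT", k=3)
print(counts)  # 'ACG' appears twice among the five overlapping 3-mers
```

A feature matrix for a model would stack one such vector (over a fixed k-mer vocabulary) per genomic bin, typically alongside annotation tracks such as ChIP-seq signal.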


Assuntos
Cromatina , Aprendizado Profundo , Cromatina/genética , Cromatina/metabolismo , Humanos , Biologia Computacional/métodos , Aprendizado de Máquina , Genômica/métodos , Elementos Facilitadores Genéticos , Regiões Promotoras Genéticas , Sítios de Ligação , Genoma , Software
7.
Spectrochim Acta A Mol Biomol Spectrosc ; 324: 125001, 2025 Jan 05.
Article in English | MEDLINE | ID: mdl-39180971

ABSTRACT

Utilizing visible and near-infrared (Vis-NIR) spectroscopy in conjunction with chemometric methods has become widespread for identifying plant diseases. However, a key obstacle is the extraction of relevant spectral characteristics. This study combined a convolutional neural network (CNN) with continuous wavelet transform (CWT) spectrograms for spectral feature extraction within the Vis-NIR range (380-1400 nm) to improve the accuracy of sugarcane disease recognition. Using 130 sugarcane leaf samples, the one-dimensional CWT coefficients obtained from Vis-NIR spectra were transformed into two-dimensional spectrograms. Spectrogram features were extracted with the CNN and incorporated into decision tree, K-nearest neighbour, partial least squares discriminant analysis, and random forest (RF) calibration models. The RF model, integrating spectrogram-derived features, demonstrated the best performance, with an average precision of 0.9111, sensitivity of 0.9733, specificity of 0.9791, and accuracy of 0.9487. This study may offer a non-destructive, rapid, and accurate means to detect sugarcane diseases, enabling farmers to receive timely and actionable insights on crop health, thus minimizing crop loss and optimizing yields.
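
To sketch the CWT step described above (a from-scratch illustration using a Ricker wavelet, not the study's code), a 1D spectrum can be convolved with wavelets at several scales to form a 2D scalogram of the kind fed to a CNN:

```python
import math

def ricker(points, width):
    """Ricker ('Mexican hat') wavelet sampled at `points` positions."""
    a = 2.0 / (math.sqrt(3.0 * width) * math.pi ** 0.25)
    out = []
    for i in range(points):
        x = i - (points - 1) / 2.0
        t = (x / width) ** 2
        out.append(a * (1 - t) * math.exp(-t / 2.0))
    return out

def cwt_row(signal, width):
    """One scalogram row: the signal convolved with a wavelet at one scale."""
    w = ricker(min(10 * width, len(signal)), width)
    half = len(w) // 2
    row = []
    for i in range(len(signal)):
        acc = 0.0
        for j, wj in enumerate(w):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += signal[k] * wj
        row.append(acc)
    return row

def cwt(signal, widths):
    """Stack rows over several scales: a 2D scalogram (scales x wavelengths)."""
    return [cwt_row(signal, w) for w in widths]

spectrum = [0.0] * 20
spectrum[10] = 1.0  # an isolated absorption peak
scalogram = cwt(spectrum, widths=[1, 2, 4])
print(len(scalogram), len(scalogram[0]))  # 3 20
```

The resulting 2D array localises spectral features in both wavelength and scale, which is what makes it a natural image-like input for a CNN.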


Subjects
Deep Learning; Plant Diseases; Saccharum; Spectroscopy, Near-Infrared; Wavelet Analysis; Saccharum/chemistry; Spectroscopy, Near-Infrared/methods; Plant Leaves/chemistry; Least-Squares Analysis; Discriminant Analysis
8.
J Biomed Opt ; 30(Suppl 1): S13706, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39295734

ABSTRACT

Significance: Oral cancer surgery requires accurate margin delineation to balance complete resection with post-operative functionality. Current in vivo fluorescence imaging systems provide two-dimensional margin assessment yet fail to quantify tumor depth prior to resection. Harnessing structured light in combination with deep learning (DL) may provide near real-time three-dimensional margin detection. Aim: A DL-enabled fluorescence spatial frequency domain imaging (SFDI) system trained with in silico tumor models was developed to quantify the depth of oral tumors. Approach: A convolutional neural network was designed to produce tumor depth and concentration maps from SFDI images. Three in silico representations of oral cancer lesions were developed to train the DL architecture: cylinders, spherical harmonics, and composite spherical harmonics (CSHs). Each model was validated with in silico SFDI images of patient-derived tongue tumors, and the CSH model was further validated with optical phantoms. Results: The performance of the CSH model was superior when presented with patient-derived tumors (P-value < 0.05). The CSH model could predict depth and concentration within 0.4 mm and 0.4 µg/mL, respectively, for in silico tumors with depths less than 10 mm. Conclusions: A DL-enabled SFDI system trained with in silico CSHs demonstrates promise in defining the deep margins of oral tumors.


Subjects
Computer Simulation; Deep Learning; Mouth Neoplasms; Optical Imaging; Phantoms, Imaging; Surgery, Computer-Assisted; Optical Imaging/methods; Humans; Mouth Neoplasms/diagnostic imaging; Mouth Neoplasms/surgery; Mouth Neoplasms/pathology; Surgery, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Margins of Excision
9.
Clin Pract Epidemiol Ment Health ; 20: e17450179315688, 2024.
Article in English | MEDLINE | ID: mdl-39355197

ABSTRACT

Introduction: This study aims to investigate the potential of machine learning in predicting mental health conditions among college students by analyzing existing literature on mental health diagnoses using various machine learning algorithms. Methods: The research employed a systematic literature review methodology to investigate the application of deep learning techniques in predicting mental health diagnoses among students from 2011 to 2024. The search strategy involved key terms such as "deep learning" and "mental health," conducted on reputable repositories such as IEEE Xplore, ScienceDirect, SpringerLink, PLOS, and Elsevier. Papers published between January 2011 and May 2024, specifically focusing on deep learning models for mental health diagnoses, were considered. The selection process adhered to PRISMA guidelines and resulted in 30 relevant studies. Results: The study highlights Convolutional Neural Networks (CNN), Random Forest (RF), Support Vector Machine (SVM), Deep Neural Networks, and Extreme Learning Machine (ELM) as prominent models for predicting mental health conditions. Among these, CNN demonstrated exceptional accuracy compared to other models in diagnosing bipolar disorder. However, challenges persist, including the need for more extensive and diverse datasets, consideration of heterogeneity in mental health conditions, and inclusion of longitudinal data to capture temporal dynamics. Conclusion: This study offers valuable insights into the potential and challenges of machine learning in predicting mental health conditions among college students. While deep learning models like CNN show promise, addressing data limitations and incorporating temporal dynamics are crucial for further advancements.

10.
Front Microbiol ; 15: 1446097, 2024.
Article in English | MEDLINE | ID: mdl-39355420

ABSTRACT

Bacteriophages are the most prolific organisms on Earth, yet many of their genomes and assemblies from metagenomic sources lack protein sequences with identified functions. While most bacteriophage proteins are structural proteins, categorized as Phage Virion Proteins (PVPs), a considerable number remain unclassified. Complicating matters further, traditional lab-based methods for PVP identification can be tedious. To expedite PVP identification, machine-learning models are increasingly being employed. Existing tools provide models that predict PVPs from protein sequences, but none accept both genomic and metagenomic data as input. In addition, there is currently no framework available for easily curating data and creating new types of machine learning models. In response, we introduce PhageScanner, an open-source platform that streamlines data collection for genomic and metagenomic datasets, supports model training and testing, and includes a prediction pipeline for annotating genomic and metagenomic data. PhageScanner also features a graphical user interface (GUI) for visualizing annotations on genomic and metagenomic data. We further introduce a BLAST-based classifier that outperforms ML-based models and an efficient Long Short-Term Memory (LSTM) classifier. We then showcase the capabilities of PhageScanner by predicting PVPs in six previously uncharacterized bacteriophage genomes. In addition, we create a new model that predicts phage-encoded toxins within bacteriophage genomes, thus displaying the utility of the framework.

11.
Rev Cardiovasc Med ; 25(9): 335, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39355611

ABSTRACT

Background: Congenital heart diseases (CHDs), particularly atrial and ventricular septal defects, pose significant health risks and are challenging to detect via echocardiography. Doctors often rely on cardiac structural information during the diagnostic process. However, prior CHD research has not determined the influence of including cardiac structural information during the labeling process or of applying data augmentation techniques. Methods: This study utilizes advanced artificial intelligence (AI)-driven object detection frameworks, specifically You Only Look Once (YOLO)v5, YOLOv7, and YOLOv9, to assess the impact of including cardiac structural information and data augmentation techniques on the identification of septal defects in echocardiographic images. Results: The experimental results reveal that different labeling strategies substantially affect the performance of the detection models. Notably, adjustments in bounding box dimensions and the inclusion of cardiac structural details in the annotations are key factors influencing the accuracy of the model. The application of deep learning techniques in echocardiography enhances the precision of detecting septal heart defects. Conclusions: This study confirms that careful annotation of imaging data is crucial for optimizing the performance of object detection algorithms in medical imaging. These findings suggest potential pathways for refining AI applications in diagnostic cardiology.

12.
J Environ Manage ; 370: 122703, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39357440

ABSTRACT

Accurate prediction of PM2.5 concentrations in ports is crucial for authorities to combat ambient air pollution effectively and protect the health of port staff. However, in port clusters formed by multiple neighboring ports, we encountered several challenges owing to the impact of unique meteorological conditions, potential correlation between PM2.5 levels in neighboring ports, and the coupling influence of background pollutants in city zones. Therefore, considering the spatiotemporal correlation among the factors influencing PM2.5 concentration variations within the harbor cluster, we developed a novel blending ensemble deep learning model. The proposed model combined the strengths of four deep learning architectures: graph convolutional networks (GCN), long short-term memory networks (LSTM), residual neural networks (ResNet), and convolutional neural networks (CNN). GCN, LSTM, and ResNet served as the base models aimed at capturing the spatial correlation of PM2.5 concentrations in neighboring ports, the potential long-term dependence of meteorological factors and PM2.5 concentrations, and the effects of urban ambient air pollutants, respectively. Following the blending ensemble technique, the prediction outcomes of the three base models were used as the input data for the meta-model CNN, which produces the final prediction results. Based on actual data obtained from 18 ports in Nanjing, the proposed model was compared and analyzed for its prediction performance against six state-of-the-art models. The findings revealed that the proposed model provided more accurate predictions. It reduced mean absolute error (MAE) by 10.59%-20.00%, reduced root mean square error (RMSE) by 13.22%-17.11%, improved the coefficient of determination (R2) by 10%-35.38%, and improved accuracy (ACC) by 3.48%-7.08%.
Additionally, the contribution of each component to the prediction performance of the proposed model was measured using a systematic ablation study. The results demonstrated that the GCN model exerted the most substantial influence on the prediction performance of the GCN-LSTM-ResNet model, followed by the LSTM model. Including urban background pollutants significantly enhances the generalizability of the complete model. Moreover, a comparison with three blended ensemble models incorporating any two base models demonstrated that the GCN-LSTM-ResNet model exhibited superior prediction performance and was particularly strong in predicting high-concentration events. Specifically, the GCN-LSTM-ResNet model reduced MAE and RMSE by at least 12.3% and 9.2%, respectively, and improved R2 and ACC by 26.1% and 6.8%, respectively. The proposed model provided reliable PM2.5 concentration predictions and decision support for air quality management strategies in dry bulk port clusters.
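
The blending idea described above can be sketched in a few lines (toy numbers and a linear meta-model stand in for the CNN meta-model; all values are hypothetical): base-model predictions become the meta-model's inputs:

```python
def blend(base_preds, meta_weights, bias=0.0):
    """Blending ensemble: a (here linear) meta-model combines base-model outputs.

    base_preds: list of per-model prediction lists (GCN/LSTM/ResNet stand-ins).
    meta_weights: one weight per base model, learned on a held-out blend set.
    """
    n = len(base_preds[0])
    return [
        bias + sum(w * preds[i] for w, preds in zip(meta_weights, base_preds))
        for i in range(n)
    ]

def mae(y_true, y_pred):
    """Mean absolute error between two series."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Three hypothetical base models predicting PM2.5 at four time steps.
gcn    = [34.0, 36.0, 40.0, 38.0]
lstm   = [30.0, 35.0, 41.0, 37.0]
resnet = [33.0, 34.0, 39.0, 40.0]
truth  = [32.0, 35.0, 40.0, 38.0]

blended = blend([gcn, lstm, resnet], meta_weights=[1 / 3] * 3)
print(mae(truth, blended))  # lower than any single base model's MAE here
```

In the paper's setup the meta-model is a trained CNN rather than fixed averaging weights, but the data flow (base predictions in, final prediction out) is the same.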

13.
Phys Med Biol ; 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39357529

ABSTRACT

OBJECTIVE: Normal tissue complication probability (NTCP) modelling is rapidly embracing deep learning (DL) methods, acknowledging the importance of spatial dose information. Finding effective ways to combine information from radiation dose distribution maps (dosiomics) and clinical data involves technical challenges and requires domain knowledge. We propose different multi-modality data fusion strategies to facilitate future DL-based NTCP studies. Approach: Early, joint and late DL multi-modality fusion strategies were compared using clinical and mandibular radiation dose distribution volumes. These were contrasted with single-modality models: a random forest trained on non-image data (clinical, demographic and dose-volume metrics) and a 3D DenseNet-40 trained on image data (mandibular dose distribution maps). The study involved a matched cohort of 92 ORN cases and 92 controls from a single institution. Main results: The late fusion model exhibited superior discrimination and calibration performance, while the joint fusion achieved a more balanced distribution of the predicted probabilities. Discrimination performance did not significantly differ between strategies. Late fusion, though less technically complex, lacks crucial inter-modality interactions for NTCP modelling. In contrast, joint fusion, despite its complexity, resulted in a single network training process that included intra- and inter-modality interactions in its model parameter optimisation. Significance: This study is a pioneering effort in comparing different strategies for including image data in DL-based NTCP models in combination with lower-dimensional data such as clinical variables. The discrimination performance of such multi-modality NTCP models and the choice of fusion strategy will depend on the distribution and quality of both types of data. Multiple data fusion strategies should be compared and reported in multi-modality NTCP modelling using DL.
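
The strategies differ in where the modalities meet. A minimal sketch (hypothetical values, not the study's models) contrasts early fusion, which concatenates features before a single model sees them, with late fusion, which combines the outputs of per-modality models:

```python
def early_fusion(image_feats, clinical_feats):
    """Early fusion: concatenate modality features into one input vector."""
    return image_feats + clinical_feats

def late_fusion(p_image, p_clinical, w=0.5):
    """Late fusion: weighted average of per-modality model probabilities."""
    return w * p_image + (1 - w) * p_clinical

# Hypothetical NTCP-style probabilities from two single-modality models.
p_img, p_clin = 0.70, 0.40
print(early_fusion([0.1, 0.2], [1.0, 0.0]))  # [0.1, 0.2, 1.0, 0.0]
print(late_fusion(p_img, p_clin))            # ~0.55
```

Joint fusion sits between the two: each modality gets its own encoder, but the encoders and the combining head are trained together in one optimisation, which is what lets it learn inter-modality interactions.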

14.
Drug Discov Today ; : 104195, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39357621

ABSTRACT

Early toxicity assessment plays a vital role in the drug discovery process on account of its significant influence on the attrition rate of candidates. Recently, constant upgrading of information technology has greatly promoted the continuous development of toxicity prediction. To give an overview of the current state of data-driven toxicity prediction, we reviewed relevant studies and summarized them in three main respects: the features and difficulties of toxicity prediction, the evolution of modeling approaches, and the available tools for toxicity prediction. For each approach, we describe the research status, existing challenges, and feasible solutions. Finally, several new directions and suggestions for toxicity prediction are put forward.

15.
Surv Ophthalmol ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39357748

ABSTRACT

We focus on the utility of artificial intelligence (AI) in the management of macular hole (MH). Specifically, we examine AI's role in the diagnosis, treatment, and recovery of MH. We report each AI model's development strategy, validation, tasks, performance, strengths, and limitations. We conducted a comprehensive search across 5 electronic databases, including Ovid MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Web of Science Core Collection, from inception to September 26, 2023. A total of 1,262 articles were retrieved, with 25 studies meeting inclusion criteria. The AI models were developed using a total of 209,443 images for training, 30,011 for validation, and 223,592 for testing. There were a total of 40 distinct AI algorithms. Supervised, unsupervised, and a combination of AI strategies were used in 22 (88%), 1 (4%), and 2 (8%) studies, respectively. Twenty studies (80%) used AI solely to analyze images, whereas 5 (20%) analyzed both images and clinical features, including patient demographic data and morphological characteristics of the MH. Twelve studies (48%) implemented AI for diagnosis, 5 (20%) identified MH characteristics, and 5 (20%) focused on postoperative predictions of hole closure and vision recovery. No articles studied treatment planning. Of the 10 studies comparing AI performance to human graders, 5 (50%) noted equivalent or higher performance based on the quantitative performance metrics they collected. Overall, AI analysis of images and clinical characteristics in MH demonstrated high diagnostic and predictive accuracy, with 14 studies (56%) reporting performance metric values (including accuracy, sensitivity, specificity, and precision) above 90%, along with areas under the curve above 0.9. Convolutional neural networks comprised the majority of included AI models, including those which were high performing.
Future research may consider validating algorithms to propose personalized treatment plans and explore clinical use of the aforementioned algorithms. NARRATIVE ABSTRACT: This scoping review focuses on the utility of artificial intelligence (AI) in the management of macular hole (MH). This review synthesizes 25 studies, comprehensively reporting on each AI model's development strategy, validation, tasks, performance, strengths, and limitations. All models analyzed ophthalmic images, and 5 (20%) also analyzed clinical features. Study objectives were categorized based on three stages of MH care: diagnosis, identification of MH characteristics, and postoperative predictions of hole closure and vision recovery. Twenty-two (88%) AI models underwent supervised learning, and the models were most often deployed to determine a MH diagnosis. None of the articles applied AI to guiding treatment plans. AI model performance was compared to other algorithms and to human graders. Of the 10 studies comparing AI to human graders (i.e., retinal specialists, general ophthalmologists, and ophthalmology trainees), 5 (50%) reported equivalent or higher performance. Overall, AI analysis of images and clinical characteristics in MH demonstrated high diagnostic and predictive accuracy. Convolutional neural networks comprised the majority of included AI models, including those which were high performing. Future research may consider validating algorithms to propose personalized treatment plans and explore clinical use of the aforementioned algorithms.

16.
Spine J ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39357744

ABSTRACT

BACKGROUND CONTEXT: A deep learning (DL) model for degenerative cervical spondylosis on MRI could enhance reporting consistency and efficiency, addressing a significant global health issue. PURPOSE: Create a DL model to detect and classify cervical cord signal abnormalities, spinal canal stenosis, and neural foraminal stenosis. STUDY DESIGN/SETTING: Retrospective study conducted from January 2013 to July 2021, excluding cases with instrumentation. PATIENT SAMPLE: Overall, 504 cervical spine MRIs were analyzed (504 patients, mean = 58 years ± 13.7 [SD]; 202 women), with 454 (90%) for training and 50 (10%) for internal testing. In addition, 100 cervical spine MRIs were available for external testing (100 patients, mean = 60 years ± 13.0 [SD]; 26 women). OUTCOME MEASURES: Automated detection and classification of spinal canal stenosis, neural foraminal stenosis, and cord signal abnormality using the DL model. Recall (%), inter-rater agreement (Gwet's kappa), sensitivity, and specificity were calculated. METHODS: Utilizing axial T2-weighted gradient echo and sagittal T2-weighted images, a transformer-based DL model was trained on data labeled by an experienced musculoskeletal radiologist (12 years of experience). Internal testing involved data labeled in consensus by two musculoskeletal radiologists (reference standard, both with 12 years of experience), two subspecialist radiologists, and two in-training radiologists. External testing was performed. RESULTS: The DL model exhibited substantial agreement, surpassing all readers, in all classes for spinal canal (κ=0.78, p<0.001 vs. κ range 0.57-0.70 for readers) and neural foraminal stenosis (κ=0.80, p<0.001 vs. κ range 0.63-0.69 for readers) classification. The DL model's recall for cord signal abnormality (92.3%) was similar to that of all readers (range: 92.3-100.0%). Nearly perfect agreement was demonstrated for binary classification (normal/mild vs. moderate/severe) (κ=0.95, p<0.001 for spinal canal; κ=0.90, p<0.001 for neural foramina).
External testing showed substantial agreement across all classes (κ=0.76, p<0.001 for spinal canal; κ=0.66, p<0.001 for neural foramina) and high recall for cord signal abnormality (91.9%). The DL model demonstrated high sensitivities (range: 83.7%-92.4%) and specificities (range: 87.8%-98.3%) on both internal and external datasets for spinal canal and neural foramina classification. CONCLUSIONS: Our DL model for degenerative cervical spondylosis on MRI showed good performance, demonstrating substantial agreement with the reference standard. This tool could assist radiologists in improving the efficiency and consistency of MRI cervical spondylosis assessments in clinical practice.

17.
Brief Bioinform ; 25(6)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39358035

ABSTRACT

High affinity is crucial for the efficacy and specificity of an antibody. Because they involve high-throughput screens, biological experiments for antibody affinity maturation are time-consuming and have a low success rate. Precise computationally assisted antibody design promises to accelerate this process, but there is still a lack of effective computational methods capable of pinpointing beneficial mutations within the complementarity-determining regions (CDRs) of antibodies. Moreover, random mutations often lead to challenges in antibody expression and immunogenicity. In this study, to enhance the affinity of a human antibody against avian influenza virus, a CDR library was constructed, and evolutionary information was acquired through sequence alignment to restrict the mutation positions and types. Concurrently, a statistical potential methodology was developed, based on amino acid interactions between antibodies and antigens, to calculate potentially affinity-enhanced antibodies, which were further subjected to molecular dynamics simulations. Subsequently, experimental validation confirmed that a point mutation conferring a 2.5-fold affinity enhancement was obtained from 10 designs, resulting in an antibody affinity of 2 nM. A predictive model for antibody-antigen interactions based on the binding interface was also developed, achieving an area under the curve (AUC) of 0.83 and a precision of 0.89 on the test set. Lastly, a novel approach involving combinations of affinity-enhancing mutations and an iterative mutation optimization scheme similar to the Monte Carlo method were proposed. This study presents computational methods that rapidly and accurately enhance antibody affinity while addressing issues related to antibody expression and immunogenicity.


Subjects
Antibody Affinity; Complementarity Determining Regions; Computational Biology; Humans; Complementarity Determining Regions/genetics; Complementarity Determining Regions/immunology; Computational Biology/methods; Molecular Dynamics Simulation; Antibodies/immunology; Antibodies/chemistry; Antibodies/genetics; Antibodies, Viral/immunology; Mutation
18.
Sci Rep ; 14(1): 22885, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39358373

RESUMO

Predicting rock tunnel squeezing in underground projects is challenging due to its intricate and unpredictable nature. This study proposes an innovative approach to enhance the accuracy and reliability of tunnel squeezing prediction, combining ensemble learning techniques with Q-learning and online Markov chain integration. A deep learning model is trained on a comprehensive database comprising tunnel parameters including diameter (D), burial depth (H), support stiffness (K), and tunneling quality index (Q). Multiple deep learning models are trained concurrently, leveraging ensemble learning to capture diverse patterns and improve prediction performance. Integration of the Q-learning-online Markov chain further refines predictions: the online Markov chain analyzes historical sequences of tunnel parameters and squeezing class transitions, establishing transition probabilities between different squeezing classes, while the Q-learning algorithm optimizes decision-making by learning the optimal policy for transitioning between tunnel states. The proposed model is evaluated on a dataset from various tunnel construction projects, with performance assessed through metrics such as accuracy, precision, recall, and F1-score. Results demonstrate the effectiveness of the ensemble deep learning model combined with the Q-learning-online Markov chain in predicting surrounding rock tunnel squeezing. This approach offers insights into parameter interrelationships and dynamic squeezing characteristics, enabling proactive planning and implementation of support measures to mitigate tunnel squeezing hazards and ensure the safety of underground structures. Experimental results show the model achieves a prediction accuracy of 98.11%, surpassing individual CNN and RNN models, with an AUC value of 0.98.

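The coupling of an online Markov chain with Q-learning described above can be sketched in miniature with tabular methods. Everything below — the squeezing classes, the two support actions, and the reward function — is a hypothetical stand-in for illustration, not the paper's actual setup:

```python
import random

random.seed(0)

# Squeezing classes: 0 = none, 1 = mild, 2 = severe (illustrative labels).
N_CLASSES = 3

# Online Markov chain: transition counts updated as new tunnel sections
# are observed; probabilities are re-estimated from the running counts.
counts = [[1] * N_CLASSES for _ in range(N_CLASSES)]  # Laplace-smoothed

def observe_transition(i, j):
    counts[i][j] += 1

def transition_prob(i, j):
    return counts[i][j] / sum(counts[i])

# Tabular Q-learning over (class, action); actions are support decisions,
# e.g. 0 = light support, 1 = heavy support (hypothetical).
N_ACTIONS = 2
Q = [[0.0] * N_ACTIONS for _ in range(N_CLASSES)]
alpha, gamma = 0.1, 0.9

def reward(state, action):
    # Hypothetical reward: heavy support only pays off in severe squeezing.
    if state == 2:
        return 1.0 if action == 1 else -1.0
    return 1.0 if action == 0 else -0.2

def step(state):
    # Sample the next squeezing class from the current Markov estimate.
    r, acc = random.random(), 0.0
    for j in range(N_CLASSES):
        acc += transition_prob(state, j)
        if r <= acc:
            return j
    return N_CLASSES - 1

def train(episodes=2000, horizon=20):
    for _ in range(episodes):
        s = random.randrange(N_CLASSES)
        for _ in range(horizon):
            a = random.randrange(N_ACTIONS)  # pure-exploration policy
            s2 = step(s)
            Q[s][a] += alpha * (reward(s, a) + gamma * max(Q[s2]) - Q[s][a])
            observe_transition(s, s2)
            s = s2

train()
```

After training, the greedy policy `argmax(Q[s])` selects heavy support only for the severe class, because the learned transition model and rewards make that the value-maximizing action.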
19.
Sci Rep ; 14(1): 22926, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39358428

RESUMO

The COVID-19 pandemic affected countries across the globe, demanding drastic public health policies to mitigate the spread of infection, which led to economic crises as collateral damage. In this work, we investigate the impact of human mobility, described via international commercial flights, on COVID-19 infection dynamics on a global scale. We developed a graph neural network (GNN)-based framework called Dynamic Weighted GraphSAGE (DWSAGE), which operates over spatiotemporal graphs and is well-suited for dynamically changing flight information updated daily. This architecture is designed to be structurally sensitive, capable of learning the relationships between edge features and node features. To gain insights into the influence of air traffic on infection spread, we conducted local sensitivity analysis on our model through perturbation experiments. Our analyses identified Western Europe, the Middle East, and North America as leading regions in fueling the pandemic due to the high volume of air traffic originating or transiting through these areas. We used these observations to propose air traffic reduction strategies that can significantly impact controlling the pandemic with minimal disruption to human mobility. Our work provides a robust deep learning-based tool to study global pandemics and is of key relevance to policymakers for making informed decisions regarding air traffic restrictions during future outbreaks.


Subjects
Aviation , COVID-19 , Deep Learning , Pandemics , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Pandemics/prevention & control , SARS-CoV-2/isolation & purification , Neural Networks, Computer
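The DWSAGE architecture itself is not reproduced here; the toy sketch below only illustrates the underlying idea of an edge-weight-aware, GraphSAGE-style mean aggregation over a flight graph. The regions, flight volumes, and node features are all invented for the example:

```python
# Toy directed flight graph: edges (src, dst, weight), where the weight is
# a hypothetical daily flight volume between regions.
edges = [("EU", "NA", 100.0), ("ME", "NA", 40.0),
         ("EU", "AS", 60.0), ("NA", "AS", 20.0)]

# Node features, e.g. [active cases, thousands] per region (illustrative).
features = {"EU": [5.0], "NA": [3.0], "ME": [2.0], "AS": [1.0]}

def weighted_neighbor_mean(node):
    """Aggregate incoming neighbor features, weighted by flight volume,
    in the spirit of a weighted GraphSAGE mean aggregator."""
    incoming = [(src, w) for src, dst, w in edges if dst == node]
    dim = len(features[node])
    if not incoming:
        return [0.0] * dim
    total = sum(w for _, w in incoming)
    agg = [0.0] * dim
    for src, w in incoming:
        for k in range(dim):
            agg[k] += (w / total) * features[src][k]
    return agg

def sage_layer():
    """One propagation step: concatenate self and aggregated features.
    A trained model would follow this with a learned linear map."""
    return {v: features[v] + weighted_neighbor_mean(v) for v in features}
```

A perturbation experiment in this setting amounts to scaling one edge weight (e.g. reducing EU-origin flights) and observing how the aggregated features at downstream nodes change.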
20.
Sci Rep ; 14(1): 22941, 2024 Oct 03.
Article in English | MEDLINE | ID: mdl-39358456

RESUMO

High-sensitivity acceleration sensors have been independently developed by our research group to detect vibrations that are > 10 dB smaller than those detected by conventional commercial sensors. This study is the first to measure high-frequency micro-vibrations in muscle fibers, termed micro-mechanomyogram (MMG), in patients with Parkinson's disease (PwPD) using a high-sensitivity acceleration sensor. We specifically measured the extensor pollicis brevis muscle at the base of the thumb in PwPD and healthy controls (HC) and detected not only low-frequency MMG (< 15 Hz) but also micro-MMG (≥ 15 Hz), which was previously undetectable using commercial acceleration sensors. Analysis revealed remarkable differences in the frequency characteristics of micro-MMG between PwPD and HC. Specifically, during muscle power output, the low-frequency MMG energy was greater in PwPD than in HC, while the micro-MMG energy was smaller in PwPD than in HC. These results suggest that micro-MMG detected by the high-sensitivity acceleration sensor provides crucial information for distinguishing between PwPD and HC. Moreover, a deep learning model trained on both low-frequency MMG and micro-MMG achieved a high accuracy (92.19%) in classifying PwPD and HC, demonstrating the potential for a diagnostic system for PwPD using micro-MMG.


Subjects
Deep Learning , Parkinson Disease , Parkinson Disease/diagnosis , Parkinson Disease/physiopathology , Humans , Male , Aged , Female , Middle Aged , Myography/methods , Vibration , Accelerometry/methods , Accelerometry/instrumentation , Acceleration , Case-Control Studies , Muscle, Skeletal/physiopathology
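The 15 Hz split between low-frequency MMG and micro-MMG can be illustrated with a simple spectral band-energy computation. The sampling rate and the synthetic test signal below are assumptions made for the sketch, not the study's actual recording setup:

```python
import math

FS = 200  # assumed sampling rate in Hz (hypothetical)

def band_energies(signal, split_hz=15.0, fs=FS):
    """Split spectral energy at 15 Hz: low-frequency MMG (< 15 Hz)
    versus micro-MMG (>= 15 Hz), via a naive DFT (illustration only)."""
    n = len(signal)
    low = high = 0.0
    for k in range(1, n // 2 + 1):  # positive-frequency bins, skip DC
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if k * fs / n < split_hz:
            low += power
        else:
            high += power
    return low, high

# Synthetic 1-second signal: strong 8 Hz tremor-band component plus a
# weak 40 Hz ripple standing in for micro-vibration content.
sig = [math.sin(2 * math.pi * 8 * t / FS)
       + 0.2 * math.sin(2 * math.pi * 40 * t / FS)
       for t in range(FS)]
low, high = band_energies(sig)
```

Band energies computed this way (in practice via an FFT over windowed segments) would be the kind of features a classifier could use to separate PwPD from HC recordings.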