Results 1 - 12 of 12
1.
Sci Rep ; 13(1): 22898, 2023 12 21.
Article in English | MEDLINE | ID: mdl-38129508

ABSTRACT

Recovery after spinal cord injury (SCI) may be propagated by plasticity-enhancing treatments. The myelin-associated nerve outgrowth inhibitor Nogo-A (Reticulon 4, RTN4) pathway has been shown to restrict neuroaxonal plasticity in experimental SCI models. Early randomized controlled trials are underway to investigate the effect of Nogo-A/Nogo-Receptor (NgR1) pathway blockers. This systematic review and meta-analysis of therapeutic approaches blocking the Nogo-A pathway interrogated the efficacy of functional locomotor recovery after experimental SCI according to a pre-registered study protocol. A total of 51 manuscripts reporting 76 experiments in 1572 animals were identified for meta-analysis. Overall, a neurobehavioral improvement of 18.9% (95% CI 14.5-23.2) was observed. Subgroup analysis (40 experiments, N = 890) revealed SCI-modelling factors associated with outcome variability. Lack of reported randomization and smaller group sizes were associated with larger effect sizes, while delayed treatment start was associated with lower effect sizes. Trim-and-fill assessment as well as Egger regression suggested the presence of publication bias. Factoring in theoretically missing studies resulted in a reduced effect size [8.8% (95% CI 2.6-14.9)]. The available data indicate that inhibition of the Nogo-A/NgR1 pathway alters functional recovery after SCI in animal studies, although substantial differences appear across the applied injury mechanisms and other study details. Mirroring other SCI interventions assessed earlier, we identify similar factors associated with outcome heterogeneity.


Subjects
Spinal Cord Injuries, Animals, Nogo Proteins, Myelin Sheath/metabolism, Disease Models, Animal, Nogo Receptors, Spinal Cord/metabolism, Recovery of Function
2.
Sci Rep ; 12(1): 5867, 2022 04 07.
Article in English | MEDLINE | ID: mdl-35393450

ABSTRACT

The SARS-CoV-2 pandemic first emerged in late 2019 in China. It has since infected more than 298 million individuals and caused over 5 million deaths globally. The identification of essential proteins in a protein-protein interaction network (PPIN) is not only crucial for understanding cellular life but also useful in drug discovery. Many centrality measures exist to detect influential nodes in complex networks. Since the SARS-CoV-2 and (H1N1) influenza PPINs share 553 common human proteins, analyzing influential proteins and comparing these networks can be an effective step in helping biologists with drug-target prediction. We used 21 centrality measures on the SARS-CoV-2 and (H1N1) influenza PPINs to identify essential proteins. We applied principal component analysis and unsupervised machine learning methods to reveal the most informative measures. Notably, some measures contributed far more than others in both PPINs, namely the Decay, Residual closeness, Markov, Degree, Closeness (Latora), Barycenter, Closeness (Freeman), and Lin centralities. We also investigated graph-theoretic properties such as power-law and exponential degree distributions and robustness. Both PPINs tended toward the properties of scale-free networks, exposing their heterogeneous nature. Dimensionality reduction and unsupervised learning methods proved effective in uncovering the appropriate centrality measures.
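As a minimal illustration of two of the centrality measures named above, the sketch below computes degree and Freeman closeness centrality on a toy undirected network in pure Python; the node names and edges are invented placeholders, not proteins from the SARS-CoV-2 or H1N1 PPINs.

```python
# Degree and Freeman closeness centrality on a toy undirected network.
from collections import deque

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree_centrality(node):
    # Fraction of the other nodes this node is directly connected to.
    return len(adj[node]) / (len(adj) - 1)

def closeness_centrality(node):
    # Freeman closeness: (n - 1) / sum of BFS shortest-path distances.
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(adj) - 1) / sum(dist.values())

# Rank nodes by closeness; hub-like nodes B and D come out on top.
ranking = sorted(adj, key=closeness_centrality, reverse=True)
```

On larger PPINs the same definitions apply unchanged; only the BFS cost grows with network size.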


Subjects
COVID-19, Influenza A Virus, H1N1 Subtype, Influenza, Human, Humans, Influenza A Virus, H1N1 Subtype/metabolism, Protein Interaction Maps, Proteins/metabolism, SARS-CoV-2
3.
IEEE/ACM Trans Comput Biol Bioinform ; 19(3): 1545-1557, 2022.
Article in English | MEDLINE | ID: mdl-33119511

ABSTRACT

Previous efforts in gene network reconstruction have mainly focused on data-driven modeling, with little attention paid to knowledge-based approaches. Leveraging prior knowledge, however, is a promising paradigm that has been gaining momentum in the network reconstruction and computational biology research communities. This paper proposes two new algorithms for reconstructing a gene network from expression profiles, with and without prior knowledge, in small-sample and high-dimensional settings. First, using tools from statistical estimation theory, particularly the empirical Bayesian approach, the current research estimates a covariance matrix via the shrinkage method. Second, the estimated covariance matrix is employed in the penalized normal likelihood method to select the Gaussian graphical model. This formulation allows the application of prior knowledge in covariance estimation as well as in Gaussian graphical model selection. Experimental results on simulated and real datasets show that, compared to state-of-the-art methods, the proposed algorithms achieve better results in terms of both PR and ROC curves. Finally, the present work applies its method to the RNA-seq data of human gastric atrophy patients, obtained from the EMBL-EBI database. The source code and relevant data can be downloaded from: https://github.com/AbbaszadehO/DKGN.
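The covariance shrinkage idea can be illustrated with a deliberately simplified diagonal-target estimator (a sketch, not the paper's empirical-Bayes formulation): the sample covariance is blended with its own diagonal, S* = (1 - λ)S + λ·diag(S), which pulls off-diagonal entries toward zero and stabilizes the estimate in small-sample settings.

```python
# Diagonal-target covariance shrinkage, a simplified sketch in pure Python.
def sample_cov(rows):
    # Unbiased sample covariance of a list of observation rows.
    n, p = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(p)]
    return [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / (n - 1)
             for j in range(p)] for i in range(p)]

def shrink(S, lam):
    # Keep the diagonal; scale off-diagonal entries by (1 - lam).
    p = len(S)
    return [[S[i][j] if i == j else (1 - lam) * S[i][j] for j in range(p)]
            for i in range(p)]
```

Choosing λ from the data (as shrinkage estimators such as Ledoit-Wolf do) is the part this sketch omits; here λ is simply supplied by the caller.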


Subjects
Algorithms, Gene Regulatory Networks, Bayes Theorem, Computational Biology/methods, Gene Regulatory Networks/genetics, Humans, Normal Distribution
4.
Front Bioinform ; 2: 1001131, 2022.
Article in English | MEDLINE | ID: mdl-36710911

ABSTRACT

Clustered regularly interspaced short palindromic repeats (CRISPR)-based gene editing has been widely used in various cell types and organisms. To make CRISPR genome editing far more precise and practical, we must concentrate on the design of optimal gRNAs and the selection of appropriate Cas enzymes. Numerous computational tools have been created in recent years to help researchers design the best gRNA for CRISPR research. There are two approaches for designing an appropriate gRNA sequence (one that targets the desired sites with high precision): experimental and prediction-based approaches. It is essential to reduce off-target sites when designing an optimal gRNA. Here we review both traditional and machine learning-based approaches for designing an appropriate gRNA sequence and predicting off-target sites. In this review, we summarize the key characteristics of all available tools (as far as possible) and compare them. Machine learning-based tools and web servers are expected to become the most effective and reliable methods for predicting the on-target and off-target activities of CRISPR in the future. However, these predictions are not yet precise, and the performance of these algorithms, especially the deep learning ones, depends on the amount of data used during the training phase. As more features are discovered and incorporated into these models, predictions will become more consistent with experimental observations.
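A toy sketch of a prediction-based off-target filter, under simplified assumptions (SpCas9-style 20-nt spacer, NGG PAM, plain mismatch counting): candidate sites are kept only if they carry the PAM and fall within a mismatch budget. The sequences and threshold below are illustrative, not taken from any published tool.

```python
# Naive off-target screen: NGG PAM check plus a mismatch budget.
def has_ngg_pam(site):
    # site = 20-nt protospacer followed by a 3-nt PAM (NGG convention).
    return len(site) == 23 and site[21:] == "GG"

def mismatches(grna, site):
    # Hamming distance between the gRNA spacer and the protospacer.
    return sum(a != b for a, b in zip(grna, site[:20]))

def likely_offtargets(grna, candidates, max_mm=3):
    # Keep candidate sites with a valid PAM and at most max_mm mismatches.
    return [s for s in candidates
            if has_ngg_pam(s) and mismatches(grna, s) <= max_mm]
```

Real predictors weight mismatches by position and incorporate learned sequence features; this sketch only shows the combinatorial skeleton they build on.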

5.
F1000Res ; 10: 897, 2021.
Article in English | MEDLINE | ID: mdl-34804501

ABSTRACT

Scientific data analyses often combine several computational tools in automated pipelines, or workflows. Thousands of such workflows have been used in the life sciences, though their composition has remained a cumbersome manual process due to a lack of standards for annotation, assembly, and implementation. Recent technological advances have brought the long-standing vision of automated workflow composition back into focus. This article summarizes a recent Lorentz Center workshop dedicated to automated composition of workflows in the life sciences. We survey previous initiatives to automate the composition process, and discuss the current state of the art and future perspectives. We start by drawing the "big picture" of the scientific workflow development life cycle, before surveying and discussing current methods, technologies, and practices for semantic domain modelling, automation in workflow development, and workflow assessment. Finally, we derive a roadmap of individual and community-based actions to work toward the vision of automated workflow development in the forthcoming years. A central outcome of the workshop is a general description of the workflow life cycle in six stages: 1) scientific question or hypothesis, 2) conceptual workflow, 3) abstract workflow, 4) concrete workflow, 5) production workflow, and 6) scientific results. The transitions between stages are facilitated by diverse tools and methods, usually incorporating domain knowledge in some form. Formal semantic domain modelling is hard and often a bottleneck for the application of semantic technologies. However, life science communities have made considerable progress here in recent years and are continuously improving, renewing interest in the application of semantic technologies for workflow exploration, composition, and instantiation. Combined with systematic benchmarking with reference data and large-scale deployment of production-stage workflows, such technologies enable a more systematic process of workflow development than we know today. We believe that this can lead to more robust, reusable, and sustainable workflows in the future.


Subjects
Biological Science Disciplines, Computational Biology, Benchmarking, Software, Workflow
6.
PLoS Comput Biol ; 17(6): e1009014, 2021 06.
Article in English | MEDLINE | ID: mdl-34061826

ABSTRACT

Supervised machine learning is an essential but difficult to use approach in biomedical data analysis. The Galaxy-ML toolkit (https://galaxyproject.org/community/machine-learning/) makes supervised machine learning more accessible to biomedical scientists by enabling them to perform end-to-end reproducible machine learning analyses at large scale using only a web browser. Galaxy-ML extends Galaxy (https://galaxyproject.org), a biomedical computational workbench used by tens of thousands of scientists across the world, with a suite of tools for all aspects of supervised machine learning.


Subjects
Computational Biology/methods, Machine Learning, Reproducibility of Results, Software
7.
J Bioinform Comput Biol ; 19(2): 2150002, 2021 04.
Article in English | MEDLINE | ID: mdl-33657986

ABSTRACT

A central problem of systems biology is the reconstruction of Gene Regulatory Networks (GRNs) from time series data. Although many attempts have been made to design an efficient method for GRN inference, providing an optimal solution remains a challenging task. Noise, the low number of samples, and the high number of nodes are the main reasons for the poor performance of existing methods. The present study applies the ensemble Kalman filter algorithm to model a GRN from gene time series data. The inference of a GRN with p genes is decomposed into p subproblems. In each subproblem, the ensemble Kalman filter algorithm identifies the weight of the interactions for each target gene. Using the ensemble Kalman filter, the expression pattern of the target gene is predicted from the expression patterns of all the remaining genes. The proposed method is compared with several well-known approaches. The results of the evaluation indicate that the proposed method improves inference accuracy and recovers regulatory relations better from noisy data.
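The analysis (update) step of an ensemble Kalman filter can be sketched for a scalar state as below. This is a simplified deterministic variant (no observation perturbations) written for clarity; the paper's per-gene formulation and data are not reproduced here.

```python
# Scalar ensemble Kalman filter analysis step (simplified, no perturbations).
def enkf_update(ensemble, y, obs_var):
    # Ensemble statistics approximate the forecast mean and variance.
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    # Kalman gain balances forecast spread against observation noise.
    gain = var / (var + obs_var)
    # Nudge every member toward the observation y by the gain.
    return [x + gain * (y - x) for x in ensemble]
```

With ensemble [0.0, 1.0, 2.0], observation y = 3.0 and unit observation variance, the gain is 0.5 and the updated members are [1.5, 2.0, 2.5]; the ensemble mean moves halfway toward the observation.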


Subjects
Algorithms, Gene Regulatory Networks, Systems Biology, Time Factors
8.
PLoS One ; 15(10): e0241291, 2020.
Article in English | MEDLINE | ID: mdl-33120403

ABSTRACT

The decreasing cost of high-throughput DNA sequencing technologies provides a huge amount of data that enables researchers to determine haplotypes for diploid and polyploid organisms. Although various methods have been developed to reconstruct haplotypes in diploid organisms, their accuracy remains a challenge. Moreover, most current methods cannot be applied to polyploid organisms. In this paper, an iterative method is proposed that employs a hypergraph to reconstruct haplotypes and enhances the obtained haplotypes by adopting a chaotic viewpoint. For this purpose, a haplotype set is randomly generated as an initial estimate, and its consistency with the input fragments is described by constructing a weighted hypergraph. Partitioning the hypergraph specifies those positions in the haplotype set that need to be corrected. This procedure is repeated until no further improvement can be achieved. Each element of the finalized haplotype set is mapped to a line by chaos game representation, and a coordinate series is defined based on the positions of the mapped points. Then, positions with low quality can be assessed by applying a local projection. Experimental results on both simulated and real datasets demonstrate that this method outperforms most other approaches and is a promising way to perform haplotype assembly.


Subjects
Algorithms, Genome, Human, Haplotypes, Models, Genetic, Sequence Analysis, DNA, Humans
9.
BMC Bioinformatics ; 21(1): 475, 2020 Oct 22.
Article in English | MEDLINE | ID: mdl-33092523

ABSTRACT

BACKGROUND: The single individual haplotype problem refers to reconstructing the haplotypes of an individual from several input fragments sequenced from a specified chromosome. Solving this problem is an important task in computational biology and has many applications in the pharmaceutical industry, clinical decision-making, and genetic diseases. The problem is known to be NP-hard. Although several methods have been proposed to solve it, most of them perform poorly on noisy input fragments. Therefore, proposing a method that is both accurate and scalable is a challenging task. RESULTS: In this paper, we introduce a method, named NCMHap, which utilizes the Neutrosophic c-means (NCM) clustering algorithm. The NCM algorithm can effectively detect noise and outliers in the input data and reduce their effects in the clustering process. The proposed method has been evaluated on several benchmark datasets. Comparison with existing methods indicates that, when NCM is tuned with suitable parameters, the results are encouraging. In particular, as the amount of noise increases, it outperforms the competing methods. CONCLUSION: The proposed method is validated using simulated and real datasets. The achieved results recommend applying NCMHap to datasets involving fragments with a large number of gaps and a high level of noise.


Subjects
Algorithms, Computational Biology/methods, Haplotypes/genetics, Base Sequence, Cluster Analysis, Computer Simulation, Databases, Genetic, Humans, Polymorphism, Single Nucleotide/genetics
10.
Data Brief ; 32: 106144, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32835040

ABSTRACT

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is responsible for the COVID-19 pandemic. It was first detected in China and rapidly spread to other countries. Several thousand whole-genome sequences of SARS-CoV-2 have been reported, and it is important to compare them and identify distinctive evolutionary/mutation markers. Utilizing chaos game representation (CGR) as well as recurrence quantification analysis (RQA), a powerful nonlinear analysis technique, we propose an effective process for extracting several valuable features from genomic sequences of SARS-CoV-2. The represented features enable the comparison of genomic sequences of different lengths. The provided dataset comprises a total of 18 RQA-based features for 4496 instances of SARS-CoV-2.

11.
Sci Rep ; 9(1): 10361, 2019 07 17.
Article in English | MEDLINE | ID: mdl-31316124

ABSTRACT

Sequence data are deposited in the form of unphased genotypes, from which it is not possible to directly identify the location of a particular allele on a specific parental chromosome or haplotype. This study employed nonlinear time series modeling approaches to analyze haplotype sequences obtained by NGS sequencing. To evaluate the chaotic behavior of haplotypes, we analyzed their whole sequences, as well as several subsequences from distinct haplotypes, in terms of the SNP distribution on their chromosomes. This analysis utilized chaos game representation (CGR) followed by the application of two different scaling methods. It was found that chaotic behavior clearly exists in most haplotype subsequences. To test the applicability of the proposed model, we determined the alleles in gap positions and positions with low coverage, using chromosome subsequences in which 10% of each subsequence's alleles were replaced by gaps. After converting the subsequences' CGR into coordinate series, a local projection (LP) method predicted the values of the ambiguous positions in the coordinate series. The average reconstruction rate for all input data was more than 97%, demonstrating that applying this knowledge can effectively improve the reconstruction rate of given haplotypes.
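The chaos game representation step described above can be sketched for a plain DNA alphabet: start at the center of the unit square and move the current point halfway toward the corner assigned to each nucleotide in turn. The corner assignment follows a common convention; the papers' scaling and local-projection steps are omitted from this sketch.

```python
# Chaos game representation (CGR) of a DNA sequence on the unit square.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(seq):
    # Each step moves halfway from the current point to the base's corner,
    # so the full point cloud encodes the sequence's k-mer structure.
    x, y = 0.5, 0.5
    points = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        points.append((x, y))
    return points
```

For "A" the first point is (0.25, 0.25); appending "G" then moves halfway toward (1, 1), giving (0.625, 0.625). The resulting coordinate series is what downstream nonlinear analyses such as RQA operate on.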


Subjects
Chromosome Mapping/methods, Computational Biology/methods, Haplotypes, Nonlinear Dynamics, Polymorphism, Single Nucleotide, Algorithms, Alleles, Chromosomes, Human/genetics, Datasets as Topic, Fractals, Genome, Human, Humans
12.
Comput Biol Chem ; 72: 1-10, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29289750

ABSTRACT

In this paper, a method for single individual haplotype (SIH) reconstruction using asexual reproduction optimization (ARO) is proposed. Haplotypes, as a set of genetic variations in each chromosome, contain vital information such as the relationship between the human genome and diseases. Finding haplotypes in diploid organisms is a challenging task; experimental methods are expensive and require special equipment. In the SIH problem, we encounter several fragments, each covering some part of the desired haplotype. The main goal is bi-partitioning the fragments with minimum error correction (MEC). The problem is known to be NP-hard, and several attempts have been made to solve it using heuristic methods. The proposed method, AROHap, has two main phases. In the first phase, most of the fragments are clustered based on a practical distance metric. In the second phase, the ARO algorithm, a fast-converging bio-inspired method, is used to improve the initial bi-partitioning of the fragments from the previous step. AROHap was evaluated on several benchmark datasets. The experimental results demonstrate that satisfactory results were obtained, proving that AROHap can be used for the SIH reconstruction problem.
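The MEC objective mentioned above can be made concrete with a small sketch: each fragment (using '-' for uncovered sites) is assigned to the closer of two candidate haplotypes, and the remaining mismatches are summed. The fragments below are toy data, not from the paper's benchmarks.

```python
# Minimum error correction (MEC) score for a candidate haplotype pair.
def mec(h1, h2, fragments):
    def dist(frag, hap):
        # Count mismatches only at sites the fragment actually covers.
        return sum(f != h for f, h in zip(frag, hap) if f != "-")
    # Each fragment is charged the distance to its closer haplotype.
    return sum(min(dist(f, h1), dist(f, h2)) for f in fragments)
```

A bi-partitioning heuristic like the one described searches over haplotype pairs (equivalently, fragment partitions) to minimize this score; here only the scoring function is shown.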


Subjects
Algorithms, Haplotypes, Models, Biological, Computational Biology, Humans, Reproduction, Asexual