Results 1 - 20 of 43
1.
Brief Bioinform; 24(5), 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37738400

ABSTRACT

Implementing a dedicated cloud resource to analyze extensive genomic data on severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) poses a challenge when resources are limited. To overcome this, we repurposed a cloud platform originally designed for cancer genomics research (https://cgc.sbgenomics.com) to build the Cloud Workflow for Viral and Variant Identification (COWID) for SARS-CoV-2 research. COWID is a workflow based on the Common Workflow Language that realizes the full potential of sequencing technology for reliable SARS-CoV-2 identification and leverages cloud computing for efficient parallelization. COWID outperformed other contemporary identification methods by offering scalable identification and reliable variant findings with no false-positive results. COWID typically processed each sample of raw sequencing data within 5 min at a cost of only US$0.01. The COWID source code is publicly available (https://github.com/hendrick0403/COWID) and can be accessed on any computer with Internet access. COWID is designed to be user-friendly; it can be used without prior programming knowledge. COWID is therefore a time-efficient tool that can be deployed during a pandemic.
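
As a hedged illustration of the per-sample parallelism such a workflow exploits (not COWID's actual implementation, which is expressed in Common Workflow Language on a cloud platform), the Python sketch below fans samples out to worker processes; identify_variants is a hypothetical stand-in for the real alignment and variant-calling step.

```python
# Minimal sketch of per-sample fan-out, assuming a hypothetical
# `identify_variants` step; COWID itself expresses this in CWL on a
# cloud platform rather than with local processes.
from concurrent.futures import ProcessPoolExecutor

def identify_variants(sample_path: str) -> dict:
    # Placeholder: align reads and call variants for one sample.
    return {"sample": sample_path, "variants": []}

def run_all(sample_paths: list[str], workers: int = 8) -> list[dict]:
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(identify_variants, sample_paths))

if __name__ == "__main__":
    print(run_all(["sampleA.fastq", "sampleB.fastq"]))
```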


Subjects
COVID-19, Humans, COVID-19/diagnosis, Cloud Computing, SARS-CoV-2/genetics, Workflow, Genomics
2.
Brief Bioinform; 22(1): 557-567, 2021 Jan 18.
Article in English | MEDLINE | ID: mdl-32031567

ABSTRACT

Microbiome samples are accumulating at an unprecedented speed, and a massive number of samples are now available for mining the intrinsic patterns among them. However, owing to the lack of advanced computational tools, fast yet accurate comparison and search across thousands to millions of samples remain urgently needed. In this work, we propose the Meta-Prism method for comparing and searching microbial community structures among tens of thousands of samples. Meta-Prism is at least 10 times faster than contemporary methods serving the same purpose and provides highly accurate search results. The method is based on three computational techniques: a dual-indexing approach for sample subgrouping, a refined scoring function that can scrutinize minute differences among samples, and parallel computation on CPU or GPU. The superiority of Meta-Prism in speed and accuracy for multiple-sample searches is demonstrated by searching against ten thousand samples derived from both human and environmental sources. Meta-Prism can therefore facilitate similarity search and in-depth understanding across massive numbers of heterogeneous samples in the microbiome universe. The code for Meta-Prism is available at: https://github.com/HUST-NingKang-Lab/metaPrism.
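
A minimal sketch of the search pattern described above, assuming cosine similarity as a stand-in for Meta-Prism's refined scoring function; the single vectorized scoring pass is the part that parallel CPU/GPU execution would accelerate.

```python
# Score one query against many samples in a single vectorized pass.
# Cosine similarity is an illustrative stand-in, not Meta-Prism's score.
import numpy as np

def search(query: np.ndarray, samples: np.ndarray, top_k: int = 5):
    # samples: (n_samples, n_taxa) relative-abundance matrix
    scores = samples @ query / (
        np.linalg.norm(samples, axis=1) * np.linalg.norm(query) + 1e-12
    )
    order = np.argsort(scores)[::-1][:top_k]
    return order, scores[order]

rng = np.random.default_rng(0)
samples = rng.random((10_000, 500))
hits, scores = search(rng.random(500), samples)
```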


Subjects
Metagenomics/methods, Microbiota, Humans, Metagenomics/standards, 16S Ribosomal RNA/genetics, Sensitivity and Specificity, Software/standards
3.
Behav Res Methods; 53(3): 1148-1165, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33001382

ABSTRACT

Recent advances in Markov chain Monte Carlo (MCMC) extend the scope of Bayesian inference to models for which the likelihood function is intractable. Although these developments allow us to estimate model parameters, other basic problems such as estimating the marginal likelihood, a fundamental tool in Bayesian model selection, remain challenging. This is an important scientific limitation because testing psychological hypotheses with hierarchical models has proven difficult with current model selection methods. We propose an efficient method for estimating the marginal likelihood for models where the likelihood is intractable, but can be estimated unbiasedly. It is based on first running a sampling method such as MCMC to obtain samples for the model parameters, and then using these samples to construct the proposal density in an importance sampling (IS) framework with an unbiased estimate of the likelihood. Our method has several attractive properties: it generates an unbiased estimate of the marginal likelihood, it is robust to the quality and target of the sampling method used to form the IS proposals, and it is computationally cheap to estimate the variance of the marginal likelihood estimator. We also obtain the convergence properties of the method and provide guidelines on maximizing computational efficiency. The method is illustrated in two challenging cases involving hierarchical models: identifying the form of individual differences in an applied choice scenario, and evaluating the best parameterization of a cognitive model in a speeded decision making context. Freely available code to implement the methods is provided. Extensions to posterior moment estimation and parallelization are also discussed.
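
A minimal sketch of the importance-sampling estimator under stated assumptions: posterior draws from an earlier MCMC run shape a Gaussian proposal, and log_prior and log_lik_hat are hypothetical user-supplied callables; exp(log_lik_hat) should be unbiased for the true likelihood, as the method requires.

```python
# Sketch: IS estimate of the log marginal likelihood using a proposal
# built from MCMC output. `log_prior` and `log_lik_hat` are hypothetical
# user-supplied callables.
import numpy as np
from scipy import stats

def log_marginal_likelihood(mcmc_draws, log_prior, log_lik_hat, n_is=2000):
    mu = mcmc_draws.mean(axis=0)
    cov = np.cov(mcmc_draws, rowvar=False)
    q = stats.multivariate_normal(mu, cov)        # IS proposal from MCMC draws
    theta = q.rvs(size=n_is, random_state=0)
    logw = np.array([log_prior(t) + log_lik_hat(t) - q.logpdf(t)
                     for t in theta])
    m = logw.max()                                 # log-sum-exp stabilization
    return m + np.log(np.mean(np.exp(logw - m)))
```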


Subjects
Cognition, Bayes Theorem, Humans, Likelihood Functions, Markov Chains, Monte Carlo Method
4.
J Comput Chem; 39(15): 909-916, 2018 Jun 05.
Article in English | MEDLINE | ID: mdl-29399822

ABSTRACT

In linear-scaling divide-and-conquer (DC) electronic structure calculations, a buffer region is used to control the error introduced by the DC approximation. In this study, an energy-based error estimation scheme is proposed for the DC self-consistent field method with a two-layer buffer region scheme. Based on this scheme, a procedure to automatically determine the appropriate buffer region in the DC method is proposed. It was confirmed that the present method works satisfactorily in calculations of water clusters and proteins, although its performance was insufficient for the calculation of a delocalized graphene system. © 2018 Wiley Periodicals, Inc.

5.
J Environ Manage; 206: 1211-1223, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-28988063

ABSTRACT

Owing to urbanization and population growth, the degradation of natural forests and the associated loss of biodiversity are now widely recognized as a global environmental concern. Hence, there is an urgent need for rapid, priority-based assessment and monitoring of biodiversity using state-of-the-art tools and technologies. The main purpose of this article is to develop and implement a new methodological approach for characterizing biological diversity using a spatial model developed during the study, the Spatial Biodiversity Model (SBM). The model is a scale-, resolution-, and location-independent solution for spatial biodiversity richness modelling. It is platform independent, based on parallel computation, and implemented with open-source software on the R statistical computing platform. It identifies areas of high disturbance and high biological richness through various landscape indices and site-specific information (e.g., forest fragmentation (FR) and disturbance index (DI)). The model was developed for an Indian landscape case study, but it can be applied to any part of the world. As a case study, SBM was tested for the state of Uttarakhand in India. Inputs for landscape ecology are derived through multi-criteria decision making (MCDM) techniques in an interactive command-line environment. MCDM with sensitivity analysis in the spatial domain was carried out to demonstrate the model's stability and robustness, and spatial regression analysis was performed to validate the output.
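
A hedged sketch of the weighted-overlay step at the heart of MCDM-based richness mapping (the actual SBM runs on R with parallel computation; the layer names and weights below are illustrative):

```python
# Combine normalized criterion rasters with MCDM weights into a richness
# surface. Layers and weights are illustrative, not those of SBM.
import numpy as np

def normalize(layer: np.ndarray) -> np.ndarray:
    lo, hi = np.nanmin(layer), np.nanmax(layer)
    return (layer - lo) / (hi - lo + 1e-12)

def richness_map(layers: dict, weights: dict) -> np.ndarray:
    total = sum(weights.values())
    return sum((weights[k] / total) * normalize(layers[k]) for k in layers)

rng = np.random.default_rng(0)
layers = {"fragmentation": rng.random((100, 100)),
          "disturbance": rng.random((100, 100))}
richness = richness_map(layers, {"fragmentation": 0.6, "disturbance": 0.4})
```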


Subjects
Biodiversity, Conservation of Natural Resources, Forests, India, Mathematical Computing
6.
Biostatistics; 17(2): 291-303, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26553916

ABSTRACT

We propose a spatial Bayesian variable selection method for detecting blood oxygenation level dependent activation in functional magnetic resonance imaging (fMRI) data. Typical fMRI experiments generate large datasets that exhibit complex spatial and temporal dependence. Fitting a full statistical model to such data can be so computationally burdensome that many practitioners resort to fitting oversimplified models, which can lead to lower quality inference. We develop a full statistical model that permits efficient computation. Our approach eases the computational burden in two ways. We partition the brain into 3D parcels, and fit our model to the parcels in parallel. Voxel-level activation within each parcel is modeled as regressions located on a lattice. Regressors represent the magnitude of change in blood oxygenation in response to a stimulus, while a latent indicator for each regressor represents whether the change is zero or non-zero. A sparse spatial generalized linear mixed model captures the spatial dependence among indicator variables within a parcel and for a given stimulus. The sparse SGLMM permits considerably more efficient computation than does the spatial model typically employed in fMRI. Through simulation we show that our parcellation scheme performs well in various realistic scenarios. Importantly, indicator variables on the boundary between parcels do not exhibit edge effects. We conclude by applying our methodology to data from a task-based fMRI experiment.
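
A minimal sketch of the parcel-level parallelism described above; fit_parcel is a hypothetical placeholder for the sparse-SGLMM fit, which is the part the paper makes efficient.

```python
# Partition a 3D volume into parcels and fit them in parallel.
# `fit_parcel` is a placeholder, not the paper's sparse SGLMM.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def make_parcels(volume: np.ndarray, size: int = 16) -> list:
    nx, ny, nz = volume.shape
    return [volume[i:i + size, j:j + size, k:k + size]
            for i in range(0, nx, size)
            for j in range(0, ny, size)
            for k in range(0, nz, size)]

def fit_parcel(parcel: np.ndarray) -> float:
    return float(parcel.mean())      # stand-in for per-parcel inference

if __name__ == "__main__":
    volume = np.zeros((64, 64, 32))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(fit_parcel, make_parcels(volume)))
```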


Subjects
Bayes Theorem, Brain Mapping/methods, Magnetic Resonance Imaging/methods, Statistical Models, Spatio-Temporal Analysis, Humans
7.
Proc Natl Acad Sci U S A; 111(49): 17408-13, 2014 Dec 09.
Article in English | MEDLINE | ID: mdl-25422442

ABSTRACT

Markov chain Monte Carlo methods (MCMC) are essential tools for solving many modern-day statistical and computational problems; however, a major limitation is the inherently sequential nature of these algorithms. In this paper, we propose a natural generalization of the Metropolis-Hastings algorithm that allows for parallelizing a single chain using existing MCMC methods. We do so by proposing multiple points in parallel, then constructing and sampling from a finite-state Markov chain on the proposed points such that the overall procedure has the correct target density as its stationary distribution. Our approach is generally applicable and straightforward to implement. We demonstrate how this construction may be used to greatly increase the computational speed and statistical efficiency of a variety of existing MCMC methods, including Metropolis-Adjusted Langevin Algorithms and Adaptive MCMC. Furthermore, we show how it allows for a principled way of using every integration step within Hamiltonian Monte Carlo methods; our approach increases robustness to the choice of algorithmic parameters and results in increased accuracy of Monte Carlo estimates with little extra computational cost.
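
The following sketch conveys the flavor of the construction in a simplified valid special case, not the paper's general Metropolis-Hastings version: proposals are drawn from a fixed independence proposal q, and the next state is drawn from the pooled points in proportion to the importance weight π(x)/q(x), which is a Gibbs move on an extended space.

```python
# Simplified parallel-proposal MCMC step: N proposals from a fixed
# independence proposal q join the current state in a finite pool, and
# the next state is drawn with probability proportional to pi(x)/q(x).
# Valid as a Gibbs update on an extended space; illustrative only.
import numpy as np

def pool_step(x, log_pi, q_draw, q_logpdf, n_prop, rng):
    pool = np.concatenate(([x], q_draw(n_prop, rng)))
    logw = log_pi(pool) - q_logpdf(pool)          # importance weights
    w = np.exp(logw - logw.max())
    return rng.choice(pool, p=w / w.sum())

rng = np.random.default_rng(1)
log_pi = lambda x: -0.5 * x**2                          # N(0,1) target
q_draw = lambda n, r: r.normal(0.0, 3.0, size=n)        # wide proposal
q_logpdf = lambda x: -0.5 * (x / 3.0) ** 2 - np.log(3.0 * np.sqrt(2 * np.pi))
chain = [0.0]
for _ in range(5000):
    chain.append(pool_step(chain[-1], log_pi, q_draw, q_logpdf, 16, rng))
print(np.mean(chain), np.var(chain))   # should be near 0 and 1
```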

8.
BMC Bioinformatics; 17(1): 203, 2016 May 06.
Article in English | MEDLINE | ID: mdl-27153986

ABSTRACT

BACKGROUND: RNA secondary structure around splice sites is known to assist normal splicing by promoting spliceosome recognition. However, analyzing the structural properties of entire intronic regions or pre-mRNA sequences has hitherto been difficult owing to serious experimental and computational limitations, such as low read coverage and numerical problems. RESULTS: Our novel software, "ParasoR", is designed to run on a computer cluster and enables the exact computation of various structural features of long RNA sequences under the constraint of a maximal base-pairing distance. ParasoR divides dynamic programming (DP) matrices into smaller pieces such that each piece can be computed by a separate computer node without losing the connectivity information between pieces. ParasoR directly computes ratios of DP variables to avoid the loss of numerical precision caused by the cancellation of a large number of Boltzmann factors. The structural preferences of mRNAs computed by ParasoR show a high concordance with those determined by high-throughput sequencing analyses. Using ParasoR, we investigated the global structural preferences of transcribed regions in the human genome. A genome-wide folding simulation indicated that transcribed regions are significantly more structured than intergenic regions after removing repeat sequences and k-mer frequency bias. In particular, we observed a highly significant preference for base pairing over entire intronic regions as compared with their antisense sequences, as well as with intergenic regions. A comparison between pre-mRNAs and mRNAs showed that coding regions become more accessible after splicing, indicating constraints on translational efficiency. Such changes are correlated with gene expression levels as well as GC content, and are enriched among genes associated with cytoskeleton and kinase functions. CONCLUSIONS: We have shown that ParasoR is very useful for analyzing the structural properties of long RNA sequences such as mRNAs, pre-mRNAs, and long non-coding RNAs, whose lengths can exceed a million bases in the human genome. Our analyses indicate that transcribed regions, including introns, are subject to various structural constraints that cannot be explained by simple sequence composition biases. ParasoR is freely available at https://github.com/carushi/ParasoR .
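
A toy illustration (my own, with made-up energies) of why tracking ratios of DP variables in log space avoids the precision problems the authors mention: raw Boltzmann factors exp(-E/RT) overflow for long sequences, while log-space ratios remain well conditioned.

```python
# Raw Boltzmann factors like exp(250) overflow double precision;
# log-space accumulation keeps the same ratio computable.
import numpy as np

def log_add(a: float, b: float) -> float:
    m = max(a, b)
    return m + np.log1p(np.exp(min(a, b) - m))

RT = 0.6                                   # ~kcal/mol at room temperature
energies = np.array([-150.0, -149.5, -151.2])   # toy structure energies
log_Z = -energies[0] / RT
for e in energies[1:]:
    log_Z = log_add(log_Z, -e / RT)        # partition function in log space
log_p = -energies[2] / RT - log_Z          # probability as a ratio of DP terms
print(np.exp(log_p))                       # finite, despite exp(252) raw terms
```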


Subjects
Computational Biology/methods, Human Genome, Nucleic Acid Conformation, RNA/chemistry, RNA/genetics, Animals, Area Under Curve, Base Sequence, Computer Simulation, Gene Ontology, Humans, Mice, Microfilament Proteins/metabolism, Propensity Score, RNA Precursors/genetics, RNA Precursors/metabolism, RNA Splicing/genetics, Messenger RNA/genetics, Messenger RNA/metabolism, Reproducibility of Results, Software, Gene Transcription
9.
J Comput Chem; 37(21): 1983-92, 2016 Aug 05.
Article in English | MEDLINE | ID: mdl-27317328

ABSTRACT

The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. Functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests demonstrate the high efficiency of DC-DFTB-K: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
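
A hedged sketch of the step being parallelized: given all subsystem orbital energies, the Fermi level μ is the root of a monotone electron-count equation. Bisection is shown here for clarity; the paper's contribution is a faster interpolation-based scheme.

```python
# Find mu such that the Fermi-Dirac occupations sum to the electron
# count. Bisection stands in for the paper's interpolation algorithm.
import numpy as np

def occupations(eps, mu, beta=100.0):
    x = np.clip(beta * (eps - mu), -500.0, 500.0)   # avoid overflow
    return 2.0 / (1.0 + np.exp(x))                  # spin-paired occupations

def find_fermi(eps, n_electrons, beta=100.0, tol=1e-10):
    lo, hi = eps.min() - 1.0, eps.max() + 1.0
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if occupations(eps, mu, beta).sum() < n_electrons:
            lo = mu
        else:
            hi = mu
    return 0.5 * (lo + hi)

eps = np.sort(np.random.default_rng(0).normal(size=2000))  # toy eigenvalues
print(find_fermi(eps, n_electrons=2000))
```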

10.
Appl Numer Math; 79(100): 3-17, 2014 May.
Article in English | MEDLINE | ID: mdl-24829517

ABSTRACT

An efficient finite element method for taking account of the nonlinearity of magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge- and node-based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic, and only the steady-state periodic solution is of interest. This solution is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method; in the latter, discrete Fourier transformation is shown to lead to a discrete harmonic balance method. Owing to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear; therefore, a special nonlinear iteration technique, the fixed-point method, is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability, in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain, with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method, and the effect of higher harmonics on the losses is investigated. Finally, a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.
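
A scalar toy version of the fixed-point linearization (my own illustration; the saturation curve and fixed-point reluctivity are made up): the nonlinear relation ν(B)B = H is solved by iterating a linear problem with a constant ν_FP plus a correction term, which is what decouples the harmonics within each iteration.

```python
# Solve the scalar nonlinear magnetic relation nu(B)*B = H for B with a
# constant "fixed-point reluctivity" nu_fp; the curve nu(B) is a toy.
def nu(B):                       # toy reluctivity rising with saturation
    return 100.0 + 50.0 * B**2

def solve_fixed_point(H, nu_fp=400.0, tol=1e-10, max_iter=200):
    B = 0.0
    for _ in range(max_iter):
        # linear step with constant nu_fp plus a nonlinear correction
        B_new = (H + (nu_fp - nu(B)) * B) / nu_fp
        if abs(B_new - B) < tol:
            return B_new
        B = B_new
    return B

B = solve_fixed_point(H=500.0)
print(B, nu(B) * B)              # nu(B)*B should reproduce H = 500
```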

11.
Distrib Comput; 37(1): 35-64, 2024.
Article in English | MEDLINE | ID: mdl-38370529

ABSTRACT

In this paper, we study the power and limitations of component-stable algorithms in the low-space model of massively parallel computation (MPC). Recently, Ghaffari, Kuhn and Uitto (FOCS 2019) introduced the class of component-stable low-space MPC algorithms, which are, informally, those algorithms for which the outputs reported by the nodes in different connected components are required to be independent. This very natural notion was introduced to capture most (if not all) of the known efficient MPC algorithms to date, and it was the first general class of MPC algorithms for which one can show non-trivial conditional lower bounds. In this paper we enhance the framework of component-stable algorithms and investigate its effect on the complexity of randomized and deterministic low-space MPC. Our key contributions include:
1. We revise and formalize the lifting approach of Ghaffari, Kuhn and Uitto. This requires a very delicate amendment of the notion of component stability, which allows us to fill in gaps in the earlier arguments.
2. We extend the framework to obtain conditional lower bounds for deterministic algorithms and fine-grained lower bounds that depend on the maximum degree Δ.
3. We demonstrate a collection of natural graph problems for which deterministic component-unstable algorithms break the conditional lower bound obtained for component-stable algorithms. This implies that, in the context of deterministic algorithms, component-stable algorithms are conditionally weaker than component-unstable ones.
4. We show that the restriction to component-stable algorithms also has an impact in the randomized setting. We present a natural problem which can be solved in O(1) rounds by a component-unstable MPC algorithm, but requires Ω(log log* n) rounds for any component-stable algorithm, conditioned on the connectivity conjecture.
Altogether, our results imply that component-stability might limit the computational power of the low-space MPC model, at least in certain contexts, paving the way for improved upper bounds that escape the conditional lower-bound setting of Ghaffari, Kuhn, and Uitto.

12.
Radiol Phys Technol; 17(2): 402-411, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38546970

ABSTRACT

The projection data generated via forward projection of a computed tomography (CT) image (FP-data) have useful potential in cases where only image data are available. However, it is an open question whether FP-data generated from an image severely corrupted by metal artifacts can be used for metal artifact reduction (MAR). The aim of this study was to investigate the feasibility of a MAR technique using FP-data by comparing its performance with that of a conventional robust MAR using projection data normalization (NMARconv). NMARconv was modified to make use of FP-data (FPNMAR). A graphics processing unit was used to reduce the time required to generate FP-data and for subsequent processing. The performances of FPNMAR and NMARconv were quantitatively compared using a normalized artifact index (AIn) for two cases each of hip prosthesis and dental fillings. Several clinical CT images with metal artifacts were also processed by FPNMAR. The AIn values of FPNMAR and NMARconv were not significantly different, indicating nearly identical performance between the two techniques. For all clinical cases tested, FPNMAR significantly reduced the metal artifacts, notably recovering the images of soft tissues and bones obscured by the artifacts. The computation time per image was approximately 56 ms. FPNMAR, which can be applied to CT images without access to the projection data, exhibited almost the same performance as NMARconv while requiring significantly less processing time. This capability testifies to the potential of FPNMAR for wider use in clinical settings.
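
A minimal sketch of generating FP-data from an image, assuming scikit-image is available (the study used GPU forward projection; this only illustrates the concept):

```python
# Forward-project a reconstructed CT slice into a sinogram (FP-data) with
# the Radon transform; projection-domain MAR then operates on fp_data.
import numpy as np
from skimage.transform import radon, iradon

image = np.zeros((256, 256), dtype=float)
image[100:156, 100:156] = 1.0                    # toy slice
theta = np.linspace(0.0, 180.0, 360, endpoint=False)
fp_data = radon(image, theta=theta)              # FP-data (sinogram)
recon = iradon(fp_data, theta=theta)             # back to image domain
```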


Subjects
Artifacts, Metals, X-Ray Computed Tomography, X-Ray Computed Tomography/methods, Humans, Computer-Assisted Image Processing/methods, Hip Prosthesis, Imaging Phantoms
13.
Water Res; 266: 122318, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39236501

ABSTRACT

As the size of water distribution network (WDN) models continues to grow, developing and applying real-time models or digital twins to simulate hydraulic behaviors in large-scale WDNs is becoming increasingly challenging. The long response time incurred when performing multiple hydraulic simulations in large-scale WDNs can no longer meet the current requirements for the efficient and real-time application of WDN models. To address this issue, there is a rising interest in accelerating hydraulic calculations in WDN models by integrating new model structures with abundant computational resources and mature parallel computing frameworks. This paper presents a novel and efficient framework for steady-state hydraulic calculations, comprising a joint topology-calculation decomposition method that decomposes the hydraulic calculation process and a high-performance decomposed gradient algorithm that integrates with parallel computation. Tests in four WDNs of different sizes with 8 to 85,118 nodes demonstrate that the framework maintains high calculation accuracy consistent with EPANET and can reduce calculation time by up to 51.93 % compared to EPANET in the largest WDN model. Further investigation found that factors affecting the acceleration include the decomposition level, consistency of sub-model sizes and sub-model structures. The framework aims to help develop rapid-responding models for large-scale WDNs and improve their efficiency in integrating multiple application algorithms, thereby supporting the water supply industry in achieving more adaptive and intelligent management of large-scale WDNs.
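
For context, a baseline steady-state hydraulic run of the kind the framework is benchmarked against can be scripted against EPANET via the wntr package (an assumption about tooling on my part; the paper's decomposed gradient algorithm is not part of wntr, and the input file below is hypothetical):

```python
# Baseline EPANET hydraulic simulation via wntr; this is the reference
# computation the decomposed framework is compared against.
import wntr

wn = wntr.network.WaterNetworkModel("network.inp")   # hypothetical .inp file
sim = wntr.sim.EpanetSimulator(wn)
results = sim.run_sim()
pressures = results.node["pressure"]                 # nodes x time steps
print(pressures.head())
```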

14.
Magn Reson Imaging; 109: 271-285, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38537891

ABSTRACT

Functional magnetic resonance imaging (fMRI) plays a crucial role in neuroimaging, enabling the exploration of brain activity through complex-valued signals. These signals, composed of magnitude and phase, offer a rich source of information for understanding brain functions. Traditional fMRI analyses have largely focused on magnitude information, often overlooking the potential insights offered by phase data. In this paper, we propose a novel fully Bayesian model designed for analyzing single-subject complex-valued fMRI (cv-fMRI) data. Our model, which we refer to as the CV-M&P model, is distinctive in its comprehensive utilization of both magnitude and phase information in fMRI signals, allowing for independent prediction of different types of activation maps. We incorporate Gaussian Markov random fields (GMRFs) to capture spatial correlations within the data, and employ image partitioning and parallel computation to enhance computational efficiency. Our model is rigorously tested through simulation studies, and then applied to a real dataset from a unilateral finger-tapping experiment. The results demonstrate the model's effectiveness in accurately identifying brain regions activated in response to specific tasks, distinguishing between magnitude and phase activation.
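
A small sketch of the data representation the model exploits, with synthetic numbers standing in for a real complex-valued scan:

```python
# A complex-valued fMRI time series carries magnitude and phase
# components that can be modeled separately, as in the CV-M&P model.
import numpy as np

rng = np.random.default_rng(0)
n_time = 200
signal = (1.0 + 0.05 * rng.standard_normal(n_time)) * np.exp(
    1j * (0.1 + 0.02 * rng.standard_normal(n_time))
)
magnitude = np.abs(signal)       # input to the magnitude activation map
phase = np.angle(signal)         # input to the phase activation map
```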


Subjects
Brain, Magnetic Resonance Imaging, Bayes Theorem, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain/physiology, Brain Mapping/methods, Computer Simulation
15.
Phys Med Biol; 68(24), 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-37890461

ABSTRACT

Objective: Real-time reconstruction of magnetic particle imaging (MPI) shows promise for clinical applications. However, prevalent reconstruction methods are mainly based on serial iteration, which causes large delays in real-time reconstruction. To achieve lower latency in real-time MPI reconstruction, we propose a parallel method for accelerating reconstruction. Approach: The proposed method, named the adaptive multi-frame parallel iterative method (AMPIM), enables the processing of multi-frame signals into multi-frame MPI images in parallel. To facilitate parallel computing, we further propose an acceleration strategy for parallel computation to improve the computational efficiency of AMPIM. Main results: OpenMPIData was used to evaluate AMPIM, and the results show that AMPIM improves the frame rate of real-time MPI reconstruction by two orders of magnitude compared with prevalent iterative algorithms, including the Kaczmarz algorithm, the conjugate gradient normal residual algorithm, and the alternating direction method of multipliers algorithm. The reconstructed image using AMPIM has a high contrast-to-noise ratio with reduced artifacts. Significance: AMPIM can optimize least-squares problems with multiple right-hand sides in parallel by exploiting the dimension of the right-hand side. AMPIM has great potential for application in real-time MPI imaging with high frame rates.
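
A minimal sketch of the underlying linear-algebra idea, with a toy system matrix: stacking frames as columns turns reconstruction into one least-squares solve with multiple right-hand sides.

```python
# Reconstructing many frames at once: A X = B, where each column of B is
# one frame's measured signal. The toy matrix stands in for the real
# MPI system matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 100))      # system matrix (toy stand-in)
X_true = rng.standard_normal((100, 32))  # 32 frames of true images
B = A @ X_true + 0.01 * rng.standard_normal((500, 32))
X_hat, *_ = np.linalg.lstsq(A, B, rcond=None)   # all frames in one call
print(np.abs(X_hat - X_true).max())
```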


Subjects
Algorithms, Computer-Assisted Image Processing, Computer-Assisted Image Processing/methods, Diagnostic Imaging, Imaging Phantoms, Magnetic Phenomena
16.
Methods Mol Biol; 2586: 35-48, 2023.
Article in English | MEDLINE | ID: mdl-36705897

ABSTRACT

Information about RNA secondary structure has been widely applied to the inference of RNA function. However, classical prediction methods are not feasible for long RNAs such as mRNA because of computational time and numerical errors. To overcome these problems, sliding window methods have been applied, although their results are not directly comparable to global RNA structure prediction. In this chapter, we introduce ParasoR, a method designed for parallel computation of genome-wide RNA secondary structures. To enable genome-wide prediction, ParasoR distributes the dynamic programming (DP) matrices required for structure prediction across multiple computational nodes. By storing ratios of DP variables rather than the original variables, ParasoR can locally compute structure scores such as stem probability or accessibility on demand. A comprehensive analysis of local secondary structures by ParasoR is a promising way to detect statistical constraints on long RNAs.


Subjects
Algorithms, RNA, RNA/genetics, RNA/chemistry, Nucleic Acid Conformation, Computational Biology/methods, Messenger RNA
17.
Magn Reson Imaging; 97: 13-23, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36581213

ABSTRACT

Magnetic Resonance Fingerprinting (MRF) is a new quantitative technique for Magnetic Resonance Imaging (MRI). Conventionally, MRF requires sequential correlation of the acquired MRF signals with all the signals of a large MRF dictionary. This computationally intensive matching process is a major challenge in MRF image reconstruction. This paper introduces the use of clustering techniques to reduce the effective size of the MRF dictionary by splitting it into multiple small dictionary components called MRF signal groups. The proposed method has been further optimized for parallel processing to reduce the computation time of MRF pattern matching. A multi-core GPU-based parallel framework has been developed that enables the MRF algorithm to process multiple MRF signals simultaneously. Experiments have been performed on human head and phantom datasets. The results show that the proposed method accelerates the conventional (MATLAB-based) MRF reconstruction time by up to 25× with a single-core CPU implementation, 300× with a multi-core CPU implementation, and 1035× with the proposed multi-core GPU-based framework, while keeping the SNR of the resulting images within a clinically acceptable range. Furthermore, experimental results show that the memory requirements of the MRF dictionary are significantly reduced in the proposed method owing to efficient memory utilization.
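
A hedged sketch of the two-stage matching that clustering enables (the group count and inner-product scoring are illustrative, and scikit-learn's k-means stands in for whatever clustering the paper used):

```python
# Split the MRF dictionary into signal groups with k-means, match a
# fingerprint to the nearest centroid first, then correlate only within
# that group. Metric and group count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def build_groups(dictionary: np.ndarray, n_groups: int = 50) -> KMeans:
    return KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit(dictionary)

def match(signal: np.ndarray, dictionary: np.ndarray, km: KMeans) -> int:
    group = km.predict(signal[None, :])[0]
    members = np.where(km.labels_ == group)[0]
    scores = dictionary[members] @ signal      # correlation-style scores
    return int(members[np.argmax(scores)])

rng = np.random.default_rng(0)
dictionary = rng.standard_normal((5000, 300))  # toy dictionary
km = build_groups(dictionary)
print(match(dictionary[123], dictionary, km))  # expected: 123
```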


Subjects
Brain, Computer-Assisted Image Processing, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Magnetic Resonance Spectroscopy, Algorithms, Imaging Phantoms
18.
Biosystems; 226: 104887, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36990379

ABSTRACT

Although many studies have revealed that biomarker genes for early cancer detection can be found in biomolecular networks, no proper tool exists to discover cancer biomarker genes from the various types of biomolecular networks. Accordingly, we developed a novel Cytoscape app called C-Biomarker.net, which can identify cancer biomarker genes from the cores of various biomolecular networks. Building on recent research, we designed and implemented the software around parallel algorithms proposed in this study for high-performance computing devices. We tested our software on various network sizes and identified the suitable size for each running mode on CPU or GPU. Interestingly, applying the software to 17 cancer signaling pathways, we found that, on average, 70.59% of the top three nodes residing at the innermost core of each pathway are biomarker genes of the cancer corresponding to that pathway. Similarly, using the software we found that 100% of the top ten nodes in the cores of the Human Gene Regulatory (HGR) network and the Human Protein-Protein Interaction (HPPI) network are multi-cancer biomarkers. These case studies provide reliable evidence of the cancer biomarker prediction performance of the software. Through the case studies, we also suggest that the true cores of directed complex networks should be identified by the R-core algorithm rather than the usual K-core. Finally, we compared the predictions of our software with those of other researchers and confirmed that our prediction method outperforms the others. Taken together, C-Biomarker.net is a reliable tool that efficiently detects biomarker nodes from the cores of various large biomolecular networks. The software is available at https://github.com/trantd/C-Biomarker.net.
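
For illustration, extracting the innermost K-core of a network is a one-liner with networkx (an assumed tool, not part of C-Biomarker.net); the R-core variant the authors recommend for directed networks is not available out of the box:

```python
# Innermost (maximum-k) core of a toy undirected graph via networkx.
import networkx as nx

G = nx.gnm_random_graph(200, 800, seed=0)   # stand-in for an HPPI network
core = nx.k_core(G)                         # main core (maximum k)
print(core.number_of_nodes(), "nodes in the innermost core")
```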


Subjects
Mobile Applications, Neoplasms, Humans, Tumor Biomarkers/genetics, Software, Algorithms, Protein Interaction Maps/genetics, Gene Regulatory Networks/genetics, Neoplasms/diagnosis, Neoplasms/genetics, Computational Biology/methods
19.
Neurophotonics; 10(4): 045007, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38076725

ABSTRACT

Significance: Frequent assessment of cerebral blood flow (CBF) is crucial for the diagnosis and management of cerebral vascular diseases. In contrast to large and expensive imaging modalities, such as nuclear medicine and magnetic resonance imaging, optical imaging techniques are portable and inexpensive tools for continuous measurements of cerebral hemodynamics. The recent development of an innovative noncontact speckle contrast diffuse correlation tomography (scDCT) enables three-dimensional (3D) imaging of CBF distributions. However, scDCT requires complex and time-consuming 3D reconstruction, which limits its ability to achieve high spatial resolution without sacrificing temporal resolution and computational efficiency. Aim: We investigate a new diffuse speckle contrast topography (DSCT) method with parallel computation for analyzing scDCT data to achieve fast and high-density two-dimensional (2D) mapping of CBF distributions at different depths without the need for 3D reconstruction. Approach: A new moving window method was adapted to improve the sampling rate of DSCT. A fast computation method utilizing MATLAB functions in the Image Processing Toolbox™ and Parallel Computing Toolbox™ was developed to rapidly generate high-density CBF maps. The new DSCT method was tested for spatial resolution and depth sensitivity in head-simulating layered phantoms and in-vivo rodent models. Results: DSCT enables 2D mapping of the particle flow in the phantom at different depths through the top layer with varied thicknesses. Both DSCT and scDCT enable the detection of global and regional CBF changes in deep brains of adult rats. However, DSCT achieves fast and high-density 2D mapping of CBF distributions at different depths without the need for complex and time-consuming 3D reconstruction. Conclusions: The depth-sensitive DSCT method has the potential to be used as a noninvasive, noncontact, fast, high resolution, portable, and inexpensive brain imager for basic neuroscience research in small animal models and for translational studies in human neonates.
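
A minimal sketch of the dense windowed speckle-contrast computation (K = σ/μ over a sliding window) that such 2D mapping rests on, using uniform filters; the window size and data are illustrative.

```python
# Dense speckle contrast K = sigma/mean over a sliding window, computed
# with uniform filters rather than explicit window loops.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img: np.ndarray, win: int = 7) -> np.ndarray:
    mean = uniform_filter(img, size=win)
    sq_mean = uniform_filter(img * img, size=win)
    var = np.clip(sq_mean - mean**2, 0.0, None)
    return np.sqrt(var) / (mean + 1e-12)

frame = np.random.default_rng(0).random((256, 256))   # toy speckle frame
K = speckle_contrast(frame)
```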

20.
J Comput Chem; 33(16): 1421-32, 2012 Jun 15.
Article in English | MEDLINE | ID: mdl-22496038

ABSTRACT

We present a method, named DCMB, for calculations on large molecules. It combines a parallel divide-and-conquer (DC) method with a mixed-basis (MB) set scheme. In this approach, atomic forces, total energy, and vibrational frequencies are obtained from a series of MB calculations derived from the target system using the DC concept. Unlike fragmentation-based methods, all DCMB calculations are performed over the whole target system and no artificial caps are introduced, so it is particularly useful for charged and/or delocalized systems. By comparing the DCMB results with those from the conventional method, we demonstrate that DCMB is capable of accurately predicting molecular geometries, total energies, and vibrational frequencies of molecules of general interest. We also demonstrate that the high efficiency of the parallel DCMB code holds promise for routine geometry optimization of large complex systems.
