Results 1 - 20 of 292
1.
J Cell Sci; 136(5), 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36861886

ABSTRACT

Since the late 1990s, efforts have been made to utilize cytoskeletal filaments, propelled by molecular motors, for nanobiotechnological applications, for example, in biosensing and parallel computation. This work has led to in-depth insights into the advantages and challenges of such motor-based systems and has yielded small-scale, proof-of-principle applications but, to date, no commercially viable devices. These studies have also elucidated fundamental motor and filament properties and yielded further insights from biophysical assays in which molecular motors and other proteins are immobilized on artificial surfaces. In this Perspective, I discuss the progress towards practically viable applications achieved so far using the myosin II-actin motor-filament system. I also highlight several fundamental insights derived from these studies. Finally, I consider what may be required to achieve real devices in the future, or at least to allow future studies with a satisfactory cost-benefit ratio.


Subjects
Actins, Myosins, Cytoskeleton, Bioassay, Biophysics
2.
Biostatistics; 25(4): 1254-1272, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38649751

ABSTRACT

CRISPR genome engineering and single-cell RNA sequencing have accelerated biological discovery. Single-cell CRISPR screens unite these two technologies, linking genetic perturbations in individual cells to changes in gene expression and illuminating regulatory networks underlying diseases. Despite their promise, single-cell CRISPR screens present considerable statistical challenges. We demonstrate through theoretical and real data analyses that a standard method for estimation and inference in single-cell CRISPR screens, "thresholded regression," exhibits attenuation bias and a bias-variance tradeoff as a function of an intrinsic, challenging-to-select tuning parameter. To overcome these difficulties, we introduce GLM-EIV ("GLM-based errors-in-variables"), a new method for single-cell CRISPR screen analysis. GLM-EIV extends the classical errors-in-variables model to responses and noisy predictors that are exponential family-distributed and potentially impacted by the same set of confounding variables. We develop a computational infrastructure to deploy GLM-EIV across hundreds of processors on cloud platforms (e.g., Microsoft Azure) and high-performance clusters. Leveraging this infrastructure, we apply GLM-EIV to analyze two recent, large-scale, single-cell CRISPR screen datasets, yielding several new insights.
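
As a rough schematic of the errors-in-variables setup described above (the notation here is illustrative, not the paper's): the true perturbation status of each cell is latent, and both the expression response and the observed noisy predictor are exponential-family distributed given it, sharing the same confounders:

```latex
% x_i: latent perturbation status; y_i: expression response;
% \tilde{x}_i: observed noisy predictor (e.g., a gRNA count); z_i: confounders
x_i \sim \mathrm{Bernoulli}(\pi), \qquad
y_i \mid x_i \sim \mathrm{ExpFam}\big(g_y^{-1}(\beta_0 + \beta_1 x_i + \gamma_y^{\top} z_i)\big), \qquad
\tilde{x}_i \mid x_i \sim \mathrm{ExpFam}\big(g_x^{-1}(\alpha_0 + \alpha_1 x_i + \gamma_x^{\top} z_i)\big)
```

Thresholded regression, roughly speaking, corresponds to dichotomizing the noisy predictor and plugging the result in for the latent status, which is what produces the attenuation bias discussed above.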


Subjects
Single-Cell Analysis, Single-Cell Analysis/methods, Humans, Clustered Regularly Interspaced Short Palindromic Repeats/genetics, CRISPR-Cas Systems/genetics, Statistical Models
3.
Brief Bioinform; 24(1), 2023 Jan 19.
Article in English | MEDLINE | ID: mdl-36534961

ABSTRACT

The inference of large-scale gene regulatory networks is essential for understanding comprehensive interactions among genes. Most existing methods are limited to reconstructing networks with a few hundred nodes, so parallel computing paradigms must be leveraged to construct larger networks. We propose a generic parallel framework that enables any existing method, without re-engineering, to infer large networks in parallel while guaranteeing quality output. The framework is tested on 15 inference methods (though it is not limited to them) using in silico benchmarks and real-world large expression matrices, followed by qualitative and speedup assessments. The framework does not compromise the quality of the base serial inference method. We rank the candidate methods and use the top-performing one to infer an Alzheimer's disease (AD)-affected network from large expression profiles of a triple transgenic mouse model comprising 45,101 genes. The resultant network is further explored to obtain hub genes that emerge as functionally related to the disease. We partition the network into 41 modules and conduct pathway enrichment analysis, revealing that many of the participating genes are collectively responsible for several brain disorders, including AD. Finally, we extract the interactions of a few known AD genes and observe that they are periphery genes connected to the network's hub genes. Availability: The R implementation of the framework is downloadable from https://github.com/Netralab/GenericParallelFramework.
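
As a hedged illustration of the general pattern (a Python sketch of mine, not the authors' R framework; the per-gene scoring function is a deliberately trivial stand-in for a real serial inference method): each worker runs the unmodified serial method on one chunk of target genes, and the partial results are merged.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def score_targets(args):
    """Stand-in for any serial per-gene inference method (e.g., regressing
    one target on all candidate regulators); here a toy |correlation| score."""
    expr, targets = args
    return {int(t): np.abs(np.corrcoef(expr[t], expr)[0, 1:]) for t in targets}

def infer_parallel(expr, workers=4):
    """expr: (genes x samples) matrix. Target genes are split into chunks,
    each chunk is scored by the unmodified serial method in its own process,
    and the partial networks are merged."""
    chunks = np.array_split(np.arange(expr.shape[0]), workers)
    scores = {}
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(score_targets, [(expr, c) for c in chunks]):
            scores.update(part)
    return scores  # scores[t][r] = weight of candidate edge regulator r -> target t

if __name__ == "__main__":
    net = infer_parallel(np.random.rand(500, 40))  # 500 genes, 40 samples
```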


Subjects
Alzheimer Disease, Gene Regulatory Networks, Animals, Mice, Alzheimer Disease/genetics, Brain, Genetically Modified Animals, Algorithms
4.
J Comput Chem; 45(8): 498-505, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-37966727

ABSTRACT

The rapid increase in computational power with the latest supercomputers has enabled atomistic molecular dynamics (MD) simulations of biomolecules in biological membranes, cytoplasm, and other cellular environments. These environments often contain a million or more atoms to be simulated simultaneously, so their trajectory analyses involve heavy computations that can become a bottleneck in such studies. Spatial decomposition analysis (SPANA) is a set of analysis tools in the Generalized-Ensemble Simulation System (GENESIS) software package that can carry out MD trajectory analyses of large-scale biological simulations using multiple CPU cores in parallel. SPANA applies spatial decomposition of a large biological system to distribute structural and dynamical analyses across individual CPU cores, which significantly reduces both computational time and memory usage. SPANA opens new possibilities for detailed atomistic analyses of biomacromolecules, as well as solvent water molecules, ions, and metabolites, in MD trajectories of very large biological systems containing millions of atoms in cellular environments.
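
A minimal sketch of the spatial-decomposition idea (my illustration in Python, not SPANA/GENESIS code; the per-cell analysis is a trivial stand-in): atoms are binned into 3-D cells, and each cell is analyzed on its own CPU core, so no worker ever needs the full million-atom frame.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def analyze_cell(coords):
    """Stand-in per-cell analysis: atom count and geometric center."""
    if len(coords) == 0:
        return 0, None
    return len(coords), coords.mean(axis=0)

def decompose_and_analyze(frame, box, ncell=4, workers=4):
    """frame: (N, 3) atom coordinates; box: (3,) box edge lengths."""
    idx = np.floor(frame / box * ncell).astype(int).clip(0, ncell - 1)
    flat = (idx[:, 0] * ncell + idx[:, 1]) * ncell + idx[:, 2]  # linear cell id
    cells = [frame[flat == c] for c in range(ncell**3)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_cell, cells))

if __name__ == "__main__":
    frame = np.random.rand(1_000_000, 3) * 100.0   # toy million-atom frame
    results = decompose_and_analyze(frame, box=np.array([100.0] * 3))
```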


Subjects
Molecular Dynamics Simulation, Software, Computers
5.
J Synchrotron Radiat; 31(Pt 5): 1234-1240, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39172093

ABSTRACT

The development of hard X-ray nanoprobe techniques has given rise to a number of experimental methods, such as nano-XAS, nano-XRD, nano-XRF, ptychography and tomography, each with its own data processing algorithms. With increasing data acquisition rates, the large volume of generated data now poses a major challenge to these algorithms. In this work, an intuitive, user-friendly software system is introduced to integrate and manage these algorithms; by taking advantage of the system's loosely coupled, component-based design, the data processing speed of the imaging algorithm is enhanced through optimization of parallelism efficiency. This study provides practical solutions to the complexity challenges faced in synchrotron data processing.

6.
Brief Bioinform; 23(1), 2022 Jan 17.
Article in English | MEDLINE | ID: mdl-34849567

ABSTRACT

MOTIVATION: Understanding chemical-gene interactions (CGIs) is crucial for screening drugs. Wet experiments are usually costly and laborious, which limits relevant studies to a small scale. By contrast, computational studies enable efficient in-silico exploration. For the CGI prediction problem, a common approach is to perform systematic analyses on a heterogeneous network involving various biomedical entities. Recently, graph neural networks have become popular in the field of relation prediction. However, the inherent heterogeneous complexity of biological interaction networks and the massive amount of data pose enormous challenges. This paper aims to develop a data-driven model that is capable of learning latent information from the interaction network and making correct predictions. RESULTS: We developed BioNet, a deep biological network model with a graph encoder-decoder architecture. The learning process consists of two consecutive steps. First, the graph encoder uses graph convolution to learn latent information embedded in the complex interactions among chemicals, genes, diseases and biological pathways. The embedded information learned by the encoder is then employed to make multi-type interaction predictions between chemicals and genes with a tensor decomposition decoder based on the RESCAL algorithm. BioNet includes 79 325 entities as nodes and 34 005 501 relations as edges. To train such a massive deep graph model, BioNet introduces a parallel training algorithm utilizing multiple Graphics Processing Units (GPUs). The evaluation experiments indicated that BioNet exhibits outstanding prediction performance, with a best area under the Receiver Operating Characteristic (ROC) curve of 0.952, significantly surpassing state-of-the-art methods. For further validation, the top CGIs predicted by BioNet for cancer and COVID-19 were verified against external curated data and published literature.
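
For readers unfamiliar with RESCAL, here is a hedged sketch of the bilinear scoring it is built on (dimensions and variable names are illustrative, not BioNet's): each entity has an embedding vector, each relation type a matrix, and the plausibility of a triple is a bilinear form.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, d = 1000, 4, 64
E = rng.normal(size=(n_entities, d))        # one embedding per entity
R = rng.normal(size=(n_relations, d, d))    # one matrix per relation type

def score(i, r, j):
    """RESCAL plausibility of the triple (entity i, relation r, entity j)."""
    return E[i] @ R[r] @ E[j]

# rank all entities j as candidate partners of entity i under relation r
i, r = 42, 1
scores = E[i] @ R[r] @ E.T      # vectorized: scores[j] = e_i^T R_r e_j
top10 = np.argsort(scores)[::-1][:10]
```

In a trained model, E and R would of course be fit to the observed edges rather than drawn at random.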


Subjects
Computational Biology, Computer Simulation, Biological Models, Neural Networks (Computer)
7.
Sensors (Basel); 24(4), 2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38400470

ABSTRACT

Cardiac CINE, a form of dynamic cardiac MRI, is indispensable in the diagnosis and treatment of heart conditions, offering detailed visualization essential for the early detection of cardiac diseases. As the demand for higher-resolution images increases, so does the volume of data requiring processing, presenting significant computational challenges that can impede the efficiency of diagnostic imaging. Our research presents an approach that takes advantage of the computational power of multiple Graphics Processing Units (GPUs) to address these challenges. GPUs, which can perform large volumes of computation in a short time, have significantly accelerated the cardiac MRI reconstruction process, allowing images to be produced faster. The innovation of our work lies in a multi-device system capable of processing the substantial data volumes demanded by high-resolution, five-dimensional cardiac MRI. This system surpasses the memory capacity limits of a single GPU by partitioning large datasets into smaller, manageable segments for parallel processing, thereby preserving image integrity and accelerating reconstruction times. Built on OpenCL, the system offers adaptability and cross-platform functionality, ensuring wider applicability. The proposed multi-device approach advances medical imaging by accelerating the reconstruction process and facilitating faster and more effective cardiac health assessment.
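
A toy sketch of the partitioning idea (mine, not the paper's OpenCL pipeline; the per-segment kernel is a stand-in inverse FFT): a dataset too large for one device's memory is split along one dimension, each segment is reconstructed independently, and the results are reassembled.

```python
import numpy as np

def reconstruct_segment(seg):
    """Stand-in per-segment kernel: inverse 2-D FFT over the spatial axes."""
    return np.abs(np.fft.ifft2(seg, axes=(-2, -1)))

def reconstruct_multi_device(data, n_devices=2, axis=0):
    """Split along one dimension; in the real system each segment would be
    reconstructed on its own GPU before the results are concatenated."""
    segments = np.array_split(data, n_devices, axis=axis)
    return np.concatenate([reconstruct_segment(s) for s in segments], axis=axis)

# toy 5-D dataset: (time, cardiac phase, coil, ky, kx)
kspace = np.random.randn(8, 4, 2, 64, 64) + 1j * np.random.randn(8, 4, 2, 64, 64)
image = reconstruct_multi_device(kspace)
```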


Subjects
Algorithms, Magnetic Resonance Imaging, Heart/diagnostic imaging, Image Enhancement/methods, Three-Dimensional Imaging/methods
8.
J Comput Chem; 44(20): 1740-1749, 2023 Jul 30.
Article in English | MEDLINE | ID: mdl-37141320

ABSTRACT

Generalized replica exchange with solute tempering (gREST) is an enhanced sampling algorithm for proteins and other systems with rugged energy landscapes. Unlike the replica-exchange molecular dynamics (REMD) method, solvent temperatures are the same in all replicas, while solute temperatures differ and are exchanged frequently between replicas to explore various solute structures. Here, we apply the gREST scheme to large biological systems containing over one million atoms, using a large number of processors on a supercomputer. First, communication time on a multi-dimensional torus network is reduced by optimally matching each replica to MPI processors; this is applicable not only to gREST but also to other multi-copy algorithms. Second, energy evaluations, which are necessary for free energy estimation with the multistate Bennett acceptance ratio (MBAR) method, are performed on-the-fly during the gREST simulations. Using these two advanced schemes, we observed a performance of 57.72 ns/day in 128-replica gREST calculations of a 1.5-million-atom system using 16,384 nodes of Fugaku. These schemes, implemented in the latest version of the GENESIS software, could open new possibilities for answering unresolved questions about large biomolecular complex systems with slow conformational dynamics.
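
For context, replica pairs in this family of methods are swapped with a Metropolis criterion. Written here for plain temperature REMD for simplicity (gREST applies the analogous test with solute-temperature-scaled energy terms):

```latex
P_{\mathrm{acc}}(i \leftrightarrow j) = \min\!\left(1,\, e^{\Delta}\right),
\qquad
\Delta = (\beta_i - \beta_j)\,\big(E(x_i) - E(x_j)\big),
\qquad
\beta = 1/(k_B T)
```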


Subjects
Molecular Dynamics Simulation, Proteins, Proteins/chemistry, Software, Temperature, Acceleration
9.
Brief Bioinform; 22(5), 2021 Sep 02.
Article in English | MEDLINE | ID: mdl-33822883

ABSTRACT

The rapid increase of genome data brought by gene sequencing technologies poses a massive challenge to data processing. To solve the problems caused by enormous data volumes and complex computing requirements, researchers have proposed many methods and tools, which can be divided into three types: big data storage, efficient algorithm design, and parallel computing. The purpose of this review is to investigate popular parallel programming technologies for genome sequence processing. Three common parallel computing models are introduced according to their hardware architectures; each is classified into two or three types and further analyzed in terms of its features. Parallel computing for genome sequence processing is then discussed through four common applications: genome sequence alignment, single nucleotide polymorphism calling, genome sequence preprocessing, and pattern detection and searching. For each application, the background is introduced first, and then the relevant tools and algorithms are summarized in terms of principle, hardware platform, and computing efficiency. The programming model associated with each hardware platform and application provides a reference for researchers choosing high-performance computing tools. Finally, we discuss the limitations and future trends of parallel computing technologies.


Subjects
Electronic Data Processing/methods, Human Genome, Genomics/methods, Single Nucleotide Polymorphism, Sequence Alignment/methods, Algorithms, Base Sequence/genetics, Chromosome Mapping/methods, High-Throughput Nucleotide Sequencing/methods, Humans, Information Storage and Retrieval, Software, Whole Genome Sequencing/methods
10.
Brief Bioinform; 22(6), 2021 Nov 05.
Article in English | MEDLINE | ID: mdl-34254998

ABSTRACT

Statistical analysis of ultrahigh-dimensional omics-scale data has long depended on univariate hypothesis testing. With growing numbers of data features and samples, the obvious next step is to establish multivariable association analysis as a routine method for describing genotype-phenotype associations. Here we present ParProx, a state-of-the-art implementation for optimizing overlapping and non-overlapping group lasso regression models for time-to-event and classification analysis, with selection of variables grouped by biological priors. ParProx enables multivariable model fitting for ultrahigh-dimensional data within an architecture for parallel or distributed computing via latent variable group representation. It thereby aims to produce interpretable regression models consistent with known biological relationships among independent variables, a property often explored post hoc rather than during model estimation. Simulation studies clearly demonstrate the scalability of ParProx with graphics processing units in comparison to existing implementations. We illustrate the tool using three different omics data sets featuring moderate to large numbers of variables, where we use genomic regions and biological pathways as variable groups, rendering the selected independent variables directly interpretable with respect to those groups. ParProx is applicable to a wide range of studies using ultrahigh-dimensional omics data, from genome-wide association analysis to multi-omics studies where model estimation is computationally intractable with existing implementations.
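
The generic form of the model family ParProx optimizes, in illustrative notation: l is the negative log-likelihood of the chosen model (e.g., Cox regression for time-to-event data or logistic regression for classification), the groups g collect coefficients by genomic region or pathway, and the groups may overlap:

```latex
\hat{\beta} = \arg\min_{\beta} \; \ell(\beta)
  \; + \; \lambda \sum_{g \in \mathcal{G}} w_g \, \lVert \beta_g \rVert_2
```

The l2 norm (not squared) zeroes out entire groups at once, which is what makes the selected variables interpretable at the level of regions or pathways.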


Subjects
Algorithms, Computational Biology/methods, Genomics/methods, Regression Analysis, Software, Biomarkers, Disease Susceptibility, Gene Expression Profiling, Humans, Mutation, Prognosis, Proportional Hazards Models, Protein Interaction Mapping
11.
Sensors (Basel); 23(6), 2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991663

ABSTRACT

Traditional centralized parallel computing for power management systems faces prime challenges in execution time, computational complexity, and efficiency, such as processing time and delays in power system condition monitoring, particularly when mining consumer power consumption, weather, and power generation data for detection and prediction. Owing to these constraints, data management has become a critical research consideration and bottleneck. To cope with them, cloud computing-based methodologies have been introduced for managing data efficiently in power management systems. This paper reviews cloud computing architectures that can meet multi-level real-time requirements to improve monitoring and performance, designed for different power system monitoring scenarios. Cloud computing solutions are then discussed in the context of big data, and emerging parallel programming models such as Hadoop, Spark, and Storm are briefly described to analyze their advances, constraints, and innovations. Key performance metrics of cloud computing applications, such as core data sampling, modeling, and analysis of big data competitiveness, are modeled by applying related hypotheses. Finally, the paper introduces a new design concept based on cloud computing and offers recommendations on cloud computing infrastructure and on methods for managing real-time big data in power management systems that address the data mining challenges.

12.
Behav Res Methods; 55(8): 4403-4418, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36627436

ABSTRACT

Item parameter estimation is a crucial step when conducting item factor analysis (IFA). From the frequentist perspective, marginal maximum likelihood (MML) is often treated as the gold standard. However, fitting a high-dimensional IFA model by MML remains a challenging task. The current study demonstrates that, with the help of a GPU (graphics processing unit) and carefully designed vectorization, the computational time of MML can be largely reduced for large-scale IFA applications. In particular, a Python package called xifa (accelerated item factor analysis) is developed, which implements a vectorized Metropolis-Hastings Robbins-Monro (VMHRM) algorithm. Our numerical experiments show that VMHRM on a GPU may run 33 times faster than its CPU version. When the number of factors is at least five, VMHRM (on GPU) is much faster than Bock-Aitkin expectation maximization, the MHRM implemented by mirt (on CPU), and the importance-weighted autoencoder (on GPU). The GPU-implemented VMHRM is most appropriate for high-dimensional IFA with large data sets. We believe that GPU computing will play a central role in large-scale psychometric modeling in the near future.
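
To illustrate the vectorization idea, here is a toy Metropolis-Hastings step that updates all respondents' latent traits in one batched operation (a NumPy sketch of mine using a multidimensional two-parameter logistic model; this is not xifa's internal code):

```python
import numpy as np

rng = np.random.default_rng(1)

def loglik(theta, y, a, b):
    """Bernoulli-logit log-likelihood per person. theta: (N, K) traits;
    y: (N, J) binary responses; a: (J, K) loadings; b: (J,) intercepts."""
    logits = theta @ a.T + b                        # (N, J)
    return (y * logits - np.logaddexp(0.0, logits)).sum(axis=1)

def mh_step(theta, y, a, b, step=0.5):
    """One MH update of ALL N latent-trait vectors at once; the
    accept/reject decision is a single vectorized comparison."""
    prop = theta + step * rng.normal(size=theta.shape)
    cur = loglik(theta, y, a, b) - 0.5 * (theta ** 2).sum(axis=1)  # N(0, I) prior
    new = loglik(prop, y, a, b) - 0.5 * (prop ** 2).sum(axis=1)
    accept = np.log(rng.uniform(size=len(theta))) < new - cur
    return np.where(accept[:, None], prop, theta)

N, J, K = 5000, 30, 5
a, b = rng.uniform(0.5, 1.5, size=(J, K)), rng.normal(size=J)
theta = np.zeros((N, K))
y = (rng.uniform(size=(N, J)) < 1.0 / (1.0 + np.exp(-(theta @ a.T + b)))).astype(float)
for _ in range(100):
    theta = mh_step(theta, y, a, b)
```

On a GPU, the same arithmetic runs over all N respondents in parallel, which is where speedups of the reported magnitude come from.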


Subjects
Algorithms, Computer Graphics, Humans
13.
BMC Bioinformatics; 23(1): 18, 2022 Jan 06.
Article in English | MEDLINE | ID: mdl-34991448

ABSTRACT

BACKGROUND: The function of non-coding RNA sequences is largely determined by their spatial conformation, namely the secondary structure of the molecule, formed by Watson-Crick interactions between nucleotides. Hence, modern RNA alignment algorithms routinely take structural information into account. In order to discover yet unknown RNA families and infer their possible functions, the structural alignment of RNAs is an essential task. This task demands substantial computational resources, especially for aligning many long sequences, and therefore requires efficient algorithms that utilize modern hardware when available. A subset of secondary structures contains overlapping interactions (called pseudoknots), which add complexity to the problem and are often ignored by available software. RESULTS: We present the SeqAn-based software LaRA 2, which is significantly faster than comparable software for accurate pairwise and multiple alignment of structured RNA sequences. In contrast to other programs, our approach can handle arbitrary pseudoknots. As an improved re-implementation of the LaRA tool for structural alignments, LaRA 2 uses multi-threading and vectorization for parallel execution and a new heuristic for computing a lower bound of the solution. Our algorithmic improvements yield a program that is up to 130 times faster than the previous version. CONCLUSIONS: With LaRA 2 we provide a tool to analyse large sets of RNA secondary structures in a relatively short time, based on structural alignment. The produced alignments can be used to derive structural motifs for searching genomic databases.


Subjects
RNA, Software, Algorithms, Base Sequence, Humans, Nucleic Acid Conformation, RNA/genetics, Sequence Alignment, RNA Sequence Analysis
14.
Brief Bioinform; 21(6): 1875-1885, 2020 Dec 01.
Article in English | MEDLINE | ID: mdl-31745550

ABSTRACT

While elementary flux mode (EFM) analysis is now recognized as a cornerstone computational technique for cellular pathway analysis and engineering, applying EFM analysis to genome-scale models remains computationally prohibitive. This article reviews aspects of EFM computation that elucidate the bottlenecks in scaling it. First, algorithms for computing EFMs are reviewed. Next, the impact of redundant constraints, sensitivity to constraint ordering, and network compression are evaluated. Then, the advantages and limitations of recent parallelization and GPU-based efforts are highlighted. The article then reviews alternative pathway analysis approaches that aim to reduce the EFM solution space. Despite advances in EFM computation, our review concludes that continued scaling of EFM computation is necessary to apply EFMs to genome-scale models, and that pathway analysis methods targeting specific pathway properties can provide powerful alternatives to EFM analysis.
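
For reference, the standard definition underlying these computations (generic notation, not specific to this review): given the stoichiometric matrix S and the index set Irr of irreversible reactions, an EFM is a feasible flux vector with minimal support:

```latex
e \neq 0, \qquad S e = 0, \qquad e_i \ge 0 \;\; \forall i \in \mathrm{Irr},
\qquad
\nexists\, v \neq 0 \text{ feasible with } \operatorname{supp}(v) \subsetneq \operatorname{supp}(e)
```

The combinatorial growth of the number of such support-minimal vectors with network size is exactly why genome-scale enumeration is prohibitive.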


Subjects
Algorithms, Metabolic Flux Analysis, Metabolic Flux Analysis/methods, Metabolic Networks and Pathways, Research Design, Systems Biology/methods
15.
Stat Med; 41(27): 5448-5462, 2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36117143

ABSTRACT

Cancer heterogeneity plays an important role in understanding tumor etiology, progression, and response to treatment. To accommodate heterogeneity, cancer subgroup analysis has been extensively conducted. However, most existing studies share the limitation that they cannot accommodate heavy-tailed or contaminated outcomes together with high-dimensional covariates, neither of which is uncommon in biomedical research. In this study, we propose a robust subgroup identification approach based on M-estimators combined with concave and pairwise fusion penalties, which advances beyond existing studies by effectively accommodating high-dimensional data containing outliers. The penalties are applied to both the latent heterogeneity factors and the covariates, so that estimation achieves subgroup identification and variable selection simultaneously, with the number of subgroups a priori unknown. We develop an algorithm based on a parallel computing strategy, with the significant advantage of being able to process large-scale data. The convergence of the proposed algorithm, the oracle property of the penalized M-estimators, and the selection consistency of the proposed BIC criterion are carefully established. Simulation and analysis of TCGA breast cancer data demonstrate that the proposed approach shows promise for efficiently identifying underlying subgroups in high-dimensional data.
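
A schematic of the penalized M-estimation criterion described above, in illustrative notation: rho is a robust loss (e.g., Huber), the subject-specific intercepts mu_i are the latent heterogeneity factors whose fused values define the subgroups, and p(.; lambda) is a concave penalty such as MCP or SCAD:

```latex
\min_{\mu,\,\beta} \;\; \sum_{i=1}^{n} \rho\big(y_i - \mu_i - x_i^{\top}\beta\big)
  \; + \; \sum_{i<j} p\big(\lvert \mu_i - \mu_j \rvert;\, \lambda_1\big)
  \; + \; \sum_{k=1}^{q} p\big(\lvert \beta_k \rvert;\, \lambda_2\big)
```

Subjects whose estimated mu_i are fused to a common value form one subgroup, so the number of subgroups is determined by the data rather than fixed in advance.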


Subjects
Algorithms, Neoplasms, Humans, Computer Simulation, Neoplasms/genetics
16.
Stat Med; 41(15): 2840-2853, 2022 Jul 10.
Article in English | MEDLINE | ID: mdl-35318706

ABSTRACT

Provider profiling has been recognized as a useful tool for monitoring health care quality, facilitating inter-provider care coordination, and improving medical cost-effectiveness. Existing methods often use generalized linear models with fixed provider effects, especially when profiling dialysis facilities. As the number of providers under evaluation escalates, the computational burden becomes formidable even for specially designed workstations. To address this challenge, we introduce a serial blockwise inversion Newton algorithm that exploits the block structure of the information matrix, and we propose a shared-memory divide-and-conquer algorithm to further boost computational efficiency. Beyond the computational challenge, the current literature lacks an appropriate inferential approach for detecting providers with outlying performance, especially when small providers with extreme outcomes are present. In this context, traditional score and Wald tests, which rely on large-sample distributions of the test statistics, yield inaccurate approximations of the small-sample properties. In light of this inferential issue, we develop an exact test of provider effects using exact finite-sample distributions, with the Poisson-binomial distribution as a special case when the outcome is binary. Simulation analyses demonstrate improved estimation and inference over existing methods. The proposed methods are applied to profiling dialysis facilities based on emergency department encounters, using a dialysis patient database from the Centers for Medicare & Medicaid Services.
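
To make the exact-test ingredient concrete, here is a hedged sketch (my illustration, not the authors' implementation) of the Poisson-binomial distribution of a provider's total event count, computed exactly with a convolution dynamic program:

```python
import numpy as np

def poisson_binomial_pmf(probs):
    """Exact pmf of O = sum_k Bernoulli(probs[k]): adding one patient at a
    time convolves the current pmf with (1 - p_k, p_k)."""
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])
    return pmf

probs = np.random.uniform(0.05, 0.30, size=40)  # model-based event probabilities
pmf = poisson_binomial_pmf(probs)
observed = 18
p_upper = pmf[observed:].sum()  # exact upper-tail probability of the observed count
```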


Subjects
Medicare, Quality of Health Care, Aged, Health Personnel, Humans, United States
17.
Stat Appl Genet Mol Biol; 20(4-6): 145-153, 2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34757703

ABSTRACT

Biomolecular networks are often assumed to be scale-free hierarchical networks. Weighted gene co-expression network analysis (WGCNA) treats gene co-expression networks as undirected, scale-free, hierarchical weighted networks. The WGCNA R package stores a network as an adjacency matrix, then calculates the topological overlap matrix (TOM), and then identifies the modules (sub-networks), where each module is assumed to be associated with a certain biological function. The most time-consuming step of WGCNA is calculating the TOM from the adjacency matrix in a single thread. In this paper, that single-threaded TOM algorithm is converted into a multi-threaded one (using WGCNA's default parameter values). In the multi-threaded algorithm, Rcpp is used to let R call a C++ function, which in turn uses OpenMP to start multiple threads that calculate the TOM from the adjacency matrix. On shared-memory multiprocessor systems, the calculation time decreases as the number of CPU cores increases. The algorithm in this paper can promote the application of WGCNA to large data sets and help other research fields identify sub-networks in undirected, scale-free, hierarchical weighted networks. The source code and usage instructions are available at https://github.com/do-somethings-haha/multi-threaded_calculate_unsigned_TOM_from_unsigned_or_signed_Adjacency_Matrix_of_WGCNA.
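
For reference, here is the quantity being parallelized, in a single-threaded NumPy sketch (the formula is the standard unsigned TOM used by WGCNA; the adjacency matrix here is a toy):

```python
import numpy as np

def unsigned_tom(adj):
    """TOM_ij = (sum_u a_iu * a_uj + a_ij) / (min(k_i, k_j) + 1 - a_ij)."""
    a = adj.copy()
    np.fill_diagonal(a, 0.0)              # WGCNA convention: no self-edges
    k = a.sum(axis=1)                     # node connectivities
    shared = a @ a                        # shared-neighbor term: the O(n^3) step
    tom = (shared + a) / (np.minimum.outer(k, k) + 1.0 - a)
    np.fill_diagonal(tom, 1.0)
    return tom

expr = np.random.rand(50, 20)             # 50 genes x 20 samples
adj = np.abs(np.corrcoef(expr)) ** 6      # toy soft-thresholded adjacency
tom = unsigned_tom(adj)
```

The matrix product `a @ a` dominates the cost; it is this loop nest that the paper's OpenMP threads divide across CPU cores.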


Subjects
Gene Regulatory Networks, Software, Algorithms
18.
Article in English | MEDLINE | ID: mdl-39257897

ABSTRACT

In longitudinal cohort studies, it is often of interest to predict the risk of a terminal clinical event using longitudinal predictor data among subjects still at risk at the time of prediction. The at-risk population changes over time, and so do the association between predictors and the outcome and the accumulating longitudinal predictor history. The dynamic nature of this prediction problem has received increasing interest in the literature, but computation often poses a challenge. The widely used joint model of longitudinal and survival data often entails intensive computation and excessive model-fitting time, owing to numerical optimization and the analytically intractable high-dimensional integral in the likelihood function. The problem is exacerbated when the model is fit to a large dataset or involves multiple longitudinal predictors with nonlinear trajectories. We address this challenge from an algorithmic perspective, through a novel two-stage estimation procedure, and from a computing perspective, through Graphics Processing Unit (GPU) programming, implemented via PyTorch, an emerging deep learning framework. Numerical studies demonstrate that the proposed algorithm and software can substantially speed up the estimation of the joint model, particularly with large datasets. The numerical studies also show that accounting for nonlinearity in longitudinal predictor trajectories can improve prediction accuracy relative to joint models that ignore nonlinearity.
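
As a toy illustration of why a GPU tensor framework helps here (my sketch, not the paper's estimator): the joint-model likelihood integrates over each subject's latent random effects, and Monte Carlo draws for all subjects can be evaluated in one batched tensor operation:

```python
import math
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n, m = 10_000, 256                    # subjects, Monte Carlo draws per subject

y = torch.randn(n, device=device)     # toy per-subject outcome summary
b = torch.randn(n, m, device=device)  # random-effect draws b_ij ~ N(0, 1)

# toy conditional log-likelihood l(y_i | b_ij), evaluated for all (i, j) at once
loglik = -0.5 * (y[:, None] - b) ** 2
# per-subject marginal: log (1/m) * sum_j exp(l_ij), computed stably on device
marginal = torch.logsumexp(loglik, dim=1) - math.log(m)
objective = marginal.sum()            # the quantity an optimizer would maximize
```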

19.
Sensors (Basel); 22(13), 2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35808276

ABSTRACT

This paper presents a parallel motion planner for mobile robots and autonomous vehicles based on lattices created in the sensor space of planar range finders. The planner computes paths in a few milliseconds, allowing obstacle avoidance in real time. The proposed sensor-space lattice (SSLAT) motion planner uses a lattice to tessellate the area covered by the sensor and rapidly computes collision-free paths in the robot's surroundings by optimizing a cost function. The cost function guides the vehicle to follow a vector field, which encodes the desired vehicle path. We evaluated our method in challenging cluttered static environments, such as warehouses and forests, and in the presence of moving obstacles, in both simulations and real experiments. In these experiments, we show that our algorithm performs collision checking and path planning faster than baseline methods. Since the method admits both sequential and parallel implementations, we also compare the two versions of SSLAT and show that the parallel implementation, whose run time is independent of the number and shape of the obstacles in the environment, provides a speedup factor greater than 25.

20.
Sensors (Basel); 22(21), 2022 Oct 24.
Article in English | MEDLINE | ID: mdl-36365819

ABSTRACT

Speech recognition refers to the capability of software or hardware to receive a speech signal, identify the speaker's features in it, and recognize the speaker thereafter. In general, the speech recognition process involves three main steps: acoustic processing, feature extraction, and classification/recognition. The purpose of feature extraction is to represent a speech signal by a predetermined number of signal components, because the full information in the acoustic signal is too cumbersome to handle and some of it is irrelevant to the identification task. This study proposes a machine learning-based approach that extracts feature parameters from speech signals to improve the performance of speech recognition applications in real-time smart city environments. Moreover, the principle of mapping blocks of main memory to the cache is exploited to reduce computing time; the cache block size is a parameter that strongly affects cache performance. Implementing such processes in real-time systems demands high computation speed, which requires modern technologies and fast algorithms that accelerate the extraction of feature parameters from speech signals; problems with overclocking during the digital processing of speech signals have yet to be completely resolved. The experimental results demonstrate that the proposed method successfully extracts signal features and achieves competitive classification performance compared with conventional speech recognition algorithms.


Subjects
Machine Learning, Speech, Algorithms, Acoustics, Recognition (Psychology)