Results 1 - 20 of 32
1.
Bioinformatics; 31(1): 40-7, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25178462

ABSTRACT

MOTIVATION: Insertion/deletion (indel) and amino acid substitution are two common events that lead to the evolution of and variations in protein sequences. Further, many human diseases and much of the functional divergence between homologous proteins are more closely related to indel mutations, even though these occur less often than substitution mutations. A reliable identification of indels and their flanking regions is a major challenge in research related to protein evolution, structures and functions. RESULTS: In this article, we propose a novel scheme to predict indel flanking regions in a protein sequence for a given protein fold, based on a variable-order Markov model. The proposed indel flanking region (IndelFR) predictors are designed based on prediction by partial match (PPM) and probabilistic suffix tree (PST), and are referred to as the PPM IndelFR and PST IndelFR predictors, respectively. The overall performance evaluation results show that the proposed predictors are able to predict IndelFRs in protein sequences with high accuracy and F1 measure. In addition, the results show that if one is interested only in predicting IndelFRs in protein sequences, it would be preferable to use the proposed predictors instead of HMMER 3.0 in view of the substantially superior performance of the former.
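As a concrete illustration of window scoring under a Markov-model predictor, the sketch below uses a fixed-order, PPM-style model with Laplace smoothing; it is a simplified stand-in for the paper's variable-order PPM/PST predictors, and the training sequences, model order and scoring window are purely hypothetical.

```python
# Minimal sketch (not the authors' implementation): score a protein window with a
# fixed-order Markov model as a stand-in for the variable-order PPM/PST predictors.
# Windows whose average log-loss under an "IndelFR" model is low would be flagged.
from collections import defaultdict
import math

def train_markov(sequences, order=2):
    """Count k-mer -> next-residue transitions from (hypothetical) training sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(order, len(seq)):
            counts[seq[i - order:i]][seq[i]] += 1
    return counts

def avg_log_loss(window, counts, order=2, alphabet=20):
    """Average negative log2 probability of the window under the model."""
    loss, n = 0.0, 0
    for i in range(order, len(window)):
        ctx, nxt = window[i - order:i], window[i]
        total = sum(counts[ctx].values())
        p = (counts[ctx][nxt] + 1) / (total + alphabet)  # Laplace smoothing
        loss -= math.log2(p)
        n += 1
    return loss / max(n, 1)

# Usage: flag windows whose log-loss under a model trained on known IndelFRs is low.
model = train_markov(["MKTAYIAKQR", "MKTAYLAKQR"], order=2)
print(avg_log_loss("MKTAYIAK", model, order=2))
```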


Subjects
Algorithms, Protein Databases, INDEL Mutation/genetics, Markov Chains, Proteins/genetics, Amino Acid Substitution, Humans, Proteins/chemistry
2.
Sensors (Basel); 16(9), 2016 Sep 21.
Article in English | MEDLINE | ID: mdl-27657080

ABSTRACT

The direction of arrival (DOA) estimation problem is formulated in a compressive sensing (CS) framework, and an extended array aperture is presented to increase the number of degrees of freedom of the array. The ordinary least squares adaptable least absolute shrinkage and selection operator (OLS A-LASSO) is applied for the first time for DOA estimation. Furthermore, a new LASSO algorithm, the minimum variance distortionless response (MVDR) A-LASSO, which solves the DOA problem in the CS framework, is presented. The proposed algorithm depends neither on the singular value decomposition nor on the orthogonality of the signal and noise subspaces. Hence, the DOA estimation can be done without a priori knowledge of the number of sources. The proposed algorithm can estimate up to ((M² - 2)/2 + M - 1)/2 sources using M sensors without any constraints or assumptions about the nature of the signal sources. Furthermore, the proposed algorithm exhibits performance superior to that of the classical DOA estimation methods, especially for low signal-to-noise ratios (SNR), spatially close sources and coherent scenarios.
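For readers unfamiliar with the sparse formulation of DOA estimation, the sketch below solves a plain on-grid LASSO over a uniform-linear-array steering dictionary; it is not the paper's OLS or MVDR A-LASSO, and the array size, angle grid, regularization weight and peak threshold are illustrative assumptions. Complex measurements are handled by stacking real and imaginary parts so that scikit-learn's real-valued Lasso applies.

```python
# Minimal sketch (assumptions only, not the paper's algorithm): on-grid DOA estimation
# by solving an ordinary LASSO over a ULA steering-vector dictionary.
import numpy as np
from sklearn.linear_model import Lasso

M, grid = 8, np.deg2rad(np.arange(-90, 91))                     # sensors, angle grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))   # ULA steering matrix

true_doas = np.deg2rad([-20.0, 15.0])
y = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(true_doas))).sum(axis=1)
y += 0.05 * (np.random.randn(M) + 1j * np.random.randn(M))      # measurement noise

# Stack real/imaginary parts to obtain a real-valued LASSO problem.
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([y.real, y.imag])
est = Lasso(alpha=0.05, max_iter=10000).fit(A_ri, y_ri)

peaks = np.rad2deg(grid[np.abs(est.coef_) > 0.2])               # grid points with energy
print("estimated DOAs (deg):", peaks)
```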

3.
BMC Bioinformatics; 16: 393, 2015 Nov 23.
Article in English | MEDLINE | ID: mdl-26597571

ABSTRACT

BACKGROUND: The alignment of multiple protein sequences is one of the most commonly performed tasks in bioinformatics. In spite of considerable recent research and effort devoted to improving the performance of multiple sequence alignment (MSA) algorithms, finding a highly accurate alignment between multiple protein sequences is still a challenging problem. RESULTS: We propose a novel and efficient algorithm, called MSAIndelFR, for multiple sequence alignment using the information on the predicted locations of IndelFRs and the computed average log-loss values obtained from IndelFR predictors, each of which is designed for a different protein fold. We demonstrate that introducing into the proposed algorithm a new variable gap penalty function, based on the predicted IndelFR locations and the computed average log-loss values, substantially improves the protein alignment accuracy. This is illustrated by evaluating the performance of the algorithm in aligning sequences belonging to the protein folds for which the IndelFR predictors already exist and by using the reference alignments of four popular benchmarks, BAliBASE 3.0, OXBENCH, PREFAB 4.0, and SABRE (SABmark 1.65). CONCLUSIONS: We have proposed a novel and efficient algorithm, the MSAIndelFR algorithm, for multiple protein sequence alignment incorporating a new variable gap penalty function. It is shown that the performance of the proposed algorithm is superior to that of the most widely used alignment algorithms, Clustal W2, Clustal Omega, Kalign2, MSAProbs, MAFFT, MUSCLE, ProbCons and Probalign, in terms of both the sum-of-pairs and total column metrics.
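The sketch below illustrates the general idea of a variable (position-dependent) gap penalty in a pairwise global alignment; it is an assumption-level simplification of MSAIndelFR, and the IndelFR flags, scores and penalties are hypothetical.

```python
# Minimal sketch (a simplification, not MSAIndelFR): Needleman-Wunsch scoring in which
# the gap penalty is lowered at positions predicted to lie inside an indel flanking
# region (IndelFR), so gaps are steered toward those regions.
import numpy as np

def align(a, b, in_indelfr_b, match=2, mismatch=-1, gap=-4, gap_indelfr=-1):
    """Global alignment score with a position-dependent gap penalty on sequence b."""
    gap_b = [gap_indelfr if in_indelfr_b[j] else gap for j in range(len(b))]
    F = np.zeros((len(a) + 1, len(b) + 1))
    F[1:, 0] = np.cumsum([gap] * len(a))
    F[0, 1:] = np.cumsum(gap_b)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,            # (mis)match
                          F[i - 1, j] + gap,              # gap in b
                          F[i, j - 1] + gap_b[j - 1])     # gap in a, position-dependent
    return F[-1, -1]

# Usage: positions 3-5 of the second sequence are (hypothetically) inside an IndelFR.
flags = [False, False, False, True, True, True, False, False]
print(align("MKTAYIAK", "MKTAYKAK", flags))
```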


Subjects
Algorithms, Computational Biology/methods, INDEL Mutation/genetics, Proteins/chemistry, Sequence Alignment/methods, Protein Sequence Analysis/methods, Humans
4.
Comput Methods Programs Biomed; 248: 108122, 2024 May.
Article in English | MEDLINE | ID: mdl-38507960

ABSTRACT

BACKGROUND AND OBJECTIVE: Most of the existing machine learning-based heart sound classification methods achieve limited accuracy, since they primarily depend on single-domain feature information and tend to focus equally on each part of the signal rather than employing a selective attention mechanism. In addition, they fail to exploit convolutional neural network (CNN)-based features with an effective fusion strategy. METHODS: In order to overcome these limitations, this paper proposes a novel multimodal attention convolutional neural network (MACNN) with a feature-level fusion strategy, in which Mel-cepstral domain as well as general frequency domain features are incorporated to increase the diversity of the features. In the proposed method, DilationAttenNet is first utilized to construct attention-based CNN feature extractors, and these feature extractors are then jointly optimized in MACNN at the feature level. The attention mechanism aims to suppress irrelevant information and focus on crucial diverse features extracted from the CNN. RESULTS: Extensive experiments are carried out to study the efficacy of feature-level fusion in comparison with early fusion. The results show that the proposed MACNN method significantly outperforms the state-of-the-art approaches in terms of accuracy and score for the two publicly available GitHub and PhysioNet datasets. CONCLUSION: The findings of our experiments demonstrate the high performance of the proposed MACNN for heart sound classification, and hence its potential clinical usefulness in the identification of heart diseases. This technique can assist cardiologists and researchers in the design and development of heart sound classification methods.


Subjects
Heart Diseases, Heart Sounds, Humans, Machine Learning, Neural Networks (Computer)
5.
IEEE Trans Image Process; 30: 7527-7540, 2021.
Article in English | MEDLINE | ID: mdl-34403342

ABSTRACT

In this paper, a new regularization term in the form of an L1-norm based fractional gradient vector flow (LF-GGVF) is presented for the task of image denoising. A fractional-order variational method is formulated and then utilized for estimating the proposed LF-GGVF. Overlapping group sparsity and the LF-GGVF are used as priors in the image denoising optimization framework. The Riemann-Liouville derivative is used for approximating the fractional-order derivatives present in the optimization framework, which helps in boosting the denoising performance. The numerical optimization is performed in an alternating manner using the well-known alternating direction method of multipliers (ADMM) and split Bregman techniques. The resulting system of linear equations is then solved using an efficient numerical scheme. A variety of simulated data, including test images contaminated by additive white Gaussian noise, is used for experimental validation. The experimental results demonstrate that the performance of the proposed approach in terms of noise suppression and edge preservation is better than that of several other methods.
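The sketch below shows only the alternating-direction structure referred to in the abstract, applied to a much simpler 1-D total-variation denoising problem; the fractional-order LF-GGVF term, overlapping group sparsity and 2-D operators of the paper are not modeled, and all parameter values are illustrative.

```python
# Minimal sketch (a 1-D total-variation stand-in, not the paper's LF-GGVF model):
# ADMM for  min_x 0.5*||x - y||^2 + lam*||D x||_1,  where D takes first differences.
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=2.0, iters=200):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # (n-1) x n difference operator
    x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
    Q = np.eye(n) + rho * D.T @ D                  # fixed system for the x-update
    for _ in range(iters):
        x = np.linalg.solve(Q, y + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft threshold
        u += D @ x - z                             # dual update
    return x

y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * np.random.randn(100)
print(np.round(tv_denoise_admm(y, lam=1.5)[45:55], 2))
```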

6.
IEEE Trans Med Imaging; 40(5): 1363-1376, 2021 May.
Article in English | MEDLINE | ID: mdl-33507867

ABSTRACT

To better understand early brain development in health and disorder, it is critical to accurately segment infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep learning-based methods have achieved state-of-the-art performance; however, one of the major limitations is that such learning-based methods may suffer from the multi-site issue, that is, models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods. Training/validation subjects are from UNC (MAP) and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory University. By the time of writing, 30 automatic segmentation methods had participated in iSeg-2019. In this article, the 8 top-ranked methods are reviewed by detailing their pipelines/implementations, presenting experimental results, and evaluating performance across different sites in terms of whole brain, regions of interest, and gyral landmark curves. We further point out their limitations and possible directions for addressing the multi-site issue. We find that multi-site consistency is still an open issue. We hope that the multi-site dataset in iSeg-2019 and this review article will attract more researchers to address the challenging and critical multi-site issue in practice.
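A typical way to quantify such segmentation performance is the Dice similarity coefficient per tissue class; the sketch below is an assumption about the style of evaluation, not the challenge's official scoring code, and the label convention is hypothetical.

```python
# Minimal sketch (assumed evaluation style): Dice similarity coefficient per tissue
# class (e.g. 1=CSF, 2=GM, 3=WM) between a predicted and a reference segmentation.
import numpy as np

def dice_per_class(pred, ref, labels=(1, 2, 3)):
    scores = {}
    for c in labels:
        p, r = pred == c, ref == c
        denom = p.sum() + r.sum()
        scores[c] = 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0
    return scores

# Usage on toy volumes (random labels stand in for real segmentations).
pred = np.random.randint(0, 4, size=(8, 8, 8))
ref = np.random.randint(0, 4, size=(8, 8, 8))
print(dice_per_class(pred, ref))
```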


Subjects
Algorithms, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain Mapping, Gray Matter, Humans, Infant
7.
IEEE Trans Image Process; 18(8): 1782-96, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19389696

ABSTRACT

Traditional statistical detectors for discrete wavelet transform (DWT)-based image watermarking use probability density functions (PDFs) that match the empirical PDF of the image coefficients poorly, because they use a fixed number of parameters. Hence, the decision values obtained from the estimated thresholds of these detectors provide substandard detection performance. In this paper, a new detector is proposed for DWT-based additive image watermarking, wherein a PDF based on the Gauss-Hermite expansion is used, since this PDF provides a better statistical match to the empirical PDF by utilizing an appropriate number of parameters estimated from higher-order moments of the image coefficients. The decision threshold and the receiver operating characteristics are derived for the proposed detector. Experimental results on test images demonstrate that the proposed watermark detector performs better than other standard detectors, such as the Gaussian and generalized Gaussian (GG) ones, in terms of the probabilities of detection and false alarm as well as the efficacy. It is also shown that the detection performance of the proposed detector is more robust than that of the competitive GG detector in the case of compression, additive white Gaussian noise, filtering, or geometric attack.

8.
Article in English | MEDLINE | ID: mdl-30668499

ABSTRACT

Structural information, in particular the edges present in an image, is the part most readily noticed by the human eye. Therefore, it is important to denoise this information effectively for better visualization. Recently, research has been carried out to characterize the structural information into plain and edge patches and denoise them separately. However, the information about the geometrical orientation of the edges is not considered, leading to sub-optimal denoising results. This has motivated us to introduce in this paper an adaptive steerable total variation regularizer (ASTV) based on geometric moments. The proposed ASTV regularizer is capable of denoising the edges based on their geometrical orientation, thus boosting the denoising performance. Further, earlier works exploited the sparsity of natural images in the DCT and wavelet domains, which helps in improving the denoising performance. Based on this observation, we introduce the sparsity of an image in an orthogonal moment domain, in particular the Tchebichef moment domain. We then propose a new sparse regularizer, which is a combination of the Tchebichef moment-based and ASTV-based regularizers. The overall denoising framework is optimized using a split Bregman-based multivariable minimization technique. Experimental results demonstrate the competitiveness of the proposed method with existing ones in terms of both objective and subjective image quality.

9.
ISA Trans; 85: 293-304, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30392726

ABSTRACT

Recently, sparse representation has attracted a great deal of interest in many image processing applications. However, the idea of self-similarity, which is inherently present in an image, has not been considered in standard sparse representation. Moreover, if the dictionary atoms are not constrained to be correlated, the redundancy present in the dictionary may not improve the performance of sparse coding. This paper addresses these issues by using orthogonal moments to extract the correlations among the atoms and group them together based on the characteristics of the noisy image patches. Most of the existing sparsity-based image denoising methods utilize an over-complete dictionary, for example, the K-SVD method, which requires solving a minimization problem that is computationally challenging. In order to improve the computational efficiency and the correlation between the sparse coefficients, this paper employs the concept of overlapping group sparsity formulated for both convex and non-convex denoising frameworks. The optimization method used for solving the denoising framework is the well-known majorization-minimization method, which has been applied successfully in sparse approximation and statistical estimation. Experimental results demonstrate that the proposed method offers, in general, a performance that is better than that of the existing state-of-the-art methods irrespective of the noise level and the image type.

10.
IEEE Trans Image Process; 17(10): 1755-71, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18784025

ABSTRACT

The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. The conventional PDFs usually have a limited number of parameters, calculated from the first few moments only. Consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image. As a result, the shrinkage function utilizing any of these density functions provides a substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to be able to incorporate an appropriate number of parameters that depend on the higher-order moments, a PDF using a series expansion in terms of the Hermite polynomials, which are orthogonal with respect to the standard Gaussian weight function, is introduced. A modification of the series is introduced so that only a finite number of terms is needed to model the image wavelet coefficients, while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, in both the subband-adaptive and locally adaptive conditions, provides a performance better than that of most methods that use PDFs with a limited number of parameters.
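The sketch below fits a Gram-Charlier-style Hermite series to standardized coefficients, under the assumption that the paper's PDF has a broadly similar series structure; it does not include the paper's modification that enforces non-negativity, and the number of terms and the heavy-tailed test data are illustrative.

```python
# Minimal sketch (a Gram-Charlier-style stand-in, not the paper's exact PDF):
# approximate the PDF of standardized coefficients as the Gaussian weight times a
# finite Hermite series, with c_k = E[He_k(X)] / k! estimated from sample moments.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial, sqrt, pi

def hermite_series_pdf(samples, n_terms=6):
    x = (samples - samples.mean()) / samples.std()           # standardize
    c = [np.mean(hermeval(x, [0] * k + [1])) / factorial(k) for k in range(n_terms)]
    def pdf(t):
        return np.exp(-t**2 / 2) / sqrt(2 * pi) * hermeval(t, c)
    return pdf

coeffs = np.random.laplace(scale=1.0, size=20000)             # heavy-tailed stand-in
pdf = hermite_series_pdf(coeffs, n_terms=6)
print(pdf(np.array([0.0, 1.0, 2.0])))
```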


Subjects
Algorithms, Artifacts, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Bayes Theorem, Computer Simulation, Statistical Data Interpretation, Statistical Models, Reproducibility of Results, Sensitivity and Specificity
11.
Article in English | MEDLINE | ID: mdl-30489260

ABSTRACT

OBJECTIVE: Extraction and analysis of various clinically significant features of photoplethysmogram (PPG) signals, for monitoring several physiological parameters as well as for biometric authentication, have become important areas of research in recent years. However, PPG signal compression, particularly quality-guaranteed compression, and steganography of the patient's secret information are still lagging behind. METHOD: This paper presents a robust, reliable and highly efficient singular value decomposition (SVD) and lossless ASCII character encoding (LL-ACE) based quality-guaranteed PPG compression algorithm. This algorithm can be used to compress not only PPG signals but also steganographed PPG signals that include the patient information. RESULT AND CONCLUSION: It is worth mentioning that such an algorithm is being proposed for the first time to compress steganographed PPG signals. The algorithm is tested on PPG signals collected from four different databases, and its performance is assessed using both quantitative and qualitative measures. The proposed steganographed PPG compression algorithm provides a compression ratio that is much higher than that provided by other algorithms designed to compress the PPG signals only. SIGNIFICANCE: (1) the clinical quality of the reconstructed PPG signal can be controlled precisely, (2) the patient's personal information is restored with no errors, (3) the compression ratio is high, and (4) the PPG signal reconstruction error depends neither on the steganographic operation nor on the size of the patient information data.

12.
IEEE Trans Biomed Circuits Syst; 12(1): 137-150, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29377802

ABSTRACT

Advancements in electronics and miniaturized device fabrication technologies have enabled simultaneous acquisition of multiple biosignals (MBioSigs), but the compression of MBioSigs remains unexplored to date. This paper presents a robust singular value decomposition (SVD) and American Standard Code for Information Interchange (ASCII) character encoding-based algorithm for compression of MBioSigs, to the best of our knowledge for the first time. At the preprocessing stage, MBioSigs are denoised, downsampled and then transformed to a two-dimensional (2-D) data array. SVD of the 2-D array is carried out and the dimensionality of the singular values is reduced. The resulting matrix is then compressed by a lossless ASCII character encoding-based technique. The proposed compression algorithm can be used in a variety of modes, such as lossless, with or without the downsampling operation. The compressed file is then uploaded to a hypertext preprocessor (PHP)-based website for remote monitoring applications. Evaluation results show that the proposed algorithm provides a good compression performance; in particular, the mean opinion score of the reconstructed signal falls under the category "very good" as per the gold-standard subjective measure.
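The sketch below illustrates only the SVD stage of such a pipeline: a 1-D signal is reshaped into a 2-D array, the singular-value expansion is truncated, and the percent RMS difference (PRD) of the reconstruction is reported; the denoising and lossless ASCII-encoding stages of the paper are omitted, and the reshape width and rank are arbitrary assumptions.

```python
# Minimal sketch (illustration only, not the authors' pipeline): rank-truncated SVD
# reconstruction of a biosignal reshaped into a 2-D array.
import numpy as np

def svd_compress(signal, n_cols=64, rank=8):
    n = (len(signal) // n_cols) * n_cols
    X = signal[:n].reshape(-1, n_cols)                 # 1-D signal -> 2-D array
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]       # keep only the leading components
    prd = 100 * np.linalg.norm(X - Xr) / np.linalg.norm(X)  # percent RMS difference
    return Xr.ravel(), prd

sig = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.05 * np.random.randn(4096)
rec, prd = svd_compress(sig)
print(f"PRD = {prd:.2f} %")
```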


Subjects
Algorithms, Data Compression/methods
13.
Neural Netw; 96: 128-136, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28987976

ABSTRACT

This paper presents an algorithm for solving the minimum-energy optimal control problem of conductance-based spiking neurons. The basic procedure is (1) to construct a conductance-based spiking neuron oscillator as an affine nonlinear system, (2) to formulate the optimal control problem of the affine nonlinear system as a boundary value problem based on Pontryagin's maximum principle, and (3) to solve the boundary value problem using the homotopy perturbation method. The construction of the minimum-energy optimal control in the framework of the homotopy perturbation technique is novel and valid for a broad class of nonlinear conductance-based neuron models. The applicability of our method in the FitzHugh-Nagumo and Hindmarsh-Rose models is validated by simulations.


Subjects
Action Potentials, Neurological Models, Neurons, Action Potentials/physiology, Algorithms, Neurons/physiology, Nonlinear Dynamics
14.
IEEE Trans Neural Netw Learn Syst; 28(1): 149-163, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26685272

ABSTRACT

In this paper, we present a multiclass data classifier, denoted the optimal conformal transformation kernel (OCTK) classifier, based on learning a specific kernel model, the CTK, and utilize it in two types of image recognition tasks, namely, face recognition and object categorization. We show that the learned CTK can lead to a desirable spatial geometry change in mapping data from the input space to the feature space, so that the local spatial geometry of heterogeneous regions is magnified to favor a more refined distinction, while that of homogeneous regions is compressed to neglect or suppress the intraclass variations. This property of the learned CTK is of great benefit in image recognition, since in image recognition we always face the challenge that the images to be classified exhibit large intraclass diversity and interclass similarity. Experiments on face recognition and object categorization show that the proposed OCTK classifier achieves the best or second-best recognition result compared with the state-of-the-art classifiers, no matter what kind of feature or feature representation is used. In terms of computational efficiency, the OCTK classifier performs significantly faster than the linear support vector machine classifier (linear LIBSVM).

15.
IEEE Trans Med Imaging; 24(6): 743-54, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15957598

ABSTRACT

A novel technique for despeckling medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution, and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast-detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, a comparison with two two-stage schemes, wherein the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme performs better in terms of both the signal-to-noise ratio and the visual quality.
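The sketch below shows a generic uniform threshold quantizer with an enlarged zero-zone, the basic device the abstract adapts; the adaptive choice of the zero-zone from the speckle statistics and the subband coder itself are not modeled, and the step size and zero-zone width are illustrative.

```python
# Minimal sketch (illustrative only): a uniform quantizer whose enlarged zero-zone
# discards small coefficients, the building block adapted in the abstract for
# simultaneous despeckling and quantization of log-domain wavelet subbands.
import numpy as np

def zero_zone_quantize(coeffs, step, zero_zone):
    q = np.zeros_like(coeffs, dtype=int)
    keep = np.abs(coeffs) > zero_zone              # coefficients outside the zero-zone
    q[keep] = np.round(coeffs[keep] / step).astype(int)
    return q

def dequantize(q, step):
    return q * step                                # mid-tread reconstruction

c = np.random.laplace(scale=2.0, size=10)          # heavy-tailed stand-in coefficients
q = zero_zone_quantize(c, step=1.0, zero_zone=1.5)
print(c.round(2), q, dequantize(q, 1.0), sep="\n")
```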


Subjects
Algorithms, Data Compression/methods, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Ultrasonography/methods, Computer Simulation, Humans, Biological Models, Statistical Models, Computer-Assisted Numerical Analysis, Reproducibility of Results, Sensitivity and Specificity, Computer-Assisted Signal Processing
16.
IEEE Trans Neural Netw; 16(2): 460-74, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15787152

ABSTRACT

In this paper, we present a method of kernel optimization by maximizing a measure of class separability in the empirical feature space, a Euclidean space in which the training data are embedded in such a way that the geometrical structure of the data in the feature space is preserved. Employing a data-dependent kernel, we derive an effective kernel optimization algorithm that maximizes the class separability of the data in the empirical feature space. It is shown that there exists a close relationship between the class separability measure introduced here and the alignment measure defined recently by Cristianini. Extensive simulations are carried out which show that the optimized kernel is more adaptive to the input data and leads to a substantial, sometimes significant, improvement in the performance of various data classification algorithms.
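The alignment measure mentioned in the abstract can be computed directly from a kernel matrix and the labels; the sketch below does exactly that for a Gaussian kernel on toy data, and is not the paper's own class-separability criterion or optimization algorithm.

```python
# Minimal sketch (Cristianini's kernel-target alignment, cited in the abstract, rather
# than the paper's separability measure): alignment between a Gaussian kernel matrix
# and the ideal kernel y y^T for binary labels in {-1, +1}.
import numpy as np

def gaussian_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def alignment(K, y):
    Y = np.outer(y, y)                              # ideal kernel for the labels
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Usage: two well-separated clusters should give a high alignment value.
X = np.vstack([np.random.randn(20, 2) - 2, np.random.randn(20, 2) + 2])
y = np.array([-1] * 20 + [1] * 20)
print(alignment(gaussian_kernel(X), y))             # larger = kernel matches the labels better
```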


Subjects
Empirical Research, Neural Networks (Computer)
17.
IEEE Trans Neural Netw; 15(2): 417-29, 2004 Mar.
Article in English | MEDLINE | ID: mdl-15384534

ABSTRACT

This paper presents a new self-creating model of a neural network in which a branching mechanism is incorporated into competitive learning. Unlike other self-creating models, the proposed scheme, called branching competitive learning (BCL), adopts a special node-splitting criterion that is based mainly on geometrical measurements of the movement of the synaptic vectors in the weight space. Compared with other self-creating and non-self-creating competitive learning models, the BCL network is more efficient at capturing the spatial distribution of the input data and, therefore, tends to give better clustering or quantization results. We demonstrate the ability of the BCL model to appropriately estimate the number of clusters in a data distribution, show its adaptability to nonstationary data inputs and, moreover, present a scheme leading to multiresolution data clustering. Extensive experiments on vector quantization for image compression are given to illustrate the effectiveness of the BCL algorithm.


Subjects
Neural Networks (Computer), Artificial Intelligence
18.
IEEE Trans Biomed Circuits Syst; 8(5): 716-28, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25388879

ABSTRACT

This paper presents a method for automatic segmentation of nuclei in phase-contrast images using the intensity, convexity and texture of the nuclei. The proposed method consists of three main stages: preprocessing, h-maxima transformation-based marker-controlled watershed segmentation (h-TMC), and texture analysis. In the preprocessing stage, a top-hat filter is used to increase the contrast and suppress the non-uniform illumination, shading, and other imaging artifacts in the input image. The nuclei segmentation stage consists of a distance transformation, h-maxima transformation and watershed segmentation. These transformations utilize the intensity information and the convexity property of the nucleus for the purpose of detecting a single marker in every nucleus; these markers are then used in the h-TMC watershed algorithm to obtain segments of the nuclei. However, dust particles, imaging artifacts, or prolonged cell cytoplasm may falsely be segmented as nuclei at this stage, and thus may lead to an inaccurate analysis of the cell image. In order to identify and remove these non-nuclei segments, in the third stage a texture analysis is performed that uses six of the Haralick measures along with the AdaBoost algorithm. The novelty of the proposed method is that it introduces a systematic framework that utilizes intensity, convexity, and texture information to achieve a high accuracy for automatic segmentation of nuclei in phase-contrast images. Extensive experiments are performed demonstrating the superior performance (precision = 0.948; recall = 0.924; F1-measure = 0.936; validation based on ∼4850 manually-labeled nuclei) of the proposed method.
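A minimal sketch of the first two stages (top-hat preprocessing and h-maxima marker-controlled watershed) is given below, assuming standard scikit-image/SciPy routines rather than the authors' implementation; the texture-based rejection of non-nuclei segments is omitted, and the filter radius, h value and threshold are illustrative.

```python
# Minimal sketch (library calls assumed from scikit-image/SciPy, not the authors' code):
# top-hat preprocessing followed by an h-maxima marker-controlled watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import white_tophat, h_maxima, disk
from skimage.segmentation import watershed

def segment_nuclei(image, h=5, tophat_radius=15, threshold=10):
    img = white_tophat(image, disk(tophat_radius))      # suppress uneven illumination
    mask = img > threshold                              # rough foreground mask
    dist = ndi.distance_transform_edt(mask)             # convexity cue: one peak per nucleus
    markers, _ = ndi.label(h_maxima(dist, h))           # h-maxima -> one marker per nucleus
    return watershed(-dist, markers, mask=mask)         # labeled nuclei

# Usage on a toy image with two bright blobs standing in for nuclei.
image = np.zeros((128, 128), dtype=np.uint8)
image[20:40, 20:40] = 80
image[70:92, 60:82] = 90
labels = segment_nuclei(image)
print("nuclei found:", labels.max())
```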


Subjects
Cell Nucleus, Cytological Techniques/methods, Computer-Assisted Image Processing/methods, Phase-Contrast Microscopy/methods, Algorithms, Cluster Analysis, HeLa Cells, Humans
19.
IEEE Trans Image Process; 23(10): 4348-60, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25051554

ABSTRACT

In the past decade, several schemes for digital image watermarking have been proposed to protect the copyright of an image document or to provide proof of ownership in some identifiable fashion. This paper proposes a novel multiplicative watermarking scheme in the contourlet domain. The effectiveness of a watermark detector depends highly on the modeling of the transform-domain coefficients. In view of this, we first investigate the modeling of the contourlet coefficients by the alpha-stable distributions. It is shown that the univariate alpha-stable distribution fits the empirical data more accurately than the formerly used distributions, such as the generalized Gaussian (GG) and Laplacian, do. We also show that the bivariate alpha-stable distribution can capture the across-scale dependencies of the contourlet coefficients. Motivated by the modeling results, a blind watermark detector in the contourlet domain is designed by using the univariate and bivariate alpha-stable distributions. It is shown that the detectors based on both of these distributions provide higher detection rates than one based on the generalized Gaussian distribution. However, a watermark detector designed based on the alpha-stable distribution with a value of its parameter α other than 1 or 2 is computationally expensive because of the lack of a closed-form expression for the distribution in this case. Therefore, a watermark detector is designed based on the bivariate Cauchy member of the alpha-stable family, for which α = 1. The resulting design yields a detector of significantly reduced complexity that provides a performance much superior to that of the GG detector and very close to that of the detector corresponding to the best-fit alpha-stable distribution. The robustness of the proposed bivariate Cauchy detector against various kinds of attacks, such as noise, filtering, and compression, is studied and shown to be superior to that of the generalized Gaussian detector.

20.
Article in English | MEDLINE | ID: mdl-24091400

ABSTRACT

Regulatory interactions among genes and gene products are dynamic processes and hence modeling these processes is of great interest. Since genes work in a cascade of networks, reconstruction of gene regulatory network (GRN) is a crucial process for a thorough understanding of the underlying biological interactions. We present here an approach based on pairwise correlations and lasso to infer the GRN, taking into account the variable time delays between various genes. The proposed method is applied to both synthetic and real data sets, and the results on synthetic data show that the proposed approach outperforms the current methods. Further, the results using real data are more consistent with the existing knowledge concerning the possible gene interactions.
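The sketch below is a simplified stand-in for such a delay-aware lasso: each gene's expression at time t is regressed on all genes at time t - d for a few candidate delays d, and nonzero coefficients are read as putative regulatory edges; the correlation-based prescreening of the paper and all parameter values here are assumptions.

```python
# Minimal sketch (a simplification of the abstract's correlation + lasso pipeline):
# lasso regression of each target gene on time-delayed expression of all genes.
import numpy as np
from sklearn.linear_model import Lasso

def infer_grn(expr, delays=(1, 2), alpha=0.05):
    """expr: array of shape (time_points, genes). Returns (delay, weights) per gene."""
    T, G = expr.shape
    edges = {}
    for g in range(G):
        best = None
        for d in delays:
            X, y = expr[:-d, :], expr[d:, g]             # regulators at t-d, target at t
            model = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
            score = model.score(X, y)
            if best is None or score > best[0]:
                best = (score, d, model.coef_)
        edges[g] = best[1:]                               # (chosen delay, regulator weights)
    return edges

expr = np.cumsum(np.random.randn(50, 5), axis=0)          # toy expression time series
print(infer_grn(expr)[0])
```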


Subjects
Computational Biology/methods, Gene Regulatory Networks/genetics, Oligonucleotide Array Sequence Analysis/methods, Cell Cycle/genetics, HeLa Cells, Humans, Linear Models, Time Factors