1.
IEEE Trans Med Imaging ; 40(5): 1363-1376, 2021 May.
Article in English | MEDLINE | ID: mdl-33507867

ABSTRACT

To better understand early brain development in health and disorder, it is critical to accurately segment infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep learning-based methods have achieved state-of-the-art performance; however, one of their major limitations is the multi-site issue: models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods. Training/validation subjects are from UNC (MAP), and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory University. At the time of writing, 30 automatic segmentation methods had participated in iSeg-2019. In this article, we review the 8 top-ranked methods by detailing their pipelines/implementations, presenting experimental results, and evaluating performance across different sites in terms of whole brain, regions of interest, and gyral landmark curves. We further point out their limitations and possible directions for addressing the multi-site issue, and we find that multi-site consistency is still an open problem. We hope that the multi-site dataset in iSeg-2019 and this review article will attract more researchers to address this challenging and critical issue in practice.
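
Segmentation challenges of this kind are conventionally scored per tissue class with the Dice similarity coefficient; the abstract does not name the metric, so the sketch below is an assumption, and the label convention (1=CSF, 2=GM, 3=WM) is hypothetical:

```python
import numpy as np

def dice_per_class(pred, truth, labels=(1, 2, 3)):
    """Dice similarity coefficient for each tissue label.

    Hypothetical label convention: 1=CSF, 2=GM, 3=WM.
    Returns 1.0 for a label absent from both volumes.
    """
    scores = {}
    for lab in labels:
        p = (pred == lab)
        t = (truth == lab)
        denom = p.sum() + t.sum()
        scores[lab] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores

# toy 2-D "volumes" standing in for segmented MR slices
truth = np.array([[1, 1, 2], [2, 3, 3]])
pred  = np.array([[1, 2, 2], [2, 3, 3]])
scores = dice_per_class(pred, truth)
```

Comparing such per-class scores across sites is one simple way to quantify the multi-site consistency discussed above.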

2.
Article in English | MEDLINE | ID: mdl-30668499

ABSTRACT

Structural information, in particular the edges present in an image, is the part most readily noticed by the human eye. It is therefore important to denoise this information effectively for better visualization. Recent research has characterized the structural information into plain and edge patches and denoised them separately. However, the geometrical orientation of the edges is not considered, leading to sub-optimal denoising results. This has motivated us to introduce in this paper an adaptive steerable total variation regularizer (ASTV) based on geometric moments. The proposed ASTV regularizer is capable of denoising the edges based on their geometrical orientation, thus boosting the denoising performance. Further, earlier works exploited the sparsity of natural images in the DCT and wavelet domains, which helps improve the denoising performance. Based on this observation, we introduce the sparsity of an image in an orthogonal moment domain, in particular the Tchebichef moment domain. We then propose a new sparse regularizer, which is a combination of the Tchebichef moment-based and ASTV-based regularizers. The overall denoising framework is optimized using a split Bregman-based multivariable minimization technique. Experimental results demonstrate the competitiveness of the proposed method with existing ones in terms of both objective and subjective image quality.
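
A minimal sketch of the underlying idea, using plain isotropic total variation minimized by gradient descent on a smoothed TV term; the ASTV of the paper additionally steers the penalty along the local edge orientation, which this toy version omits entirely:

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.05, iters=100, eps=1e-2):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), where
    TV_eps is the eps-smoothed isotropic total variation."""
    def grad_img(x):
        gx = np.zeros_like(x)
        gy = np.zeros_like(x)
        gx[:, :-1] = x[:, 1:] - x[:, :-1]   # forward differences
        gy[:-1, :] = x[1:, :] - x[:-1, :]
        return gx, gy

    def div(px, py):                        # negative adjoint of grad_img
        d = np.zeros_like(px)
        d[:, 0] = px[:, 0]
        d[:, 1:] = px[:, 1:] - px[:, :-1]
        d[0, :] += py[0, :]
        d[1:, :] += py[1:, :] - py[:-1, :]
        return d

    x = y.astype(float).copy()
    for _ in range(iters):
        gx, gy = grad_img(x)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        x -= step * ((x - y) - lam * div(gx / mag, gy / mag))
    return x

rng = np.random.default_rng(0)
noisy = rng.standard_normal((16, 16))
out = tv_denoise(noisy, lam=0.2)
```

The step size is chosen conservatively below the inverse Lipschitz constant of the smoothed objective, so each iteration decreases the objective and hence the TV of the iterate.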

3.
ISA Trans ; 85: 293-304, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30392726

ABSTRACT

Recently, sparse representation has attracted a great deal of interest in many image processing applications. However, the idea of self-similarity, which is inherently present in an image, has not been considered in standard sparse representation. Moreover, if the dictionary atoms are not constrained to be correlated, the redundancy present in the dictionary may not improve the performance of sparse coding. This paper addresses these issues by using orthogonal moments to extract the correlations among the atoms and group them together by extracting the characteristics of the noisy image patches. Most existing sparsity-based image denoising methods utilize an over-complete dictionary, for example, the K-SVD method, which requires solving a minimization problem that is computationally challenging. In order to improve the computational efficiency and the correlation between the sparse coefficients, this paper employs the concept of overlapping group sparsity formulated for both convex and non-convex denoising frameworks. The optimization method used for solving the denoising framework is the well-known majorization-minimization method, which has been applied successfully in sparse approximation and statistical estimation. Experimental results demonstrate that the proposed method offers, in general, performance that is better than that of existing state-of-the-art methods irrespective of the noise level and the image type.

4.
Article in English | MEDLINE | ID: mdl-30489260

ABSTRACT

OBJECTIVE: Extraction and analysis of various clinically significant features of photoplethysmogram (PPG) signals for monitoring several physiological parameters, as well as for biometric authentication, have become important areas of research in recent years. However, PPG signal compression, particularly quality-guaranteed compression, and the steganography of a patient's secret information are still lagging behind. METHOD: This paper presents a robust, reliable and highly efficient singular value decomposition (SVD) and lossless ASCII character encoding (LL-ACE)-based quality-guaranteed PPG compression algorithm. The algorithm can be used to compress not only PPG signals but also steganographed PPG signals that include the patient information. RESULT AND CONCLUSION: It is worth mentioning that such an algorithm is proposed here for the first time to compress steganographed PPG signals. The algorithm is tested on PPG signals collected from four different databases, and its performance is assessed using both quantitative and qualitative measures. The proposed steganographed PPG compression algorithm provides a compression ratio that is much higher than that of other algorithms designed to compress PPG signals only. SIGNIFICANCE: (1) the clinical quality of the reconstructed PPG signal can be controlled precisely, (2) the patient's personal information is restored with no errors, (3) a high compression ratio is achieved, and (4) the PPG signal reconstruction error depends neither on the steganographic operation nor on the size of the patient information data.

5.
IEEE Trans Biomed Circuits Syst ; 12(1): 137-150, 2018 02.
Article in English | MEDLINE | ID: mdl-29377802

ABSTRACT

Advancements in electronics and miniaturized device fabrication technologies have enabled simultaneous acquisition of multiple biosignals (MBioSigs), but the compression of MBioSigs remains unexplored to date. This paper presents a robust singular value decomposition (SVD) and American standard code for information interchange (ASCII) character encoding-based algorithm for the compression of MBioSigs, to the best of our knowledge for the first time. At the preprocessing stage, MBioSigs are denoised, downsampled and then transformed to a two-dimensional (2-D) data array. SVD of the 2-D array is carried out and the dimensionality of the singular values is reduced. The resulting matrix is then compressed by a lossless ASCII character encoding-based technique. The proposed compression algorithm can be used in a variety of modes, such as lossless, with or without the downsampling operation. The compressed file is then uploaded to a hypertext preprocessor (PHP)-based website for remote monitoring applications. Evaluation results show that the proposed algorithm provides good compression performance; in particular, the mean opinion score of the reconstructed signal falls under the category "very good" as per the gold-standard subjective measure.
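
The rank-reduction stage described above can be sketched as plain truncated SVD of the channels-by-samples array; the lossless ASCII entropy-coding stage of the paper is omitted here, so this is only the lossy core under that assumption:

```python
import numpy as np

def svd_compress(X, k):
    """Keep the k largest singular values of the 2-D data array;
    storage drops from rows*cols values to k*(rows + cols + 1)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

def svd_reconstruct(U, s, Vt):
    return (U * s) @ Vt

rng = np.random.default_rng(0)
# a synthetic rank-3 "channels x samples" array standing in for MBioSigs
X = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 400))
U, s, Vt = svd_compress(X, 3)
```

For this 8x400 array, keeping k=3 components stores 3*(8 + 400 + 1) numbers instead of 3200, and the reconstruction error grows gracefully as k shrinks.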


Subjects
Algorithms, Data Compression/methods
6.
Neural Netw ; 96: 128-136, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28987976

ABSTRACT

This paper presents an algorithm for solving the minimum-energy optimal control problem of conductance-based spiking neurons. The basic procedure is (1) to construct a conductance-based spiking neuron oscillator as an affine nonlinear system, (2) to formulate the optimal control problem of the affine nonlinear system as a boundary value problem based on Pontryagin's maximum principle, and (3) to solve the boundary value problem using the homotopy perturbation method. The construction of the minimum-energy optimal control in the framework of the homotopy perturbation technique is novel and valid for a broad class of nonlinear conductance-based neuron models. The applicability of our method in the FitzHugh-Nagumo and Hindmarsh-Rose models is validated by simulations.
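
One of the two validation models, FitzHugh-Nagumo, is easy to simulate directly; the sketch below uses forward Euler with common textbook parameter values (a=0.7, b=0.8, tau=12.5), which are assumptions rather than the paper's settings, and a constant rather than optimally controlled input current:

```python
import numpy as np

def fitzhugh_nagumo(I, T=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Forward-Euler simulation of the FitzHugh-Nagumo oscillator:
    v' = v - v^3/3 - w + I,  w' = (v + a - b*w)/tau."""
    n = int(T / dt)
    v, w = -1.0, 1.0
    vs = np.empty(n)
    for i in range(n):
        v += dt * (v - v ** 3 / 3 - w + I)
        w += dt * (v + a - b * w) / tau
        vs[i] = v
    return vs

trace = fitzhugh_nagumo(I=0.5)
```

With this drive the resting state is unstable and the membrane variable settles into a spiking limit cycle; an optimal-control formulation would instead shape I(t) over time to reach a target state with minimum energy.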


Subjects
Action Potentials, Models, Neurological, Neurons, Action Potentials/physiology, Algorithms, Neurons/physiology, Nonlinear Dynamics
7.
IEEE Trans Neural Netw Learn Syst ; 28(1): 149-163, 2017 01.
Article in English | MEDLINE | ID: mdl-26685272

ABSTRACT

In this paper, we present a multiclass data classifier, denoted optimal conformal transformation kernel (OCTK), based on learning a specific kernel model, the CTK, and utilize it in two types of image recognition tasks, namely, face recognition and object categorization. We show that the learned CTK can lead to a desirable spatial geometry change in mapping data from the input space to the feature space, so that the local spatial geometry of the heterogeneous regions is magnified to favor more refined distinguishing, while that of the homogeneous regions is compressed to neglect or suppress the intraclass variations. This property of the learned CTK is of great benefit in image recognition, where the images to be classified typically exhibit large intraclass diversity and interclass similarity. Experiments on face recognition and object categorization show that the proposed OCTK classifier achieves the best or second-best recognition result compared with the state-of-the-art classifiers, no matter what kind of feature or feature representation is used. In computational efficiency, the OCTK classifier can perform significantly faster than the linear support vector machine classifier (linear LIBSVM).

8.
Sensors (Basel) ; 16(9)2016 Sep 21.
Article in English | MEDLINE | ID: mdl-27657080

ABSTRACT

The direction of arrival (DOA) estimation problem is formulated in a compressive sensing (CS) framework, and an extended array aperture is presented to increase the number of degrees of freedom of the array. The ordinary least squares adaptable least absolute shrinkage and selection operator (OLS A-LASSO) is applied for the first time to DOA estimation. Furthermore, a new LASSO algorithm, the minimum variance distortionless response (MVDR) A-LASSO, which solves the DOA problem in the CS framework, is presented. The proposed algorithm depends neither on the singular value decomposition nor on the orthogonality of the signal and noise subspaces. Hence, the DOA estimation can be done without a priori knowledge of the number of sources. The proposed algorithm can estimate up to ((M^2 - 2)/2 + M - 1)/2 sources using M sensors without any constraints or assumptions about the nature of the signal sources. Furthermore, the proposed algorithm exhibits performance superior to that of the classical DOA estimation methods, especially for low signal-to-noise ratios (SNR), spatially close sources and coherent scenarios.
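
The CS formulation can be sketched as on-grid sparse recovery: build a steering dictionary over an angle grid and solve the LASSO. The toy below uses plain ISTA on a single noiseless snapshot of a half-wavelength uniform linear array; the paper's MVDR A-LASSO solver and extended aperture are not reproduced, and the source angles are invented for illustration:

```python
import numpy as np

def doa_lasso(y, M, grid_deg, lam=0.1, iters=500):
    """On-grid, single-snapshot sparse DOA spectrum for an M-sensor
    half-wavelength ULA, solved with plain ISTA."""
    theta = np.deg2rad(np.asarray(grid_deg, float))
    steer = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(theta)[None, :])
    L = np.linalg.norm(steer, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(theta.size, dtype=complex)
    for _ in range(iters):
        g = x - steer.conj().T @ (steer @ x - y) / L       # gradient step
        shrink = np.maximum(1.0 - (lam / L) / np.maximum(np.abs(g), 1e-12), 0.0)
        x = g * shrink                                     # complex soft threshold
    return np.abs(x)

M = 8
grid = np.arange(-90, 91)
a = lambda deg: np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))
y = a(-20) + 0.8 * a(30)          # two hypothetical sources at -20 and 30 degrees
spec = doa_lasso(y, M, grid)
```

The recovered spectrum concentrates its energy in the grid bins nearest the true angles, without needing the number of sources in advance.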

9.
BMC Bioinformatics ; 16: 393, 2015 Nov 23.
Article in English | MEDLINE | ID: mdl-26597571

ABSTRACT

BACKGROUND: The alignment of multiple protein sequences is one of the most commonly performed tasks in bioinformatics. In spite of considerable research and efforts that have recently been deployed for improving the performance of multiple sequence alignment (MSA) algorithms, finding a highly accurate alignment between multiple protein sequences is still a challenging problem. RESULTS: We propose a novel and efficient algorithm, called MSAIndelFR, for multiple sequence alignment using the information on the predicted locations of IndelFRs and the computed average log-loss values obtained from IndelFR predictors, each of which is designed for a different protein fold. We demonstrate that the introduction of a new variable gap penalty function based on the predicted locations of the IndelFRs and the computed average log-loss values into the proposed algorithm substantially improves the protein alignment accuracy. This is illustrated by evaluating the performance of the algorithm in aligning sequences belonging to the protein folds for which the IndelFR predictors already exist and by using the reference alignments of four popular benchmarks, BAliBASE 3.0, OXBENCH, PREFAB 4.0, and SABRE (SABmark 1.65). CONCLUSIONS: We have proposed a novel and efficient algorithm, the MSAIndelFR algorithm, for multiple protein sequence alignment incorporating a new variable gap penalty function. It is shown that the performance of the proposed algorithm is superior to that of the most widely used alignment algorithms, Clustal W2, Clustal Omega, Kalign2, MSAProbs, MAFFT, MUSCLE, ProbCons and Probalign, in terms of both the sum-of-pairs and total column metrics.
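
The effect of a variable gap penalty can be shown on a pairwise toy case: a Needleman-Wunsch alignment where the gap cost is position-dependent, so that gaps become cheaper inside a designated region. In MSAIndelFR that region comes from IndelFR predictions; here the per-position costs, scores, and sequences are all invented for illustration:

```python
import numpy as np

def nw_variable_gap(s1, s2, gap_cost, match=2.0, mismatch=-1.0):
    """Needleman-Wunsch global alignment score with a position-dependent
    gap penalty: gap_cost[i] is charged for a gap column involving
    position i of s1."""
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        D[i, 0] = D[i - 1, 0] - gap_cost[i - 1]
    for j in range(1, m + 1):
        D[0, j] = D[0, j - 1] - gap_cost[0]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s1[i - 1] == s2[j - 1] else mismatch
            D[i, j] = max(D[i - 1, j - 1] + sub,          # (mis)match
                          D[i - 1, j] - gap_cost[i - 1],  # gap in s2
                          D[i, j - 1] - gap_cost[i - 1])  # gap in s1
    return D[n, m]

uniform = nw_variable_gap("ACGT", "AGT", [3, 3, 3, 3])     # 3.0
cheap_c = nw_variable_gap("ACGT", "AGT", [3, 0.5, 3, 3])   # 5.5
```

Lowering the cost only at the position where a deletion is actually needed raises the optimal score, which is exactly the mechanism a predicted indel region exploits.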


Subjects
Algorithms, Computational Biology/methods, INDEL Mutation/genetics, Proteins/chemistry, Sequence Alignment/methods, Sequence Analysis, Protein/methods, Humans
10.
Bioinformatics ; 31(1): 40-7, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25178462

ABSTRACT

MOTIVATION: Insertion/deletion (indel) and amino acid substitution are two common events that lead to the evolution of and variations in protein sequences. Further, many of the human diseases and functional divergence between homologous proteins are more related to indel mutations, even though they occur less often than the substitution mutations do. A reliable identification of indels and their flanking regions is a major challenge in research related to protein evolution, structures and functions. RESULTS: In this article, we propose a novel scheme to predict indel flanking regions in a protein sequence for a given protein fold, based on a variable-order Markov model. The proposed indel flanking region (IndelFR) predictors are designed based on prediction by partial match (PPM) and probabilistic suffix tree (PST), which are referred to as the PPM IndelFR and PST IndelFR predictors, respectively. The overall performance evaluation results show that the proposed predictors are able to predict IndelFRs in the protein sequences with a high accuracy and F1 measure. In addition, the results show that if one is interested only in predicting IndelFRs in protein sequences, it would be preferable to use the proposed predictors instead of HMMER 3.0 in view of the substantially superior performance of the former.
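
The building block behind PPM and PST predictors is a context model over sequence symbols. A minimal fixed-order version with add-one smoothing is sketched below; the actual predictors are variable-order refinements (backing off to shorter contexts), and the training string here is a toy, not protein data:

```python
from collections import defaultdict

class MarkovPredictor:
    """Fixed-order Markov model over a sequence alphabet with
    add-one (Laplace) smoothing."""
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, seq):
        for i in range(self.order, len(seq)):
            ctx = seq[i - self.order:i]
            self.counts[ctx][seq[i]] += 1

    def prob(self, ctx, sym, alphabet):
        c = self.counts[ctx]
        return (c[sym] + 1) / (sum(c.values()) + len(alphabet))

m = MarkovPredictor(order=2)
m.train("ABABABABC")
p_a = m.prob("AB", "A", "ABC")   # context "AB" was followed by 'A' 3 of 4 times
p_c = m.prob("AB", "C", "ABC")
```

The average per-symbol log-loss of such a model on a held-out sequence is the quantity the IndelFR predictors threshold to flag indel flanking regions.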


Subjects
Algorithms, Databases, Protein, INDEL Mutation/genetics, Markov Chains, Proteins/genetics, Amino Acid Substitution, Humans, Proteins/chemistry
11.
IEEE Trans Biomed Circuits Syst ; 8(5): 716-28, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25388879

ABSTRACT

This paper presents a method for automatic segmentation of nuclei in phase-contrast images using the intensity, convexity and texture of the nuclei. The proposed method consists of three main stages: preprocessing, h-maxima transformation-based marker-controlled watershed segmentation (h-TMC), and texture analysis. In the preprocessing stage, a top-hat filter is used to increase the contrast and suppress the non-uniform illumination, shading, and other imaging artifacts in the input image. The nuclei segmentation stage consists of a distance transformation, h-maxima transformation and watershed segmentation. These transformations utilize the intensity information and the convexity property of the nucleus for the purpose of detecting a single marker in every nucleus; these markers are then used in the h-TMC watershed algorithm to obtain segments of the nuclei. However, dust particles, imaging artifacts, or prolonged cell cytoplasm may falsely be segmented as nuclei at this stage, and thus may lead to an inaccurate analysis of the cell image. In order to identify and remove these non-nuclei segments, in the third stage a texture analysis is performed that uses six of the Haralick measures along with the AdaBoost algorithm. The novelty of the proposed method is that it introduces a systematic framework that utilizes intensity, convexity, and texture information to achieve a high accuracy for automatic segmentation of nuclei in phase-contrast images. Extensive experiments are performed demonstrating the superior performance (precision = 0.948; recall = 0.924; F1-measure = 0.936; validation based on ∼4850 manually-labeled nuclei) of the proposed method.
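
The h-maxima marker idea can be sketched with greyscale reconstruction: dilate (image - h) under the image until convergence, then keep only the peaks whose height above the reconstruction reaches h. This numpy-only toy (4-neighbour connectivity, synthetic peaks in place of real nuclei) stands in for the marker stage only, not the full watershed:

```python
import numpy as np

def h_maxima_markers(img, h, max_iter=1000):
    """Markers at maxima whose height exceeds h, via greyscale
    reconstruction of (img - h) under img by iterated dilation."""
    rec = img.astype(float) - h
    for _ in range(max_iter):
        dil = rec.copy()
        dil[1:, :] = np.maximum(dil[1:, :], rec[:-1, :])
        dil[:-1, :] = np.maximum(dil[:-1, :], rec[1:, :])
        dil[:, 1:] = np.maximum(dil[:, 1:], rec[:, :-1])
        dil[:, :-1] = np.maximum(dil[:, :-1], rec[:, 1:])
        nxt = np.minimum(dil, img)          # stay under the mask image
        if np.array_equal(nxt, rec):
            break
        rec = nxt
    return (img - rec) >= h - 1e-9          # tops of maxima deeper than h

img = np.zeros((5, 10))
img[2, 2] = 10.0      # a strong nucleus-like peak
img[2, 7] = 3.0       # a shallow artifact peak
markers = h_maxima_markers(img, h=5.0)
```

With h=5 only the strong peak survives as a marker, which is how shallow intensity fluctuations are kept from seeding spurious watershed regions.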


Subjects
Cell Nucleus, Cytological Techniques/methods, Image Processing, Computer-Assisted/methods, Microscopy, Phase-Contrast/methods, Algorithms, Cluster Analysis, HeLa Cells, Humans
12.
IEEE Trans Image Process ; 23(10): 4348-60, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25051554

ABSTRACT

In the past decade, several schemes for digital image watermarking have been proposed to protect the copyright of an image document or to provide proof of ownership in some identifiable fashion. This paper proposes a novel multiplicative watermarking scheme in the contourlet domain. The effectiveness of a watermark detector depends highly on the modeling of the transform-domain coefficients. In view of this, we first investigate the modeling of the contourlet coefficients by the alpha-stable distributions. It is shown that the univariate alpha-stable distribution fits the empirical data more accurately than the formerly used distributions, such as the generalized Gaussian (GG) and Laplacian, do. We also show that the bivariate alpha-stable distribution can capture the across-scale dependencies of the contourlet coefficients. Motivated by the modeling results, a blind watermark detector in the contourlet domain is designed by using the univariate and bivariate alpha-stable distributions. It is shown that the detectors based on both of these distributions provide higher detection rates than that based on the generalized Gaussian distribution does. However, a watermark detector designed based on the alpha-stable distribution with a value of its parameter α other than 1 or 2 is computationally expensive because of the lack of a closed-form expression for the distribution in this case. Therefore, a watermark detector is designed based on the bivariate Cauchy member of the alpha-stable family, for which α = 1. The resulting design yields a detector of significantly reduced complexity and provides a performance that is much superior to that of the GG detector and very close to that of the detector corresponding to the best-fit alpha-stable distribution. The robustness of the proposed bivariate Cauchy detector against various kinds of attacks, such as noise, filtering, and compression, is studied and shown to be superior to that of the generalized Gaussian detector.
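
A univariate version of such a detector is a log-likelihood ratio under a Cauchy coefficient model; the paper's detector is bivariate (modelling across-scale coefficient pairs), so the sketch below is a deliberate simplification, with the watermark, strength, and scale parameter all chosen arbitrarily:

```python
import numpy as np

def cauchy_llr(x, w, g, gamma=1.0):
    """Log-likelihood ratio for a multiplicative watermark
    x = s * (1 + g*w), with s modelled as zero-location Cauchy.
    Positive values favour 'watermark present'."""
    def logf(t):       # log of the Cauchy pdf with scale gamma
        return np.log(gamma / np.pi) - np.log(gamma ** 2 + t ** 2)
    s1 = x / (1.0 + g * w)                       # coefficients under H1
    return np.sum(logf(s1) - np.log1p(g * w) - logf(x))

rng = np.random.default_rng(0)
s = rng.standard_cauchy(5000)                    # stand-in contourlet coefficients
w = rng.choice([-1.0, 1.0], size=5000)           # bipolar watermark
marked = s * (1.0 + 0.2 * w)
```

Comparing the statistic on marked versus unmarked coefficients (against a threshold in practice) gives the blind detection decision.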

13.
Article in English | MEDLINE | ID: mdl-24091400

ABSTRACT

Regulatory interactions among genes and gene products are dynamic processes and hence modeling these processes is of great interest. Since genes work in a cascade of networks, reconstruction of gene regulatory network (GRN) is a crucial process for a thorough understanding of the underlying biological interactions. We present here an approach based on pairwise correlations and lasso to infer the GRN, taking into account the variable time delays between various genes. The proposed method is applied to both synthetic and real data sets, and the results on synthetic data show that the proposed approach outperforms the current methods. Further, the results using real data are more consistent with the existing knowledge concerning the possible gene interactions.
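
The regulator-selection step can be sketched as a lasso regression of a target gene on lagged expression of candidate regulators; coordinate descent is used below in place of whatever solver the paper employs, and the synthetic network (gene 0 driving the target with a one-step delay) is invented for illustration:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent LASSO for min_b 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
T, p = 100, 5
G = rng.standard_normal((T, p))                  # expression of p candidate regulators
y = 2.0 * G[:-1, 0] + 0.1 * rng.standard_normal(T - 1)   # target at t+1, driven by gene 0 at t
X = G[:-1, :]                                    # lag-1 predictors
beta = lasso_cd(X, y, lam=5.0)
```

The l1 penalty zeroes out the spurious regulators, leaving a sparse edge set for the inferred network; scanning over several lags per regulator is the natural extension for variable delays.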


Subjects
Computational Biology/methods, Gene Regulatory Networks/genetics, Oligonucleotide Array Sequence Analysis/methods, Cell Cycle/genetics, HeLa Cells, Humans, Linear Models, Time Factors
14.
Article in English | MEDLINE | ID: mdl-24110594

ABSTRACT

A neuron transforms information via a complex interaction between its previous states, its intrinsic properties, and the synaptic input it receives from other neurons. Inferring the synaptic input of a neuron only from its membrane potential (output), which contains both sub-threshold and action potentials, can effectively elucidate the information processing mechanism of the neuron. The term blind deconvolution of the Hodgkin-Huxley (HH) neuronal model is coined, for the first time in this paper, to address the problem of reconstructing the hidden dynamics and synaptic input of a single neuron modeled by the HH model, as well as estimating its intrinsic parameters, only from a single trace of noisy membrane potential. The blind deconvolution is accomplished via a recursive algorithm whose iterations consist of running an extended Kalman filter followed by the expectation maximization (EM) algorithm. The accuracy and robustness of the proposed algorithm have been demonstrated by our simulations. The capability of the proposed algorithm makes it particularly useful for understanding the neural coding mechanism of a neuron.
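
The filtering core is easiest to see in the linear-Gaussian case; the extended Kalman filter of the paper additionally linearizes the nonlinear HH dynamics at each step, which this scalar sketch (with invented AR(1) dynamics and noise levels) omits:

```python
import numpy as np

def kalman_filter(y, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w_t, y_t = x_t + v_t,
    with w ~ N(0, q) and v ~ N(0, r)."""
    x, P = x0, p0
    xs = np.empty(len(y))
    for t, yt in enumerate(y):
        x, P = a * x, a * a * P + q        # predict
        K = P / (P + r)                    # Kalman gain
        x = x + K * (yt - x)               # update with the innovation
        P = (1 - K) * P
        xs[t] = x
    return xs

rng = np.random.default_rng(0)
n, a, q, r = 500, 0.95, 0.1, 1.0
x_true = np.zeros(n)
for t in range(1, n):
    x_true[t] = a * x_true[t - 1] + np.sqrt(q) * rng.standard_normal()
y = x_true + np.sqrt(r) * rng.standard_normal(n)
x_hat = kalman_filter(y, a, q, r)
```

In the EM outer loop, the filtered (and smoothed) state estimates are used to re-estimate a, q, and r, mirroring how the paper alternates filtering with parameter estimation.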


Subjects
Models, Neurological, Neurons/physiology, Action Potentials/physiology, Algorithms, Computer Simulation, Humans, Synaptic Transmission
15.
Front Comput Neurosci ; 7: 109, 2013.
Article in English | MEDLINE | ID: mdl-24027523

ABSTRACT

Time-varying excitatory and inhibitory synaptic inputs govern activity of neurons and process information in the brain. The importance of trial-to-trial fluctuations of synaptic inputs has recently been investigated in neuroscience. Such fluctuations are ignored in the most conventional techniques because they are removed when trials are averaged during linear regression techniques. Here, we propose a novel recursive algorithm based on Gaussian mixture Kalman filtering (GMKF) for estimating time-varying excitatory and inhibitory synaptic inputs from single trials of noisy membrane potential in current clamp recordings. The KF is followed by an expectation maximization (EM) algorithm to infer the statistical parameters (time-varying mean and variance) of the synaptic inputs in a non-parametric manner. As our proposed algorithm is repeated recursively, the inferred parameters of the mixtures are used to initiate the next iteration. Unlike other recent algorithms, our algorithm does not assume an a priori distribution from which the synaptic inputs are generated. Instead, the algorithm recursively estimates such a distribution by fitting a Gaussian mixture model (GMM). The performance of the proposed algorithms is compared to a previously proposed particle filter (PF)-based algorithm (Paninski et al., 2012) with several illustrative examples, assuming that the distribution of synaptic input is unknown. If noise is small, the performance of our algorithms is similar to that of the previous one. However, if noise is large, they can significantly outperform the previous proposal. These promising results suggest that our algorithm is a robust and efficient technique for estimating time-varying excitatory and inhibitory synaptic conductances from single trials of membrane potential recordings.

16.
IEEE Trans Image Process ; 22(12): 5085-95, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24002000

ABSTRACT

The number of projections is a critical factor in tomographic imaging. The larger the number, the better the quality of the reconstructed image; however, it increases the radiation dose delivered to the patient. Therefore, it is important to keep the number of projections as small as possible. Traditionally, the projections are taken by moving the x-ray source around the patient at uniform angular steps. Taking projections at nonuniform steps may result in better images as compared with that obtained using uniform projections. This paper describes two different approaches that adjust the step size to adaptively select the angle of projections. The first one is based on the spectral richness of the acquired projections and the second relies on the amount of new information added by successive projections. The superior performance of the two proposed methods over the uniform projection scheme is demonstrated through simulation results using both phantom and real images.


Subjects
Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Algorithms, Computer Simulation, Head/diagnostic imaging, Humans, Phantoms, Imaging, Reproducibility of Results
17.
IEEE Trans Biomed Eng ; 59(7): 1871-81, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22434792

ABSTRACT

In this paper, a new seizure detection system aimed at assisting in a rapid review of prolonged intracerebral EEG recordings is described. It is based on quantifying the sharpness of the waveform, one of the most important electrographic EEG features utilized by experts for an accurate and reliable identification of a seizure. The waveform morphology is characterized by a measure of sharpness defined by the slope of the half-waves. A train of abnormally sharp waves resulting from subsequent filtering is used to identify seizures. The method was optimized using 145 h of single-channel depth EEG from seven patients, and tested on another 158 h of single-channel depth EEG from another seven patients. Additionally, 725 h of depth EEG from 21 patients were utilized to assess the system performance in a multichannel configuration. The single-channel test data resulted in a sensitivity of 87% and a specificity of 71%. The multichannel test data yielded a sensitivity of 81% and a specificity of 58.9%. The new system detected a wide range of seizure patterns, including rhythmic and nonrhythmic seizures of varying length, as well as those missed by the experts. We also compare the proposed system with a popular commercial system.
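
The half-wave slope measure can be sketched directly: split the signal at its local extrema and take amplitude over duration for each segment. The synthetic sinusoids below stand in for EEG, and any detection threshold on the slopes is application-specific rather than taken from the paper:

```python
import numpy as np

def half_wave_slopes(x, fs):
    """Slopes (amplitude/duration) of the half-waves of a 1-D signal,
    i.e. the segments between consecutive local extrema; sharp waves
    yield large absolute slopes."""
    d = np.diff(x)
    ext = np.where(np.diff(np.sign(d)) != 0)[0] + 1   # sign changes = extrema
    ext = np.concatenate(([0], ext, [len(x) - 1]))
    amp = np.diff(x[ext])
    dur = np.diff(ext) / fs
    return amp / dur

fs = 200.0
t = np.arange(0, 1, 1 / fs)
slow = np.sin(2 * np.pi * 3 * t)     # background-like slow rhythm
fast = np.sin(2 * np.pi * 25 * t)    # sharper, seizure-like rhythm
```

For a sinusoid of frequency f the half-wave slope scales with f, so thresholding a running train of large slopes separates sharp activity from slow background.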


Subjects
Electroencephalography/methods, Pattern Recognition, Automated/methods, Seizures/diagnosis, Databases, Factual, Electroencephalography/classification, Epilepsy/diagnosis, Epilepsy/physiopathology, Humans, ROC Curve, Seizures/physiopathology, Sensitivity and Specificity
18.
IEEE Trans Biomed Eng ; 59(5): 1419-28, 2012 May.
Article in English | MEDLINE | ID: mdl-22361656

ABSTRACT

This paper presents a novel model-based patient-specific method for automatic detection of seizures in intracranial EEG recordings. The proposed method overcomes the complexities in the practical implementation of the patient-specific approach to seizure detection. The method builds a seizure model (a set of basis functions) for an a priori known seizure (the template seizure pattern), and uses statistically optimal null filters as a building block for the detection of similar seizures. The process of modeling the template seizure is fully automatic. Overall, the detection method involves the segmentation of the template seizure pattern, rejection of the redundant and noisy segments, extraction of features from the segments to generate a set of models, selection of the best seizure model, and training of the classifier. The trained classifier is used to detect similar seizures in the remaining data. The resulting seizure detection method was evaluated on a total of 304 h of single-channel depth EEG recordings from 14 patients. The system performance is further compared to the Qu-Gotman patient-specific system using the same data. A significant improvement of the proposed system, in terms of specificity, is observed over the compared method.


Subjects
Electroencephalography/methods, Models, Neurological, Seizures/diagnosis, Signal Processing, Computer-Assisted, Electrodes, Implanted, Humans, Least-Squares Analysis, Seizures/physiopathology, Sensitivity and Specificity, Signal-to-Noise Ratio
19.
Article in English | MEDLINE | ID: mdl-21464508

ABSTRACT

The regulation of gene expression is a dynamic process; hence, it is of vital interest to identify and characterize changes in gene expression over time. We present here a general statistical method, based on repeated measures (RM) ANOVA, for detecting changes in microarray expression over time within a single biological group. In this method, unlike the classical F-statistic, statistical significance is determined taking into account the time dependency of the microarray data. A correction factor for this RM F-statistic is introduced, leading to higher sensitivity as well as high specificity. We investigate the two approaches that exist in the literature for calculating the p-values using resampling techniques: gene-wise p-values and pooled p-values. It is shown that the pooled p-values method is more powerful and computationally less expensive than the gene-wise p-values method, and it is therefore applied, along with the introduced correction factor, to various synthetic data sets and a real data set. These results show that the proposed technique outperforms the current methods. The real data set results are consistent with the existing knowledge concerning the presence of the genes. The algorithms presented are implemented in R and are freely available upon request.
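
The uncorrected RM F-statistic itself is a short computation on a subjects-by-timepoints matrix: the time effect is tested against the subject-by-time residual after removing between-subject variation. The paper's correction factor and resampling-based p-values are not reproduced here, and the synthetic data are invented:

```python
import numpy as np

def rm_f_statistic(X):
    """Classical repeated-measures F for a (subjects x timepoints)
    expression matrix, without the paper's time-dependency correction."""
    n, k = X.shape
    grand = X.mean()
    ss_time = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((X - grand) ** 2).sum() - ss_time - ss_subj
    return (ss_time / (k - 1)) / (ss_err / ((n - 1) * (k - 1)))

rng = np.random.default_rng(0)
noise = rng.standard_normal((8, 5))              # a gene with no time effect
trend = noise + np.linspace(0.0, 3.0, 5)         # a gene changing over time
```

Genes with a genuine temporal profile yield a much larger F than flat genes, which is the quantity the correction factor then adjusts for correlated timepoints.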


Subjects
Computational Biology/methods, Gene Expression Profiling/methods, Oligonucleotide Array Sequence Analysis/methods, Algorithms, Analysis of Variance, Sensitivity and Specificity
20.
IEEE Trans Biomed Eng ; 58(6): 1637-47, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21278009

ABSTRACT

This paper presents a vision-based method for automatic tracking of biological cells in time-lapse microscopy by combining the motion features with the topological features of the cells. The automation of tracking frequently faces problems of segmentation error and of finding the correct cell correspondence in consecutive frames, since the cells are of varying size and shape and may have uneven movement; these problems become more acute when the cell population is very high. To reduce the segmentation error, we introduce a cell-detection method based on the h-maxima transformation, followed by the fitting of an ellipse to the nucleus shape. To find the correct correspondence between the detected cells, the topological features, namely, color compatibility, area overlap and deformation, are combined with the motion features of skewness and displacement. This reduces the ambiguity of matching and accurately constructs the trajectories of the cell proliferation. Finally, a template-matching-based backward tracking procedure is employed to recover any break in a cell trajectory that may occur due to segmentation errors or the presence of a mitosis. The tracking procedure is tested using a number of different cell sequences with nonuniform illumination or uneven cell motion, and is shown to provide high accuracy in both the detection and the tracking of the cells.
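
Frame-to-frame correspondence with combined cues can be sketched as a cost matrix plus a greedy assignment; this toy uses only centroid displacement and area change with invented weights, whereas the paper combines more cues (color compatibility, overlap, deformation, skewness) and a backward-tracking repair pass:

```python
import numpy as np

def match_cells(prev, curr, w_dist=1.0, w_area=0.5):
    """Greedy one-to-one matching of cells between two frames.
    Each cell is an (x, y, area) tuple; the cost mixes a motion cue
    (centroid distance) and a topological cue (area difference)."""
    prev = np.asarray(prev, float)
    curr = np.asarray(curr, float)
    cost = (w_dist * np.hypot(prev[:, None, 0] - curr[None, :, 0],
                              prev[:, None, 1] - curr[None, :, 1])
            + w_area * np.abs(prev[:, None, 2] - curr[None, :, 2]))
    pairs, used = [], set()
    for i in np.argsort(cost.min(axis=1)):      # most confident rows first
        for j in np.argsort(cost[i]):
            if j not in used:
                pairs.append((int(i), int(j)))
                used.add(j)
                break
    return sorted(pairs)

prev = [(0, 0, 50), (10, 10, 80)]
curr = [(9, 11, 82), (1, 0, 49)]
matches = match_cells(prev, curr)   # → [(0, 1), (1, 0)]
```

An optimal assignment (e.g. the Hungarian algorithm) is the usual upgrade over this greedy pass when cell density is high.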


Subjects
Cell Tracking/methods, Image Processing, Computer-Assisted/methods, Time-Lapse Imaging/methods, Animals, Cell Nucleus/physiology, HeLa Cells, Humans, Mice, Microscopy, Phase-Contrast, Mitosis/physiology