1.
J Med Imaging (Bellingham); 4(2): 027501, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28653016

ABSTRACT

Glandular structural features are important for the tumor pathologist in the assessment of cancer malignancy of prostate tissue slides. The varying shapes and sizes of glands, combined with the tedious manual observation task, can result in inaccurate assessment. There are also discrepancies and low levels of agreement among pathologists, especially in cases of Gleason pattern 3 and pattern 4 prostate adenocarcinoma. An automated gland segmentation system can highlight various glandular shapes and structures for further analysis by the pathologist, and these objectively highlighted patterns can help reduce assessment variability. We propose such an automated gland segmentation system. Forty-three hematoxylin and eosin-stained images were acquired from prostate cancer tissue slides and manually annotated for gland, lumen, periacinar retraction clefting, and stroma regions. Our system was trained on these manual annotations. It identifies these regions using a combination of pixel- and object-level classifiers, incorporating local and spatial information to consolidate pixel-level classification results into object-level segmentation. Experimental results show that our method outperforms various texture- and gland-structure-based segmentation algorithms in the literature. It performs well and is a promising tool to help decrease interobserver variability among pathologists.
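
A minimal sketch of the two-stage pixel/object pipeline described above, with illustrative features and a hypothetical label map rather than the authors' implementation:

```python
# Sketch of a pixel-then-object classification pipeline in the spirit of the abstract;
# feature choices, window size, and class labels are illustrative assumptions only.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

CLASSES = {0: "stroma", 1: "lumen", 2: "gland", 3: "clefting"}  # hypothetical label map

def pixel_features(img, win=9):
    """Per-pixel colour plus local mean/variance as a stand-in for richer texture features."""
    feats = [img[..., c].astype(float) for c in range(img.shape[-1])]
    gray = img.mean(axis=-1)
    feats.append(ndimage.uniform_filter(gray, win))                                   # local mean
    feats.append(ndimage.uniform_filter(gray**2, win) - ndimage.uniform_filter(gray, win)**2)
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def segment(img, pixel_clf, object_clf):
    h, w = img.shape[:2]
    pixel_labels = pixel_clf.predict(pixel_features(img)).astype(int).reshape(h, w)
    # Consolidate pixel predictions into candidate objects (connected non-stroma components).
    objects, n = ndimage.label(pixel_labels != 0)
    for obj_id in range(1, n + 1):
        mask = objects == obj_id
        # Object-level features: size plus the histogram of pixel-level classes inside the object.
        hist = np.bincount(pixel_labels[mask], minlength=len(CLASSES)) / mask.sum()
        feats = np.concatenate(([mask.sum()], hist))
        pixel_labels[mask] = object_clf.predict(feats[None, :])[0]
    return pixel_labels

# pixel_clf = RandomForestClassifier().fit(Xpix, ypix), and object_clf likewise,
# both trained from the manual annotations described in the abstract.
```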

2.
Comput Med Imaging Graph; 57: 40-49, 2017 04.
Article in English | MEDLINE | ID: mdl-27544932

ABSTRACT

Autoimmune diseases (ADs) are abnormal responses of the body's immune system to healthy tissues, and their incidence has generally been increasing. Efficient computer-aided diagnosis of ADs through classification of human epithelial type 2 (HEp-2) cells is therefore beneficial: it lowers diagnosis costs and offers faster response and better diagnostic repeatability. In this paper, we present an automated HEp-2 cell image classification technique that exploits sparse coding of visual features together with the Bag of Words model (SBoW). In particular, SURF (Speeded Up Robust Features) and SIFT (Scale-Invariant Feature Transform) descriptors are integrated so that they work in a complementary fashion, which greatly improves cell classification accuracy. Additionally, a hierarchical max-pooling method is proposed to aggregate the local sparse codes in different layers into the final feature vector. Furthermore, various parameters of the dictionary learning, including the dictionary size, the number of learning iterations, and the pooling strategy, are also investigated. Experiments conducted on publicly available datasets show that the proposed technique clearly outperforms state-of-the-art techniques at both the cell and specimen levels.
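
A rough sketch of the sparse-coded bag-of-words idea, assuming the SIFT/SURF descriptors are extracted elsewhere and simplifying the hierarchical pooling to a single global max pool:

```python
# Minimal sketch of sparse-coded bag-of-words with max pooling; dictionary size and
# sparsity level are illustrative, and the hierarchical pooling of the paper is
# collapsed here into one image-level max pool.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

def train_dictionary(all_descriptors, n_atoms=256, sparsity=5):
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=sparsity)
    return dico.fit(all_descriptors)           # rows = local descriptors pooled over training images

def encode_image(dico, descriptors):
    codes = dico.transform(descriptors)        # one sparse code per local descriptor
    return np.abs(codes).max(axis=0)           # max pooling over the whole image

# Two dictionaries (SIFT and SURF) concatenated into one feature vector per image:
# dico_sift = train_dictionary(np.vstack(sift_train)); dico_surf likewise
# X = np.array([np.concatenate([encode_image(dico_sift, s), encode_image(dico_surf, u)])
#               for s, u in zip(sift_per_image, surf_per_image)])
# clf = LinearSVC().fit(X, labels)
```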


Subjects
Autoimmune Diseases/diagnostic imaging, Autoimmune Diseases/pathology, Diagnosis, Computer-Assisted/methods, Epithelial Cells/classification, Epithelial Cells/pathology, Humans
3.
IEEE Trans Image Process; 25(12): 5622-5634, 2016 12.
Article in English | MEDLINE | ID: mdl-27623587

ABSTRACT

Text recognition in video and natural scene images has gained significant attention in image processing for many computer vision applications, and it is much more challenging than recognition in plain-background images. In this paper, we aim to restore complete character contours in video/scene images from gray values, in contrast to conventional techniques that take edge images or binary information as inputs for text detection and recognition. We explore and utilize the strengths of zero-crossing points given by the Laplacian to identify stroke candidate pixels (SPC). For each SPC pair, we propose new symmetry features based on gradient magnitude and Fourier phase angles to identify probable stroke candidate pairs (PSCP). The same symmetry properties are then applied at the PSCP level to choose seed stroke candidate pairs (SSCP). Finally, an iterative algorithm is proposed for the SSCP to restore complete character contours. Experimental results on benchmark databases, namely the ICDAR family of video and natural scene datasets, Street View Data, and the MSRA datasets, show that the proposed technique outperforms existing techniques in terms of both quality measures and recognition rate. We also show that character contour restoration is effective for text detection in video and natural scene images.
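
A small sketch of the first step only — marking zero crossings of the Laplacian as stroke candidates; the smoothing scale and gradient threshold are illustrative choices, not the paper's values:

```python
# Stroke-candidate step: zero crossings of the Gaussian-smoothed Laplacian,
# kept only where the gradient magnitude is strong enough.
import numpy as np
from scipy import ndimage

def stroke_candidate_pixels(gray, sigma=1.0, grad_thresh=10.0):
    gray = gray.astype(float)
    lap = ndimage.gaussian_laplace(gray, sigma)
    # Zero crossing: the Laplacian changes sign against the right or lower neighbour.
    zc = np.zeros(gray.shape, dtype=bool)
    zc[:, :-1] |= lap[:, :-1] * lap[:, 1:] < 0
    zc[:-1, :] |= lap[:-1, :] * lap[1:, :] < 0
    gx, gy = ndimage.sobel(gray, axis=1), ndimage.sobel(gray, axis=0)
    grad = np.hypot(gx, gy)
    return zc & (grad > grad_thresh)   # candidate pixels likely to lie on character strokes
```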

4.
IEEE Trans Image Process; 24(11): 4488-501, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26259083

ABSTRACT

Scene text detection in video as well as natural scene images is challenging due to variations in background, contrast, text type, font type, font size, and so on. Arbitrary orientations of text in multiple scripts add further complexity. The proposed approach introduces a new idea of convolving the Laplacian with wavelet sub-bands at different levels in the frequency domain to enhance low-resolution text pixels. The results obtained from the different (spectral) sub-bands are then fused for detecting candidate text pixels. We explore maximally stable extremal regions along with the stroke width transform for detecting candidate text regions. Text alignment is performed based on the distance between nearest-neighbor clusters of candidate text regions. In addition, the approach presents a new symmetry-driven nearest-neighbor criterion for restoring full text lines. We conduct experiments on our collected video data as well as several benchmark datasets, such as ICDAR 2011, ICDAR 2013, and MSRA-TD500, and compare the proposed approach with state-of-the-art methods to show its superiority over existing methods.
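
One plausible reading of the enhancement-and-fusion step, sketched with an off-the-shelf wavelet; the wavelet choice, level count, and averaging fusion rule are all assumptions:

```python
# Laplacian filtering of wavelet detail sub-bands in the frequency domain,
# with fusion by averaging the per-level reconstructions.
import numpy as np
import pywt

def freq_laplacian(band):
    """Multiply a sub-band by the Laplacian frequency response -(u^2 + v^2)."""
    u = np.fft.fftfreq(band.shape[0])[:, None]
    v = np.fft.fftfreq(band.shape[1])[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(band) * -(u**2 + v**2)))

def enhance_text_pixels(gray, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    fused = np.zeros(gray.shape, dtype=float)
    for i in range(1, len(coeffs)):                          # one pass per decomposition level
        enhanced = [coeffs[0] * 0] + [tuple(c * 0 for c in t) for t in coeffs[1:]]
        enhanced[i] = tuple(freq_laplacian(b) for b in coeffs[i])
        rec = pywt.waverec2(enhanced, wavelet)               # bring this level back to image scale
        fused += np.abs(rec[:gray.shape[0], :gray.shape[1]])
    return fused / (len(coeffs) - 1)                         # high response = candidate text pixel
```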

5.
Comput Med Imaging Graph; 38(1): 1-14, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24332442

ABSTRACT

Brain midline shift (MLS) is a significant factor in brain CT diagnosis. In this paper, we present a new method for automatically detecting and quantifying brain midline shift in CT images of traumatic brain injury. The proposed method automatically picks out the CT slice on which the midline shift can be observed most clearly and uses automatically detected anatomical markers to delineate the deformed midline and quantify the shift. For each anatomical marker, the detector generates five candidate points; the best candidate for each marker is then selected based on the statistical distribution of features characterizing the spatial relationships among the markers. Experiments show that the proposed method outperforms previous methods, especially in cases of large intracerebral hemorrhage and missing ventricles. A brain CT retrieval system has also been developed based on the midline shift quantification results.
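
A sketch of the candidate-selection idea: with only five candidates per marker, the combination whose pairwise distances best fit the learned statistics can be found by brute force. The marker names and the Gaussian distance model are assumptions:

```python
# Pick one candidate point per anatomical marker by scoring pairwise distances
# against learned (mean, std) statistics; exhaustive search over all combinations.
import itertools
import numpy as np

def pick_candidates(candidates, pair_stats):
    """candidates: {marker: [(x, y), ...]}; pair_stats: {(m1, m2): (mean_dist, std_dist)}."""
    markers = list(candidates)
    best, best_score = None, -np.inf
    for combo in itertools.product(*(candidates[m] for m in markers)):
        pts = dict(zip(markers, combo))
        score = 0.0
        for (m1, m2), (mu, sd) in pair_stats.items():
            d = np.hypot(pts[m1][0] - pts[m2][0], pts[m1][1] - pts[m2][1])
            score += -0.5 * ((d - mu) / sd) ** 2    # Gaussian log-likelihood up to a constant
        if score > best_score:
            best, best_score = pts, score
    return best
```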


Subjects
Anatomic Landmarks/diagnostic imaging, Brain Hemorrhage, Traumatic/diagnostic imaging, Brain/diagnostic imaging, Pattern Recognition, Automated/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Algorithms, Humans, Radiographic Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
6.
Article in English | MEDLINE | ID: mdl-25571541

ABSTRACT

With the prevalence of brain-related diseases such as Alzheimer's disease in an increasingly ageing population, connectomics, the study of connections between neurons of the human brain, has emerged as a novel and challenging research topic. Accurate and fully automatic algorithms are needed to deal with the increasing amount of data from brain images. This paper presents an automatic 3D neuron reconstruction technique in which neurons within each slice image are first segmented and then linked across multiple slices of the publicly available electron microscopy dataset SNEMI3D. First, a random forest classifier is applied on top of superpixels for neuron segmentation within each slice image. The maximum overlap between two consecutive images is then calculated for neuron linking, where the adjacency matrix of the two different labelings of the segments is used to distinguish neuron merging from splitting. Experiments on the SNEMI3D dataset show that the proposed technique is efficient and accurate.
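
A small sketch of the linking step, assuming the per-slice segmentation (e.g. a random forest over superpixels) has already produced integer label images:

```python
# Link segments between consecutive slices by maximum label overlap; the overlap
# (adjacency) matrix also exposes merges and splits. Background is assumed to be label 0.
import numpy as np

def overlap_matrix(labels_a, labels_b):
    """Counts of co-occurring labels between two consecutive slice segmentations."""
    na, nb = labels_a.max() + 1, labels_b.max() + 1
    m = np.zeros((na, nb), dtype=int)
    np.add.at(m, (labels_a.ravel(), labels_b.ravel()), 1)
    return m

def link_slices(labels_a, labels_b, min_overlap=20):
    m = overlap_matrix(labels_a, labels_b)
    links = {a: int(np.argmax(m[a])) for a in range(1, m.shape[0]) if m[a].max() >= min_overlap}
    # A label in slice b claimed by several labels in slice a indicates a merge;
    # a label in slice a with several large overlaps in slice b indicates a split.
    return links
```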


Subjects
Alzheimer Disease/diagnosis, Imaging, Three-Dimensional, Microscopy, Electron, Neurons/ultrastructure, Algorithms, Alzheimer Disease/pathology, Brain/ultrastructure, Humans, Image Interpretation, Computer-Assisted, Prevalence
7.
Stud Health Technol Inform; 192: 739-43, 2013.
Article in English | MEDLINE | ID: mdl-23920655

ABSTRACT

We introduce an automated pathology classification system for medical volumetric brain image slices. Existing work often relies on handcrafted features extracted from automatic image segmentation, which is not only a challenging and time-consuming process but may also limit the adaptability and robustness of the system. We propose a novel approach that combines sparse Gabor-feature-based classifiers in an ensemble classification framework. The unsupervised nature of this non-parametric technique can significantly reduce the time and effort for system calibration. In particular, classification of medical images in this framework relies neither on segmentation nor on semantic- or annotation-based feature selection. Our experiments show very promising results in classifying computed tomography image slices into pathological classes for traumatic brain injury patients.
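
A sketch of the feature side only: a Gabor filter bank pooled into a per-slice feature vector. The frequencies and orientations are illustrative, and the random forest shown is a generic stand-in, not the paper's unsupervised ensemble:

```python
# Gabor filter bank over a CT slice, pooled per filter into a compact feature vector.
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_features(slice_img, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, _ = gabor(slice_img, frequency=f, theta=k * np.pi / n_orient)
            feats.extend([np.abs(real).mean(), np.abs(real).max()])   # simple pooling per filter
    return np.array(feats)

# X = np.array([gabor_features(s) for s in training_slices]); clf = RandomForestClassifier().fit(X, y)
```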


Subjects
Algorithms, Artificial Intelligence, Brain Injuries/diagnostic imaging, Pattern Recognition, Automated/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Subtraction Technique, Tomography, X-Ray Computed/methods, Humans, Radiographic Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
8.
IEEE Trans Image Process; 22(4): 1408-17, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23221822

ABSTRACT

Segmentation of text from badly degraded document images is a very challenging task due to the high inter- and intra-variation between the document background and the foreground text across different document images. In this paper, we propose a novel document image binarization technique that addresses these issues by using adaptive image contrast. The adaptive image contrast is a combination of the local image contrast and the local image gradient that is tolerant to the text and background variation caused by different types of document degradation. In the proposed technique, an adaptive contrast map is first constructed for an input degraded document image. The contrast map is then binarized and combined with Canny's edge map to identify the text stroke edge pixels. The document text is further segmented by a local threshold that is estimated from the intensities of the detected text stroke edge pixels within a local window. The proposed method is simple, robust, and involves minimal parameter tuning. It has been tested on three public datasets used in the recent Document Image Binarization Contests (DIBCO 2009 and 2011) and Handwritten-DIBCO 2010, achieving accuracies of 93.5%, 87.8%, and 92.03%, respectively, which are significantly higher than, or close to, those of the best-performing methods reported in the three contests. Experiments on the Bickley diary dataset, which consists of several challenging low-quality document images, also show the superior performance of the proposed method compared with other techniques.
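
A compact sketch of the described flow, with the window size, contrast/gradient weighting, and Otsu binarization of the contrast map as illustrative choices rather than the paper's exact settings:

```python
# Adaptive-contrast binarization sketch: local contrast combined with the local gradient,
# intersected with Canny edges to find stroke-edge pixels, then a per-window threshold
# taken as the mean intensity of those pixels. Assumes an 8-bit grayscale input.
import numpy as np
from scipy import ndimage
from skimage.feature import canny
from skimage.filters import threshold_otsu

def binarize(gray, win=25, alpha=0.5):
    g = gray.astype(float)
    lo, hi = ndimage.minimum_filter(g, win), ndimage.maximum_filter(g, win)
    contrast = (hi - lo) / (hi + lo + 1e-6)                       # local image contrast
    grad = np.hypot(ndimage.sobel(g, 0), ndimage.sobel(g, 1))
    adaptive = alpha * contrast + (1 - alpha) * grad / (grad.max() + 1e-6)
    edge_pixels = (adaptive > threshold_otsu(adaptive)) & canny(g / 255.0)
    # Local threshold: mean intensity of detected stroke-edge pixels within the window.
    num = ndimage.uniform_filter(np.where(edge_pixels, g, 0.0), win)
    den = ndimage.uniform_filter(edge_pixels.astype(float), win) + 1e-6
    return g <= num / den                                          # True = text (dark foreground)
```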

9.
IEEE Trans Pattern Anal Mach Intell; 33(10): 2039-50, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21321366

ABSTRACT

Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
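
One concrete way to realize the construction described above, under the assumption that an orthonormal compactly supported wavelet (here Daubechies-2) is used, so the L2 inner product of the approximations equals the dot product of the coefficients:

```python
# Wavelet kernel sketch: read each (suitably ordered) feature vector as an evenly
# spaced signal, expand it in a compactly supported orthonormal wavelet basis, and
# take the inner product of the coefficient vectors as the kernel value.
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_coeffs(x, wavelet="db2", level=3):
    return np.concatenate(pywt.wavedec(np.asarray(x, float), wavelet, level=level))

def wavelet_kernel(X, Y, wavelet="db2", level=3):
    Cx = np.array([wavelet_coeffs(x, wavelet, level) for x in X])
    Cy = np.array([wavelet_coeffs(y, wavelet, level) for y in Y])
    return Cx @ Cy.T

# Usage with a precomputed kernel:
# K = wavelet_kernel(X_train, X_train); clf = SVC(kernel="precomputed").fit(K, y_train)
# predictions = clf.predict(wavelet_kernel(X_test, X_train))
```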

10.
IEEE Trans Pattern Anal Mach Intell; 33(2): 412-9, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20733217

ABSTRACT

In this paper, we propose a method based on the Laplacian in the frequency domain for video text detection. Unlike many other approaches which assume that text is horizontally-oriented, our method is able to handle text of arbitrary orientation. The input image is first filtered with Fourier-Laplacian. K-means clustering is then used to identify candidate text regions based on the maximum difference. The skeleton of each connected component helps to separate the different text strings from each other. Finally, text string straightness and edge density are used for false positive elimination. Experimental results show that the proposed method is able to handle graphics text and scene text of both horizontal and nonhorizontal orientation.
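
A sketch of the first two stages only (Fourier–Laplacian filtering and K-means on maximum-difference values); the local window used for the maximum difference is an illustrative assumption:

```python
# Fourier-Laplacian filtering followed by 2-class K-means to pick candidate text pixels.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def fourier_laplacian(gray):
    u = np.fft.fftfreq(gray.shape[0])[:, None]
    v = np.fft.fftfreq(gray.shape[1])[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(gray.astype(float)) * -(u**2 + v**2)))

def candidate_text_mask(gray, win=15):
    f = fourier_laplacian(gray)
    max_diff = ndimage.maximum_filter(f, win) - ndimage.minimum_filter(f, win)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(max_diff.reshape(-1, 1))
    labels = labels.reshape(gray.shape)
    text_cluster = np.argmax([max_diff[labels == k].mean() for k in (0, 1)])
    return labels == text_cluster     # cluster with the larger mean difference = candidate text
```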

11.
IEEE Trans Pattern Anal Mach Intell; 32(4): 755-62, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20224129

ABSTRACT

A common problem encountered in recognizing real-scene symbols is perspective deformation. In this paper, a recognition method resistant to perspective deformation is proposed, based on the Cross-Ratio Spectrum descriptor. The method shows good resistance to severe perspective deformation and good discriminating power for similar symbols.
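
The projective invariant underlying the descriptor is the cross ratio of four collinear points, which is preserved under perspective transformation; a minimal worked example:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC * BD) / (BC * AD) for four collinear 2-D points."""
    dist = lambda p, q: np.hypot(*(np.subtract(p, q)))
    return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))

# Scanning such ratios over sampled contour points yields a spectrum that changes
# little under perspective deformation, which is what the recognizer matches against.
print(cross_ratio((0, 0), (1, 0), (3, 0), (4, 0)))   # (3 * 3) / (2 * 4) = 1.125
```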

12.
J Biomed Inform; 42(5): 866-72, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19616641

ABSTRACT

Automatically detecting protein-protein interaction (PPI)-relevant articles is a crucial step for large-scale biological database curation. Previous work adopted POS tagging, shallow parsing, and sentence splitting techniques, but these achieved worse performance than the simple bag-of-words representation. In this paper, we generate and investigate multiple types of feature representations in order to further improve the performance of the PPI text classification task. Besides the traditional domain-independent bag-of-words approach and term weighting methods, we also explore other domain-dependent features, i.e., protein-protein interaction trigger keywords, protein named entities, and more advanced ways of incorporating Natural Language Processing (NLP) output. The integration of these multiple features has been evaluated on the BioCreAtIvE II corpus. The experimental results show that both the advanced use of NLP output and the integration of bag-of-words and NLP output improve the performance of text classification. Specifically, in comparison with the best performance achieved in the BioCreAtIvE II IAS, the feature-level and classifier-level integration of multiple features improve the classification performance by 2.71% and 3.95%, respectively.
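
A sketch of feature-level integration in this spirit: bag-of-words features concatenated with simple domain-dependent counts. The trigger list and the per-document protein-mention counts are placeholders, not the paper's resources:

```python
# Concatenate tf-idf bag-of-words features with domain-dependent features
# (interaction trigger keyword counts and a protein-mention count).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

TRIGGERS = ["interact", "bind", "phosphorylate", "complex"]     # illustrative keyword list

def domain_features(texts, protein_counts):
    trig = [[t.lower().count(w) for w in TRIGGERS] for t in texts]
    return csr_matrix(np.hstack([trig, np.array(protein_counts)[:, None]]))

def build_features(texts, protein_counts, vectorizer=None):
    if vectorizer is None:
        vectorizer = TfidfVectorizer(min_df=2).fit(texts)
    bow = vectorizer.transform(texts)
    return hstack([bow, domain_features(texts, protein_counts)]), vectorizer

# X, vec = build_features(train_abstracts, train_protein_counts)
# clf = LinearSVC().fit(X, y_train)
```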


Subjects
Medical Informatics/methods, Natural Language Processing, Protein Interaction Mapping/methods, Vocabulary, Controlled, Chi-Square Distribution, Markov Chains, Terminology as Topic
13.
IEEE Trans Pattern Anal Mach Intell; 31(4): 721-35, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19229086

ABSTRACT

In the vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document can be recognized and classified by a computer or a classifier. Different terms (i.e., words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. Term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new, simple supervised term weighting method, tf.rf, to improve the terms' discriminating power for the text categorization task. The controlled experiments show that these supervised term weighting methods have mixed performance. Specifically, our proposed method, tf.rf, performs consistently better than the other term weighting methods, while the other supervised term weighting methods based on information theory or statistical metrics perform the worst in all experiments. On the other hand, the popular tf.idf method does not show uniformly good performance across the different data sets.
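
A small sketch of tf.rf weighting for a binary categorization task, using the commonly cited log-base-2 form of the relevance factor with max(1, c) in the denominator; the toy counts are illustrative:

```python
# tf.rf: weight raw term counts by a relevance factor that grows with how much more
# often a term occurs in positive-class documents than in negative-class documents.
import numpy as np

def tf_rf(tf, docs_pos, docs_neg):
    """tf: (n_docs, n_terms) raw counts; docs_pos/docs_neg: boolean document masks per class."""
    a = (tf[docs_pos] > 0).sum(axis=0)             # positive documents containing the term
    c = (tf[docs_neg] > 0).sum(axis=0)             # negative documents containing the term
    rf = np.log2(2.0 + a / np.maximum(1, c))
    return tf * rf                                  # scale every term count by its rf

tf = np.array([[2, 0, 1], [1, 1, 0], [0, 3, 0], [0, 1, 2]])
weighted = tf_rf(tf,
                 docs_pos=np.array([True, True, False, False]),
                 docs_neg=np.array([False, False, True, True]))
```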

14.
IEEE Trans Pattern Anal Mach Intell; 30(11): 1913-8, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18787240

ABSTRACT

This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
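
A very simplified sketch of the retrieval side, assuming the per-word shape codes (built elsewhere from ascenders, descenders, holes, and water reservoirs) are already available as strings:

```python
# Rank documents by how often the query's word shape codes occur in them.
from collections import Counter

def document_score(query_codes, doc_codes):
    """Both arguments are lists of word shape code strings, e.g. a hypothetical 'A2D1H1'."""
    doc_hist = Counter(doc_codes)
    return sum(doc_hist[c] for c in set(query_codes)) / max(1, len(doc_codes))

def retrieve(query_codes, documents):
    """documents: {doc_id: [word shape codes]} -> doc ids ranked by matching score."""
    return sorted(documents, key=lambda d: document_score(query_codes, documents[d]), reverse=True)
```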


Subjects
Artificial Intelligence, Database Management Systems, Databases, Factual, Documentation/methods, Image Interpretation, Computer-Assisted/methods, Information Storage and Retrieval/methods, Pattern Recognition, Automated/methods, Electronic Data Processing/methods, Image Enhancement/methods, Language, Reading
15.
IEEE Trans Pattern Anal Mach Intell; 28(2): 195-208, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16468617

ABSTRACT

Scanning a document page from a thick bound volume often results in two kinds of distortions in the scanned image, i.e., shade along the "spine" of the book and warping in the shade area. In this paper, we propose an efficient restoration method based on the discovery of the 3D shape of a book surface from the shading information in a scanned document image. From a technical point of view, this shape from shading (SFS) problem in real-world environments is characterized by 1) a proximal and moving light source, 2) Lambertian reflection, 3) nonuniform albedo distribution, and 4) document skew. Taking all these factors into account, we first build practical models (consisting of a 3D geometric model and a 3D optical model) for the practical scanning conditions to reconstruct the 3D shape of the book surface. We next restore the scanned document image using this shape based on deshading and dewarping models. Finally, we evaluate the restoration results by comparing our estimated surface shape with the real shape as well as the OCR performance on original and restored document images. The results show that the geometric and photometric distortions are mostly removed and the OCR results are improved markedly.
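
Not the paper's shape-from-shading pipeline, which reconstructs the 3-D book surface; the fragment below only illustrates the final photometric (deshading) correction under a much cruder assumption that the shade varies mainly across the spine direction:

```python
# Crude deshading stand-in: estimate a smooth per-column illumination profile from
# bright (background) pixels and divide it out.
import numpy as np
from scipy import ndimage

def deshade(gray, percentile=90, smooth=25):
    g = gray.astype(float)
    profile = np.percentile(g, percentile, axis=0)          # bright background level per column
    profile = ndimage.uniform_filter1d(profile, smooth)     # smooth across the spine direction
    corrected = g / np.maximum(profile, 1.0) * profile.max()
    return np.clip(corrected, 0, 255).astype(gray.dtype)
```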


Subjects
Documentation/methods, Electronic Data Processing/methods, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Information Storage and Retrieval/methods, Pattern Recognition, Automated/methods, Algorithms, Artifacts, Artificial Intelligence, Computer Graphics, Computer Simulation, Models, Theoretical, Reproducibility of Results, Sensitivity and Specificity
16.
J Biomed Inform; 37(6): 411-22, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15542015

ABSTRACT

The purpose of this research is to enhance an HMM-based named entity recognizer in the biomedical domain. First, we analyze the characteristics of biomedical named entities. We then propose a rich set of features, including orthographic, morphological, part-of-speech, and semantic trigger features, all of which are integrated via a Hidden Markov Model with back-off modeling. Furthermore, we propose a method for biomedical abbreviation recognition and two methods for cascaded named entity recognition. Evaluation on GENIA V3.02 and V1.1 shows that our system achieves F-measures of 66.5 and 62.5, respectively, and outperforms the previous best published system by 8.1 F-measure under the same experimental setting. The major contribution of this paper lies in its rich feature set specially designed for the biomedical domain and the effective methods for abbreviation and cascaded named entity recognition. To the best of our knowledge, our system is the first to cope with the cascaded phenomenon.
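
A sketch of the decoding side only: standard Viterbi over an HMM whose emission scores would, in the paper, come from the rich feature set with back-off modeling; here they are assumed to be given as plain log-probability tables:

```python
# Viterbi decoding for an HMM tagger (states could be, e.g., B-entity / I-entity / O).
import numpy as np

def viterbi(log_trans, log_emit, log_start):
    """log_trans: (S, S); log_emit: (T, S) per-token state scores; returns the best state path."""
    T, S = log_emit.shape
    delta = log_start + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # previous state x next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```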


Subjects
Abstracting and Indexing/methods, Computational Biology/methods, Information Storage and Retrieval/methods, Abbreviations as Topic, Algorithms, Animals, Artificial Intelligence, Biology/methods, Database Management Systems, Databases as Topic, Databases, Bibliographic, Humans, Language, Markov Chains, Models, Statistical, Names, Natural Language Processing, Software, Terminology as Topic
17.
IEEE Trans Neural Netw; 15(3): 728-37, 2004 May.
Article in English | MEDLINE | ID: mdl-15384559

ABSTRACT

This paper introduces the Adaptive Resonance Theory under Constraint (ART-C 2A) learning paradigm, based on ART 2A, which is capable of generating a user-defined number of recognition nodes through online estimation of an appropriate vigilance threshold. Empirical experiments quantitatively compare the cluster validity and learning efficiency of ART-C 2A with those of ART 2A, as well as three closely related clustering methods, namely online K-Means, batch K-Means, and SOM. Besides retaining the online cluster creation capability of ART 2A, ART-C 2A provides an alternative clustering solution that allows direct control over the number of output clusters generated by the self-organizing process.
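
A much-simplified sketch of constrained ART-2A-style online clustering. The exact constraint-reset rule of ART-C 2A is more involved; merging the two most similar prototypes and relaxing the vigilance is an assumed simplification:

```python
# Online clustering: normalized inputs either update the best-matching prototype
# (resonance) or spawn a new node; when the node budget C is exceeded, merge the
# two most similar prototypes and relax the vigilance.
import numpy as np

def normalize(x):
    return x / (np.linalg.norm(x) + 1e-12)

def art_c_2a(data, C=10, vigilance=0.9, beta=0.1):
    protos = []
    for x in map(normalize, data):
        sims = [p @ x for p in protos]
        if sims and max(sims) >= vigilance:
            j = int(np.argmax(sims))                        # resonance: move winner toward input
            protos[j] = normalize((1 - beta) * protos[j] + beta * x)
        else:
            protos.append(x)                                # mismatch: commit a new node
        if len(protos) > C:                                 # constraint reset (simplified)
            sim = np.array([[p @ q for q in protos] for p in protos])
            np.fill_diagonal(sim, -1)
            a, b = np.unravel_index(sim.argmax(), sim.shape)
            merged = normalize(protos[a] + protos[b])
            protos = [p for k, p in enumerate(protos) if k not in (a, b)] + [merged]
            vigilance = min(vigilance, sim.max())           # so fewer new nodes are created later
    return protos
```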


Subjects
Cluster Analysis, Neural Networks, Computer