Results 1 - 20 of 17,434
1.
Medicine (Baltimore) ; 99(4): e18724, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31977863

ABSTRACT

Deep analysis of radiographic images can quantify the extent of intra-tumoral heterogeneity for personalized medicine. In this paper, we propose a novel content-based multi-feature image retrieval (CBMFIR) scheme to discriminate between benign and malignant pulmonary nodules. Two types of features are applied to represent the pulmonary nodules. For each type of feature, a single-feature distance metric model is proposed to measure the similarity of pulmonary nodules. Multiple single-feature distance metric models learned from different types of features are then combined into a multi-feature distance metric model. Finally, the learned multi-feature distance metric is used to construct a content-based image retrieval (CBIR) scheme to assist doctors in the diagnosis of pulmonary nodules. Classification accuracy and retrieval accuracy are used to evaluate the performance of the scheme. The classification accuracy is 0.955 ± 0.010, and the retrieval accuracies outperform those of the comparison methods. The proposed CBMFIR scheme is effective in the diagnosis of pulmonary nodules, and our method better integrates multiple types of features from pulmonary nodules.
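The combination of per-feature-type metrics described above can be sketched as a weighted sum of learned single-feature distances. This is an illustrative Python sketch only, not the paper's implementation: the identity metric matrices and equal weights are placeholder assumptions (in the paper both would be learned from data).

```python
import numpy as np

def single_feature_distance(x, y, M):
    # Mahalanobis-style learned metric: d(x, y) = sqrt((x - y)^T M (x - y))
    d = x - y
    return float(np.sqrt(d @ M @ d))

def multi_feature_distance(feats_x, feats_y, metrics, weights):
    # Weighted combination of the per-feature-type distances
    dists = [single_feature_distance(fx, fy, M)
             for fx, fy, M in zip(feats_x, feats_y, metrics)]
    return float(np.dot(weights, dists))

# Two toy feature types (e.g. texture and shape), identity metrics, equal weights
fx = [np.array([1.0, 0.0]), np.array([2.0])]
fy = [np.array([0.0, 0.0]), np.array([0.0])]
metrics = [np.eye(2), np.eye(1)]
d = multi_feature_distance(fx, fy, metrics, weights=[0.5, 0.5])
```

In a retrieval setting, nodules would be ranked by this combined distance to the query image.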


Subjects
Image Interpretation, Computer-Assisted/methods; Multiple Pulmonary Nodules/diagnosis; Solitary Pulmonary Nodule/diagnosis; Humans; Pattern Recognition, Automated/methods; Tomography, X-Ray Computed
3.
Acta Crystallogr F Struct Biol Commun ; 75(Pt 8): 531-536, 2019 Aug 01.
Article in English | MEDLINE | ID: mdl-31397323

ABSTRACT

Described here are instructions for building and using an inexpensive automated microscope (AMi) that has been specifically designed for viewing and imaging the contents of multi-well plates. The X, Y, Z translation stage is controlled through dedicated software (AMiGUI) that is being made freely available. Movements are controlled by an Arduino-based board running grbl, and the graphical user interface and image acquisition are controlled via a Raspberry Pi microcomputer running Python. Images can be written to the Raspberry Pi or to a remote disk. Plates with multiple sample wells at each row/column position are supported, and a script file for automated z-stack depth-of-field enhancement is written along with the images. The graphical user interface and real-time imaging also make it easy to manually inspect and capture images of individual samples.
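The z-stack depth-of-field enhancement mentioned above is typically a focus-stacking step: for each pixel, keep the value from the sharpest slice. The sketch below uses the magnitude of a discrete Laplacian as the sharpness proxy; it is a minimal illustration under that assumption, not the AMiGUI script itself.

```python
import numpy as np

def laplacian(img):
    # Discrete 4-neighbour Laplacian (wrap-around edges kept for brevity)
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def focus_stack(slices):
    # For each pixel, keep the value from the slice with the strongest
    # Laplacian response (a common sharpness proxy in focus stacking)
    stack = np.stack(slices)
    sharp = np.abs(np.stack([laplacian(s) for s in slices]))
    best = np.argmax(sharp, axis=0)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Two toy z-slices, each sharp in a different region
a = np.zeros((5, 5)); a[2, 2] = 10.0   # detail in focus in slice 0
b = np.zeros((5, 5)); b[0, 0] = 8.0    # detail in focus in slice 1
fused = focus_stack([a, b])
```

The fused image keeps the in-focus detail from each slice, which is what the automated z-stack script aims for across a well.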


Subjects
Computer Graphics; Image Processing, Computer-Assisted/methods; Microscopy/instrumentation; Pattern Recognition, Automated/methods; Software; User-Computer Interface; Humans
4.
Sensors (Basel) ; 19(17)2019 Aug 28.
Article in English | MEDLINE | ID: mdl-31466235

ABSTRACT

Dorsal hand vein images captured by different devices can differ greatly in brightness, displacement, rotation angle and size, and these deviations strongly influence the results of dorsal hand vein recognition. To address these problems, this paper proposes a dorsal hand vein recognition method based on bit planes and block-wise mutual information. First, the input grayscale image of the dorsal hand vein is decomposed into eight bit planes to suppress the interference of brightness in the higher bit planes and of noise in the lower bit planes. Second, the texture of each bit plane is described by a block method, and the mutual information between blocks is computed as texture features under three modes to handle rotation and size variation. Finally, cross-device experiments were carried out, with one device used for registration and the other for recognition. Compared with the scale-invariant feature transform (SIFT) algorithm, the new algorithm increases the recognition rate of the dorsal hand vein from 86.60% to 93.33%.
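The bit-plane decomposition step above is a standard operation: an 8-bit grayscale image splits into eight binary images, one per bit. A minimal sketch (illustrative only, not the authors' code):

```python
import numpy as np

def bit_planes(gray):
    # Decompose an 8-bit grayscale image into 8 binary bit planes:
    # plane 7 = most significant bit (coarse brightness),
    # plane 0 = least significant bit (noise-prone)
    gray = gray.astype(np.uint8)
    return [(gray >> b) & 1 for b in range(8)]

img = np.array([[0, 255],
                [128, 1]], dtype=np.uint8)
planes = bit_planes(img)
```

Texture descriptors can then be computed per plane, as the paper does with its block-wise mutual information features.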


Subjects
Hand/diagnostic imaging; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Veins/diagnostic imaging; Algorithms; Humans
5.
Sensors (Basel) ; 19(14)2019 Jul 10.
Article in English | MEDLINE | ID: mdl-31295850

ABSTRACT

Activity recognition, a key component of pervasive healthcare monitoring, relies on classification algorithms that require labeled data from individuals performing the activity of interest to train accurate models. Labeling can be performed in a lab setting, where an individual enacts the activity under controlled conditions, and the ubiquity of mobile and wearable sensors now allows the collection of large datasets from individuals performing activities in naturalistic conditions. However, gathering accurate data labels for activity recognition is typically an expensive and time-consuming process. In this paper we present two novel approaches for semi-automated online data labeling performed by the individual executing the activity of interest. The approaches were designed to address two limitations of self-annotation: (i) the burden on the user performing and annotating the activity, and (ii) the lack of accuracy that results when the user labels the data minutes or hours after completing an activity. The first approach is based on the recognition of subtle finger gestures performed in response to a data-labeling query. The second approach targets activities with an auditory manifestation, using a classifier to obtain an initial estimate of the activity and a conversational agent to ask the participant for clarification or additional data. Both approaches are described and evaluated in controlled experiments to assess their feasibility, and their advantages and limitations are discussed. Results show that while both studies have limitations, they achieve 80% to 90% precision.


Subjects
Delivery of Health Care/methods; Fingers/physiology; Gestures; Pattern Recognition, Automated/methods; Algorithms; Humans
6.
Comput Intell Neurosci ; 2019: 1604392, 2019.
Article in English | MEDLINE | ID: mdl-31341466

ABSTRACT

The medical knowledge sharing community provides users with an open platform for accessing medical resources and sharing medical knowledge, treatment experience, and emotions. Compared with recipients of general commodities, recipients in the medical knowledge sharing community pay more attention to the intensity and overall evaluation of the emotional vocabulary in comments concerning treatment effects, prices, service attitudes, and other aspects. The overall evaluation is therefore not the key factor in medical service comments; rather, the semantics of the emotional polarity is the key to how recipients interpret the medical information. In this paper, we propose an adaptive learning emotion identification method (ALEIM) based on mutual information feature weighting, which captures the correlation and redundancy of features. To evaluate the proposed method's effectiveness, we use four basic corpora crawled from the Haodf online platform and employ the Taiwan University NTUSD Simplified Chinese Emotion Dictionary for emotion classification. The experimental results show that the proposed ALEIM method performs better in identifying the redundant features of low-frequency words in comments of the online medical knowledge sharing community.
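Mutual information feature weighting, the ingredient the ALEIM abstract names, scores a feature by how much knowing its value reduces uncertainty about the class. A minimal stdlib Python sketch of discrete mutual information (illustrative only, not the authors' implementation):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum over (x, y) of p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    # in nats; xs are feature values, ys are class labels
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # c * n / (px[x] * py[y]) equals p(x,y) / (p(x) p(y))
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

# A feature that perfectly predicts a balanced binary class carries
# I(X;Y) = H(Y) = log 2 nats; an independent feature carries 0
mi = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```

Features would then be weighted (or selected) by these scores, with redundancy between features penalized analogously.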


Subjects
Artificial Intelligence; Emotions; Health Information Exchange; Internet; Pattern Recognition, Automated/methods; Humans; Information Dissemination; Information Theory; Semantics
7.
Sensors (Basel) ; 19(13)2019 Jul 04.
Article in English | MEDLINE | ID: mdl-31277492

ABSTRACT

Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi devices has gained attention with recent advances in wireless technology. HGR recognizes the human activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction and transformation to pre-process the raw CSI traces. However, these methods fail to capture the non-Gaussian information in the raw CSI data because they are limited to linear signal representations. The proposed higher order statistics-based recognition (HOS-Re) model extracts higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the recognition task. HOS-Re addresses the limitations of the existing methods by extracting third-order cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods derived from information theory construct a robust and highly informative feature subset, which is fed to a multilevel support vector machine (SVM) classifier to measure performance. The proposed methodology is validated using the public SignFi database, consisting of 276 gestures with 8280 gesture instances (5520 from a laboratory and 2760 from a home environment), using 10 × 5 cross-validation. HOS-Re achieved average recognition accuracies of 97.84%, 98.26% and 96.34% for the lab, home and lab + home environments, respectively. The average recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was 96.23% in the laboratory environment.


Subjects
Gestures; Pattern Recognition, Automated/methods; Wireless Technology/instrumentation; Databases, Factual; Humans; Machine Learning; Support Vector Machine
8.
J Clin Pathol ; 72(11): 755-761, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31256009

ABSTRACT

AIMS: Morphological differentiation among different blast cell lineages is a difficult task and there is a lack of automated analysers able to recognise these abnormal cells. This study aims to develop a machine learning approach to predict the diagnosis of acute leukaemia using peripheral blood (PB) images. METHODS: A set of 442 smears was analysed from 206 patients. It was split into a training set with 75% of these smears and a testing set with the remaining 25%. Colour clustering and mathematical morphology were used to segment cell images, which allowed the extraction of 2,867 geometric, colour and texture features. Several classification techniques were studied to obtain the most accurate classification method. Afterwards, the classifier was assessed with the images of the testing set. The final strategy was to predict the patient's diagnosis using the PB smear, and the final assessment was done with the cell images of the smears of the testing set. RESULTS: The highest classification accuracy was achieved with the selection of 700 features with linear discriminant analysis. The overall classification accuracy for the six groups of cell types was 85.8%, while the overall classification accuracy for individual smears was 94% as compared with the true confirmed diagnosis. CONCLUSIONS: The proposed method achieves a high diagnostic precision in the recognition of different types of blast cells among other mononuclear cells circulating in blood. It is the first encouraging step towards the idea of being a diagnostic support tool in the future.
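The smear-level diagnosis above aggregates many per-cell classifications into one patient-level call. A majority vote is one plausible aggregation rule, shown here purely as an illustration (the cell class names are hypothetical, and the paper's exact aggregation may differ):

```python
from collections import Counter

def smear_diagnosis(cell_predictions):
    # Patient-level call: majority vote over the per-cell class predictions
    # from one smear (ties resolved by first-seen order via most_common)
    return Counter(cell_predictions).most_common(1)[0][0]

# Hypothetical per-cell predictions for one smear
cells = ["myeloid blast", "myeloid blast", "lymphocyte", "myeloid blast"]
diagnosis = smear_diagnosis(cells)
```

This explains how a 85.8% per-cell accuracy can yield a higher 94% per-smear accuracy: individual misclassified cells are outvoted.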


Subjects
Image Interpretation, Computer-Assisted/methods; Leukemia/pathology; Leukocytes/pathology; Machine Learning; Pattern Recognition, Automated/methods; Staining and Labeling/methods; Acute Disease; Blood Specimen Collection; Cell Lineage; Diagnosis, Differential; Humans; Leukemia/blood; Leukemia/classification; Predictive Value of Tests; Reproducibility of Results
9.
J Electromyogr Kinesiol ; 48: 152-160, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31357113

ABSTRACT

Research in pattern recognition (PR) for myoelectric control of upper limb prostheses has been extensive. However, there has been limited attention to the factors that influence the clinical translation of this technology. A relevant factor affecting the clinical performance of EMG PR-based prosthesis control is variation in muscle activation level, which modifies the EMG patterns even when the amputee attempts the same movement. To decrease the effect of muscle activation level variation on EMG PR, this work proposes the use of dynamic time warping (DTW), validated on two databases. The first database, containing data from ten intact-limbed subjects, was used to test the baseline performance of DTW, resulting in an average classification accuracy of more than 90%. The second database comprised data from nine upper limb amputees recorded at three levels of force for six hand grips. The results showed that DTW trained at a single force level achieved average classification accuracies of 60 ± 9%, 70 ± 8%, and 60 ± 7% at the low, medium and high force levels, respectively, across all amputee subjects. The proposed DTW scheme achieved a significant 10% improvement in classification accuracy when trained at a low force level, compared with the traditional time-dependent power spectrum descriptors (TD-PSD) method.
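DTW itself is the classic dynamic-programming alignment; its tolerance to amplitude and timing shifts is why it helps across force levels. A self-contained sketch of the textbook algorithm (the absolute-difference local cost is an assumption; the paper may use a different cost over EMG feature sequences):

```python
def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic time warping with |x - y| local
    # cost; the warping path absorbs timing/duration differences between
    # two sequences, such as the same grip attempted at different speeds
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Classification then assigns a test pattern to the class of the nearest training template under this distance.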


Subjects
Artificial Limbs/standards; Electromyography/methods; Hand/physiology; Pattern Recognition, Automated/methods; Adult; Electromyography/standards; Hand/physiopathology; Humans; Male; Movement; Muscle, Skeletal/physiology; Muscle, Skeletal/physiopathology; Pattern Recognition, Automated/standards
10.
Comput Intell Neurosci ; 2019: 2060796, 2019.
Article in English | MEDLINE | ID: mdl-31354800

ABSTRACT

In today's society, image resources are everywhere, and the number of available images can be overwhelming. Determining how to rapidly and effectively query, retrieve, and organize image information has become a popular research topic, and automatic image annotation is the key to text-based image retrieval. If the semantic images with annotations are not balanced among the training samples, the low-frequency labeling accuracy can be poor. In this study, a dual-channel convolution neural network (DCCNN) was designed to improve the accuracy of automatic labeling. The model integrates two convolutional neural network (CNN) channels with different structures. One channel is used for training based on the low-frequency samples and increases the proportion of low-frequency samples in the model, and the other is used for training based on all training sets. In the labeling process, the outputs of the two channels are fused to obtain a labeling decision. We verified the proposed model on the Caltech-256, Pascal VOC 2007, and Pascal VOC 2012 standard datasets. On the Pascal VOC 2012 dataset, the proposed DCCNN model achieves an overall labeling accuracy of up to 93.4% after 100 training iterations: 8.9% higher than the CNN and 15% higher than the traditional method. A similar accuracy can be achieved by the CNN only after 2,500 training iterations. On the 50,000-image dataset from Caltech-256 and Pascal VOC 2012, the performance of the DCCNN is relatively stable; it achieves an average labeling accuracy above 93%. In contrast, the CNN reaches an accuracy of only 91% even after extended training. Furthermore, the proposed DCCNN achieves a labeling accuracy for low-frequency words approximately 10% higher than that of the CNN, which further verifies the reliability of the proposed model in this study.
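The fusion of the two channels' outputs into a labeling decision can be illustrated with a simple late-fusion rule: blend the per-label probabilities of both channels. The averaging rule and the `alpha` weight below are assumptions for illustration; the paper does not necessarily fuse this exact way.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a vector of logits
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse_channels(logits_low_freq, logits_all, alpha=0.5):
    # Late fusion: blend per-label probabilities from the channel trained
    # on low-frequency samples and the channel trained on all samples;
    # alpha weights the low-frequency channel
    p1 = softmax(np.asarray(logits_low_freq, dtype=float))
    p2 = softmax(np.asarray(logits_all, dtype=float))
    return alpha * p1 + (1 - alpha) * p2

# Two-label toy case where the channels disagree symmetrically
p = fuse_channels([2.0, 0.0], [0.0, 2.0], alpha=0.5)
```

The fused vector stays a valid probability distribution, and raising `alpha` shifts decisions toward the low-frequency channel, which is the point of the dual-channel design.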


Subjects
Image Processing, Computer-Assisted/methods; Humans; Pattern Recognition, Automated/methods
11.
Comput Intell Neurosci ; 2019: 9378014, 2019.
Article in English | MEDLINE | ID: mdl-31354803

ABSTRACT

The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation incorporating a sparsity-inducing regularization term is adopted to model the background part. The linearized alternating direction method with adaptive penalty (LADMAP) is then selected to solve the model, and the brain lesions are obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. Experimental results on data from brain tumor patients and multiple sclerosis patients revealed that the proposed method is superior to several existing methods in terms of segmentation accuracy while performing the segmentation automatically.
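Solvers in this family, including LADMAP, repeatedly apply singular value thresholding — the proximal operator of the nuclear norm — to keep the background iterate low-rank. A minimal sketch of that single step only (not the full LADMAP algorithm or the paper's model):

```python
import numpy as np

def svd_shrink(X, tau):
    # Singular value thresholding: soft-threshold the singular values by
    # tau and reconstruct; this is the proximal operator of the nuclear
    # norm, the basic low-rank step inside solvers such as LADMAP
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

# Shrinking diag(5, 1) by tau = 2 kills the small singular value entirely
X = np.diag([5.0, 1.0])
L = svd_shrink(X, 2.0)
```

Small singular values — typically noise or lesion residue — are removed outright, which is why the residual matrix ends up carrying the lesions.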


Subjects
Brain/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain Neoplasms/diagnostic imaging; Humans; Multiple Sclerosis/diagnostic imaging; Pattern Recognition, Automated/methods
12.
Sensors (Basel) ; 19(15)2019 Jul 27.
Article in English | MEDLINE | ID: mdl-31357650

ABSTRACT

Wearable robotic braces have the potential to improve rehabilitative therapies for patients suffering from musculoskeletal (MSK) conditions. Ideally, a quantitative assessment of health would be incorporated into rehabilitative devices to monitor patient recovery. The purpose of this work is to develop a model to distinguish between the healthy and injured arms of elbow trauma patients based on electromyography (EMG) data. Surface EMG recordings were collected from the healthy and injured limbs of 30 elbow trauma patients while performing 10 upper-limb motions. Forty-two features and five feature sets were extracted from the data. Feature selection was performed to improve the class separation and to reduce the computational complexity of the feature sets. The following classifiers were tested: linear discriminant analysis (LDA), support vector machine (SVM), and random forest (RF). The classifiers were used to distinguish between two levels of health: healthy and injured (50% baseline accuracy rate). Maximum fractal length (MFL), myopulse percentage rate (MYOP), power spectrum ratio (PSR) and spike shape analysis features were identified as the best features for classifying elbow muscle health. A majority vote of the LDA classification models provided a cross-validation accuracy of 82.1%. The work described in this paper indicates that it is possible to discern between healthy and injured limbs of patients with MSK elbow injuries. Further assessment and optimization could improve the consistency and accuracy of the classification models. This work is the first of its kind to identify EMG metrics for muscle health assessment by wearable rehabilitative devices.
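Of the features named above, the myopulse percentage rate (MYOP) is the simplest to state: the fraction of EMG samples whose absolute amplitude exceeds a threshold. A minimal sketch under that standard definition (the threshold value below is an arbitrary example):

```python
import numpy as np

def myopulse_percentage_rate(emg, threshold):
    # MYOP: fraction of samples whose absolute amplitude exceeds the
    # threshold - a simple proxy for how often the muscle is actively firing
    emg = np.asarray(emg, dtype=float)
    return float(np.mean(np.abs(emg) > threshold))

# Toy EMG window: 2 of 4 samples exceed the 0.3 threshold in magnitude
rate = myopulse_percentage_rate([0.1, -0.5, 0.9, -0.05], threshold=0.3)
```

Such scalar features, computed per channel and per window, form the vectors fed to the LDA/SVM/RF classifiers compared in the paper.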


Subjects
Elbow/diagnostic imaging; Electromyography; Muscle, Skeletal/diagnostic imaging; Wounds and Injuries/diagnostic imaging; Adult; Algorithms; Discriminant Analysis; Elbow/injuries; Elbow/physiopathology; Female; Humans; Male; Muscle, Skeletal/injuries; Muscle, Skeletal/physiopathology; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Support Vector Machine; Wearable Electronic Devices; Wounds and Injuries/physiopathology; Wounds and Injuries/rehabilitation
13.
Neural Netw ; 117: 201-215, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31174048

ABSTRACT

Most existing low-rank and sparse representation models cannot preserve the local manifold structures of samples adaptively, or they separate locality preservation from the coding process, which may result in decreased performance. In this paper, we propose an inductive Robust Auto-weighted Low-Rank and Sparse Representation (RALSR) framework with joint feature embedding for the salient feature extraction of high-dimensional data. Technically, the RALSR model seamlessly integrates joint low-rank and sparse recovery with robust salient feature extraction. Specifically, RALSR integrates adaptive locality-preserving weighting, joint low-rank/sparse representation and robustness-promoting representation into a unified model. For accurate similarity measurement, RALSR computes the adaptive weights by minimizing the joint reconstruction errors over the recovered clean data and salient features simultaneously, where the L1-norm is also applied to ensure the sparsity of the learnt weights. The joint minimization can also potentially enable the weight matrix to remove noise and unfavorable features through adaptive reconstruction. The underlying projection is encoded by a joint low-rank and sparse regularization, which makes it powerful for salient feature extraction. Thus, the calculated low-rank sparse features of high-dimensional data are more accurate for subsequent classification. Visual and numerical comparison results demonstrate the effectiveness of RALSR for data representation and classification.


Subjects
Image Processing, Computer-Assisted/methods; Machine Learning; Pattern Recognition, Automated/methods; Image Processing, Computer-Assisted/standards; Pattern Recognition, Automated/standards
14.
J Med Syst ; 43(7): 211, 2019 May 31.
Article in English | MEDLINE | ID: mdl-31152236

ABSTRACT

Lung cancer is the leading cause of cancer death in today's world. The survival rate of patients is 85% if the cancer can be diagnosed during Stage 1. Mining patient records can help in diagnosing cancer during Stage 1, and using a multi-class neural network helps to identify the disease at that stage. The implementation of the multi-class neural network yielded an accuracy of 100%. The model created using the neural network approach helps to identify lung cancer during Stage 1, so the survival rate of patients can be increased. This model can serve as a pre-diagnosis tool for practitioners.


Subjects
Data Mining/methods; Lung Neoplasms/diagnosis; Pattern Recognition, Automated/methods; Algorithms; Humans; Lung Neoplasms/pathology
15.
Sensors (Basel) ; 19(12)2019 Jun 25.
Article in English | MEDLINE | ID: mdl-31242651

ABSTRACT

We propose a method to automatically detect the 3D poses of closely interacting humans from sparse multi-view images at a single time instance. This is a challenging problem due to the strong partial occlusion and truncation between humans and the absence of a tracking process to provide prior pose information. To solve this problem, we first obtain 2D joints in every image using OpenPose and human semantic segmentation results from Mask R-CNN. With the 3D joints triangulated from the multi-view 2D joints, a two-stage assembling method is proposed to select the correct 3D pose from thousands of pose seeds combined by joint semantic meanings. We further present a novel approach to minimize the interpenetration between human shapes in close interaction. Finally, we test our method on multi-view human-human interaction (MHHI) datasets. Experimental results demonstrate that our method achieves a high visually correct rate and outperforms the existing method in accuracy and real-time capability.
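Triangulating a 3D joint from multi-view 2D joints is commonly done with the linear DLT method: each view contributes two rows of a homogeneous system whose null vector is the 3D point. A minimal two-view sketch (the camera matrices below are hypothetical examples; the paper does not specify its triangulation variant):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: each view's projection constraint
    # x ~ P X gives two rows of A X = 0; the homogeneous 3D point is the
    # right singular vector for the smallest singular value of A
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: identity view and a unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xtrue = np.array([0.5, 0.2, 2.0])
x1 = P1 @ np.append(Xtrue, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(Xtrue, 1.0); x2 = x2[:2] / x2[2]
Xhat = triangulate(P1, P2, x1, x2)
```

With noisy detections the same least-squares machinery returns the best-fit point, and per-joint 3D candidates then feed the pose-assembly stage.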


Subjects
Pattern Recognition, Automated/methods; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods
16.
Sensors (Basel) ; 19(12)2019 Jun 21.
Article in English | MEDLINE | ID: mdl-31234366

ABSTRACT

Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, attracting many researchers. Although significant results have been achieved in simple scenarios, HAR is still a challenging task due to issues associated with view independence, occlusion and inter-class variation observed in realistic scenarios. Previous research efforts have widely used the classical bag of visual words approach and its variations. In this paper, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition without compromising the strengths of the classical bag of visual words approach. Expressions are formed based on the density of a spatio-temporal cube around a visual word. To handle inter-class variation, we use class-specific visual word representations for visual expression generation. In contrast to the Bag of Expressions (BoE) model, visual expressions are formed from the density of spatio-temporal cubes built around each visual word, since constructing neighborhoods with a fixed number of neighbors could include non-relevant information, making a visual expression less discriminative in scenarios with occlusion and changing viewpoints. The proposed approach thus makes the model more robust to the occlusion and viewpoint changes present in realistic scenarios. Furthermore, we train a multi-class support vector machine (SVM) to classify bags of expressions into action classes. Comprehensive experiments on four publicly available datasets (KTH, UCF Sports, UCF11 and UCF50) show that the proposed model outperforms existing state-of-the-art human action recognition methods, reaching accuracies of 99.21%, 98.60%, 96.94% and 94.10%, respectively.


Subjects
Human Activities; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Spatio-Temporal Analysis; Algorithms; Humans; Sports/physiology; Video Recording
17.
Sensors (Basel) ; 19(12)2019 Jun 14.
Article in English | MEDLINE | ID: mdl-31207911

ABSTRACT

The term "plenoptic" comes from the Latin plenus ("full") + optic. The plenoptic function is the 7-dimensional function representing the intensity of the light observed from every position and direction in 3-dimensional space. The plenoptic function thus makes it possible to define the direction of every ray in the light-field vector function. Imaging systems are rapidly evolving with the emergence of light-field-capturing devices, and existing image-processing techniques consequently need to be revisited to exploit the richer information provided. This article explores the use of light fields for face analysis. This field of research is very recent but already includes several works reporting promising results. Such works address the main steps of face analysis and include, but are not limited to: face recognition; face presentation attack detection; facial soft-biometrics classification; and facial landmark detection. This article reviews the state of the art on light fields for face analysis, identifying future challenges and possible applications.


Subjects
Face/anatomy & histology; Facial Recognition; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Biometry/methods; Humans
18.
Sensors (Basel) ; 19(12)2019 Jun 14.
Article in English | MEDLINE | ID: mdl-31207949

ABSTRACT

Eye movements generate electric signals that a user can employ to control his or her environment and communicate with others. This paper presents a review of previous studies on such electric signals, that is, electrooculograms (EOGs), from the perspective of human-computer interaction (HCI). EOGs represent one of the easiest means of estimating eye movements with a low-cost device and have often been considered and utilized for HCI applications, such as typing on a virtual keyboard, moving a mouse, or controlling a wheelchair. The objective of this study is to summarize the experimental procedures of previous studies and provide a guide for researchers interested in this field. The basic characteristics of EOGs, associated measurements, and signal processing and pattern recognition algorithms are briefly reviewed, and various applications reported in the existing literature are listed. EOGs are expected to be a useful means of communication in virtual reality environments and can act as a valuable communication tool for people with amyotrophic lateral sclerosis.


Subjects
Communication Aids for Disabled; Electrooculography/trends; Eye Movements/physiology; Pattern Recognition, Automated/methods; Algorithms; Amyotrophic Lateral Sclerosis/physiopathology; Amyotrophic Lateral Sclerosis/rehabilitation; Humans; Signal Processing, Computer-Assisted; User-Computer Interface
19.
Comput Intell Neurosci ; 2019: 3587036, 2019.
Article in English | MEDLINE | ID: mdl-31217801

ABSTRACT

To overcome the inaccurate representation of textural direction and the high computational complexity of Local Binary Patterns (LBPs), we propose a novel feature descriptor named Local Dominant Directional Symmetrical Coding Patterns (LDDSCPs). Inspired by the directional sensitivity of the human visual system, we partition eight convolution masks into two symmetrical groups according to their directions and use these two groups to compute the convolution values of each pixel. We then encode the dominant direction information of the facial expression texture by comparing each pixel's convolution values with the average strength of its group, obtaining LDDSCP-1 and LDDSCP-2 codes, respectively. Finally, in view of the symmetry of the two groups of direction masks, we stack the corresponding histograms of the LDDSCP-1 and LDDSCP-2 codes into the final LDDSCP feature vector, which yields more precise facial feature description with lower computational complexity. Experimental results on the JAFFE and Cohn-Kanade databases demonstrate that the proposed LDDSCP feature descriptor achieves superior performance in recognition rate and computational complexity compared with LBP, Gabor, and other traditional operators. Furthermore, it is comparable to state-of-the-art local descriptors such as LDP, LDNP, es-LBP, and GDP.


Subjects
Biometric Identification/methods; Computer Simulation; Facial Expression; Pattern Recognition, Automated/methods; Humans; Image Interpretation, Computer-Assisted/methods
20.
J Med Syst ; 43(8): 241, 2019 Jun 21.
Article in English | MEDLINE | ID: mdl-31227923

ABSTRACT

The multi-atlas method is an efficient and common automatic labeling method that uses the prior information provided by expert-labeled images to guide the labeling of the target. However, most multi-atlas-based methods depend on registration, which may not provide correct information during label propagation. To address this issue, we designed a new automatic labeling method based on hashing retrieval and atlas forests. The proposed method propagates labels without registration to reduce errors and constructs a target-oriented learning model to integrate information among the atlases. It introduces a coarse classification strategy to preprocess the dataset, which retains the integrity of the dataset and reduces computing time. Furthermore, the method treats each voxel in the atlas as a sample and encodes these samples with hashing for fast sample retrieval. In the labeling stage, the method selects suitable samples through hashing learning and trains atlas forests by integrating information from the dataset. The trained model is then used to predict the labels of the target. Experimental results on two datasets show that the proposed method is promising for the automatic labeling of MR brain images.
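Hashing voxel samples for fast retrieval can be illustrated with projection-sign binary codes: similar feature vectors land on the same side of most hyperplanes and thus share most bits. The fixed hyperplanes and 4-D "voxel features" below are toy assumptions; a real system (including the paper's) would learn or randomize its hash functions over richer features.

```python
import numpy as np

def hash_codes(features, planes):
    # One bit per hyperplane: the sign of the projection of each feature
    # vector; similar vectors agree on most bits, enabling Hamming-distance
    # retrieval instead of exhaustive comparison
    return (features @ planes.T > 0).astype(np.uint8)

# Four toy hyperplanes over 4-D voxel features
planes = np.array([[1.0,  0.0, 0.0, 0.0],
                   [0.0,  1.0, 0.0, 0.0],
                   [1.0,  1.0, 0.0, 0.0],
                   [1.0, -1.0, 0.0, 0.0]])
feats = np.array([[ 2.0,  1.0, 0.0, 0.0],
                  [ 2.0,  1.1, 0.0, 0.0],   # near-duplicate of the first
                  [-2.0, -1.0, 0.0, 0.0]])  # opposite of the first
codes = hash_codes(feats, planes)
```

At labeling time, a target voxel's code is used to look up atlas samples with matching or nearby codes, which are then fed to the atlas forests.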


Subjects
Image Processing, Computer-Assisted; Neuroimaging; Pattern Recognition, Automated/methods; Algorithms; Humans; Machine Learning