Results 1 - 4 of 4
1.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 3170-3174, 2022 07.
Article in English | MEDLINE | ID: mdl-36086672

ABSTRACT

Among the different modalities used to assess emotion, the electroencephalogram (EEG), which records the electrical activity of the brain, has achieved motivating results over the last decade. Emotion estimation from EEG could help in the diagnosis or rehabilitation of certain diseases. In this paper, we propose a dual model that considers two different representations of EEG feature maps: 1) a sequence-based representation of EEG band power, and 2) an image-based representation of the feature vectors. We also propose an innovative method for combining the two, based on a saliency analysis of the image-based model, to promote joint learning of both model parts. The model has been evaluated on four publicly available datasets: SEED-IV, SEED, DEAP and MPED. The results outperform state-of-the-art approaches on three of the four datasets, with a lower standard deviation that reflects higher stability. For the sake of reproducibility, the code and models proposed in this paper are available at https://github.com/VDelv/Emotion-EEG. [A minimal code sketch of the dual-representation idea follows this entry.]


Subject(s)
Electroencephalography , Emotions , Electroencephalography/methods , Reproducibility of Results
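The following is a minimal sketch of the dual-representation idea described above, not the authors' implementation (which is available at https://github.com/VDelv/Emotion-EEG): a sequence branch (a GRU over windowed band-power vectors) and an image branch (a small CNN over the band-by-channel feature map) whose outputs are concatenated before classification. The saliency-driven fusion is omitted, and the layer sizes, the 62-electrode/5-band shapes and the class count are illustrative assumptions.

# Minimal sketch, not the published architecture.
import torch
import torch.nn as nn

class DualEEGModel(nn.Module):
    def __init__(self, n_bands=5, n_channels=62, n_classes=4):
        super().__init__()
        # Sequence branch: each time window's band-power vector is one step.
        self.gru = nn.GRU(input_size=n_bands * n_channels, hidden_size=64,
                          batch_first=True)
        # Image branch: the (bands x channels) map is treated as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
        self.head = nn.Linear(64 + 16 * 4 * 4, n_classes)

    def forward(self, x):
        # x: (batch, time, bands, channels) band-power features.
        b, t, f, c = x.shape
        seq_out, _ = self.gru(x.reshape(b, t, f * c))      # (b, t, 64)
        seq_feat = seq_out[:, -1]                          # last hidden state
        img_feat = self.cnn(x.mean(dim=1, keepdim=True))   # time-averaged map as image
        return self.head(torch.cat([seq_feat, img_feat], dim=1))

model = DualEEGModel()
logits = model(torch.randn(8, 10, 5, 62))  # 8 trials, 10 windows, 5 bands, 62 electrodes
print(logits.shape)                        # torch.Size([8, 4])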
2.
IEEE Trans Cybern ; 45(7): 1340-52, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25216492

ABSTRACT

Recognizing human actions in 3-D video sequences is an important open problem at the heart of many research domains, including surveillance, natural interfaces and rehabilitation. However, designing action recognition models that are both accurate and efficient is challenging because of the variability of human pose, clothing and appearance. In this paper, we propose a new framework that extracts a compact representation of a human action captured by a depth sensor and enables accurate action recognition. The proposed solution builds on fitting a human skeleton model to the acquired data, so that the 3-D coordinates of the joints and their change over time are represented as a trajectory in a suitable action space. Thanks to this 3-D joint-based framework, the proposed solution captures both the shape and the dynamics of the human body simultaneously. The action recognition problem is then formulated as computing the similarity between the shapes of trajectories in a Riemannian manifold. Classification is finally performed on this manifold with a k-nearest-neighbor scheme that takes advantage of the Riemannian geometry of the open-curve shape space. Experiments on four representative benchmarks demonstrate the potential of the proposed solution in terms of accuracy and latency for low-latency action recognition, and comparative results with state-of-the-art methods are reported. [A minimal code sketch of the trajectory-based pipeline follows this entry.]


Subject(s)
Imaging, Three-Dimensional/methods , Machine Learning , Movement/physiology , Pattern Recognition, Automated/methods , Photography/methods , Whole Body Imaging/methods , Actigraphy/methods , Humans , Image Interpretation, Computer-Assisted/methods , Video Recording/methods
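As a rough illustration of the pipeline above, and not of the paper's elastic shape-space metric, the sketch below represents each action as the trajectory of its 3-D joint coordinates, resamples and normalizes the trajectories for translation and scale, and classifies a query with k-nearest neighbors under a plain Euclidean distance. Function names, the joint count and the resampling length are assumptions.

# Minimal sketch: trajectory-based action recognition with k-NN.
# The published method compares curve *shapes* in an open-curve Riemannian shape
# space; this only reproduces the surrounding pipeline with a plain distance.
import numpy as np

def to_curve(skeleton_seq, n_samples=50):
    """skeleton_seq: (T, J, 3) joint positions -> normalised curve (n_samples, 3*J)."""
    T = skeleton_seq.shape[0]
    flat = skeleton_seq.reshape(T, -1)
    idx = np.linspace(0, T - 1, n_samples)          # resample to a common length
    curve = np.stack([np.interp(idx, np.arange(T), flat[:, d])
                      for d in range(flat.shape[1])], axis=1)
    curve -= curve.mean(axis=0)                     # remove translation
    norm = np.linalg.norm(curve)
    return curve / norm if norm > 0 else curve      # remove scale

def knn_predict(train_curves, train_labels, query_curve, k=1):
    dists = [np.linalg.norm(query_curve - c) for c in train_curves]
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.array(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage with random skeleton sequences (20 joints).
rng = np.random.default_rng(0)
train = [to_curve(rng.normal(size=(40, 20, 3))) for _ in range(10)]
labels = [i % 2 for i in range(10)]
print(knn_predict(train, labels, to_curve(rng.normal(size=(35, 20, 3)))))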
3.
IEEE Trans Med Imaging ; 30(2): 315-26, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20875969

ABSTRACT

With the widespread use of digital cameras, freehand wound imaging has become common practice in clinical settings. There is, however, still a demand for a practical tool for accurate wound healing assessment that combines dimensional measurements and tissue classification in a single user-friendly system. We achieved the first part of this objective by computing a 3-D model for wound measurements using uncalibrated vision techniques. Here, we focus on tissue classification from color and texture region descriptors computed after unsupervised segmentation. Because of perspective distortions, uncontrolled lighting conditions and varying viewpoints, wound assessments vary significantly between patient examinations. The main contribution of this paper is to overcome this drawback with a multiview strategy for tissue classification, relying on a 3-D model onto which tissue labels are mapped and classification results are merged. The experimental classification tests demonstrate that enhanced repeatability and robustness are obtained, and that metric assessment is achieved through real area and volume measurements and wound outline extraction. This innovative tool is intended not only for therapeutic follow-up in hospitals but also for telemedicine and clinical research, where repeatability and accuracy of wound assessment are critical. [A minimal code sketch of the multiview label-fusion idea follows this entry.]


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Photography/methods , Wound Healing/physiology , Diabetic Foot/pathology , Humans , Leg Ulcer/pathology , Pressure Ulcer/pathology , Reproducibility of Results
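The sketch below illustrates only the multiview fusion step, under simplified assumptions (a triangle mesh, per-view visibility already resolved, per-view tissue labels already predicted): each view votes a label for the mesh triangles it sees, labels are merged per triangle by majority vote, and real tissue areas follow from the triangle areas. It is not the published system; all names and data structures are hypothetical.

# Minimal sketch of multiview tissue-label fusion on a 3-D mesh.
import numpy as np
from collections import Counter

def triangle_area(v0, v1, v2):
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0))

def fuse_labels(per_view_labels, n_faces):
    """per_view_labels: list of dicts {face_index: label}, one dict per view."""
    fused = {}
    for f in range(n_faces):
        votes = [view[f] for view in per_view_labels if f in view]
        fused[f] = Counter(votes).most_common(1)[0][0] if votes else None
    return fused

def tissue_areas(vertices, faces, fused):
    """Sum triangle areas per fused tissue label (in the mesh's metric units)."""
    areas = {}
    for f, (i, j, k) in enumerate(faces):
        label = fused.get(f)
        if label is not None:
            areas[label] = areas.get(label, 0.0) + triangle_area(
                vertices[i], vertices[j], vertices[k])
    return areas

# Toy example: a two-triangle patch labelled from two views.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
F = [(0, 1, 2), (1, 3, 2)]
views = [{0: "granulation", 1: "slough"}, {0: "granulation"}]
fused = fuse_labels(views, len(F))
print(fused, tissue_areas(V, F, fused))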
4.
Article in English | MEDLINE | ID: mdl-18003389

ABSTRACT

This work is part of the ESCALE project, dedicated to the design of a complete 3-D and color wound assessment tool using a simple handheld digital camera. The first part was concerned with the computation of a 3-D model for wound measurements using uncalibrated vision techniques. This paper presents the second part, which deals with color classification of wound tissues, a preliminary step before combining shape and color analysis in a single tool for real tissue surface measurements. Since direct pixel classification proved inefficient for wound tissue labeling, we adopted an approach based on unsupervised segmentation prior to classification, improving the robustness of the labeling step by enforcing spatial continuity and homogeneity. A ground truth is first built by merging the images collected and labeled by clinicians. Color and texture tissue descriptors are then extracted from the labeled regions of this learning database to train an SVM region classifier, which achieves an 88% overlap score. Finally, we apply unsupervised color region segmentation to test images and classify the resulting regions. Compared to the ground truth, segmentation-driven classification and clinician labeling achieve similar performance, around 75% for granulation and 60% for slough. [A minimal code sketch of the segmentation-then-classification pipeline follows this entry.]


Subject(s)
Algorithms , Artificial Intelligence , Colorimetry/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Photography/methods , Pressure Ulcer/pathology , Wounds and Injuries/pathology , Color , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
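The sketch below follows the segmentation-then-classification pipeline in spirit, not the ESCALE implementation: unsupervised over-segmentation (SLIC superpixels standing in for the paper's segmentation), a placeholder color descriptor per region (mean and standard deviation of RGB instead of the paper's color and texture features), and an SVM region classifier trained on synthetic labels rather than clinician annotations.

# Minimal sketch: unsupervised segmentation, per-region descriptors, SVM classifier.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def region_descriptors(image, segments):
    """Mean and std of RGB per region: a stand-in for real color/texture features."""
    feats = []
    for label in np.unique(segments):
        pixels = image[segments == label]           # (n_pixels, 3)
        feats.append(np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)]))
    return np.array(feats)

# Synthetic image: red-ish left half (granulation-like), yellow-ish right half (slough-like).
rng = np.random.default_rng(0)
img = np.zeros((64, 64, 3))
img[:, :32] = [0.8, 0.2, 0.2]
img[:, 32:] = [0.8, 0.8, 0.3]
img = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)

segments = slic(img, n_segments=40, compactness=10, channel_axis=-1)
X = region_descriptors(img, segments)
# Synthetic region labels from the known halves, standing in for clinician annotations.
centroids = [np.argwhere(segments == l).mean(axis=0) for l in np.unique(segments)]
y = [int(c[1] >= 32) for c in centroids]            # 0 = "granulation", 1 = "slough"

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))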