Results 1 - 4 of 4
1.
IEEE Trans Syst Man Cybern B Cybern ; 41(3): 664-74, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21097382

ABSTRACT

In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they 1) are subjective and 2) give no specific timing information. Coding pain as a series of facial action units (AUs) can avoid these issues, as it yields an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, in this paper we describe an active appearance model (AAM)-based system that can automatically detect the frames of video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly expression and head movement due to the patient's reaction to pain. In this paper, we show that the AAM can deal with these movements and can achieve significant improvements in both AU and pain detection performance compared to the current state-of-the-art approaches, which utilize similarity-normalized appearance features only.


Subjects
Artificial Intelligence , Face/pathology , Image Interpretation, Computer-Assisted/methods , Pain Measurement/methods , Pain/pathology , Pattern Recognition, Automated/methods , Video Recording/methods , Humans , Pain/classification , Photography/methods
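The frame-level detection stage described above can be sketched roughly as a per-frame classifier over AAM-derived appearance features. The sketch below is an assumption-laden illustration, not the paper's actual pipeline: the feature dimensions are invented, random vectors stand in for similarity-normalized appearance features, and a linear SVM is one plausible choice of classifier.

```python
# Minimal sketch of per-frame "pain / no-pain" classification, assuming
# AAM appearance features have already been extracted (random stand-ins here).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Invented sizes: 200 video frames, 64-dimensional appearance features.
n_frames, n_features = 200, 64
pain_frames = rng.normal(loc=1.0, size=(n_frames // 2, n_features))
neutral_frames = rng.normal(loc=-1.0, size=(n_frames // 2, n_features))

X = np.vstack([pain_frames, neutral_frames])
y = np.array([1] * (n_frames // 2) + [0] * (n_frames // 2))

# One pain/no-pain decision per frame, giving the timing information
# that self-report measures lack.
clf = LinearSVC(C=1.0).fit(X, y)
frame_labels = clf.predict(X)
```

Real AAM features would come from fitting the model's shape and appearance parameters to each frame; only the classification step is sketched here.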
2.
Article in English | MEDLINE | ID: mdl-21278824

ABSTRACT

Pain is generally measured by patient self-report, normally via verbal communication. However, if the patient is a child or has limited ability to communicate (e.g. mute, mentally impaired, or receiving assisted breathing), self-report may not be a viable measurement. In addition, these self-report measures relate only to the maximum pain level experienced during a sequence, so a frame-by-frame measure is currently not obtainable. Using image data from patients with rotator-cuff injuries, in this paper we describe an AAM-based automatic system which can detect pain at the frame-by-frame level. We do this in two ways: directly (straight from the facial features) and indirectly (through the fusion of individual AU detectors). From our results, we show that the latter method achieves the best performance, as the most discriminant features from each AU detector (i.e. shape or appearance) are used.
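The "indirect" route above combines the outputs of individual AU detectors into a single pain decision. The following is a hypothetical illustration only: the AU list, the linear fusion rule, the weights, and the threshold are all invented, and the paper's own fusion method may differ.

```python
# Hypothetical linear fusion of per-frame AU detector scores into a pain score.
import numpy as np

# Invented list of AUs a pain-fusion system might monitor.
au_names = ["AU4", "AU6", "AU7", "AU9", "AU10", "AU43"]

def fuse_au_scores(au_scores, weights):
    """Fuse one row of AU detector outputs per frame into a scalar pain score."""
    return au_scores @ weights

rng = np.random.default_rng(1)
au_scores = rng.uniform(0.0, 1.0, size=(5, len(au_names)))  # 5 frames
weights = np.full(len(au_names), 1.0 / len(au_names))       # equal weights (assumption)

pain_scores = fuse_au_scores(au_scores, weights)
pain_frames = pain_scores > 0.5  # threshold is also an assumption
```

In practice the weights would be learned, e.g. from validation data, rather than fixed to be equal.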

3.
IEEE Workshop Multimed Signal Proc ; 2008: 337-342, 2008 Oct 08.
Article in English | MEDLINE | ID: mdl-20689666

ABSTRACT

A common problem affecting object alignment algorithms is dealing with objects that exhibit unseen intra-class appearance variation. Several variants of gradient-descent algorithms, such as the Lucas-Kanade (or forward-additive) and inverse-compositional algorithms, have been proposed to deal with this issue by solving for both alignment and appearance simultaneously. In [1], Baker and Matthews showed that without appearance variation, the inverse-compositional (IC) algorithm is theoretically and empirically equivalent to the forward-additive (FA) algorithm, while achieving a significant improvement in computational efficiency. With appearance variation, it would be intuitive that a similar benefit of the IC algorithm would be experienced over its FA counterpart. However, to date no such comparison has been performed. In this paper we remedy this situation by performing such a comparison, and we show that the two algorithms are not equivalent once the appearance variation parameters are included. Through a number of experiments on the MultiPIE face database, we show that greater refinement can be gained using the FA algorithm, as it is a truer solution than the IC approach.
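To make the forward-additive update rule concrete, here is a deliberately simplified 1-D sketch: a pure translation warp with no appearance parameters, whereas the paper concerns full 2-D warps with appearance variation. The signal, step count, and tolerance are invented for illustration.

```python
# Minimal 1-D forward-additive Lucas-Kanade: estimate a shift p (in samples)
# such that image(x + p) matches template(x).
import numpy as np

def lk_translation(template, image, p=0.0, iters=50):
    """Gauss-Newton estimation of a 1-D translation, forward-additive style."""
    x = np.arange(len(template), dtype=float)
    for _ in range(iters):
        warped = np.interp(x + p, x, image)        # I(W(x; p))
        grad = np.gradient(warped)                 # steepest-descent direction
        error = template - warped                  # residual image
        dp = grad @ error / (grad @ grad + 1e-12)  # Gauss-Newton step
        p += dp                                    # forward-additive: p <- p + dp
    return p

# Synthetic test: the image is the template shifted right by 3 samples.
x = np.linspace(0.0, 2.0 * np.pi, 200)
template = np.sin(x)
image = np.sin(x - 3.0 * (x[1] - x[0]))

shift = lk_translation(template, image)  # should recover roughly 3.0
```

The inverse-compositional variant would instead compute the gradient and Hessian on the template once and compose the inverse of the incremental warp, which is where its efficiency advantage comes from.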

4.
Article in English | MEDLINE | ID: mdl-25285316

ABSTRACT

Automatically recognizing pain from video is a very useful application, as it has the potential to alert carers to patients who are in discomfort and would otherwise not be able to communicate such emotion (e.g. young children, patients in postoperative care, etc.). In previous work [1], a "pain-no pain" system was developed which used an AAM-SVM approach to good effect. However, as with any task involving a large amount of video data, there are memory constraints that need to be adhered to, and in the previous work this was addressed by compressing the temporal signal using K-means clustering in the training phase. In visual speech recognition, it is well known that the dynamics of the signal play a vital role in recognition. As pain recognition is very similar to visual speech recognition (i.e. recognizing visual facial actions), it is our belief that compressing the temporal signal reduces the likelihood of accurately recognizing pain. In this paper, we show that by compressing the spatial signal instead of the temporal signal, we achieve better pain recognition. Our results show the importance of the temporal signal in recognizing pain; however, we do highlight some problems associated with doing this due to the randomness of a patient's facial actions.
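The contrast above between temporal and spatial compression can be sketched as follows. This is an illustration under invented assumptions (frame counts, feature sizes, cluster counts, and the use of PCA as the spatial reducer are all stand-ins): temporal compression replaces the frame sequence with a few cluster centres and discards the dynamics, while spatial compression keeps every frame but shrinks its feature vector.

```python
# Two ways to shrink a (frames x features) matrix of per-frame AAM features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
frames = rng.normal(size=(300, 80))  # invented: 300 frames, 80 features each

# Temporal compression: 300 frames -> 20 cluster centres.
# The frame ordering (the dynamics) is lost.
temporal = KMeans(n_clusters=20, n_init=10, random_state=0).fit(frames)
compressed_temporal = temporal.cluster_centers_            # shape (20, 80)

# Spatial compression: keep all 300 frames in order, but reduce each
# frame's feature vector to 10 dimensions. The dynamics survive.
compressed_spatial = PCA(n_components=10).fit_transform(frames)  # shape (300, 10)
```

Both routes cut memory by a similar factor, but only the spatial route preserves the frame-to-frame signal that the abstract argues is vital for recognizing pain.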
