Learning atomic human actions using variable-length Markov models.
IEEE Trans Syst Man Cybern B Cybern; 39(1): 268-80, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19068434
Visual analysis of human behavior has generated considerable interest in the field of computer vision because of its wide spectrum of potential applications. Human behavior can be segmented into atomic actions, each of which indicates a basic and complete movement. Learning and recognizing atomic human actions are essential to human behavior analysis. In this paper, we propose a framework for handling this task using variable-length Markov models (VLMMs). The framework is comprised of the following two modules: a posture labeling module and a VLMM atomic action learning and recognition module. First, a posture template selection algorithm, based on a modified shape context matching technique, is developed. The selected posture templates form a codebook that is used to convert input posture sequences into discrete symbol sequences for subsequent processing. Then, the VLMM technique is applied to learn the training symbol sequences of atomic actions. Finally, the constructed VLMMs are transformed into hidden Markov models (HMMs) for recognizing input atomic actions. This approach combines the advantages of the excellent learning function of a VLMM and the fault-tolerant recognition ability of an HMM. Experiments on realistic data demonstrate the efficacy of the proposed system.
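The pipeline described above converts posture sequences into discrete symbols and learns variable-length contexts over them. As a rough illustration of the VLMM idea only (not the authors' implementation, which additionally converts the learned VLMMs into HMMs for recognition), the sketch below trains a small prediction-suffix-tree-style model per action and scores a test symbol sequence by log-likelihood; the class names, depth, and toy symbol alphabet are all hypothetical choices for this example.

```python
import math
from collections import defaultdict


class VLMM:
    """Toy variable-length Markov model over a discrete posture-symbol
    alphabet. Contexts up to `max_depth` symbols are counted; prediction
    backs off to the longest context actually observed in training."""

    def __init__(self, max_depth=2, alphabet=""):
        self.max_depth = max_depth
        # context (tuple of symbols) -> next symbol -> count
        self.counts = defaultdict(lambda: defaultdict(int))
        self.alphabet = set(alphabet)

    def train(self, sequence):
        self.alphabet.update(sequence)
        for i, sym in enumerate(sequence):
            # Count sym after every context of length 0..max_depth ending at i.
            for d in range(self.max_depth + 1):
                if i - d < 0:
                    break
                ctx = tuple(sequence[i - d:i])
                self.counts[ctx][sym] += 1

    def _prob(self, ctx, sym):
        # Back off to the longest observed suffix of ctx,
        # with add-one smoothing over the alphabet.
        for start in range(len(ctx) + 1):
            sub = ctx[start:]
            if sub in self.counts:
                nxt = self.counts[sub]
                total = sum(nxt.values())
                return (nxt[sym] + 1) / (total + len(self.alphabet))
        return 1.0 / len(self.alphabet)

    def log_likelihood(self, sequence):
        ll = 0.0
        for i, sym in enumerate(sequence):
            ctx = tuple(sequence[max(0, i - self.max_depth):i])
            ll += math.log(self._prob(ctx, sym))
        return ll


# Hypothetical usage: one model per atomic action, shared symbol alphabet,
# classify by the higher log-likelihood.
walk = VLMM(max_depth=2, alphabet="abcd")
walk.train(list("abababababab"))
wave = VLMM(max_depth=2, alphabet="abcd")
wave.train(list("cdcdcdcdcdcd"))
test = list("ababab")
label = "walk" if walk.log_likelihood(test) > wave.log_likelihood(test) else "wave"
```

In the paper's actual system the symbols come from a posture codebook built with shape-context matching, and the trained VLMMs are transformed into HMMs so that recognition tolerates noisy observations; the direct likelihood scoring above is only a stand-in for that last step.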
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Posture / Behavior / Pattern Recognition, Automated / Artificial Intelligence / Movement
Study type: Health_economic_evaluation / Prognostic_studies
Limits: Humans
Language: En
Journal: IEEE Trans Syst Man Cybern B Cybern
Journal subject: BIOMEDICAL ENGINEERING
Year: 2009
Document type: Article
Affiliation country: Taiwan