Results 1 - 13 of 13
1.
Int J Telerehabil ; 15(1): e6557, 2023.
Article in English | MEDLINE | ID: mdl-38046547

ABSTRACT

Background: Family caregivers with continuous caregiving responsibilities are at increased risk for adverse physical and mental health outcomes. In response to the challenges of caregiving, a mobile health system (iMHere 2.0) was developed to support caregivers. The study's objective was to gather feedback from family caregivers of older adults on the current features of iMHere 2.0 and to formulate design criteria for future iterations of the system. Methods: An exploratory qualitative study with thematic analyses of focus group feedback. Findings: A total of 10 caregivers of older adults participated in a focus group. Five themes emerged: (1) Monitoring health data, (2) Setting up customized reminders, (3) Supporting care coordination, (4) Balancing security and multiple user access, and (5) Disseminating iMHere 2.0 into the community, along with some potential barriers to implementation. Conclusions: Design criteria were developed to provide a framework for iterative design and development of the iMHere system to support caregivers of older adults.

2.
JMIR Hum Factors ; 9(1): e31376, 2022 Mar 04.
Article in English | MEDLINE | ID: mdl-35254274

ABSTRACT

BACKGROUND: Mobile health (mHealth) systems that support self-management can improve medical, functional, and psychosocial outcomes for individuals with disabilities and chronic conditions. The mHealth systems can potentially be expanded to support community integration. OBJECTIVE: The purposes of this study were to (1) partner with a community-based organization that supports community integration of individuals with disabilities; (2) identify software requirements needed to support community participation; and (3) iteratively refine an existing mHealth application to include new requirements. METHODS: Community Living and Support Services (CLASS), a nonprofit organization that serves individuals with disabilities in Pittsburgh, Pennsylvania, was identified as the focus group for this study. Key stakeholders within the Community Partners Program at CLASS proposed design requirements for an existing mHealth application, Interactive Mobile Health and Rehabilitation (iMHere) 2.0, that has been used to support self-management. RESULTS: We gathered qualitative data from a focus group composed of CLASS members to develop and iteratively revise iMHere 2.0 to include new modules and features to support community integration. A caregiver app was also developed. The new system contains features to support finance, transportation, client and caregiver communication, calendar and checklist management, upcoming medical and nonmedical appointments, social engagement, pain management, and access to a personal profile. Modifications were made to the following existing modules: education, mood, personal health record, goals, medications, and nutrition. CONCLUSIONS: A successful partnership with a community-based organization that supports individuals with disabilities resulted in a newly designed mHealth system with features to support community integration.

3.
J Exp Psychol Hum Percept Perform ; 37(3): 874-91, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21463081

ABSTRACT

During conversation, women tend to nod their heads more frequently and more vigorously than men. An individual speaking with a woman tends to nod his or her head more than when speaking with a man. Is this due to social expectation or to coupled motion dynamics between the speakers? We present a novel methodology that allows us to randomly assign apparent identity during free conversation in a video conference, thereby dissociating apparent sex from motion dynamics. The method uses motion-tracked synthesized avatars that are accepted by naive participants as being live video. We find that (1) motion dynamics affect head movements but apparent sex does not; (2) judgments of sex are driven almost entirely by appearance; and (3) ratings of masculinity and femininity rely on a combination of both appearance and dynamics. Together, these findings are consistent with the hypothesis of separate perceptual streams for appearance and biological motion. In addition, our results are consistent with the view that head movements in conversation form a low-level perception and action system that can operate independently of top-down social expectations.


Subjects
Imitative Behavior; Interpersonal Relations; Motion Perception; Nonverbal Communication; Social Perception; Computer Simulation; Female; Humans; Male; Models, Psychological; Movement; Reference Values; Sex Characteristics; Single-Blind Method; User-Computer Interface
4.
J Nonverbal Behav ; 33(1): 17-34, 2009 Mar 01.
Article in English | MEDLINE | ID: mdl-19554208

ABSTRACT

We investigated the correspondence between perceived meanings of smiles and their morphological and dynamic characteristics. Morphological characteristics included co-activation of Orbicularis oculi (AU 6), smile controls, mouth opening, amplitude, and asymmetry of amplitude. Dynamic characteristics included duration, onset and offset velocity, asymmetry of velocity, and head movements. Smile characteristics were measured using the Facial Action Coding System (Ekman, Friesen, & Hager, 2002) and Automated Facial Image Analysis (Cohn & Kanade, 2007). Observers judged 122 smiles as amused, embarrassed, nervous, polite, or other. Fifty-three smiles met criteria for classification as perceived amused, embarrassed/nervous, or polite. In comparison with perceived polite, perceived amused more often included AU 6, open mouth, smile controls, larger amplitude, larger maximum onset and offset velocity, and longer duration. In comparison with perceived embarrassed/nervous, perceived amused more often included AU 6, lower maximum offset velocity, and smaller forward head pitch. In comparison with perceived polite, perceived embarrassed more often included mouth opening and smile controls, larger amplitude, and greater forward head pitch. Occurrence of the AU 6 in perceived embarrassed/nervous and polite smiles questions the assumption that AU 6 with a smile is sufficient to communicate felt enjoyment. By comparing three perceptually distinct types of smiles, we found that perceived smile meanings were related to specific variation in smile morphological and dynamic characteristics.

5.
Image Vis Comput ; 27(12): 1788-1796, 2009 Oct.
Article in English | MEDLINE | ID: mdl-22837587

ABSTRACT

Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?
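The frame-level setting described above, an SVM classifier over AAM-derived shape and appearance parameters, can be sketched roughly as follows. This is a minimal illustration using scikit-learn, not the authors' implementation; the feature array and pain labels are synthetic placeholders standing in for real AAM output and observer-coded ground truth.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for AAM output: one feature vector per video frame
# (e.g., concatenated shape and appearance parameters).
rng = np.random.default_rng(0)
n_frames, n_params = 400, 20
X = rng.normal(size=(n_frames, n_params))
y = rng.integers(0, 2, size=n_frames)  # frame-level labels: 1 = pain, 0 = no pain

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# SVM on standardized AAM-style features, as in the frame-level labeling scheme.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("test accuracy:", (pred == y_test).mean())
```

Sequence-level labeling would instead attach one Likert-style rating to each whole clip and aggregate the per-frame features before classification.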

6.
J Nonverbal Behav ; 32(3): 133-155, 2008 Sep 01.
Article in English | MEDLINE | ID: mdl-19421336

ABSTRACT

To better understand early positive emotional expression, automated software measurements of facial action were supplemented with anatomically based manual coding. These convergent measurements were used to describe the dynamics of infant smiling and predict perceived positive emotional intensity. Over the course of infant smiles, degree of smile strength varied with degree of eye constriction (cheek raising, the Duchenne marker), which varied with degree of mouth opening. In a series of three rating studies, automated measurements of smile strength and mouth opening predicted naïve (undergraduate) observers' continuous ratings of video clips of smile sequences, as well as naïve and experienced (parent) ratings of positive emotion in still images from the sequences. An a priori measure of smile intensity combining anatomically based manual coding of both smile strength and mouth opening predicted positive emotion ratings of the still images. The findings indicate the potential of automated and fine-grained manual measurements of facial actions to describe the course of emotional expressions over time and to predict perceptions of emotional intensity.

7.
Article in English | MEDLINE | ID: mdl-25285316

ABSTRACT

Automatically recognizing pain from video is a very useful application, as it has the potential to alert carers to patients who are in discomfort but would otherwise be unable to communicate it (e.g., young children, patients in postoperative care). In previous work [1], a "pain-no pain" system was developed which used an AAM-SVM approach to good effect. However, as with any task involving a large amount of video data, there are memory constraints that need to be adhered to; in the previous work these were handled by compressing the temporal signal using K-means clustering in the training phase. In visual speech recognition, it is well known that the dynamics of the signal play a vital role in recognition. As pain recognition is very similar to visual speech recognition (i.e., recognizing visual facial actions), it is our belief that compressing the temporal signal reduces the likelihood of accurately recognizing pain. In this paper, we show that by compressing the spatial signal instead of the temporal signal, we achieve better pain recognition. Our results show the importance of the temporal signal in recognizing pain; however, we do highlight some problems associated with this due to the randomness of a patient's facial actions.
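The contrast the abstract draws can be sketched concretely: temporal compression via K-means replaces a sequence of per-frame feature vectors with a small set of cluster centres, discarding frame order, while spatial compression (here illustrated with PCA, an assumption, not necessarily the paper's method) keeps every frame but reduces its dimensionality. All sizes and features below are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# One feature vector per frame of a training video (values are synthetic).
frames = rng.normal(size=(1000, 30))

# Temporal compression: keep only k cluster centres instead of all frames,
# discarding the ordering (and thus the dynamics) of the sequence.
k = 16
km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(frames)
compressed_temporal = km.cluster_centers_        # shape (16, 30)

# Spatial compression: keep every frame but fewer dimensions per frame,
# preserving the temporal ordering the paper argues is important.
compressed_spatial = PCA(n_components=8).fit_transform(frames)  # shape (1000, 8)

print(compressed_temporal.shape, compressed_spatial.shape)
```

Either representation can then be fed to a classifier such as the AAM-SVM system cited above; only the spatial route retains the frame-to-frame dynamics.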

8.
Psychol Sci ; 18(7): 564-8, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17614862

ABSTRACT

Although controversy surrounds the relative authenticity of discontinuous versus continuous memories of childhood sexual abuse (CSA), little is known about whether such memories differ in their likelihood of corroborative evidence. Individuals reporting CSA memories were interviewed, and two independent raters attempted to find corroborative information for the allegations. Continuous CSA memories and discontinuous memories that were unexpectedly recalled outside therapy were more likely to be corroborated than anticipated discontinuous memories recovered in therapy. Evidence that suggestion during therapy possibly mediates these differences comes from the additional finding that individuals who recalled the memories outside therapy were markedly more surprised at the existence of their memories than were individuals who initially recalled the memories in therapy. These results indicate that discontinuous CSA memories spontaneously retrieved outside of therapy may be accurate, while implicating expectations arising from suggestions during therapy in producing false CSA memories.


Subjects
Child Abuse, Sexual/psychology; Memory; Psychoanalytic Therapy/methods; Reality Testing; Repression, Psychology; Time; Adult; Child; Child Abuse, Sexual/statistics & numerical data; Female; Humans; Interview, Psychological/methods; Male; Mental Disorders/psychology; Mental Disorders/therapy; Mental Recall/physiology; Predictive Value of Tests; Psychoanalytic Therapy/statistics & numerical data; Stress, Psychological/psychology; Surveys and Questionnaires
9.
Cleft Palate Craniofac J ; 43(2): 226-36, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16526929

ABSTRACT

OBJECTIVE: To examine and compare social acceptance, social behavior, and facial movements of children with and without oral clefts in an experimental setting. DESIGN: Two groups of children (with and without oral clefts) were videotaped in a structured social interaction with a peer confederate, when listening to emotional stories, and when told to pose specific facial expressions. PARTICIPANTS: Twenty-four children and adolescents ages 7 to 16½ years with oral clefts were group matched for gender, grade, and socioeconomic status with 25 noncleft controls. MAIN OUTCOME MEASURES: Specific social and facial behaviors coded from videotapes; Harter Self-Perception Profile, Social Acceptance subscale. RESULTS: Significant between-group differences were obtained. Children in the cleft group more often displayed "Tongue Out," "Eye Contact," "Mimicry," and "Initiates Conversation." For the cleft group, "Gaze Avoidance" was significantly negatively correlated with social acceptance scores. The groups were comparable in their ability to pose and spontaneously express facial emotion. CONCLUSIONS: When comparing children with and without oral clefts in an experimental setting, with a relatively small sample size, behavior analysis identified some significant differences in patterns of social behavior but not in the ability to express facial emotion. Results suggest that many children with oral clefts may have relatively typical social development. However, for those who do have social competence deficits, systematic behavioral observation of atypical social responses may help individualize social skills interventions.


Subjects
Cleft Lip/psychology; Cleft Palate/psychology; Facial Expression; Social Behavior; Adolescent; Age Factors; Child; Epidemiologic Methods; Female; Humans; Male; Sex Factors; Videotape Recording
10.
J Nonverbal Behav ; 30(1): 37-52, 2006.
Article in English | MEDLINE | ID: mdl-19367343

ABSTRACT

Previous research suggests differences in lip movement between deliberate and spontaneous facial expressions. We investigated within participant differences between deliberately posed and spontaneously occurring smiles during a directed facial action task. Using automated facial image analysis, we quantified lip corner movement during periods of visible Zygomaticus major activity. Onset and offset speed, amplitude of movement, and offset duration were greater in deliberate smiles. In contrast to previous results, however, lip corner movement asymmetry was not greater in deliberate smiles. Observed characteristics of deliberate and spontaneous smiling may be related to differences in the typical context and purpose of the facial signal.

11.
Emotion ; 5(2): 166-74, 2005 Jun.
Article in English | MEDLINE | ID: mdl-15982082

ABSTRACT

Cultural variations in the associations of 12 body sensations with 7 emotions were studied in 2 rural samples from northern Mexico (n = 61) and Java, Indonesia (n = 99), with low exposure to Western influences and in 3 university student samples from Belgium (n = 75), Indonesia (n = 85), and Mexico (n = 123). Both parametric and nonparametric analyses suggest that findings from previous studies with only student samples (K. R. Scherer & H. G. Wallbott, 1994) were generalizable to the 2 rural samples. Some notable cultural deviations from common profiles were also identified. Implications of the findings for explanations of body sensations experienced with emotions and the cross-cultural study of emotions are discussed.


Subjects
Cultural Characteristics; Emotions; Perception; Adult; Belgium; Flushing; Body Temperature; Female; Heart Rate; Humans; Indonesia/ethnology; Male; Mexico/ethnology; Rural Population; Sweating
12.
Psychol Sci ; 16(5): 403-10, 2005 May.
Article in English | MEDLINE | ID: mdl-15869701

ABSTRACT

Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.


Subjects
Affect; Cognition; Face; Facial Expression; Child; Female; Humans; Judgment; Male; Pilot Projects; Recognition, Psychology
13.
Behav Res Methods Instrum Comput ; 35(3): 420-8, 2003 Aug.
Article in English | MEDLINE | ID: mdl-14587550

ABSTRACT

Previous research in automatic facial expression recognition has been limited to recognition of gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). We have developed a system that detects a discrete and important facial action (e.g., eye blinking) in spontaneously occurring facial behavior that has been measured with a nonfrontal pose, moderate out-of-plane head motion, and occlusion. The system recovers three-dimensional motion parameters, stabilizes facial regions, extracts motion and appearance information, and recognizes discrete facial actions in spontaneous facial behavior. We tested the system in video data from a two-person interview. The 10 subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology and represent a substantial challenge to automated action unit recognition. In analysis of blinks, the system achieved 98% accuracy.


Subjects
Blinking; Facial Expression; Software; Adult; Algorithms; Artificial Intelligence; Electronic Data Processing/methods; Humans; Male; Video Recording