1.
J Clin Med; 12(4), 2023 Feb 18.
Article in English | MEDLINE | ID: mdl-36836177

ABSTRACT

BACKGROUND: Merely the sight of needles can cause extreme emotional and physical (vasovagal) reactions (VVRs). However, needle fear and VVRs are neither easy to measure nor to prevent, as they are automatic and difficult to self-report. This study aims to investigate whether a blood donor's unconscious facial microexpressions in the waiting room, prior to actual blood donation, can be used to predict who will experience a VVR later, during the donation. METHODS: The presence and intensity of 17 facial action units were extracted from video recordings of 227 blood donors and were used to classify low and high VVR levels using machine-learning algorithms. We included three groups of blood donors as follows: (1) a control group, who had never experienced a VVR in the past (n = 81); (2) a 'sensitive' group, who experienced a VVR at their last donation (n = 51); and (3) new donors, who are at increased risk of experiencing a VVR (n = 95). RESULTS: The model performed very well, with an F1 score (the weighted average of precision and recall) of 0.82. The most predictive feature was the intensity of facial action units in the eye regions. CONCLUSIONS: To our knowledge, this study is the first to demonstrate that it is possible to predict who will experience a vasovagal response during blood donation through facial microexpression analyses prior to donation.
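
A minimal sketch of the kind of pipeline this abstract describes: classifying low vs. high VVR levels from the 17 action-unit (AU) features and reporting a weighted F1 score. The data layout, the choice of classifier, and all values below are illustrative assumptions, not the authors' published code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-in data: 227 donors x 17 AU intensity features (0-5 scale) and a binary VVR label.
X = rng.uniform(0, 5, size=(227, 17))
y = rng.integers(0, 2, size=227)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

# Weighted F1: per-class F1 averaged by class support, matching the metric reported as 0.82.
print("weighted F1:", f1_score(y, y_pred, average="weighted"))
```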

2.
IEEE Winter Conf Appl Comput Vis; 2021: 1247-1256, 2021 Jan.
Article in English | MEDLINE | ID: mdl-38250021

ABSTRACT

Critical obstacles in training classifiers to detect facial actions are the limited sizes of annotated video databases and the relatively low frequencies of occurrence of many actions. To address these problems, we propose an approach that makes use of facial expression generation. Our approach reconstructs the 3D shape of the face from each video frame, aligns the 3D mesh to a canonical view, and then trains a GAN-based network to synthesize novel images with facial action units of interest. To evaluate this approach, two deep neural networks with the same architecture were trained: one on video of synthesized facial expressions generated from FERA17, the other on unaltered video from the same database. Both networks used the same train and validation partitions and were tested on the test partition of actual video from FERA17. The network trained on synthesized facial expressions outperformed the one trained on actual facial expressions and surpassed current state-of-the-art approaches.
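
A sketch of the evaluation protocol described above: train one model on synthesized data and one on unaltered data, then score both on the same held-out partition of real video. The feature representation and classifier below are placeholders; the paper uses a deep network on images, not tabular features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n_feat = 128

# Stand-in feature matrices for synthesized and unaltered training frames,
# plus a shared test partition of actual video frames.
X_synth, y_synth = rng.normal(size=(2000, n_feat)), rng.integers(0, 2, 2000)
X_real, y_real = rng.normal(size=(2000, n_feat)), rng.integers(0, 2, 2000)
X_test, y_test = rng.normal(size=(500, n_feat)), rng.integers(0, 2, 500)

for name, (X_tr, y_tr) in {"synthesized": (X_synth, y_synth),
                           "actual": (X_real, y_real)}.items():
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, "F1 on real test partition:",
          f1_score(y_test, clf.predict(X_test)))
```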

3.
Nat Med; 27(12): 2154-2164, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34887577

ABSTRACT

Detection of neural signatures related to pathological behavioral states could enable adaptive deep brain stimulation (DBS), a potential strategy for improving efficacy of DBS for neurological and psychiatric disorders. This approach requires identifying neural biomarkers of relevant behavioral states, a task best performed in ecologically valid environments. Here, in human participants with obsessive-compulsive disorder (OCD) implanted with recording-capable DBS devices, we synchronized chronic ventral striatum local field potentials with relevant, disease-specific behaviors. We captured over 1,000 h of local field potentials in the clinic and at home during unstructured activity, as well as during DBS and exposure therapy. The wide range of symptom severity over which the data were captured allowed us to identify candidate neural biomarkers of OCD symptom intensity. This work demonstrates the feasibility and utility of capturing chronic intracranial electrophysiology during daily symptom fluctuations to enable neural biomarker identification, a prerequisite for future development of adaptive DBS for OCD and other psychiatric disorders.
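
An illustrative sketch only: one common way to derive candidate neural biomarkers from local field potentials (LFPs) is to compute spectral band power and relate it to a symptom-severity score. The frequency bands, window length, sampling rate, and regression model below are assumptions; the paper's actual biomarker pipeline is more involved.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

fs = 250  # assumed LFP sampling rate in Hz
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 55)}

def band_powers(lfp, fs, bands):
    """Average spectral power within each frequency band (Welch PSD)."""
    freqs, psd = welch(lfp, fs=fs, nperseg=fs * 2)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands.values()])

rng = np.random.default_rng(2)
# Stand-in: 100 ten-second LFP segments and a symptom-intensity score per segment.
segments = rng.normal(size=(100, fs * 10))
severity = rng.uniform(0, 10, size=100)

X = np.vstack([band_powers(seg, fs, bands) for seg in segments])
print("CV R^2:", cross_val_score(Ridge(), X, severity, cv=5).mean())
```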


Subjects
Electrophysiology/methods, Obsessive-Compulsive Disorder/physiopathology, Adult, Biomarkers/metabolism, Electrodes, Feasibility Studies, Female, Humans, Male, Ventral Striatum/physiology
4.
IEEE Trans Biom Behav Identity Sci; 2(2): 158-171, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32377637

ABSTRACT

Facial action unit (AU) detectors have performed well when trained and tested within the same domain. How well do AU detectors transfer to domains in which they have not been trained? We review the literature on cross-domain transfer and conduct experiments to address limitations of prior research. We evaluate generalizability in four publicly available databases: EB+ (an expanded version of BP4D+), Sayette GFT, DISFA, and UNBC Shoulder Pain (SP). The databases differ in observational scenarios, context, participant diversity, range of head pose, video resolution, and AU base rates. In most cases, performance decreased with change in domain, often to below the threshold needed for behavioral research. However, exceptions were noted. Deep and shallow approaches generally performed similarly, and average results were slightly better for the deep model than for the shallow one. Occlusion sensitivity maps revealed that local specificity was greater for AU detection within domains than across domains. The findings suggest that more varied domains and deep learning approaches may be better suited to generalizability, and they underscore the need for more attention to characteristics that vary between domains. Until further improvement is realized, caution is warranted when applying AU classifiers from one domain to another.
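
A sketch of an occlusion sensitivity map, the analysis used above to probe how locally specific an AU detector is: slide an occluding patch over the image and record how much the detector's output drops. The predictor here is a trivial stand-in; in practice you would pass in a trained image-to-AU-probability function.

```python
import numpy as np

def occlusion_sensitivity(predict_fn, image, patch=16, stride=8, fill=0.5):
    """Return a map of prediction drop when each region is occluded."""
    h, w = image.shape[:2]
    base = predict_fn(image)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - predict_fn(occluded)  # large value = region matters
    return heat

# Toy predictor standing in for a trained AU detector: the "AU probability" is
# simply the mean intensity of an eye-region crop, so the map should peak there.
def toy_predictor(img):
    return img[20:40, 30:70].mean()

rng = np.random.default_rng(3)
face = rng.uniform(size=(96, 96))
print(occlusion_sensitivity(toy_predictor, face).shape)
```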

5.
Article in English | MEDLINE | ID: mdl-33937916

ABSTRACT

Continuous deep brain stimulation (DBS) of the ventral striatum (VS) is an effective treatment for severe, treatment-refractory obsessive-compulsive disorder (OCD). Optimal parameter settings are signaled by a mirth response of intense positive affect, which is identified subjectively by clinicians. Subjective judgments are idiosyncratic and difficult to standardize. To measure mirth responses objectively, we used Automatic Facial Affect Recognition (AFAR) in a series of longitudinal assessments of a patient treated with DBS. Pre- and post-adjustment DBS conditions were compared using both statistical and machine-learning approaches. Positive affect was significantly higher after DBS adjustment. Using SVM and XGBoost, the participant's pre- and post-adjustment appearances were differentiated with an F1 score of 0.76, which suggests the feasibility of objectively measuring the mirth response.
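
A rough sketch of the classification step described above: distinguishing pre- from post-adjustment segments using frame-level AU features. GradientBoostingClassifier stands in for XGBoost so the sketch needs only scikit-learn; the feature layout and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 17))    # stand-in AU intensities per frame
y = rng.integers(0, 2, 600)       # 0 = pre-adjustment, 1 = post-adjustment

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("boosted trees (XGBoost stand-in)", GradientBoostingClassifier())]:
    y_pred = cross_val_predict(clf, X, y, cv=5)
    print(name, "F1:", f1_score(y, y_pred))
```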

6.
BMVC; 2019, 2019 Sep.
Article in English | MEDLINE | ID: mdl-32510058

ABSTRACT

The performance of automated facial expression coding has improved steadily, as evidenced by results of the latest Facial Expression Recognition and Analysis (FERA 2017) Challenge. Advances in deep learning techniques have been key to this success. Yet the contribution of critical design choices remains largely unknown. Using the FERA 2017 database, we systematically evaluated design choices in pre-training, feature alignment, model size selection, and optimizer details. Our findings range from the counter-intuitive (e.g., generic pre-training outperformed face-specific models) to best practices in tuning optimizers. Informed by these findings, we developed an architecture that exceeded the state of the art on FERA 2017, achieving a 3.5% increase in F1 score for occurrence detection and a 5.8% increase in ICC for intensity estimation.
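
The two metrics reported above are F1 for AU occurrence detection and ICC for AU intensity estimation. A minimal sketch, assuming stand-in predictions: scikit-learn supplies F1, and a single-measure, consistency ICC(3,1) is computed by hand below.

```python
import numpy as np
from sklearn.metrics import f1_score

def icc_3_1(a, b):
    """Two-way mixed, single-measure, consistency ICC between two ratings."""
    Y = np.column_stack([a, b]).astype(float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(5)
true_occ, pred_occ = rng.integers(0, 2, 500), rng.integers(0, 2, 500)
true_int = rng.integers(0, 6, 500)                        # 0-5 intensity codes
pred_int = np.clip(true_int + rng.normal(0, 1, 500), 0, 5)

print("occurrence F1:", f1_score(true_occ, pred_occ))
print("intensity ICC(3,1):", icc_3_1(true_int, pred_int))
```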

7.
Front Comput Sci; 1, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31930192

ABSTRACT

Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation, yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. Third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously, as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. The D-PAttNet approach significantly improves upon the existing state of the art.
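
A loose sketch of the patch-attention idea only: encode each facial patch, learn a weight per patch, and pool the weighted patch features before a per-AU classifier. This illustrates the concept, not the D-PAttNet architecture itself (which additionally handles 3D registration and temporal dynamics); all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PatchAttentionAU(nn.Module):
    def __init__(self, n_patches=9, patch_dim=256, n_aus=10):
        super().__init__()
        self.attn = nn.Linear(patch_dim, 1)       # one attention score per patch
        self.classifier = nn.Linear(patch_dim, n_aus)

    def forward(self, patch_feats):               # (batch, n_patches, patch_dim)
        weights = torch.softmax(self.attn(patch_feats), dim=1)
        pooled = (weights * patch_feats).sum(dim=1)
        return self.classifier(pooled)            # per-AU logits

model = PatchAttentionAU()
x = torch.randn(4, 9, 256)                        # stand-in patch embeddings
print(model(x).shape)                             # torch.Size([4, 10])
```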

8.
Article in English | MEDLINE | ID: mdl-31749665

ABSTRACT

Facial action unit (AU) detectors have performed well when trained and tested within the same domain. Do AU detectors transfer to new domains in which they have not been trained? To answer this question, we review literature on cross-domain transfer and conduct experiments to address limitations of prior research. We evaluate both deep and shallow approaches to AU detection (CNN and SVM, respectively) in two large, well-annotated, publicly available databases, Expanded BP4D+ and GFT. The databases differ in observational scenarios, participant characteristics, range of head pose, video resolution, and AU base rates. For both approaches and databases, performance decreased with change in domain, often to below the threshold needed for behavioral research. Decreases were not uniform, however. They were more pronounced for GFT than for Expanded BP4D+ and for shallow relative to deep learning. These findings suggest that more varied domains and deep learning approaches may be better suited for promoting generalizability. Until further improvement is realized, caution is warranted when applying AU classifiers from one domain to another.
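A sketch of the within- vs. cross-domain comparison described above: train a detector on one database and score it on held-out partitions of both the same database and the other one. Features and the linear SVM are placeholders for the paper's CNN and SVM pipelines; the data below are stand-ins.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def fake_domain(seed):
    r = np.random.default_rng(seed)
    return r.normal(size=(1500, 64)), r.integers(0, 2, 1500)

# Stand-in feature/label sets for the two databases.
domains = {"Expanded BP4D+": fake_domain(10), "GFT": fake_domain(11)}
splits = {name: train_test_split(X, y, test_size=0.3, random_state=0)
          for name, (X, y) in domains.items()}

for train_name, (X_tr, _, y_tr, _) in splits.items():
    clf = LinearSVC(max_iter=5000).fit(X_tr, y_tr)
    for test_name, (_, X_te, _, y_te) in splits.items():
        tag = "within" if train_name == test_name else "cross"
        print(f"{train_name} -> {test_name} ({tag}): "
              f"F1 = {f1_score(y_te, clf.predict(X_te)):.2f}")
```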

9.
Article in English | MEDLINE | ID: mdl-30944768

ABSTRACT

Most automated facial expression analysis methods treat the face as a 2D object, flat like a sheet of paper. That works well provided images are frontal or nearly so. In real-world conditions, moderate to large head rotation is common, and system performance in recognizing expressions degrades. Multi-view Convolutional Neural Networks (CNNs) have been proposed to increase robustness to pose, but they require larger models and may generalize poorly across views that are not included in the training set. We propose the FACSCaps architecture to handle multi-view and multi-label facial action unit (AU) detection within a single model that can generalize to novel views. Additionally, FACSCaps' ability to synthesize faces enables insights into what is learned by the model. FACSCaps models video frames using matrix capsules, in which hierarchical pose relationships between face parts are built into internal representations. The model is trained by jointly optimizing a multi-label loss and the reconstruction accuracy. FACSCaps was evaluated using the FERA 2017 facial expression dataset, which includes spontaneous facial expressions in a wide range of head orientations. FACSCaps outperformed both state-of-the-art CNNs and their temporal extensions.
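
A minimal illustration of the joint objective mentioned above: a multi-label AU loss plus an image reconstruction loss, optimized together. The encoder/decoder below are trivial stand-ins, not the matrix-capsule network itself, and the loss weighting is an assumption.

```python
import torch
import torch.nn as nn

class TinyAutoencoderAU(nn.Module):
    def __init__(self, img_dim=64 * 64, latent=128, n_aus=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, img_dim)
        self.au_head = nn.Linear(latent, n_aus)

    def forward(self, x):
        z = self.encoder(x)
        return self.au_head(z), self.decoder(z)

model = TinyAutoencoderAU()
bce = nn.BCEWithLogitsLoss()   # multi-label AU loss
mse = nn.MSELoss()             # reconstruction loss
lam = 0.1                      # assumed weighting between the two terms

frames = torch.rand(8, 64 * 64)                    # stand-in flattened frames
au_labels = torch.randint(0, 2, (8, 10)).float()   # stand-in multi-label AU targets

logits, recon = model(frames)
loss = bce(logits, au_labels) + lam * mse(recon, frames)
loss.backward()
print(float(loss))
```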

10.
Article in English | MEDLINE | ID: mdl-30511050

ABSTRACT

Automated measurement of affective behavior in psychopathology has been limited primarily to screening and diagnosis. While useful, clinicians are more often concerned with whether patients are improving in response to treatment: Are symptoms abating, is affect becoming more positive, are unanticipated side effects emerging? When treatment includes neural implants, the need for objective, repeatable biometrics tied to neurophysiology becomes especially pressing. We used automated face analysis to assess treatment response to deep brain stimulation (DBS) in two patients with intractable obsessive-compulsive disorder (OCD). One was assessed intraoperatively following implantation and activation of the DBS device. The other was assessed three months post-implantation. Both were assessed during DBS on and off conditions. Positive and negative valence were quantified using a CNN trained on normative data from 160 non-OCD participants. Thus, a secondary goal was domain transfer of the classifiers. In both contexts, DBS-on resulted in marked positive affect. In response to DBS-off, affect flattened in both contexts and alternated with increased negative affect in the outpatient setting. Mean AUC for domain transfer was 0.87. These findings suggest that parametric variation of DBS is strongly related to affective behavior and may introduce vulnerability to negative affect in the event that DBS is discontinued.
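
A sketch of the domain-transfer evaluation mentioned above: a valence classifier trained on one population (the normative, non-OCD set) is scored by AUC on data from a new context. All data and the classifier below are stand-ins; the paper uses a CNN on video frames.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# Source domain: stand-in features and positive/negative valence labels from
# the normative (non-OCD) training participants.
X_src, y_src = rng.normal(size=(1000, 32)), rng.integers(0, 2, 1000)
# Target domain: stand-in clinical recordings on which the classifier is deployed.
X_tgt, y_tgt = rng.normal(size=(300, 32)), rng.integers(0, 2, 300)

clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)
scores = clf.predict_proba(X_tgt)[:, 1]
print("domain-transfer AUC:", roc_auc_score(y_tgt, scores))
```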
