Results 1 - 7 of 7
1.
Infancy ; 28(5): 910-929, 2023.
Article in English | MEDLINE | ID: mdl-37466002

ABSTRACT

Although still-face effects are well studied, little is known about the degree to which the Face-to-Face/Still-Face (FFSF) procedure is associated with the production of intense affective displays. Duchenne smiling expresses more intense positive affect than non-Duchenne smiling, while Duchenne cry-faces express more intense negative affect than non-Duchenne cry-faces. Forty 4-month-old infants and their mothers completed the FFSF, and key affect-indexing facial Action Units (AUs) were coded by expert Facial Action Coding System coders for the first 30 s of each FFSF episode. Computer vision software, automated facial affect recognition (AFAR), identified AUs for the entire 2-min episodes. Expert coding and AFAR produced similar infant and mother Duchenne and non-Duchenne FFSF effects, highlighting the convergent validity of automated measurement. Substantive AFAR analyses indicated that both infant Duchenne and non-Duchenne smiling declined from the FF to the SF episode, but only Duchenne smiling increased from the SF to the RE episode. Similarly, the magnitude of mother Duchenne smiling changes over the FFSF was 2-4 times greater than that of non-Duchenne smiling changes. Duchenne expressions appear to be a sensitive index of intense infant and mother affective valence that is accessible to automated measurement and may be a target for future FFSF research.


Subjects
Facial Expression; Mothers; Female; Humans; Infant; Mothers/psychology; Smiling/psychology; Software
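The Duchenne/non-Duchenne distinction driving this study can be sketched in code. By standard FACS convention, a Duchenne smile combines AU12 (lip corner puller) with AU6 (cheek raiser), and a Duchenne cry-face adds AU6 to AU20 (lip stretcher); the abstract does not list the exact AUs the authors coded, so the rules below are a minimal illustrative sketch, not the paper's coding scheme:

```python
def classify_expression(aus):
    """Classify a video frame's expression from its set of active FACS
    Action Units (given as AU numbers).

    Conventional FACS definitions assumed here:
      AU6  = cheek raiser (orbicularis oculi)
      AU12 = lip corner puller (smile)
      AU20 = lip stretcher (cry-face component)
    Duchenne variants add AU6 to the base expression.
    """
    if 12 in aus:
        return "Duchenne smile" if 6 in aus else "non-Duchenne smile"
    if 20 in aus:
        return "Duchenne cry-face" if 6 in aus else "non-Duchenne cry-face"
    return "neutral/other"
```

Per-episode intensity summaries then reduce to counting frames in each category.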
2.
Behav Res Methods ; 55(3): 1024-1035, 2023 04.
Article in English | MEDLINE | ID: mdl-35538295

ABSTRACT

Automated detection of facial action units in infants is challenging. Infant faces have different proportions, less texture, fewer wrinkles and furrows, and unique facial actions relative to adults. For these and related reasons, action unit (AU) detectors that are trained on adult faces may generalize poorly to infant faces. To train and test AU detectors for infant faces, we trained convolutional neural networks (CNNs) on adult video databases and fine-tuned these networks on two large, manually annotated infant video databases that differ in context, head pose, illumination, video resolution, and infant age. The AUs were those central to the expression of positive and negative emotion. AU detectors trained on infants greatly outperformed ones trained previously on adults. Training AU detectors across infant databases afforded greater robustness to between-database differences than training database-specific AU detectors, and outperformed the previous state of the art in infant AU detection. The resulting AU detection system, which we refer to as Infant AFAR (Automated Facial Action Recognition), is available to the research community for further testing and applications in infant emotion, social interaction, and related topics.


Subjects
Facial Expression; Facial Recognition; Humans; Infant; Neural Networks, Computer; Emotions; Social Interaction; Databases, Factual
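The transfer-learning recipe described above (pre-train on adult faces, then fine-tune on annotated infant video) can be sketched with PyTorch. The tiny architecture, input size, and AU count below are placeholders standing in for the paper's actual networks, which the abstract does not specify:

```python
import torch
import torch.nn as nn

class AUDetector(nn.Module):
    """Hypothetical stand-in for an AU detector pre-trained on adult faces."""
    def __init__(self, n_aus=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_aus)  # one logit per AU

    def forward(self, x):
        return self.head(self.backbone(x))

model = AUDetector()  # imagine weights loaded from adult-face training

# Fine-tuning: freeze the adult-trained backbone and retrain the AU head
# (in practice the last backbone layers might be unfrozen as well).
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()  # multi-label: several AUs can co-occur

frames = torch.randn(8, 3, 64, 64)            # batch of infant face crops
labels = torch.randint(0, 2, (8, 9)).float()  # manual AU annotations
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```

Fine-tuning across both infant databases, rather than within one, is what the authors report gives robustness to between-database differences.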
3.
Sci Rep ; 13(1): 9667, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37316637

ABSTRACT

Around one-third of adults are scared of needles, which can result in adverse emotional and physical responses such as dizziness and fainting (vasovagal reactions; VVR) and, consequently, avoidance of healthcare, treatments, and immunizations. Unfortunately, most people are not aware of vasovagal reactions until they escalate, at which point it is too late to intervene. This study investigates whether facial temperature profiles measured in the waiting room, prior to a blood donation, can be used to classify who will and will not experience VVR during the donation. Average temperature profiles from six facial regions were extracted from pre-donation recordings of 193 blood donors, and machine learning was used to classify whether a donor would experience low or high levels of VVR during the donation. An XGBoost classifier distinguished donors who went on to experience an adverse vasovagal reaction during the donation from those who did not, based on this early facial temperature data, with a sensitivity of 0.87, a specificity of 0.84, an F1 score of 0.86, and a PR-AUC of 0.93. Temperature fluctuations in the areas under the nose, on the chin, and on the forehead had the highest predictive value. This study is the first to demonstrate that vasovagal responses during a blood donation can be classified using pre-donation temperature profiles.


Subjects
Needles; Syncope, Vasovagal; Adult; Humans; Needles/adverse effects; Temperature; Syncope, Vasovagal/etiology; Syncope; Vertigo
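The reported figures (sensitivity 0.87, specificity 0.84, F1 0.86) follow directly from the classifier's confusion matrix on the held-out donors. A minimal sketch of that computation, with 1 denoting the high-VVR class:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1 from confusion-matrix counts
    (labels are 0/1, with 1 = high-VVR / adverse-reaction class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)          # recall on the high-VVR class
    specificity = tn / (tn + fp)          # recall on the low-VVR class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1
```

PR-AUC additionally requires the classifier's continuous scores rather than hard labels, so it is not shown here.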
4.
Brain Imaging Behav ; 14(2): 460-476, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30671775

ABSTRACT

Brain connectivity networks have been shown to reflect gender differences under a number of cognitive tasks. Recently, it has been conjectured that fMRI signals decomposed into different resolutions embed different types of cognitive information. In this paper, we combine multiresolution analysis and connectivity networks to study gender differences under a variety of cognitive tasks, and propose a machine learning framework to discriminate individuals according to their gender. For this purpose, we estimate a set of brain networks, formed at different resolutions while the subjects perform different cognitive tasks. First, we decompose fMRI signals recorded under a sequence of cognitive stimuli into their frequency subbands using the Discrete Wavelet Transform (DWT). Next, we represent the fMRI signals by mesh networks formed among the anatomic regions for each task experiment at each subband. The mesh networks are constructed by ensembling a set of local meshes, each of which represents the relationship of an anatomical region as a weighted linear combination of its neighbors. Then, we estimate the edge weights of each mesh by ridge regression. The proposed approach yields 2CL functional mesh networks for each subject, where C is the number of cognitive tasks and L is the number of subband signals obtained after wavelet decomposition. This approach enables one to classify gender under different cognitive tasks and different frequency subbands. The final step of the suggested framework is to fuse the complementary information of the mesh networks for each subject to discriminate gender. We fuse the information embedded in mesh networks formed for different tasks and resolutions under a three-level fuzzy stacked generalization (FSG) architecture, in which different layers are responsible for fusing the diverse information obtained from different cognitive tasks and resolutions. In the experimental analyses, we use the Human Connectome Project task fMRI dataset. Results show that fusing the mesh network representations computed at multiple resolutions for multiple tasks yields better gender classification accuracy than single-subband task mesh networks or fusion of representations obtained using only multitask or only multiresolution data. Moreover, mesh edge weights slightly outperform pairwise correlations between regions, and significantly outperform raw fMRI signals. In addition, we analyze the gender-discriminative power of mesh edge weights for different tasks and resolutions.


Subjects
Connectome/methods; Brain; Connectome/psychology; Databases, Factual; Female; Humans; Machine Learning; Magnetic Resonance Imaging/methods; Male; Multivariate Analysis; Sex Characteristics; Wavelet Analysis
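Two building blocks of this pipeline, the wavelet decomposition into subbands and the ridge-regression estimate of mesh edge weights, can be sketched with NumPy. A one-level Haar transform stands in for the paper's multi-level DWT, and the weights use the standard ridge closed form w = (X^T X + lam*I)^(-1) X^T y implied by "ridge regression" in the abstract:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail subbands.
    (The paper uses a multi-level decomposition; one level shown.)"""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def mesh_edge_weights(region, neighbors, lam=1.0):
    """Local mesh: express one region's subband signal as a weighted
    linear combination of its neighbors' signals, with ridge penalty:
        w = (X^T X + lam*I)^(-1) X^T y
    `neighbors` is a list of neighbor time series; returns one weight each."""
    X = np.column_stack(neighbors)   # T x p design matrix
    y = np.asarray(region, dtype=float)
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Repeating this for every region, task, and subband yields the 2CL mesh networks per subject described above.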
5.
Brain Imaging Behav ; 13(4): 893-904, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29948907

ABSTRACT

In this work, we propose a novel framework to encode the local connectivity patterns of the brain using Fisher vectors (FV), vectors of locally aggregated descriptors (VLAD), and bag-of-words (BoW) methods. We first obtain local descriptors, called mesh arc descriptors (MADs), from fMRI data by forming local meshes around anatomical regions and estimating their relationships within a neighborhood. Then, we extract a dictionary of relationships, called a brain connectivity dictionary, by fitting a generative Gaussian mixture model (GMM) to a set of MADs and selecting codewords at the mean of each component of the mixture. Codewords represent connectivity patterns among anatomical regions. We also encode MADs by the VLAD and BoW methods using k-means clustering. We classify cognitive tasks using the Human Connectome Project (HCP) task fMRI dataset and cognitive states using the Emotional Memory Retrieval (EMR) dataset. We train support vector machines (SVMs) using the encoded MADs. Results demonstrate that FV encoding of MADs can be successfully employed for classification of cognitive tasks and outperforms the VLAD and BoW representations. Moreover, we identify the significant Gaussians in the mixture models by computing the energy of their corresponding FV parts, and analyze their effect on classification accuracy. Finally, we suggest a new method to visualize the codewords of the learned brain connectivity dictionary.


Subjects
Connectome/methods; Pattern Recognition, Automated/methods; Brain/diagnostic imaging; Brain/physiology; Cluster Analysis; Cognition; Humans; Magnetic Resonance Imaging/methods; Models, Theoretical; Nerve Net; Normal Distribution; Support Vector Machine
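Of the three encodings compared, VLAD is the easiest to sketch: each local descriptor (here, a MAD) is assigned to its nearest codeword, and the residuals are accumulated per codeword. A minimal NumPy version, with hard nearest-neighbor assignment standing in for the soft GMM posteriors that FV would use:

```python
import numpy as np

def vlad_encode(descriptors, codewords):
    """VLAD encoding: assign each descriptor to its nearest codeword,
    accumulate residuals per codeword, flatten, and L2-normalize."""
    D = np.asarray(descriptors, dtype=float)   # n x d local descriptors
    C = np.asarray(codewords, dtype=float)     # k x d dictionary
    # Nearest codeword per descriptor (squared Euclidean distance).
    assign = np.argmin(((D[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    V = np.zeros_like(C)
    for i, k in enumerate(assign):
        V[k] += D[i] - C[k]                    # accumulate residuals
    v = V.ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

BoW would instead count assignments per codeword (a k-dimensional histogram), discarding the residual information that VLAD and FV retain, which is consistent with its weaker results here.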
6.
Brain Imaging Behav ; 12(4): 1067-1083, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28980144

ABSTRACT

The human brain is thought to process information in multiple frequency bands. Therefore, we can extract diverse information from functional Magnetic Resonance Imaging (fMRI) data by processing it at multiple resolutions. We propose a framework, called Hierarchical Multi-resolution Mesh Networks (HMMNs), which establishes a set of brain networks at multiple resolutions of the fMRI signal to represent the underlying cognitive process. Our framework first decomposes the fMRI signal into various frequency subbands using the wavelet transform. Then, a brain network is formed at each subband by ensembling a set of local meshes. Arc weights of each local mesh are estimated by ridge regression. Finally, the adjacency matrices of the mesh networks obtained at different subbands are used to train classifiers in an ensemble learning architecture, called fuzzy stacked generalization (FSG). Our decoding performance on the Human Connectome Project task-fMRI dataset shows that HMMNs can successfully discriminate tasks with 99% accuracy across 808 subjects. The diversity of information embedded in the mesh networks of multiple subbands enables the ensemble of classifiers to collaborate with each other for brain decoding. The suggested HMMNs decode the cognitive tasks better than a single classifier applied to any subband. Also, mesh networks have better representational power than pairwise correlations or average voxel time series. Moreover, fusion of diverse information using FSG outperforms fusion by majority voting. We conclude that fMRI data recorded during a cognitive task provide diverse information across multi-resolution mesh networks. Our framework fuses this complementary information and boosts the brain decoding performance obtained at individual subbands.


Subjects
Brain Mapping/methods; Brain/diagnostic imaging; Brain/physiology; Cognition/physiology; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Fuzzy Logic; Humans; Machine Learning; Neural Pathways/diagnostic imaging; Neural Pathways/physiology; Wavelet Analysis
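The fusion comparison above (FSG versus majority voting) rests on two ways of combining the per-subband classifiers. A minimal sketch of both baselines; note the paper's FSG instead trains second-layer fuzzy classifiers on the concatenated class posteriors (stacking), which can learn to weight subbands unevenly rather than averaging them:

```python
import numpy as np

def majority_vote(hard_labels):
    """Fusion baseline: majority vote over hard labels,
    one row per subband classifier, one column per sample."""
    H = np.asarray(hard_labels)
    return np.array([np.bincount(col).argmax() for col in H.T])

def soft_fusion(posteriors):
    """Simple soft fusion: average the class posteriors produced by the
    subband classifiers and take the argmax per sample.  Input shape:
    n_classifiers x n_samples x n_classes."""
    P = np.asarray(posteriors, dtype=float)
    return P.mean(axis=0).argmax(axis=1)
```

Soft fusion already uses more information than hard voting (confidence survives the combination), and stacking goes one step further by learning the combination from data.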