ABSTRACT
BACKGROUND: Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline, which manifests as severe impairment of memory, attention, emotional processing, and daily activities, leading to significant disability and social burden. Investigation of mild cognitive impairment (MCI), the prodromal, transitional stage between normal aging and AD, is key to diagnosing AD and slowing its progression. Numerous efforts have been made to date; however, the attentional mechanisms under different external emotional stimuli in MCI and AD remain largely unexplored. OBJECTIVE: To further explore the attentional mechanisms under different external emotional stimuli in both MCI and AD patients. DESIGN/SETTING/PARTICIPANTS/MEASUREMENTS: In 51 healthy volunteers (Controls; 24 males, 27 females), 52 MCI patients (19 males, 33 females), and 47 AD patients (15 males, 32 females), we administered a visual oddball event-related potential (ERP) task under three types of external emotional stimuli (Neutral, Happiness, and Sadness) and examined the N1, P2, N2, and P3 components as well as the abnormal cortical activations corresponding to the significant ERP differences among the three groups. RESULTS: Under all three external emotions, N2 and P3 latencies in AD patients were significantly prolonged compared with both Controls and MCI. In addition, under Happiness, P3 latencies in MCI were significantly delayed compared with Controls. Under Happiness and Sadness, P3 amplitudes in AD patients were significantly decreased compared with Controls and MCI, respectively. During the N2 time window, under the Neutral emotion, significant hypoactivation of the right superior temporal gyrus was found in AD patients compared with Controls, and under Happiness, activation of the right inferior frontal gyrus was significantly attenuated in MCI compared with Controls.
Under Sadness, activation of the right superior frontal gyrus in AD patients was significantly decreased compared with MCI. During the P3 time window, under Happiness and Sadness, the significantly attenuated activations in AD patients relative to MCI were located in the right fusiform gyrus and the right middle occipital gyrus, respectively. CONCLUSION: Our results demonstrate visual attentional deficits under external emotional stimuli in both MCI and AD patients and highlight the value of happy stimuli for the early detection of MCI, for which the P3 latency and the hypoactivation of the right inferior frontal gyrus during the N2 time window may serve as early signs. The current study sheds further light on attentional mechanisms in MCI and AD patients and indicates the value of emotional processing in the early detection of cognitive dysfunction.
Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Emotions , Humans , Cognitive Dysfunction/physiopathology , Cognitive Dysfunction/psychology , Cognitive Dysfunction/diagnosis , Male , Female , Alzheimer Disease/physiopathology , Alzheimer Disease/psychology , Aged , Emotions/physiology , Evoked Potentials, Visual/physiology , Electroencephalography , Middle Aged , Attention/physiology
ABSTRACT
In this study, we assessed the efficacy of various linear and chaotic physiological synchrony methods during collaborative emotive recall of stories, examining how physiological synchronization impacts dyadic interaction in tasks involving emotionally charged narratives. Eighty-two young individuals, forming 41 dyads, participated in a task requiring the recall of stories with varying emotional content. We analyzed physiological data using the Lyapunov coefficient, cross-correlation, and coherence indices. Our statistical approach included the Student's t-test, Pearson's correlation, and, notably, receiver operating characteristic (ROC) curve analysis. The results highlighted significant differences in physiological synchrony between emotional and less emotional situations, revealing increased synchronization during collaborative remembering of emotional stories. The integration of the Lyapunov coefficient with other indices was crucial for identifying emotional conditions, underscoring its significance in exploring emotional engagement in group memory activities. This study provides valuable insights into the dynamics of physiological synchrony in emotional interactions and its implications in cognitive and social domains, and suggests potential applications in understanding collective behavior and emotional processing.
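As an illustrative sketch only (not the authors' pipeline), the linear cross-correlation index mentioned in this abstract can be computed as the peak normalized cross-correlation between two physiological signals within a lag window. The signal lengths, lag window, and noise levels below are arbitrary assumptions:

```python
import numpy as np

def max_crosscorr(x, y, max_lag=50):
    """Peak |normalized cross-correlation| between two signals
    within +/- max_lag samples: a common linear synchrony index."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            r = np.dot(x[lag:], y[:n - lag]) / (n - lag)
        else:
            r = np.dot(x[:n + lag], y[-lag:]) / (n + lag)
        best = max(best, abs(r))
    return best

rng = np.random.default_rng(0)
shared = rng.standard_normal(1000)                  # common driver
a = shared + 0.3 * rng.standard_normal(1000)        # "synchronized" partner 1
b = np.roll(shared, 5) + 0.3 * rng.standard_normal(1000)  # partner 2, lagged
c = rng.standard_normal(1000)                       # independent signal
print(max_crosscorr(a, b))   # high: shared driver found at lag 5
print(max_crosscorr(a, c))   # near zero: no synchrony
```

A coherence index would play the analogous role in the frequency domain; the study combined such linear indices with the Lyapunov coefficient.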
Subject(s)
Emotions , Mental Recall , Humans , Mental Recall/physiology , Emotions/physiology , Female , Male , Young Adult , Cooperative Behavior , Adult , Narration , Heart Rate/physiology
ABSTRACT
The surprising omission or reduction of vital resources (food, fluid, social partners) can induce an aversive emotion known as frustrative nonreward (FNR), which can influence subsequent behavior and physiology. FNR is an integral mediator of irritability/aggression, motivation (substance use disorders, depression), anxiety/fear/threat, learning/conditioning, and social behavior. Despite substantial progress in the study of FNR during the twentieth century, research lagged in the later part of the century and into the early twenty-first century until the National Institute of Mental Health's Research Domain Criteria initiative included FNR and loss as components of the negative valence domain. This led to a renaissance of new research and paradigms relevant to basic and clinical science alike. The COVID-19 pandemic's extensive individual and social restrictions were correlated with increased drug and alcohol use, social conflict, irritability, and suicide, all potential consequences of FNR. This article highlights animal models related to these psychiatric disorders and symptoms and presents recent advances in identifying the brain regions and neurotransmitters implicated.
Subject(s)
COVID-19 , Humans , Animals , COVID-19/psychology , Mental Disorders/psychology , Brain/metabolism , Brain/physiology , Substance-Related Disorders/psychology , Emotions/physiology , Neurochemistry
ABSTRACT
Human vision can detect a single photon, but the minimal exposure required to extract meaning from stimulation remains unknown. This requirement cannot be characterised by stimulus energy, because the system is differentially sensitive to attributes defined by configuration rather than physical amplitude. Determining minimal exposure durations required for processing various stimulus attributes can thus reveal the system's priorities. Using a tachistoscope enabling arbitrarily brief displays, we establish minimal durations for processing human faces, a stimulus category whose perception is associated with several well-characterised behavioural and neural markers. Neural and psychophysical measures show a sequence of distinct minimal exposures for stimulation detection, object-level detection, face-specific processing, and emotion-specific processing. Resolving ongoing debates, face orientation affects minimal exposure but emotional expression does not. Awareness emerges with detection, showing no evidence of subliminal perception. These findings inform theories of visual processing and awareness, elucidating the information to which the visual system is attuned.
Subject(s)
Photic Stimulation , Visual Perception , Humans , Female , Male , Visual Perception/physiology , Adult , Young Adult , Emotions/physiology , Facial Expression , Awareness/physiology , Time Factors
ABSTRACT
Poetry is arguably the most creative expression of language and can evoke diverse subjective experiences, such as emotions and aesthetic responses, which in turn influence subjective judgments of a poem's creativity. This study investigated how certain personality traits-specifically openness, intellect, awe-proneness, and epistemic curiosity-influence the relationship between these subjective experiences and creativity judgments of 36 English-language poems. One hundred and twenty-nine participants rated each poem across six dimensions: clarity, aesthetic appeal, felt valence, felt arousal, surprise, and overall creativity. Initially, we obtained a parsimonious model that suggested aesthetic appeal, felt valence, and surprise as key predictors of poetic creativity. Subsequently, using multilevel analysis, we investigated the interactions between the four personality traits and these three predictors. Among the personality traits, openness emerged as the primary moderator of judgments of poetic creativity, followed by curiosity and awe-proneness. Among the predictors, aesthetic appeal was moderated by all four personality traits, while surprise was moderated by openness, awe-proneness, and curiosity. Valence, on the other hand, was moderated by openness only. These findings provide novel insights into the ways individual differences influence evaluations of poetic creativity.
Subject(s)
Creativity , Individuality , Personality , Poetry as Topic , Humans , Female , Male , Adult , Young Adult , Judgment , Emotions/physiology , Adolescent , Esthetics/psychology
ABSTRACT
Green consumption is a crucial pathway towards achieving global sustainability goals. Product-oriented green advertisements can effectively stimulate consumers' latent needs and convert them into eventual purchasing intentions and behaviors, thereby promoting green consumption. Given that neuromarketing methods facilitate the understanding of consumers' decision-making processes, this study combines prospect theory and need-fulfillment theory, employing event-related potentials (ERPs) to measure changes in consumers' cognitive resources and emotional arousal levels when they are confronted with green products and advertising information, enabling inferences about consumers' acceptance of purchasing and the underlying psychological processes. Behavioral results indicated that message framing influences consumers' purchases, with consumers buying more green products in response to negatively framed advertisements. EEG results indicated that matching positive framing with utilitarian green products effectively increased consumers' cognitive attention in the early cognitive stage. In the late cognitive stage, negative frames elicited stronger emotional fluctuations, and the influence of product type depended on the message frame: the consumption motivation induced by the product was overridden by external cues such as message framing. These findings explain the impact of framed information on consumers' purchasing decisions at different stages, assisting marketers in devising diverse promotional strategies based on product characteristics to foster the development and practice of green consumption. This will further embed the concept of green consumption advocated by organizations such as the United Nations Environment Programme (UNEP), World Wildlife Fund (WWF), and Greenpeace into the public consciousness.
Subject(s)
Consumer Behavior , Decision Making , Evoked Potentials , Humans , Female , Male , Evoked Potentials/physiology , Adult , Young Adult , Electroencephalography , Motivation , Cognition/physiology , Emotions/physiology
ABSTRACT
Intransitive gestures are expressive and symbolic, whereas pantomimes are object-related actions. These gestures convey different meanings depending on whether they are directed toward (TB) or away from the body (AB). TB gestures express mental states (intransitive) or hygiene/nutritional activities (pantomime), while AB gestures modify the behaviour of the observer (intransitive) or demonstrate tool use with an object (pantomime). A substantial body of literature suggests that females exhibit stronger social cue processing compared to males. Considering the social significance of gestures, this study aimed to explore the physiological gender differences in the observation of AB and TB gestures. Pupil dilation and High Frequency Heart Rate Variability (HF-HRV) were measured in 54 participants (27 female) while observing TB and AB gestures. The Interpersonal Reactivity Index (IRI) and the Vicarious Distress Questionnaire (VDQ) were used to assess social-emotional processes. Results showed greater pupil dilation in females for TB gestures, but no significant gender differences for HF-HRV. Males showed a significant correlation between increased pupil dilation to both TB and AB gestures and empathy levels (IRI). The support scale of the VDQ correlated significantly with TB gestures in males. These findings provide insight into the neurobiological basis of gender differences in perceiving social gestures.
Subject(s)
Gestures , Humans , Female , Male , Adult , Young Adult , Heart Rate/physiology , Sex Characteristics , Pupil/physiology , Empathy/physiology , Emotions/physiology , Sex Factors
ABSTRACT
In modern healthcare, the influence of a patient's mindset on health outcomes is an often neglected yet vital component of holistic care. This review explores the significant impact of positive and negative mindsets on disease progression and recovery, emphasizing the need to integrate mental wellness practices into conventional medical care. Drawing from a wide array of studies, it demonstrates how fostering a positive mindset can enhance patient trajectories across various medical specialties. The article advocates for training healthcare providers to adopt a more empathetic and patient-centered approach, bridging the gap between mind and body. By presenting compelling evidence on the correlation between patient mindset and health outcomes, this review highlights the potential benefits of incorporating psychological support and holistic strategies into standard care protocols. Practical strategies for implementing mindset-focused interventions are also proposed, including training programs for healthcare professionals and the development of interdisciplinary treatment plans. Ultimately, this article underscores the need for a paradigm shift in medical practice, advocating for a comprehensive approach that recognizes the power of thought in promoting patient wellness.
Subject(s)
Emotions , Humans , Emotions/physiology , Mental Health
ABSTRACT
OBJECTIVE: Japan has a system of occupational therapy programs known as self-reliance training (training for daily living), which helps people with various disabilities lead more meaningful lives. Recently, it has been shown that green care farms are beneficial for dementia care and that agricultural and horticultural work has a positive impact on people with intellectual disabilities and mental disorders. This study examined the health-improving effects of farm activities and developed an attractive program for adolescents with developmental and intellectual disabilities who use independent training facilities. The program comprised agricultural and horticultural activities such as vegetable cultivation and management, flower planting, and flower arrangement. RESULTS: No significant differences were observed in any of the measures of positive mood before and after the usual program (UP). However, anger-hostility and depression-dejection improved significantly after the farm program (FP) (p < .05). Self-efficacy improved marginally after both UP and FP (p < .10). Free responses were obtained from UP (131 responses) and FP (126 responses) participants; thematic analysis of FP participants' statements revealed positive comments including "confidence in accomplishing tasks," "anticipation and joy of growing plants," and "motivation for gardening activities."
Subject(s)
Agriculture , Feasibility Studies , Humans , Male , Female , Adolescent , Agriculture/methods , Agriculture/education , Emotions/physiology , Japan , Intellectual Disability/psychology , Self Efficacy , Developmental Disabilities/psychology , Young Adult
ABSTRACT
The human auditory system includes discrete cortical patches and selective regions for processing voice information, including emotional prosody. Although behavioral evidence indicates that individuals with autism spectrum disorder (ASD) have difficulty recognizing emotional prosody, whether and how localized voice patches (VPs) and other voice-sensitive regions are functionally altered in processing prosody remains understudied. This fMRI study investigated neural responses to prosodic voices in 25 adult males with ASD and 33 controls using voices of anger, sadness, and happiness with varying degrees of emotion. We used a functional region-of-interest analysis with an independent voice localizer to identify multiple VPs from the combined ASD and control data. We observed a general response reduction to prosodic voices in specific VPs of the left posterior temporal voice patch (TVP) and the right middle TVP. Reduced cortical responses in the right middle TVP were consistently correlated with the severity of autistic symptoms for all examined emotional prosodies. Moreover, representational similarity analysis revealed a reduced effect of emotional intensity on multivoxel activation patterns in the left anterior superior temporal cortex only for sad prosody. These results indicate reduced response magnitudes to voice prosodies in specific TVPs and altered emotion intensity-dependent multivoxel activation patterns in adults with ASD, potentially underlying their socio-communicative difficulties.
Subject(s)
Autism Spectrum Disorder , Emotions , Magnetic Resonance Imaging , Temporal Lobe , Voice , Humans , Male , Autism Spectrum Disorder/physiopathology , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/psychology , Temporal Lobe/physiopathology , Temporal Lobe/diagnostic imaging , Adult , Emotions/physiology , Young Adult , Speech Perception/physiology , Brain Mapping/methods , Acoustic Stimulation , Auditory Perception/physiology
ABSTRACT
In recent years, the rise of positive psychology in second language acquisition research has led an increasing number of scholars to focus on the individual well-being of second language learners alongside their learning effectiveness. Despite this growing interest, the specific emotional factors influencing academic achievement in foreign language learning require further investigation. This study investigates the impact of three emotions-enjoyment, boredom, and burnout-on academic achievement, and the moderating role of academic buoyancy. Data were collected from 563 college English-as-a-foreign-language (EFL) students from mainland China and analyzed using latent moderated structural equation modeling in Mplus. The results of the latent bivariate correlation analysis showed significant correlations between EFL learning emotions, academic buoyancy, and test performance. In the latent moderated structural equation model, enjoyment and burnout predicted test performance. Moreover, academic buoyancy moderated the relationships between enjoyment and test performance and between burnout and test performance. EFL test performance was highest when enjoyment and buoyancy were both high, or when burnout and buoyancy were both low. These findings highlight the importance of fostering positive emotions and resilience in language learners to enhance their academic performance, offering valuable insights for educators and policymakers aiming to improve foreign language education.
Subject(s)
Boredom , Students , Humans , Male , Female , Young Adult , Students/psychology , Latent Class Analysis , Academic Success , Adult , China , Burnout, Psychological/psychology , Language , Multilingualism , Adolescent , Pleasure , Emotions/physiology
ABSTRACT
Speech emotion recognition (SER) is not only a ubiquitous aspect of everyday communication but also a central focus in the field of human-computer interaction. However, SER faces several challenges, including difficulty in detecting subtle emotional nuances and the complicated task of recognizing speech emotions in noisy environments. To address these challenges, we introduce a Transformer-based model called MelTrans, designed to distill critical clues from speech data by learning core features and long-range dependencies. At the heart of our approach is a dual-stream framework. Using the Transformer architecture as its foundation, MelTrans deciphers broad dependencies within speech mel-spectrograms, facilitating a nuanced understanding of the emotional cues embedded in speech signals. Comprehensive experimental evaluations on the EmoDB (92.52%) and IEMOCAP (76.54%) datasets demonstrate the effectiveness of MelTrans, highlighting its ability to capture critical cues and long-range dependencies in speech data and setting a new benchmark on these datasets.
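The long-range dependency modeling this abstract attributes to the Transformer core can be illustrated with a minimal single-head self-attention pass over mel-spectrogram frames. This is a NumPy sketch under stated assumptions: MelTrans itself is a dual-stream model, and the frame count, mel bins, and model width below are arbitrary, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over spectrogram
    frames: each output frame is a weighted mix of ALL input frames,
    which is how a Transformer captures long-range dependencies."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # (frames, frames) weights
    return A @ V, A

T, n_mels, d = 120, 64, 32                       # frames, mel bins, model width
X = rng.standard_normal((T, n_mels))             # stand-in mel-spectrogram
Wq, Wk, Wv = (rng.standard_normal((n_mels, d)) * 0.1 for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                 # (120, 32)
print(np.allclose(attn.sum(axis=1), 1.0))        # attention rows sum to 1
```

A full model would stack such layers with feed-forward blocks and pool the frame outputs into emotion-class logits.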
Subject(s)
Emotions , Speech , Humans , Emotions/physiology , Speech/physiology , Algorithms , Speech Recognition Software
ABSTRACT
Most existing intelligent editing tools for music and video rely on cross-modal matching based on affective consistency or the similarity of feature representations. However, these methods are not fully applicable to complex audiovisual matching scenarios: ambiguous matching rules and associated factors lead to low matching accuracy and suboptimal audience perceptual effects. To address these limitations, this paper focuses on both the similarity and the integration of affective distributions for artistic audiovisual works combining film and television video with music. Based on rich emotional perception elements, we propose a hybrid matching model built on feature canonical correlation analysis (CCA) and fine-grained affective similarity. The model refines KCCA fusion features by analyzing both matched and unmatched music-video pairs. Subsequently, the model employs XGBoost to predict relevance and computes similarity by considering fine-grained affective semantic distance as well as affective factor distance. Ultimately, the matching prediction values are obtained through weight allocation. Experimental results on a self-built dataset demonstrate that the proposed affective matching model balances feature parameters and affective semantic cognition, yielding relatively high prediction accuracy and a better subjective experience of audiovisual association. This work contributes to exploring the affective association mechanisms of audiovisual objects from a sensory perspective and to improving related intelligent tools, thereby offering a novel technical approach to retrieval and matching in music-video editing.
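A minimal sketch of the CCA building block, not the authors' full KCCA/XGBoost pipeline: the first canonical correlation between two affective feature sets (standing in for music and video descriptors) can be computed from the SVD of the whitened cross-covariance. The feature dimensions and the shared latent variable below are synthetic assumptions:

```python
import numpy as np

def cca_first_corr(X, Y, reg=1e-6):
    """First canonical correlation between feature sets X and Y,
    via SVD of the whitened cross-covariance. reg: ridge term for
    numerical stability."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)[0]

rng = np.random.default_rng(2)
z = rng.standard_normal((500, 1))                       # shared affective latent
X = np.hstack([z, rng.standard_normal((500, 3))])       # "music" features
Y = np.hstack([-2 * z, rng.standard_normal((500, 4))])  # "video" features
Y_ind = rng.standard_normal((500, 4))                   # unrelated features
print(cca_first_corr(X, Y))      # near 1: shared affective dimension found
print(cca_first_corr(X, Y_ind))  # small: no shared dimension
```

KCCA replaces the linear covariances with kernel matrices; the same first-canonical-correlation idea carries over.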
Subject(s)
Emotions , Music , Humans , Emotions/physiology , Algorithms
ABSTRACT
Speech emotion recognition is key to many fields, including human-computer interaction, healthcare, and intelligent assistance. While acoustic features extracted from human speech are essential for this task, not all of them contribute effectively to emotion recognition; successful emotion recognition models therefore require a reduced number of features. This work investigated whether splitting the features into two subsets based on their distribution and then applying commonly used feature reduction methods would impact accuracy. Filter reduction was employed using the Kruskal-Wallis test, followed by principal component analysis (PCA) and independent component analysis (ICA). A set of features was investigated to determine whether the indiscriminate use of parametric feature reduction techniques affects the accuracy of emotion recognition. For this investigation, data from three databases-Berlin EmoDB, SAVEE, and RAVDESS-were organized into subsets according to their distribution before applying both PCA and ICA. The results showed a reduction from 6373 features to 170 for the Berlin EmoDB database with an accuracy of 84.3%; a final size of 130 features for SAVEE, with a corresponding accuracy of 75.4%; and 150 features for RAVDESS, with an accuracy of 59.9%.
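The filter-then-project scheme can be sketched on synthetic data, under assumptions not taken from the paper: class sizes, feature counts, and effect sizes are invented, the Kruskal-Wallis H statistic is computed directly without tie correction, and features are kept when H exceeds the chi-square critical value for df = 2 at alpha = 0.01 (9.21) before a PCA projection:

```python
import numpy as np

def kruskal_H(*groups):
    """Kruskal-Wallis H statistic (no tie correction; adequate for
    continuous acoustic features)."""
    pooled = np.concatenate(groups)
    ranks = pooled.argsort().argsort() + 1.0     # ranks 1..N
    N = len(pooled)
    H, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        H += r.sum() ** 2 / len(g)               # sum of R_i^2 / n_i
        start += len(g)
    return 12.0 / (N * (N + 1)) * H - 3.0 * (N + 1)

rng = np.random.default_rng(3)
n_per, n_feat = 40, 200
labels = np.repeat([0, 1, 2], n_per)             # three emotion classes
X = rng.standard_normal((3 * n_per, n_feat))
X[:, :10] += labels[:, None] * 1.5               # only 10 informative features

# 1) filter: keep features whose H exceeds chi2(df=2, alpha=0.01) = 9.21
H = np.array([kruskal_H(*(X[labels == c, j] for c in range(3)))
              for j in range(n_feat)])
keep = H > 9.21
Xf = X[:, keep]

# 2) project the surviving features onto 5 principal components (PCA via SVD)
Xc = Xf - Xf.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Xp = Xc @ Vt[:5].T
print(X.shape, "->", Xf.shape, "->", Xp.shape)
```

Swapping the PCA step for ICA (e.g. FastICA) on the same filtered subset gives the study's second reduction variant.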
Subject(s)
Emotions , Principal Component Analysis , Speech , Humans , Emotions/physiology , Speech/physiology , Databases, Factual , Algorithms , Pattern Recognition, Automated/methods
ABSTRACT
Emotion recognition through speech is a technique employed in various scenarios of Human-Computer Interaction (HCI). Existing approaches have achieved significant results; however, limitations persist, most notably in the quantity and diversity of data when deep learning techniques are used. The lack of a standard for feature selection leads to continuous development and experimentation, and choosing and designing the appropriate network architecture constitutes another challenge. This study addresses the challenge of recognizing emotions in the human voice using deep learning techniques, proposing a comprehensive approach that develops preprocessing and feature selection stages while constructing a dataset, EmoDSc, that combines several available databases. The synergy between spectral features and spectrogram images is investigated. Independently, the weighted accuracy obtained using only spectral features was 89%, while using only spectrogram images it reached 90%. These results, although surpassing previous research, highlight the strengths and limitations of each representation operating in isolation. Based on this exploration, a neural network architecture composed of a CNN1D, a CNN2D, and an MLP that fuses spectral features and spectrogram images is proposed. The model, supported by the unified EmoDSc dataset, demonstrates a remarkable accuracy of 96%.
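The fusion head can be illustrated with a toy forward pass: embeddings standing in for the CNN1D (spectral) and CNN2D (spectrogram-image) branches are concatenated and mapped to class logits by an MLP. All dimensions, including the seven-class output, are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_fusion(spec_feats, img_feats, W1, b1, W2, b2):
    """Late-fusion head: concatenate the two branch embeddings and
    map them to emotion-class logits with a one-hidden-layer MLP."""
    z = np.concatenate([spec_feats, img_feats], axis=-1)
    h = relu(z @ W1 + b1)
    return h @ W2 + b2

d_spec, d_img, d_hid, n_classes = 128, 256, 64, 7
W1 = rng.standard_normal((d_spec + d_img, d_hid)) * 0.05
b1 = np.zeros(d_hid)
W2 = rng.standard_normal((d_hid, n_classes)) * 0.05
b2 = np.zeros(n_classes)

spec = rng.standard_normal(d_spec)   # stand-in for a CNN1D embedding
img = rng.standard_normal(d_img)     # stand-in for a CNN2D embedding
logits = mlp_fusion(spec, img, W1, b1, W2, b2)
print(logits.shape)                  # (7,)
```

In training, the branch networks and this head would be optimized jointly with a softmax cross-entropy loss over the emotion labels.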
Subject(s)
Deep Learning , Emotions , Neural Networks, Computer , Humans , Emotions/physiology , Speech/physiology , Databases, Factual , Algorithms , Pattern Recognition, Automated/methods
ABSTRACT
BACKGROUND: Emotion is an important area in neuroscience. Cross-subject emotion recognition based on electroencephalogram (EEG) data is challenging due to physiological differences between subjects. The domain gap, i.e., the differing distributions of EEG data across subjects, has attracted great attention in cross-subject emotion recognition. COMPARISON WITH EXISTING METHODS: This study focuses on narrowing the domain gap between subjects through emotional frequency band features and the relationship information between EEG channels. Emotional frequency band features represent the energy distribution of EEG data in different frequency ranges, while relationship information between EEG channels provides spatial distribution information about the EEG data. NEW METHOD: To achieve this, this paper proposes the Frequency Band Attention Graph convolutional Adversarial neural Network (FBAGAN). The model comprises three components: a feature extractor, a classifier, and a discriminator. The feature extractor consists of a layer with a frequency-band attention mechanism and a graph convolutional neural network. The attention mechanism extracts frequency band information by assigning weights, and the graph convolutional network extracts relationship information between EEG channels by modeling the graph structure. The discriminator then helps minimize the gap in frequency information and relationship information between the source and target domains, improving the model's ability to generalize. RESULTS: FBAGAN is extensively tested on the SEED, SEED-IV, and DEAP datasets. The accuracy and standard deviation are 88.17% and 4.88 on the SEED dataset and 77.35% and 3.72 on the SEED-IV dataset. On the DEAP dataset, the model achieves 69.64% for arousal and 65.18% for valence. These results outperform most existing models.
CONCLUSIONS: The experiments indicate that FBAGAN effectively narrows the cross-subject domain gap in both EEG channel relationships and frequency band information, leading to improved performance.
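The frequency-band features that FBAGAN's attention layer weights can be sketched as per-channel band energies taken from the power spectrum. The 62-channel layout, 200 Hz sampling rate, and band edges below are common EEG conventions assumed for illustration, not details from the paper:

```python
import numpy as np

def band_energies(eeg, fs, bands):
    """Per-channel energy in canonical EEG frequency bands, computed
    from the FFT power spectrum: the kind of band-wise feature a
    frequency-band attention layer would assign weights to."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
    return np.stack([psd[..., (freqs >= lo) & (freqs < hi)].sum(axis=-1)
                     for lo, hi in bands.values()], axis=-1)

fs = 200                                          # Hz, assumed sampling rate
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}
t = np.arange(2 * fs) / fs                        # 2 s of signal
rng = np.random.default_rng(5)
# 62-channel surrogate with a strong 10 Hz (alpha) component on every channel
eeg = 0.1 * rng.standard_normal((62, len(t))) + np.sin(2 * np.pi * 10 * t)
E = band_energies(eeg, fs, bands)
print(E.shape)                                    # (62, 5): channels x bands
print(E[:, 2].mean() > E[:, 0].mean())            # alpha dominates delta
```

In the full model, these per-channel band features would feed the graph convolution over the electrode adjacency, with the discriminator aligning their distributions across subjects.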
Subject(s)
Brain-Computer Interfaces , Electroencephalography , Emotions , Neural Networks, Computer , Humans , Electroencephalography/methods , Emotions/physiology , Brain/physiology , Signal Processing, Computer-Assisted
ABSTRACT
Time of day can alter memory performance in general. Its influence on memory recognition performance for faces, which is important for daily encounters with new persons and for testimonies, has not yet been investigated. Importantly, high levels of the stress hormone cortisol impair memory recognition, in particular for emotional material, although some studies have reported that high cortisol levels enhance memory recognition. Since cortisol levels in the morning are usually higher than in the evening, time of day might also influence recognition performance. In this pre-registered study with a two-day design, 51 healthy men encoded pictures of male and female faces with distinct emotional expressions on day one around noon. Memory for the faces was retrieved two days later at two consecutive testing times, either in the morning (high and moderately increased endogenous cortisol levels) or in the evening (low endogenous cortisol levels). Additionally, alertness and salivary cortisol levels at the different timepoints were assessed. Cortisol levels were, as expected, significantly higher in the morning group than in the evening group, while the groups did not differ in alertness. Familiarity ratings for female stimuli were significantly better when participants were tested under the moderately increased endogenous cortisol levels of the morning than under the low endogenous cortisol levels of the evening, a pattern previously observed for stressed versus non-stressed participants. In addition, cortisol levels during that time in the morning were positively correlated with the recollection of face stimuli in general. Thus, recognition memory performance may depend on the time of day as well as on the stimulus type, such as male versus female faces. Most importantly, the results suggest that cortisol may be meaningful and worth investigating when studying the effects of time of day on memory performance.
This research offers insights into both everyday encounters and legally relevant domains such as eyewitness testimony.
Subject(s)
Circadian Rhythm , Hydrocortisone , Recognition, Psychology , Saliva , Humans , Male , Hydrocortisone/metabolism , Hydrocortisone/analysis , Adult , Saliva/chemistry , Saliva/metabolism , Young Adult , Recognition, Psychology/physiology , Female , Circadian Rhythm/physiology , Facial Recognition/physiology , Facial Expression , Emotions/physiology , Time Factors
ABSTRACT
Adolescence poses significant challenges for emotion regulation (ER) and is thus a critical phase in the emergence of various mental disorders, specifically internalising disorders such as anxiety and depression. Affective control, defined as the application of cognitive control in affective contexts, is crucial for effective ER. However, the relationship between ER and affective control is unclear. This study examined the predictive role of ER strategies and difficulties in affective control, measured as the congruency effect and error rate on an Emotional Stroop task (EST), in a sample of adolescents and young adults (aged 14-21, M = 17.28, 22% male). It was hypothesised that participants with internalising disorders would show higher congruency effects and error rates on the EST than healthy controls after a psychosocial stress induction, indicating lower affective control. Surprisingly, our findings revealed no significant differences in these measures between the groups. However, higher depression scores were associated with increased EST errors. While ER strategies and difficulties did not predict affective control, exploratory analyses unveiled associations between depression scores and ER strategy repertoire, perceived ER success and the ER strategy Acceptance. These findings underscore the importance of implicit ER facets, particularly perceived ER success and flexibility to change between applied strategies for adolescents and young adults with elevated depressive symptoms.
Subject(s)
Depression , Emotional Regulation , Humans , Male , Adolescent , Female , Emotional Regulation/physiology , Young Adult , Depression/psychology , Adult , Stroop Test , Anxiety/psychology , Emotions/physiology , Psychopathology
ABSTRACT
Cinema, a modern titan of entertainment, holds the power to move people through the artful manipulation of auditory and visual stimuli. Despite this, the mechanisms by which sensory stimuli elicit emotional responses remain unknown. This study therefore evaluated which brain regions are involved when sensory stimuli evoke auditory- or visually-driven emotions during film viewing. Using functional magnetic resonance imaging (fMRI) decoding techniques, we found that brain activity in the auditory area and insula represents the stimuli that evoke an emotional response. Observing brain activity in these regions could provide further insight into these mechanisms for the improvement of film-making, as well as for the development of novel neural techniques in neuroscience. In the near future, such "neuro-designed" products and applications might gain popularity.
Subject(s)
Auditory Cortex , Brain Mapping , Emotions , Insular Cortex , Magnetic Resonance Imaging , Humans , Emotions/physiology , Magnetic Resonance Imaging/methods , Male , Female , Adult , Young Adult , Brain Mapping/methods , Auditory Cortex/physiology , Auditory Cortex/diagnostic imaging , Insular Cortex/physiology , Insular Cortex/diagnostic imaging , Acoustic Stimulation , Photic Stimulation/methods
ABSTRACT
Pupillometry is widely used to measure arousal states. The primary functional role of the pupil, however, is to respond to the luminance of visual inputs. We previously demonstrated that cognitive effort-related arousal interacted multiplicatively with luminance, with the strongest pupillary effects of arousal occurring at low-to-mid luminances (< 37 cd/m2), implying a narrow range of conditions ideal for assessing cognitive arousal-driven pupillary differences. Does this generalize to other forms of arousal? To answer this, we assessed luminance-driven pupillary response functions while manipulating emotional arousal, using well-established visual and auditory stimulus sets. At the group level, emotional arousal interacted with the pupillary light response differently from cognitive arousal: the effects occurred primarily at much lower luminances (< 20 cd/m2). Analyses at the individual-participant level revealed qualitatively distinct patterns of modulation, with a sizable number of individuals displaying no arousal response to the visual or auditory stimuli, regardless of luminance. Together, our results suggest that effects of arousal on pupil size are not monolithic: different forms of arousal exert different patterns of effects. More practically, our findings suggest that lower luminances create better conditions for measuring pupil-linked arousal, and when selecting ambient luminance levels, consideration of the arousal manipulation and individual differences is critical.