Results 1 - 20 of 249
1.
Methods ; 218: 224-232, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37678514

ABSTRACT

Heart rate variability (HRV) is an important indicator of autonomic nervous system activity and can be used for the identification of affective states. The development of remote photoplethysmography (rPPG) technology has made it possible to measure pulse rate variability (PRV) using a camera without any sensor-skin contact; PRV is highly correlated with HRV, thus enabling contactless assessment of emotional states. In this study, we employed ten machine learning techniques to identify emotions using camera-based PRV features. Our experimental results show that the best classification model achieved a concordance correlation coefficient of 0.34 for valence recognition and 0.36 for arousal recognition. The rPPG-based measurement has demonstrated promising results in detecting HAHV (high-arousal high-valence) emotions with high accuracy. Furthermore, for emotions with less noticeable variations, such as sadness, the rPPG-based measure outperformed the baseline deep network for facial expression analysis.
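The concordance correlation coefficient used above as a recognition metric can be computed directly from Lin's definition; a minimal NumPy sketch (the sample values are illustrative, not the study's data):

```python
import numpy as np

def concordance_ccc(y_true, y_pred):
    """Concordance correlation coefficient (Lin, 1989):
    rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mx, my = y_true.mean(), y_pred.mean()
    vx, vy = y_true.var(), y_pred.var()            # population variances
    cov = ((y_true - mx) * (y_pred - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement yields 1.0; a constant shift lowers the score.
print(concordance_ccc([1, 2, 3, 4], [1, 2, 3, 4]))
print(concordance_ccc([1, 2, 3, 4], [2, 3, 4, 5]))
```

Unlike Pearson's r, the CCC penalizes both scale and location differences, which is why it is a common choice for continuous valence/arousal prediction.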


Subjects
Emotions, Machine Learning, Heart Rate, Skin
2.
Sensors (Basel) ; 24(10)2024 May 18.
Article in English | MEDLINE | ID: mdl-38794064

ABSTRACT

Stress recognition, particularly using machine learning (ML) with physiological data such as heart rate variability (HRV), holds promise for mental health interventions. However, limited datasets in affective computing and healthcare research can lead to inaccurate conclusions regarding the ML model performance. This study employed supervised learning algorithms to classify stress and relaxation states using HRV measures. To account for limitations associated with small datasets, robust strategies were implemented based on methodological recommendations for ML with a limited dataset, including data segmentation, feature selection, and model evaluation. Our findings highlight that the random forest model achieved the best performance in distinguishing stress from non-stress states. Notably, it showed higher performance in identifying stress from relaxation (F1-score: 86.3%) compared to neutral states (F1-score: 65.8%). Additionally, the model demonstrated generalizability when tested on independent secondary datasets, showcasing its ability to distinguish between stress and relaxation states. While our performance metrics might be lower than some previous studies, this likely reflects our focus on robust methodologies to enhance the generalizability and interpretability of ML models, which are crucial for real-world applications with limited datasets.
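The F1-scores quoted above combine precision and recall into a single number; a minimal sketch of the metric from confusion counts (the counts below are illustrative, not the study's):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 86 stress windows correctly flagged, 14 false alarms, 13 misses
print(f1_score(86, 14, 13))
```

Because F1 ignores true negatives, it is a reasonable summary when the neutral (non-stress) class dominates the recording.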


Subjects
Algorithms, Heart Rate, Machine Learning, Psychological Stress, Heart Rate/physiology, Humans, Psychological Stress/physiopathology, Male, Female, Adult, Electrocardiography/methods, Young Adult
3.
Sensors (Basel) ; 24(7)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38610510

ABSTRACT

The perception of sound greatly impacts users' emotional states, expectations, affective relationships with products, and purchase decisions. Consequently, assessing the perceived quality of sounds through jury testing is crucial in product design. However, the subjective nature of jurors' responses may limit the accuracy and reliability of jury test outcomes. This research explores the utility of facial expression analysis in jury testing to enhance response reliability and mitigate subjectivity. Several quantitative indicators validate the research hypothesis, such as the correlation between jurors' emotional responses and valence values, the accuracy of jury tests, and the disparities between jurors' questionnaire responses and the emotions measured by FER (facial expression recognition). Specifically, analysis of attention levels across different states reveals a discernible decrease, with 70 percent of jurors exhibiting reduced attention in the 'distracted' state and 62 percent in the 'heavy-eyed' state. On the other hand, regression analysis shows that the correlation between jurors' valence and their choices in the jury test increases when considering only the data where the jurors are attentive. This correlation highlights the potential of facial expression analysis as a reliable tool for assessing juror engagement. The findings suggest that integrating facial expression recognition can enhance the accuracy of jury testing in product design by providing a more dependable assessment of user responses and deeper insights into participants' reactions to auditory stimuli.


Subjects
Facial Recognition, Humans, Reproducibility of Results, Acoustics, Sound, Emotions
4.
Philos Trans A Math Phys Eng Sci ; 381(2251): 20220047, 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37271174

ABSTRACT

From sparse descriptions of events, observers can make systematic and nuanced predictions of what emotions the people involved will experience. We propose a formal model of emotion prediction in the context of a public high-stakes social dilemma. This model uses inverse planning to infer a person's beliefs and preferences, including social preferences for equity and for maintaining a good reputation. The model then combines these inferred mental contents with the event to compute 'appraisals': whether the situation conformed to the expectations and fulfilled the preferences. We learn functions mapping computed appraisals to emotion labels, allowing the model to match human observers' quantitative predictions of 20 emotions, including joy, relief, guilt and envy. Model comparison indicates that inferred monetary preferences are not sufficient to explain observers' emotion predictions; inferred social preferences are factored into predictions for nearly every emotion. Human observers and the model both use minimal individualizing information to adjust predictions of how different people will respond to the same event. Thus, our framework integrates inverse planning, event appraisals and emotion concepts in a single computational model to reverse-engineer people's intuitive theory of emotions. This article is part of a discussion meeting issue 'Cognitive artificial intelligence'.


Subjects
Theory of Mind, Humans, Artificial Intelligence, Emotions
5.
J Neuroeng Rehabil ; 20(1): 107, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37582733

ABSTRACT

BACKGROUND: Anger dyscontrol is a common issue after traumatic brain injury (TBI). With the growth of wearable physiological sensors, there is new potential to facilitate the rehabilitation of such anger in the context of daily life. This potential, however, depends on how well physiological markers can distinguish changing emotional states and on how well such markers generalize to real-world settings. Our study explores how wearable photoplethysmography (PPG), one of the most widely available physiological sensors, could be used to detect anger within a heterogeneous population. METHODS: This study collected the TRIEP (Toronto Rehabilitation Institute Emotion-Physiology) dataset, which comprises 32 individuals (10 TBI) exposed to a variety of elicitation material (film, pictures, self-statements, personal recall) over sessions on two separate days. This complex dataset allowed for exploration of how the emotion-PPG relationship varied across individuals, endogenous/exogenous drivers of emotion, and day-to-day differences. A multi-stage analysis was conducted looking at: (1) time-series visual clustering, (2) discriminative time-interval features of anger, and (3) out-of-sample anger classification. RESULTS: Characteristics of PPG are dominated first by inter-subject (between-individual) differences, then by intra-subject (day-to-day) changes, and only then by differentiation between emotions. Both TBI and non-TBI individuals showed evidence of linearly separable features that could differentiate anger from non-anger classes within time-interval analysis. However, these separable features for anger have varying degrees of stability across individuals and days. CONCLUSION: This work highlights contextual, non-stationary challenges to the emotion-physiology relationship that must be accounted for before emotion regulation technology can perform in real-world scenarios. It also affirms the need for a larger breadth of emotional sampling when building classification models.


Subjects
Traumatic Brain Injuries, Emotional Regulation, Humans, Photoplethysmography, Anger/physiology, Emotions/physiology
6.
Sensors (Basel) ; 23(18)2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37765910

ABSTRACT

Many studies have demonstrated that EEG can be applied to emotion recognition, and real-time operation is an important requirement of EEG-based systems. In this paper, the real-time problem of EEG-based emotion recognition is first explained and analyzed. Short time windows and an attention mechanism are then designed on the EEG signals to follow emotion change over time. Long short-term memory with an additive attention mechanism is used for emotion recognition, allowing timely emotion updates, and the model is applied to the SEED and SEED-IV datasets to verify the feasibility of real-time emotion recognition. The results show that the model performs relatively well in real time, with accuracy rates of 85.40% and 74.26% on SEED and SEED-IV, respectively, though the accuracy has not reached the ideal state due to data labeling and other losses incurred in the pursuit of real-time performance.
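The additive attention pooling over LSTM hidden states described above can be sketched with NumPy; this is an illustrative toy, not the paper's implementation (the dimensions and random weights are made up):

```python
import numpy as np

def additive_attention_pool(h, w, v):
    """Additive (Bahdanau-style) attention over T hidden states.
    h: (T, d) LSTM outputs; w: (d, d) projection; v: (d,) scoring vector.
    Returns (summary, weights): the attention-weighted sum of the states
    and the softmax weights over time steps."""
    scores = np.tanh(h @ w) @ v              # (T,) unnormalized scores
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ h, weights

rng = np.random.default_rng(0)
T, d = 6, 4                                  # e.g. six short EEG windows, 4-dim states
h = rng.standard_normal((T, d))
w = rng.standard_normal((d, d))
v = rng.standard_normal(d)
summary, weights = additive_attention_pool(h, w, v)
print(summary.shape, weights.sum())
```

The short-window design maps naturally onto this pooling: each window contributes one hidden state, and the learned weights decide which recent windows dominate the current emotion estimate.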


Subjects
Emotions, Long-Term Memory, Recognition (Psychology), Electroencephalography
7.
Sensors (Basel) ; 23(2)2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36679760

ABSTRACT

The article deals with the detection of stress using the electrodermal activity (EDA) signal measured at the wrist. We present an approach for feature extraction from EDA. The approach uses frequency spectrum analysis in multiple frequency bands. We evaluate the proposed approach using the 4 Hz EDA signal measured at the wrist in the publicly available Wearable Stress and Affect Detection (WESAD) dataset. Seven existing approaches to stress detection using EDA signals measured by wrist-worn sensors are analysed and the reported results are compared with ours. The proposed approach represents an improvement in accuracy over the other techniques studied. Moreover, we focus on time to detection (TTD) and show that our approach is able to outperform competing techniques, with fewer data points. The proposed feature extraction is computationally inexpensive, thus the presented approach is suitable for use in real-world wearable applications where both short response times and high detection performance are important. We report both binary (stress vs. no stress) as well as three-class (baseline/stress/amusement) results.
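Frequency-band features of a slowly varying signal like 4 Hz wrist EDA can be extracted with a simple FFT periodogram; a minimal sketch (the band edges and the synthetic test signal are illustrative, not the paper's):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean spectral power of x in [f_lo, f_hi) Hz via the FFT periodogram."""
    x = np.asarray(x, dtype=float) - np.mean(x)       # remove DC offset
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)       # one-sided periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return spec[band].mean()

fs = 4.0                                  # WESAD wrist EDA is sampled at 4 Hz
t = np.arange(0, 60, 1 / fs)              # one minute of signal
x = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.sin(2 * np.pi * 1.5 * t)
# The 0-1 Hz band should carry far more power than the 1-2 Hz band here.
low = band_power(x, fs, 0.0, 1.0)
high = band_power(x, fs, 1.0, 2.0)
print(low > high)
```

Computing a handful of such band powers per window keeps the feature extraction cheap, which matches the paper's emphasis on short time-to-detection on wearables.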


Subjects
Galvanic Skin Response, Wrist, Wrist/physiology, Wrist Joint
8.
Sensors (Basel) ; 23(3)2023 Jan 20.
Article in English | MEDLINE | ID: mdl-36772223

ABSTRACT

In recent years, there have been many approaches to using robots to teach computer programming. In intelligent tutoring systems and computer-aided learning, there is also some research showing that affective feedback to the student increases learning efficiency. However, the few studies on incorporating an emotional personality into the robot in robot-assisted learning have found mixed results. To explore this issue further, we conducted a pilot study to investigate the effect of positive verbal encouragement and non-verbal emotive behaviour of the Miro-E robot during a robot-assisted programming session. The participants were tasked with programming the robot's behaviour. In the experimental group, the robot monitored the participants' emotional state via their facial expressions and provided affective feedback after each completed task. In the control group, the robot responded in a neutral way. The participants filled out a questionnaire before and after the programming session. The results show a positive reaction of the participants to the robot and the exercise. Because the experiment was conducted during the pandemic, the number of participants was small, so a qualitative analysis of the data was carried out. We found that the greatest affective outcome of the session was for students who had little prior experience with or interest in programming. We also found that the affective expressions of the robot had a negative impact on its likeability, revealing vestiges of the uncanny valley effect.


Subjects
Robotics, Humans, Feedback, Pilot Projects, Learning, Emotions
9.
Sensors (Basel) ; 23(6)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36991595

ABSTRACT

A core endeavour in current affective computing and social signal processing research is the construction of datasets embedding suitable ground truths to foster machine learning methods. This practice brings up hitherto overlooked intricacies. In this paper, we consider causal factors potentially arising when human raters evaluate the affect fluctuations of subjects involved in dyadic interactions and subsequently categorise them in terms of social participation traits. To gauge such factors, we propose an emulator as a statistical approximation of the human rater, and we first discuss the motivations and the rationale behind the approach. The emulator is then laid down as a phenomenological model in which the core-affect stochastic dynamics, as perceived by the rater, are captured through an Ornstein-Uhlenbeck process; its parameters are then exploited to infer potential causal effects in the attribution of social traits. By resorting to a publicly available dataset, the adequacy of the model is evaluated in terms of both emulation of human raters and machine learning predictive capabilities. We then present the results, followed by a general discussion of the findings and their implications, together with advantages and potential applications of the approach.
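An Ornstein-Uhlenbeck process of the kind the emulator uses for perceived core-affect dynamics can be simulated with Euler-Maruyama stepping; a toy sketch (all parameter values here are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process
    dX = theta * (mu - X) dt + sigma dW, mean-reverting toward mu."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = (x[i - 1]
                + theta * (mu - x[i - 1]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# A perceived-valence trace starting high and drifting back to a neutral
# set point mu = 0 with reversion rate theta and noise level sigma.
trace = simulate_ou(theta=2.0, mu=0.0, sigma=0.3, x0=1.0, dt=0.01, n=2000)
print(trace[0], abs(trace[-500:].mean()) < 0.5)
```

The appeal of the OU model in this setting is that its two behavioural parameters (reversion rate and noise amplitude) are interpretable per rater and can be compared across social-trait attributions.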


Subjects
Social Participation, Social Perception, Humans
10.
Sensors (Basel) ; 23(6)2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36991630

ABSTRACT

In recent years, affective computing has emerged as a promising approach to studying user experience, replacing subjective methods that rely on participants' self-evaluation. Affective computing uses biometrics to recognize people's emotional states as they interact with a product. However, the cost of medical-grade biofeedback systems is prohibitive for researchers with limited budgets. An alternative solution is to use consumer-grade devices, which are more affordable. However, these devices require proprietary software to collect data, complicating data processing, synchronization, and integration. Additionally, researchers need multiple computers to control the biofeedback system, increasing equipment costs and complexity. To address these challenges, we developed a low-cost biofeedback platform using inexpensive hardware and open-source libraries. Our software can serve as a system development kit for future studies. We conducted a simple experiment with one participant to validate the platform's effectiveness, using one baseline and two tasks that elicited distinct responses. Our low-cost biofeedback platform provides a reference architecture for researchers with limited budgets who wish to incorporate biometrics into their studies. This platform can be used to develop affective computing models in various domains, including ergonomics, human factors engineering, user experience, human behavioral studies, and human-robot interaction.


Subjects
Research Design, Software, Humans, Computers, Biometry, Biofeedback (Psychology)
11.
Sensors (Basel) ; 23(20)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37896470

ABSTRACT

Facial expression recognition (FER) poses a complex challenge due to diverse factors such as variations in facial morphology, lighting conditions, and cultural nuances in emotion representation. To address these hurdles, FER algorithms leverage advanced data analysis to infer emotional states from facial expressions. In this study, we introduce a universal validation methodology that assesses any FER algorithm's performance through a web application in which subjects respond to emotive images. We present FeelPix, a labelled dataset generated from facial landmark coordinates during FER algorithm validation. FeelPix is available to train and test generic FER algorithms that accurately identify users' facial expressions. A testing algorithm classifies emotions based on FeelPix data, ensuring its reliability. Designed as a computationally lightweight solution, it finds applications in online systems. Our contribution improves facial expression recognition, enabling the identification and interpretation of emotions associated with facial expressions and offering insights into individuals' emotional reactions, with implications for healthcare, security, human-computer interaction, and entertainment.


Subjects
Facial Recognition, Humans, Reproducibility of Results, Emotions, Face, Facial Expression
12.
Sensors (Basel) ; 23(15)2023 Jul 29.
Article in English | MEDLINE | ID: mdl-37571571

ABSTRACT

This paper presents novel preliminary research that investigates the relationship between the flow of a group of jazz musicians, quantified through multi-person pose synchronization, and their collective emotions. We have developed a real-time software to calculate the physical synchronicity of team members by tracking the difference in arm, leg, and head movements using Lightweight OpenPose. We employ facial expression recognition to evaluate the musicians' collective emotions. Through correlation and regression analysis, we establish that higher levels of synchronized body and head movements correspond to lower levels of disgust, anger, sadness, and higher levels of joy among the musicians. Furthermore, we utilize 1-D CNNs to predict the collective emotions of the musicians. The model leverages 17 body synchrony keypoint vectors as features, resulting in a training accuracy of 61.47% and a test accuracy of 66.17%.
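Pose synchrony of the kind described, based on differences between tracked keypoints, reduces to a per-frame distance between two performers' poses; an illustrative sketch assuming 17 OpenPose-style (x, y) keypoints per person (the 17-point layout is an assumption for this toy):

```python
import numpy as np

def pose_synchrony(kp_a, kp_b):
    """Frame-wise synchrony between two performers' pose keypoints.
    kp_a, kp_b: (17, 2) arrays of (x, y) keypoints in a shared frame.
    Returns the mean Euclidean distance per keypoint; lower values
    indicate more synchronized body configurations."""
    return float(np.mean(np.linalg.norm(kp_a - kp_b, axis=1)))

a = np.zeros((17, 2))
b = a + np.array([3.0, 4.0])     # same pose, rigidly shifted by (3, 4)
print(pose_synchrony(a, a))      # identical poses
print(pose_synchrony(a, b))      # every keypoint is 5 units away
```

A time series of such distances, one per video frame, is the kind of feature vector the 1-D CNN above could consume; in practice one would normalize for body size and camera position first.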


Subjects
Disgust, Facial Recognition, Humans, Emotions, Facial Expression, Head Movements
13.
Entropy (Basel) ; 25(5)2023 May 13.
Article in English | MEDLINE | ID: mdl-37238549

ABSTRACT

Affective understanding of language is an important research focus in artificial intelligence. The large-scale annotated datasets of Chinese textual affective structure (CTAS) are the foundation for subsequent higher-level analysis of documents. However, there are very few published datasets for CTAS. This paper introduces a new benchmark dataset for the task of CTAS to promote development in this research direction. Specifically, our benchmark is a CTAS dataset with the following advantages: (a) it is Weibo-based, which is the most popular Chinese social media platform used by the public to express their opinions; (b) it includes the most comprehensive affective structure labels at present; and (c) we propose a maximum entropy Markov model that incorporates neural network features and experimentally demonstrate that it outperforms the two baseline models.

14.
Pers Ubiquitous Comput ; 27(2): 299-315, 2023.
Article in English | MEDLINE | ID: mdl-35528273

ABSTRACT

Movement and embodiment are communicative affordances central to social robotics, but designing embodied movements for robots often requires extensive knowledge of both robotics and movement theory. More accessible methods such as learning from demonstration often rely on physical access to the robot which is usually limited to research settings. Machine learning (ML) algorithms can complement hand-crafted or learned movements by generating new behaviors, but this requires large and diverse training datasets, which are hard to come by. In this work, we propose an embodied telepresence system for remotely crowdsourcing emotive robot movement samples that can serve as ML training data. Remote users control the robot through the internet using the motion sensors in their smartphones and view the movement either from a first-person or a third-person perspective. We evaluated the system in an online study where users created emotive movements for the robot and rated their experience. We then utilized the user-crafted movements as inputs to a neural network to generate new movements. We found that users strongly preferred the third-person perspective and that the ML-generated movements are largely comparable to the user-crafted movements. This work supports the usability of telepresence robots as a movement crowdsourcing platform.

15.
Neuroimage ; 249: 118873, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-34998969

ABSTRACT

This study applies adaptive mixture independent component analysis (AMICA) to learn a set of ICA models, each optimized by fitting a distributional model for each identified component process while maximizing component process independence within some subsets of time points of a multi-channel EEG dataset. Here, we applied 20-model AMICA decomposition to long-duration (1-2 h), high-density (128-channel) EEG data recorded while participants used guided imagination to imagine situations stimulating the experience of 15 specified emotions. These decompositions tended to return models identifying spatiotemporal EEG patterns or states within single emotion imagination periods. Model probability transitions reflected time-courses of EEG dynamics during emotion imagination, which varied across emotions. Transitions between models accounting for imagined "grief" and "happiness" were more abrupt and better aligned with participant reports, while transitions for imagined "contentment" extended into adjoining "relaxation" periods. The spatial distributions of brain-localizable independent component processes (ICs) were more similar within participants (across emotions) than within emotions (across participants). Across participants, brain regions with differences in IC spatial distributions (i.e., dipole density) between emotion imagination versus relaxation were identified in or near the left rostrolateral prefrontal, posterior cingulate cortex, right insula, bilateral sensorimotor, premotor, and associative visual cortex. No difference in dipole density was found between positive versus negative emotions. AMICA models of changes in high-density EEG dynamics may allow data-driven insights into brain dynamics during emotional experience, possibly enabling the improved performance of EEG-based emotion decoding and advancing our understanding of emotion.


Subjects
Cerebral Cortex/physiology, Electroencephalography/methods, Emotions/physiology, Functional Neuroimaging/methods, Imagination/physiology, Unsupervised Machine Learning, Adult, Humans
16.
Curr Psychiatry Rep ; 24(3): 203-211, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35212918

ABSTRACT

PURPOSE OF REVIEW: Emotion artificial intelligence (AI) is technology for emotion detection and recognition. Emotion AI is expanding rapidly in commercial and government settings outside of medicine, and will increasingly become a routine part of daily life. The goal of this narrative review is to increase awareness both of the widespread use of emotion AI, and of the concerns with commercial use of emotion AI in relation to people with mental illness. RECENT FINDINGS: This paper discusses emotion AI fundamentals, a general overview of commercial emotion AI outside of medicine, and examples of the use of emotion AI in employee hiring and workplace monitoring. The successful re-integration of patients with mental illness into society must recognize the increasing commercial use of emotion AI. There are concerns that commercial use of emotion AI will increase stigma and discrimination, and have negative consequences in daily life for people with mental illness. Commercial emotion AI algorithm predictions about mental illness should not be treated as medical fact.


Subjects
Mental Disorders, Psychiatry, Algorithms, Artificial Intelligence, Emotions, Humans, Mental Disorders/diagnosis, Mental Disorders/therapy
17.
Sensors (Basel) ; 22(11)2022 May 26.
Article in English | MEDLINE | ID: mdl-35684644

ABSTRACT

Affective computing through monitoring of physiological signals is currently a hot topic in the scientific literature, and also in industry. Many wearable devices are being developed for health or wellness tracking during daily life or sports activity. Likewise, other applications are being proposed for the early detection of risk situations involving sexual or violent aggression, via the identification of panic or fear emotions. The use of other sources of information, such as video or audio signals, will make multimodal affective computing a more powerful tool for emotion classification, improving detection capability. There are other biological signals that have not yet been explored and that could provide additional information to better disentangle negative emotions such as fear or panic. Catecholamines are hormones produced by the adrenal glands, two small glands located above the kidneys; they are released in the body in response to physical or emotional stress. The main catecholamines, namely adrenaline, noradrenaline, and dopamine, have been analysed, as well as four physiological variables: skin temperature, electrodermal activity, blood volume pulse (to calculate heart rate activity, i.e., beats per minute), and respiration rate. This work presents a comparison of the results provided by the analysis of physiological signals against catecholamine levels, from an experimental task with 21 female volunteers receiving audiovisual stimuli through an immersive virtual reality environment. Artificial intelligence algorithms for fear classification using physiological variables and plasma catecholamine concentration levels have been proposed and tested. The best results were obtained with the features extracted from the physiological variables; adding the maximum catecholamine variation during the five minutes after the video clip, or the five 1-min-interval measurements of these levels, did not improve classifier performance.


Subjects
Artificial Intelligence, Catecholamines, Emotions/physiology, Fear, Female, Hormones, Humans
18.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270936

ABSTRACT

Extensive application possibilities have rendered emotion recognition ineluctable and challenging in computer science as well as in human-machine interaction and affective computing, fields that, in turn, increasingly require real-time applications or interactions in everyday-life scenarios. However, while extremely desirable, an accurate and automated emotion classification approach remains a challenging issue. To this end, this study presents an automated emotion recognition model based on easily accessible physiological signals and deep learning (DL) approaches. As the DL algorithm, a feedforward neural network was employed, and its outcome was compared with canonical machine learning algorithms such as random forest (RF). The developed DL model relied on the combined use of wearables and contactless technologies, such as thermal infrared imaging. The model classifies the emotional state into four classes derived from the linear combination of valence and arousal (referring to the four-quadrant structure of the circumplex model of affect), with an overall accuracy of 70%, outperforming the 66% accuracy reached by the RF model. Considering the ecological and agile nature of the technique used, the proposed model could lead to innovative applications in the affective computing field.
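The four-quadrant structure of the circumplex model mentioned above maps a (valence, arousal) pair to one of four classes; a minimal sketch (the zero thresholds and example emotions are assumptions for illustration):

```python
def va_quadrant(valence: float, arousal: float) -> str:
    """Map (valence, arousal) in [-1, 1]^2 to a circumplex quadrant label."""
    if valence >= 0 and arousal >= 0:
        return "HAHV"   # high arousal, high valence (e.g. joy)
    if valence < 0 and arousal >= 0:
        return "HALV"   # high arousal, low valence (e.g. anger)
    if valence < 0:
        return "LALV"   # low arousal, low valence (e.g. sadness)
    return "LAHV"       # low arousal, high valence (e.g. contentment)

print(va_quadrant(0.7, 0.6))
```

Collapsing continuous valence/arousal predictions into these four labels is what turns a regression problem into the four-class classification task evaluated above.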


Subjects
Deep Learning, Electroencephalography, Arousal/physiology, Electroencephalography/methods, Emotions/physiology, Humans, Neural Networks (Computer)
19.
Sensors (Basel) ; 22(6)2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35336515

ABSTRACT

Every human being experiences emotions daily, e.g., joy, sadness, fear, and anger. These may be revealed through speech: words are often accompanied by our emotional states when we talk. Different acoustic emotional databases are freely available for solving the Emotional Speech Recognition (ESR) task. Unfortunately, many of them were generated under non-real-world conditions, i.e., actors acted out the emotions, which were recorded under fictitious, noise-free circumstances. Another weakness in the design of emotion recognition systems is the scarcity of patterns in the available databases, causing generalization problems and leading to overfitting. This paper examines how different recording environment elements impact system performance, using a simple logistic regression algorithm. Specifically, we conducted experiments simulating different scenarios with varying levels of Gaussian white noise, real-world noise, and reverberation. The results show a performance deterioration in all scenarios, with the error probability increasing from 25.57% to 79.13% in the worst case. Additionally, a virtual enlargement method and a robust multi-scenario speech-based emotion recognition system are proposed. Our system's average error probability of 34.57% is comparable to the best-case scenario's 31.55%. The findings support the prediction that simulated emotional speech databases do not offer sufficient closeness to real scenarios.
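Corrupting clean speech with white Gaussian noise at a chosen SNR, as in the simulated scenarios above, amounts to scaling the noise to a target power ratio; an illustrative sketch (the pure tone stands in for a speech signal):

```python
import numpy as np

def add_noise_at_snr(clean, snr_db, seed=0):
    """Return clean signal plus white Gaussian noise scaled so that the
    resulting signal-to-noise ratio equals snr_db (in decibels)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(clean))
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise power to p_clean / 10^(snr_db / 10).
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 8000))   # stand-in "speech"
noisy = add_noise_at_snr(x, snr_db=10)

# Verify the achieved SNR against the 10 dB target.
residual = noisy - x
snr = 10 * np.log10(np.mean(x ** 2) / np.mean(residual ** 2))
print(round(snr, 1))
```

Sweeping `snr_db` downward reproduces the kind of graded degradation scenarios the experiments describe; real-world noise recordings can be substituted for the Gaussian sample with the same scaling step.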


Subjects
Speech Perception, Speech, Acoustics, Emotions, Fear, Humans
20.
Sensors (Basel) ; 23(1)2022 Dec 29.
Article in English | MEDLINE | ID: mdl-36616980

ABSTRACT

Music is capable of conveying many emotions, but the level and type of emotion perceived by a listener is highly subjective. In this study, we present the Music Emotion Recognition with Profile information dataset (MERP). This dataset was collected through Amazon Mechanical Turk (MTurk) and features dynamic valence and arousal ratings of 54 selected full-length songs, together with music features and user profile information of the annotators. The songs were selected from the Free Music Archive using an innovative method (a Triple Neural Network with the OpenSmile toolkit) to identify the 50 songs with the most distinctive emotions; specifically, songs were chosen to fully cover the four quadrants of the valence-arousal space. Four additional songs were selected from the DEAM dataset to act as a benchmark and to filter out low-quality ratings. A total of 452 participants annotated the dataset, with 277 remaining after thorough cleaning; their demographic information, listening preferences, and musical background were recorded. We offer an extensive analysis of the resulting dataset, together with baseline emotion prediction models (a fully connected model and an LSTM model) for the newly proposed MERP dataset.


Subjects
Music, Humans, Arousal, Auditory Perception, Emotions, Music/psychology, Neural Networks (Computer)