Neural Netw. 2019 Oct;118:208-219.
Article in English | MEDLINE | ID: mdl-31299625

ABSTRACT

Multimodal emotion understanding enables AI systems to interpret human emotions. As the volume of video data grows rapidly, emotion understanding remains challenging because of the inherent ambiguity of the data and the diversity of video content. Although deep learning has made considerable progress in feature learning from big data, deep models are deterministic and are typically used as "black boxes", with no capability to represent the ambiguity inherent in the data. Since the possibility theory of fuzzy logic focuses on knowledge representation and reasoning under uncertainty, we incorporate concepts from fuzzy logic into a deep learning framework. This paper presents a novel convolutional neuro-fuzzy network, an integration of convolutional neural networks into the fuzzy-logic domain, to extract high-level emotion features from text, audio, and visual modalities. The feature sets extracted by the fuzzy convolutional layers are compared with those of conventional convolutional layers at the same level using t-distributed Stochastic Neighbor Embedding (t-SNE). We demonstrate a multimodal emotion-understanding framework built on an adaptive neuro-fuzzy inference system (ANFIS) that can generate new rules to classify emotions. For emotion understanding of movie clips, we concatenate the audio, visual, and text features extracted by the proposed convolutional neuro-fuzzy network to train the ANFIS. We then go one step further and explain how the deep model arrives at its conclusions, a step toward interpretable AI: to identify which visual, text, and audio aspects are important for emotion understanding, we use the direct linear non-Gaussian acyclic model (DirectLiNGAM) to explain relevance in terms of causal relationships between features of the deep hidden layers. The critical features so identified are fed back into the proposed multimodal framework to achieve higher accuracy.
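To illustrate the fuzzy-inference side of such a framework, the sketch below implements a minimal forward pass of a first-order Takagi-Sugeno system, the kind of rule base an ANFIS learns: inputs are fuzzified with Gaussian membership functions, rule firing strengths are computed with a product t-norm, normalized, and combined with linear consequents. The rule parameters here are invented for the example and are not taken from the paper.

```python
import math

def gauss(x, c, s):
    """Gaussian membership of x in a fuzzy set centred at c with width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x, y, rules):
    """One forward pass of a first-order Sugeno fuzzy system.

    Layer 1: fuzzify each input with a Gaussian membership function.
    Layer 2: rule firing strength = product of the memberships.
    Layer 3: normalise the firing strengths.
    Layers 4-5: weight each rule's linear consequent p*x + q*y + r
    by its normalised strength and sum.
    """
    firing = []
    for (cx, sx), (cy, sy), _ in rules:
        firing.append(gauss(x, cx, sx) * gauss(y, cy, sy))
    total = sum(firing) or 1.0
    out = 0.0
    for w, (_, _, (p, q, r)) in zip(firing, rules):
        out += (w / total) * (p * x + q * y + r)
    return out

# Two illustrative rules (all parameters made up for the sketch):
#   IF x is LOW  AND y is LOW  THEN f = x + y
#   IF x is HIGH AND y is HIGH THEN f = 0.5*x + 0.5*y + 1
rules = [
    ((0.0, 1.0), (0.0, 1.0), (1.0, 1.0, 0.0)),
    ((2.0, 1.0), (2.0, 1.0), (0.5, 0.5, 1.0)),
]

print(anfis_forward(1.0, 1.0, rules))  # both rules fire equally here
```

In a trained ANFIS the membership centres/widths and the consequent coefficients would be fitted from data (here, from the concatenated multimodal features), and new rules could be added as new input clusters appear.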


Subject(s)
Deep Learning; Emotions; Fuzzy Logic; Motion Pictures; Neural Networks, Computer; Algorithms; Deep Learning/classification; Emotions/physiology; Humans; Motion Pictures/classification; Photic Stimulation/methods