A Novel Redundant Validation IoT System for Affective Learning Based on Facial Expressions and Biological Signals.
Marceddu, Antonio Costantino; Pugliese, Luigi; Sini, Jacopo; Espinosa, Gustavo Ramirez; Amel Solouki, Mohammadreza; Chiavassa, Pietro; Giusto, Edoardo; Montrucchio, Bartolomeo; Violante, Massimo; De Pace, Francesco.
Affiliation
  • Marceddu AC; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Pugliese L; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Sini J; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Espinosa GR; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Amel Solouki M; Electronics Department, Engineering School, Pontificia Universidad Javeriana, Bogota 1301, Colombia.
  • Chiavassa P; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Giusto E; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Montrucchio B; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • Violante M; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
  • De Pace F; Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy.
Sensors (Basel) ; 22(7)2022 Apr 04.
Article in En | MEDLINE | ID: mdl-35408387
Teaching is an activity that requires understanding the class's reaction in order to evaluate the effectiveness of the teaching methodology. This is easy to achieve in small classrooms, but it can be challenging in classes of 50 or more students. This paper proposes a novel Internet of Things (IoT) system to aid teachers in their work, based on the redundant use of non-invasive techniques such as facial expression recognition and physiological data analysis. Facial expression recognition is performed using a Convolutional Neural Network (CNN), while physiological data are obtained via Photoplethysmography (PPG). Drawing on Russell's model, we grouped the most important of Ekman's facial expressions recognized by the CNN into active and passive ones. Then, operations such as thresholding and windowing were performed to make it possible to compare and analyze the results from both sources. Using a window size of 100 samples, both sources detected a level of attention of about 55.5% in the in-presence lecture tests. By comparing the results of the in-presence and pre-recorded remote lectures, it can be noted that, thanks to the validation with the physiological data, facial expressions alone seem useful in determining students' level of attention during in-presence lectures.
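The abstract describes the pipeline only at a high level. The sketch below is an assumed Python illustration of how the described active/passive grouping, thresholding, and 100-sample windowing could be combined into per-window attention estimates; the expression sets, the PPG threshold, and all function names are assumptions for clarity, not the authors' implementation.

```python
# Illustrative sketch only: class groupings, threshold choice, and names are assumed.
import numpy as np

# Hypothetical grouping of Ekman's expressions into "active"/"passive"
# along the arousal dimension of Russell's circumplex model.
ACTIVE = {"happiness", "surprise", "anger", "fear"}
PASSIVE = {"sadness", "disgust", "neutral"}

def attention_from_expressions(labels, window=100):
    """Fraction of 'active' CNN-predicted expressions in each window."""
    flags = np.array([1.0 if lbl in ACTIVE else 0.0 for lbl in labels])
    n = len(flags) // window * window
    return flags[:n].reshape(-1, window).mean(axis=1)

def attention_from_ppg(heart_rate, threshold=None, window=100):
    """Fraction of PPG-derived heart-rate samples above a baseline in each window."""
    hr = np.asarray(heart_rate, dtype=float)
    if threshold is None:
        threshold = hr.mean()  # assumed baseline; the paper's threshold may differ
    flags = (hr > threshold).astype(float)
    n = len(flags) // window * window
    return flags[:n].reshape(-1, window).mean(axis=1)

# The two per-window series can then be compared to cross-validate the facial
# source against the physiological one (about 55.5% attention was reported by
# both sources in the in-presence tests).
```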

Full text: 1 Database: MEDLINE Main subject: Facial Recognition / Internet of Things Limit: Humans Language: En Journal: Sensors (Basel) Year: 2022 Document type: Article Country of affiliation: Italy
