Results 1 - 2 of 2
1.
Sensors (Basel); 22(13), 2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35808185

ABSTRACT

Mechatronic systems, such as mobile robots, are fairly complex: they combine electromechanical actuation components and sensing elements supervised by microcontrollers running complex embedded software. This paper proposes a novel approach to help mobile-robotics developers adopt a rigorous development process for designing and verifying a robot's capabilities to detect and mitigate random hardware failures affecting its sensors or actuators. Unfortunately, assessing the interactions between the various safety- and mission-critical subsystems is quite complex. Failure mode and effects analysis (FMEA), alongside an analysis of the failure-detection capabilities (FMEDA), is the state-of-the-art methodology for performing such an analysis. Various guidelines are available, and the authors decided to follow the one released by AIAG & VDA in June 2019. Since the robot's behavior is driven by embedded software, the FMEA has been integrated with the hardware/software interaction analysis described in the ECSS-Q-ST-30-02C standard. The core of this proposal is to show how a simulation-based approach, in which the mechanical and electrical/electronic components are simulated alongside the embedded software, can effectively support FMEA. As a benchmark application, we considered the mobility system of a proof-of-concept assistance rover for Mars exploration designed by the D.I.A.N.A. student team at Politecnico di Torino. Using the adopted approach, we describe how to develop the detection and mitigation strategies and how to determine their effectiveness, with a particular focus on failures affecting the sensors.
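The risk ranking behind an FMEA can be sketched as a classical risk-priority-number (RPN) computation. Note this is only an illustrative stand-in: the AIAG & VDA handbook the paper follows actually replaces the RPN with Action Priority tables, and all failure modes and ratings below are invented.

```python
# Minimal FMEA-style bookkeeping for sensor/actuator failure modes.
# Severity (S), Occurrence (O), and Detection (D) are each rated 1-10;
# the classical Risk Priority Number is RPN = S * O * D.
from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    mode: str
    severity: int    # 1 (no effect) .. 10 (hazardous)
    occurrence: int  # 1 (remote) .. 10 (very frequent)
    detection: int   # 1 (almost certain detection) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for a rover mobility system.
modes = [
    FailureMode("wheel encoder", "stuck-at value", 7, 3, 4),
    FailureMode("IMU", "drift", 6, 4, 3),
    FailureMode("motor driver", "open circuit", 8, 2, 2),
]

# Rank failure modes so mitigation effort targets the highest-risk ones first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.component}: {fm.mode} -> RPN {fm.rpn}")
```

In a simulation-based workflow, the occurrence and detection ratings would be informed by injecting each failure mode into the simulated plant and observing whether the embedded software's checks flag it.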


Subject(s)
Robotics , Algorithms , Computer Simulation , Computers , Humans , Robotics/methods , Software
2.
Sensors (Basel); 22(7), 2022 Apr 04.
Article in English | MEDLINE | ID: mdl-35408387

ABSTRACT

Teaching is an activity that requires understanding the class's reaction in order to evaluate the effectiveness of the teaching methodology. This is easy to achieve in small classrooms, but may be challenging in classes of 50 or more students. This paper proposes a novel Internet of Things (IoT) system to aid teachers in their work, based on the redundant use of non-invasive techniques such as facial expression recognition and physiological data analysis. Facial expression recognition is performed using a Convolutional Neural Network (CNN), while physiological data are obtained via photoplethysmography (PPG). Using Russell's model, we grouped the most important of Ekman's facial expressions recognized by the CNN into active and passive. Operations such as thresholding and windowing were then performed to make the results from both sources comparable. With a window size of 100 samples, both sources detected an attention level of about 55.5% in the in-presence lecture tests. Comparing the results from in-presence and pre-recorded remote lectures shows that, thanks to the validation against physiological data, facial expressions alone seem useful for determining students' level of attention in in-presence lectures.
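The windowing-and-thresholding step can be sketched as follows. The per-sample scores, the 0.5 threshold, and the per-window-mean fusion rule are assumptions for illustration, not the paper's actual pipeline; only the 100-sample window size comes from the abstract.

```python
# Split a per-sample score stream into fixed-size windows and report the
# fraction of windows whose mean exceeds a threshold ("attentive" windows).
def windowed_attention(samples, window=100, threshold=0.5):
    """Fraction of non-overlapping windows classified as attentive."""
    windows = [samples[i:i + window]
               for i in range(0, len(samples) - window + 1, window)]
    attentive = sum(1 for w in windows if sum(w) / len(w) > threshold)
    return attentive / len(windows) if windows else 0.0

# Toy per-sample scores in [0, 1]: e.g. a CNN "active expression" probability
# and a normalized PPG-derived arousal score over the same time span.
cnn_scores = [0.8] * 100 + [0.3] * 100 + [0.7] * 100
ppg_scores = [0.6] * 100 + [0.4] * 100 + [0.9] * 100

print(windowed_attention(cnn_scores))  # 2 of 3 windows above threshold
print(windowed_attention(ppg_scores))  # agrees with the CNN estimate here
```

Applying the same window size and comparable thresholds to both streams is what makes the two estimates directly comparable, which is the point of the redundancy in the proposed system.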


Subject(s)
Facial Recognition , Internet of Things , Facial Expression , Humans , Neural Networks, Computer , Photoplethysmography