Results 1 - 5 of 5
1.
Sensors (Basel); 23(5), 2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36904689

ABSTRACT

We developed a mobile application for cervical rehabilitation that uses a non-invasive camera-based head-tracker sensor to monitor neck movements. The intended users should be able to run the application on their own mobile devices, but devices differ in camera sensors and screen dimensions, which could affect user performance and neck movement monitoring. In this work, we studied the influence of mobile device type on camera-based monitoring of neck movements for rehabilitation purposes. We conducted an experiment to test whether the characteristics of a mobile device affect neck movements when using the application with the head-tracker. The experiment consisted of using our application, which contains an exergame, on three mobile devices. We used wireless inertial sensors to measure the neck movements performed in real time while using the different devices. The results showed that the effect of device type on neck movements was not statistically significant. We included sex as a factor in the analysis, but found no statistically significant interaction between the sex and device variables. Our mobile application proved to be device-agnostic, which will allow intended users to use the mHealth application regardless of device type. Future work can therefore continue with the clinical evaluation of the developed application to test the hypothesis that the exergame improves therapeutic adherence in cervical rehabilitation.
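A minimal sketch of the kind of two-factor analysis the abstract describes (device type and sex as factors, a neck-movement measure as the response), assuming hypothetical column names and toy data; the paper does not publish its analysis code:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per trial, with the neck ROM
# reached (degrees), the device used, and the participant's sex.
data = pd.DataFrame({
    "rom_deg": [58.1, 61.3, 59.7, 60.2, 57.9, 62.0, 60.5, 58.8],
    "device":  ["phone", "phone", "tablet", "tablet",
                "phone", "phone", "tablet", "tablet"],
    "sex":     ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# Two-way ANOVA: main effects of device and sex, plus their interaction.
model = ols("rom_deg ~ C(device) * C(sex)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A non-significant `C(device)` row and `C(device):C(sex)` interaction would correspond to the device-agnostic result the abstract reports.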


Subjects
Mobile Applications, Telemedicine, Computers, Handheld
2.
Sensors (Basel); 23(1), 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36616728

ABSTRACT

Recognizing facial expressions has been a persistent goal in the scientific community. Since the rise of artificial intelligence, convolutional neural networks (CNNs) have become popular for recognizing facial expressions, as images can be used directly as input. Current CNN models can achieve high recognition rates, but they give no clue about their reasoning process. Explainable artificial intelligence (XAI) has been developed to help interpret the results obtained by machine learning models. When dealing with images, one of the most widely used XAI techniques is LIME, which highlights the areas of an image that contribute to a classification. The CEM method emerged as an alternative to LIME, providing explanations in a way that is natural for human classification: besides highlighting what is sufficient to justify a classification, it also identifies what should be absent to maintain it and to distinguish it from another classification. This study presents the results of comparing LIME and CEM applied to complex images such as facial expression images. While CEM can explain results on images described by a reduced number of features, LIME is the method of choice when dealing with images described by a huge number of features.
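A minimal sketch of how LIME highlights the superpixels that drive a facial-expression classification, using the `lime` Python package; the classifier and input image are stand-ins so the sketch runs end to end, since the paper's models are not published:

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classifier_fn(images):
    # Placeholder for the trained CNN: in practice this would wrap
    # model.predict(images). Here it returns pseudo-probabilities over
    # 7 expression classes so the sketch is self-contained.
    rng = np.random.default_rng(0)
    probs = rng.random((len(images), 7))
    return probs / probs.sum(axis=1, keepdims=True)

face_image = np.random.rand(128, 128, 3)  # stand-in for a face image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    face_image, classifier_fn,
    top_labels=3,      # explain the three most likely expressions
    hide_color=0,      # value used to "remove" superpixels
    num_samples=1000,  # perturbed samples for the local surrogate model
)

# Keep only the superpixels that argue FOR the top predicted expression.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False,
)
highlighted = mark_boundaries(img, mask)  # image with highlighted regions
```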


Subjects
Artificial Intelligence, Facial Expression, Humans, Machine Learning, Neural Networks, Computer
3.
Sensors (Basel); 21(6), 2021 Mar 23.
Article in English | MEDLINE | ID: mdl-33806813

ABSTRACT

Vision-based interfaces are used for monitoring human motion. In particular, camera-based head-trackers interpret the movement of the user's head to interact with devices. Neck pain is one of the most significant musculoskeletal conditions in terms of prevalence and years lived with disability. A common treatment is therapeutic exercise, which requires high motivation and adherence to treatment. In this work, we conducted an exploratory experiment to validate the use of a non-invasive camera-based head-tracker for monitoring neck movements, by means of an exergame for performing rehabilitation exercises on a mobile device. To explore feasibility, the experiments (1) validated the neck range of motion (ROM) that the camera-based head-tracker was able to detect, and (2) verified that the mobile application is safe in terms of the neck ROM it demands. The results not only confirmed safety, in terms of the ROM required for different preset patient profiles according to previously established safety parameters, but also demonstrated the effectiveness of the camera-based head-tracker for monitoring neck movements for rehabilitation purposes.
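The abstract does not specify the head-tracker's internals. A common way to estimate head (and thus neck) rotation from a single camera is to fit a generic 3D face model to detected 2D facial landmarks with OpenCV's solvePnP; the sketch below is under that assumption, with hypothetical landmark coordinates:

```python
import cv2
import numpy as np

# Generic 3D reference points of a face model (nose tip, chin, eye and
# mouth corners), in arbitrary model units.
MODEL_POINTS = np.array([
    (0.0,      0.0,    0.0),   # nose tip
    (0.0,   -330.0,  -65.0),   # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0,  170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0,  -150.0, -125.0),  # right mouth corner
])

def head_angles(image_points, frame_size):
    """Estimate head pitch/yaw/roll (degrees) from 2D landmark positions."""
    h, w = frame_size
    focal = w  # rough focal-length approximation from the image width
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0,     0,     1]], dtype="double")
    ok, rvec, tvec = cv2.solvePnP(
        MODEL_POINTS, image_points, camera_matrix,
        distCoeffs=None, flags=cv2.SOLVEPNP_ITERATIVE)
    rot, _ = cv2.Rodrigues(rvec)
    # Decompose the rotation matrix into Euler angles (degrees).
    angles, *_ = cv2.RQDecomp3x3(rot)
    pitch, yaw, roll = angles
    return pitch, yaw, roll

# Example call with hypothetical landmark positions in a 640x480 frame.
pts = np.array([(320, 240), (322, 330), (250, 200),
                (390, 200), (270, 300), (370, 300)], dtype="double")
print(head_angles(pts, frame_size=(480, 640)))
```

Tracking these angles frame by frame would yield the neck ROM values that the study compares against the inertial-sensor ground truth.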


Subjects
Mobile Applications, Exercise Therapy, Head Movements, Humans, Neck, Range of Motion, Articular
4.
Sensors (Basel); 16(2): 254, 2016 Feb 19.
Article in English | MEDLINE | ID: mdl-26907288

ABSTRACT

Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies covering the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions, so their findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and configurable factors such as gain and device orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study evaluating usage and user perception when playing a game controlled by head motion. Finally, we publish the game in an application store to make it available to a large number of potential users and contexts, and we collect usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.
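A minimal sketch of the pointing behaviour described above, where the tracked head position drives an on-screen cursor through a configurable gain; the function name and the linear transfer function are assumptions, not the paper's actual mapping:

```python
def head_to_cursor(head_xy, neutral_xy, gain, screen_wh):
    """Map the tracked head position (normalised 0..1 camera coordinates)
    to cursor pixel coordinates, relative to a calibrated neutral pose."""
    sw, sh = screen_wh
    # Displacement from the neutral pose, amplified by the gain setting.
    dx = (head_xy[0] - neutral_xy[0]) * gain
    dy = (head_xy[1] - neutral_xy[1]) * gain
    # Centre the cursor on the neutral pose and clamp to the screen.
    x = min(max(sw / 2 + dx * sw, 0), sw - 1)
    y = min(max(sh / 2 + dy * sh, 0), sh - 1)
    return int(x), int(y)

# Example: a small head shift right with gain 2.0 on a 1080x1920 screen.
print(head_to_cursor((0.55, 0.50), (0.50, 0.50), gain=2.0,
                     screen_wh=(1080, 1920)))  # -> (648, 960)
```

A higher gain lets small head movements cover the whole screen at the cost of pointing precision, which is why gain and target size are evaluated together in the study.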

5.
PLoS One; 18(9): e0287006, 2023.
Article in English | MEDLINE | ID: mdl-37773958

ABSTRACT

It is well known that lighting conditions have an important influence on the automatic recognition of human expressions. Although the impact of lighting on the perception of emotions has been studied in different works, databases of facial expressions do not consider intentional lighting. In this work, a new database of facial expressions performed by virtual characters under four different lighting configurations is presented. This database, named UIBVFEDPlus-Light, is an extension of the previously published UIBVFED virtual facial expression dataset. It includes 100 characters, four lighting configurations, and a software application that allows one to interactively visualize the expressions and adjust their intensity and lighting condition. A usage experience is also described, showing how this work can raise new challenges for facial expression and emotion recognition techniques under common lighting environments, thus opening new research perspectives in this area.
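A minimal sketch of how expression intensity could be managed in such a viewer, assuming the virtual characters are animated with blendshape weights that scale linearly with intensity; all names here are hypothetical, as the application's internals are not published:

```python
from dataclasses import dataclass

@dataclass
class Expression:
    name: str
    weights: dict  # blendshape name -> weight at full intensity (0..1)

def at_intensity(expr: Expression, intensity: float) -> dict:
    """Scale an expression's blendshape weights by an intensity in [0, 1]."""
    t = min(max(intensity, 0.0), 1.0)
    return {shape: w * t for shape, w in expr.weights.items()}

# Hypothetical happiness expression defined over four blendshapes.
smile = Expression("happiness", {"mouthSmile_L": 1.0, "mouthSmile_R": 1.0,
                                 "cheekSquint_L": 0.6, "cheekSquint_R": 0.6})
print(at_intensity(smile, 0.5))  # half-intensity happiness
```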


Subjects
Facial Expression, Facial Recognition, Humans, Lighting, Emotions, Software, Recognition, Psychology