Results 1 - 3 of 3
1.
Sensors (Basel) ; 18(8)2018 Aug 03.
Article in English | MEDLINE | ID: mdl-30081512

ABSTRACT

OBJECTIVES: The aim of this work is to provide a scoping review that compiles and classifies systems that help train and enhance psychomotor skills in hearing-impaired (HI) children. METHODS: Based on an exhaustive review of psychomotor deficits in HI children, the scoping review proceeded as follows: select keywords and identify synonyms, select databases and prepare queries using the keywords, assess the quality of the retrieved works with the PEDro Scale, classify the works by psychomotor competences, and analyze the interactive systems (e.g., sensors) and the results they achieved. RESULTS: Thirteen works were found. They used a variety of sensors and input devices, such as cameras, contact sensors, touch screens, mouse and keyboard, tangible objects, and haptic and virtual reality (VR) devices. CONCLUSIONS: The research made it possible to contextualize the deficits and psychomotor problems that prevent the normal development of HI children. Additionally, the analysis of different interactive systems aimed at this population established the current state of the use of these technologies and how they contribute to psychomotor rehabilitation.


Subjects
Hearing Loss/physiopathology, Hearing Loss/rehabilitation, Psychomotor Performance, Child, Humans, Technology, User-Computer Interface, Virtual Reality
2.
Sensors (Basel) ; 16(2): 254, 2016 Feb 19.
Article in English | MEDLINE | ID: mdl-26907288

ABSTRACT

Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Because mobile contexts vary widely, this evaluation requires extensive studies covering the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; their findings therefore cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy, the configurable factors such as gain and device orientation, and the optimal target size for the interface. Second, we present an in-the-wild study evaluating usage and user perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we record usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.
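The head-as-pointing-device idea above can be illustrated as a simple coordinate transform: the tracked head centre in the camera frame is mapped to a screen position, with a gain factor amplifying motion. The sketch below is a hypothetical illustration, not the paper's implementation; the gain parameter and the clamping behaviour are assumptions.

```python
def head_to_pointer(face_cx, face_cy, frame_w, frame_h,
                    screen_w, screen_h, gain=1.5):
    """Map the detected head centre (pixel coords in the camera frame)
    to a pointer position on screen, amplifying motion by `gain`
    around the frame centre and clamping to the screen bounds."""
    # Normalise the head offset from the frame centre to [-0.5, 0.5]
    nx = face_cx / frame_w - 0.5
    ny = face_cy / frame_h - 0.5
    # Apply gain, re-centre on the screen, and clamp
    px = min(max((0.5 + gain * nx) * screen_w, 0), screen_w - 1)
    py = min(max((0.5 + gain * ny) * screen_h, 0), screen_h - 1)
    return int(px), int(py)
```

With `gain > 1`, small head movements produce larger pointer displacements, which is one way the "gain" factor studied in the pointing-device evaluation could be realised.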

3.
Digit Health ; 10: 20552076241259664, 2024.
Article in English | MEDLINE | ID: mdl-38846372

ABSTRACT

Objective: Assessing pain in individuals with neurological conditions such as cerebral palsy is challenging due to their limited ability to self-report and express pain. Current methods lack sensitivity and specificity, underlining the need for a reliable evaluation protocol. An automated facial recognition system could revolutionize pain assessment for such patients. The research focuses on two primary goals: developing a dataset of facial pain expressions for individuals with cerebral palsy, and creating a deep-learning-based automated pain assessment system tailored to this group. Methods: The study trained ten neural networks using three pain image databases and a newly curated CP-PAIN dataset of 109 images from cerebral palsy patients, classified by experts using the Facial Action Coding System. Results: The InceptionV3 model demonstrated promising results, achieving 62.67% accuracy and a 61.12% F1 score on the CP-PAIN dataset. Explainable AI techniques confirmed that the features crucial for pain identification were consistent across models. Conclusion: The study underscores the potential of deep learning in developing reliable pain detection systems using facial recognition for individuals with communication impairments due to neurological conditions. A more extensive and diverse dataset could further enhance the models' sensitivity to subtle pain expressions in cerebral palsy patients and possibly extend to other complex neurological disorders. This research marks a significant step toward more empathetic and accurate pain management for vulnerable populations.
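The F1 score reported for the InceptionV3 model is the harmonic mean of precision and recall. A minimal sketch of how a macro-averaged F1 could be computed from per-class confusion counts follows; the per-class `(tp, fp, fn)` representation is an assumption for illustration, not the paper's evaluation code.

```python
def f1_per_class(tp, fp, fn):
    """F1 for one class from true-positive, false-positive,
    and false-negative counts; 0.0 when undefined."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def macro_f1(counts):
    """Unweighted mean of per-class F1 scores.
    counts: list of (tp, fp, fn) tuples, one per class."""
    return sum(f1_per_class(*c) for c in counts) / len(counts)
```

Macro averaging treats each class equally regardless of its frequency, which matters on a small, likely imbalanced dataset such as the 109-image CP-PAIN collection.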
