1.
Sensors (Basel); 20(19), 2020 Sep 26.
Article in English | MEDLINE | ID: mdl-32993047

ABSTRACT

Rehabilitative mobility aids are used extensively by physically impaired people. Efforts are being made to develop human machine interfaces (HMIs) that manipulate biosignals to better control electromechanical mobility aids, especially wheelchairs. Creating precise control commands such as move forward, left, right, backward and stop via biosignals in an appropriate HMI is the real challenge, as people with a high level of disability (quadriplegia, paralysis, etc.) are unable to drive conventional wheelchairs. Therefore, a novel system driven by optical signals, addressing the needs of this physically impaired population, is introduced in this paper. The system is divided into two parts: the first comprises detection of eyeball movements together with processing of the optical signal, and the second encompasses the mechanical assembly module, i.e., control of the wheelchair through motor-driving circuitry. A web camera is used to capture real-time images, and the processor is a Raspberry Pi running a Linux operating system. To make the system more congenial and reliable, a voice-controlled mode is incorporated into the wheelchair. To appraise the system's performance, a basic wheelchair skill test (WST) was carried out. Basic skills such as movement on plain and rough surfaces in forward and reverse directions and turning capability were analyzed for comparison with other existing wheelchair setups on the basis of controlling mechanisms, compatibility, design models, and usability in diverse conditions. The system operates successfully with an average response time of 3 s in eye-control mode and 3.4 s in voice-control mode.
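To make the eye-control pipeline concrete, the following is a minimal sketch (not the authors' code) of how a pupil position extracted from a webcam frame could be mapped to wheelchair commands such as LEFT, RIGHT, FORWARD, and STOP; the OpenCV Haar-cascade detector, the thresholds, and the printed command in place of a motor-driver call are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): infer a coarse gaze direction from a
# webcam frame and map it to a wheelchair command. Thresholds are illustrative.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def gaze_command(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) == 0:
        return "STOP"                      # no eye found: fail safe
    x, y, w, h = eyes[0]
    roi = gray[y:y + h, x:x + w]
    # The dark pupil stands out after inverse thresholding; its centroid gives gaze.
    _, mask = cv2.threshold(roi, 50, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return "STOP"
    cx = m["m10"] / m["m00"] / w           # normalised horizontal pupil position
    if cx < 0.35:
        return "LEFT"
    if cx > 0.65:
        return "RIGHT"
    return "FORWARD"

cap = cv2.VideoCapture(0)                  # web camera, as in the paper
ok, frame = cap.read()
if ok:
    print(gaze_command(frame))             # would be sent to the motor-driving circuitry
cap.release()
```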


Subjects
Disabled Persons, Eye Movements, User-Computer Interface, Voice, Wheelchairs, Humans
2.
Front Robot AI; 11: 1362294, 2024.
Article in English | MEDLINE | ID: mdl-38500802

ABSTRACT

Cobots are robots built for human-robot collaboration (HRC) in a shared environment. In the aftermath of disasters, cobots can cooperate with humans to mitigate risks and increase the possibility of rescuing people in distress. This study examines the resilient and dynamic synergy between a swarm of snake robots, first responders, and people to be rescued. The framework implements the delivery of first aid to potential victims dispersed around a disaster environment. In the HRC simulation framework presented in this study, the first responder initially deploys a UAV, a swarm of snake robots, and emergency items. The UAV provides the first responder with the site planimetry, which includes the layout of the area as well as the precise locations of the individuals in need of rescue and the aiding goods to be delivered. Each individual snake robot in the swarm is then assigned a victim. Subsequently, each snake robot determines an optimal path using the A* algorithm to approach and reach its respective target while avoiding obstacles. Using its prehensile capabilities, each snake robot adeptly grasps the aiding object to be dispatched. The snake robots successively arrive at the delivery location near the victim, following their optimal paths, and proceed to release the items. To demonstrate the potential of the framework, several case studies are outlined concerning the execution of operations that combine locomotion, obstacle avoidance, grasping, and deploying. The CoppeliaSim robotic simulator is utilised for this framework. The analysis of the motion of the snake robots on the path shows highly accurate movement with and without the emergency item. This study is a step towards a holistic semi-autonomous search and rescue operation.
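As a concrete illustration of the path-planning step, the following is a minimal sketch of grid-based A* with a Manhattan heuristic; the grid, start, and goal values are hypothetical and not taken from the paper.

```python
# Minimal sketch of the grid-based A* planning step described above.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# Hypothetical 3x4 map: the middle row is mostly blocked, forcing a detour.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # path around the obstacle row
```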

3.
Front Robot AI; 11: 1356345, 2024.
Article in English | MEDLINE | ID: mdl-38957217

ABSTRACT

In this study, we address the critical need for enhanced situational awareness and victim detection capabilities in Search and Rescue (SAR) operations amidst disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited manoeuvrability and the challenge of distinguishing victims from debris. Recognising these gaps, our research introduces a novel technological framework that integrates advanced gesture recognition with cutting-edge deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios. At the core of our methodology is the development and implementation of the Meerkat Optimization Algorithm-Stacked Convolutional Neural Network-Bi-Long Short Term Memory-Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand gesture detection with its remarkable performance metrics: accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. Furthermore, we leverage the capabilities of the latest YOLOv8 deep learning model, trained on specialised datasets to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives. Our comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system demonstrated exceptional proficiency in navigating through obstructions and rapidly locating victims, even in visually degraded environments with smoke, clutter, and poor lighting. Our study not only highlights the critical gaps in current SAR response capabilities but also offers a pioneering solution through a synergistic blend of gesture-based control, deep learning, and purpose-built robotics. The key findings underscore the potential of our integrated technological framework to significantly enhance UGV performance in disaster scenarios, thereby optimising life-saving outcomes when time is of the essence. This research paves the way for future advancements in SAR technology, with the promise of more efficient and reliable rescue operations in the face of disaster.
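For the camera-based victim-detection component, the following is a minimal sketch of running inference with the ultralytics YOLOv8 API; the paper trains on specialised datasets, so the stock yolov8n.pt weights, the sample image URL, and the COCO "person" class used here are stand-ins, not the authors' setup.

```python
# Minimal sketch of YOLOv8 inference for detecting people in a frame.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                              # pretrained COCO weights as a placeholder
results = model("https://ultralytics.com/images/bus.jpg")  # sample image; a UGV camera frame in practice

for box in results[0].boxes:
    label = model.names[int(box.cls)]
    if label == "person":                               # proxy for "victim" in this sketch
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"victim candidate at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")
```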

4.
Materials (Basel); 15(18), 2022 Sep 06.
Article in English | MEDLINE | ID: mdl-36143505

ABSTRACT

Fatigue cracks are a major defect in metal alloys and pose particular defect-evaluation challenges in aluminum aircraft alloys. Existing inline inspection tools exhibit measurement uncertainties. Physics-based methods for crack growth prediction utilize stress analysis models and the crack growth model governed by Paris' law. When used for long-term crack growth prediction, these models yield sub-optimal solutions and pose several technical limitations. In this study, metaheuristic optimization algorithms are combined with neural networks to accurately forecast crack growth rates in aluminum alloys. The performance of the hybrid metaheuristic optimization-neural network models has been tested on experimental data. A dynamic Levy flight function has been incorporated into a chimp optimization algorithm to accurately train the deep neural network. The performance of the proposed predictive model has been tested using 7055 T7511 and 6013 T651 alloys against four competing techniques. Results show that the proposed predictive model achieves lower correlation error and the lowest relative error, mean absolute error, and root mean square error values while shortening the run time by 11.28%. The experimental study and statistical analysis show that the crack length and growth rates are predicted with high fidelity and very high resolution.
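As context for the physics-based baseline the abstract refers to, the following is a minimal sketch that numerically integrates Paris' law, da/dN = C(ΔK)^m, for a centre crack under constant-amplitude loading; the coefficients, stress range, and crack lengths are illustrative and not the paper's 7055 T7511 or 6013 T651 data.

```python
# Minimal sketch: cycle-by-cycle integration of Paris' law for a centre crack
# with a geometry factor of 1. All numerical values are illustrative.
import math

C, m = 1e-11, 3.0          # Paris' law coefficients (mm/cycle, MPa*sqrt(mm) units assumed)
delta_sigma = 100.0        # applied stress range, MPa
a, a_final = 1.0, 10.0     # initial and final half crack length, mm
cycles = 0

while a < a_final:
    delta_K = delta_sigma * math.sqrt(math.pi * a)   # stress-intensity factor range
    a += C * delta_K ** m                            # crack growth in this cycle
    cycles += 1

print(f"{cycles} cycles to grow the crack from 1 mm to {a:.2f} mm")
```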
