Results 1 - 20 of 1,197
2.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). This paper describes the design of the inductive sensor array integrated into a flexible and wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form an LC tank circuit along with the externally connected inductors and capacitors. Changes in the elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal processing pipeline and the random forest MLA used to recognize 10 different elbow gestures are then described. Rigorous evaluation on 8 subjects, together with data augmentation that expanded the dataset to 1270 trials per gesture, enabled the system to achieve accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. Test performance was then assessed using data collected from five new subjects. The high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human-machine interaction.
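The leave-one-subject-out protocol used in this entry is easy to sketch. The snippet below is an illustrative stand-in only: it uses a nearest-centroid classifier on synthetic data rather than the paper's random forest on real inductance readings.

```python
import numpy as np

def loso_accuracy(X, y, subjects, fit, predict):
    """Leave-one-subject-out CV: hold out every trial of one subject,
    train on the rest, and average the per-subject accuracies."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        model = fit(X[~test], y[~test])
        accs.append(np.mean(predict(model, X[test]) == y[test]))
    return float(np.mean(accs))

def fit_centroids(X, y):
    # Nearest-centroid classifier: an illustrative stand-in for the paper's
    # random forest, not the actual model.
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroids(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic "coil inductance" features: 4 subjects x 3 gestures x 20 trials.
rng = np.random.default_rng(0)
X, y, subj = [], [], []
for s in range(4):
    for g in range(3):
        X.append(rng.normal(loc=g, scale=0.3, size=(20, 6)))
        y += [g] * 20
        subj += [s] * 20
X, y, subj = np.vstack(X), np.array(y), np.array(subj)

acc = loso_accuracy(X, y, subj, fit_centroids, predict_centroids)
```

On well-separated synthetic classes the held-out accuracy is close to 1.0; the point is the evaluation loop, not the number.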


Subjects
Algorithms; Elbow; Gestures; Machine Learning; Humans; Elbow/physiology; Wearable Electronic Devices; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Male; Adult; Female
3.
Comput Biol Med ; 179: 108817, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39004049

ABSTRACT

Force myography (FMG) is increasingly gaining importance in gesture recognition because of its ability to achieve high classification accuracy without direct contact with the skin. In this study, we investigate the performance of a bracelet with only six commercial force-sensitive resistor (FSR) sensors for classifying hand gestures representing all letters and the numbers from 0 to 10 in American Sign Language. For this, we introduce an optimized feature selection in combination with the Extreme Learning Machine (ELM) as a classifier, investigating three swarm intelligence algorithms: the binary grey wolf optimizer (BGWO), the binary grasshopper optimizer (BGOA), and the binary hybrid grey wolf particle swarm optimizer (BGWOPSO), which is used as an optimization method for ELM for the first time in this study. The findings reveal that BGWOPSO, in which PSO supports the GWO optimizer by controlling its exploration and exploitation with an inertia constant to improve convergence toward the global optimum, outperformed the other investigated algorithms. In addition, the results show that optimizing ELM with BGWOPSO for feature selection can efficiently improve the performance of ELM, enhancing the classification accuracy from 32% to 69.84% for 37 gestures collected from multiple volunteers using only a band with 6 FSR sensors.

4.
Int J Biol Macromol ; 276(Pt 1): 133802, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38992552

ABSTRACT

Pursuing high-performance conductive hydrogels is still a hot topic in the development of advanced flexible wearable devices. Herein, a tough, self-healing, adhesive double network (DN) conductive hydrogel (named OSA-(Gelatin/PAM)-Ca, O-(G/P)-Ca) was prepared by bridging gelatin and polyacrylamide networks with a functionalized polysaccharide (oxidized sodium alginate, OSA) through a Schiff base reaction. Thanks to the multiple interactions (Schiff base bonds, hydrogen bonds, and metal coordination) within the network, the prepared hydrogel showed outstanding mechanical properties (tensile strain of 2800% and stress of 630 kPa), high conductivity (0.72 S/m), repeatable adhesion performance, and excellent self-healing ability (83.6%/79.0% of the original tensile strain/stress after self-healing). Moreover, the hydrogel-based sensor exhibited high strain sensitivity (GF = 3.66) and a fast response time (<0.5 s), so it can be used to monitor a wide range of human physiological signals. Building on its excellent compression sensitivity (GF = 0.41 kPa-1 in the range of 90-120 kPa), a three-dimensional (3D) flexible sensor array was designed to monitor the intensity of pressure and the spatial force distribution. In addition, a gel-based wearable sensor accurately classified and recognized ten types of gestures, achieving an accuracy rate of >96.33% both before and after self-healing under three machine learning models (decision tree, SVM, and KNN). This paper provides a simple method to prepare tough and self-healing conductive hydrogels as flexible multifunctional sensor devices for versatile applications in fields such as healthcare monitoring, human-computer interaction, and artificial intelligence.

5.
Cogn Sci ; 48(7): e13479, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38980965

ABSTRACT

Gestures, hand movements that accompany speech and express ideas, can help children learn how to solve problems, flexibly generalize learning to novel problem-solving contexts, and retain what they have learned. But does it matter who is doing the gesturing? We know that producing gesture leads to better comprehension of a message than watching someone else produce gesture. But we do not know how producing versus observing gesture impacts deeper learning outcomes such as generalization and retention across time. Moreover, not all children benefit equally from gesture instruction, suggesting that there are individual differences that may play a role in who learns from gesture. Here, we consider two factors that might impact whether gesture leads to learning, generalization, and retention after mathematical instruction: (1) whether children see gesture or do gesture and (2) whether a child spontaneously gestures before instruction when explaining their problem-solving reasoning. For children who spontaneously gestured before instruction, both doing and seeing gesture led to better generalization and retention of the knowledge gained than a comparison manipulative action. For children who did not spontaneously gesture before instruction, doing gesture was less effective than the comparison action for learning, generalization, and retention. Importantly, this learning deficit was specific to gesture, as these children did benefit from doing the comparison manipulative action. Our findings are the first evidence that a child's use of a particular representational format for communication (gesture) directly predicts that child's propensity to learn from using the same representational format.


Subjects
Gestures; Learning; Problem Solving; Humans; Female; Male; Mathematics; Child; Child, Preschool; Generalization, Psychological/physiology
6.
Adv Sci (Weinh) ; : e2402175, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981031

ABSTRACT

A self-powered mechanoreceptor array is demonstrated using four mechanoreceptor cells for recognition of dynamic touch gestures. Each cell consists of a triboelectric nanogenerator (TENG) for touch sensing and a bi-stable resistor (biristor) for spike encoding. It produces informative spike signals by sensing the force of an external touch and encoding the force into a number of spikes. An array of the mechanoreceptor cells was used to monitor various touch gestures, and it successfully generated spike signals corresponding to all the gestures. To validate the practicality of the mechanoreceptor array, a spiking neural network (SNN), highly attractive for its low power consumption compared with the conventional von Neumann architecture, is used for the identification of touch gestures. The measured spiking signals serve as inputs for the SNN simulations. Consequently, touch gestures are classified with a high accuracy rate of 92.5%. The proposed mechanoreceptor array emerges as a promising candidate for a building block of tactile in-sensor computing in the era of the Internet of Things (IoT), owing to the low cost and high manufacturability of the TENG, which eliminates the need for a power supply, coupled with the intrinsic high throughput of the Si-based biristor fabricated with complementary metal-oxide-semiconductor (CMOS) technology.
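The force-to-spike-count encoding described here can be caricatured in a few lines. Everything below (the threshold, the gain, the linear rate code) is a hypothetical simplification for illustration, not the biristor's actual transfer characteristic.

```python
def encode_spikes(force, threshold=0.1, gain=20):
    """Toy rate code: forces below the threshold produce no spikes; above it,
    the spike count grows linearly with force (assumed, simplified behavior)."""
    return round(max(0.0, force - threshold) * gain)

touches = [0.05, 0.3, 0.6, 1.0]   # normalized touch forces on one TENG cell
spikes = [encode_spikes(f) for f in touches]
```

A downstream SNN then sees only these spike counts, which is what makes the scheme attractive for in-sensor computing: the analog force never has to be digitized conventionally.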

7.
Math Biosci Eng ; 21(4): 5712-5734, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38872555

ABSTRACT

This research introduces a novel dual-pathway convolutional neural network (DP-CNN) architecture tailored for robust performance in Log-Mel spectrogram image analysis derived from raw multichannel electromyography signals. The primary objective is to assess the effectiveness of the proposed DP-CNN architecture across three datasets (NinaPro DB1, DB2, and DB3), encompassing both able-bodied and amputee subjects. Performance metrics, including accuracy, precision, recall, and F1-score, are employed for comprehensive evaluation. The DP-CNN demonstrates notable mean accuracies of 94.93 ± 1.71% and 94.00 ± 3.65% on NinaPro DB1 and DB2 for healthy subjects, respectively. Additionally, it achieves a robust mean classification accuracy of 85.36 ± 0.82% on amputee subjects in DB3, affirming its efficacy. Comparative analysis with previous methodologies on the same datasets reveals substantial improvements of 28.33%, 26.92%, and 39.09% over the baseline for DB1, DB2, and DB3, respectively. The DP-CNN's superior performance extends to comparisons with transfer learning models for image classification, further confirming its effectiveness. Across diverse datasets involving both able-bodied and amputee subjects, the DP-CNN exhibits enhanced capabilities, holding promise for advancing myoelectric control.


Subjects
Algorithms; Amputees; Electromyography; Gestures; Neural Networks, Computer; Signal Processing, Computer-Assisted; Upper Extremity; Humans; Electromyography/methods; Upper Extremity/physiology; Male; Adult; Female; Young Adult; Middle Aged; Reproducibility of Results
8.
Sensors (Basel) ; 24(11)2024 May 25.
Article in English | MEDLINE | ID: mdl-38894205

ABSTRACT

By integrating sensing capability into wireless communication, wireless sensing technology has become a promising contactless and non-line-of-sight sensing paradigm to explore the dynamic characteristics of channel state information (CSI) for recognizing human behaviors. In this paper, we develop an effective device-free human gesture recognition (HGR) system based on WiFi wireless sensing technology in which the complementary CSI amplitude and phase of communication link are jointly exploited. To improve the quality of collected CSI, a linear transform-based data processing method is first used to eliminate the phase offset and noise and to reduce the impact of multi-path effects. Then, six different time and frequency domain features are chosen for both amplitude and phase, including the mean, variance, root mean square, interquartile range, energy entropy and power spectral entropy, and a feature selection algorithm to remove irrelevant and redundant features is proposed based on filtering and principal component analysis methods, resulting in the construction of a feature subspace to distinguish different gestures. On this basis, a support vector machine-based stacking algorithm is proposed for gesture classification based on the selected and complementary amplitude and phase features. Lastly, we conduct experiments under a practical scenario with one transmitter and receiver. The results demonstrate that the average accuracy of the proposed HGR system is 98.3% and that the F1-score is over 97%.
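The six per-stream features listed in this entry are standard and easy to compute. The sketch below assumes common textbook definitions; the frame count for energy entropy and the log base are guesses, since the paper's exact parameters are not given here.

```python
import numpy as np

def csi_features(x, n_frames=8):
    """Six per-stream features named in the entry, for one CSI amplitude or
    phase stream; exact definitions and parameters are assumed."""
    eps = 1e-12
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / (psd.sum() + eps)                     # normalized power spectrum
    frames = x[: len(x) // n_frames * n_frames].reshape(n_frames, -1)
    e = (frames ** 2).sum(axis=1)
    e = e / (e.sum() + eps)                         # per-frame energy share
    return {
        "mean": float(x.mean()),
        "variance": float(x.var()),
        "rms": float(np.sqrt(np.mean(x ** 2))),
        "iqr": float(np.percentile(x, 75) - np.percentile(x, 25)),
        "energy_entropy": float(-(e * np.log2(e + eps)).sum()),
        "psd_entropy": float(-(p * np.log2(p + eps)).sum()),
    }

rng = np.random.default_rng(2)
feats = csi_features(rng.normal(size=256))   # one synthetic CSI stream
```

In the paper these features are computed separately for amplitude and phase and then pruned by filtering and PCA before the SVM-based stacking classifier.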

9.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894423

ABSTRACT

Gesture recognition using electromyography (EMG) signals has prevailed recently in the field of human-computer interaction for controlling intelligent prosthetics. Currently, machine learning and deep learning are the two most commonly employed methods for classifying hand gestures. Although traditional machine learning methods already achieve impressive performance, manual feature extraction remains a large amount of work. Existing deep learning methods utilize complex neural network architectures to achieve higher accuracy, but can suffer from overfitting, insufficient adaptability, and low recognition accuracy. To address these issues, a novel lightweight model named the dual-stream LSTM feature fusion classifier is proposed; it concatenates five time-domain features of the EMG signals with the raw data, both processed with one-dimensional convolutional neural networks and LSTM layers to carry out the classification. The proposed method can effectively capture global features of EMG signals using a simple architecture, which means a lower computational cost. An experiment is conducted on the public DB1 dataset with 52 gestures, in which each of the 27 subjects repeats every gesture 10 times. The accuracy rate achieved by the model is 89.66%, which is comparable to that achieved by more complex deep learning neural networks, and the inference time for each gesture is 87.6 ms, which also allows use in a real-time control system. The proposed model is further validated in a subject-wise experiment on 10 of the 40 subjects in the DB2 dataset, achieving a mean accuracy of 91.74%. These results illustrate its ability to fuse time-domain features and raw data to extract more effective information from the sEMG signal, using an appropriate, efficient, lightweight network to enhance the recognition results.
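The "five time-domain features" fused with raw data in this entry are not named in the abstract; a conventional EMG set (MAV, RMS, waveform length, zero crossings, slope sign changes) is assumed in the sketch below.

```python
import numpy as np

def td_features(x, eps=0.01):
    """Five classic time-domain EMG features; the paper's exact set is not
    named in the abstract, so this conventional choice is an assumption."""
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                 # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))           # root mean square
    wl = np.sum(np.abs(dx))                  # waveform length
    # Zero crossings and slope sign changes, with a small amplitude deadband.
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > eps))
    ssc = np.sum((dx[:-1] * dx[1:] < 0)
                 & ((np.abs(dx[:-1]) > eps) | (np.abs(dx[1:]) > eps)))
    return np.array([mav, rms, wl, float(zc), float(ssc)])

rng = np.random.default_rng(3)
window = rng.normal(scale=0.5, size=200)     # one 200-sample sEMG window
f = td_features(window)
```

In a dual-stream design, vectors like `f` would feed one branch while the raw 200-sample window feeds the other, with the two branch outputs concatenated before classification.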


Subjects
Deep Learning; Electromyography; Gestures; Neural Networks, Computer; Electromyography/methods; Humans; Signal Processing, Computer-Assisted; Pattern Recognition, Automated/methods; Algorithms; Machine Learning; Hand/physiology; Memory, Short-Term/physiology
10.
Sensors (Basel) ; 24(11)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38894429

ABSTRACT

Effective feature extraction and selection are crucial for the accurate classification and prediction of hand gestures based on electromyographic signals. In this paper, we systematically compare six filter and wrapper feature evaluation methods and investigate their respective impacts on the accuracy of gesture recognition. The investigation is based on several benchmark datasets and one real hand gesture dataset, including 15 hand force exercises collected from 14 healthy subjects using eight commercial sEMG sensors. A total of 37 time- and frequency-domain features were extracted from each sEMG channel. The benchmark dataset revealed that the minimum Redundancy Maximum Relevance (mRMR) feature evaluation method had the poorest performance, resulting in a decrease in classification accuracy. However, the RFE method demonstrated the potential to enhance classification accuracy across most of the datasets. It selected a feature subset comprising 65 features, which led to an accuracy of 97.14%. The Mutual Information (MI) method selected 200 features to reach an accuracy of 97.38%. The Feature Importance (FI) method reached a higher accuracy of 97.62% but selected 140 features. Further investigations have shown that selecting 65 and 75 features with the RFE methods led to an identical accuracy of 97.14%. A thorough examination of the selected features revealed the potential for three additional features from three specific sensors to enhance the classification accuracy to 97.38%. These results highlight the significance of employing an appropriate feature selection method to significantly reduce the number of necessary features while maintaining classification accuracy. They also underscore the necessity for further analysis and refinement to achieve optimal solutions.
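A filter-style feature evaluation of the kind compared in this entry can be illustrated compactly. The snippet uses a Fisher-score criterion on synthetic data as a stand-in; it is not any of the paper's evaluated methods (mRMR, RFE, MI, FI).

```python
import numpy as np

def fisher_score(X, y):
    """Filter criterion: ratio of between-class to within-class variance per
    feature (an illustrative stand-in, not one of the paper's methods)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2
                  for c in classes)
    within = sum((y == c).sum() * X[y == c].var(axis=0)
                 for c in classes) + 1e-12
    return between / within

rng = np.random.default_rng(4)
# 3 classes, 10 features; only the first 3 columns carry class information.
y = np.repeat(np.arange(3), 40)
X = rng.normal(size=(120, 10))
X[:, :3] += y[:, None]

scores = fisher_score(X, y)
top = np.argsort(scores)[::-1][:3]   # keep the 3 highest-scoring features
```

Filters like this score each feature independently and are fast; wrappers such as RFE retrain a model repeatedly, which is why the paper's comparison of the two families matters for channel-rich sEMG setups.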


Subjects
Electromyography; Gestures; Hand; Humans; Electromyography/methods; Hand/physiology; Algorithms; Male; Adult; Female; Signal Processing, Computer-Assisted
11.
J Exp Child Psychol ; 246: 105989, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38889478

ABSTRACT

When solving mathematical problems, young children will perform better when they can use gestures that match mental representations. However, despite their increasing prevalence in educational settings, few studies have explored this effect in touchscreen-based interactions. Thus, we investigated the impact on young children's performance of dragging (where a continuous gesture is performed that is congruent with the change in number) and tapping (involving a discrete gesture that is incongruent) on a touchscreen device when engaged in a continuous number line estimation task. By examining differences in the set size and position of the number line estimation, we were also able to explore the boundary conditions for the superiority effect of congruent gestures. We used a 2 (Gesture Type: drag or tap) × 2 (Set Size: Set 0-10 or Set 0-20) × 2 (Position: left of midpoint or right of midpoint) mixed design. A total of 70 children aged 5 and 6 years (33 girls) were recruited and randomly assigned to either the Drag or Tap group. We found that the congruent gesture (drag) generally facilitated better performance with the touchscreen but with boundary conditions. When completing difficult estimations (right side in the large set size), the Drag group was more accurate, responded to the stimulus faster, and spent more time manipulating than the Tap group. These findings suggest that when children require explicit scaffolding, congruent touchscreen gestures help to release mental resources for strategic adjustments, decrease the difficulty of numerical estimation, and support constructing mental representations.

12.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931542

ABSTRACT

This review explores the historical and current significance of gestures as a universal form of communication with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day where advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. This review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static and dynamic, and grades their detection difficulty. This paper also reviews the haptic devices used in VR and their advantages and challenges. It provides an overview of the process used in hand gesture acquisition, from inputs and pre-processing to pose detection, for both static and dynamic gestures.


Subjects
Gestures; Hand; Virtual Reality; Humans; Hand/physiology; Algorithms; User-Computer Interface; Artificial Intelligence
13.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931754

ABSTRACT

Electromyography-based gesture recognition has become a challenging problem in the decoding of fine hand movements. Recent research has focused on improving the accuracy of gesture recognition by increasing the complexity of network models. However, training a complex model necessitates a significant amount of data, thereby escalating both user burden and computational costs. Moreover, owing to the considerable variability of surface electromyography (sEMG) signals across different users, conventional machine learning approaches reliant on a single feature fail to meet the demand for precise gesture recognition tailored to individual users. Therefore, to solve the problems of large computational cost and poor cross-user pattern recognition performance, we propose a feature selection method that combines mutual information, principal component analysis, and the Pearson correlation coefficient (MPP). This method can filter out the optimal subset of features for a specific user and, combined with an SVM classifier, accurately and efficiently recognize the user's gesture movements. To validate the effectiveness of the above method, we designed an experiment including five gesture actions. The experimental results show that compared to the classification accuracy obtained using a single feature, we achieved an improvement of about 5% with the optimally selected features as the input to any of the classifiers. This study provides an effective foundation for user-specific fine hand movement decoding based on sEMG signals.
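The relevance-plus-redundancy idea behind MPP can be sketched with Pearson correlation alone. The selector below is an illustrative simplification: the paper additionally combines mutual information and principal component analysis, and the 0.95 redundancy cap is an assumed parameter.

```python
import numpy as np

def mpp_select(X, y, redundancy_cap=0.95):
    """Rank features by |Pearson r| with the label, then drop any feature
    correlating above `redundancy_cap` with one already kept. (A Pearson-only
    simplification of the MPP idea; MI and PCA steps are omitted.)"""
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    kept = []
    for j in np.argsort(r)[::-1]:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < redundancy_cap
               for k in kept):
            kept.append(int(j))
    return kept

rng = np.random.default_rng(5)
y = np.repeat([0.0, 1.0], 50)
f0 = y + 0.1 * rng.normal(size=100)       # relevant feature
f1 = f0 + 1e-3 * rng.normal(size=100)     # near-duplicate of f0 (redundant)
f2 = rng.normal(size=100)                 # irrelevant noise
X = np.column_stack([f0, f1, f2])
kept = mpp_select(X, y)
```

The near-duplicate feature is discarded while the independent one survives, which is the behavior a per-user optimal subset needs: keep informative channels, drop redundant ones.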


Subjects
Electromyography; Forearm; Gestures; Hand; Pattern Recognition, Automated; Humans; Electromyography/methods; Hand/physiology; Forearm/physiology; Pattern Recognition, Automated/methods; Male; Adult; Principal Component Analysis; Female; Algorithms; Movement/physiology; Young Adult; Support Vector Machine; Machine Learning
14.
Cognition ; 250: 105855, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38865912

ABSTRACT

People are more likely to gesture when their speech is disfluent. Why? According to an influential proposal, speakers gesture when they are disfluent because gesturing helps them to produce speech. Here, we test an alternative proposal: People may gesture when their speech is disfluent because gestures serve as a pragmatic signal, telling the listener that the speaker is having problems with speaking. To distinguish between these proposals, we tested the relationship between gestures and speech disfluencies when listeners could see speakers' gestures and when they were prevented from seeing their gestures. If gesturing helps speakers to produce words, then the relationship between gesture and disfluency should persist regardless of whether gestures can be seen. Alternatively, if gestures during disfluent speech are pragmatically motivated, then the tendency to gesture more when speech is disfluent should disappear when the speaker's gestures are invisible to the listener. Results showed that speakers were more likely to gesture when their speech was disfluent, but only when the listener could see their gestures and not when the listener was prevented from seeing them, supporting a pragmatic account of the relationship between gestures and disfluencies. People tend to gesture more when speaking is difficult, not because gesturing facilitates speech production, but rather because gestures comment on the speaker's difficulty presenting an utterance to the listener.

15.
J Electr Bioimpedance ; 15(1): 63-74, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38863504

ABSTRACT

Gesture recognition is a crucial aspect in the advancement of virtual reality, healthcare, and human-computer interaction, and requires innovative methodologies to meet increasing demands for precision. This paper presents a novel approach that combines Impedance Signal Spectrum Analysis (ISSA) with machine learning to improve gesture recognition precision. A diverse dataset was collected from participants of various demographic backgrounds (five individuals), each executing a range of predefined gestures. The predefined gestures were designed to encompass a broad spectrum of hand movements, including intricate and subtle variations, to challenge the robustness of the proposed methodology. Machine learning models using the K-Nearest Neighbors (KNN), Gradient Boosting Machine (GBM), Naive Bayes (NB), Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) algorithms demonstrated notable precision in performance evaluations. The individual accuracy values for each algorithm are as follows: KNN, 86%; GBM, 86%; NB, 84%; LR, 89%; RF, 87%; and SVM, 87%. These results emphasize the importance of impedance features in the refinement of gesture recognition. The adaptability of the model was confirmed under different conditions, highlighting its broad applicability.

16.
Top Cogn Sci ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38855879

ABSTRACT

Gesture and speech are tightly linked and form a single system in typical development. In this review, we ask whether and how the role of gesture and relations between speech and gesture vary in atypical development by focusing on two groups of children: those with peri- or prenatal unilateral brain injury (children with BI) and preterm born (PT) children. We describe the gestures of children with BI and PT children and the relations between gesture and speech, as well as highlight various cognitive and motor antecedents of the speech-gesture link observed in these populations. We then examine possible factors contributing to the variability in gesture production of these atypically developing children. Last, we discuss the potential role of seeing others' gestures, particularly those of parents, in mediating the predictive relationships between early gestures and upcoming changes in speech. We end the review by charting new areas for future research that will help us better understand the robust roles of gestures for typical and atypically-developing child populations.

17.
Biomed Tech (Berl) ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38826069

ABSTRACT

OBJECTIVES: The objective of this study is to develop a system for automatic sign language recognition to improve the quality of life for the mute-deaf community in Egypt. The system aims to bridge the communication gap by identifying and converting right-hand gestures into audible sounds or displayed text. METHODS: To achieve the objectives, a convolutional neural network (CNN) model is employed. The model is trained to recognize right-hand gestures captured by an affordable web camera. A dataset was created with the help of six volunteers for training, testing, and validation purposes. RESULTS: The proposed system achieved an impressive average accuracy of 99.65% in recognizing right-hand gestures, with a high precision of 95.11%. The system effectively addressed the issue of gesture similarity between certain alphabets by successfully distinguishing between their respective gestures. CONCLUSIONS: The proposed system offers a promising solution for automatic sign language recognition, benefiting the mute-deaf community in Egypt. By accurately identifying and converting right-hand gestures, the system facilitates communication and interaction with the wider world. This technology has the potential to greatly enhance the quality of life for individuals who are unable to speak or hear, promoting inclusivity and accessibility.

18.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising, potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P<0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers, slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy in the other classifiers. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. 
CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke as compared with conventional transfer learning, neural networks and traditional machine learning approaches. In addition, K-Best feature selection and increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION NUMBER: The study was registered in the Chinese Clinical Trial Registry with registration number CHiCTR1800017568 on 2018/08/04.
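The prototypical-network inference step behind this entry's one-shot transfer reduces to nearest-mean classification in an embedding space. The sketch below assumes precomputed 2-D embeddings and one support example ("shot") per gesture class; the actual model learns the embedding network, which is omitted here.

```python
import numpy as np

def prototypes(X_support, y_support):
    # Class prototype = mean of that class's support embeddings.
    classes = np.unique(y_support)
    return classes, np.array([X_support[y_support == c].mean(axis=0)
                              for c in classes])

def proto_predict(classes, protos, X_query):
    # Assign each query embedding to its nearest prototype (Euclidean).
    d = ((X_query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(6)
# One "shot" per gesture class from a new subject adapts the classifier,
# mimicking one-shot transfer to an unseen user (synthetic embeddings).
support = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
labels = np.array([0, 1, 2])
centers = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
query = np.vstack([rng.normal(loc=c, scale=0.2, size=(10, 2)) for c in centers])
truth = np.repeat([0, 1, 2], 10)

classes, protos = prototypes(support, labels)
acc = float(np.mean(proto_predict(classes, protos, query) == truth))
```

Because adapting to a new subject only means recomputing prototype means, no retraining of the embedding network is needed, which suits users who cannot repeat many calibration trials.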


Subjects
Gestures; Hand; Neural Networks, Computer; Stroke Rehabilitation; Humans; Stroke Rehabilitation/methods; Stroke Rehabilitation/instrumentation; Hand/physiopathology; Male; Female; Middle Aged; Stroke/complications; Stroke/physiopathology; Aged; Machine Learning; Transfer, Psychology/physiology; Adult; Electromyography; Wearable Electronic Devices
20.
BMC Med Educ ; 24(1): 509, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38715008

ABSTRACT

BACKGROUND: In this era of rapid technological development, medical schools have had to use modern technology to enhance traditional teaching, and online teaching was preferred by many medical schools. However, due to the complexity of intracranial anatomy, it was challenging for students to study this part online, and students were likely to grow weary of neurosurgery, which is disadvantageous to the development of the field. Therefore, we developed this database to help students learn neuroanatomy better. MAIN BODY: The data in this database were sourced from Rhoton's Cranial Anatomy and Surgical Approaches and Neurosurgery Tricks of the Trade. We then designed many hand gesture figures connected with the atlas of anatomy. Our database is divided into three parts: intracranial arteries, intracranial veins, and neurosurgical approaches. Each section contains an atlas of anatomy and hand gestures representing vessels and nerves. Pictures of the hand gestures and the atlas of anatomy are available to view on GRAVEN ( www.graven.cn ) without restrictions for all teachers and students. We recruited 50 undergraduate students and randomly divided them into two groups: one using traditional teaching methods and one using the GRAVEN database combined with those traditional teaching methods. Results revealed a significant improvement in academic performance when the GRAVEN database was combined with traditional teaching methods compared to traditional teaching methods alone. CONCLUSION: This database is a valuable aid to help students learn intracranial anatomy and neurosurgical approaches. Gesture teaching can effectively simulate the relationships between human organs and tissues through the flexibility of the hands and fingers, improving interest in anatomy and education.


Subjects
Databases, Factual; Education, Medical, Undergraduate; Gestures; Neurosurgery; Humans; Neurosurgery/education; Education, Medical, Undergraduate/methods; Students, Medical; Neuroanatomy/education; Teaching; Female; Male