Results 1 - 20 of 1,211
1.
Proc Natl Acad Sci U S A ; 120(47): e2218799120, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-37956297

ABSTRACT

Human language is a powerful communicative and cognitive tool. Scholars have long sought to characterize its uniqueness, but each time a property is proposed to set human language apart (e.g., reference, syntax), some (attenuated) version of that property is found in animals. Recently, the uniqueness argument has shifted from linguistic rules to cognitive capacities underlying them. Scholars argue that human language is unique because it relies on ostension and inference, while animal communication depends on simple associations and largely hardwired signals. Such characterizations are often borne out in published data, but these empirical findings are driven by radical differences in the ways animal and human communication are studied. The field of animal communication has been dramatically shaped by the "code model," which imagines communication as involving information packets that are encoded, transmitted, decoded, and interpreted. This framework standardized methods for studying meaning in animal signals, but it does not allow for the nuance, ambiguity, or contextual variation seen in humans. The code model is insidious. It is rarely referenced directly, but it significantly shapes how we study animals. To compare animal communication and human language, we must acknowledge biases resulting from the different theoretical models used. By incorporating new approaches that break away from searching for codes, we may find that animal communication and human language are characterized by differences of degree rather than kind.


Subjects
Hominidae, Language, Animals, Humans, Animal Communication, Linguistics, Bias
2.
Proc Natl Acad Sci U S A ; 119(47): e2206486119, 2022 Nov 22.
Article in English | MEDLINE | ID: mdl-36375066

ABSTRACT

Humans are argued to be unique in their ability and motivation to share attention with others about external entities: sharing attention for sharing's sake. Indeed, in humans, using referential gestures declaratively to direct the attention of others toward external objects and events emerges in the first year of life. In contrast, wild great apes seldom use referential gestures, and when they do, it seems to be exclusively for imperative purposes. This apparent species difference has fueled the argument that the motivation and ability to share attention with others is a human-specific trait with important downstream consequences for the evolution of our complex cognition [M. Tomasello, Becoming Human (2019)]. Here, we report evidence of a wild ape showing a conspecific an item of interest. We provide video evidence of an adult female chimpanzee, Fiona, showing a leaf to her mother, Sutherland, in the context of leaf grooming in Kibale Forest, Uganda. We use a dataset of 84 similar leaf-grooming events to explore alternative explanations for the behavior, including food sharing and initiating dyadic grooming or playing. Our observations suggest that in highly specific social conditions, wild chimpanzees, like humans, may use referential showing gestures to direct others' attention to objects simply for the sake of sharing. The difference between humans and our closest living relatives in this regard may be quantitative rather than qualitative, with ramifications for our understanding of the evolution of human social cognition.


Subjects
Hominidae, Pan troglodytes, Female, Humans, Animals, Gestures, Animal Communication, Mothers
3.
Dev Sci ; 27(5): e13507, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38629500

ABSTRACT

Blind adults display language-specificity in their packaging and ordering of events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech), but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into 3 age groups: 5-6, 7-8, and 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house) first with speech, and then without speech using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture, but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture), across both blind and sighted learners. The language-specific co-speech gesture pattern for both packaging and ordering semantic elements was present at the earliest ages we tested, in both blind and sighted children. The silent gesture pattern appeared later in blind children than in sighted children for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process at early ages and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture.
RESEARCH HIGHLIGHTS:
Gestures, when produced with speech (i.e., co-speech gesture), follow language-specific patterns in event representation in both blind and sighted children.
Gestures, when produced without speech (i.e., silent gesture), do not follow language-specific patterns in event representation in either blind or sighted children.
Language-specific patterns in speech and co-speech gestures are observable at the same time in blind and sighted children.
The cross-linguistic similarities in silent gestures emerge slightly later in blind children than in sighted children.


Assuntos
Cegueira , Gestos , Desenvolvimento da Linguagem , Fala , Humanos , Criança , Masculino , Feminino , Pré-Escolar , Fala/fisiologia , Cegueira/fisiopatologia , Visão Ocular/fisiologia , Idioma
4.
Cereb Cortex ; 33(14): 8942-8955, 2023 Jul 05.
Article in English | MEDLINE | ID: mdl-37183188

ABSTRACT

Advancements in deep learning algorithms over the past decade have led to extensive developments in brain-computer interfaces (BCI). A promising imaging modality for BCI is magnetoencephalography (MEG), a non-invasive functional imaging technique. The present study developed a MEG sensor-based BCI neural network to decode Rock-Paper-Scissors gestures (MEG-RPSnet). Unique preprocessing pipelines in tandem with convolutional neural network deep-learning models accurately classified gestures. On a single-trial basis, we found an average classification accuracy of 85.56% in 12 subjects. Our MEG-RPSnet model outperformed two state-of-the-art neural network architectures for electroencephalogram-based BCI as well as a traditional machine learning method, and demonstrated equivalent or better performance than machine learning methods that have employed invasive, electrocorticography-based BCI using the same task. In addition, MEG-RPSnet classification performance using an intra-subject approach outperformed a model that used a cross-subject approach. Remarkably, we also found that when using only central-parietal-occipital regional sensors or occipitotemporal regional sensors, the deep learning model achieved classification performances similar to those of the whole-brain sensor model. The MEG-RPSnet model also distinguished neuronal features of individual hand gestures with very good accuracy. Altogether, these results show that noninvasive MEG-based BCI applications hold promise for future BCI developments in hand-gesture decoding.
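The abstract does not specify the network's architecture, so as a rough illustration only, here is a minimal PyTorch sketch of a 1-D CNN that classifies single-trial MEG sensor windows into the three gestures; the layer sizes, sensor count, and window length are all assumptions, not the MEG-RPSnet design.

```python
# Hypothetical sketch of a compact CNN for single-trial MEG gesture decoding,
# in the spirit of (but not identical to) MEG-RPSnet.
import torch
import torch.nn as nn

class MEGGestureCNN(nn.Module):
    def __init__(self, n_sensors=204, n_times=500, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_sensors, 64, kernel_size=7, padding=3),  # temporal conv over sensor channels
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                             # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)               # rock / paper / scissors

    def forward(self, x):          # x: (batch, n_sensors, n_times)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = MEGGestureCNN()
logits = model(torch.randn(8, 204, 500))  # 8 dummy single trials
print(logits.shape)                        # torch.Size([8, 3])
```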


Assuntos
Interfaces Cérebro-Computador , Aprendizado Profundo , Humanos , Magnetoencefalografia , Gestos , Eletroencefalografia/métodos , Algoritmos
5.
Skin Res Technol ; 30(2): e13625, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38385865

ABSTRACT

INTRODUCTION: The application of artificial intelligence to facial aesthetics has been limited by the inability to discern facial zones of interest, as defined by complex facial musculature and underlying structures. Although semantic segmentation models (SSMs) could potentially overcome this limitation, existing facial SSMs distinguish only three to nine facial zones of interest. METHODS: We developed a new supervised SSM, trained on 669 high-resolution clinical-grade facial images; a subset of these images was used in an iterative process between facial aesthetics experts and manual annotators that defined and labeled 33 facial zones of interest. RESULTS: Because some zones overlap, some pixels are included in multiple zones, violating the one-to-one relationship between a given pixel and a specific class (zone) required for SSMs. The full facial zone model was therefore used to create three sub-models, each with completely non-overlapping zones, generating three outputs for each input image that can be treated as standalone models. For each facial zone, the output demonstrating the best Intersection Over Union (IOU) value was selected as the winning prediction. CONCLUSIONS: The new SSM demonstrates mean IOU values superior to manual annotation and landmark analyses, and it is more robust than landmark methods in handling variances in facial shape and structure.
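As a concrete illustration of the winning-prediction step, here is a minimal sketch of per-zone Intersection Over Union (IOU) scoring across candidate sub-model masks; the mask shapes and helper names are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of selecting the "winning" sub-model output per facial zone
# by Intersection Over Union on binary masks.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union for two boolean masks of equal shape."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 0.0

def pick_winning_prediction(truth, candidates):
    """Return the candidate mask with the highest IOU for one facial zone."""
    scores = [iou(c, truth) for c in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

truth = np.zeros((64, 64), bool)
truth[20:40, 20:40] = True
cands = [np.roll(truth, s, axis=0) for s in (0, 4, 8)]  # three sub-model outputs
_, best = pick_winning_prediction(truth, cands)
print(f"best IOU: {best:.2f}")
```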


Assuntos
Inteligência Artificial , Semântica , Humanos , Face/diagnóstico por imagem , Músculos Faciais
6.
J Exp Child Psychol ; 246: 105989, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38889478

ABSTRACT

When solving mathematical problems, young children will perform better when they can use gestures that match mental representations. However, despite their increasing prevalence in educational settings, few studies have explored this effect in touchscreen-based interactions. Thus, we investigated the impact on young children's performance of dragging (where a continuous gesture is performed that is congruent with the change in number) and tapping (involving a discrete gesture that is incongruent) on a touchscreen device when engaged in a continuous number line estimation task. By examining differences in the set size and position of the number line estimation, we were also able to explore the boundary conditions for the superiority effect of congruent gestures. We used a 2 (Gesture Type: drag or tap) × 2 (Set Size: Set 0-10 or Set 0-20) × 2 (Position: left of midpoint or right of midpoint) mixed design. A total of 70 children aged 5 and 6 years (33 girls) were recruited and randomly assigned to either the Drag or Tap group. We found that the congruent gesture (drag) generally facilitated better performance with the touchscreen but with boundary conditions. When completing difficult estimations (right side in the large set size), the Drag group was more accurate, responded to the stimulus faster, and spent more time manipulating than the Tap group. These findings suggest that when children require explicit scaffolding, congruent touchscreen gestures help to release mental resources for strategic adjustments, decrease the difficulty of numerical estimation, and support constructing mental representations.


Assuntos
Gestos , Humanos , Feminino , Masculino , Pré-Escolar , Criança , Resolução de Problemas , Desempenho Psicomotor
7.
J Exp Child Psychol ; 241: 105859, 2024 May.
Article in English | MEDLINE | ID: mdl-38325061

ABSTRACT

Infants as young as 14 months can track cross-situational statistics between sets of words and objects to acquire word-referent mappings. However, in naturalistic word learning situations, words and objects occur with a host of additional information, sometimes noisy, present in the environment. In this study, we tested the effect of this environmental variability on infants' word learning. Fourteen-month-old infants (N = 32) were given a cross-situational word learning task with additional gestural, prosodic, and distributional cues that occurred reliably or variably. In the reliable cue condition, infants were able to process this additional environmental information to learn the words, attending to the target object during test trials. But when the presence of these cues was variable, infants paid greater attention to the gestural cue during training and subsequently switched preference to attend more to novel word-object mappings rather than familiar ones at test. Environmental variation may be key to enhancing infants' exploration of new information.


Assuntos
Aprendizagem , Aprendizagem Verbal , Lactente , Humanos , Sinais (Psicologia)
8.
J Exp Child Psychol ; 242: 105892, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38492555

ABSTRACT

Recent evidence suggests that using finger-based strategies is beneficial for the acquisition of basic numerical skills. Two finger-based strategies can be distinguished: (a) finger counting (i.e., extending single fingers successively) and (b) finger number gesturing (i.e., extending fingers simultaneously to represent magnitudes). In this study, we investigated both spontaneous and prompted finger counting and finger number gesturing, as well as their contribution to basic numerical skills, in 3- to 5-year-olds (N = 156). Results revealed that only 6% of children spontaneously used their fingers for counting when asked to name a specific number of animals, whereas 59% applied finger number gesturing to show their age. This indicates that the spontaneous use of finger-based strategies depends heavily on the specific context. Moreover, children performed significantly better in prompted finger counting than in finger number gesturing, suggesting that the two strategies build on each other. Finally, both prompted finger counting and finger number gesturing significantly and individually predicted counting, cardinal number knowledge, and basic arithmetic. These results indicate that finger counting and finger number gesturing follow and positively relate to numerical development.


Assuntos
Dedos , Conhecimento , Criança , Humanos , Pré-Escolar , Estudos Transversais , Matemática
9.
J Med Internet Res ; 26: e58390, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38742989

ABSTRACT

Posttraumatic stress disorder (PTSD) is a significant public health concern, with only a third of patients recovering within a year of treatment. While PTSD often disrupts the sense of body ownership and sense of agency (SA), attention to the SA in trauma has been lacking. This perspective paper explores the loss of the SA in PTSD and its relevance in the development of symptoms. Trauma is viewed as a breakdown of the SA, related to a freeze response, with peritraumatic dissociation increasing the risk of PTSD. Drawing from embodied cognition, we propose an enactive perspective of PTSD, suggesting therapies that restore the SA through direct engagement with the body and environment. We discuss the potential of agency-based therapies and innovative technologies such as gesture sonification, which translates body movements into sounds to enhance the SA. Gesture sonification offers a screen-free, noninvasive approach that could complement existing trauma-focused therapies. We emphasize the need for interdisciplinary collaboration and clinical research to further explore these approaches in preventing and treating PTSD.
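As a rough illustration of the gesture-sonification idea (body movement translated into sound), here is a minimal sketch that maps a normalized movement trace to the pitch of a tone; the frequency range and the amplitude-to-pitch mapping are assumptions for illustration, not the method of any particular system.

```python
# Hypothetical gesture-sonification sketch: movement amplitude -> pitch.
import numpy as np

SR = 44100  # audio sample rate (Hz)

def sonify(motion: np.ndarray, f_lo=220.0, f_hi=880.0, dur=2.0) -> np.ndarray:
    """Turn a normalized movement trace (values in 0..1) into a pitch-varying tone."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    m = np.interp(t, np.linspace(0, dur, motion.size), motion)  # resample to audio rate
    freq = f_lo + (f_hi - f_lo) * np.clip(m, 0, 1)              # amplitude -> frequency
    phase = 2 * np.pi * np.cumsum(freq) / SR                    # integrate frequency
    return 0.5 * np.sin(phase)

wave = sonify(np.abs(np.sin(np.linspace(0, 3, 200))))  # dummy movement trace
print(wave.shape, wave.min(), wave.max())
```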


Assuntos
Transtornos de Estresse Pós-Traumáticos , Humanos , Transtornos de Estresse Pós-Traumáticos/terapia , Transtornos de Estresse Pós-Traumáticos/psicologia , Gestos
10.
J Neuroeng Rehabil ; 21(1): 100, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38867287

ABSTRACT

BACKGROUND: In-home rehabilitation systems are a promising potential alternative to conventional therapy for stroke survivors. Unfortunately, physiological differences between participants and sensor displacement in wearable sensors pose a significant challenge to classifier performance, particularly for people with stroke, who may encounter difficulties repeatedly performing trials. This makes it challenging to create reliable in-home rehabilitation systems that can accurately classify gestures. METHODS: Twenty individuals who had suffered a stroke performed seven different gestures (mass flexion, mass extension, wrist volar flexion, wrist dorsiflexion, forearm pronation, forearm supination, and rest) related to activities of daily living. They performed these gestures while wearing EMG sensors on the forearm, as well as FMG sensors and an IMU on the wrist. We developed a model based on prototypical networks for one-shot transfer learning, K-Best feature selection, and an increased window size to improve model accuracy. Our model was evaluated against conventional transfer learning with neural networks, as well as subject-dependent and subject-independent classifiers: neural networks, LGBM, LDA, and SVM. RESULTS: Our proposed model achieved 82.2% hand-gesture classification accuracy, which was better (P<0.05) than one-shot transfer learning with neural networks (63.17%), neural networks (59.72%), LGBM (65.09%), LDA (63.35%), and SVM (54.5%). In addition, our model performed similarly to subject-dependent classifiers: slightly lower than SVM (83.84%) but higher than neural networks (81.62%), LGBM (80.79%), and LDA (74.89%). Using K-Best features improved the accuracy in 3 of the 6 classifiers used for evaluation, while not affecting the accuracy of the other classifiers. Increasing the window size improved the accuracy of all the classifiers by an average of 4.28%. CONCLUSION: Our proposed model showed significant improvements in hand-gesture recognition accuracy in individuals who have had a stroke, as compared with conventional transfer learning, neural networks, and traditional machine learning approaches. In addition, K-Best feature selection and an increased window size can further improve the accuracy. This approach could help to alleviate the impact of physiological differences and create a subject-independent model for stroke survivors that improves the classification accuracy of wearable sensors. TRIAL REGISTRATION NUMBER: The study was registered in the Chinese Clinical Trial Registry with registration number ChiCTR1800017568 on 2018/08/04.
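The core of the prototypical-network approach can be sketched compactly: each gesture class is represented by the mean embedding of its support examples, and a query is assigned to the nearest prototype. The embeddings below are random placeholders standing in for the output of a trained feature extractor over EMG/FMG/IMU windows; this is a sketch of the generic technique, not the authors' model.

```python
# Sketch of nearest-prototype classification as used in prototypical networks.
import numpy as np

def prototypes(support_emb: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Mean embedding per class; support_emb is (n_support, dim)."""
    classes = np.unique(labels)
    return np.stack([support_emb[labels == c].mean(axis=0) for c in classes])

def classify(query_emb: np.ndarray, protos: np.ndarray) -> np.ndarray:
    """Assign each query to the class of its nearest (Euclidean) prototype."""
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
support = rng.normal(size=(7, 16))       # one shot per gesture, 7 gestures
labels = np.arange(7)
queries = support + 0.1 * rng.normal(size=(7, 16))
print(classify(queries, prototypes(support, labels)))  # -> [0 1 2 3 4 5 6]
```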


Assuntos
Gestos , Mãos , Redes Neurais de Computação , Reabilitação do Acidente Vascular Cerebral , Humanos , Reabilitação do Acidente Vascular Cerebral/métodos , Reabilitação do Acidente Vascular Cerebral/instrumentação , Mãos/fisiopatologia , Masculino , Feminino , Pessoa de Meia-Idade , Acidente Vascular Cerebral/complicações , Acidente Vascular Cerebral/fisiopatologia , Idoso , Aprendizado de Máquina , Transferência de Experiência/fisiologia , Adulto , Eletromiografia , Dispositivos Eletrônicos Vestíveis
11.
Article in English | MEDLINE | ID: mdl-38572787

ABSTRACT

BACKGROUND: In typically developing (TD) children, gesture emerges around 9 months of age, allowing children to communicate prior to speech. Due to the important role gesture plays in the early communication of autistic and TD children, various tasks have been used to assess gesture ability. However, few data exist on whether and how tasks differentially elicit gesture, particularly for samples of racially and ethnically diverse autistic children. AIMS: In this study, we explored whether task (a naturalistic parent-child interaction [NPCI] versus a structured assessment of child communication) differentially elicited rate or type of gesture production for young autistic children. METHODS AND PROCEDURES: This secondary analysis included baseline data from 80 racially and ethnically diverse autistic children aged 18-59 months who participated in one of two larger studies. Video recordings of NPCIs and an assessment of child communication with standardised administration procedures were collected at baseline. Child gesture rate (number of gestures produced per 10 min) and type were extracted from these recordings and analysed. OUTCOMES AND RESULTS: The structured assessment elicited more gestures than the NPCI. In terms of gesture type, points, gives, and reaches accounted for 76% of child gestures. Points (which are developmentally more advanced than reaches and gives) were produced at the highest rates within book exploration. Distal points (which are more developmentally advanced than proximal or contact points) were produced at the highest rates when children were tempted to request. CONCLUSIONS AND IMPLICATIONS: Our findings indicate elicitation tasks differentially elicit type and rate of gesture for young autistic children. To assess the gesture production of young autistic children, a structured task designed to elicit child requests will probe the developmental sophistication of the child's gesture repertoire, eliciting both the most gestures and the most developmentally advanced gestures.
WHAT THIS PAPER ADDS:
What is already known on the subject: Because of the importance of gesture in early communication for autistic and typically developing children, various tasks have been used to assess it. However, little is known about whether tasks differentially elicit type or rate of gesture for young autistic children from diverse racial and ethnic backgrounds.
What this paper adds to existing knowledge: Elicitation tasks differentially elicit type and rate of gesture for young autistic children in the early stages of gesture.
What are the potential or actual clinical implications of this work? We recommend a structured task designed to elicit child requests to assess the developmental sophistication of a child's gesture repertoire.

12.
BMC Med Educ ; 24(1): 509, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38715008

ABSTRACT

BACKGROUND: In this era of rapid technological development, medical schools have had to use modern technology to enhance traditional teaching, and online teaching was preferred by many medical schools. However, due to the complexity of intracranial anatomy, students found this material challenging to study online and were liable to lose interest in neurosurgery, which is disadvantageous to the field's development. We therefore developed this database to help students better learn neuroanatomy. MAIN BODY: The data in this database were sourced from Rhoton's Cranial Anatomy and Surgical Approaches and from Neurosurgery Tricks of the Trade. We then designed many hand-gesture figures connected with the atlas of anatomy. The database is divided into three parts: intracranial arteries, intracranial veins, and neurosurgical approaches. Each section contains an atlas of anatomy, with gestures representing vessels and nerves. Pictures of hand gestures and the atlas of anatomy are available to view on GRAVEN ( www.graven.cn ) without restrictions for all teachers and students. We recruited 50 undergraduate students and randomly divided them into two groups, taught using either traditional teaching methods alone or the GRAVEN database combined with those traditional methods. Results revealed a significant improvement in academic performance when the GRAVEN database was combined with traditional teaching methods, compared to traditional teaching methods alone. CONCLUSION: This database helps students learn intracranial anatomy and neurosurgical approaches. Gesture teaching can effectively simulate the relationships between human organs and tissues through the flexibility of the hands and fingers, improving interest in and education about anatomy.


Assuntos
Bases de Dados Factuais , Educação de Graduação em Medicina , Gestos , Neurocirurgia , Humanos , Neurocirurgia/educação , Educação de Graduação em Medicina/métodos , Estudantes de Medicina , Neuroanatomia/educação , Ensino , Feminino , Masculino
13.
Sensors (Basel) ; 24(12)2024 Jun 09.
Article in English | MEDLINE | ID: mdl-38931542

ABSTRACT

This review explores the historical and current significance of gestures as a universal form of communication, with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day, where advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. This review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static or dynamic, and grades their detection difficulty. The paper also reviews the haptic devices used in VR and their advantages and challenges, and provides an overview of the hand gesture acquisition process, from inputs and pre-processing to pose detection, for both static and dynamic gestures.


Assuntos
Gestos , Mãos , Realidade Virtual , Humanos , Mãos/fisiologia , Algoritmos , Interface Usuário-Computador , Inteligência Artificial
14.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000981

ABSTRACT

This work presents a novel approach for elbow gesture recognition using an array of inductive sensors and a machine learning algorithm (MLA). This paper describes the design of the inductive sensor array integrated into a flexible, wearable sleeve. The sensor array consists of coils sewn onto the sleeve, which form an LC tank circuit together with externally connected inductors and capacitors. Changes in elbow position modulate the inductance of these coils, allowing the sensor array to capture a range of elbow movements. The signal processing pipeline and the random forest MLA used to recognize 10 different elbow gestures are described. Rigorous evaluation on 8 subjects, together with data augmentation that expanded the dataset to 1270 trials per gesture, enabled the system to achieve remarkable accuracies of 98.3% and 98.5% using 5-fold cross-validation and leave-one-subject-out cross-validation, respectively. Test performance was then assessed using data collected from five new subjects, and the high classification accuracy of 94% demonstrates the generalizability of the designed system. The proposed solution addresses the limitations of existing elbow gesture recognition designs and offers a practical and effective approach for intuitive human-machine interaction.
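The sensing principle can be illustrated with the LC tank's resonant-frequency formula, f0 = 1/(2π√(LC)): as elbow position modulates the coil inductance L, f0 shifts measurably. The component values below are illustrative assumptions, not the paper's circuit.

```python
# Worked sketch of why elbow flexion is observable in an LC tank circuit.
import math

def resonant_freq(L_henry: float, C_farad: float) -> float:
    """f0 = 1 / (2*pi*sqrt(L*C)) for an ideal LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henry * C_farad))

C = 100e-12                      # assumed 100 pF external capacitor
for L_uH in (10.0, 10.5, 11.0):  # coil inductance modulated by elbow position
    f = resonant_freq(L_uH * 1e-6, C)
    print(f"L = {L_uH:4.1f} uH  ->  f0 = {f/1e6:.3f} MHz")
```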


Assuntos
Algoritmos , Cotovelo , Gestos , Aprendizado de Máquina , Humanos , Cotovelo/fisiologia , Dispositivos Eletrônicos Vestíveis , Reconhecimento Automatizado de Padrão/métodos , Processamento de Sinais Assistido por Computador , Masculino , Adulto , Feminino
15.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732846

ABSTRACT

Brain-computer interfaces (BCIs) allow information to be transmitted directly from the human brain to a computer, enhancing the ability of human brain activity to interact with the environment. In particular, BCI-based control systems are highly desirable because they can control equipment used by people with disabilities, such as wheelchairs and prosthetic legs. BCIs make use of electroencephalograms (EEGs) to decode the human brain's status. This paper presents an EEG-based facial gesture recognition method based on a self-organizing map (SOM). The proposed method uses the α, β, and θ power bands of the EEG signals as the features of a gesture, and an SOM-Hebb classifier to classify the feature vectors. We used the proposed method to develop an online facial gesture recognition system, with facial gestures defined by combining facial movements that are easy to detect in EEG signals. Experiments showed that the recognition accuracy of the system ranged from 76.90% to 97.57%, depending on the number of gestures recognized. The lowest accuracy (76.90%) occurred when recognizing seven gestures, which is still quite accurate compared with other EEG-based recognition systems. The online recognition system was implemented in MATLAB, and it took 5.7 s to complete the recognition flow.
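As an illustration of the feature extraction step, here is a minimal sketch that estimates θ, α, and β band power for one EEG channel using Welch's method; the band edges, sampling rate, and synthetic test signal are assumptions, not the paper's settings.

```python
# Band-power features (theta/alpha/beta) from one EEG channel via Welch's PSD.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray) -> dict:
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

t = np.arange(0, 4, 1 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # 10 Hz "alpha" tone
print(band_powers(eeg))  # alpha power should dominate
```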


Assuntos
Interfaces Cérebro-Computador , Eletroencefalografia , Gestos , Humanos , Eletroencefalografia/métodos , Face/fisiologia , Algoritmos , Reconhecimento Automatizado de Padrão/métodos , Processamento de Sinais Assistido por Computador , Encéfalo/fisiologia , Masculino
16.
Sensors (Basel) ; 24(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity because they allow for effortless, natural interaction between the user and the machine by processing information gathered from one or more sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration with newly acquired data, owing to their need to adapt in dynamic environments where test-time data continuously change in unforeseen ways; this need significantly contributes to their abandonment and remains unexplored by the ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable to the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a domain-adversarial training algorithm), with proper initialization, offers an average 24.99% classification accuracy enhancement compared to the no-re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration differ, the observed enhancements may be rather small or even unnoticeable.
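DANN's central mechanism is a gradient reversal layer: the feature extractor receives negated gradients from a domain discriminator, pushing features toward domain invariance between calibration-session and test-session data. Below is a generic PyTorch sketch of that layer, not the authors' exact network.

```python
# Gradient reversal layer (GRL), the core trick in DANN-style training.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)           # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None  # flip (and scale) the gradient

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

feat = torch.randn(4, 8, requires_grad=True)
domain_logit = grad_reverse(feat, lam=0.5).sum()  # stand-in for a domain head
domain_logit.backward()
print(feat.grad[0, :3])  # gradients arrive negated (scaled by -0.5)
```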


Assuntos
Algoritmos , Ultrassonografia , Humanos , Ultrassonografia/métodos , Interface Usuário-Computador , Punho/fisiologia , Punho/diagnóstico por imagem , Redes Neurais de Computação , Dedos/fisiologia , Sistemas Homem-Máquina , Gestos
17.
Sensors (Basel) ; 24(4)2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38400278

ABSTRACT

Commercial, high-tech upper limb prostheses offer extensive functionality and are equipped with high-grade control mechanisms. However, they are relatively expensive and are not accessible to the majority of amputees. Therefore, more affordable, accessible, open-source, and 3D-printable alternatives are being developed. A commonly proposed approach to controlling these prostheses is to use bio-potentials generated by skeletal muscles, which can be measured using surface electromyography (sEMG). However, this control mechanism either lacks accuracy when a single sEMG sensor is used or requires wires to connect an array of multiple nodes, which hinders patients' movements. To mitigate these issues, we have developed a circular, wireless sEMG array that collects sEMG potentials on an array of electrodes that can be spread, uniformly or non-uniformly, around the circumference of a patient's arm. The modular sEMG system is combined with a Bluetooth Low Energy System on Chip, motion sensors, and a battery. We benchmarked this system against a commercial, wired, state-of-the-art alternative and found an r = 0.98 (p < 0.01) Spearman correlation between the root-mean-squared (RMS) amplitudes of the sEMG measurements made by the two devices for the same set of 20 reference gestures, demonstrating that the system measures sEMG accurately. Additionally, we have demonstrated that the RMS amplitudes of sEMG measurements from the different nodes within the array are uncorrelated, indicating that they contain independent information that can be used for higher accuracy in gesture recognition. We show this by training a random forest classifier that can distinguish between 6 gestures with an accuracy of 97%. This work is important for a large and growing group of amputees whose quality of life could be improved using this technology.
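As a sketch of how per-node RMS amplitudes can drive gesture classification, here is a minimal random forest pipeline over synthetic sEMG windows; the node count, window length, and gesture-specific activation patterns are placeholders, not the study's data or code.

```python
# Windowed RMS features per array node -> random forest gesture classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rms_features(window: np.ndarray) -> np.ndarray:
    """window: (n_nodes, n_samples) raw sEMG -> one RMS value per node."""
    return np.sqrt((window ** 2).mean(axis=1))

rng = np.random.default_rng(1)
n_nodes, n_gestures, n_trials = 8, 6, 40
X, y = [], []
for g in range(n_gestures):
    scale = 1.0 + 0.5 * rng.random(n_nodes)  # synthetic per-gesture activation pattern
    for _ in range(n_trials):
        X.append(rms_features(scale[:, None] * rng.normal(size=(n_nodes, 200))))
        y.append(g)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[::2], y[::2])
print(f"held-out accuracy: {clf.score(X[1::2], y[1::2]):.2f}")
```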


Assuntos
Amputados , Membros Artificiais , Humanos , Eletromiografia , Qualidade de Vida , Músculo Esquelético/fisiologia , Gestos , Mãos/fisiologia
18.
Sensors (Basel) ; 24(4)2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38400416

ABSTRACT

Interest in developing techniques for acquiring and decoding biological signals is on the rise in the research community. This interest spans various applications, with a particular focus on prosthetic control and rehabilitation, where achieving precise hand gesture recognition from surface electromyography signals is crucial due to the complexity and variability of surface electromyography data. Advanced signal processing and data analysis techniques are required to effectively extract meaningful information from these signals. In our study, we utilized three datasets: NinaPro Database 1, CapgMyo Database A, and CapgMyo Database B. These datasets were chosen for their open-source availability and established role in evaluating surface electromyography classifiers. Hand gesture recognition using surface electromyography signals draws inspiration from image classification algorithms, which led us to introduce and develop a novel Signal Transformer. We systematically investigated two feature extraction techniques for surface electromyography signals: the Fast Fourier Transform and wavelet-based feature extraction. Our study demonstrated significant advancements in surface electromyography signal classification, particularly on NinaPro Database 1 and CapgMyo Database A, surpassing existing results in the literature. The newly introduced Signal Transformer outperformed traditional Convolutional Neural Networks by excelling at capturing structural details and incorporating global information from image-like signals through robust basis functions. Additionally, the inclusion of an attention mechanism within the Signal Transformer highlighted the significance of individual electrode readings, improving classification accuracy. These findings underscore the potential of the Signal Transformer as a powerful tool for precise and effective surface electromyography signal classification, with promising applications in prosthetic control and rehabilitation.
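Of the two feature extraction techniques investigated, the FFT-based one is simple to sketch: magnitude-spectrum features per electrode, concatenated into a single vector. The window size, bin count, and normalization below are illustrative assumptions, not the paper's exact settings.

```python
# FFT (magnitude-spectrum) feature extraction for a multi-channel sEMG window.
import numpy as np

def fft_features(window: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """window: (n_channels, n_samples) -> (n_channels * n_bins,) feature vector."""
    spec = np.abs(np.fft.rfft(window, axis=1))      # magnitude spectrum per channel
    spec = spec[:, 1:n_bins + 1]                    # drop DC, keep low-frequency bins
    spec /= spec.sum(axis=1, keepdims=True) + 1e-9  # per-channel normalization
    return spec.ravel()

window = np.random.randn(10, 200)   # e.g., 10 electrodes, 200-sample window
print(fft_features(window).shape)   # (320,)
```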


Assuntos
Gestos , Redes Neurais de Computação , Eletromiografia/métodos , Algoritmos , Processamento de Sinais Assistido por Computador
19.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732933

ABSTRACT

This paper investigates a method for precisely mapping human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. First, signal acquisition and processing were carried out, involving data collection from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions) and sensor placement. Interference was then removed by filtering, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with distinct features. Additionally, this paper constructs a hybrid network model combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established using a backpropagation neural network incorporating a momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction model, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.
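The preprocessing chain named above can be sketched in a few lines as a common variant: full-wave rectification (an added assumption, not named in the abstract), moving-average smoothing, and min-max normalization of one raw sEMG channel; the window length is also an assumption.

```python
# Common sEMG envelope preprocessing: rectify, smooth, normalize.
import numpy as np

def preprocess_semg(raw: np.ndarray, win: int = 50) -> np.ndarray:
    """Rectified, smoothed, min-max normalized envelope of one channel."""
    rect = np.abs(raw)                            # full-wave rectification
    kernel = np.ones(win) / win
    env = np.convolve(rect, kernel, mode="same")  # moving-average smoothing
    lo, hi = env.min(), env.max()
    return (env - lo) / (hi - lo + 1e-9)          # scale to [0, 1]

raw = np.random.randn(2000) * np.sin(np.linspace(0, np.pi, 2000))  # burst-like EMG
print(preprocess_semg(raw)[:5])
```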


Assuntos
Algoritmos , Braço , Eletromiografia , Movimento , Redes Neurais de Computação , Processamento de Sinais Assistido por Computador , Humanos , Eletromiografia/métodos , Braço/fisiologia , Movimento/fisiologia , Gestos , Masculino , Adulto
20.
Sensors (Basel) ; 24(8)2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38676024

ABSTRACT

In recent decades, technological advancements have transformed industry, highlighting the efficiency and safety benefits of automation. The integration of augmented reality (AR) and gesture recognition has emerged as an innovative approach to creating interactive environments for industrial equipment. Gesture recognition enhances AR applications by allowing intuitive interactions. This study presents a web-based architecture for the integration of AR and gesture recognition, designed for interaction with industrial equipment. Emphasizing hardware-agnostic compatibility, the proposed structure offers intuitive interaction with equipment control systems through natural gestures. Experimental validation, conducted using Google Glass, demonstrated the practical viability and potential of this approach in industrial operations. The development focused on optimizing the system's software and implementing techniques such as normalization, clamping, conversion, and filtering to achieve accurate and reliable gesture recognition under different usage conditions. The proposed approach promotes safer and more efficient industrial operations, contributing to research in AR and gesture recognition. Future work will include improving gesture recognition accuracy, exploring alternative gestures, and expanding platform integration to improve the user experience.
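The signal-conditioning techniques the study names (normalization, clamping, filtering) can be sketched generically for noisy per-frame hand-tracking data; the smoothing factor, data shapes, and helper names are assumptions for illustration, not the study's implementation.

```python
# Generic normalization, clamping, and low-pass filtering of gesture input.
import numpy as np

def clamp(v: np.ndarray, lo: float, hi: float) -> np.ndarray:
    return np.minimum(np.maximum(v, lo), hi)

def normalize(v: np.ndarray) -> np.ndarray:
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

class ExpSmoother:
    """Exponential low-pass filter to steady noisy per-frame hand landmarks."""
    def __init__(self, alpha: float = 0.3):
        self.alpha, self.state = alpha, None

    def update(self, x: np.ndarray) -> np.ndarray:
        self.state = x if self.state is None else (
            self.alpha * x + (1 - self.alpha) * self.state)
        return self.state

smoother = ExpSmoother()
for frame in np.random.randn(5, 3):               # 5 noisy 3-D landmark frames
    print(smoother.update(clamp(normalize(frame), -1, 1)))
```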


Assuntos
Realidade Aumentada , Gestos , Humanos , Indústrias , Software , Reconhecimento Automatizado de Padrão/métodos , Interface Usuário-Computador