Results 1 - 20 of 3,755
1.
J Acoust Soc Am ; 156(3): 1720-1733, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39283150

ABSTRACT

Previous research has shown that prosodic structure can regulate the relationship between co-speech gestures and speech itself. Most co-speech studies have focused on manual gestures, but head movements have also been observed to accompany speech events by Munhall, Jones, Callan, Kuratate, and Vatikiotis-Bateson [(2004). Psychol. Sci. 15(2), 133-137], and these co-verbal gestures may be linked to prosodic prominence, as shown by Esteve-Gibert, Borrás-Comes, Asor, Swerts, and Prieto [(2017). J. Acoust. Soc. Am. 141(6), 4727-4739], Hadar, Steiner, Grant, and Rose [(1984). Hum. Mov. Sci. 3, 237-245], and House, Beskow, and Granström [(2001). Lang. Speech 26(2), 117-129]. This study examines how the timing and magnitude of head nods may be related to degrees of prosodic prominence connected to different focus conditions. Using electromagnetic articulometry, a time-varying signal of vertical head movement for 12 native French speakers was generated to examine the relationship between head nod gestures and F0 peaks. The results suggest that speakers use two different alignment strategies, which integrate both temporal and magnitudinal aspects of the gesture. Some evidence of inter-speaker preferences in the use of the two strategies was observed, although the inter-speaker variability is not categorical. Importantly, prosodic prominence itself is not the cause of the difference between the two strategies, but instead magnifies their inherent differences. In this way, the use of co-speech head nod gestures under French focus conditions can be considered as a method of prosodic enhancement.


Subjects
Head Movements, Speech Acoustics, Humans, Male, Female, Young Adult, Adult, Speech Production Measurement/methods, Time Factors, Gestures, Voice Quality, France, Language
2.
Cogn Sci ; 48(9): e13484, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39228272

ABSTRACT

When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such polysemiotic (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory hypothesis-generating study of descriptions produced by an ethnolinguistic community little known to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their family in semi-guided kinship interviews. Analyses of the speech, gesture, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline are placed on their right side, when they are mentioned in speech. Moreover, we find that the gestures produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often along two diagonal directions of the sagittal axis. We show that these diagonals are also found in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices with drawing. We interpret this behavior as evidence of a spatial template, which Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication and thereby provide further insights into the diversity of social structures.


Subjects
Cognition, Communication, Family, Gestures, Humans, Male, Female, Family/psychology, Adult, Speech, Middle Aged
3.
Article in English | MEDLINE | ID: mdl-39196739

ABSTRACT

The objective of this work is to develop a novel myoelectric pattern recognition (MPR) method to mitigate the concurrent interference of electrode shift and loosening, thereby improving the practicality of MPR-based gestural interfaces towards intelligent control. A Siamese auto-encoder network (SAEN) was established to learn robust feature representations against random occurrences of both electrode shift and loosening. The SAEN model was trained with a variety of shifted-view and masked-view feature maps, which were simulated through feature transformation operated on the original feature maps. Specifically, three mean square error (MSE) losses were devised to warrant the trained model's capability in adaptive recovery of any given interfered data. The SAEN was deployed as an independent feature extractor followed by a common support vector machine acting as the classifier. To evaluate the effectiveness of the proposed method, an eight-channel armband was adopted to collect surface electromyography (EMG) signals from nine subjects performing six gestures. Under the condition of concurrent interference, the proposed method achieved the highest classification accuracy in both offline and online testing compared to five common methods, with statistical significance (p <0.05). The proposed method was demonstrated to be effective in mitigating the electrode shift and loosening interferences. Our work offers a valuable solution for enhancing the robustness of myoelectric control systems.
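The shifted-view and masked-view feature transformations described above can be sketched in NumPy. The function name, parameters, and shapes below are illustrative assumptions, not taken from the paper; in the actual SAEN pipeline these interfered views are what the auto-encoder learns to recover the original feature map from, under MSE losses:

```python
import numpy as np

def augment_emg_features(feat, max_shift=2, mask_prob=0.3, rng=None):
    """Simulate electrode shift (channel rotation) and electrode loosening
    (channel masking) on an EMG feature map of shape (channels, features).

    Returns a list of interfered views of the original map."""
    rng = np.random.default_rng(rng)
    views = []
    # Shifted views: rotating an armband approximately rotates the channel order.
    for s in range(1, max_shift + 1):
        views.append(np.roll(feat, s, axis=0))
        views.append(np.roll(feat, -s, axis=0))
    # Masked view: a loosened electrode yields (near-)zero signal on its channel.
    masked = feat.copy()
    mask = rng.random(feat.shape[0]) < mask_prob
    masked[mask] = 0.0
    views.append(masked)
    return views

# Example: 8-channel armband, 4 features per channel
feat = np.arange(32, dtype=float).reshape(8, 4)
views = augment_emg_features(feat, max_shift=2, mask_prob=0.3, rng=0)
print(len(views))  # 5 views: 4 shifted + 1 masked
```

Training on such simulated views (rather than collecting interfered data) is what lets the model adapt to random occurrences of both interferences at test time.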


Subjects
Algorithms, Electrodes, Electromyography, Gestures, Neural Networks, Computer, Pattern Recognition, Automated, Support Vector Machine, Humans, Electromyography/methods, Pattern Recognition, Automated/methods, Male, Adult, Female, Young Adult, Reproducibility of Results
4.
F1000Res ; 13: 798, 2024.
Article in English | MEDLINE | ID: mdl-39139467

ABSTRACT

Background: The consensus in scientific literature is that each child undergoes a unique linguistic development path, albeit with shared developmental stages. Some children excel or lag behind their peers in language skills. Consequently, a key challenge in language acquisition research is pinpointing factors influencing individual differences in language development. Methods: We observed children longitudinally from 3 to 24 months of life to explore early predictors of vocabulary size. Based on the productive vocabulary size of children at 24 months, 30 children met our sample selection criteria: 10 late talkers and 10 early talkers, and we compared them with 10 typical talkers. We evaluated interactive behaviors at 3, 6, 9 and 12 months, considering vocal production, gaze at mother's face, and gestural production during mother-child interactions, and we considered mothers' report of children's actions and gestures and receptive-vocabulary size at 15 and 18 months. Results: Results indicated early precursors of language outcome at 24 months identifiable as early as 3 months in vocal productions, 6 months for gaze at mother's face and 12 months for gestural productions. Conclusions: Our research highlights both theoretical and practical implications. Theoretically, identifying the early indicators of belonging to the group of late or early talkers underscores the significant role of this developmental period for future studies. On a practical note, our findings emphasize the crucial need for early investigations to identify predictors of vocabulary development before the typical age at which lexical delay is identified.


Subjects
Language Development, Humans, Infant, Female, Male, Child, Preschool, Vocabulary, Mother-Child Relations, Speech/physiology, Longitudinal Studies, Gestures
5.
Sensors (Basel) ; 24(16)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39204920

ABSTRACT

Medication adherence is an essential aspect of healthcare for patients and is important for achieving medical objectives. However, the lack of standard techniques for measuring adherence is a global concern, making it challenging to accurately monitor and measure patient medication regimens. The use of sensor technology for medication adherence monitoring has received much attention lately, since it makes it possible to continuously observe patients' medication adherence behavior. Sensor devices or smart wearables utilize state-of-the-art machine learning (ML) methods to analyze intricate data patterns and provide accurate predictions. The key aim of this work is to develop a sensor-based hand gesture recognition model to predict medication activities. In this research, a smart sensor device-based hand gesture prediction model is developed to recognize medication intake activities. The device includes a tri-axial gyroscope, geometric, and accelerometer sensors to sense and gather data from hand gestures. A smartphone application gathers hand gesture data from the sensor device, which is then stored in a cloud database in .csv format. These data are collected, processed, and classified to recognize the medication intake activity using the proposed novel neural network model called Sea Horse Optimization-Deep Neural Network (SHO-DNN). The SHO technique is implemented to update the biases, weights, and number of hidden layers in the DNN model. By updating these parameters, the DNN model is improved in classifying the samples of hand gestures to identify the medication activities. The research model demonstrates impressive performance, with an accuracy of 98.59%, sensitivity of 97.82%, precision of 98.69%, and an F1 score of 98.48%. Hence, the proposed model outperformed most available models in all the aforementioned aspects. The results indicate that this model is a promising approach for medication adherence monitoring in healthcare applications.
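The four reported metrics all follow directly from binary confusion counts. A minimal sketch (the counts below are made up for illustration, not derived from the study's data):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall), precision, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)          # true-positive rate
    precision = tp / (tp + fp)            # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

# Hypothetical counts for one "medication intake" class
acc, sens, prec, f1 = binary_metrics(tp=90, fp=2, fn=3, tn=105)
print(f"acc={acc:.3f} sens={sens:.3f} prec={prec:.3f} f1={f1:.3f}")
```

For multi-class gesture recognition these would be computed per class and averaged, which is presumably how the paper's aggregate figures were obtained.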


Subjects
Gestures, Hand, Medication Adherence, Neural Networks, Computer, Humans, Hand/physiology, Smartphone, Wearable Electronic Devices, Algorithms, Mobile Applications, Machine Learning
6.
Sensors (Basel) ; 24(16)2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39204927

ABSTRACT

This study delves into decoding hand gestures using surface electromyography (EMG) signals collected via a precision Myo-armband sensor, leveraging machine learning algorithms. The research entails rigorous data preprocessing to extract features and labels from raw EMG data. Following partitioning into training and testing sets, four traditional machine learning models are scrutinized for their efficacy in classifying finger movements across seven distinct gestures. The analysis includes meticulous parameter optimization and five-fold cross-validation to evaluate model performance. Among the models assessed, the Random Forest emerges as the top performer, consistently delivering superior precision, recall, and F1-score values across gesture classes, with ROC-AUC scores surpassing 99%. These findings underscore the Random Forest model as the optimal classifier for our EMG dataset, promising significant advancements in healthcare rehabilitation engineering and enhancing human-computer interaction technologies.
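The evaluation protocol — a Random Forest classifier under five-fold cross-validation — can be sketched with scikit-learn. The synthetic data below stand in for the Myo-armband EMG features, and the hyperparameters are assumptions, since the abstract does not specify them:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: 7 gesture classes, synthetic feature vectors
# (the paper extracts features from 8-channel Myo-armband sEMG).
X, y = make_classification(n_samples=700, n_features=32, n_informative=16,
                           n_classes=7, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # five-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Swapping `clf` for an SVM, k-NN, or logistic regression reproduces the kind of model comparison the study performs.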


Subjects
Algorithms, Electromyography, Gestures, Hand, Machine Learning, Humans, Electromyography/methods, Hand/physiology, Male, Female, Adult, Signal Processing, Computer-Assisted, Young Adult, Pattern Recognition, Automated/methods, Movement/physiology
7.
Article in English | MEDLINE | ID: mdl-39172614

ABSTRACT

Surface electromyography (sEMG), a human-machine interface for gesture recognition, has shown promising potential for decoding motor intentions, but a variety of nonideal factors restrict its practical application in assistive robots. In this paper, we summarized the current mainstream gesture recognition strategies and proposed a gesture recognition method based on multimodal canonical correlation analysis feature fusion classification (MCAFC) for a nonideal condition that occurs in daily life, i.e., posture variations. The deep features of the sEMG and acceleration signals were first extracted via convolutional neural networks. A canonical correlation analysis was subsequently performed to associate the deep features of the two modalities. The transformed features were utilized as inputs to a linear discriminant analysis classifier to recognize the corresponding gestures. Both offline and real-time experiments were conducted on eight non-disabled subjects. The experimental results indicated that MCAFC achieved an average classification accuracy, average motion completion rate, and average motion completion time of 93.44%, 94.05%, and 1.38 s, respectively, with multiple dynamic postures, indicating significantly better performance than that of comparable methods. The results demonstrate the feasibility and superiority of the proposed multimodal signal feature fusion method for gesture recognition with posture variations, providing a new scheme for myoelectric control.


Subjects
Algorithms, Electromyography, Gestures, Hand, Neural Networks, Computer, Pattern Recognition, Automated, Posture, Humans, Posture/physiology, Hand/physiology, Male, Pattern Recognition, Automated/methods, Adult, Female, Young Adult, Discriminant Analysis, Deep Learning, Healthy Volunteers
8.
Sci Rep ; 14(1): 20247, 2024 08 30.
Article in English | MEDLINE | ID: mdl-39215011

ABSTRACT

Long-term electroencephalography (EEG) recordings have primarily been used to study resting-state fluctuations. These recordings provide valuable insights into various phenomena such as sleep stages, cognitive processes, and neurological disorders. However, this study explores a new angle, focusing for the first time on the evolving nature of EEG dynamics over time within the context of movement. Twenty-two healthy individuals were measured six times from 2 p.m. to 12 a.m. with intervals of 2 h while performing four right-hand gestures. Analysis of movement-related cortical potentials (MRCPs) revealed a reduction in amplitude for the motor and post-motor potential during later hours of the day. Evaluation in source space displayed an increase in the activity of M1 of the contralateral hemisphere and the SMA of both hemispheres until 8 p.m. followed by a decline until midnight. Furthermore, we investigated how changes over time in MRCP dynamics affect the ability to decode motor information. This was achieved by developing classification schemes to assess performance across different scenarios. The observed variations in classification accuracies over time strongly indicate the need for adaptive decoders. Such adaptive decoders would be instrumental in delivering robust results, essential for the practical application of BCIs during day and nighttime usage.


Subjects
Electroencephalography, Gestures, Hand, Humans, Electroencephalography/methods, Male, Female, Hand/physiology, Adult, Young Adult, Movement/physiology, Motor Cortex/physiology, Brain-Computer Interfaces
9.
Cogn Sci ; 48(8): e13486, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39155515

ABSTRACT

Research shows that high- and low-pitch sounds can be associated with various meanings. For example, high-pitch sounds are associated with small concepts, whereas low-pitch sounds are associated with large concepts. This study presents three experiments revealing that high-pitch sounds are also associated with open concepts and opening hand actions, while low-pitch sounds are associated with closed concepts and closing hand actions. In Experiment 1, this sound-meaning correspondence effect was shown using the two-alternative forced-choice task, while Experiments 2 and 3 used reaction time tasks to show this interaction. In Experiment 2, high-pitch vocalizations were found to facilitate opening hand gestures, and low-pitch vocalizations were found to facilitate closing hand gestures, when performed simultaneously. In Experiment 3, high-pitched vocalizations were produced particularly rapidly when the visual target stimulus presented an open object, and low-pitched vocalizations were produced particularly rapidly when the target presented a closed object. These findings are discussed concerning the meaning of intonational cues. They are suggested to be based on cross-modally representing conceptual spatial knowledge in sensory, motor, and affective systems. Additionally, this pitch-opening effect might share cognitive processes with other pitch-meaning effects.


Subjects
Reaction Time, Humans, Male, Female, Young Adult, Adult, Pitch Perception/physiology, Space Perception/physiology, Gestures, Sound, Acoustic Stimulation, Cues
10.
Sci Rep ; 14(1): 18564, 2024 08 09.
Article in English | MEDLINE | ID: mdl-39122791

ABSTRACT

High-density electromyography (HD-EMG) can provide a natural interface to enhance human-computer interaction (HCI). This study aims to demonstrate the capability of a novel HD-EMG forearm sleeve equipped with up to 150 electrodes to capture high-resolution muscle activity, decode complex hand gestures, and estimate continuous hand position via joint angle predictions. Ten able-bodied participants performed 37 hand movements and grasps while EMG was recorded using the HD-EMG sleeve. Simultaneously, an 18-sensor motion capture glove calculated 23 joint angles from the hand and fingers across all movements for training regression models. For classifying across the 37 gestures, our decoding algorithm was able to differentiate between sequential movements with 97.3 ± 0.3% accuracy calculated on a 100 ms bin-by-bin basis. In a separate mixed dataset consisting of 19 movements randomly interspersed, decoding performance achieved an average bin-wise accuracy of 92.8 ± 0.8%. When evaluating decoders for use in real-time scenarios, we found that decoders can reliably decode both movements and movement transitions, achieving an average accuracy of 93.3 ± 0.9% on the sequential set and 88.5 ± 0.9% on the mixed set. Furthermore, we estimated continuous joint angles from the EMG sleeve data, achieving an R² of 0.884 ± 0.003 in the sequential set and 0.750 ± 0.008 in the mixed set. Median absolute error (MAE) was kept below 10° across all joints, with a grand average MAE of 1.8 ± 0.04° and 3.4 ± 0.07° for the sequential and mixed datasets, respectively. We also assessed two algorithm modifications to address specific challenges for EMG-driven HCI applications. To minimize decoder latency, we used a method that accounts for reaction time by dynamically shifting cue labels in time. To reduce training requirements, we show that pretraining models with historical data provided an increase in decoding performance compared with models that were not pretrained when reducing the in-session training data to only one attempt of each movement. The HD-EMG sleeve, combined with sophisticated machine learning algorithms, can be a powerful tool for hand gesture recognition and joint angle estimation. This technology holds significant promise for applications in HCI, such as prosthetics, assistive technology, rehabilitation, and human-robot collaboration.
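The continuous joint-angle estimation can be sketched as multi-output regression from per-bin EMG features to the 23 glove-derived angles, scored with the same R² and MAE metrics the abstract reports. The linear model and synthetic data below are assumptions for illustration (the paper's regression models are not specified here):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, median_absolute_error

rng = np.random.default_rng(0)
n_train, n_test, n_chan, n_joints = 500, 100, 150, 23  # 150-electrode sleeve, 23 angles
W = rng.normal(size=(n_chan, n_joints))                # hidden EMG-to-angle mapping

def make_split(n):
    X = rng.normal(size=(n, n_chan))                   # EMG feature snapshot per 100 ms bin
    angles = X @ W + 0.1 * rng.normal(size=(n, n_joints))  # noisy joint angles (deg)
    return X, angles

X_tr, y_tr = make_split(n_train)
X_te, y_te = make_split(n_test)

model = Ridge(alpha=1.0).fit(X_tr, y_tr)               # one regressor, 23 outputs
pred = model.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.3f}, "
      f"MAE = {median_absolute_error(y_te, pred):.3f} deg")
```

Reporting median rather than mean absolute error, as the study does, makes the summary robust to occasional large excursions on individual joints.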


Subjects
Electromyography, Gestures, Hand, Wearable Electronic Devices, Humans, Electromyography/methods, Male, Female, Adult, Hand/physiology, Algorithms, Movement/physiology, Young Adult
11.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123896

ABSTRACT

For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.


Subjects
Gestures, Robotics, Robotics/methods, Humans, Pattern Recognition, Automated/methods, Algorithms, Artificial Intelligence
12.
Sensors (Basel) ; 24(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity as they allow for an effortless and natural interaction between the user and the machine, processing information gathered from one or more sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration using newly acquired data, because they must adapt to dynamic environments where test-time data continuously change in unforeseen ways; this need significantly contributes to their abandonment and remains unexplored by the Ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which utilize unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves comparable performance with the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% classification accuracy enhancement compared to the no-re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration differ, the observed enhancements would be small or even unnoticeable.
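DANN requires training an adversarial domain-discriminator network. As a much simpler illustration of unsupervised domain adaptation on the same kind of unlabeled re-calibration data, CORrelation ALignment (CORAL) can be sketched in pure NumPy. This is a swapped-in baseline, not the paper's method, and the session data below are synthetic:

```python
import numpy as np

def coral(Xs, Xt, eps=1e-5):
    """CORrelation ALignment: whiten source features and re-color them with
    the target covariance, so a classifier trained on the transformed source
    better matches unlabeled target data (a simpler UDA baseline than DANN)."""
    Xs = Xs - Xs.mean(axis=0)
    Xt = Xt - Xt.mean(axis=0)
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])

    def mpow(C, power):
        # symmetric matrix power via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, eps, None) ** power) @ V.T

    return Xs @ mpow(Cs, -0.5) @ mpow(Ct, 0.5)

rng = np.random.default_rng(0)
Xs = rng.normal(size=(400, 6)) * np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])  # calibration session
Xt = rng.normal(size=(400, 6)) @ rng.normal(size=(6, 6))                   # later, drifted session
Xs_aligned = coral(Xs, Xt)  # second-order statistics now match the target session
```

After alignment the source covariance matches the target's, so a classifier refit on `Xs_aligned` needs no target labels — the same property that makes UDA attractive for within-day re-calibration.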


Subjects
Algorithms, Ultrasonography, Humans, Ultrasonography/methods, User-Computer Interface, Wrist/physiology, Wrist/diagnostic imaging, Neural Networks, Computer, Fingers/physiology, Man-Machine Systems, Gestures
13.
Article in English | MEDLINE | ID: mdl-39186426

ABSTRACT

Hand motor impairment has seriously affected the daily life of the elderly. We developed an electromyography (EMG) exosuit system with bidirectional hand support for bilateral coordination assistance, based on a dynamic gesture recognition model using a graph convolutional network (GCN) and a long short-term memory network (LSTM). The system comprises a hardware subsystem and a software subsystem. The hardware subsystem includes an exosuit jacket, a backpack module, an EMG recognition module, and a bidirectional support glove. The software subsystem, based on the dynamic gesture recognition model, was designed to identify dynamic and static gestures by extracting the spatio-temporal features of the patient's EMG signals and to control glove movement. The offline training experiment built the gesture recognition models for each subject and evaluated the feasibility of the recognition model; the online control experiments verified the effectiveness of the exosuit system. The experimental results showed that the proposed model achieves a gesture recognition rate of 96.42 ± 3.26%, higher than the other three traditional recognition models. All subjects successfully completed two daily tasks within a short time, and the success rates of bilateral coordination assistance are 88.75% and 86.88%. The exosuit system can effectively help patients through its bidirectional hand support strategy for bilateral coordination assistance in daily tasks, and the proposed method can be applied to various limb assistance scenarios.


Subjects
Electromyography, Gestures, Hand, Humans, Hand/physiology, Male, Female, Exoskeleton Device, Adult, Algorithms, Neural Networks, Computer, Pattern Recognition, Automated/methods, Software, Activities of Daily Living, Young Adult, Feasibility Studies
14.
Philos Trans R Soc Lond B Biol Sci ; 379(1911): 20230156, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39155717

ABSTRACT

The gestures we produce serve a variety of functions-they affect our communication, guide our attention and help us think and change the way we think. Gestures can consequently also help us learn, generalize what we learn and retain that knowledge over time. The effects of gesture-based instruction in mathematics have been well studied. However, few of these studies are directly applicable to classroom environments. Here, we review literature that highlights the benefits of producing and observing gestures when teaching and learning mathematics, and we provide suggestions for designing research studies with an eye towards how gestures can feasibly be applied to classroom learning. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.


Subjects
Gestures, Learning, Mathematics, Humans, Child, Mathematics/education, Teaching, School Teachers/psychology, Cognition, Schools
15.
Am J Speech Lang Pathol ; 33(5): 2636-2644, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39007707

ABSTRACT

PURPOSE: Interactive songs are a common shared activity for many families and within early childhood classrooms. These activities have the potential to be rich sources of vocabulary input for children with and without language impairments. However, little is known about how caregivers currently provide input for different types of vocabulary during these activities. The purpose of this research note is to provide preliminary information on how caregivers provide input related to verbs within an interactive song activity. METHOD: Observations of caregivers engaging in song activities with their child were collected. The gestures used during the interactions were coded. RESULTS: The results show that, when given examples, caregivers provide gestural input both frequently and consistently. CONCLUSIONS: Clinical implications and future directions for exploring songs as an intervention context are discussed.


Subjects
Gestures, Vocabulary, Humans, Child, Preschool, Male, Female, Child Language, Singing, Caregivers/psychology, Music
16.
Hum Brain Mapp ; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants more accurately recognised spoken words and showed a more pronounced suppression of alpha power-an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.


Subjects
Gestures, Speech Perception, Humans, Female, Male, Speech Perception/physiology, Young Adult, Adult, Visual Perception/physiology, Electroencephalography, Comprehension/physiology, Acoustic Stimulation, Speech/physiology, Brain/physiology, Photic Stimulation/methods
17.
ACS Appl Mater Interfaces ; 16(29): 38780-38791, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39010653

ABSTRACT

Flexible strain sensors have been widely researched in fields such as smart wearables, human health monitoring, and biomedical applications. However, simultaneously achieving a wide sensing range and high sensitivity in flexible strain sensors remains a challenge, limiting their further applications. To address these issues, a cross-scale combinatorial bionic hierarchical design featuring microscale morphology combined with a macroscale base is presented to balance the sensing range and sensitivity. Inspired by the combination of serpentine and butterfly wing structures, this study employs three-dimensional printing, prestretching, and mold transfer processes to construct a combinatorial bionic hierarchical flexible strain sensor (CBH-sensor) with serpentine-shaped inverted-V-groove/wrinkling-cracking structures. The CBH-sensor has a wide sensing range of 150% and high sensitivity, with a gauge factor of up to 2416.67. In addition, it demonstrates the application of the CBH-sensor array in sign language gesture recognition, successfully identifying nine different sign language gestures with an impressive accuracy of 100% with the assistance of machine learning. The CBH-sensor exhibits considerable promise for enabling unobstructed communication between individuals who use sign language and those who do not. Furthermore, it has wide-ranging possibilities for use in gesture-driven human-computer interfaces.
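The reported gauge factor is defined as the relative resistance change per unit strain, GF = (ΔR/R0)/ε. A one-line computation with illustrative numbers (the abstract reports GF up to 2416.67 and strain up to 150%, i.e. ε = 1.5; the resistance figure below is back-calculated, not reported):

```python
def gauge_factor(delta_r_over_r0, strain):
    """Gauge factor GF = (ΔR/R0) / ε: relative resistance change per unit strain."""
    return delta_r_over_r0 / strain

# A GF of ~2416.67 at 150% strain (ε = 1.5) would correspond to the
# resistance changing by roughly 3625 times its initial value.
print(gauge_factor(3625.0, 1.5))
```

The cracking microstructure is what drives ΔR/R0 this high: cracks opening under strain interrupt conductive paths far faster than geometric elongation alone.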


Subjects
Machine Learning, Sign Language, Humans, Bionics, Wearable Electronic Devices, Gestures, Printing, Three-Dimensional
18.
Infancy ; 29(5): 693-712, 2024.
Article in English | MEDLINE | ID: mdl-39030871

ABSTRACT

Infants' use of pointing gestures to direct and share attention develops during the first 2 years of life. Shyness, defined as an approach-avoidance motivational conflict during social interactions, may influence infants' use of pointing. Recent research distinguished between positive (gaze and/or head aversions while smiling) and non-positive (gaze and/or head aversions without smiling) shyness, which are related to different social and cognitive skills. We investigated whether positive and non-positive shyness in 12-month-old (n = 38; 15 girls) and 15-month-old (n = 45; 15 girls) infants were associated with their production of pointing gestures. Infants' expressions of shyness were observed during a social-exposure task in which the infant entered the laboratory room in their parent's arms and was welcomed by an unfamiliar person who provided attention and compliments. Infants' pointing was measured with a pointing task involving three stimuli: pleasant, unpleasant, and neutral. Positive shyness was positively associated with overall pointing at 15 months, especially in combination with high levels of non-positive shyness. In addition, infants who displayed more non-positive shyness pointed more frequently to direct the attention of the social partner to an unpleasant (vs. neutral) stimulus at both ages. Results indicate that shyness influences the early use of pointing to emotionally charged stimuli.


Subjects
Gestures, Shyness, Humans, Female, Male, Infant, Infant Behavior, Child Development, Social Interaction, Attention
19.
Article in English | MEDLINE | ID: mdl-39028609

ABSTRACT

Motor imagery (MI) based brain-computer interfaces (BCIs) have been extensively studied as a means of improving motor recovery in stroke patients by inducing neuroplasticity. However, due to the low spatial resolution and signal-to-noise ratio (SNR) of electroencephalography (EEG), MI-based BCI systems that decode hand movements within the same limb suffer from low classification accuracy and poor practicality. To overcome these limitations, an adaptive hybrid BCI system combining MI and steady-state visually evoked potentials (SSVEP) is developed to improve decoding accuracy while enhancing neural engagement. On the one hand, the SSVEP evoked by visual stimuli based on an action-state flickering coding approach significantly improves recognition accuracy compared to a pure MI-based BCI. On the other hand, to reduce the impact of SSVEP on MI due to the dual-task interference effect, event-related desynchronization (ERD) based neural engagement is monitored and fed back in real time to ensure the effective execution of MI tasks. Eight healthy subjects and six post-stroke patients were recruited to verify the effectiveness of the system. The results showed that four-class gesture recognition accuracies of healthy individuals and patients could be improved to 94.37 ± 4.77% and 79.38 ± 6.26%, respectively. Moreover, the designed hybrid BCI maintained the same degree of neural engagement as observed when subjects solely performed MI tasks. These results demonstrate the interactivity and clinical utility of the developed system for the rehabilitation of hand function in stroke patients.
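SSVEP decoding of the kind described above is commonly done by scoring the EEG against sine/cosine reference signals at each candidate flicker frequency (canonical correlation analysis, CCA, being the standard method). A minimal single-channel sketch on synthetic data, where the canonical correlation reduces to the norm of the projection onto the reference subspace (this is an illustrative simplification, not the paper's actual pipeline):

```python
import numpy as np

def ssvep_score(x, freq, fs, n_harmonics=2):
    """Correlation of signal x with sine/cosine references at freq.

    Simplified stand-in for CCA-based SSVEP decoding: for a single
    centered channel, the canonical correlation equals the norm of the
    projection of x onto the reference subspace divided by ||x||.
    """
    t = np.arange(len(x)) / fs
    refs = np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)]
    )
    x = x - x.mean()
    coef, *_ = np.linalg.lstsq(refs, x, rcond=None)
    return np.linalg.norm(refs @ coef) / np.linalg.norm(x)

def decode_frequency(x, candidates, fs):
    """Pick the candidate flicker frequency that best explains x."""
    return max(candidates, key=lambda f: ssvep_score(x, f, fs))

# Synthetic check: a noisy 12 Hz flicker response should decode as 12 Hz.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
best = decode_frequency(x, [8.0, 10.0, 12.0, 15.0], fs)
```

In a real hybrid MI+SSVEP system, the SSVEP score would be fused with MI features (e.g. ERD band power over sensorimotor channels), which this sketch does not attempt.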


Subjects
Brain-Computer Interfaces, Electroencephalography, Evoked Potentials, Visual, Hand, Stroke Rehabilitation, Humans, Stroke Rehabilitation/methods, Male, Electroencephalography/methods, Female, Evoked Potentials, Visual/physiology, Middle Aged, Adult, Algorithms, Imagination/physiology, Stroke/physiopathology, Gestures, Aged, Healthy Volunteers, Young Adult, Photic Stimulation, Signal-To-Noise Ratio, Reproducibility of Results
20.
J Speech Lang Hear Res ; 67(8): 2583-2599, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39038241

ABSTRACT

PURPOSE: Gesture delay in autistic infants and toddlers has been widely reported, but the developmental trajectory of gesture production during early childhood is understudied. Thus, little is known about how gesture production changes over time. The present study aimed to document the development of gesture production in autistic children and examine whether child-based factors (chronological age and initial language skills) predicted gesture development. METHOD: A total of 33 Chinese-speaking autistic children (mean age = 56.39 months, SD = 8.54 months) played with their parents at four time points over a 9-month period. Their speech was transcribed, and their gestures were coded from the parent-child interactions. Multilevel modeling analysis was used to investigate the development of gesture and its associated factors. RESULTS: The total number of gestures produced by autistic children decreased over time. Among the factors examined, children's initial age significantly and negatively predicted gesture production, while initial language skills positively predicted it. CONCLUSIONS: Gesture delay persists into the preschool years. The decline in gesture production was associated with children's age and initial language ability. These findings shed light on the difficulties surrounding gesture use in autistic children.
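The multilevel (growth-curve) analysis described above can be approximated in two stages: fit an OLS slope of gesture count over the four visits for each child, then summarise the slopes across children. A sketch on entirely synthetic data (the parameter values below are invented for illustration, not the study's estimates):

```python
import numpy as np

# Stage 1: per-child OLS slope over four visits; Stage 2: summarise
# across children. All data are synthetic and purely illustrative.
rng = np.random.default_rng(1)
n_children, n_visits = 33, 4
visits = np.arange(n_visits)  # four time points over ~9 months

# Simulate declining trajectories: intercept ~20 gestures per session,
# slope ~-1.5 gestures per visit, plus child-level and residual noise.
intercepts = rng.normal(20.0, 4.0, n_children)
slopes = rng.normal(-1.5, 0.5, n_children)
counts = (intercepts[:, None] + slopes[:, None] * visits
          + rng.normal(0.0, 1.0, (n_children, n_visits)))

def child_slope(y, x):
    """OLS slope of gesture count y on visit index x for one child."""
    return np.polyfit(x, y, 1)[0]

fitted_slopes = np.array([child_slope(y, visits) for y in counts])
mean_slope = fitted_slopes.mean()  # negative: gestures decrease over time
```

A true multilevel model (e.g. a random-slope model fit with `statsmodels` MixedLM or R's lme4) would additionally pool information across children and allow child-level predictors such as initial age and language skill; the two-stage version is only the intuition.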


Subjects
Gestures, Humans, Child, Preschool, Male, Female, Age Factors, Autistic Disorder/psychology, Language Development, Child Language, East Asian People