Results 1-20 of 3,682
1.
Nature ; 620(7976): 1037-1046, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37612505

ABSTRACT

Speech neuroprostheses have the potential to restore communication to people living with paralysis, but naturalistic speed and expressivity are elusive [1]. Here we use high-density surface recordings of the speech cortex in a clinical-trial participant with severe limb and vocal paralysis to achieve high-performance real-time decoding across three complementary speech-related output modalities: text, speech audio and facial-avatar animation. We trained and evaluated deep-learning models using neural data collected as the participant attempted to silently speak sentences. For text, we demonstrate accurate and rapid large-vocabulary decoding with a median rate of 78 words per minute and median word error rate of 25%. For speech audio, we demonstrate intelligible and rapid speech synthesis and personalization to the participant's pre-injury voice. For facial-avatar animation, we demonstrate the control of virtual orofacial movements for speech and non-speech communicative gestures. The decoders reached high performance with less than two weeks of training. Our findings introduce a multimodal speech-neuroprosthetic approach that has substantial promise to restore full, embodied communication to people living with severe paralysis.
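The word error rate reported in this abstract is the standard metric of word-level edit distance normalized by reference length; a minimal sketch (not from the paper, with hypothetical sentences) is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / len(ref)

# Hypothetical decoding example: one substitution in a 6-word reference.
rate = wer("the cat sat on the mat", "the cat sat on a mat")  # 1/6
```

A median WER of 25% thus means roughly one word in four of the decoded text differs from the attempted sentence.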


Subjects
Face, Neural Prostheses, Paralysis, Speech, Humans, Cerebral Cortex/physiology, Cerebral Cortex/physiopathology, Clinical Trials as Topic, Communication, Deep Learning, Gestures, Movement, Neural Prostheses/standards, Paralysis/physiopathology, Paralysis/rehabilitation, Vocabulary, Voice
2.
PLoS Biol ; 21(1): e3001939, 2023 01.
Article in English | MEDLINE | ID: mdl-36693024

ABSTRACT

In the comparative study of human and nonhuman communication, ape gesturing provided the first demonstrations of flexible, intentional communication outside human language. Rich repertoires of these gestures have been described in all ape species, bar one: us. Given that the majority of great ape gestural signals are shared, and their form appears biologically inherited, this creates a conundrum: Where did the ape gestures go in human communication? Here, we test human recognition and understanding of 10 of the most frequently used ape gestures. We crowdsourced data from 5,656 participants through an online game, which required them to select the meaning of chimpanzee and bonobo gestures in 20 videos. We show that humans may retain an understanding of ape gestural communication (either directly inherited or part of more general cognition), across gesture types and gesture meanings, with information on communicative context providing only a marginal improvement in success. By assessing comprehension, rather than production, we accessed part of the great ape gestural repertoire for the first time in adult humans. Cognitive access to an ancestral system of gesture appears to have been retained after our divergence from other apes, drawing deep evolutionary continuity between their communication and our own.


Subjects
Hominidae, Animals, Humans, Gestures, Animal Communication, Pan troglodytes, Pan paniscus
3.
Proc Natl Acad Sci U S A ; 120(42): e2300243120, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37824522

ABSTRACT

Nonhuman great apes inform one another in ways that can seem very humanlike. Especially in the gestural domain, their behavior exhibits many similarities with human communication, meeting widely used empirical criteria for intentionality. At the same time, there remain some manifest differences, most obviously the enormous range and scope of human expression. How to account for these similarities and differences in a unified way remains a major challenge. Here, we make a key distinction between the expression of intentions (Ladyginian) and the expression of specifically informative intentions (Gricean), and we situate this distinction within a "special case of" framework for classifying different modes of attention manipulation. We hence describe how the attested tendencies of great ape interaction (for instance, to be dyadic rather than triadic, to be about the here-and-now rather than "displaced," and to have a high degree of perceptual resemblance between form and meaning) are products of its Ladyginian but not Gricean character. We also reinterpret video footage of great ape gesture as Ladyginian but not Gricean, and we distinguish several varieties of meaning that are continuous with one another. We conclude that the evolutionary origins of linguistic meaning lie not in gradual changes in communication systems, but rather in gradual changes in social cognition, and specifically in what modes of attention manipulation are enabled by a species' cognitive phenotype: first Ladyginian and in turn Gricean. The second of these shifts rendered humans, and only humans, "language ready."


Subjects
Animal Communication, Hominidae, Animals, Humans, Biological Evolution, Language, Gestures
4.
Brain ; 147(1): 297-310, 2024 01 04.
Article in English | MEDLINE | ID: mdl-37715997

ABSTRACT

Although human praxis abilities are unique among primates, comparative observations suggest that these cognitive motor skills could have emerged from exploitation and adaptation of phylogenetically older building blocks, namely the parieto-frontal networks subserving prehension and manipulation. Within this framework, investigating to what extent praxis and prehension-manipulation overlap and diverge within parieto-frontal circuits could help in understanding how human cognition shapes hand actions. This issue has never been investigated by combining lesion mapping and direct electrophysiological approaches in neurosurgical patients. To this purpose, 79 right-handed patients with left-brain tumours, candidates for awake neurosurgery, were selected based on inclusion criteria. First, a lesion mapping was performed in the early postoperative phase to localize the regions associated with an impairment in praxis (imitation of meaningless and meaningful intransitive gestures) and visuo-guided prehension (reaching-to-grasping) abilities. Then, lesion results were anatomically matched with intraoperatively identified cortical and white matter regions, whose direct electrical stimulation impaired performance on the Hand Manipulation Task. The lesion mapping analysis showed that prehension and praxis impairments occurring in the early postoperative phase were associated with specific parietal sectors. Dorso-mesial parietal resections, including the superior parietal lobe and precuneus, affected prehension performance, while resections involving rostral intraparietal and inferior parietal areas affected praxis abilities (covariate clusters, 5000 permutations, cluster-level family-wise error correction P < 0.05). The dorsal bank of the rostral intraparietal sulcus was associated with both prehension and praxis (overlap of non-covariate clusters).
Within the praxis results, while resection involving inferior parietal areas affected mainly the imitation of meaningful gestures, resection involving intraparietal areas affected both meaningless and meaningful gesture imitation. In parallel, the intraoperative electrical stimulation of the rostral intraparietal and the adjacent inferior parietal lobe with their surrounding white matter during the hand manipulation task evoked different motor impairments, i.e., arrest and clumsy patterns, respectively. When integrating lesion mapping and intraoperative stimulation results, it emerges that imitation of praxis gestures first depends on the integrity of parietal areas within the dorso-ventral stream. Among these areas, the rostral intraparietal and the inferior parietal area play distinct roles in praxis and in the sensorimotor processes controlling manipulation. Due to its visuo-motor 'attitude', the rostral intraparietal sulcus, the putative human homologue of the monkey anterior intraparietal area, might enable the visuo-motor conversion of the observed gesture (direct pathway). Moreover, its functional interaction with the adjacent, phylogenetically more recent, inferior parietal areas might help integrate semantic-conceptual knowledge (indirect pathway) into the sensorimotor workflow, contributing to the cognitive upgrade of hand actions.


Subjects
Cerebral Cortex, Psychomotor Performance, Humans, Psychomotor Performance/physiology, Phylogeny, Parietal Lobe, Cognition, Brain Mapping, Magnetic Resonance Imaging, Gestures
5.
Proc Natl Acad Sci U S A ; 119(48): e2216035119, 2022 11 29.
Article in English | MEDLINE | ID: mdl-36417442

ABSTRACT

Since their emergence a few years ago, artificial intelligence (AI)-synthesized media, so-called deep fakes, have dramatically increased in quality, sophistication, and ease of generation. Deep fakes have been weaponized for use in nonconsensual pornography, large-scale fraud, and disinformation campaigns. Of particular concern is how deep fakes will be weaponized against world leaders during election cycles or times of armed conflict. We describe an identity-based approach for protecting world leaders from deep-fake imposters. Trained on several hours of authentic video, this approach captures distinct facial, gestural, and vocal mannerisms that we show can distinguish a world leader from an impersonator or deep-fake imposter.


Subjects
Artificial Intelligence, Deception, Gestures
6.
Proc Natl Acad Sci U S A ; 119(47): e2206486119, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36375066

ABSTRACT

Humans are argued to be unique in their ability and motivation to share attention with others about external entities, that is, sharing attention for sharing's sake. Indeed, in humans, using referential gestures declaratively to direct the attention of others toward external objects and events emerges in the first year of life. In contrast, wild great apes seldom use referential gestures, and when they do, it seems to be exclusively for imperative purposes. This apparent species difference has fueled the argument that the motivation and ability to share attention with others is a human-specific trait with important downstream consequences for the evolution of our complex cognition [M. Tomasello, Becoming Human (2019)]. Here, we report evidence of a wild ape showing a conspecific an item of interest. We provide video evidence of an adult female chimpanzee, Fiona, showing a leaf to her mother, Sutherland, in the context of leaf grooming in Kibale Forest, Uganda. We use a dataset of 84 similar leaf-grooming events to explore alternative explanations for the behavior, including food sharing and initiating dyadic grooming or playing. Our observations suggest that in highly specific social conditions, wild chimpanzees, like humans, may use referential showing gestures to direct others' attention to objects simply for the sake of sharing. The difference between humans and our closest living relatives in this regard may be quantitative rather than qualitative, with ramifications for our understanding of the evolution of human social cognition.


Subjects
Hominidae, Pan troglodytes, Female, Humans, Animals, Gestures, Animal Communication, Mothers
7.
Hum Brain Mapp ; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants more accurately recognised spoken words and showed a more pronounced suppression of alpha power, an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.


Subjects
Gestures, Speech Perception, Humans, Female, Male, Speech Perception/physiology, Young Adult, Adult, Visual Perception/physiology, Electroencephalography, Comprehension/physiology, Acoustic Stimulation, Speech/physiology, Brain/physiology, Photic Stimulation/methods
8.
Hum Brain Mapp ; 45(11): e26762, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39037079

ABSTRACT

Hierarchical models have been proposed to explain how the brain encodes actions, whereby different areas represent different features, such as gesture kinematics, target object, action goal, and meaning. The visual processing of action-related information is distributed over a well-known network of brain regions spanning separate anatomical areas, attuned to specific stimulus properties, and referred to as action observation network (AON). To determine the brain organization of these features, we measured representational geometries during the observation of a large set of transitive and intransitive gestures in two independent functional magnetic resonance imaging experiments. We provided evidence for a partial dissociation between kinematics, object characteristics, and action meaning in the occipito-parietal, ventro-temporal, and lateral occipito-temporal cortex, respectively. Importantly, most of the AON showed low specificity to all the explored features, and representational spaces sharing similar information content were spread across the cortex without being anatomically adjacent. Overall, our results support the notion that the AON relies on overlapping and distributed coding and may act as a unique representational space instead of mapping features in a modular and segregated manner.
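Representational geometries of the kind compared in this study are commonly summarized as representational dissimilarity matrices (RDMs), which record how distinct each pair of conditions is in a region's activity patterns. A minimal sketch of that general technique, using simulated data rather than anything from the paper:

```python
import numpy as np

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson r between
    activity patterns (rows = conditions, columns = voxels/features)."""
    return 1.0 - np.corrcoef(patterns)

# Hypothetical example: 4 gesture conditions, 10 simulated voxels.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(4, 10))
d = rdm(patterns)
```

Two regions whose RDMs correlate strongly would be said to share similar information content, even when the regions are not anatomically adjacent, which is the kind of comparison the abstract describes.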


Subjects
Brain Mapping, Gestures, Magnetic Resonance Imaging, Humans, Male, Female, Biomechanical Phenomena/physiology, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Photic Stimulation/methods, Sensitivity and Specificity
9.
Proc Biol Sci ; 291(2016): 20232345, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38351806

ABSTRACT

Joking draws on complex cognitive abilities: understanding social norms, theory of mind, anticipating others' responses and appreciating the violation of others' expectations. Playful teasing, which is present in preverbal infants, shares many of these cognitive features. There is some evidence that great apes can tease in structurally similar ways, but no systematic study exists. We developed a coding system to identify playful teasing and applied it to video of zoo-housed great apes. All four species engaged in intentionally provocative behaviour, frequently accompanied by characteristics of play. We found playful teasing to be characterized by attention-getting, one-sidedness, response looking, repetition and elaboration/escalation. It takes place mainly in relaxed contexts, has a wide variety of forms, and differs from play in several ways (e.g. asymmetry, low rates of play signals like the playface and absence of movement-final 'holds' characteristic of intentional gestures). As playful teasing is present in all extant great ape genera, it is likely that the cognitive prerequisites for joking evolved in the hominoid lineage at least 13 million years ago.


Subjects
Hominidae, Humans, Infant, Animals, Cognition, Gestures, Attention
10.
Proc Biol Sci ; 291(2020): 20240250, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38565151

ABSTRACT

Communication needs to be complex enough to be functional while minimizing learning and production costs. Recent work suggests that the vocalizations and gestures of some songbirds, cetaceans and great apes may conform to linguistic laws that reflect this trade-off between efficiency and complexity. In studies of non-human communication, though, clustering signals into types cannot be done a priori, and decisions about the appropriate grain of analysis may affect statistical signals in the data. The aim of this study was to assess the evidence for language-like efficiency and structure in house finch (Haemorhous mexicanus) song across three levels of granularity in syllable clustering. The results show strong evidence for Zipf's rank-frequency law, Zipf's law of abbreviation and Menzerath's law. Additional analyses show that house finch songs have small-world structure, thought to reflect systematic structure in syntax, and the mutual information decay of sequences is consistent with a combination of Markovian and hierarchical processes. These statistical patterns are robust across three levels of granularity in syllable clustering, pointing to a limited form of scale invariance. In sum, it appears that house finch song has been shaped by pressure for efficiency, possibly to offset the costs of female preferences for complexity.
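Zipf's rank-frequency law, mentioned above, predicts that a type's frequency falls off roughly as a power of its rank, so log(frequency) vs. log(rank) is approximately linear with slope near -1. A minimal sketch of that common check, applied here to a hypothetical syllable sequence rather than the paper's data:

```python
import math
from collections import Counter

def zipf_slope(tokens):
    """Least-squares slope of log(frequency) vs. log(rank); a slope
    near -1 is consistent with Zipf's rank-frequency law."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical "syllable" sequence whose type frequencies halve by rank.
syllables = ["a"] * 8 + ["b"] * 4 + ["c"] * 2 + ["d"]
slope = zipf_slope(syllables)
```

As the abstract notes, the substantive difficulty in birdsong is not this fit but the upstream clustering of signals into types, since the slope is computed over whatever type inventory the clustering yields.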


Subjects
Finches, Animals, Female, Language, Linguistics, Learning, Gestures, Cetacea, Vocalization, Animal
11.
Anim Cogn ; 27(1): 18, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429467

ABSTRACT

Gestures play a central role in the communication systems of several animal families, including primates. In this study, we provide a first assessment of the gestural systems of a Platyrrhine species, Geoffroy's spider monkeys (Ateles geoffroyi). We observed a wild group of 52 spider monkeys and assessed the distribution of visual and tactile gestures in the group, the size of individual repertoires and the intentionality and effectiveness of individuals' gestural production. Our results showed that younger spider monkeys were more likely than older ones to use tactile gestures. In contrast, we found no inter-individual differences in the probability of producing visual gestures. Repertoire size did not vary with age, but the probability of accounting for recipients' attentional state was higher for older monkeys than for younger ones, especially for gestures in the visual modality. Using vocalizations right before the gesture increased the probability of gesturing towards attentive recipients and of receiving a response, although age had no effect on the probability of gestures being responded to. Overall, our study provides the first evidence of gestural production in a Platyrrhine species, and confirms this taxon as a valid candidate for research on animal communication.


Subjects
Ateles geoffroyi, Atelinae, Humans, Animals, Gestures, Animal Communication, Individuality
12.
Exp Brain Res ; 242(8): 1831-1840, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38842756

ABSTRACT

Recent studies on the imitation of intransitive gestures suggest that the body part effect relies mainly upon the direct route of the dual-route model through a visuo-transformation mechanism. Here, we test the visuo-constructive hypothesis, which posits that visual complexity may directly potentiate the body part effect for meaningless gestures. We predicted that the difference between imitation of hand and finger gestures would increase with the visuo-spatial complexity of gestures. Second, we aimed to identify some of the visuo-spatial predictors of meaningless finger imitation skills. Thirty-eight participants underwent an imitation task containing three distinct sets of gestures: meaningful gestures, meaningless gestures with low visual complexity, and meaningless gestures with higher visual complexity than the first set of meaningless gestures. Our results were in general agreement with the visuo-constructive hypothesis, showing an increase in the difference between hand and finger gestures, but only for meaningless gestures with higher visuo-spatial complexity. Regression analyses confirm that imitation accuracy decreases with resource-demanding visuo-spatial factors. Taken together, our results suggest that the body part effect is highly dependent on the visuo-spatial characteristics of the gestures.


Subjects
Gestures, Imitative Behavior, Space Perception, Humans, Male, Female, Imitative Behavior/physiology, Young Adult, Adult, Space Perception/physiology, Psychomotor Performance/physiology, Hand/physiology, Visual Perception/physiology
13.
Dev Sci ; 27(3): e13457, 2024 May.
Article in English | MEDLINE | ID: mdl-37941084

ABSTRACT

Acquisition of visual attention-following skills, notably gaze- and point-following, contributes to infants' ability to share attention with caregivers, which in turn contributes to social learning and communication. However, the development of gaze- and point-following in the first 18 months remains controversial, in part because of different testing protocols and standards. To address this, we longitudinally tested N = 43 low-risk, North American middle-class infants' tendency to follow gaze direction, pointing gestures, and gaze-and-point combinations. Infants were tested monthly from 4 to 12 months of age. To control motivational differences, infants were taught to expect contingent reward videos in the target locations. No-cue trials were included to estimate spontaneous target fixation rates. A comparison sample (N = 23) was tested at 9 and 12 months to estimate practice effects. Results showed gradual increases in both gaze- and point-following starting around 7 months, and modest month-to-month individual stability from 8 to 12 months. However, attention-following did not exceed chance levels until after 6 months. Infants rarely followed cues to locations behind them, even at 12 months. Infants followed combined gaze-and-point cues more than gaze alone, and followed points at intermediate levels (not reliably different from the other cues). The comparison group's results showed that practice effects did not explain the age-related increase in attention-following. The results corroborate and extend previous findings that North American middle-class infants' attention-following in controlled laboratory settings increases slowly and incrementally between 6 and 12 months of age. RESEARCH HIGHLIGHTS: A longitudinal experimental study documented the emergence and developmental trajectories of North American middle-class infants' visual attention-following skills, including gaze-following, point-following, and gaze-and-point-following. 
A new paradigm controlled for factors including motivation, attentiveness, and visual-search baserates. Motor development was ruled out as a predictor or limiter of the emergence of attention-following. Infants did not follow attention reliably until after 6 months, and following increased slowly from 7 to 12 months. Infants' individual trajectories showed modest month-to-month stability from 8 to 12 months of age.


Subjects
Cues, Gestures, Infant, Humans, Longitudinal Studies, Ocular Fixation
14.
Dev Sci ; 27(5): e13515, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38618899

ABSTRACT

Everyday caregiver-infant interactions are dynamic and multidimensional. However, existing research underestimates the dimensionality of infants' experiences, often focusing on one or two communicative signals (e.g., speech alone, or speech and gesture together). Here, we introduce "infant-directed communication" (IDC): the suite of communicative signals from caregivers to infants including speech, action, gesture, emotion, and touch. We recorded 10 min of at-home play between 44 caregivers and their 18- to 24-month-old infants from predominantly white, middle-class, English-speaking families in the United States. Interactions were coded for five dimensions of IDC as well as infants' gestures and vocalizations. Most caregivers used all five dimensions of IDC throughout the interaction, and these dimensions frequently overlapped. For example, over 60% of the speech that infants heard was accompanied by one or more non-verbal communicative cues. However, we saw marked variation across caregivers in their use of IDC, likely reflecting tailored communication to the behaviors and abilities of their infant. Moreover, caregivers systematically increased the dimensionality of IDC, using more overlapping cues in response to infant gestures and vocalizations, and more IDC with infants who had smaller vocabularies. Understanding how and when caregivers use all five signals, together and separately, in interactions with infants has the potential to redefine how developmental scientists conceive of infants' communicative environments, and enhance our understanding of the relations between caregiver input and early learning. RESEARCH HIGHLIGHTS: Infants' everyday interactions with caregivers are dynamic and multimodal, but existing research has underestimated the multidimensionality (i.e., the diversity of simultaneously occurring communicative cues) inherent in infant-directed communication.
Over 60% of the speech that infants encounter during at-home, free play interactions overlaps with one or more of a variety of non-speech communicative cues. The multidimensionality of caregivers' communicative cues increases in response to infants' gestures and vocalizations, providing new information about how infants' own behaviors shape their input. These findings emphasize the importance of understanding how caregivers use a diverse set of communicative behaviors, both separately and together, during everyday interactions with infants.


Subjects
Caregivers, Communication, Gestures, Infant Behavior, Humans, Infant, Caregivers/psychology, Female, Male, Infant Behavior/physiology, Speech, Adult, Nonverbal Communication, Child Development/physiology, Preschool Child, Cues
15.
Dev Sci ; 27(5): e13507, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38629500

ABSTRACT

Blind adults display language-specificity in their packaging and ordering of events in speech. These differences affect the representation of events in co-speech gesture (gesturing with speech) but not in silent gesture (gesturing without speech). Here we examine when in development blind children begin to show adult-like patterns in co-speech and silent gesture. We studied speech and gestures produced by 30 blind and 30 sighted children learning Turkish, equally divided into 3 age groups: 5-6, 7-8, 9-10 years. The children were asked to describe three-dimensional spatial event scenes (e.g., running out of a house) first with speech, and then without speech using only their hands. We focused on physical motion events, which, in blind adults, elicit cross-linguistic differences in speech and co-speech gesture, but cross-linguistic similarities in silent gesture. Our results showed an effect of language on gesture when it was accompanied by speech (co-speech gesture), but not when it was used without speech (silent gesture) across both blind and sighted learners. The language-specific co-speech gesture pattern for both packaging and ordering semantic elements was present at the earliest ages we tested the blind and sighted children. The silent gesture pattern appeared later for blind children than sighted children for both packaging and ordering. Our findings highlight gesture as a robust and integral aspect of the language acquisition process at the early ages and provide insight into when language does and does not have an effect on gesture, even in blind children who lack visual access to gesture. RESEARCH HIGHLIGHTS: Gestures, when produced with speech (i.e., co-speech gesture), follow language-specific patterns in event representation in both blind and sighted children. Gestures, when produced without speech (i.e., silent gesture), do not follow the language-specific patterns in event representation in both blind and sighted children.
Language-specific patterns in speech and co-speech gestures are observable at the same time in blind and sighted children. The cross-linguistic similarities in silent gestures begin slightly later in blind children than in sighted children.


Subjects
Blindness, Gestures, Language Development, Speech, Humans, Child, Male, Female, Preschool Child, Speech/physiology, Blindness/physiopathology, Ocular Vision/physiology, Language
16.
Methods ; 218: 39-47, 2023 10.
Article in English | MEDLINE | ID: mdl-37479003

ABSTRACT

CONTEXT: Surface electromyography (sEMG) signals contain rich information recorded from muscle movements and therefore reflect the user's intention. sEMG has found wide application in rehabilitation, clinical diagnosis, and human engineering. However, current feature extraction methods for sEMG signals are seriously limited by the signals' stochasticity, transiency, and non-stationarity. OBJECTIVE: Our objective is to overcome the difficulties induced by these downsides of sEMG and thereby extract representative features for various downstream movement-recognition tasks. METHOD: We propose a novel 3-axis view of sEMG features composed of temporal, spatial, and channel-wise summaries. We leverage the state-of-the-art Transformer architecture to enable efficient parallel search and to overcome limitations imposed by previous work on gesture classification. The Transformer model is built on an attention-based module, which allows the extraction of global contextual relevance among channels and the use of this relevance for sEMG recognition. RESULTS: We compared the proposed method against existing methods on two Ninapro datasets consisting of data from both healthy people and amputees. Experimental results show that the proposed method attains state-of-the-art (SOTA) accuracy on both datasets. We further show that the proposed method enjoys strong generalization ability: a new SOTA is achieved by pretraining the model on a different dataset and then fine-tuning it on the target dataset.
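The attention-based module referenced here is, in the standard Transformer formulation, scaled dot-product attention; a minimal NumPy sketch of that general mechanism (the channel count and embedding size below are hypothetical, not the paper's):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V: each row of the
    output is a relevance-weighted mixture of the value rows, which is
    how a Transformer relates every sEMG channel to every other."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

# Hypothetical input: 8 sEMG channels, each with a 16-dim embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
out, w = scaled_dot_product_attention(x, x, x)  # self-attention across channels
```

Each row of `w` is a probability distribution over channels, which is one way to read off the "global contextual relevance among channels" the abstract describes.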


Subjects
Algorithms, Gestures, Humans, Electromyography/methods
17.
Macromol Rapid Commun ; 45(15): e2400109, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38594026

ABSTRACT

This work reports a high-strain flexible fiber sensor with a core-shell structure that utilizes a unique swelling-diffusion technique to infiltrate carbon nanotubes (CNTs) into the surface layer of Ecoflex fibers. Compared with traditional blended Ecoflex/CNT fibers, this manufacturing process ensures that the sensor retains the mechanical properties (923% strain) of the Ecoflex fiber while also improving sensitivity (gauge factor up to 3716). By adjusting the penetration time during fabrication, the sensor can be customized for different uses. As an application demonstration, the fiber sensor is integrated into a glove to create a wearable gesture-language recognition system with high sensitivity and precision. Additionally, the authors successfully monitor the pressure distribution on the curved surface of a soccer ball by winding the fiber sensor along the ball's surface.
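The gauge factor quoted above is conventionally defined as the relative resistance change per unit strain, GF = (ΔR/R0)/ε; a minimal sketch with hypothetical numbers (not measurements from the paper):

```python
def gauge_factor(r0: float, r: float, strain: float) -> float:
    """GF = (ΔR / R0) / ε: relative resistance change per unit strain.
    strain is dimensionless, e.g. 2.0 for a 200% elongation."""
    return ((r - r0) / r0) / strain

# Hypothetical values: resistance rises from 1.0 kΩ to 10.0 kΩ at 200% strain.
gf = gauge_factor(1.0, 10.0, 2.0)  # → 4.5
```

A GF in the thousands, as reported here, means a tiny strain produces a large, easily measured resistance change, which is what makes the fiber usable for fine gesture sensing.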


Subjects
Gestures, Carbon Nanotubes, Pressure, Surface Properties, Wearable Electronic Devices, Carbon Nanotubes/chemistry, Humans
18.
Cereb Cortex ; 33(14): 8942-8955, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37183188

ABSTRACT

Advancements in deep learning algorithms over the past decade have led to extensive developments in brain-computer interfaces (BCI). A promising imaging modality for BCI is magnetoencephalography (MEG), which is a non-invasive functional imaging technique. The present study developed a MEG sensor-based BCI neural network to decode Rock-Paper-Scissors gestures (MEG-RPSnet). Unique preprocessing pipelines in tandem with convolutional neural network deep-learning models accurately classified gestures. On a single-trial basis, we found an average of 85.56% classification accuracy in 12 subjects. Our MEG-RPSnet model outperformed two state-of-the-art neural network architectures for electroencephalogram-based BCI as well as a traditional machine learning method, and demonstrated equivalent and/or better performance than machine learning methods that have employed invasive, electrocorticography-based BCI using the same task. In addition, MEG-RPSnet classification performance using an intra-subject approach outperformed a model that used a cross-subject approach. Remarkably, we also found that when using only central-parietal-occipital regional sensors or occipitotemporal regional sensors, the deep learning model achieved classification performances that were similar to the whole-brain sensor model. The MEG-RPSnet model also distinguished neuronal features of individual hand gestures with very good accuracy. Altogether, these results show that noninvasive MEG-based BCI applications hold promise for future BCI developments in hand-gesture decoding.


Subjects
Brain-Computer Interfaces, Deep Learning, Humans, Magnetoencephalography, Gestures, Electroencephalography/methods, Algorithms
19.
J Exp Child Psychol ; 246: 105989, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38889478

ABSTRACT

When solving mathematical problems, young children will perform better when they can use gestures that match mental representations. However, despite their increasing prevalence in educational settings, few studies have explored this effect in touchscreen-based interactions. Thus, we investigated the impact on young children's performance of dragging (where a continuous gesture is performed that is congruent with the change in number) and tapping (involving a discrete gesture that is incongruent) on a touchscreen device when engaged in a continuous number line estimation task. By examining differences in the set size and position of the number line estimation, we were also able to explore the boundary conditions for the superiority effect of congruent gestures. We used a 2 (Gesture Type: drag or tap) × 2 (Set Size: Set 0-10 or Set 0-20) × 2 (Position: left of midpoint or right of midpoint) mixed design. A total of 70 children aged 5 and 6 years (33 girls) were recruited and randomly assigned to either the Drag or Tap group. We found that the congruent gesture (drag) generally facilitated better performance with the touchscreen but with boundary conditions. When completing difficult estimations (right side in the large set size), the Drag group was more accurate, responded to the stimulus faster, and spent more time manipulating than the Tap group. These findings suggest that when children require explicit scaffolding, congruent touchscreen gestures help to release mental resources for strategic adjustments, decrease the difficulty of numerical estimation, and support constructing mental representations.
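Accuracy in number line estimation tasks is conventionally scored as percent absolute error (PAE): the distance between the child's placement and the target, as a percentage of the line's length. The abstract does not name its accuracy measure, so this is a hedged illustration of the conventional metric, with hypothetical trial values:

```python
def percent_absolute_error(estimate: float, target: float, scale: float) -> float:
    """PAE = |estimate - target| * 100 / number-line length.

    Lower values mean more accurate placements; the metric is comparable
    across set sizes (e.g. 0-10 vs. 0-20 lines) because it is scale-relative.
    """
    return abs(estimate - target) * 100.0 / scale

# Hypothetical trial on the 0-20 line: the child places 13 where 15 belongs.
print(percent_absolute_error(13.0, 15.0, 20.0))  # -> 10.0
```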


Subjects
Gestures, Humans, Female, Male, Preschool Child, Child, Problem Solving, Psychomotor Performance
20.
Psychol Res ; 88(2): 535-546, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37857913

ABSTRACT

Existing research is inconsistent regarding the effects of gesture production on narrative recall. Most studies have examined the effects of gesture production during a recall phase, not during encoding, and findings regarding gesture's effects are mixed. The present study examined whether producing gestures at encoding can benefit an individual's narrative recall and whether this effect is moderated by verbal memory and spatial ability. This study also investigated which types of gesture are most beneficial to recalling details of a narrative. Participants read a narrative aloud while producing their own gestures at pre-specified phrases in the narrative (Instructed Gesture condition), while placing both hands behind their backs (No Gesture condition), or with no specific instructions regarding gesture (Spontaneous Gesture condition). Participants completed measures of spatial ability and verbal memory. Recall was measured through both free recall and specific recall questions related to particular phrases in the narrative. Spontaneous gesture production at encoding benefited free recall, while instructed gestures provided the greatest benefit for recall of specific phrases where gesture had been prompted during encoding. Conversely, for recall of specific phrases where gesture had not been prompted during encoding, instructions to either gesture or not gesture suppressed recall for those higher in verbal memory. Finally, producing iconic and deictic gestures benefited narrative recall, whilst beat gestures had no effect. Gestures play an important role in how we encode and subsequently recall information, providing an opportunity to support cognitive capacity.


Subjects
Gestures, Spatial Navigation, Humans, Mental Recall, Memory, Hands