Results 1-20 of 2,515
1.
PLoS One; 15(6): e0234695, 2020.
Article in English | MEDLINE | ID: mdl-32559213

ABSTRACT

When looking at a speaking person, the analysis of facial kinematics contributes to language discrimination and to the decoding of the time flow of visual speech. To disentangle these two factors, we investigated behavioural and fMRI responses to familiar and unfamiliar languages when observing speech gestures with natural or reversed kinematics. Twenty Italian volunteers viewed silent video-clips of speech shown as recorded (Forward, biological motion) or reversed in time (Backward, non-biological motion), in Italian (familiar language) or Arabic (non-familiar language). fMRI revealed that language (Italian/Arabic) and time-rendering (Forward/Backward) modulated distinct areas in the ventral occipito-temporal cortex, suggesting that visual speech analysis begins in this region, earlier than previously thought. Left premotor ventral (superior subdivision) and dorsal areas were preferentially activated with the familiar language independently of time-rendering, challenging the view that the role of these regions in speech processing is purely articulatory. The left premotor ventral region in the frontal operculum, thought to include part of Broca's area, responded to the natural familiar language, consistent with the hypothesis of motor simulation of speech gestures.


Subjects
Broca Area/physiology, Gestures, Language, Motor Cortex/physiology, Occipital Lobe/physiology, Speech/physiology, Temporal Lobe/physiology, Adult, Behavior, Discrimination, Psychological, Female, Humans, Linear Models, Magnetic Resonance Imaging, Male, Task Performance and Analysis, Young Adult
2.
PLoS One; 15(6): e0233892, 2020.
Article in English | MEDLINE | ID: mdl-32484842

ABSTRACT

The development of large-scale corpora has led to a quantum leap in our understanding of speech in recent years. By contrast, the analysis of massive datasets has so far had a limited impact on the study of gesture and other visual communicative behaviors. We utilized the UCLA-Red Hen Lab multi-billion-word repository of video recordings, all of them showing communicative behavior that was not elicited in a lab, to quantify speech-gesture co-occurrence frequency for a subset of linguistic expressions in American English. First, we objectively establish a systematic relationship between gesture and speech, reflected in their high degree of co-occurrence across our subset of expressions, which consists of temporal phrases. Second, we show that there is a systematic alignment between the informativity of co-speech gestures and that of the verbal expressions with which they co-occur. By exposing deep, systematic relations between the modalities of gesture and speech, our results pave the way for the data-driven integration of multimodal behavior into our understanding of human communication.


Subjects
Language, Linguistics, Speech Perception/physiology, Speech/physiology, Communication, Facial Expression, Gestures, Humans, Semantics, Signal Processing, Computer-Assisted, Video Recording
3.
Stud Health Technol Inform; 270: 756-760, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32570484

ABSTRACT

This paper presents a convolutional neural network-based classification of the hand flexion and extension gestures used in wrist recovery after injury. The hand gesture recognition device used in our study is the Leap Motion controller. The Leap Motion device's inability to accurately differentiate the left hand from the right hand when performing hand rotation gestures was eliminated by introducing hand and thumb direction vectors into the database used to train the neural network. A 3D environment was created for entering the data describing the gestures into the database. A classification accuracy of 95% was achieved for the hand flexion and extension gestures, divided into three levels for each hand. The populated database may also be used to classify other gestures involving hand rotation.
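The approach described in this abstract can be illustrated with a short, hedged sketch: a small 1D convolutional network over windows of Leap Motion frames. The feature layout (palm position plus hand and thumb direction vectors), the window length, and the six-class output (three flexion/extension levels per hand) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a 1D-CNN classifier for
# Leap Motion gesture frames. Assumed layout: each sample is a window of
# 30 frames, each frame holding palm position (3), hand direction (3)
# and thumb direction (3) vectors = 9 channels.
import torch
import torch.nn as nn

NUM_CLASSES = 6  # assumed: 3 flexion/extension levels x 2 hands

class GestureCNN(nn.Module):
    def __init__(self, in_channels=9, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):          # x: (batch, 9, 30)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)  # logits: (batch, num_classes)

model = GestureCNN()
dummy = torch.randn(4, 9, 30)      # 4 windows of 30 frames
print(model(dummy).shape)          # torch.Size([4, 6])
```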


Subjects
Gestures, Neural Networks, Computer, Databases, Factual, Hand, Humans, Motion, Wrist
4.
Anim Cogn; 23(5): 833-841, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32451634

ABSTRACT

In this review, we analyzed the studies on the "mismatch paradigm" or "contrasting paradigm", in which the word indicates an intent that is opposite to the gesture, in dogs and children. The studies on children highlighted the importance of the type of gestural message: when delivered in a non-ostensive manner, gestures assume less value than the verbal indication, whereas giving more emphasis to the gestures produces the opposite result. Word-trained dogs appear to rely more on words, but in the absence of such specific training, dogs rely more on gestures in both transitive and intransitive actions. Moreover, gestural communication appears easier to generalize, since dogs respond equally well to the gestural messages of familiar persons and strangers, whereas their performance declines when a stranger provides a vocal message. Visual signals trigger faster responses than auditory signals, whereas verbal indications can at most equal the gestural latencies, but never surpass them. Female dogs appeared to be more proficient in the interpretation of gestural commands, while males performed better with verbal commands. Based on a PRISMA analysis of the Web of Science database, three papers on children and four on dogs were retrieved. Our analyses revealed that gestures are more reliable reference points than words for dogs and children. Future studies should focus on choices related to objects of different value to the subjects. Moreover, the choices of dogs should be compared using known and unknown objects, which might help clarify how familiarity with the objects could influence their responses.


Subjects
Gestures, Intention, Acoustics, Animals, Dogs, Female, Male
5.
PLoS One; 15(4): e0232128, 2020.
Article in English | MEDLINE | ID: mdl-32324834

ABSTRACT

The social interactions that we experience from early infancy often involve actions that are not strictly instrumental but engage the recipient by eliciting a (complementary) response. Interactive gestures may have privileged access to our perceptual and motor systems either because of their intrinsically engaging nature or as a result of extensive social learning. We compared these two hypotheses in a series of behavioral experiments by presenting individuals with interactive gestures that call for motor responses to complement the interaction ('hand shaking', 'requesting', 'high-five') and with communicative gestures that are equally socially relevant and salient but do not strictly require a response from the recipient ('Ok', 'Thumbs up', 'Peace'). By means of a spatial compatibility task, we measured the interfering power of these task-irrelevant stimuli on the behavioral responses of individuals asked to respond to a target. Across three experiments, our results showed that the interactive gestures impact response selection and reduce spatial compatibility effects compared to the communicative (non-interactive) gestures. Importantly, this effect was independent of the activation of specific social scripts that may interfere with response selection. Overall, our results show that interactive gestures have privileged access to our perceptual and motor systems, possibly because they entail an automatic preparation to respond that involuntarily engages the motor system of observers. We discuss the implications from a developmental and neurophysiological point of view.


Subjects
Gestures, Interpersonal Relations, Psychomotor Performance/physiology, Adult, Female, Functional Laterality, Humans, Male, Motion Perception, Space Perception/physiology, Visual Perception/physiology, Young Adult
6.
PLoS One; 15(3): e0229486, 2020.
Article in English | MEDLINE | ID: mdl-32150573

ABSTRACT

When questioning the veracity of an utterance, we perceive certain non-linguistic behaviours to indicate that a speaker is being deceptive. Recent work has highlighted that listeners' associations between speech disfluency and dishonesty are detectable at the earliest stages of reference comprehension, suggesting that the manner of spoken delivery influences pragmatic judgements concurrently with the processing of lexical information. Here, we investigate the integration of a speaker's gestures into judgements of deception, and ask if and when associations between nonverbal cues and deception emerge. Participants saw and heard a video of a potentially dishonest speaker describe treasure hidden behind an object, while also viewing images of both the named object and a distractor object. Their task was to click on the object behind which they believed the treasure to actually be hidden. Eye and mouse movements were recorded. Experiment 1 investigated listeners' associations between visual cues and deception, using a variety of static and dynamic cues. Experiment 2 focused on adaptor gestures. We show that a speaker's nonverbal behaviour can have a rapid and direct influence on listeners' pragmatic judgements, supporting the idea that communication is fundamentally multimodal.


Subjects
Auditory Perception/physiology, Cues, Deception, Eye Movements/physiology, Gestures, Speech Perception/physiology, Visual Perception/physiology, Attention, Comprehension, Humans
7.
PLoS One; 15(2): e0228869, 2020.
Article in English | MEDLINE | ID: mdl-32074124

ABSTRACT

Human activity recognition is an important and difficult topic to study because of the substantial variability both between tasks repeated several times by a subject and between subjects. This work is motivated by the need for robust time-series signal classification together with sound validation and test approaches. This study proposes to classify 60 signs from American Sign Language based on data provided by the Leap Motion sensor, using different conventional machine learning and deep learning models, including a model called DeepConvLSTM that integrates convolutional and recurrent layers with Long Short-Term Memory cells. A kinematic model of the right and left forearm/hand/fingers/thumb is proposed, as well as the use of a simple data augmentation technique to improve the generalization of neural networks. DeepConvLSTM and the convolutional neural network demonstrated the highest accuracies, 91.1% (3.8) and 89.3% (4.0) respectively, outperforming the recurrent neural network and the multi-layer perceptron. Integrating convolutional layers in a deep learning model seems to be an appropriate solution for sign language recognition with depth sensor data.
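For readers unfamiliar with the DeepConvLSTM architecture named in this abstract (commonly attributed to Ordóñez and Roggen, 2016), the following is a minimal PyTorch sketch: convolutional layers extract local temporal features, which are then fed to stacked LSTM layers. Channel count and layer sizes are assumptions; only the 60-sign output follows the abstract.

```python
# Minimal sketch of a DeepConvLSTM-style model: temporal convolutions
# followed by LSTM layers. Input channels (kinematic features per frame)
# and hidden sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    def __init__(self, n_channels=30, n_classes=60, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, n_channels, time)
        h = self.conv(x)               # (batch, 64, time)
        h = h.permute(0, 2, 1)         # (batch, time, 64) for the LSTM
        h, _ = self.lstm(h)
        return self.out(h[:, -1])      # classify from the last time step

model = DeepConvLSTM()
print(model(torch.randn(2, 30, 100)).shape)  # torch.Size([2, 60])
```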


Subjects
Neural Networks, Computer, Sign Language, Algorithms, Biomechanical Phenomena, Deep Learning, Gestures, Hand, Humans, Machine Learning, Male, Movement
8.
Brain Cogn; 139: 105516, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31935628

ABSTRACT

The production of pantomime is a sensitive task for detecting praxis deficits. It is usually assessed by presenting objects visually or by verbal command. Verbal instructions are given either by providing the name of the object (e.g., "Show me how to use a pen") or by requiring the object's function (e.g., "Show me how to write"). These modes of testing are used interchangeably. The aim of this study was to investigate whether the different instructions generate different performances. Fifty-one healthy participants (17-89 years old) were assessed on three pantomime production tasks differing in the instruction given: two with verbal instructions (Pantomime by Name and Pantomime by Function) and one with the object visually presented (Pantomime by Object). Results showed that Pantomime by Function produced the poorest performance and the highest frequency of Body Part as Tool (BPT) errors, suggesting that the way instructions are given may determine performance in a task. Nuances in test instructions could thus produce misleading outcomes.


Subjects
Apraxias/physiopathology, Gestures, Language, Adolescent, Adult, Aged, Aged, 80 and over, Female, Healthy Volunteers, Humans, Male, Memory, Short-Term, Middle Aged, Semantics, Young Adult
9.
PLoS One; 15(1): e0227039, 2020.
Article in English | MEDLINE | ID: mdl-31929544

ABSTRACT

To facilitate hand gesture recognition, we investigated the use of acoustic signals with an accelerometer and gyroscope at the human wrist. As a proof-of-concept, the prototype consisted of 10 microphone units in contact with the skin placed around the wrist along with an inertial measurement unit (IMU). The gesture recognition performance was evaluated through the identification of 13 gestures used in daily life. The optimal area for acoustic sensor placement at the wrist was examined using the minimum redundancy and maximum relevance feature selection algorithm. We recruited 10 subjects to perform over 10 trials for each set of hand gestures. The accuracy was 75% for a general model with the top 25 features selected, and the intra-subject average classification accuracy was over 80% with the same features using one microphone unit at the mid-anterior wrist and an IMU. These results indicate that acoustic signatures from the human wrist can aid IMU sensing for hand gesture recognition, and the selection of a few common features for all subjects could help with building a general model. The proposed multimodal framework helps address the single IMU sensing bottleneck for hand gestures during arm movement and/or locomotion.
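The feature selection step named in this abstract, minimum redundancy maximum relevance (mRMR), can be sketched as a greedy search: at each step, pick the feature with the highest mutual information with the class labels minus its average mutual information with the features already selected. The sketch below uses scikit-learn's mutual-information estimators on simulated data; the shapes and feature counts are illustrative stand-ins, not the study's dataset.

```python
# Minimal sketch of greedy mRMR (minimum-redundancy maximum-relevance)
# feature selection. Simulated data; not the authors' implementation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y)        # MI(feature; class label)
    selected, remaining = [], list(range(n_features))
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in remaining:
            # redundancy: mean MI between candidate j and selected features
            red = np.mean([mutual_info_regression(
                X[:, [s]], X[:, j])[0] for s in selected]) if selected else 0.0
            score = relevance[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

X = np.random.randn(200, 12)                     # e.g. 12 acoustic/IMU features
y = np.random.randint(0, 13, size=200)           # 13 gesture classes
print(mrmr(X, y, k=5))                           # indices of 5 selected features
```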


Subjects
Acoustics, Gestures, Hand/physiology, Pattern Recognition, Physiological, Wearable Electronic Devices, Wrist Joint/physiology, Adult, Female, Humans, Male, Movement, Young Adult
10.
Brain Stimul; 13(2): 457-463, 2020.
Article in English | MEDLINE | ID: mdl-31911072

ABSTRACT

BACKGROUND: Imaging studies point to a posture- (finger vs. hand) and domain-specific neural basis of gestures. Furthermore, modulation of gestures by theta burst stimulation (TBS) may depend on interhemispheric disinhibition. OBJECTIVE/HYPOTHESIS: In this randomized sham-controlled study, we hypothesized that dual-site continuous TBS (cTBS) over the left inferior frontal gyrus (IFG-L) and right inferior parietal lobule (IPL-R) predominantly affects pantomime of finger postures. Furthermore, we predicted that dual cTBS improves imitation of hand gestures if the effect correlates with measures of callosal connectivity. METHODS: Forty-six healthy subjects participated in this study and received one train of TBS in different experimental sessions: baseline, sham, single-site IFG-L, dual IFG-L/IPL-R, and single-site IPL-R. Gestures were evaluated by blinded raters using the Test for Upper Limb Apraxia (TULIA) and the Postural Imitation Test (PIT). Callosal connectivity was analyzed by diffusion tensor imaging (DTI). RESULTS: Dual cTBS significantly improved the TULIA total score (F(3, 28) = 4.118, p = .009) but did not affect the TULIA pantomime score. The beneficial effect was driven by cTBS over IPL-R, which improved the TULIA imitation score (p = .038). Furthermore, the TULIA imitation score significantly correlated with the microstructure (fractional anisotropy) of the splenium (r = 0.420, p = .026), corrected for age and whole-brain volume. CONCLUSIONS: The study suggests that inhibition of IPL-R largely accounted for the improved gesturing, possibly through transcallosal facilitation of IPL-L. Therefore, the findings may be relevant for the treatment of apraxic stroke patients. Gesture pantomime and postural gestures escaped modulation by dual cTBS, suggesting a more widespread and/or variable neural representation.


Subjects
Diffusion Tensor Imaging, Functional Laterality, Gestures, Theta Rhythm, Adult, Brain Mapping, Female, Fingers/physiology, Humans, Male, Middle Aged, Parietal Lobe/physiology, Posture
11.
Acta Psychol (Amst); 203: 102988, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31935659

ABSTRACT

Our recent study within the embodiment perspective showed that the evaluation of true and false information activates the simulation of vertical and horizontal head movements involved in nodding and shaking of the head (Moretti & Greco, 2018). This result was found in an explicit evaluation task where motion detection software was deployed to enable participants to assess a series of objectively true or false statements by moving them with the head vertically and horizontally on a computer screen, under conditions of compatibility and incompatibility between simulated and performed action. This study replicated that experiment, but with subjective statements about liked and disliked food, in both explicit and implicit evaluation tasks. Two experiments, plus one control experiment, were devised to test the presence of a motor-affective compatibility effect (vertical-liked; horizontal-disliked) and whether the motor-semantic compatibility found with objective statements (vertical-true; horizontal-false) could be a sub-effect of a more general and automatic association (vertical-accepted; horizontal-refused). As expected, response times were shorter when statements about liked foods and disliked foods were moved vertically and horizontally respectively by making head movements, even when participants were not explicitly required to evaluate them. In contrast, the truth compatibility effect only occurred in the explicit evaluation task. Overall results support the idea that head-nodding and shaking are simulated approach-avoidance responses. Different aspects of the meaning of these gestures and the practical implications of the study for cognitive and social research are discussed.


Subjects
Avoidance Learning/physiology, Gestures, Head Movements/physiology, Photic Stimulation/methods, Adolescent, Adult, Emotions/physiology, Female, Humans, Male, Reaction Time/physiology, Young Adult
13.
Dev Sci; 23(2): e12894, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31408564

ABSTRACT

The study employed four gestural models using frame-by-frame microanalytic methods, and followed how the behaviours unfolded over time. Forty-two human newborns (0-3 days) were examined for their imitation of tongue protrusion, 'head tilt with looking up', three-finger and two-finger gestures. The results showed that all three gesture groups were imitated. Results of the temporal analyses revealed an early and a later, second stage of responses. Later responses were characterized by a suppression of similar, but non-matching movements. Perinatal imitation is not a phenomenon served by a single underlying mechanism; it has at least two different stages. An early phase is followed by voluntary matching behaviour by the neonatal infant.


Subjects
Gestures, Imitative Behavior/physiology, Age Factors, Female, Fingers/physiology, Humans, Infant, Newborn, Male, Movement/physiology
14.
Arch Gerontol Geriatr; 87: 103996, 2020.
Article in English | MEDLINE | ID: mdl-31855713

ABSTRACT

BACKGROUND: Gesture-based human-robot interaction (HRI) depends on the technical performance of the robot-integrated gesture recognition system (GRS) and on the gestural performance of the robot user, which has been shown to be rather low in older adults. Training of gestural commands (GCs) might improve the quality of older users' input for gesture-based HRI, which in turn may lead to an overall improved HRI. OBJECTIVE: To evaluate the effects of a user training on gesture-based HRI between an assistive bathing robot and potential elderly robot users. METHODS: Twenty-five older adults with bathing disability participated in this quasi-experimental, single-group, pre-/post-test study and underwent a specific user training (10-15 min) on GCs for HRI with the assistive bathing robot. Outcomes measured before and after training included participants' gestural performance assessed by a scoring method of an established test of gesture production (TULIA) and sensor-based gestural performance (SGP) scores derived from the GRS-recorded data, and robot's command recognition rate (CRR). RESULTS: Gestural performance (TULIA = +57.1 ± 56.2%, SGP scores = +41.1 ± 74.4%) and CRR (+31.9 ± 51.2%) significantly improved over training (p < .001). Improvements in gestural performance and CRR were highly associated with each other (r = 0.80-0.81, p < .001). Participants with lower initial gestural performance and higher gerontechnology anxiety benefited most from the training. CONCLUSIONS: Our study highlights that training in gesture-based HRI with an assistive bathing robot is highly beneficial for the quality of older users' GCs, leading to higher CRRs of the robot-integrated GRS, and thus to an overall improved HRI.


Subjects
Baths/methods, Computer User Training/methods, Gestures, Robotics/methods, Activities of Daily Living, Aged, Aged, 80 and over, Female, Humans, Male, Self-Help Devices
15.
J Autism Dev Disord; 50(4): 1147-1158, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31872323

ABSTRACT

Children with autism spectrum disorder (ASD) produce fewer deictic gestures, accompanied by delays/deviations in speech development, compared to typically-developing (TD) children. We ask whether children with ASD, like TD children, show a right-hand preference in gesturing, and whether right-handed gestures predict their vocabulary size in speech. Our analysis of handedness in gesturing in children with ASD (n = 23, mean age = 30 months) and with TD (n = 23, mean age = 18 months) during mother-child play showed a right-hand preference for TD children, but not for children with ASD. Nonetheless, right-handed deictic gestures predicted expressive vocabulary 1 year later in both children with ASD and with TD. Handedness for gesture, both hand preference and the amount of right-handed pointing, may be an important indicator of language development in autism and typical development.


Subjects
Autism Spectrum Disorder/physiopathology, Autism Spectrum Disorder/psychology, Functional Laterality/physiology, Gestures, Language Development, Mother-Child Relations/psychology, Child, Child Development/physiology, Child, Preschool, Female, Humans, Infant, Longitudinal Studies, Male, Predictive Value of Tests, Speech/physiology, Vocabulary
16.
Dev Sci; 23(1): e12843, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31045301

ABSTRACT

What aspects of infants' prelinguistic communication are most valuable for learning to speak, and why? We test whether early vocalizations and gestures drive the transition to word use because, in addition to indicating motoric readiness, they (a) are early instances of intentional communication and (b) elicit verbal responses from caregivers. In study 1, 11-month-olds (N = 134) were observed to coordinate vocalizations and gestures with gaze to their caregiver's face at above-chance rates, indicating that they are plausibly intentionally communicative. Study 2 tested whether those infant communicative acts that were gaze-coordinated best predicted later expressive vocabulary. We report a novel procedure for predicting vocabulary via multi-model inference over a comprehensive set of infant behaviours produced at 11 and 12 months (n = 58). This makes it possible to establish the relative predictive value of different behaviours that are hierarchically organized by level of granularity. Gaze-coordinated vocalizations were the most valuable predictors of expressive vocabulary size up to 24 months. Study 3 established that caregivers were more likely to respond to gaze-coordinated behaviours. Moreover, the dyadic combination of infant gaze-coordinated vocalization and caregiver response was by far the best predictor of later vocabulary size. We conclude that practice with prelinguistic intentional communication facilitates the leap to symbol use. Learning is optimized when caregivers respond to intentional vocalizations with appropriate language.
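Multi-model inference, as named in this abstract, typically means fitting a set of candidate models and weighting them by an information criterion. A minimal sketch under that interpretation follows; the data and predictor sets are simulated stand-ins, not the study's variables or procedure.

```python
# Minimal sketch of AIC-based multi-model inference over candidate
# predictor sets. Simulated data only; illustrative, not the study's method.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 58                                   # sample size matching the abstract
X = rng.normal(size=(n, 3))              # e.g. rates of 3 infant behaviours
y = 2.0 * X[:, 0] + rng.normal(size=n)   # simulated vocabulary outcome

candidate_sets = [[0], [1], [0, 1], [0, 1, 2]]   # hypothetical model set
aics = []
for cols in candidate_sets:
    model = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    aics.append(model.aic)

# Akaike weights: relative support for each candidate model
d = np.array(aics) - min(aics)
weights = np.exp(-d / 2) / np.exp(-d / 2).sum()
for cols, w in zip(candidate_sets, weights):
    print(f"predictors {cols}: weight = {w:.3f}")
```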


Subjects
Caregivers/psychology, Communication, Infant Behavior, Language Development, Female, Fixation, Ocular, Gestures, Humans, Infant, Language, Male, Vocabulary
17.
Tidsskr Nor Laegeforen; 139(18), 2019 Dec 10.
Article in Norwegian | MEDLINE | ID: mdl-31823574

Subjects
Hand, Language, Gestures, Humans
18.
Proc Natl Acad Sci U S A; 116(51): 26072-26077, 2019 Dec 17.
Article in English | MEDLINE | ID: mdl-31792169

ABSTRACT

How the world's 6,000+ natural languages have arisen is mostly unknown. Yet, new sign languages have emerged recently among deaf people brought together in a community, offering insights into the dynamics of language evolution. However, documenting the emergence of these languages has mostly consisted of studying the end product; the process by which ad hoc signs are transformed into a structured communication system has not been directly observed. Here we show how young children create new communication systems that exhibit core features of natural languages in less than 30 min. In a controlled setting, we blocked the possibility of using spoken language. In order to communicate novel messages, including abstract concepts, dyads of children spontaneously created novel gestural signs. With use, these signs became increasingly arbitrary and conventionalized. When confronted with the need to communicate more complex meanings, children began to grammatically structure their gestures. Together with previous work, these results suggest that children have the basic skills necessary not only to acquire a natural language but also to spontaneously create a new one. The speed with which children create these structured systems has profound implications for theorizing about language evolution, a process generally thought to span many generations, if not millennia.


Subjects
Communication, Language Development, Language, Child, Child, Preschool, Deafness, Gestures, Humans, Negotiating, Semantics, Sign Language
19.
Sensors (Basel); 19(23), 2019 Nov 30.
Article in English | MEDLINE | ID: mdl-31801226

ABSTRACT

Recent research on hand detection and gesture recognition has attracted increasing interest due to its broad range of potential applications, such as human-computer interaction, sign language recognition, hand action analysis, driver hand behavior monitoring, and virtual reality. In recent years, several approaches have been proposed with the aim of developing a robust algorithm which functions in complex and cluttered environments. Although several researchers have addressed this challenging problem, a robust system is still elusive. Therefore, we propose a deep learning-based architecture to jointly detect and classify hand gestures. In the proposed architecture, the whole image is passed through a one-stage dense object detector to extract hand regions, which, in turn, pass through a lightweight convolutional neural network (CNN) for hand gesture recognition. To evaluate our approach, we conducted extensive experiments on four publicly available datasets for hand detection, including the Oxford, 5-signers, EgoHands, and Indian classical dance (ICD) datasets, along with two hand gesture datasets with different gesture vocabularies for hand gesture recognition, namely, the LaRED and TinyHands datasets. Here, experimental results demonstrate that the proposed architecture is efficient and robust. In addition, it outperforms other approaches in both the hand detection and gesture classification tasks.
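The two-stage pipeline this abstract describes (a one-stage dense hand detector followed by a lightweight CNN classifying the gesture in each detected region) can be sketched as follows. Both models are hypothetical stand-ins, not the authors' networks: the detector is stubbed out, and the class count is an assumed placeholder.

```python
# Minimal sketch of a detect-then-classify hand gesture pipeline.
# Hypothetical models for illustration only.
import torch
import torch.nn as nn

class LightweightGestureCNN(nn.Module):
    def __init__(self, n_gestures=27):            # class count is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_gestures),
        )

    def forward(self, crop):                      # crop: (batch, 3, 64, 64)
        return self.net(crop)

def recognize(image, detector, classifier, crop_size=64):
    """Detect hand boxes, then classify the gesture inside each box."""
    boxes = detector(image)                       # list of (x1, y1, x2, y2)
    results = []
    for (x1, y1, x2, y2) in boxes:
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = nn.functional.interpolate(crop, size=(crop_size, crop_size))
        results.append(classifier(crop).argmax(dim=1).item())
    return results

# Toy usage with a dummy detector that always returns one fixed box.
image = torch.randn(3, 480, 640)
detector = lambda img: [(100, 100, 200, 200)]
print(recognize(image, detector, LightweightGestureCNN()))
```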


Subjects
Deep Learning, Gestures, Pattern Recognition, Automated/methods, Algorithms, Hand/physiology, Humans, Neural Networks, Computer
20.
Augment Altern Commun; 35(4): 285-298, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31884826

ABSTRACT

Some school-age children with multiple disabilities communicate predominantly through the display of prelinguistic behaviors such as gestures, vocalizations, facial expressions, and eye gaze. Increasing the frequency and complexity of these behaviors may be one approach to building communication and transitioning toward linguistic communication (i.e., symbolic language). The current preliminary study used a single-subject ABB'B'' design nested within a multiple-baseline-across-participants design with randomization to evaluate a multi-phase intervention aimed at increasing social gaze behaviors. The participants were 5 school-age children with multiple disabilities. Participants appeared to demonstrate increases in both the frequency and complexity of their social gaze behavior during the intervention, according to Improvement Rate Difference calculations, and these gains were largely maintained four months after the intervention ended. More research is needed, but the intervention shows promise as one aspect of AAC intervention for children who are prelinguistic communicators. Future research is critical to evaluating this or related interventions with a larger number of individuals and across a larger range of profiles and ages.


Subjects
Communication Aids for Disabled, Communication Disorders/rehabilitation, Fixation, Ocular, Social Behavior, Cerebral Palsy/rehabilitation, Child, Encephalomalacia/rehabilitation, Facial Expression, Female, Gestures, Humans, Language Development, Male, Mental Retardation, X-Linked/rehabilitation, Nonverbal Communication, alpha-Thalassemia/rehabilitation