ABSTRACT
This review explores the historical and current significance of gestures as a universal form of communication, with a focus on hand gestures in virtual reality applications. It highlights the evolution of gesture detection systems from the 1990s, which used computer algorithms to find patterns in static images, to the present day, where advances in sensor technology, artificial intelligence, and computing power have enabled real-time gesture recognition. The paper emphasizes the role of hand gestures in virtual reality (VR), a field that creates immersive digital experiences through the blending of 3D modeling, sound effects, and sensing technology. This review presents state-of-the-art hardware and software techniques used in hand gesture detection, primarily for VR applications. It discusses the challenges in hand gesture detection, classifies gestures as static or dynamic, and grades their detection difficulty. The paper also reviews the haptic devices used in VR, along with their advantages and challenges, and provides an overview of the hand gesture acquisition pipeline, from input and pre-processing to pose detection, for both static and dynamic gestures.
Subject(s)
Gestures , Hand , Virtual Reality , Humans , Hand/physiology , Algorithms , User-Computer Interface , Artificial Intelligence
ABSTRACT
In recent decades, technological advancements have transformed industry, highlighting the importance of automation and safety. The integration of augmented reality (AR) and gesture recognition has emerged as an innovative approach to creating interactive environments for industrial equipment. Gesture recognition enhances AR applications by allowing intuitive interactions. This study presents a web-based architecture for the integration of AR and gesture recognition, designed for interaction with industrial equipment. Emphasizing hardware-agnostic compatibility, the proposed structure offers intuitive interaction with equipment control systems through natural gestures. Experimental validation, conducted using Google Glass, demonstrated the practical viability and potential of this approach in industrial operations. The development focused on optimizing the system's software and implementing techniques such as normalization, clamping, conversion, and filtering to achieve accurate and reliable gesture recognition under different usage conditions. The proposed approach promotes safer and more efficient industrial operations, contributing to research in AR and gesture recognition. Future work will include improving gesture recognition accuracy, exploring alternative gestures, and expanding platform integration to improve the user experience.
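The abstract names normalization, clamping, and filtering as its conditioning steps but does not publish code. A minimal Python sketch of such a sensor-conditioning chain follows; all function names and parameter values are hypothetical, not the study's implementation:

```python
def normalize(values, lo, hi):
    """Scale raw sensor readings linearly into [0, 1] given the sensor range."""
    span = hi - lo
    return [(v - lo) / span for v in values]

def clamp(values, lo=0.0, hi=1.0):
    """Clip out-of-range readings so downstream stages see bounded inputs."""
    return [min(max(v, lo), hi) for v in values]

def ema_filter(values, alpha=0.3):
    """Exponential moving average to suppress hand-tracking jitter."""
    out, state = [], values[0]
    for v in values:
        state = alpha * v + (1 - alpha) * state
        out.append(state)
    return out

def condition(raw, lo, hi):
    """Conditioning chain: normalize, clamp, then smooth."""
    return ema_filter(clamp(normalize(raw, lo, hi)))
```

A lower `alpha` smooths more aggressively at the cost of added latency, which is the usual trade-off for gesture input.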
Subject(s)
Augmented Reality , Gestures , Humans , Industry , Software , Pattern Recognition, Automated/methods , User-Computer Interface
ABSTRACT
Children all over the world learn language, yet the contexts in which they do so vary substantially. This variation needs to be systematically quantified to build robust and generalizable theories of language acquisition. We compared communicative interactions between parents and their 2-year-old children (N = 99 families) during mealtime across five cultural settings (Brazil, Ecuador, Argentina, Germany, and Japan) and coded the amount of talk and gestures as well as their conversational embedding (interlocutors, function, and themes). We found a comparable pattern of communicative interactions across cultural settings, which were modified in ways that are consistent with local norms and values. These results suggest that children encounter similarly structured communicative environments across diverse cultural contexts and will inform theories of language learning. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
Subject(s)
Cross-Cultural Comparison , Parent-Child Relations , Humans , Child, Preschool , Female , Male , Parent-Child Relations/ethnology , Communication , Argentina , Ecuador , Language Development , Japan , Germany , Meals , Gestures , Adult , Parents
ABSTRACT
This study evaluated the recognition (imitation, identity, and identification) and naming of negative (anger and sadness) and positive (joy and surprise) emotional stimuli alongside the influence of the types of stimuli (social-female, social-male, family, and emoji) in children and young adults with autism and Down syndrome, via tasks applied by the family and mediated by technological resources, during the COVID-19 pandemic. Five children and two young adults with autism and one child and two young adults with Down syndrome participated. Identity, recognition, naming, and imitation tasks were planned and implemented using facial stimuli with evaluative (without differential consequence) and teaching (with differential consequence, tips, and learning criteria) functions, aiming at the emergence of emotional naming from the recognition teaching tasks. The baseline results showed that, for participants who had a shorter response time for the same gender, the response time difference was on average 57.28% lower. Regarding the emotional valence, 50% of the participants showed differences in the correct answers, depending on the positive and negative valence, and 66.66% showed differences in the response time depending on the emotional valence. After the teaching procedure, the participants showed a greater number of correct answers in the tasks, regardless of the stimulus type and emotional valence, creating an opportunity for generalizing learning of emotion recognition and naming, in addition to consolidating the feasibility of teaching strategies mediated by technological resources and applied by family members.(AU)
Subject(s)
Humans , Male , Female , Child, Preschool , Child , Adolescent , Adult , Young Adult , Autistic Disorder , Family , Down Syndrome , Expressed Emotion , Emotions , Anxiety , Parent-Child Relations , Parents , Perception , Perceptual Distortion , Personality , Play and Playthings , Prejudice , Psychiatry , Psychology , Psychology, Social , Attention , Audiovisual Aids , Signs and Symptoms , Social Desirability , Social Environment , Social Values , Socialization , Stereotyping , Task Performance and Analysis , Visual Perception , Women , Behavior , Body Image , Image Processing, Computer-Assisted , Symbolism , Activities of Daily Living , Artificial Intelligence , Adaptation, Psychological , Grief , Attitude , Cognitive Behavioral Therapy , Child , Child Rearing , Chromosomes , Clinical Trial , Mental Competency , Caregivers , Cognition , Signal Detection, Psychological , Communication , Conscience , Intuition , Observation , Stereotypic Movement Disorder , Chromosome Disorders , Personal Autonomy , Adult Children , Trust , Comprehension , Personnel Delegation , Data Compression , Education , Education of Intellectually Disabled , Education, Special , Ego , Empathy , Exploratory Behavior , Face , Facial Expression , Cultural Competency , Young Adult , Fear , Feedback , Emotional Intelligence , Social Stigma , Pandemics , Social Skills , Social Norms , Emotional Adjustment , Optimism , Metacognition , Facial Recognition , Autism Spectrum Disorder , Applied Behavior Analysis , Self-Management , Respect , Emotional Regulation , Generalization, Psychological , Genetics , Social Interaction , Identity Recognition , COVID-19 , Gestures , Cognitive Training , Family Support , Processing Speed , Handling, Psychological , Imagination , Interpersonal Relations , Language , Life Change Events , Memory, Short-Term , Men , Mental Disorders , Mental Processes , Intellectual Disability , Nervous System Diseases , Neurologic Manifestations , Neurology , Neuropsychological Tests , 
Nonverbal Communication
ABSTRACT
This work addresses the design and implementation of a novel PhotoBiological Filter Classifier (PhBFC) to improve the accuracy of a static sign language translation system. The captured images are preprocessed by a contrast enhancement algorithm inspired by the capacity of retinal photoreceptor cells from mammals, which are responsible for capturing light and transforming it into electric signals that the brain can interpret as images. This sign translation system supports effective communication not only between an agent and an operator but also between a community with hearing disabilities and other people. Additionally, this technology could be integrated into diverse devices and applications, further broadening its scope and extending its benefits to the community in general. The bioinspired photoreceptor model is evaluated under different conditions. To validate the advantages of applying photoreceptor cells, 100 tests were conducted per letter to be recognized, on three different models (V1, V2, and V3), obtaining an average accuracy of 91.1% on V3, compared to 63.4% on V1, and an average of 55.5 Frames Per Second (FPS) in each letter classification iteration for V1, V2, and V3, demonstrating that the use of photoreceptor cells does not affect the processing time while also improving accuracy. The system has great application potential: it can be employed, for example, in Deep Learning (DL) for pattern recognition or in agent decision-making trained by reinforcement learning.
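The abstract does not give the PhBFC's exact equations. A widely used model of photoreceptor-style saturating response is the Naka-Rushton function, R = I^n / (I^n + sigma^n), which compresses intensity much as retinal cells do; the sketch below illustrates that general idea, not the paper's specific filter, and the parameter values are assumptions:

```python
def naka_rushton(intensities, sigma=0.5, n=2.0):
    """Photoreceptor-like contrast response for intensities in [0, 1].

    sigma is the semi-saturation constant (input producing half the
    maximum response); n controls the steepness of the response curve.
    """
    return [i ** n / (i ** n + sigma ** n) for i in intensities]
```

Applied per pixel before classification, this boosts mid-range contrast while saturating highlights, which is one plausible reading of "bioinspired contrast enhancement".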
Subject(s)
Gestures , Sign Language , Humans , Animals , Neural Networks, Computer , Photoreceptor Cells , Algorithms , Mammals
ABSTRACT
Surgical Instrument Signaling (SIS) comprises specific hand gestures used for communication between the surgeon and the surgical instrumentator. With SIS, the surgeon executes signals representing determined instruments in order to avoid errors and communication failures. This work presents the feasibility of an SIS gesture recognition system using surface electromyographic (sEMG) signals acquired from the Myo armband, aiming to build a processing routine that aids telesurgery or robotic surgery applications. Unlike other works that use up to 10 gestures to represent and classify SIS gestures, a database with 14 selected gestures for SIS was recorded from 10 volunteers, with 30 repetitions per user. Segmentation, feature extraction, feature selection, and classification were performed, and several parameters were evaluated. These steps were designed with a wearable application in mind, for which the complexity of pattern recognition algorithms is crucial. The system was tested offline and evaluated both on the full database and for each volunteer individually. An automatic segmentation algorithm was applied to identify muscle activation; 13 feature sets and 6 classifiers were then tested. Moreover, 2 ensemble techniques aided in separating the sEMG signals into the 14 SIS gestures. An accuracy of 76% was obtained with the Support Vector Machine classifier on the full database, and 88% when analyzing the volunteers individually. The system was demonstrated to be suitable for SIS gesture recognition using sEMG signals in wearable applications.
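The 13 feature sets evaluated are not enumerated in the abstract; classic time-domain sEMG descriptors such as mean absolute value, waveform length, and zero crossings are the usual candidates in wearable pipelines. The sketch below shows those three generic features over one segmented window (function names are illustrative, not the paper's):

```python
def mav(x):
    """Mean Absolute Value: average rectified amplitude of the window."""
    return sum(abs(v) for v in x) / len(x)

def wl(x):
    """Waveform Length: cumulative amplitude change, a complexity measure."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def zc(x, thresh=0.01):
    """Zero Crossings whose jump exceeds a noise threshold."""
    return sum(1 for i in range(len(x) - 1)
               if x[i] * x[i + 1] < 0 and abs(x[i] - x[i + 1]) > thresh)

def feature_vector(window):
    """Per-window feature vector fed to a classifier such as an SVM."""
    return [mav(window), wl(window), zc(window)]
```

In a multi-channel setup, such a vector would be computed per channel and concatenated before classification.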
Subject(s)
Gestures , Pattern Recognition, Automated , Humans , Electromyography/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Algorithms , Surgical Instruments , Hand
ABSTRACT
Hand gesture recognition (HGR) based on electromyography signals (EMGs) and inertial measurement unit signals (IMUs) has been investigated for human-machine applications in recent years. The information obtained from HGR systems has the potential to help control machines such as video games, vehicles, and even robots. The key idea of an HGR system is therefore to identify the moment in which a hand gesture was performed, as well as its class. Several state-of-the-art human-machine approaches use supervised machine learning (ML) techniques for the HGR system. However, the use of reinforcement learning (RL) approaches to build HGR systems for human-machine interfaces is still an open problem. This work presents an RL approach to classify EMG-IMU signals obtained using a Myo Armband sensor. For this, we create an agent based on the Deep Q-learning algorithm (DQN) to learn a policy from online experiences to classify EMG-IMU signals. The proposed HGR system reaches accuracies of up to [Formula: see text] and [Formula: see text] for classification and recognition, respectively, with an average inference time per window observation of 20 ms, and we also demonstrate that our method outperforms other approaches in the literature. We then test the HGR system to control two different robotic platforms. The first is a three-degree-of-freedom (DOF) tandem helicopter test bench, and the second is a virtual six-degree-of-freedom (DOF) UR5 robot. We employ the designed HGR system and the inertial measurement unit (IMU) integrated into the Myo sensor to command and control the motion of both platforms. The movement of the helicopter test bench and the UR5 robot is controlled under a PID controller scheme. Experimental results show the effectiveness of the proposed DQN-based HGR system for controlling both platforms with a fast and accurate response.
Subject(s)
Robotic Surgical Procedures , Robotics , Humans , Gestures , Algorithms , Upper Extremity , Electromyography/methods , Hand
ABSTRACT
This work provides a complete dataset containing surface electromyography (sEMG) signals acquired from the forearm with a sampling frequency of 1000 Hz. The dataset is named WyoFlex sEMG Hand Gesture and records the data of 28 participants between 18 and 37 years old without neuromuscular diseases or cardiovascular problems. The test protocol consisted of sEMG signal acquisition corresponding to ten wrist and grasping movements (extension, flexion, ulnar deviation, radial deviation, hook grip, power grip, spherical grip, precision grip, lateral grip, and pinch grip), considering three repetitions for each gesture. The dataset also contains general information such as anthropometric measures of the upper limb, gender, age, laterality of the person, and physical condition. Likewise, the implemented acquisition system consists of a portable armband with four sEMG channels distributed equidistantly for each forearm. The database could be used for the recognition of hand gestures, evaluation of the evolution of patients in rehabilitation processes, control of upper limb orthoses or prostheses, and biomechanical analysis of the forearm.
Subject(s)
Artificial Limbs , Forearm , Humans , Adolescent , Young Adult , Adult , Electromyography/methods , Wrist , Gestures , Hand
ABSTRACT
In recent years, hand gesture recognition (HGR) technologies that use electromyography (EMG) signals have been of considerable interest in developing human-machine interfaces. Most state-of-the-art HGR approaches are based mainly on supervised machine learning (ML). However, the use of reinforcement learning (RL) techniques to classify EMGs is still a new and open research topic. Methods based on RL have some advantages, such as promising classification performance and online learning from the user's experience. In this work, we propose a user-specific HGR system based on an RL agent that learns to characterize EMG signals from five different hand gestures using Deep Q-network (DQN) and Double-Deep Q-Network (Double-DQN) algorithms. Both methods use a feed-forward artificial neural network (ANN) for the representation of the agent policy. We also performed additional tests by adding a long short-term memory (LSTM) layer to the ANN to analyze and compare its performance. We performed experiments using training, validation, and test sets from our public dataset, EMG-EPN-612. The final accuracy results demonstrate that the best model was DQN without LSTM, obtaining classification and recognition accuracies of up to 90.37%±10.7% and 82.52%±10.9%, respectively. The results obtained in this work demonstrate that RL methods such as DQN and Double-DQN can obtain promising results for classification and recognition problems based on EMG signals.
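The RL formulation described here treats each EMG feature window as a state, gesture labels as actions, and correctness feedback as reward. The paper's agent uses a deep network; the sketch below replaces it with a linear Q-function and terminal one-step episodes to show the core loop. All class and parameter names are hypothetical:

```python
import random

class QClassifier:
    """Bandit-style RL classifier: actions are gesture labels, reward is +/-1."""

    def __init__(self, n_features, n_gestures, lr=0.1, eps=0.1):
        self.w = [[0.0] * n_features for _ in range(n_gestures)]
        self.lr, self.eps, self.n = lr, eps, n_gestures

    def q(self, x, a):
        """Linear Q-value for feature window x and gesture label a."""
        return sum(wi * xi for wi, xi in zip(self.w[a], x))

    def act(self, x):
        """Epsilon-greedy action selection over gesture labels."""
        if random.random() < self.eps:
            return random.randrange(self.n)
        return max(range(self.n), key=lambda a: self.q(x, a))

    def update(self, x, a, reward):
        # Episodes are one step long, so the TD target is just the reward.
        err = reward - self.q(x, a)
        self.w[a] = [wi + self.lr * err * xi for wi, xi in zip(self.w[a], x)]
```

Training interleaves `act` and `update` per window; the DQN in the paper additionally uses experience replay and a target network, omitted here for brevity.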
Subject(s)
Gestures , Neural Networks, Computer , Humans , Electromyography/methods , Algorithms , Memory, Long-Term , Hand
ABSTRACT
The Bohemian writer Milan Kundera narrates, more than once, an experience from his years of life under an authoritarian regime. It is the memory of a violent fantasy of rape, one in which libido and destruction are mingled. Based on this memory and how he wrote about it, we present two forms of mental illnesses (by activation and by passivation) and relate them to the model proposed by Green to think about depressive states through passivation. The first form of mental illness, by activation, is the result of an overly successful active defense against anxiety. The second form, by passivation, is a paradoxical reaction to agony in the face of deadly psychic states. Arguing that this second form of mental illness is frequently identified in individuals during periods of political change, we consider that the intricacy between the drives of destruction and the libido, even when it generates fantasies or brutal gestures, can reveal itself as an episodic attempt of an active defense amid the predominance of passivation generated by post-traumatic helplessness.
Subject(s)
Gestures , Mental Disorders , Male , Humans , Libido , Fantasy
ABSTRACT
Political apologies have been theorized to play an important role in healing and reconciliation processes in post-conflict settings. Whether they actually fulfil this function, however, remains unclear as the voices and perspectives of victim communities have largely been underrepresented in research. To address this, we examined the role of apologies that were offered for the El Mozote massacre (El Salvador), the Jeju 4.3 massacres (Republic of Korea) and Bloody Sunday (United Kingdom), according to members of these communities and the broader public. Although we anticipated that victim community members should find the apology more valuable and meaningful and should, therefore, be more positive about its role in healing and reconciliation processes, we found that this varies across countries. This variation could be explained by people's trust in the country's institutions. Across the samples, we found that the apology was seen as a relatively important gesture. For the apology to be perceived as impactful, however, it had to be seen as a meaningful (i.e. sincere) gesture. Our findings suggest that apologies have a role to play in the aftermath of human rights violations, but that it is essential to take the broader context into account.
Subject(s)
Gestures , Trust , Humans , El Salvador , Republic of Korea , United Kingdom
ABSTRACT
Hand gesture recognition (HGR) systems based on electromyography signals (EMGs) and inertial measurement unit signals (IMUs) have been studied for different applications in recent years. Most commonly, cutting-edge HGR methods are based on supervised machine learning. However, reinforcement learning (RL) techniques have shown potential benefits that make them a viable option for classifying EMGs. Methods based on RL have several advantages, such as promising classification performance and online learning from experience. In this work, we developed an HGR system made up of the following stages: pre-processing, feature extraction, classification, and post-processing. For the classification stage, we built an RL-based agent capable of learning to classify and recognize eleven hand gestures (five static and six dynamic) using a deep Q-network (DQN) algorithm based on EMG and IMU information. The proposed system uses a feed-forward artificial neural network (ANN) for the representation of the agent policy. We carried out the same experiments with two different types of sensors, the Myo armband sensor and the G-force sensor, to compare their performance. We performed experiments using training, validation, and test set distributions, and the results were evaluated for user-specific HGR models. The final accuracy results demonstrated that the best model was able to reach up to 97.50%±1.13% and 88.15%±2.84% for the classification and recognition, respectively, of static gestures, and 98.95%±0.62% and 90.47%±4.57% for the classification and recognition, respectively, of dynamic gestures with the Myo armband sensor. The results obtained in this work demonstrate that RL methods such as the DQN are capable of learning a policy from online experience to classify and recognize static and dynamic gestures using EMG and IMU signals.
Subject(s)
Gestures , Neural Networks, Computer , Algorithms , Upper Extremity , Electromyography/methods , Hand
ABSTRACT
The classification of surface myoelectric signals (sEMG) remains a great challenge when focused on implementation in an electromechanical hand prosthesis, due to their nonlinear and stochastic nature, as well as the great difference between models applied offline and online. In this work, we present the selection of the feature set that allowed us to obtain the best results for the classification of this type of signal. In order to compare the results obtained, the NinaPro DB2 and DB3 databases were used, which contain information on 50 different movements performed by 40 healthy subjects and 11 amputated subjects, respectively. The sEMG of each subject was acquired through 12 channels in a bipolar configuration. To carry out the classification, a convolutional neural network (CNN) was used, and four sets of features extracted in the time domain were compared: three have shown good performance in previous works, and one more was used for the first time to train this type of network. Set one is composed of six features in the time domain (TD1); set two has 10 features, also in the time domain, including the autoregression model (AR) (TD2); the third set has two features in the time domain derived from spectral moments (TD-PSD1); and the fourth set has five features carrying information on the power spectrum of the signal, also obtained in the time domain (TD-PSD2). The selected features in each set were organized in four different ways to form the training images. The results show that the TD-PSD2 feature set obtained the best performance in all cases. With the proposed feature set and image formation, an increase in model accuracy of 8.16% and 8.56% was obtained for the DB2 and DB3 databases, respectively, compared to the current state of the art using these databases.
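Features "derived from spectral moments" but computed in the time domain typically rely on Parseval's theorem: the signal's power and the power of its successive derivatives estimate the zeroth, second, and fourth spectral moments without an FFT. The sketch below illustrates that generic construction, not the paper's exact TD-PSD descriptor set:

```python
import math

def tdpsd_moments(x):
    """Spectral moments estimated in the time domain (Parseval/Hjorth style)."""
    def power(sig):
        return sum(v * v for v in sig)

    def diff(sig):
        return [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]

    m0 = power(x)               # total power ~ zeroth spectral moment
    m2 = power(diff(x))         # second moment via the first derivative
    m4 = power(diff(diff(x)))   # fourth moment via the second derivative
    # Log-scaling stabilizes the dynamic range before feeding a classifier.
    return [math.log(m + 1e-12) for m in (m0, m2, m4)]
```

Ratios and normalized combinations of these moments are what usually form the final feature vector fed to the CNN.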
Subject(s)
Amputees , Gestures , Algorithms , Electromyography/methods , Hand , Humans , Movement , Neural Networks, Computer
ABSTRACT
Objective: Gesture-based serious games can be based on playful and interactive scenarios to enhance user engagement and experience during exercises, thereby increasing efficiency in the motor rehabilitation process. This study aimed to develop the Rehabilite Game (RG) as a complementary therapy tool for upper limb rehabilitation in clinics and home environments and to evaluate its usability and user experience. Materials and Methods: The evaluation consisted of the use of a gesture-based serious game with motor rehabilitation sessions managed in a web platform. Thirty-three participants were recruited (21 physiotherapists and 12 patients). The protocol allowed each participant to have the experience of playing sessions with different combinations of settings. The User Experience Questionnaire (UEQ) was used to evaluate aspects of usability and user experience. The study was approved by the Research Ethics Board of the Federal University of Piaui (number 3,429,494). Results: The level of satisfaction with the RG was positive, with an excellent Net Promoter Score for 85.7% of physiotherapists and 100% of patients. All six UEQ scales (attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty) reflected acceptance. Conclusion: The study demonstrated that, according to the results obtained in the experiments, the RG had positive feedback from physiotherapists and patients, indicating that the game can be used in a clinical trial to be compared with other rehabilitation techniques.
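For reference, the Net Promoter Score cited in the results is conventionally computed from 0-10 recommendation ratings as the percentage of promoters (9-10) minus the percentage of detractors (0-6). A generic sketch of that standard calculation (not the study's code):

```python
def net_promoter_score(ratings):
    """NPS from 0-10 ratings: % promoters (9-10) minus % detractors (0-6)."""
    n = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / n
```

Scores above roughly +50 are commonly labeled "excellent", which is presumably the threshold the study refers to.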
Subject(s)
Stroke Rehabilitation , Telerehabilitation , Video Games , Gestures , Humans , Stroke Rehabilitation/methods , Upper Extremity
ABSTRACT
This paper aims to understand the notion of language and expression in Merleau-Ponty, in order to show how the genesis of linguistic meaning occurs. First, we chose to present the state of this question in the Phenomenology of Perception. We introduce the body as an expressive and signifying power of the world, and language as an expression linked to the gestural character of the word. Then, we present the expression of language after Merleau-Ponty's appropriation of Saussure's linguistics. Finally, we focus on the articulation between the gestural sense of the word and the systematic character of the language: the speaking subject, as a power of actualizing and creating meaning, assumes the language to which it belongs, at the same time that the language forms its possibility of expression.
Subject(s)
Gestures , Language , Nonverbal Communication
ABSTRACT
This article reports the results of an investigation that used a mixed methodology with microgenetic orientation, to observe the genetic development of small acts of thought and their bodily manifestations. A qualitative design was carried out through a videographic record with 10 participants to explore thought trajectories and their genetic unfolding in gestures. In a second moment, a quantitative sequential analysis was conducted with 50 participants, who were invited to the laboratory to participate in a tachistoscopic presentation. The procedure was videotaped and coded, identifying categories of thought and their respective gestural expressions. An analysis of different trajectories was carried out to observe the transitions that thought takes and its gestural movements. The results show trajectories in the forms of thought that are investigated through a qualitative microgenetic analysis, which shows the anticipation of verbal meaning through gestures and the transitions backwards to then advance into more integrated forms of thought. On the other hand, trajectories between voluntary and involuntary forms of thought, as well as transitions in verbal and imaginative forms of thought are detected in a quantitative sequence analysis. Finally, the results are integrated and the utility of mixed designs to study the microgenesis of the consciousness phenomenon is discussed.
Subject(s)
Gestures, Humans

ABSTRACT
This study aims to describe features present in artistic profiles in dance that we recognize for their stability and persistence. It takes as its first cases Lac (Sandro Borelli) and Les Poupées (Marta Soares). With the Big Five model, the five major personality factors, it is possible to delineate groups of gestures and movements, non-verbal behavior: those that persist as action and reaction, those receptive and reactive to the action itself, those insistent to the naked eye. We make use of observation and, consequently, interpretation, based on DVD recordings viewed with forward, rewind, and freeze-frame controls. Profiles in art are usually studied according to the degree of proximity to or distance from a given factor of the artist's personality, and not as an independent motive in the artistic product. (AU)
Subject(s)
Humans, Male, Female, Personality, Behavior, Dancing, Gestures, Movement, Video Recording

ABSTRACT
Purpose: Most toddlers with autism spectrum disorder and other developmental delays receive early intervention at home and may not participate in a clinic-based communication evaluation. However, there is limited research that has prospectively examined communication in very young children with and without autism in a home-based setting. This study used granular observational coding to document the communicative acts performed by toddlers with autism, developmental delay, and typical development in the home environment.
Method: Children were selected from the archival database of the FIRST WORDS Project (N = 211). At approximately 20 months of age, each child participated in everyday activities with a caregiver during an hour-long, video-recorded, naturalistic home observation. Inventories of unique gestures, rates per minute, and proportions of types of communicative acts and communicative functions were coded and compared using a one-way analysis of variance. Concurrent and prospective relationships between rate of communication and measures of social communication, language development, and autism symptoms were examined.
Results: A total of 40,738 communicative acts were coded. Children with autism, developmental delay, and typical development used eight, nine, and 12 unique gestures on average, respectively. Children with autism used deictic gestures, vocalizations, and communicative acts for behavior regulation at significantly lower rates than the other groups. Statistically significant correlations were observed between rate of communication and several outcome measures.
Conclusion: Observation of social communication in the natural environment may improve early identification of children with autism and communication delays, complement clinic-based assessments, and provide useful information about a child's social communication profile and the family's preferred activities and intervention priorities. Supplemental Material: https://doi.org/10.23641/asha.14204522.
Subject(s)
Autism Spectrum Disorder, Language Development Disorders, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/therapy, Child, Preschool, Communication, Gestures, Humans, Language Development, Language Development Disorders/diagnosis, Language Development Disorders/therapy, Prospective Studies

ABSTRACT
Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language-learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, thus suggesting that children bring to language-learning biases to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large numbers of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign.
The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language. The findings shed light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.