Results 1 - 20 of 89
1.
Psychol Res ; 88(4): 1298-1313, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38538819

ABSTRACT

Hand gestures play an integral role in multimodal language and communication. Even though the self-oriented functions of gestures, such as activating a speaker's lexicon and maintaining visuospatial imagery, have been emphasized, gestures' functions in creative thinking are not well-established. In the current study, we investigated the role of iconic gestures in verbal divergent thinking-a creative thinking process related to generating many novel ideas. Based on previous findings, we hypothesized that iconic gesture use would facilitate divergent thinking in young adults, especially those with high mental imagery skills. Participants performed Guildford's Alternative Uses Task in a gesture-spontaneous and in a gesture-encouraged condition. We measured fluency (number of ideas), originality (uniqueness of ideas), flexibility (number of idea categories), and elaboration (number of details) in divergent thinking. The results showed that producing iconic gestures in the gesture-encouraged condition positively predicted fluency, originality, and elaboration. In the gesture-spontaneous condition, producing iconic gestures also positively predicted elaboration but negatively predicted flexibility. Mental imagery skills did not interact with the effects of gestures on divergent thinking. These results suggest that iconic gestures are a promising candidate for enhancing almost all aspects of divergent thinking. Overall, the current study adds a new dimension to the self-oriented function of iconic gestures, that is, their contribution to creative thinking.


Subject(s)
Creativity , Gestures , Thinking , Humans , Male , Female , Young Adult , Thinking/physiology , Adult , Hand/physiology , Imagination/physiology , Adolescent
2.
PLoS One ; 18(4): e0283859, 2023.
Article in English | MEDLINE | ID: mdl-37023100

ABSTRACT

Using hand gestures benefits children's divergent thinking and enhances verbal improvisation in adults. In the present study, we asked whether gestures were also associated with convergent thinking by activating individuals' verbal lexicon and maintaining their visuospatial imagery. We tested young adults on verbal and visual convergent thinking, controlling for their mental imagery skills. Results showed that gestures and mental imagery skills play a role in verbal but not visual convergent thinking. Regardless of whether gestures were spontaneous or encouraged, we found a negative association between overall gesture frequency and verbal convergent thinking for individuals with low mental imagery, and a positive association for individuals with high mental imagery. Representational gestures benefited verbal convergent thinking for everyone except those who had low mental imagery and no experience with the task. Performing beat gestures hampered verbal convergent thinking in people with lower mental imagery capacity and helped those who had high mental imagery and previous experience with the task. We also found that gesturing can benefit people with lower verbal abilities on verbal convergent thinking; however, high spatial imagery abilities were required for gestures to boost verbal convergent thinking. The current study adds a new dimension to both the embodied creativity literature and the kaleidoscope of individual differences in gesture research.


Subject(s)
Gestures , Thinking , Child , Young Adult , Humans , Cognition , Creativity
3.
Medicine (Baltimore) ; 102(52): e36546, 2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38206692

ABSTRACT

BACKGROUND: Mirror therapy (MT) is an intervention used for upper extremity rehabilitation in stroke patients and has been studied in various fields. Recently, effective MT methods have been introduced that combine neuromuscular electrical stimulation or electromyography (EMG)-triggered biofeedback. The purpose of this study was to investigate the effects of functional electrical stimulation (FES)-based MT incorporating a motion recognition biofeedback device on upper extremity motor recovery in chronic stroke patients. METHODS: Twenty-six chronic stroke patients with onset more than 6 months earlier were randomly assigned to an experimental group (n = 13) or a control group (n = 13). Both groups participated in a conventional rehabilitation program, while the control group received conventional MT intervention and the experimental group received FES-based MT with a motion recognition biofeedback device. All interventions were conducted for 30 min/d, 5 d/wk, for 4 weeks. Upper limb motor recovery, upper limb function, active range of motion (ROM), and activities of daily living independence were measured before and after the intervention and compared between the 2 groups. RESULTS: The Fugl-Meyer assessment (FMA), manual function test (MFT), K-MBI, and active ROM (excluding deviation) improved significantly in both groups (P < .05). Only the experimental group showed significant improvement in upper extremity recovery and in ulnar and radial deviation (P < .05). The experimental group showed significantly greater changes in Brunnstrom recovery stage, FMA, MFT, and active ROM than the control group (P < .05). CONCLUSION: FES-based MT using gesture recognition biofeedback is an effective intervention for improving upper extremity motor recovery, function, and active ROM in patients with chronic stroke. This study suggests that incorporating gesture-recognition biofeedback into FES-based MT can provide additional benefits to patients with chronic stroke.


Subject(s)
Stroke Rehabilitation , Stroke , Humans , Stroke Rehabilitation/methods , Activities of Daily Living , Mirror Movement Therapy , Gestures , Recovery of Function , Treatment Outcome , Stroke/therapy , Biofeedback, Psychology , Brain Damage, Chronic , Upper Extremity , Electric Stimulation
4.
Comput Intell Neurosci ; 2022: 3682261, 2022.
Article in English | MEDLINE | ID: mdl-35814540

ABSTRACT

Folk dance is a distinctive local culture in China, and dances from different regions have different characteristics. With the development of 3D digital technology and human gesture recognition technology, how to apply them to folk dance is a question worth considering. This paper therefore recognizes and collects dance movements using the human body detection and tracking techniques of gesture recognition technology. The data are then written into an AAM model for 3D digital modeling, and the information is retained by integrating manifold ordering. Finally, the paper designs a folk dance learning method combined with a Few-Shot learning approach. A data set test experiment, an algorithm data set comparison experiment, and a target matching algorithm comparison experiment are also designed to optimize the proposed learning method. The final results show that the Few-Shot learning method based on gesture recognition and 3D digital modeling of folk dances reduces learning time by 17% compared with traditional folk dance learning methods, and improves the dance action score by 14% compared with the traditional learning method.
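The abstract gives few implementation details, so as a hedged illustration of the few-shot matching idea only, a nearest-centroid classifier over pose-feature vectors could look like the sketch below; the function name, feature layout, and data are invented for illustration and are not the paper's actual pipeline (which involves AAM modeling and manifold ordering):

```python
import numpy as np

def nearest_centroid_fewshot(support_feats, support_labels, query_feats):
    """Label each query pose-feature vector with the class whose
    centroid (mean of a handful of labeled support examples) is
    nearest in Euclidean distance -- the simplest few-shot matcher."""
    classes = sorted(set(support_labels))
    centroids = np.stack([
        support_feats[[i for i, y in enumerate(support_labels) if y == c]].mean(axis=0)
        for c in classes
    ])
    # distance from every query (rows) to every class centroid (columns)
    dists = np.linalg.norm(query_feats[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]
```

In a dance-learning setting, the support set would hold a few recorded examples of each reference movement and the queries would be the learner's tracked poses.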


Subject(s)
Gestures , Learning , Algorithms , Humans , Movement , Recognition, Psychology
5.
Games Health J ; 11(3): 177-185, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35294849

ABSTRACT

Objective: Gesture-based serious games can be based on playful and interactive scenarios to enhance user engagement and experience during exercises, thereby increasing efficiency in the motor rehabilitation process. This study aimed to develop the Rehabilite Game (RG) as a complementary therapy tool for upper limb rehabilitation in clinics and home environments and to evaluate its usability and user experience. Materials and Methods: The evaluation consisted of the use of a gesture-based serious game with motor rehabilitation sessions managed in a web platform. Thirty-three participants were recruited (21 physiotherapists and 12 patients). The protocol allowed each participant to have the experience of playing sessions with different combinations of settings. The User Experience Questionnaire (UEQ) was used to evaluate aspects of usability and user experience. The study was approved by the Research Ethics Board of the Federal University of Piaui (number 3,429,494). Results: The level of satisfaction with the RG was positive, with an excellent Net Promoter Score for 85.7% of physiotherapists and 100% of patients. All six UEQ scales (attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty) reflected acceptance. Conclusion: The study demonstrated that, according to the results obtained in the experiments, the RG had positive feedback from physiotherapists and patients, indicating that the game can be used in a clinical trial to be compared with other rehabilitation techniques.


Subject(s)
Stroke Rehabilitation , Telerehabilitation , Video Games , Gestures , Humans , Stroke Rehabilitation/methods , Upper Extremity
6.
J Holist Nurs ; 40(3): 281-294, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34463166

ABSTRACT

Nursing care historically has not been separated from institutional care costs. Organizations seek to quantify nursing care with no assignation of the value or uniqueness of the individual patient-nurse encounter. New models point to measuring care at this level. Nursing care encompasses tangible evidence that can be easy to quantify but, in the paradigm of healing and caring, and more specifically within the knowledge pool of holistic nursing, significant contributions are intangible and thus hard to measure. Anthroposophic nursing's 12 nursing gestures offer an integration by making intangible nursing practice tangible. They incorporate addressing the whole person and more clearly show the caring and healing aspects of nursing care. Making such intangibles of care tangible contributes to the discussion of nursing value and how it is measured in healthcare organizations. More research is needed, however, to refine and value nursing care to more accurately reflect the connection between caring, healing, and patient outcomes.


Subject(s)
Holistic Nursing , Nursing Care , Gestures , Humans , Nurse-Patient Relations
7.
Psychol Res ; 85(4): 1408-1417, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32451629

ABSTRACT

This study aimed to explore the relationship between action execution and mental rotation modalities. To this end, pantomime gesture (i.e. the mime of the use of an object) was used, as its execution relies on imagery processes. Specifically, we tried to clarify the role of visuo-spatial or motor and body-related mental imagery processes in pantomime gestures performed away from the body (AB, e.g. drawing on a sheet) and towards the body (TB, e.g. brushing the teeth). We included an "actual use" condition in which participants were asked to use a toothbrush and make 3, 6, or 9 circular movements close to their mouth (as if they were brushing their teeth) or to use a pencil and make 3, 6, or 9 circular movements on a desk (as if they were drawing circles). Afterwards, participants were asked to pantomime the actual use of the same objects ("pantomime" condition). Finally, they were asked to mentally rotate three different stimuli: hands, faces, and abstract lines. Results showed that participants were faster in AB than TB pantomimes. Moreover, the more accurate and faster the mental rotation of body-related stimuli was, the more similar the temporal duration between both kinds of pantomimes and the actual use of the objects appeared. In contrast, the temporal similarity between AB pantomimes and pencil actual use, as well as the duration of AB pantomime and actual use, were associated with the ability to mentally rotate abstract lines. This was not true for TB movements. These results suggest that the execution of AB and TB pantomimes may involve different mental imagery modalities. Specifically, AB pantomimes would not only require mentally manipulating images of body parts in movement but also representing the spatial relations of the object with the external world.


Subject(s)
Gestures , Imagination/physiology , Psychomotor Performance/physiology , Adult , Hand , Humans , Male , Motor Activity/physiology , Movement , Photic Stimulation/methods , Young Adult
8.
Q J Exp Psychol (Hove) ; 74(1): 29-44, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32640872

ABSTRACT

Ageing has effects on both language and gestural communication skills. Although gesture use is similar between younger and older adults, the use of representational gestures (e.g., drawing a line with fingers in the air to indicate a road) decreases with age. This study investigates whether this change in the production of representational gestures is related to individuals' working memory and/or mental imagery skills. We used three gesture tasks (daily activity description, story completion, and address description) to obtain spontaneous co-speech gestures from younger and older individuals (N = 60). Participants also completed the Corsi working memory task and a mental imagery task. Results showed that although the two age groups' overall gesture frequencies were similar across the three tasks, the younger adults used relatively higher proportions of representational gestures than the older adults only in the address description task. Regardless of age, the mental imagery but not the working memory score was associated with the use of representational gestures only in this task. However, the use of spatial words in the address description task did not differ between the two age groups. Neither mental imagery nor working memory scores were associated with spatial word use. These findings suggest that mental imagery can play a role in gesture production. Gesture and speech production might have separate timelines in terms of being affected by the ageing process, particularly for spatial content.


Subject(s)
Communication , Gestures , Memory, Short-Term , Aged , Aging , Comprehension , Humans , Speech
9.
Science ; 369(6510): 1424-1426, 2020 Sep 18.
Article in English | MEDLINE | ID: mdl-32943508
10.
Cogn Res Princ Implic ; 5(1): 25, 2020 06 03.
Article in English | MEDLINE | ID: mdl-32494941

ABSTRACT

BACKGROUND: This study investigated the impact of handedness on a common spatial abilities task, the mental rotation task (MRT). The influence of a right-handed world was contrasted with people's embodied experience with their own hands by testing both left- and right-handed people on an MRT of right- and left-hand stimuli. An additional consideration is the influence of matching the shape of the hand stimuli with the proprioception of one's own hands. Two orthogonal hypothesis axes were crossed to yield four competing hypotheses. One axis contrasted (i) embodied experience versus (ii) world knowledge; the other axis contrasted (a) the match between the visual image of a hand on the screen and one's own hand versus (b) the resemblance of the shape outline information from the hand stimuli with the proprioception of one's own hands. RESULTS: Among people with mixed handedness, right-handers performed more accurately for left-hand stimuli, while left-handers had a trend for higher accuracy for right-hand stimuli. For people with extreme handedness, right-handers outperformed left-handers. Regardless of group, there was no significant variation in performance for left-hand stimuli, with only right-hand stimuli producing significant variation. CONCLUSIONS: No hypothesis fully aligned with all the data. For left-hand stimuli, the consistent performance across groups does not provide support for embodied experience, while world knowledge might influence all groups similarly. Alternatively, the within-group variation for mixed-handed people supports embodied experience in the hand MRT, likely processed through visual-proprioceptive integration.


Subject(s)
Functional Laterality/physiology , Gestures , Hand , Imagination/physiology , Proprioception/physiology , Space Perception/physiology , Adult , Female , Humans , Male , Rotation
11.
Acta Psychol (Amst) ; 197: 131-142, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31146090

ABSTRACT

In two experiments, we examined the role of gesture in reinterpreting a mental image. In Experiment 1, we found that participants gestured more about a figure they had learned through manual exploration than about a figure they had learned through vision. This supports claims that gestures emerge from the activation of perception-relevant actions during mental imagery. In Experiment 2, we investigated whether such gestures have a causal role in affecting the quality of mental imagery. Participants were randomly assigned to gesture, not gesture, or engage in a manual interference task as they attempted to reinterpret a figure they had learned through manual exploration. We found that manual interference significantly impaired participants' success on the task. Taken together, these results suggest that gestures reflect mental imaginings of interactions with a mental image and that these imaginings are critically important for mental manipulation and reinterpretation of that image. However, our results suggest that enacting the imagined movements in gesture is not critically important on this particular task.


Subject(s)
Gestures , Imagination/physiology , Movement/physiology , Touch/physiology , Adolescent , Adult , Female , Humans , Learning/physiology , Male , Random Allocation , Young Adult
12.
J Neurosci ; 39(30): 5966-5974, 2019 07 24.
Article in English | MEDLINE | ID: mdl-31126999

ABSTRACT

The middle temporal gyrus (MTG) has been shown to be recruited during the processing of words, but also during the observation of actions. Here we investigated how information related to words and gestures is organized along the MTG. To this aim, we measured the BOLD response in the MTG to video clips of gestures and spoken words in 17 healthy human adults (male and female). Gestures consisted of videos of an actress performing object-use pantomimes (iconic representations of object-directed actions; e.g., playing guitar), emblems (conventional gestures, e.g., thumb up), and meaningless gestures. Word stimuli (verbs, nouns) consisted of video clips of the same actress pronouncing words. We found a stronger response to meaningful compared with meaningless gestures along the whole left and large portions of the right MTG. Importantly, we observed a gradient, with posterior regions responding more strongly to gestures (pantomimes and emblems) than words and anterior regions showing a stronger response to words than gestures. In an intermediate region in the left hemisphere, the response was significantly higher to words and emblems (i.e., items with a greater arbitrariness of the sign-to-meaning mapping) than to pantomimes. These results show that the large-scale organization of information in the MTG is driven by the input modality and may also reflect the arbitrariness of the relationship between sign and meaning. SIGNIFICANCE STATEMENT: Here we investigated the organizing principle of information in the middle temporal gyrus, taking into consideration the input modality and the arbitrariness of the relationship between a sign and its meaning. We compared the middle temporal gyrus response during the processing of pantomimes, emblems, and spoken words. We found that posterior regions responded more strongly to pantomimes and emblems than to words, whereas anterior regions responded more strongly to words than to pantomimes and emblems. In an intermediate region, only in the left hemisphere, words and emblems evoked a stronger response than pantomimes. Our results identify two organizing principles of neural representation: the modality of communication (gestural or verbal) and the (arbitrariness of the) relationship between sign and meanings.


Subject(s)
Gestures , Language , Speech/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Photic Stimulation/methods , Random Allocation , Young Adult
13.
J Speech Lang Hear Res ; 62(2): 229-246, 2019 02 26.
Article in English | MEDLINE | ID: mdl-30950695

ABSTRACT

Purpose: This study evaluated ultrasound visual biofeedback treatment for teaching new articulations to children with a wide variety of speech sound disorders. It was hypothesized that motor-based intervention incorporating ultrasound would lead to rapid acquisition of a range of target lingual gestures with generalization to untreated words. Method: Twenty children aged 6-15 years with a range of mild to severe speech disorders affecting a variety of lingual targets enrolled in a case series with replication. Of these, 15 children completed the intervention. All of the children presented with a variety of errors. We therefore employed a target selection strategy to treat the most frequent lingual error. These individual speech targets were treated using ultrasound visual biofeedback as part of ten to twelve 1-hr intervention sessions. The primary outcome measure was percentage of target segments correct in untreated wordlists. Results: Six children were treated for velar fronting; 3 children, for postalveolar fronting; 2 children, for backing alveolars to pharyngeal or glottal place; 1 child, for debuccalization (production of all onsets as [h]); 1 child, for vowel merger; and 2 children, for lateralized sibilants. Ten children achieved the new articulation in the 1st or 2nd session of intervention, despite no children being readily stimulable for their target articulation before intervention. In terms of generalization, effect sizes for percentage of target segments correct ranged from no effect (5 children) through small (1 child) and medium (4 children) to large (5 children). Conclusions: Ultrasound visual biofeedback can be used to treat a wide range of lingual errors in children with various speech sound disorders, from mild to severe. Visual feedback may be useful for establishing new articulations; however, generalization is more variable.


Subject(s)
Biofeedback, Psychology/methods , Gestures , Speech Sound Disorder/therapy , Speech Therapy/methods , Adolescent , Child , Female , Humans , Male , Phonation , Ultrasonic Waves , Vocabulary
14.
Psychon Bull Rev ; 26(3): 721-752, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30511231

ABSTRACT

The Gesture as Simulated Action (GSA) framework was proposed to explain how gestures arise from embodied simulations of the motor and perceptual states that occur during speaking and thinking (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008). In this review, we revisit the framework's six main predictions regarding gesture rates, gesture form, and the cognitive cost of inhibiting gesture. We find that the available evidence largely supports the main predictions of the framework. We also consider several challenges to the framework that have been raised, as well as several of the framework's limitations as it was originally proposed. We offer additional elaborations of the framework to address those challenges that fall within the framework's scope, and we conclude by identifying key directions for future work on how gestures arise from an embodied mind.


Subject(s)
Gestures , Movement , Cognition , Humans , Inhibition, Psychological , Models, Psychological , Psychological Theory , Speech , Thinking
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 231-234, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440380

ABSTRACT

It is known that brain dynamics change significantly during motor imagery tasks of the upper limb involving different kinds of interactions with an object. Nevertheless, automatic discrimination of transitive (i.e., actions involving an object) and intransitive (i.e., meaningful gestures that do not include the use of objects) imaginary actions using EEG dynamics has not yet been performed. In this study we exploit measures of EEG spectra to automatically discern between imaginary transitive and intransitive movements of the upper limb. To this end, nonlinear support vector machine algorithms are used to properly combine EEG-derived features, while a recursive feature elimination procedure highlights the most discriminant cortical regions and associated EEG frequency oscillations. Results show the significance of γ (30-45 Hz) oscillations over the fronto-occipital and ipsilateral-parietal areas for the automatic classification of transitive vs. intransitive imaginary upper limb movements, with a satisfactory accuracy of 70.97%.
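As a rough sketch of the pipeline this abstract describes (spectral features fed to an SVM, with recursive feature elimination highlighting discriminant features), here is a scikit-learn example on synthetic band-power data; the data, feature layout, and parameters are illustrative assumptions, not the study's setup. Note that RFE needs a linear estimator to rank features, so a linear SVM performs the elimination while an RBF (nonlinear) SVM performs the final classification:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Synthetic band-power features: 60 trials x 16 (channel, band) features.
# Only the first 4 features carry class information -- an invented
# stand-in for gamma power over fronto-occipital/parietal sites.
X = rng.normal(size=(60, 16))
y = np.repeat([0, 1], 30)          # transitive vs. intransitive imagery
X[y == 1, :4] += 1.0

# Recursive feature elimination with a linear SVM ranks the features
rfe = RFE(SVC(kernel="linear"), n_features_to_select=4).fit(X, y)
top = np.flatnonzero(rfe.support_)

# Nonlinear (RBF) SVM evaluated on the selected features only
acc = cross_val_score(SVC(kernel="rbf"), X[:, top], y, cv=5).mean()
```

The retained feature indices (`top`) play the role of the "most discriminant cortical regions and oscillations" in the study.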


Subject(s)
Imagery, Psychotherapy , Support Vector Machine , Electroencephalography , Gestures , Movement
16.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2651-2654, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440952

ABSTRACT

An electromyogram (EMG) signal acquisition system capable of real-time classification of several facial gestures is presented. The training data consist of facial EMG collected from 10 individuals (5 female/5 male). A custom-designed sensor interface integrated circuit (IC) consisting of an amplifier and an ADC, implemented in 65 nm CMOS technology, has been used for signal acquisition [1]. It consumes 3.8 nW power from a 0.3 V battery. Feature extraction and classification are performed in software every 300 ms to give real-time feedback to the user. Discrete wavelet transforms (DWT) are used for feature extraction in the time-frequency domain. The dimensionality of the feature vector is reduced by selecting specific wavelet decomposition levels without compromising accuracy, which reduces the computation cost of feature extraction in embedded implementations. A support vector machine (SVM) is used for the classification. Overall, the system is capable of identifying several jaw movements such as clenching, opening the jaw, and resting in real time from single-channel EMG data, which makes the system suitable for providing biofeedback during sleeping and awake states for stress monitoring, bruxism, and several orthodontic applications such as temporomandibular joint disorder (TMJD).
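As an illustrative sketch of the DWT feature-extraction step, the snippet below uses a Haar wavelet for simplicity; the abstract does not specify the wavelet, window length, or decomposition depth, so all parameters here are assumptions:

```python
import numpy as np

def haar_dwt_features(window, levels=4):
    """Decompose one EMG window with a Haar DWT and return the
    log-energy of the detail coefficients at each level plus the
    final approximation: a compact (levels + 1)-dim feature vector,
    mirroring the idea of keeping only selected decomposition levels."""
    x = np.asarray(window, dtype=float)
    feats = []
    for _ in range(levels):
        if len(x) % 2:                        # pad odd-length signals
            x = np.append(x, 0.0)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        feats.append(np.log1p(np.sum(detail ** 2)))   # band energy
        x = approx
    feats.append(np.log1p(np.sum(x ** 2)))            # residual energy
    return np.array(feats)
```

Each 300 ms window would yield one such vector to feed an SVM classifier; the paper's dimensionality reduction, keeping only the wavelet levels whose frequency bands discriminate the jaw gestures, corresponds to dropping entries of this vector.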


Subject(s)
Electromyography , Gestures , Movement , Biofeedback, Psychology , Female , Humans , Male , Wavelet Analysis
17.
Acta Psychol (Amst) ; 191: 190-200, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30308442

ABSTRACT

Music presents a complex case of movement timing, as one to several dozen musicians coordinate their actions at short time-scales. This process is often directed by a conductor who provides a visual beat and guides the ensemble through tempo changes. The current experiment tested the ways in which audio-motor coordination is influenced by visual cues from a conductor's gestures, and how this influence might manifest in two ways: movements used to produce sound related to the music, and movements of the upper-body that do not directly affect sound output. We designed a virtual conductor that was derived from morphed motion capture recordings of human conductors. Two groups of participants (29 musicians and 28 nonmusicians, to test the generalizability of visuo-motor synchronization to non-experts) were shown the virtual conductor, a simple visual metronome, or a stationary circle while completing a drumming task that required synchronization with tempo-changing musical sequences. We measured asynchronies and temporal anticipation in the drumming task, as well as participants' upper-body movement using motion capture. Drumming results suggest the conductor generally improves synchronization by facilitating anticipation of tempo changes in the music. Motion capture results showed that the conductor visual cue elicited more structured head movements than the other two visual cues for nonmusicians only. Multiple regression analysis showed that the nonmusicians with less rigid movement and high anticipation had lower asynchronies. Thus, the visual cues provided by a conductor might serve to facilitate temporal anticipation and more synchronous movement in the general population, but might also cause rigid ancillary movements in some non-experts.


Subject(s)
Anticipation, Psychological/physiology , Auditory Perception/physiology , Cues , Movement/physiology , Music/psychology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Female , Gestures , Humans , Male , Middle Aged , Photic Stimulation/methods , Sound , Time Factors , Young Adult
18.
Neuropsychologia ; 117: 332-338, 2018 08.
Article in English | MEDLINE | ID: mdl-29932960

ABSTRACT

During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus - known for its role in relational memory and information integration - is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms.


Subject(s)
Amnesia/pathology , Amnesia/physiopathology , Gestures , Hippocampus/pathology , Psychomotor Performance/physiology , Speech/physiology , Acoustic Stimulation , Aged , Amnesia/diagnostic imaging , Female , Hippocampus/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Middle Aged
19.
J Cogn Neurosci ; 30(8): 1086-1097, 2018 08.
Article in English | MEDLINE | ID: mdl-29916792

ABSTRACT

Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.


Subject(s)
Alpha Rhythm , Beta Rhythm , Brain/physiology , Comprehension/physiology , Gestures , Semantics , Speech Acoustics , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Photic Stimulation , Speech , Young Adult
20.
Brain Struct Funct ; 223(7): 3073-3089, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29737415

ABSTRACT

The semantic integration between gesture and speech (GSI) is mediated by the left posterior temporal sulcus/middle temporal gyrus (pSTS/MTG) and the left inferior frontal gyrus (IFG). Evidence from electroencephalography (EEG) suggests that oscillations in the alpha and beta bands may support processes at different stages of GSI. In the present study, we investigated the relationship between electrophysiological oscillations and blood-oxygen-level-dependent (BOLD) activity during GSI. In a simultaneous EEG-fMRI study, German participants (n = 19) were presented with videos of an actor either performing meaningful gestures in the context of a comprehensible German (GG) or incomprehensible Russian sentence (GR), or just speaking a German sentence (SG). EEG results revealed reduced alpha and beta power for the GG vs. SG conditions, while fMRI analyses showed BOLD increase in the left pSTS/MTG for GG > GR ∩ GG > SG. In time-window-based EEG-informed fMRI analyses, we further found a positive correlation between single-trial alpha power and BOLD signal in the left pSTS/MTG, the left IFG, and several sub-cortical regions. Moreover, the alpha-pSTS/MTG correlation was observed in an earlier time window than the alpha-IFG correlation, thus supporting a two-stage processing model of GSI. Our study shows that EEG-informed fMRI implies multiple roles of alpha oscillations during GSI, and that the method is a strong candidate for multidimensional investigations of complex cognitive functions such as GSI.


Subject(s)
Brain Mapping/methods , Brain Waves , Brain/diagnostic imaging , Brain/physiology , Electroencephalography , Gestures , Magnetic Resonance Imaging , Speech Perception , Visual Perception , Acoustic Stimulation , Adult , Alpha Rhythm , Beta Rhythm , Cognition , Female , Germany , Humans , Male , Photic Stimulation , Time Factors , Young Adult