Results 1 - 20 of 32
1.
PLoS Comput Biol ; 20(6): e1012222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38913743

ABSTRACT

Biological structures are defined by rigid elements, such as bones, and elastic elements, like muscles and membranes. Advances in computer vision have enabled automatic tracking of moving animal skeletal poses. Such developments provide insights into the complex time-varying dynamics of biological motion. In contrast, the elastic soft tissues of organisms, like the nose of elephant seals or the buccal sac of frogs, are poorly studied, and no computer vision methods have been proposed for them. This leaves major gaps in different areas of biology. In primatology, most critically, the function of air sacs is widely debated; many open questions on the role of air sacs in the evolution of animal communication, including human speech, remain unanswered. To support the dynamic study of soft-tissue structures, we present a toolkit for the automated tracking of semi-circular elastic structures in biological video data. The toolkit contains unsupervised computer vision tools (using the Hough transform) and supervised deep-learning methodology (adapting DeepLabCut) to track inflation of laryngeal air sacs or other spherical biological objects (e.g., gular cavities). Confirming the value of elastic kinematic analysis, we show that air sac inflation correlates with acoustic markers that likely inform about body size. Finally, we present a pre-processed audiovisual-kinematic dataset of 7+ hours of close-up audiovisual recordings of siamang (Symphalangus syndactylus) singing. This toolkit (https://github.com/WimPouw/AirSacTracker) aims to revitalize the study of non-skeletal morphological structures across multiple species.


Subjects
Air Sacs, Elasticity, Animals, Air Sacs/physiology, Air Sacs/anatomy & histology, Biomechanical Phenomena, Computational Biology/methods, Deep Learning, Video Recording/methods
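The abstract above mentions unsupervised tracking of circular structures via the Hough transform. As an illustration only (not the AirSacTracker implementation, whose details are not given here), a minimal circle Hough transform can be sketched in NumPy: each edge pixel votes for all candidate centres lying one candidate radius away, and the accumulator maximum gives the detected circle. All names and the toy data are hypothetical.

```python
import numpy as np

def hough_circle(edge_points, radii, shape):
    """Return the (cy, cx, r) circle with the most votes from edge pixels.

    For each edge point and candidate radius, votes are cast for every
    centre at that radius from the point (discretised over 100 angles).
    """
    acc = np.zeros((len(radii), *shape), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (ri, cy[ok], cx[ok]), 1)  # accumulate votes
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]

# Synthetic "air sac" outline: edge pixels on a circle at centre (40, 50), radius 20.
angles = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = [(int(round(40 + 20 * np.sin(a))), int(round(50 + 20 * np.cos(a))))
       for a in angles]
cy, cx, r = hough_circle(pts, radii=[15, 20, 25], shape=(100, 100))
print(cy, cx, r)  # recovers a centre near (40, 50) with radius 20
```

In a real video pipeline the edge points would come from an edge detector on each frame, and the recovered radius over time would serve as the inflation signal.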
2.
Proc Natl Acad Sci U S A ; 117(21): 11364-11367, 2020 05 26.
Article in English | MEDLINE | ID: mdl-32393618

ABSTRACT

We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear and not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another's dynamic physical states.


Subjects
Upper Extremity/physiology, Voice/physiology, Adult, Auditory Perception, Female, Hearing/physiology, Humans, Male, Motor Activity/physiology, Nontherapeutic Human Experimentation, Wrist/physiology
3.
Neuroimage ; 264: 119734, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36343884

ABSTRACT

We present a dataset of behavioural and fMRI observations acquired in the context of humans involved in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants' representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles) yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants' individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 min total). We present analyses for the various types of data demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond.


Subjects
Linguistics, Speech, Humans, Speech/physiology, Communication, Language, Magnetic Resonance Imaging
4.
Psychol Sci ; 32(8): 1227-1237, 2021 08.
Article in English | MEDLINE | ID: mdl-34240647

ABSTRACT

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


Subjects
Illusions, Gestures, Hands, Humans, Sign Language, Speech
5.
Perception ; 49(9): 905-925, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33002391

ABSTRACT

Most objects have well-defined affordances. Investigating perception of the affordances of objects that were not created for a specific purpose would provide insight into how affordances are perceived. In addition, comparing perception of affordances for such objects across different exploratory modalities (visual vs. haptic) would offer a strong test of the lawfulness of information about affordances (i.e., the invariance of such information over transformation). Along these lines, "feelies" - objects created by Gibson with no obvious function and unlike any common object - could shed light on the processes underlying affordance perception. This study showed that when observers reported potential uses for feelies, modality significantly influenced what kind of affordances were perceived. Specifically, visual exploration resulted in more noun labels (e.g., "toy") than haptic exploration, which resulted in more verb labels (e.g., "throw"). These results suggest that overlapping but distinct classes of action possibilities are perceivable using vision and haptics. Semantic network analyses revealed that visual exploration resulted in object-oriented responses focused on object identification, whereas haptic exploration resulted in action-oriented responses. Cluster analyses confirmed these results. Affordance labels produced in the visual condition were more consistent, used fewer descriptors, and were less diverse, but more novel, than in the haptic condition.


Subjects
Motor Activity/physiology, Touch Perception/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Young Adult
6.
Psychol Res ; 84(4): 966-980, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30552506

ABSTRACT

Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects' weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same, but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants' conscious illusory perception of objects' weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects that were designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture or completed a control task. Afterwards, participants judged the heaviness of objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory and we found gesturing did not affect judgments. However, exploratory analyses showed reliable correlations between participants' heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings that are governed by the agent's conscious renderings about the actions they describe, rather than implicit motor routines.


Subjects
Gestures, Illusions/psychology, Weight Perception, Adolescent, Adult, Female, Humans, Judgment, Male, Memory, Problem Solving, Size Perception, Young Adult
7.
Psychol Res ; 84(2): 502-513, 2020 Mar.
Article in English | MEDLINE | ID: mdl-30066133

ABSTRACT

During silent problem solving, hand gestures arise that have no communicative intent. The role of such co-thought gestures in cognition has been understudied in cognitive research as compared to co-speech gestures. We investigated whether gesticulation during silent problem solving supported subsequent performance in a Tower of Hanoi problem-solving task, in relation to visual working-memory capacity and task complexity. Seventy-six participants were assigned to either an instructed gesture condition or a condition that allowed them to gesture, but without explicit instructions to do so. This resulted in three gesture groups: (1) non-gesturing; (2) spontaneous gesturing; (3) instructed gesturing. In line with the embedded/extended cognition perspective on gesture, gesturing benefited complex problem-solving performance for participants with a lower visual working-memory capacity, but not for participants with a lower spatial working-memory capacity.


Subjects
Gestures, Short-Term Memory, Problem Solving, Spatial Memory, Adolescent, Adult, Cognition, Female, Humans, Male, Young Adult
8.
J Acoust Soc Am ; 148(3): 1231, 2020 09.
Article in English | MEDLINE | ID: mdl-33003900

ABSTRACT

Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) mono-syllables while moving in particular phase relations with speech, or not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. This study further shows that gesture-induced moments of bodily impulses increase the amplitude envelope of speech, while not similarly affecting the Fundamental Frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed, even in the absence of movement, but even more so when upper-limb movement is present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanatory route of why communicative upper limb movements co-occur with speech in humans.


Subjects
Gestures, Speech, Humans, Phylogeny, Physics, Respiratory System
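The amplitude envelope that the study above reports being increased by bodily impulses can be approximated without specialized tooling. A minimal sketch, assuming a rectify-and-smooth envelope (a common stand-in for Hilbert-based envelopes; the abstract does not specify the authors' exact method, and all names and toy data here are hypothetical):

```python
import numpy as np

def amplitude_envelope(signal, sr, smooth_hz=8.0):
    """Smoothed amplitude envelope: rectify the signal, then apply a
    moving average with a window of 1/smooth_hz seconds."""
    win = max(1, int(sr / smooth_hz))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

# Toy vocalization: a 200 Hz carrier whose amplitude is briefly boosted,
# as a high-impulse arm movement might do via the respiratory system.
sr = 8000
t = np.arange(0, 1.0, 1 / sr)
boost = 1.0 + 0.5 * np.exp(-((t - 0.5) ** 2) / 0.002)  # impulse at t = 0.5 s
sig = boost * np.sin(2 * np.pi * 200 * t)
env = amplitude_envelope(sig, sr)
print(env.argmax() / sr)  # envelope peaks near t = 0.5 s, at the simulated impulse
```

Aligning such envelope peaks with motion-capture impulse timings is the kind of analysis the study describes; F0 would be estimated separately (e.g., by autocorrelation) and, per the findings, would not show the same modulation.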
9.
Behav Res Methods ; 52(2): 723-740, 2020 04.
Article in English | MEDLINE | ID: mdl-31659689

ABSTRACT

There is increasing evidence that hand gestures and speech synchronize their activity on multiple dimensions and timescales. For example, gesture's kinematic peaks (e.g., maximum speed) are coupled with prosodic markers in speech. Such coupling operates on very short timescales at the level of syllables (200 ms), and therefore requires high-resolution measurement of gesture kinematics and speech acoustics. High-resolution speech analysis is common for gesture studies, given that field's classic ties with (psycho)linguistics. However, the field has lagged behind in the objective study of gesture kinematics (e.g., as compared to research on instrumental action). Often kinematic peaks in gesture are measured by eye, where a "moment of maximum effort" is determined by several raters. In the present article, we provide a tutorial on more efficient methods to quantify the temporal properties of gesture kinematics, in which we focus on common challenges and possible solutions that come with the complexities of studying multimodal language. We further introduce and compare, using an actual gesture dataset (392 gesture events), the performance of two video-based motion-tracking methods (deep learning vs. pixel change) against a high-performance wired motion-tracking system (Polhemus Liberty). We show that the videography methods perform well in the temporal estimation of kinematic peaks, and thus provide a cheap alternative to expensive motion-tracking systems. We hope that the present article incites gesture researchers to embark on the widespread objective study of gesture kinematics and their relation to speech.


Subjects
Gestures, Speech, Language, Linguistics, Motion (Physics)
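A video-based pipeline of the kind compared in the tutorial above ultimately reduces to differentiating tracked positions and locating speed maxima (the "kinematic peaks"). A minimal sketch of that final step, with hypothetical names and toy data; real pipelines would also smooth the position signal before differentiating:

```python
import numpy as np

def speed_profile(positions, fps):
    """Frame-to-frame speed (units/s) from an (n, 2) array of x, y positions."""
    v = np.diff(positions, axis=0) * fps
    return np.linalg.norm(v, axis=1)

def kinematic_peaks(speed, min_height):
    """Indices of local speed maxima above min_height (candidate stroke peaks)."""
    s = speed
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > min_height)
    return np.where(is_peak)[0] + 1

# Toy trajectory: a hand at rest, one quick stroke around t = 0.5 s, then rest.
t = np.arange(0, 1, 1 / 100)                      # 100 fps, 1 s of video
x = np.exp(-((t - 0.5) ** 2) / 0.005).cumsum()    # burst of motion at t = 0.5 s
pos = np.stack([x, np.zeros_like(x)], axis=1)
speed = speed_profile(pos, fps=100)
peaks = kinematic_peaks(speed, min_height=10.0)
print(peaks)  # a single kinematic peak at frame 49 (~t = 0.5 s)
```

The timestamps of these peaks are what would then be compared against prosodic markers in the co-occurring speech acoustics.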
10.
Psychol Res ; 83(6): 1237-1250, 2019 Sep.
Article in English | MEDLINE | ID: mdl-29242975

ABSTRACT

Is visual reinterpretation of bistable figures (e.g., the duck/rabbit figure) possible in visual imagery? The current consensus suggests that it is in principle possible, given converging evidence of quasi-pictorial functioning of visual imagery. Yet studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with the aid of visual cues. Hence, reinterpretation was not performed with mental imagery alone. Therefore, in this study we assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented with three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180°), two of which were visually inspected and one explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 participants (out of 36; 30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not inflate reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.


Subjects
Cues, Imagination/physiology, Mental Recall/physiology, Visual Perception/physiology, Adolescent, Adult, Animals, Female, Humans, Male, Rabbits, Young Adult
12.
Proc Biol Sci ; 289(1979): 20221026, 2022 07 27.
Article in English | MEDLINE | ID: mdl-35855599

Subjects
Hearing, Voice, Perception
13.
Behav Brain Sci ; 40: e68, 2017 01.
Article in English | MEDLINE | ID: mdl-29342524

ABSTRACT

We observe a tension in the target article as it stresses an integrated gesture-speech system that can nevertheless consist of contradictory representational states, which are reflected by mismatches in gesture and speech or sign. Beyond problems of coherence, this prevents furthering our understanding of gesture-related learning. As a possible antidote, we invite a more dynamically embodied perspective to the stage.


Subjects
Gestures, Sign Language, Comprehension, Humans, Language, Speech
14.
Cogn Process ; 17(3): 269-77, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26993293

ABSTRACT

Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, less eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.


Subjects
Eye Movements/physiology, Gestures, Short-Term Memory/physiology, Visual Pattern Recognition/physiology, Problem Solving/physiology, Thinking/physiology, Adult, Cues, Female, Humans, Male, Middle Aged, Neuropsychological Tests, Photic Stimulation, Young Adult
15.
J Speech Lang Hear Res ; : 1-16, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38346144

ABSTRACT

PURPOSE: This study investigated whether temporal coupling was present between lower limb motion rate and different speech tempi during different exercise intensities. We hypothesized that increased physical workload would increase cycling rate and that this could account for previous findings of increased speech tempo during exercise. We also investigated whether the choice of speech task (read vs. spontaneous speech) affected results. METHOD: Forty-eight women who were ages 18-35 years participated. A within-participant design was used with fixed-order physical workload and counterbalanced speech task conditions. Motion capture and acoustic data were collected during exercise and at rest. Speech tempo was assessed using the amplitude envelope and two derived intrinsic mode functions that approximated syllable-like and footlike oscillations in the speech signal. Analyses were conducted with linear mixed-effects models. RESULTS: No direct entrainment between leg cycling rate and speech rate was observed. Leg cycling rate significantly increased from low to moderate workload for both speech tasks. All measures of speech tempo decreased when participants changed from rest to either low or moderate workload. CONCLUSIONS: Speech tempo does not show temporal coupling with the rate of self-generated leg motion at group level, which highlights the need to investigate potential faster scale momentary coupling. The unexpected finding that speech tempo decreases with increased physical workload may be explained by multiple mental and physical factors that are more diverse and individual than anticipated. The implication for real-world contexts is that even light physical activity-functionally equivalent to walking-may impact speech tempo.

16.
J Exp Psychol Gen ; 152(5): 1469-1483, 2023 May.
Article in English | MEDLINE | ID: mdl-37053397

ABSTRACT

Aphasia is a profound language pathology hampering speech production and/or comprehension. People with aphasia (PWA) use more manual gestures than non-brain-injured (NBI) individuals. This intuitively invokes the idea that gesture is compensatory in some way, but evidence for a gesture-boosting effect on speech processes is variable. The status quo in gesture research with PWA is an emphasis on categorical analysis of gesture types, focusing on how often they are recruited and whether more or less gesturing aids communication or speaking. However, there are increasingly loud calls for the investigation of gesture and speech as continuous, entangled modes of expression. In NBI adults, expressive moments of gesture and speech are synchronized at the prosodic level. How this multimodal prosody is instantiated in PWA has been neglected. In the current study, we perform the first acoustic-kinematic gesture-speech analysis in people with aphasia (i.e., Wernicke's, Broca's, anomic) relative to age-matched controls, applying several multimodal signal analysis methods. Specifically, we related speech peaks (smoothed amplitude-envelope change) to the nearest peaks in the gesture acceleration profile. We found that the magnitudes of gesture and speech peaks are positively related across the groups, though more variably for PWA, and that such coupling was related to less severe aphasia-related symptoms. No differences were found between controls and PWA in the temporal ordering of speech-envelope versus acceleration peaks. Finally, we show that both gesture and speech have a slower quasi-rhythmic structure, indicating that, next to speech, gesture is slowed down too. The current results indicate that there is a basic gesture-speech coupling mechanism that is not fully reliant on core linguistic competences, as it is found relatively intact in PWA. This resonates with a recent biomechanical theory of gesture, which renders gesture-vocal coupling as fundamental and a priori to the (evolutionary) development of core linguistic competences. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Aphasia, Gestures, Adult, Humans, Speech, Biomechanical Phenomena, Aphasia/diagnosis, Acoustics
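The peak-matching step described in the abstract above (relating each speech-envelope peak to the nearest peak in the gesture acceleration profile) can be sketched as follows; the function name and toy peak trains are hypothetical, not the authors' code or data:

```python
import numpy as np

def nearest_peak_offsets(speech_peak_times, accel_peak_times):
    """For each speech peak, the time (s) to the nearest gesture
    acceleration peak. Negative offsets mean the gesture peak leads."""
    accel = np.asarray(accel_peak_times)
    offsets = []
    for t in speech_peak_times:
        j = np.abs(accel - t).argmin()   # index of the nearest acceleration peak
        offsets.append(accel[j] - t)
    return np.array(offsets)

# Toy peak trains: gesture acceleration peaks leading speech peaks by ~50 ms.
speech = [0.50, 1.02, 1.48, 2.01]
accel = [0.45, 0.97, 1.43, 1.96, 2.60]
offsets = nearest_peak_offsets(speech, accel)
print(offsets.mean())  # about -0.05 s: gesture peaks lead the speech peaks
```

With matched peak pairs in hand, one can then correlate their magnitudes across utterances (the coupling measure reported above) or test the distribution of the temporal offsets.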
17.
Ann N Y Acad Sci ; 1515(1): 219-236, 2022 09.
Article in English | MEDLINE | ID: mdl-35730069

ABSTRACT

In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but it is an open question how these couple in music making. We examine this in a corpus of South Indian, Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration were coupling with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change versus amplitude, pointing to F0's musical significance. Acceleration was the most predictive for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive for vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production that are brought into harmony with the vocalists' (enculturated) performance goals. Gesture-vocal coupling should, therefore, be viewed as a neuro-bodily distributed aesthetic entanglement.


Subjects
Music, Singing, Esthetics, Gestures, Humans, Speech
18.
Cognition ; 222: 105015, 2022 05.
Article in English | MEDLINE | ID: mdl-35033863

ABSTRACT

Conversational turn taking in humans involves incredibly rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including questions such as who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1) rather than on the basis of two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, such as for example in richer communicative face-to-face contexts where visual signals affect conversational timing.


Subjects
Communication, Interpersonal Relations, Alpha-Ketoglutarate-Dependent Dioxygenase FTO, Humans, Telephone
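The lag-1 negative autocorrelation of floor transfer offsets (FTOs) reported above is straightforward to compute: correlate the FTO series with itself shifted by one turn. A minimal sketch with fabricated toy numbers (not the study's data):

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation: Pearson correlation of the series with
    itself shifted by one position (here, one conversational turn)."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Toy FTO series (ms) alternating long/short, as a dyadic counter-adjustment
# mechanism would produce: a long gap tends to be followed by a short one.
fto = np.array([300, 120, 280, 100, 310, 90, 260, 140, 290, 110], dtype=float)
r = lag1_autocorr(fto)
print(round(r, 2))  # strongly negative, consistent with turn-by-turn counter-adjustment
```

A lag-1 (rather than longer-lag) dependence is what distinguishes the dyad-level counter-adjustment account from slower, individual self-adjustment within speakers.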
19.
Neurosci Biobehav Rev ; 141: 104836, 2022 10.
Article in English | MEDLINE | ID: mdl-36031008

ABSTRACT

Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory-vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal-motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.


Subjects
Hominidae, Voice, Animals, Gestures, Humans, Phylogeny, Rats
20.
Sci Rep ; 12(1): 19111, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351949

ABSTRACT

How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair-a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.


Subjects
Gestures, Social Interaction, Humans, Speech, Language, Linguistics