Results 1 - 20 of 32
1.
PLoS Comput Biol ; 20(6): e1012222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38913743

ABSTRACT

Biological structures are defined by rigid elements, such as bones, and elastic elements, like muscles and membranes. Advances in computer vision have enabled automatic tracking of moving animal skeletal poses, providing insights into the complex time-varying dynamics of biological motion. In contrast, the elastic soft tissues of organisms, like the nose of elephant seals or the buccal sac of frogs, are poorly studied, and no computer vision methods have been proposed for them. This leaves major gaps in different areas of biology. In primatology, most critically, the function of air sacs is widely debated; many open questions about the role of air sacs in the evolution of animal communication, including human speech, remain unanswered. To support the dynamic study of soft-tissue structures, we present a toolkit for the automated tracking of semi-circular elastic structures in biological video data. The toolkit combines unsupervised computer vision (using the Hough transform) and supervised deep learning (by adapting DeepLabCut) to track the inflation of laryngeal air sacs or other spherical biological objects (e.g., gular cavities). Confirming the value of elastic kinematic analysis, we show that air sac inflation correlates with acoustic markers that likely inform about body size. Finally, we present a pre-processed audiovisual-kinematic dataset of more than seven hours of close-up audiovisual recordings of siamang (Symphalangus syndactylus) singing. This toolkit (https://github.com/WimPouw/AirSacTracker) aims to revitalize the study of non-skeletal morphological structures across multiple species.


Subjects
Air Sacs, Elasticity, Animals, Air Sacs/physiology, Air Sacs/anatomy & histology, Biomechanical Phenomena, Computational Biology/methods, Deep Learning, Video Recording/methods
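The unsupervised tracking described above builds on the circle Hough transform. Below is a minimal pure-Python sketch of the underlying voting scheme on synthetic edge pixels; the angular step, grid size, and radius set are illustrative assumptions, not the toolkit's actual parameters.

```python
import math
from collections import Counter

def hough_circles(edge_points, radii, grid=64):
    """Circle Hough transform: each edge pixel votes for every centre
    (cx, cy) that would place it on a circle of radius r; the
    best-supported (cx, cy, r) triple wins."""
    votes = Counter()
    for x, y in edge_points:
        for r in radii:
            for deg in range(0, 360, 10):  # coarse angular sweep
                cx = round(x - r * math.cos(math.radians(deg)))
                cy = round(y - r * math.sin(math.radians(deg)))
                if 0 <= cx < grid and 0 <= cy < grid:
                    votes[(cx, cy, r)] += 1
    return votes.most_common(1)[0]  # ((cx, cy, r), n_votes)

# Synthetic edge pixels of an "air sac" outline: centre (30, 30), radius 10
circle = [(round(30 + 10 * math.cos(t / 25)), round(30 + 10 * math.sin(t / 25)))
          for t in range(157)]
best, n_votes = hough_circles(circle, radii=[8, 10, 12])
print(best)  # the winning (cx, cy, r) should lie near (30, 30, 10)
```

Production tools would use an optimized implementation (e.g., OpenCV's gradient-based variant) rather than this brute-force accumulator, but the voting logic is the same.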
2.
J Speech Lang Hear Res ; : 1-16, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38346144

ABSTRACT

PURPOSE: This study investigated whether temporal coupling was present between lower limb motion rate and different speech tempi during different exercise intensities. We hypothesized that increased physical workload would increase cycling rate and that this could account for previous findings of increased speech tempo during exercise. We also investigated whether the choice of speech task (read vs. spontaneous speech) affected results. METHOD: Forty-eight women aged 18-35 years participated. A within-participant design was used with fixed-order physical workload and counterbalanced speech task conditions. Motion capture and acoustic data were collected during exercise and at rest. Speech tempo was assessed using the amplitude envelope and two derived intrinsic mode functions that approximated syllable-like and foot-like oscillations in the speech signal. Analyses were conducted with linear mixed-effects models. RESULTS: No direct entrainment between leg cycling rate and speech rate was observed. Leg cycling rate significantly increased from low to moderate workload for both speech tasks. All measures of speech tempo decreased when participants changed from rest to either low or moderate workload. CONCLUSIONS: Speech tempo does not show temporal coupling with the rate of self-generated leg motion at the group level, which highlights the need to investigate potential faster-timescale momentary coupling. The unexpected finding that speech tempo decreases with increased physical workload may be explained by multiple mental and physical factors that are more diverse and individual than anticipated. The implication for real-world contexts is that even light physical activity, functionally equivalent to walking, may affect speech tempo.

3.
J Exp Psychol Gen ; 152(5): 1469-1483, 2023 May.
Article in English | MEDLINE | ID: mdl-37053397

ABSTRACT

Aphasia is a profound language pathology hampering speech production and/or comprehension. People with aphasia (PWA) use more manual gestures than non-brain-injured (NBI) individuals. This intuitively invokes the idea that gesture is compensatory in some way, but evidence for a gesture-boosting effect on speech processes is variable. The status quo in gesture research with PWA is an emphasis on categorical analysis of gesture types, focusing on how often they are recruited and whether more or less gesturing aids communication or speaking. However, there are increasingly loud calls for investigating gesture and speech as continuous, entangled modes of expression. In NBI adults, expressive moments of gesture and speech are synchronized at the prosodic level; how this multimodal prosody is instantiated in PWA has been neglected. In the current study, we perform the first acoustic-kinematic gesture-speech analysis in PWA (i.e., Wernicke's, Broca's, and anomic aphasia) relative to age-matched controls, applying several multimodal signal analysis methods. Specifically, we related the speech peaks (smoothed amplitude envelope change) to the nearest peaks in the gesture acceleration profile. We found that the magnitudes of gesture and speech peaks are positively related across the groups, though more variably for PWA, and that such coupling was related to less severe aphasia-related symptoms. No differences were found between controls and PWA in the temporal ordering of speech envelope versus acceleration peaks. Finally, we show that both gesture and speech have a slower quasi-rhythmic structure, indicating that, alongside speech, gesture is slowed down too. The current results indicate that there is a basic gesture-speech coupling mechanism that is not fully reliant on core linguistic competences, as it is found relatively intact in PWA. This resonates with a recent biomechanical theory of gesture, which renders gesture-vocal coupling as fundamental and prior to the (evolutionary) development of core linguistic competences. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Aphasia, Gestures, Adult, Humans, Speech, Biomechanical Phenomena, Aphasia/diagnosis, Acoustics
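The nearest-peak matching described above, relating smoothed-envelope peaks to the closest peaks in the gesture acceleration profile, can be sketched as follows. The toy series, peak definition, and hand-rolled Pearson correlation are illustrative assumptions, not the study's actual pipeline.

```python
from statistics import mean

def local_peaks(series):
    """Indices of strict local maxima."""
    return [i for i in range(1, len(series) - 1)
            if series[i] > series[i - 1] and series[i] > series[i + 1]]

def couple_peaks(speech_env, accel):
    """Pair each speech-envelope peak with the nearest acceleration peak;
    return signed sample offsets (accel - speech) and the matched
    (speech magnitude, accel magnitude) pairs."""
    sp, ap = local_peaks(speech_env), local_peaks(accel)
    offsets, pairs = [], []
    for i in sp:
        j = min(ap, key=lambda k: abs(k - i))
        offsets.append(j - i)
        pairs.append((speech_env[i], accel[j]))
    return offsets, pairs

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Toy series: acceleration peaks trail speech peaks by one sample,
# with proportional magnitudes
speech = [0, 0, 1, 0, 0, 3, 0, 0, 2, 0, 0]
accel  = [0, 0, 0, 2, 0, 0, 6, 0, 0, 4, 0]
offsets, pairs = couple_peaks(speech, accel)
print(offsets, round(pearson(*zip(*pairs)), 2))  # → [1, 1, 1] 1.0
```

On real data the offsets would be tested for group differences in temporal ordering, and the magnitude pairs for the positive coupling reported above.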
4.
Neuroimage ; 264: 119734, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36343884

ABSTRACT

We present a dataset of behavioural and fMRI observations acquired in the context of humans involved in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants' representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles) yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants' individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 min total). We present analyses for the various types of data demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond.


Subjects
Linguistics, Speech, Humans, Speech/physiology, Communication, Language, Magnetic Resonance Imaging
5.
Sci Rep ; 12(1): 19111, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351949

ABSTRACT

How does communicative efficiency shape language use? We approach this question at the level of the dyad and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair: a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. They extend our understanding of these coefficiency principles by revealing that they pertain to multimodal utterance design.


Subjects
Gestures, Social Interaction, Humans, Speech, Language, Linguistics
6.
Neurosci Biobehav Rev ; 141: 104836, 2022 10.
Article in English | MEDLINE | ID: mdl-36031008

ABSTRACT

Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory-vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal-motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals. They can frequently be found in rats, bats, birds, and a range of other species that developed even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.


Subjects
Hominidae, Voice, Animals, Gestures, Humans, Phylogeny, Rats
7.
Sci Rep ; 12(1): 14775, 2022 08 30.
Article in English | MEDLINE | ID: mdl-36042321

ABSTRACT

Do communicative actions such as gestures fundamentally differ in their control mechanisms from other actions? Evidence for such fundamental differences comes from a classic gesture-speech coordination experiment performed with a person (IW) with deafferentation (McNeill, 2005). Although IW has lost both his primary source of information about body position (i.e., proprioception) and discriminative touch from the neck down, his gesture-speech coordination has been reported to be largely unaffected, even if his vision is blocked. This is surprising because, without vision, his object-directed actions almost completely break down. We examine the hypothesis that IW's gesture-speech coordination is supported by the biomechanical effects of gesturing on head posture and speech. We find that when vision is blocked, there are micro-scale increases in gesture-speech timing variability, consistent with IW's reported experience that gesturing is difficult without vision. Supporting the hypothesis that IW exploits biomechanical consequences of the act of gesturing, we find that: (1) gestures with larger physical impulses co-occur with greater head movement, (2) gesture-speech synchrony relates to larger gesture-concurrent head movements (i.e. for bimanual gestures), (3) when vision is blocked, gestures generate more physical impulse, and (4) moments of acoustic prominence couple more with peaks of physical impulse when vision is blocked. It can be concluded that IW's gesturing ability is not based on a specialized language-based feedforward control as originally concluded from previous research, but is still dependent on a varied means of recurrent feedback from the body.


Subjects
Gestures, Speech, Biomechanical Phenomena/physiology, Sensory Feedback, Humans, Posture, Speech/physiology
8.
Proc Biol Sci ; 289(1979): 20221026, 2022 07 27.
Article in English | MEDLINE | ID: mdl-35855599

Subjects
Hearing, Voice, Perception
9.
Ann N Y Acad Sci ; 1515(1): 219-236, 2022 09.
Article in English | MEDLINE | ID: mdl-35730069

ABSTRACT

In many musical styles, vocalists manually gesture while they sing. Coupling between gesture kinematics and vocalization has been examined in speech contexts, but how these couple in music making is an open question. We examine this in a corpus of South Indian Karnatak vocal music that includes motion-capture data. Through peak magnitude analysis (linear mixed regression) and continuous time-series analyses (generalized additive modeling), we assessed whether vocal trajectories around peaks in vertical velocity, speed, or acceleration coupled with changes in vocal acoustics (namely, F0 and amplitude). Kinematic coupling was stronger for F0 change than for amplitude, pointing to F0's musical significance. Acceleration was the most predictive kinematic for F0 change and had the most reliable magnitude coupling, showing a one-third power relation. That acceleration, rather than other kinematics, is maximally predictive of vocalization is interesting because acceleration entails force transfers onto the body. As a theoretical contribution, we argue that gesturing in musical contexts should be understood in relation to the physical connections between gesturing and vocal production, which are brought into harmony with the vocalist's (enculturated) performance goals. Gesture-vocal coupling should, therefore, be viewed as a neuro-bodily distributed aesthetic entanglement.


Subjects
Music, Singing, Esthetics, Gestures, Humans, Speech
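A one-third power relation of the kind reported above is conventionally recovered by regressing log-transformed peak values against each other. A minimal sketch on synthetic data; the values are illustrative, not the corpus measurements.

```python
import math
from statistics import mean

def power_law_exponent(xs, ys):
    """Estimate b in y ~ x**b by ordinary least squares on log-log data."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx, my = mean(lx), mean(ly)
    num = sum((a - mx) * (c - my) for a, c in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic peak magnitudes obeying an exact one-third power law
accel_peaks = [1.0, 2.0, 4.0, 8.0, 16.0]
f0_change = [a ** (1 / 3) for a in accel_peaks]
print(round(power_law_exponent(accel_peaks, f0_change), 3))  # → 0.333
```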
10.
Cognition ; 222: 105015, 2022 05.
Article in English | MEDLINE | ID: mdl-35033863

ABSTRACT

Conversational turn taking in humans involves remarkably rapid responding. The timing mechanisms underpinning such responses have been heavily debated, including the question of who is doing the timing. Similar to findings on rhythmic tapping to a metronome, we show that floor transfer offsets (FTOs) in telephone conversations are serially dependent, such that FTOs are lag-1 negatively autocorrelated. Finding this serial dependence on a turn-by-turn basis (lag-1), rather than across two or more turns, suggests a counter-adjustment mechanism operating at the level of the dyad in FTOs during telephone conversations, rather than a more individualistic self-adjustment within speakers. This finding, if replicated, has major implications for models describing turn taking, and confirms the joint, dyadic nature of human conversational dynamics. Future research is needed to see how pervasive serial dependencies in FTOs are, for example in richer face-to-face contexts where visual signals affect conversational timing.


Subjects
Communication, Interpersonal Relations, Alpha-Ketoglutarate-Dependent Dioxygenase FTO, Humans, Telephone
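The lag-1 negative autocorrelation reported above can be computed directly from a sequence of offsets. A minimal sketch on invented FTO values; the counter-adjustment pattern (each long gap followed by a short one) is simulated, not real data.

```python
from statistics import mean

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: covariance of the series with itself
    shifted by one turn, normalized by the variance."""
    m = mean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# Invented FTOs (ms): long gaps counter-adjusted by short ones
ftos = [300, -100, 280, -80, 320, -120, 290, -90]
print(lag1_autocorr(ftos) < 0)  # → True; alternation yields a negative lag-1 value
```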
11.
Philos Trans R Soc Lond B Biol Sci ; 376(1835): 20200334, 2021 10 11.
Article in English | MEDLINE | ID: mdl-34420378

ABSTRACT

It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.


Subjects
Animal Communication, Communication, Periodicity, Primates/psychology, Animals, Humans
12.
Psychol Sci ; 32(8): 1227-1237, 2021 08.
Article in English | MEDLINE | ID: mdl-34240647

ABSTRACT

When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.


Subjects
Illusions, Gestures, Hands, Humans, Sign Language, Speech
13.
Cogn Sci ; 45(7): e13014, 2021 07.
Article in English | MEDLINE | ID: mdl-34288069

ABSTRACT

Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of their referential content. The relation of categorical linguistic content to continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detected the systematicity of gesture form at the level of gesture kinematic interrelations, which directly scales with the systematicity obtained from semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged from other chains over iterations. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.


Subjects
Gestures, Language, Biomechanical Phenomena, Humans, Language Development, Linguistics
14.
Ann N Y Acad Sci ; 1491(1): 89-105, 2021 05.
Article in English | MEDLINE | ID: mdl-33336809

ABSTRACT

It is commonly understood that hand gesture and speech coordination in humans is culturally and cognitively acquired, rather than having a biological basis. Recently, however, the biomechanical physical coupling of arm movements to speech vocalization has been studied in steady-state vocalization and monosyllabic utterances, where forces produced during gesturing are transferred onto the tensioned body, leading to changes in respiratory-related activity and thereby affecting vocalization F0 and intensity. In the current experiment (n = 37), we extend this previous line of work to show that gesture-speech physics also impacts fluent speech. Compared with not moving, participants producing fluent self-formulated speech while rhythmically moving their limbs show heightened F0 and amplitude envelope, and such effects are more pronounced for higher-impulse arm movements than for lower-impulse wrist movements. We replicate that acoustic peaks arise especially during moments of peak impulse (i.e., the beat) of the movement, namely around its deceleration phases. Finally, higher deceleration rates of higher-mass arm movements were related to higher acoustic peaks. These results confirm a role for the physical impulses of gesture in affecting the speech system. We discuss the implications of gesture-speech physics for understanding the emergence of communicative gesture, both ontogenetically and phylogenetically.


Subjects
Gestures, Movement/physiology, Speech/physiology, Adolescent, Biomechanical Phenomena, Female, Humans, Male, Motion Perception/physiology, Speech Acoustics, Young Adult
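Deceleration rates like those analyzed above are typically derived from motion-capture position data by numerical differentiation. A minimal sketch using central differences on a toy trajectory; the sampling rate, amplitude, and single-sinusoid "beat" are invented for illustration.

```python
import math

def central_diff(xs, dt):
    """Central-difference derivative of a uniformly sampled series."""
    return [(xs[i + 1] - xs[i - 1]) / (2 * dt) for i in range(1, len(xs) - 1)]

# Toy wrist trajectory: one 1 Hz beat cycle (amplitude 0.1 m) sampled at 100 Hz
dt = 0.01
pos = [0.1 * math.sin(2 * math.pi * t * dt) for t in range(101)]
vel = central_diff(pos, dt)   # peak speed near 0.1 * 2*pi m/s
acc = central_diff(vel, dt)   # peak magnitude near 0.1 * (2*pi)**2 m/s^2
peak_acc = max(abs(a) for a in acc)
print(round(peak_acc, 2))  # close to the analytic 0.1 * (2*pi)**2 ≈ 3.95
```

The peaks of |acc| correspond to the deceleration moments that, per the abstract, pattern with acoustic peaks.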
15.
Perception ; 49(9): 905-925, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33002391

ABSTRACT

Most objects have well-defined affordances. Investigating perception of the affordances of objects that were not created for a specific purpose would provide insight into how affordances are perceived. In addition, comparing perception of affordances for such objects across different exploratory modalities (visual vs. haptic) would offer a strong test of the lawfulness of information about affordances (i.e., the invariance of such information over transformation). Along these lines, "feelies," objects created by Gibson with no obvious function and unlike any common object, could shed light on the processes underlying affordance perception. This study showed that when observers reported potential uses for feelies, modality significantly influenced what kinds of affordances were perceived. Specifically, visual exploration resulted in more noun labels (e.g., "toy") than haptic exploration, which resulted in more verb labels (e.g., "throw"). These results suggest that overlapping but distinct classes of action possibilities are perceivable using vision and haptics. Semantic network analyses revealed that visual exploration resulted in object-oriented responses focused on object identification, whereas haptic exploration resulted in action-oriented responses. Cluster analyses confirmed these results. Affordance labels produced in the visual condition were more consistent, used fewer descriptors, and were less diverse, but more novel, than those in the haptic condition.


Subjects
Motor Activity/physiology, Touch Perception/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Young Adult
16.
J Acoust Soc Am ; 148(3): 1231, 2020 09.
Article in English | MEDLINE | ID: mdl-33003900

ABSTRACT

Expressive moments in communicative hand gestures often align with emphatic stress in speech. It has recently been found that acoustic markers of emphatic stress arise naturally during steady-state phonation when upper-limb movements impart physical impulses on the body, most likely affecting acoustics via respiratory activity. In this confirmatory study, participants (N = 29) repeatedly uttered consonant-vowel (/pa/) monosyllables while moving in particular phase relations with speech, or while not moving the upper limbs. This study shows that respiration-related activity is affected by (especially high-impulse) gesturing when vocalizations occur near peaks in physical impulse. It further shows that gesture-induced moments of bodily impulse increase the amplitude envelope of speech, while not similarly affecting the fundamental frequency (F0). Finally, tight relations between respiration-related activity and vocalization were observed even in the absence of movement, but more so when upper-limb movement was present. The current findings expand a developing line of research showing that speech is modulated by functional biomechanical linkages between hand gestures and the respiratory system. This identification of gesture-speech biomechanics promises to provide an alternative phylogenetic, ontogenetic, and mechanistic explanation of why communicative upper-limb movements co-occur with speech in humans.


Subjects
Gestures, Speech, Humans, Phylogeny, Physics, Respiratory System
18.
Cogn Sci ; 44(9): e12889, 2020 09.
Article in English | MEDLINE | ID: mdl-32893407

ABSTRACT

Speakers often use gesture to demonstrate how to perform actions: for example, they might show how to open the top of a jar by making a twisting motion above the jar. Yet it is unclear whether listeners learn as much from seeing such gestures as they learn from seeing actions that physically change the position of objects (i.e., actually opening the jar). Here, we examined participants' implicit and explicit understanding of a series of movements that demonstrated how to move a set of objects. The movements were shown either with actions that physically relocated each object or with gestures that represented the relocation without touching the objects. Further, the end location indicated for each object covaried with whether the object was grasped with one or two hands. We found that memory for the end location of each object was better after seeing the physical relocation of the objects, that is, after seeing action, than after seeing gesture, regardless of whether speech was absent (Experiment 1) or present (Experiment 2). However, gesture and action built similar implicit understanding of how a particular hand grasp corresponded with a particular end location. Although gestures miss the benefit of showing the end state of objects that have been acted upon, the data show that gestures are as good as action in building knowledge of how to perform an action.


Subjects
Gestures, Comprehension, Humans, Memory, Movement, Speech
19.
Proc Natl Acad Sci U S A ; 117(21): 11364-11367, 2020 05 26.
Article in English | MEDLINE | ID: mdl-32393618

ABSTRACT

We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear and not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory-vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another's dynamic physical states.


Subjects
Upper Extremity/physiology, Voice/physiology, Adult, Auditory Perception, Female, Hearing/physiology, Humans, Male, Motor Activity/physiology, Nontherapeutic Human Experimentation, Wrist/physiology
20.
Psychol Res ; 84(4): 966-980, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30552506

ABSTRACT

Co-speech gestures have been proposed to strengthen sensorimotor knowledge related to objects' weight and manipulability. This pre-registered study (https://www.osf.io/9uh6q/) was designed to explore how gestures affect memory for sensorimotor information through the application of the visual-haptic size-weight illusion (i.e., objects weigh the same but are experienced as different in weight). With this paradigm, a discrepancy can be induced between participants' conscious illusory perception of objects' weight and their implicit sensorimotor knowledge (i.e., veridical motor coordination). Depending on whether gestures reflect and strengthen either of these types of knowledge, gestures may respectively decrease or increase the magnitude of the size-weight illusion. Participants (N = 159) practiced a problem-solving task with small and large objects designed to induce a size-weight illusion, and then explained the task with or without co-speech gesture, or completed a control task. Afterwards, participants judged the heaviness of the objects from memory and then while holding them. Confirmatory analyses revealed an inverted size-weight illusion based on heaviness judgments from memory, and gesturing did not affect these judgments. However, exploratory analyses showed reliable correlations between participants' heaviness judgments from memory and (a) the number of gestures produced that simulated actions, and (b) the kinematics of the lifting phases of those gestures. These findings suggest that gestures emerge as sensorimotor imaginings governed by the agent's conscious renderings of the actions they describe, rather than by implicit motor routines.


Subjects
Gestures, Illusions/psychology, Weight Perception, Adolescent, Adult, Female, Humans, Judgment, Male, Memory, Problem Solving, Size Perception, Young Adult