Results 1 - 20 of 65
1.
Top Cogn Sci ; 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38493475

ABSTRACT

Language is inherently multimodal. In spoken languages, combined spoken and visual signals (e.g., co-speech gestures) are an integral part of linguistic structure and language representation. This requires an extension of the parallel architecture to include the visual signals that accompany speech. We present evidence for the multimodality of language. In addition, we propose that distributional semantics might provide a format for integrating speech and co-speech gestures in a common semantic representation.
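
As a rough illustration of the proposed format, the sketch below builds toy distributional-semantic vectors from co-occurrence counts and compares them with cosine similarity. The toy corpus, window size, and the idea of projecting a gesture label into the same space are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a distributional-semantics representation: words (or gesture
# labels) are represented as co-occurrence vectors over a shared context
# vocabulary, and semantic relatedness is read off as cosine similarity.
# The toy corpus and the gesture mapping idea are hypothetical illustrations only.
from collections import Counter
import numpy as np

corpus = [
    ["she", "throws", "the", "ball"],
    ["he", "throws", "a", "stone"],
    ["she", "kicks", "the", "ball"],
]

vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}

def cooccurrence_vector(target, window=2):
    """Count context words within +/- `window` positions of `target`."""
    counts = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == target:
                for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                    if j != i:
                        counts[sent[j]] += 1
    vec = np.zeros(len(vocab))
    for w, c in counts.items():
        vec[index[w]] = c
    return vec

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A "throwing" gesture could in principle be mapped into the same vector space
# (e.g., via the words it reliably co-occurs with) and compared to speech items:
print(cosine(cooccurrence_vector("throws"), cooccurrence_vector("kicks")))
```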

2.
J Exp Psychol Gen ; 152(9): 2623-2635, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37093667

ABSTRACT

Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying "The candle is here" and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners' comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked whether (a) listeners gazed at gestures more when the gestures complemented demonstratives in speech ("here") than when they expressed information redundant with the speech (e.g., "right"), and (b) gazing at gestures related to listeners' information uptake from those gestures. We demonstrated that listeners fixated gestures more when the gestures expressed complementary rather than redundant information relative to the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners' comprehension. These results suggest that the heightened communicative value of gestures, as signaled by external cues such as demonstratives, guides listeners' visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Attention , Comprehension , Cues , Gestures , Speech , Adolescent , Adult , Female , Humans , Male , Young Adult , Acoustic Stimulation , Photic Stimulation , Turkey , Saccades
3.
Cogn Sci ; 47(1): e13228, 2023 01.
Article in English | MEDLINE | ID: mdl-36607157

ABSTRACT

The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events, which all participants experienced auditorily (i.e., via sound alone) and conveyed in speech and gesture. Comparing blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime of blindness from the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion, using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people did. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for claims that language processes are deeply rooted in our sensory experiences.


Subject(s)
Blindness , Language , Humans , Speech , Motion
4.
Mem Cognit ; 51(3): 582-600, 2023 04.
Article in English | MEDLINE | ID: mdl-35301680

ABSTRACT

Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and that spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask whether late sign language exposure, as well as the frequency and type of spatial language use that might be affected by late exposure, modulates subsequent memory for spatial relations. To do so, we compared the spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured subsequent recognition memory accuracy for the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. However, late-signing adults, but not children, differed from their native-signing counterparts in the type of spatial language they used. Nevertheless, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language used, it does not predict subsequent memory for spatial relations. We discuss the implications of these findings for theories concerning the correspondence between spatial language and cognition as related or rather independent systems.


Subject(s)
Sign Language , Spatial Memory , Adult , Humans , Child , Language , Cognition , Hearing
5.
Front Psychol ; 14: 1305562, 2023.
Article in English | MEDLINE | ID: mdl-38303780

ABSTRACT

The present study investigated to what extent children, compared to adults, benefit from gestures in disambiguating degraded speech, by manipulating the speech signal and the manual modality. Dutch-speaking adults (N = 20) and 6- and 7-year-old children (N = 15) were presented with a series of video clips in which an actor produced a Dutch action verb with or without an accompanying iconic gesture. Participants were then asked to repeat what they had heard. The speech signal was either clear or altered into 4- or 8-band noise-vocoded speech. Children had more difficulty than adults in disambiguating degraded speech in the speech-only condition. However, when presented with both speech and gestures, children reached a level of accuracy comparable to that of adults in the degraded-speech-only condition. Furthermore, for adults, the gesture benefit was greater in the 4-band condition than in the 8-band condition, whereas children showed the opposite pattern. Gestures thus help children disambiguate degraded speech, but children need more phonological information than adults to benefit from them. Children's multimodal language integration needs to develop further to adapt flexibly to challenging situations such as degraded speech, as tested in our study, or instances where speech is heard with environmental noise or through a face mask.
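
For readers unfamiliar with the degradation used here, the following is a minimal sketch of n-band noise vocoding in the general style of such manipulations: the speech is split into frequency bands, each band's slow amplitude envelope is extracted, and the envelopes modulate band-limited noise. The band edges, envelope cutoff, and filter settings are assumptions; the study's exact stimulus pipeline is not described in the abstract.

```python
# Minimal sketch of n-band noise vocoding (the kind of degraded-speech
# manipulation described above): split speech into frequency bands, extract
# each band's amplitude envelope, and use it to modulate band-limited noise.
# Band edges, cutoff values, and filter orders are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(signal, low, high, fs, order=4):
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, signal)

def lowpass(signal, cutoff, fs, order=4):
    b, a = butter(order, cutoff, btype="low", fs=fs)
    return filtfilt(b, a, signal)

def noise_vocode(speech, fs, n_bands=4, f_min=100.0, f_max=8000.0):
    """Return an n-band noise-vocoded version of `speech` (1-D float array)."""
    edges = np.geomspace(f_min, f_max, n_bands + 1)    # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for low, high in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, low, high, fs)
        envelope = lowpass(np.abs(band), 30.0, fs)     # slow amplitude envelope
        carrier = bandpass(rng.standard_normal(len(speech)), low, high, fs)
        out += envelope * carrier                      # envelope-modulated noise
    return out / np.max(np.abs(out))                   # normalize peak amplitude

# Example usage (waveform is a 1-D numpy array sampled at 44.1 kHz):
# vocoded = noise_vocode(waveform, fs=44100, n_bands=8)
```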

6.
J Child Lang ; : 1-27, 2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36510476

ABSTRACT

Expressing Left-Right relations is challenging for speaking children, yet this challenge appears to be absent for signing children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking children's co-speech gestures are considered. Eight-year-old children and adults who were hearing monolingual Turkish speakers or deaf signers of Turkish Sign Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers' spatial expressions compared to speech alone, and this pattern was more prominent for children than adults. However, signing adults and children were more informative than speaking adults and children even when co-speech gestures were considered. Thus, both speaking and signing children benefit from iconic expressions in the visual modality. Finally, in each modality, children were less informative than adults, pointing to the developmental challenge posed by this spatial domain.

7.
Neuroimage ; 264: 119734, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36343884

ABSTRACT

We present a dataset of behavioural and fMRI observations acquired from pairs of humans engaged in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants' representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles), yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants' individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 min total). We present analyses of the various types of data, demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond.


Subject(s)
Linguistics , Speech , Humans , Speech/physiology , Communication , Language , Magnetic Resonance Imaging
8.
Sci Rep ; 12(1): 19111, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36351949

ABSTRACT

How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair, a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.


Subject(s)
Gestures , Social Interaction , Humans , Speech , Language , Linguistics
9.
Acta Psychol (Amst) ; 229: 103690, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35961184

ABSTRACT

Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D-models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and crucially also older adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.


Subject(s)
Memory, Short-Term , Semantics , Aged , Aging , Gestures , Humans , Individuality , Speech
10.
Cognition ; 225: 105127, 2022 08.
Article in English | MEDLINE | ID: mdl-35617850

ABSTRACT

Speakers' visual attention to events is guided by linguistic conceptualization of information during spoken language production, and it is guided in language-specific ways. Does production of language-specific co-speech gestures further guide speakers' visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language in which speech and gesture show language specificity: path of motion is mostly expressed within the main verb, accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task than in the non-linguistic task. Furthermore, within the linguistic task, the relative attention allocated to path over manner was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. These results strongly suggest that speakers' visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may extend to the eye and the hand.


Subject(s)
Concept Formation , Gestures , Adult , Eye Movements , Humans , Perception , Speech
11.
Cogn Sci ; 46(5): e13133, 2022 05.
Article in English | MEDLINE | ID: mdl-35613353

ABSTRACT

Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.


Subject(s)
Gestures , Sign Language , Humans , Language , Language Development , Linguistics
12.
Soc Cogn Affect Neurosci ; 17(11): 1021-1034, 2022 11 02.
Article in English | MEDLINE | ID: mdl-35428885

ABSTRACT

Persons with and without autism process sensory information differently. Differences in sensory processing are directly relevant to social functioning and communicative abilities, which are known to be hampered in persons with autism. We collected functional magnetic resonance imaging data from 25 autistic individuals and 25 neurotypical individuals while they performed a silent gesture recognition task. We exploited brain network topology, a holistic quantification of how networks within the brain are organized to provide new insights into how visual communicative signals are processed in autistic and neurotypical individuals. Performing graph theoretical analysis, we calculated two network properties of the action observation network: 'local efficiency', as a measure of network segregation, and 'global efficiency', as a measure of network integration. We found that persons with autism and neurotypical persons differ in how the action observation network is organized. Persons with autism utilize a more clustered, local-processing-oriented network configuration (i.e. higher local efficiency) rather than the more integrative network organization seen in neurotypicals (i.e. higher global efficiency). These results shed new light on the complex interplay between social and sensory processing in autism.
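
The two network measures named above have standard graph-theoretic definitions, sketched below on a toy network (networkx implements both). The random connectivity matrix and the binarization threshold are purely illustrative assumptions, not the study's preprocessing pipeline.

```python
# Minimal sketch of global and local efficiency on a toy binary network.
# In practice the action observation network would come from thresholding an
# fMRI connectivity matrix; the random matrix and the 0.6 threshold here are
# illustrative assumptions only.
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
n_regions = 20
connectivity = np.abs(rng.standard_normal((n_regions, n_regions)))
connectivity = (connectivity + connectivity.T) / 2   # symmetric, correlation-like matrix
np.fill_diagonal(connectivity, 0)

adjacency = connectivity > 0.6                       # binarize at an arbitrary threshold
G = nx.from_numpy_array(adjacency.astype(int))

# Network integration: average inverse shortest-path length over all node pairs.
print("global efficiency:", nx.global_efficiency(G))
# Network segregation: mean global efficiency of each node's neighbourhood subgraph.
print("local efficiency:", nx.local_efficiency(G))
```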


Subject(s)
Autistic Disorder , Humans , Autistic Disorder/pathology , Gestures , Brain , Brain Mapping , Magnetic Resonance Imaging/methods
13.
Autism Res ; 14(12): 2640-2653, 2021 12.
Article in English | MEDLINE | ID: mdl-34536063

ABSTRACT

In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) whether autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device to determine whether autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they processed repetition and complexity, measured as the number of submovements perceived, differently than neurotypical individuals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals. LAY SUMMARY: Hand gestures are an important part of how we communicate, and the way that we move when gesturing can influence how easy a gesture is to understand. We studied how autistic and typical individuals produce and recognize hand gestures, and how this relates to movement characteristics. We found that autistic individuals moved differently when gesturing compared to typical individuals. In addition, while autistic individuals were not worse at recognizing gestures, they differed from typical individuals in how they interpreted certain movement characteristics.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Biomechanical Phenomena , Gestures , Humans , Perception
14.
J Cogn ; 4(1): 42, 2021.
Article in English | MEDLINE | ID: mdl-34514313

ABSTRACT

Language in its primary face-to-face context is multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). Thus, understanding how expressions in the vocal and visual modalities together contribute to our notions of language structure, use, processing, and transmission (i.e., acquisition, evolution, emergence) in different languages and cultures should be a fundamental goal of the language sciences. This requires a new framework of language that brings together how arbitrary and non-arbitrary (motivated) semiotic resources of language relate to each other. The current commentary evaluates such a proposal by Murgiano et al. (2021) from a crosslinguistic perspective, taking both variation and systematicity in multimodal utterances into account.

15.
Sci Rep ; 11(1): 16721, 2021 08 18.
Article in English | MEDLINE | ID: mdl-34408178

ABSTRACT

In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.

16.
Cogn Sci ; 45(7): e13014, 2021 07.
Article in English | MEDLINE | ID: mdl-34288069

ABSTRACT

Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content to continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detected the systematicity of gesture form at the level of gesture kinematic interrelations, which directly scales with the systematicity obtained from semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged from other chains over iterations. We thereby conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content.
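
As an illustration of the kind of kinematic quantification described, the sketch below derives speed, a submovement count, and hold frames from a tracked keypoint trajectory. The synthetic trajectory, frame rate, and peak/hold thresholds are assumptions; the study's actual computer-vision pipeline may differ.

```python
# Minimal sketch of simple kinematic measures from a tracked wrist trajectory,
# in the spirit of the quantification described above (not the study's pipeline).
# The trajectory is synthetic; fps and the thresholds are assumptions.
import numpy as np
from scipy.signal import find_peaks

fps = 25

def stroke(start, end, n):
    """Smooth positional profile from start to end (bell-shaped velocity)."""
    return start + (end - start) * 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))

# Synthetic 2-D wrist positions: two strokes separated by a brief hold.
x = np.concatenate([stroke(0.0, 1.0, 40), np.full(10, 1.0), stroke(1.0, 0.2, 30)])
y = np.concatenate([stroke(0.0, 0.5, 40), np.full(10, 0.5), stroke(0.5, 0.0, 30)])
positions = np.stack([x, y], axis=1)

velocity = np.gradient(positions, 1 / fps, axis=0)   # per-frame velocity (units/s)
speed = np.linalg.norm(velocity, axis=1)

# Submovements: distinct peaks in the speed profile.
peaks, _ = find_peaks(speed, height=0.2, distance=5)
# Holds: stretches where the hand is nearly still.
holds = speed < 0.05

print("number of submovements:", len(peaks))
print("proportion of frames in hold:", holds.mean())
print("peak speed:", speed.max())
```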


Subject(s)
Gestures , Language , Biomechanical Phenomena , Humans , Language Development , Linguistics
17.
Psychol Sci ; 32(3): 424-436, 2021 03.
Article in English | MEDLINE | ID: mdl-33621474

ABSTRACT

Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals' expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals' speech and signs are shaped by two languages from different modalities.


Subject(s)
Multilingualism , Speech , Humans , Language , Linguistics , Sign Language
18.
Psychol Res ; 85(5): 1997-2011, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32627053

ABSTRACT

When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker's mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults' comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.


Subject(s)
Aging/psychology , Lipreading , Memory, Short-Term , Nonverbal Communication/psychology , Sign Language , Speech Perception , Age Factors , Aged , Comprehension , Gestures , Humans , Noise , Signal Detection, Psychological , Visual Perception , Young Adult
19.
Cogn Sci ; 44(11): e12911, 2020 11.
Article in English | MEDLINE | ID: mdl-33124090

ABSTRACT

When people are engaged in social interaction, they can repeat aspects of each other's communicative behavior, such as words or gestures. This kind of behavioral alignment has been studied across a wide range of disciplines and has been accounted for by diverging theories. In this paper, we review various operationalizations of lexical and gestural alignment. We reveal that scholars have fundamentally different takes on when and how behavior is considered to be aligned, which makes it difficult to compare findings and draw uniform conclusions. Furthermore, we show that scholars tend to focus on one particular dimension of alignment (traditionally, whether two instances of behavior overlap in form), while other dimensions remain understudied. This hampers theory testing and building, which requires a well-defined account of the factors that are central to or might enhance alignment. To capture the complex nature of alignment, we identify five key dimensions to formalize the relationship between any pair of behaviors: time, sequence, meaning, form, and modality. We show how assumptions regarding the underlying mechanism of alignment (placed along the continuum of priming vs. grounding) pattern together with operationalizations in terms of the five dimensions. This integrative framework can help researchers in the field of alignment and related phenomena (including behavior matching, mimicry, entrainment, and accommodation) to formulate their hypotheses and operationalizations in a more transparent and systematic manner. The framework also enables us to discover unexplored research avenues and derive new hypotheses regarding alignment.


Subject(s)
Gestures , Interpersonal Relations , Humans , Time Factors
20.
J Exp Psychol Learn Mem Cogn ; 46(9): 1735-1753, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32352819

ABSTRACT

To talk about space, spoken languages rely on arbitrary and categorical forms (e.g., left, right). In sign languages, however, the visual-spatial modality allows for iconic encodings (motivated form-meaning mappings) of space, in which the form and location of the hands bear resemblance to the objects and spatial relations depicted. We assessed whether the iconic encodings in sign languages guide visual attention to spatial relations differently than spatial encodings in spoken languages during message preparation at the sentence level. Using a visual world production eye-tracking paradigm, we compared the visual attention of 20 deaf native signers of Sign Language of the Netherlands and 20 Dutch speakers as they described left versus right configurations of objects (e.g., "pen is to the left/right of cup"). Participants viewed 4-picture displays in which each picture contained the same 2 objects but in different spatial relations (lateral [left/right], sagittal [front/behind], topological [in/on]) to each other. They described the target picture (left/right) highlighted by an arrow. During message preparation, signers, but not speakers, experienced increasing eye-gaze competition from other spatial configurations. This effect was absent during picture viewing prior to message preparation for relational encoding. Moreover, signers' visual attention to lateral and/or sagittal relations was predicted by the type of iconicity (i.e., object and space resemblance vs. space resemblance only) in their spatial descriptions. Findings are discussed in relation to how "thinking for speaking" differs from "thinking for signing" and how iconicity can mediate the link between language and human experience and guide signers' but not speakers' attention to visual aspects of the world. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subject(s)
Attention/physiology , Deafness/physiopathology , Fixation, Ocular/physiology , Sign Language , Space Perception/physiology , Speech/physiology , Visual Perception/physiology , Adult , Eye-Tracking Technology , Female , Humans , Male